Finding Islands in the Stream (of Data)...

This is part two of the "Hunting with Splunk: The Basics" series.

A couple of years ago I wrote a blog post about how to use Splunk Stream to find and alert on compromised SSL certificates. It was a great example of how to leverage wire data to alert on threat indicators beyond IP addresses and file hashes. Today, we want to look at Splunk Stream for another purpose: hunting.

For those not familiar with Splunk Stream, it’s a free application that extends Splunk Enterprise to collect data off the wire and break down the contents by protocol (similar to how Bro or Suricata create wire metadata). As this is being written, Stream supports more than 28 protocols across the OSI stack, including TCP, UDP, DNS, HTTP, FTP and many others. Within TCP and UDP, Stream leverages deep packet inspection to detect protocols running at the application layer like Tor, RDP and SharePoint, just to name a few. After Stream extracts and identifies network data, it maps that data to the Common Information Model (CIM). For example, SSL certificate information extracted from TCP maps to the CIM Certificates data model, and HTTP data maps to the CIM Web data model.

Many of these protocols map to various elements of the CIM; the mappings can be found in the Splunk Stream Installation and Configuration Manual. Additionally, Stream can parse pcap files, capture full packet streams, and collect NetFlow on 10Gbps interfaces. Stream installs with its own listener to capture data off the local interfaces, but it can also work with taps and SPAN ports before forwarding the data to a Splunk indexer.

When it comes to hunting, Stream complements other data sets you may already be collecting. “But wait!” you say, “I can’t collect all the wire data in my network. I don’t want to overwhelm my analysts, I certainly don’t have the disk space, and there are 10k other reasons...” In that case, you’re in luck, because Stream can capture protocols selectively. For example, if you only want to gather FTP and NOT HTTPS, you can do that. Not only can you select which protocols to capture, you can specify the individual fields to capture within a protocol, apply filters, and even aggregate values to produce statistics. You can also use the estimate function to preview the event count and ingest volume for a specific protocol before you start collecting.

Alright, I’ve spent a few paragraphs on what Stream is and why you should collect its data. Let’s get down to some practical applications. Below, we’ll focus specifically on collecting DNS and HTTP data and what each can help us see.

So, first question: Do you collect DNS data today? If so, how do you collect it?

DNS can be very helpful when hunting, all the way from the A record to the AAAA record (HA!). There is a wide variety of methods to ingest DNS logs from hosts and from the network, but this post assumes that you have access to DNS logs and that they are already in Splunk. So now that you have DNS data, you might ask, “What could I do with it?”

Suppose you had a hypothesis that you could find suspicious domains in DNS and then pivot back to the systems generating these DNS requests. To test this hypothesis, you might end up examining the domain or sub-domain fields in your Splunk instance in an attempt to find high levels of Shannon entropy or potentially dissect the various aspects of the FQDN. These techniques and others for monitoring DNS were presented at .conf2015 by Ryan Kovar and Steve Brant in the presentation "Hunting the Known Unknowns (with DNS)," where they leveraged the very helpful URL Toolbox written by Cedric Le Roux. More information around entropy and DNS can be found in the blog posts "When Entropy Meets Shannon" and "Random Words on Entropy and DNS" written by Sebastien Tricaud and Ryan Kovar.
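
For the curious, the Shannon entropy of a string s is H(s) = -Σ p(c) · log2 p(c), where p(c) is the relative frequency of each character c in s. A dictionary word like “google” scores roughly 1.92 bits, while a ten-character string with no repeated characters scores log2(10) ≈ 3.32 bits; the higher the score, the more random the string looks.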

Let’s use DNS as our first example of hunting with Stream. How do I begin my hunt to prove my “suspicious domains have a high entropy value” hypothesis? Perhaps the entropy of the domain itself isn’t a big deal, but the subdomain is. How can we calculate the entropy of the subdomain itself? Let’s brush off the URL Toolbox and find out.


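Here’s a sketch of what that search might look like. Treat it as a minimal example: the Stream field names (query{}, record_type) can vary by version, the ut_parse_extended and ut_shannon macros come from URL Toolbox, and the excluded domains are just placeholders for whatever is common in your environment.

    sourcetype=stream:dns record_type=A
    | eval list="mozilla"
    | `ut_parse_extended(query{}, list)`
    | search ut_tld=* ut_domain!="google.com" ut_domain!="microsoft.com"
    | `ut_shannon(ut_subdomain)`
    | stats count by ut_subdomain, ut_domain, ut_shannon
    | sort - ut_shannon
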
In the above search, you can see that I am looking for A records from the stream:dns sourcetype. After identifying the query value, I use URL Toolbox to break the queried domain name into pieces. Then, using the search command, I filter out queries that don’t have a TLD along with specific domains that I know are not interesting. Incidentally, we could have streamlined the above search by using the lookup command with a list of common domains (like the Alexa top 1 million!).

However, in this example I used both of those filters to show how I can iteratively narrow down my results. Keep in mind that you are seeing the final product; I didn’t do this all in one search. I am hunting an adversary with a systematic approach. Next, I execute the ut_shannon macro provided by URL Toolbox, which calculates the entropy of the subdomain (though I could run it against any field), along with a count of occurrences. Finally, I sort by the entropy score, because the higher the entropy value, the more random the subdomain. The point is that highly entropic (random) strings are much more likely to be created by a machine, NOT a human. From here, I can pivot from my results back to the host or IP address and start additional investigation of the workstation in order to validate or invalidate my hypothesis.

Now that we’ve discussed DNS, let’s talk a bit about HTTP. There are a variety of ways to monitor HTTP traffic. Logs from web servers like IIS and Apache provide insight into the web traffic hitting your servers. If you have a web filtering gateway, those logs can give you insight into the web traffic crossing your edge devices (provided you are monitoring all egress points), but they don’t show other HTTP traffic bouncing around the network. When you look at many of today’s multi-stage threats, HTTP is a protocol that needs to be monitored; while firewalls may provide some level of understanding, monitoring HTTP on the wire provides the best visibility into your network.

Let’s say I wanted to see web traffic originating within my RFC1918 address space and going somewhere outside it. Of this web traffic, I want to see just the HTTP GET requests, sorted by bytes_out, along with the URI each GET requested. That is what the search below does.
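
A sketch of that search (the field names assume the stream:http sourcetype, and the CIDR filters cover the three RFC1918 ranges):

    sourcetype=stream:http http_method=GET
        (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12 OR src_ip=192.168.0.0/16)
        NOT (dest_ip=10.0.0.0/8 OR dest_ip=172.16.0.0/12 OR dest_ip=192.168.0.0/16)
    | table _time, src_ip, dest_ip, site, uri_path, bytes_out
    | sort - bytes_out

From here, we could run additional stats on these values (bytes_out in particular) to identify outliers.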

What else could you use Stream’s HTTP data for? Perhaps examining form_data for passwords being sent in the clear? Maybe determining which web sites users and their browsers are requesting but are being blocked at egress; just because the communication path to a site was blocked doesn’t mean intelligence can’t be gleaned. Knowing that a user and host attempted an outbound HTTP connection could point to a malicious callback and give a hunter additional opportunities to hypothesize and look for systems that have been compromised. You could also watch the requests coming into your enterprise via HTTP; the funny thing about that is you may spot SQL injection and other web-based exploits this way.
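
As a quick example, a hunt for cleartext credentials might start something like this (note that form_data capture is not enabled by default and must be turned on in your Stream HTTP configuration, and the passw wildcard is just an illustrative guess at common field names):

    sourcetype=stream:http http_method=POST form_data=*passw*
    | table _time, src_ip, dest_ip, site, uri_path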

We could go on forever about using Stream for hunting, but we’ll stop here for now. That said, if you are at DEF CON, you can see Splunk Stream in action at the Wall of Sheep in the Packet Hacking Village. For details on what Splunk did last year, check out "Splunk at the Wall for DEF CON 23 – Part II."

As always... Happy Hunting :-)
