Naturally, we all want to detect every threat to our network as soon as it manifests itself. That’s why we spend a ton of money every year on tools that can detect things automatically.

But what do we do when traditional automatic detection isn’t enough? Perhaps there’s a new attack that doesn’t yet have a signature, or maybe the threat you’re after can’t really be found using traditional detection methods at all. Are the tools you use less effective than you assumed? Do they struggle to keep up with the evolution of attacker techniques?

These scenarios are great examples of where threat hunting comes into play. With hunting, you send your most experienced analysts into the unknown, searching for threats that the machines failed to find. You send your most experienced because, to be successful, the hunter is going to need to know how to coax your toolset into finding the threats they’re after. They’re also going to need an intimate knowledge of different types of malware, exploits and network protocols to navigate the vast heap of data consisting of logs, metadata, and PCAP.

The problem with sending our best and brightest is that we just don’t have enough of them. There’s a projected cybersecurity job shortage of 3.5 million by 2021—and that’s all security personnel, not the uber-experts. I would venture the shortage there is even more acute. So how do we hunt then? Well, perhaps a good place to start is to simply define “hunting” in the first place.

What is Hunting?

Ever since I began my career in cybersecurity, in a dark room lit only by the stinging light of computer monitors, I have heard many disagreements over the definition of hunting. I’ve been in, and witnessed, interviews where the interviewer scoffs or strongly disagrees with what the interviewee has defined as hunting. Many people add all kinds of qualifiers in an attempt to define hunting, some of which make me scratch my head and wonder why we’re so caught up in semantics while there’s a network to defend.

My first Security Operations Center (SOC) was a smaller one. We had a good number of devices to keep secure, but our small staff made do. That probably had something to do with the tools-to-analysts ratio, which fluctuated between 2:1 and 3:1.

This approach worked well for the most part. I haven’t worked in such a well-funded enterprise SOC environment since. However, despite the fact that we were drowning in browser tabs, there were still things that slipped through the cracks (as there always will be). Newer malware and exploit attempts were often not picked up by the security tools we employed. In order to compensate, we would write custom parsers/rules, and run custom queries for newer Indicators of Compromise (IoCs). We would also search for patterns of activity that seemed suspicious; anything from POSTs without referrers to lying content-type headers.
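The kind of ad-hoc query we would run can be sketched in a few lines. Everything below — the log fields, the magic-byte table, the sample records — is a hypothetical stand-in for illustration, not any particular product’s schema:

```python
# Sketch of an ad-hoc hunt over proxy-log records: flag POSTs with no
# Referer header, and responses whose declared Content-Type disagrees
# with the payload's leading bytes. Log format is hypothetical.

MAGIC_BYTES = {
    b"MZ": "application/x-dosexec",  # Windows PE header
    b"PK": "application/zip",        # ZIP archive
    b"%P": "application/pdf",        # start of "%PDF"
}

def suspicious(record):
    """Return a list of reasons this record looks worth a closer look."""
    reasons = []
    if record["method"] == "POST" and not record.get("referer"):
        reasons.append("POST without referrer")
    declared = record.get("content_type", "")
    actual = MAGIC_BYTES.get(record.get("body_start", b"")[:2])
    if actual and actual != declared:
        reasons.append(f"declared {declared!r} but payload looks like {actual!r}")
    return reasons

logs = [
    {"method": "POST", "referer": None, "host": "203.0.113.7",
     "content_type": "text/html", "body_start": b"MZ\x90\x00"},
    {"method": "GET", "referer": "http://example.com/", "host": "example.com",
     "content_type": "application/zip", "body_start": b"PK\x03\x04"},
]

for rec in logs:
    hits = suspicious(rec)
    if hits:
        print(rec["host"], "->", "; ".join(hits))
```

In practice a hunt like this starts noisy on purpose: every hit still needs an analyst to decide whether it is malicious or just an oddly behaved application.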

For our mandatory timesheets, we would label any of these activities as “Hunting.”

But my team and I never had a conversation about the definition of hunting. We just had to write something on the timesheet, and at least some of the things we were doing fell under the industry definitions of hunting at the time, so rather than write, “I spent 30 minutes searching for IoCs, 30 minutes hunting and 30 minutes experimenting with new rules/parsers,” everything got looped under “hunting” and we called it a day. This likely has a lot to do with how I define hunting now.

Issues Surrounding the Misconception of Hunting

The first time the hunting definitions and procedures I had grown accustomed to were called into question was when I began interviewing for another job. When one interviewer asked what I do to hunt for malicious activity on a network, he scoffed at my response of pulling open-source IoCs from places like zeustracker, responding, “Well that’s not really hunting, is it?” I then continued to list other indicators, like POSTs without referrers, HTTP traffic to bare IPs, and other noisy starting points that I had used in the past to begin an open-ended investigation. When I asked if the interviewer considered those activities to be “hunting,” the response was, “Yes.”

So, according to this person, searching for IoCs is not hunting; searching for things that are sometimes indicative of malicious activity but will require you to sift through benign traffic is.

I suppose I can understand the logic. With the former, you’re searching for something that is known-bad, so any hit should indicate (assuming your IoCs are from a good source and have been vetted appropriately) malicious activity. Some IoCs will, due to the lack of context, require you to do some more digging into the event to see if it is, in fact, malicious, but this isn’t any different than vetting alerts from your IDS. The latter, sifting through the results of less specific queries, does seem to follow the analogy that people were going for by calling the activity hunting.

Like a tiger stalking his prey through the brush, you sift through network metadata until you see something that catches your eye… then you pounce! To your dismay, what you have sunk your teeth into is not succulent prey, but instead an ad informing you of a limited-time-only BOGO sale. You spit it out in disgust and continue your hunt…

The fact of the matter is, if IoC searches will catch things you’re not otherwise catching and you have the time to run them, you should be running them. Is it low-hanging fruit? Yes. Is it impressive, or complicated, or high-five worthy? No. But it remains a useful tool in defending your network. Ideally, if you are using IoCs in your security stack (through your firewall, web proxy, IDS, EDR, etc.), the process is highly automated and doesn’t take much time away from your analysts.

However, not all SOCs are created equal. Sometimes it’s necessary to supplement your tools with something like I describe above, whether you define that as “hunting” or not. It matters more that you know it’s something that should be covered in some way, even if your answer to blocking new IoCs is to have a whitelist. If your only option is to manually collect the IoCs and plop them into your SIEM, then so be it.
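Manually getting IoCs into a SIEM ultimately boils down to a set-membership check against your logs. A minimal sketch, with a made-up indicator feed and connection-log format:

```python
# Minimal sketch of manual IoC matching: compare a vetted set of
# known-bad indicators against outbound connection logs. The feed and
# log fields here are invented for illustration.

known_bad = {"evil.example", "198.51.100.23"}  # hypothetical vetted IoCs

conn_logs = [
    {"src": "10.0.0.5", "dst": "evil.example", "port": 443},
    {"src": "10.0.0.8", "dst": "good.example", "port": 80},
]

# Any destination that appears in the IoC set is a hit to triage.
hits = [rec for rec in conn_logs if rec["dst"] in known_bad]

for rec in hits:
    print(f"IoC hit: {rec['src']} -> {rec['dst']}:{rec['port']}")
```

Real SIEM lookup tables do exactly this at scale, which is why the activity feels less like hunting and more like alert triage — the hard part is vetting the feed, not running the match.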

Third Time is NOT the Charm

There is at least one more definition of hunting that I’ve encountered, and it certainly created a variety of questions in my head, most of them sarcastic. One of the SOCs that I worked in was an MSSP (Managed Security Service Provider) that serviced a large number of customers using the same security stack. The methods of automated detection were often not enough, so more manual methods were required to find malicious incidents. These activities were often (if not exclusively) called hunting amongst the staff in the SOC: “looking for malicious activity that is not automatically caught by your security tools.”

At least one person, however, seemed to disagree with this definition. They offered up something they called “PCAP Hunting” as legitimate hunting. This was described as pulling down a large amount of PCAP and “searching through it” for malicious activity. The visual that definition creates is painful to imagine, let alone actually subject yourself to. If you don’t have any way to extract network metadata, then I suppose you might have reason to download a PCAP and sift through it using Wireshark filters or scripts. However, I believe that there are much better ways to hunt, especially when you have logs to search through. Even without a better way to search for the data, I’d grep the logs before I downloaded a bunch of PCAP to sift through manually.
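To give a sense of the “grep the logs first” approach: a quick triage pass over HTTP logs can narrow down which sessions justify pulling PCAP at all. The log layout and field names below are hypothetical:

```python
# Triage sketch: filter HTTP logs for POSTs to bare-IP hosts with no
# referrer before deciding whether any PCAP is worth pulling.
# The whitespace-delimited log format here is invented for illustration.
import re

IP_HOST = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")  # bare-IP Host header

http_logs = [
    "POST 198.51.100.9 /gate.php -",                       # method host uri referer
    "GET www.example.com /index.html http://example.com/",
]

def worth_a_pcap_pull(line):
    """True for POSTs to a bare-IP host with an empty ('-') referrer."""
    method, host, uri, referer = line.split()
    return method == "POST" and bool(IP_HOST.match(host)) and referer == "-"

candidates = [line for line in http_logs if worth_a_pcap_pull(line)]
print(len(candidates), "session(s) worth deeper PCAP review")
```

The point is the ordering: a pass like this over metadata takes seconds, while opening gigabytes of raw PCAP in Wireshark and scrolling is the last resort, not the first move.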

This is a good example of what we see sometimes when people are trying to define hunting: they believe that hunting is only hunting based on its difficulty. If the activity is easy, like querying for known-bad IOCs, then it’s not hunting. If you’re searching for POSTs to IP hosts without referrers, that’s not hunting either. It’s only hunting if your query looks like dwarven runes, or if your eyes bleed while you’re doing it.

From these data points and my own experience, I have come to the conclusion that hunting is an open-ended effort that requires a time investment to successfully identify something of interest (if it were easy or already known, there would be signatures for it). But what does the Internets say about this? More on that in part deux!

By Troy Kent
Threat Researcher
Security Research