By Gary Golomb
Co-founder and Chief Research Officer

Sometimes, as an industry, we chase the latest methodology as a panacea rather than thinking about what it will – and will not – be good at and useful for. Eventually the bubble bursts, and we're no closer to the goal of improving threat defense. This is what I fear is happening with all the AI buzz. For a broader discussion on this topic, join me for a webinar on the 17th – click here to register.

A walk down memory lane

“Intrusion detection systems are a market failure… Functionality is moving into firewalls, which will perform deep packet inspection for content and malicious traffic blocking, as well as antivirus activities. Firewalls are the most-effective defense against cyber intruders on the network…” –Gartner, June 15, 2003.

“Enterprises buy too much threat prevention and not enough detection and response technology.” –Reporting on Gartner, 11 years later.

In June of 2003, I was a member of the research team working on the Dragon IDS. Just four weeks after the quote above about IDS and firewalls was published, I was in the meeting where our team was told by management to cease all projects focused on advancing analytics and to stop seeking the next generation of detection capabilities.

Instead, all resources were redirected toward developing inline blocking capabilities – that is, becoming a DPI firewall of sorts. I've since spoken with researchers who were at other companies at the time, and the directive seems to have reverberated similarly throughout the industry.

Firewalls circa 2003 were already introducing new methodologies for performing varying degrees of pattern matching in hardware. Pattern matching itself was the leading detection methodology of the time, although by then it was being augmented by additional methodologies, including protocol anomaly detection and continually evolving (still to this day) generations of behavioral analysis. Methodologies evolved over the following years as we watched the expansion of capabilities available for endpoint telemetry collection and analysis, virtual detonation, process instrumentation, and a number of other methodologies later marketed as "APT detection."

AI and ML for Security

It's accurate to say that AI (really, ML, but more on this below) is just another methodology in the evolution of tools available to address certain aspects of the kill chain or the InfoSec workflow.

“When you’re asking for funding, say: AI. When you’re designing, say: ML. When you implement: if-then-else.” -A joke currently circulating in Silicon Valley that, like many great jokes, is rooted in reality.

First and foremost, let's agree on generally accepted definitions: AI is having machines do "smart" or "intelligent" things on their own, without human guidance; ML is the practice of machines learning from data supplied by humans (modeling based on computational statistics). By those definitions, AI does not exist in InfoSec and won't exist for a very, very long time – no matter what the marketing is telling you.

As an industry, we get giddy over headlines like Google's AI learning to play Go and defeating the best players in the world at that game. Amazing, yes. However, it took thousands of computers analyzing hundreds of thousands of games to learn the best possible strategy to use at any given point in the game. Compared to the data problem in InfoSec, that's simple.

But what about all the marketing that talks about AI in InfoSec? In the best of cases, you can mentally substitute the term ML for AI. And to be clear, that is not a bad thing: ML can solve a subset of well-defined security challenges far more effectively than other existing methodologies can, as explored below. But I said you can substitute ML for AI in the best of cases; in reality, most of the time you hear AI/ML, it's not even computational statistics being performed under the hood – it's heuristics. Why? It may come as a surprise, but heuristics, while much simpler, work very well against a wide variety of security-relevant activities while being far less computationally intensive than data-science-based methodologies. (By the way, the heuristics are the if-then-else in the joke above!)
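To make the joke concrete, here is a minimal sketch of what such a heuristic looks like in practice – a hypothetical rule of my own for illustration, not any vendor's actual logic. Note that it is pure if-then-else: cheap to compute, easy to reason about, and containing no statistics at all.

```python
# A hypothetical heuristic of the kind often shipped under an "AI" label.
# It is hand-written if-then-else logic, with no training or statistics.
def is_suspicious(process: dict) -> bool:
    # Flag PowerShell launched with an encoded command from an Office parent –
    # a classic hand-authored detection rule.
    if process["name"].lower() == "powershell.exe":
        if "-encodedcommand" in process["cmdline"].lower():
            if process["parent"].lower() in ("winword.exe", "excel.exe"):
                return True
    return False

print(is_suspicious({"name": "powershell.exe",
                     "cmdline": "powershell -EncodedCommand SQBFAFgA...",
                     "parent": "WINWORD.EXE"}))  # True
```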

So, if ML is simply one tool in a toolbox full of methodologies for identifying undesirable activity, which activities is ML best suited to address? Simply put, ML is most successful at tackling well-bounded and well-understood (or, at least, well-understandable) problems.

While I may sound like a critic of AI/ML, as one of the first employees of Cylance I witnessed first-hand the development of a stunningly successful (technically and otherwise) application of ML to the problem of malware detection (while that vision has since expanded greatly, early versions of the technology focused on malware detection).

However, that technical success was anchored in the boundedness of the problem being studied and solved. More specifically:

  • The problem is structurally bounded: The type and structure of the data do not change, or change extremely slowly – generally over the course of many years. In the case of Cylance's initial focus, the structure of the data is defined by file format specifications that evolve very slowly (relatively speaking).
  • The problem is behaviorally bounded: In a good use case for ML, the data to be modeled/analyzed will only appear as the effect of a small number of actions, meaning data points can be reliably mapped to predictable behaviors. When modeling the characteristics of files, the fields defined by specifications tend to define allowed data types and interpretations quite rigidly.
  • And the most significant factor – the problem is free of subversive influences: This is a special consideration that applies almost exclusively to the InfoSec context, where there exists a malicious human who is both capable of and incentivized to discover and exploit weaknesses in the AI/ML model or the implementing software. The reason my previous company was so technically successful is that it is extremely difficult to change a file enough to obscure it from statistical analysis while keeping it in a format valid enough to be loaded by the operating system.

Next-generation AV/EDR is a great example of an InfoSec challenge that meets the three constraints above, so ML/AI has been applied quite successfully there.

However, there is a chasm of difference between successful EDR-type applications and the application of ML to network and other types of data. The next-gen AV use case, like other successful InfoSec use cases, ultimately starts with extremely large datasets of "bad" and "good" file samples the algorithms can "learn" from (really, the algorithms profile rather than learn, but we'll save that discussion for beers) – and this starting point is the very thing that determines the best use cases for ML.
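As a rough sketch of that supervised workflow – using entirely synthetic data and hypothetical feature names (entropy, section count, import count) standing in for the thousands of structural features a real system would extract – the pattern looks something like this:

```python
# A minimal sketch of the labeled-dataset workflow described above. The data
# and feature names are synthetic/hypothetical, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical static file features: [entropy, num_sections, num_imports]
benign = rng.normal(loc=[5.0, 5, 120], scale=[0.5, 1, 30], size=(n, 3))
malicious = rng.normal(loc=[7.2, 8, 15], scale=[0.5, 2, 10], size=(n, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)  # 0 = "good" sample, 1 = "bad" sample

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

The point is not the particular model; it's that the labels and the feature structure are stable and rigid enough for the statistics to mean something.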

On the other hand, consider large datasets (in practice, mostly logs) where each feature is volatile in its interpretation, the data inputs themselves are volatile, and the integrity of the data can be influenced by the attacker. Those conditions make it impossible to create models of the accuracy and sophistication demonstrated by the many endpoint security players.
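A tiny, hypothetical illustration of that integrity problem: features derived from network and log telemetry are often values the attacker writes directly, so simple mimicry collapses the distinction the model depends on.

```python
# Hypothetical illustration: features extracted from log/network telemetry
# are frequently attacker-controlled, unlike rigid file-format fields.
def extract_features(event: dict) -> dict:
    return {"proc": event["process_name"], "ua": event["user_agent"]}

benign = {"process_name": "svchost.exe",
          "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

# An attacker who knows (or guesses) the benign distribution simply mimics
# it, so the model's inputs lose integrity before any learning happens:
evasive = {"process_name": "svchost.exe",         # masqueraded process name
           "user_agent": benign["user_agent"]}    # copied benign value

print(extract_features(benign) == extract_features(evasive))  # True
```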

Specifically, as we have seen in enterprises across industries, applying ML to InfoSec use cases that are not explicit and precise can create worse problems for the enterprise than the ones it was meant to solve (see the Gartner quotes above again). If the solution to the use case cannot be precisely defined (in the case of ML: modeled), then the results output by the technology will tend to be equally imprecise, meaning a human is required to analyze the results before taking action – which lands us squarely back in the skills-crisis epidemic we were trying to solve in the first place.

For example, a number of "intelligent" solutions on the market profile user behavior. The "AI/ML" then flags the behavior as "changed" because the user ran a binary they've never run before. In reality, this condition happens all the time, especially considering how frequently user processes can legitimately trigger a cascade of subsystem binaries being run in response to new conditions.
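A minimal sketch of that naive profiling rule (hypothetical and heavily simplified) shows why it is so noisy – every first occurrence fires, including the helper binaries a single legitimate action can spawn:

```python
# Hypothetical, simplified "never seen before" profiling rule of the kind
# described above. It alerts on every first occurrence per user.
from collections import defaultdict

seen: dict = defaultdict(set)

def check(user: str, binary: str) -> bool:
    """Return True (alert) if this user has never run this binary before."""
    is_new = binary not in seen[user]
    seen[user].add(binary)
    return is_new

# One legitimate action (opening a document) can spawn several helper
# binaries the user never directly "ran" – and each one fires an alert:
for b in ["winword.exe", "splwow64.exe", "dllhost.exe", "conhost.exe"]:
    if check("alice", b):
        print(f"ALERT: alice ran new binary {b}")
```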

The side effect, though, is that more escalations to highly skilled analysts are now required to disambiguate user behavior and intention from the flagged deviation. This task, by the way, is not only becoming more common but, unfortunately, also more complex as operating systems continue to become more componentized.


Not all InfoSec ML use cases are created equal – technically or philosophically. Like the firewall of 2003, ML does have some well-matched use cases that are advancing the state of the art in enterprise protection. However, like the firewall of 2003, an over-rotation on ML for poorly matched use cases will not only burden the enterprise with unnecessary additional risk and expense, but could also contribute other lasting negative impacts, like the atrophy of methodologies that compensate for ML's weaknesses.

P.S. Shameless plug – join me for a webinar as part of BrightTALK’s Data Driven Security Summit. Click here to register.
