What is AI Security?
AI security refers to tools and techniques that leverage artificial intelligence (AI) to autonomously identify and/or respond to potential cyber threats based on similar or previous activity.
Artificial intelligence refers to machines performing “smart” or “intelligent” tasks on their own, without human guidance. As such, AI security involves leveraging AI to identify and stop cyber threats with less human intervention than is typically expected or needed with traditional security approaches.
AI security tools are often used to separate “good” from “bad” by comparing the behavior of entities across an environment against a baseline learned from similar activity, which enables the system to automatically learn about and flag changes. Often called unsupervised learning or “pattern of life” learning, this method results in large numbers of both false positives and false negatives. More advanced applications of AI security go beyond simply labeling behavior good or bad: they analyze vast amounts of information and help piece together related activity that could indicate suspicious behavior. In this way, AI security behaves much like the best and most capable human analyst.
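The “pattern of life” idea can be sketched in a few lines. The example below is a simplified illustration, not any vendor’s actual method: it learns a per-entity baseline (mean and standard deviation of an activity metric) and flags entities that deviate sharply — exactly the kind of naive thresholding that tends to over-alert on benign changes while missing stealthy threats.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a per-entity 'pattern of life' from past activity counts."""
    return {entity: (mean(counts), stdev(counts))
            for entity, counts in history.items()}

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag entities whose current activity deviates sharply from baseline."""
    alerts = []
    for entity, value in observed.items():
        mu, sigma = baseline.get(entity, (0.0, 1.0))
        if sigma and abs(value - mu) / sigma > threshold:
            alerts.append(entity)
    return alerts
```

A host whose traffic jumps far outside its historical range gets flagged; everything else passes silently, even if it is quietly malicious — hence the false-negative problem the paragraph above describes.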
According to a survey by Capgemini Research Institute, 69 percent of enterprises believe AI is necessary to respond to cyberattacks.
Common Usage and Adoption
AI security tools work to discover, predict, justify, act, and learn about potential cybersecurity threats, without needing much human intervention. Common AI security tool capabilities include:
- “Learning” based on past behavior to provide quick, actionable context and insights when presented with new or unknown information or behaviors
- Drawing logical, inferred conclusions from potentially incomplete subsets of data
- Presenting multiple solutions to a known problem to empower security teams to select the best path towards remediation
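As a toy illustration of the second capability — drawing conclusions from incomplete data — the sketch below scores an alert from whichever signals happen to be present, renormalizing the weights so that missing fields do not dilute the conclusion. The signal names and weights are hypothetical; real tools learn them from past behavior.

```python
# Hypothetical signal weights; a real tool would learn these from history.
WEIGHTS = {"rare_domain": 0.5, "off_hours": 0.2, "new_process": 0.3}

def risk_score(signals):
    """Score an alert from whatever subset of signals is available.

    Missing signals (None) are excluded and the remaining weights are
    renormalized, so an incomplete record can still yield a conclusion.
    """
    present = {k: v for k, v in signals.items() if v is not None}
    if not present:
        return 0.0
    total = sum(WEIGHTS[k] for k in present)
    return sum(WEIGHTS[k] * v for k, v in present.items()) / total
```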
Why Does AI Security Matter?
According to Gartner, AI security is a 2020 technology trend to watch. Here are a few reasons why AI security will continue to matter moving forward:
AI Security Augments the Shrinking Cyber Workforce
Resourcing has historically been a challenge in many SOCs. On manpower alone, the cybersecurity industry’s talent gap is projected to reach 3.5 million unfilled jobs by 2021. While some argue that AI machines can or will fill this gap, a more scalable solution is to adopt AI security tools that augment the workflows of existing employees. This frees up scarce resources by cutting down the time needed for threat hunting and for alert triage and correlation, for example, so cybersecurity workers can focus on other important tasks that cannot be automated through AI.
AI Security Helps Save Time Hunting for Threats
In addition to the growing talent gap, current security analysts often struggle to find the time needed to detect new threats. Respondents to a recent SANS Institute SOC survey admitted to relying on time- and resource-intensive methods for threat hunting, which often results in alert fatigue. The consequences can be dire:
- 73 percent reported a single alert investigation can take hours or even days
- 53 percent said they use three or more data sources to get to the bottom of an investigation
- 54 percent said critical alerts go completely uninvestigated
- 30 percent of their prioritized alerts never get investigated
In part, this can be attributed to the fact that most event correlation is still conducted manually within SIEM and big data products. By contrast, AI security tools are inherently capable of correlating and triaging events, which again cuts down the time needed for incident response and remediation.
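Automated event correlation can be illustrated with a minimal sketch. The event shape below is hypothetical (real products correlate far richer telemetry): events that share a host and fall within a time window are grouped into a single incident, so an analyst triages one correlated incident instead of many raw alerts.

```python
from collections import defaultdict

def correlate_events(events, window=300):
    """Group events that share a host and occur within `window` seconds."""
    by_host = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        by_host[event["host"]].append(event)

    incidents = []
    for host_events in by_host.values():
        current = [host_events[0]]
        for event in host_events[1:]:
            if event["time"] - current[-1]["time"] <= window:
                current.append(event)   # same incident: close in time
            else:
                incidents.append(current)
                current = [event]       # gap too large: new incident
        incidents.append(current)
    return incidents
```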
The Capgemini Research Institute’s recent report on cybersecurity with AI further supports this idea: 64 percent of respondents stated that AI lowers the cost to detect and respond to breaches and reduces the overall time taken to detect threats and breaches by up to 12 percent.
Potential AI Security Disadvantages
AI Security is Not a Silver Bullet
The cybersecurity industry is infamous for latching on to methodologies as silver-bullet solutions rather than considering their most useful applications. This has been the case for AI security tools, with some cybersecurity pros treating AI security as the be-all and end-all of threat detection. An over-reliance on AI for poorly matched use cases – fully replacing cybersecurity workers, for example – only burdens the enterprise with unnecessary risk, and should be avoided.
Awake’s Take / Difference
Rapid Insight and Context
The Awake platform is an AI security tool that provides insight and context that enables security teams to detect modern threats and take action quickly. Awake can thus significantly enhance and unify the customer’s security toolset and ultimately make the lives of the security team easier while making their organizations more secure.
Playbooks For Autonomous and Manual Response
Awake’s AI security platform learns from past incidents and is enriched by customized cyber security, governance, risk and compliance playbooks to provide the security analyst with both autonomous and manual triage and response options. These can trigger workflows within integrated solutions or simply recommend remediation steps such as evidence collection.
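Conceptually, a playbook maps a detection type to a response mode and a sequence of steps. The sketch below is a hypothetical illustration of that mapping, not Awake’s playbook format; the detection types and step names are invented for the example.

```python
# Hypothetical playbook table mapping detections to response modes and steps.
PLAYBOOKS = {
    "ransomware": {"mode": "autonomous",
                   "steps": ["isolate_host", "collect_evidence"]},
    "policy_violation": {"mode": "manual",
                         "steps": ["notify_analyst", "collect_evidence"]},
}

def respond(detection_type):
    """Return the triage path for a detection, defaulting to manual review."""
    playbook = PLAYBOOKS.get(
        detection_type, {"mode": "manual", "steps": ["notify_analyst"]})
    return playbook["mode"], playbook["steps"]
```

The key design point is the default: anything without a matching playbook falls back to manual review rather than autonomous action.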
The Awake Advantage
The first generation of AI security behavioral analytics approaches relied on baselining to establish a “pattern of life”. By simply using unsupervised learning to detect anomalous behavior, these legacy technologies generate a high volume of alerts, most of which are false positives, while also missing threats. Conversely, Awake has been ranked #1 for time to value because of its frictionless approach that delivers answers rather than alerts.
Dig Deeper with These Resources
- An AI solution to the cyber labor squeeze?
- AI For Security: What can we Learn from the Human Brain?
- AI in 2019: 8 trends to watch
- From Hype to Practical: What’s Next for AI?
Awake Security 2 Minute Explainer Video
What if security could think? What if it could sense danger, calculate risk, and react quickly based…
The Internet’s New Arms Dealers: Malicious Domain Registrars
This report dives into the results of a multi-month investigation that uncovered a massive global surveillance campaign…