The growth of data and the increasing complexity of extracting intelligence from information have led businesses and governments across the globe to implement artificial intelligence and machine learning technologies. AI and ML applications that supplant or enhance human capabilities range from image recognition in healthcare and failure prediction in industrial equipment to natural language processing in customer service and fraud or criminal detection in financial services and government.
So what roles have AI and ML played in enterprise cybersecurity? On the face of it, cybersecurity seems like a perfect use case for AI and ML techniques. Security threats continue to grow by the day. The amount of security data collected by multiple cybersecurity sensors is growing exponentially, and there is a significant shortage of trained cybersecurity professionals. The industry has a clear need for AI and ML technologies.
Yet AI and ML are not as prominent in cybersecurity as they are in other fields. Why are their applications in cybersecurity restricted to niche use cases, and how should organizations evaluate these technologies from their cybersecurity or AI/ML vendors?
What Limits the Effectiveness of AI and ML in Cybersecurity Today?
The answers to those questions are complex and multifaceted but revolve around three key issues.
Lots of data yet very little trainable data: Security professionals have relied on vendors to provide alerts on any abnormality, but those alerts add up to several tens of thousands per day. Very few, if any, of these alerts translate to actual attacks; false-positive rates of over 99% are common.
From a threat-detection perspective, this means that organizations can apply predictive AI and ML to only about 0-1% of transactions. ML models need data to predict accurately, so this limits the ability to train models on actual threats. And new threat patterns, which attackers explore every day, render predictions based on past attacks useless.
Insufficient business risk context: The cybersecurity industry is fragmented with vendors for each security problem. It is not atypical for a large organization to rely on 50-plus security tools, often from different vendors. Each tool collects data relevant to its use but misses the overall context outside of its intended use.
For example, firewalls are used very widely and collect transactional network flow data, but rarely can they attach that data to the application and user context simultaneously. Similarly, endpoint systems collect lots of information on processes that run or can run on the endpoints, but they lack context on how critical the application the endpoint is trying to access is, or what vulnerabilities it contains.
The net result is that the basic context needed to apply ML becomes fragmented. A simple business question such as, “Should you allow or block a specific transaction into an application?” becomes difficult to answer at scale. It requires information on the inherent security risk posed by that transaction, estimated business risk from allowing such a compromised transaction, and the level of business disruption that could happen from blocking the transaction. Typically, this information does not exist or is scattered across multiple tools and is not available at the time such a decision needs to be made.
After-the-fact analysis vs. new attack vectors: Companies have addressed their data fragmentation through data-aggregation mechanisms like security information and event management, known as SIEM. While this is better than having no information, after-the-fact analysis of such aggregated data is passive by definition — meaning that it allows you to analyze a past threat and prevent that exact attack pattern if it recurs, but it won’t help prevent threats that don’t follow historical attack patterns.
So How Do You Harness AI/ML in Your Cybersecurity Posture?
Some companies have recognized the challenges above and started to address them in their offerings. Broadly, there are three stages to the application of AI and ML tools in terms of their effectiveness in cybersecurity:
Stage 1: AI/ML to simplify operation – Identify assets or users and detect abnormal behavior
AI can be an effective way to automate tasks that usually take a lot of manual effort, particularly when those tasks are routine, don't need broad business context, and have growing data sources that facilitate better training over time.
Automatic identification, or tagging and categorization, of assets being secured is a surprisingly complex problem in large organizations with tens of thousands of assets under management. Here, an AI/ML model can identify and tag an asset based on the processes or software it runs and the other assets with which it communicates. This is similar to how photos can be tagged based on visual recognition.
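The kind of tagging described above can be sketched as a simple rule-based stand-in for a trained classifier; a real system would learn these mappings from data and also use communication patterns with neighboring assets. The process names and role labels below are illustrative assumptions, not any vendor's actual taxonomy:

```python
# Illustrative map of observed process names to likely asset roles.
SIGNATURES = {
    "postgres": "database",
    "mysqld": "database",
    "nginx": "web-server",
    "httpd": "web-server",
    "sshd": "bastion",
}

def tag_asset(running_processes):
    """Tag an asset from its observed processes. A trained model would
    replace this lookup and weigh many more signals."""
    tags = {SIGNATURES[p] for p in running_processes if p in SIGNATURES}
    return tags or {"unclassified"}

print(sorted(tag_asset(["nginx", "sshd"])))  # → ['bastion', 'web-server']
print(tag_asset(["custom-daemon"]))          # → {'unclassified'}
```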
More data, from one organization or from many, improves these predictions over time. Similarly, flagging anomalous behavior, that is, deviations from known "good" or "trusted" behavior, is a good application area: the sheer volume of good transactions is what makes AI effective and manual analysis impractical.
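Flagging deviations from known-good behavior can be as simple as measuring distance from a learned baseline. Here is a minimal statistical sketch, assuming hourly event counts per account as the behavioral signal; production systems would use richer features and trained models rather than a z-score:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the baseline of known-good activity."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if sigma and abs(x - mu) / sigma > threshold]

# Hourly login counts for a service account: a stable known-good baseline.
baseline = [42, 45, 40, 44, 43, 41, 46, 44]
observed = [43, 45, 310, 44]  # 310 is a burst worth investigating

print(flag_anomalies(baseline, observed))  # → [310]
```

The point of the sketch is the shape of the problem: the baseline is abundant (every normal transaction contributes), so no labeled attack data is needed to flag the outlier.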
Stage 2: AI/ML for security policy definition – With appropriate business risk context
If a cybersecurity tool can embed the context of business risk from a security threat by using knowledge of the request (user and device) and what is being accessed (an application or data) and the medium (a network), then ML would have enough contextual intelligence to be powerful in security policy setting.
To achieve this in practice, cybersecurity vendors must assign "business security risk scores" to transactions, either to a single transaction or to a group to which it belongs, while the transaction is being executed. This is similar to how a bank assesses any online transaction request.
Assigning a business security risk score is not always simple. It requires holistic information: whether the user, network, or transaction is known and trusted; how exploitable the asset being accessed is; and how business-critical a compromise of that asset would be, for example, whether a breach would expose confidential customer or employee data that could harm the company. A system with that information can powerfully apply AI to define and recommend automated policies, or dynamic policy updates as risk changes.
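To make the idea concrete, here is a minimal sketch of a score that combines the signals just listed into a 0-100 value and maps it to a policy action. The weights and thresholds are illustrative assumptions only; a real system would learn them from data and context:

```python
def business_risk_score(user_trusted: bool, network_trusted: bool,
                        exploitability: float, criticality: float) -> int:
    """Weighted 0-100 business security risk score.
    exploitability and criticality are in [0.0, 1.0];
    the weights here are illustrative, not prescriptive."""
    score = (0.25 * (not user_trusted)
             + 0.15 * (not network_trusted)
             + 0.30 * exploitability
             + 0.30 * criticality)
    return round(score * 100)

def policy(score: int, allow_below: int = 30, review_below: int = 70) -> str:
    """Map a risk score to an action; thresholds are assumptions."""
    if score < allow_below:
        return "allow"
    if score < review_below:
        return "review"
    return "block"

# Known user on a trusted network reaching a patched, low-value asset:
low = business_risk_score(True, True, exploitability=0.1, criticality=0.2)
# Unknown user hitting a vulnerable, business-critical application:
high = business_risk_score(False, False, exploitability=0.9, criticality=1.0)

print(low, policy(low))    # → 9 allow
print(high, policy(high))  # → 97 block
```

The design choice worth noting is that the score is computed at decision time from live context, rather than reconstructed afterward from logs scattered across tools.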
Stage 3: AI/ML to adapt security posture – Continuously trading off business velocity vs. security risks
Most companies have invested in many security tools, yet CIOs find it difficult to determine whether their security posture is better than before. Zero Trust-based policies and tools address this by allowing nothing except trusted interactions, processes, and users. By definition, they enable organizations to block new threat vectors and unknown interactions instantly, rather than giving such interactions time to play out, and they learn over time, as ML models or human analysts typically do.
While this enables a higher security posture, narrowly defined trust zones could prevent or slow down the low-risk transactions that enable the business. AI/ML can help get that tradeoff right: organizations can start with very small zero-trust zones around critical applications and use ML to expand the trust zones over time based on risk and behavior patterns.
AI and ML have not been utilized to their full potential in cybersecurity yet. However, when used appropriately, AI and ML can play a very effective role in your cybersecurity. Proper applications can assist humans in security analytics and operations, recommend low-security-risk policies, and enable CISOs to maintain the best security posture that allows the business to operate at the necessary velocity.
About the Author
Vats Srivatsan is president and chief operating officer at ColorTokens Inc., a SaaS-based Zero Trust cybersecurity solution. As a member of the ColorTokens leadership team, he uses his extensive knowledge of cloud and cybersecurity across multiple industries to help customers in their Zero-Trust cybersecurity journey. Srivatsan’s previous three decades of experience include executive roles at leading companies including Palo Alto Networks and Google Cloud. At Google, Vats founded and led the Advanced Solutions Lab that helped apply Google’s AI to core business problems for some of the leading enterprise customers.
Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.