Cybersecurity and Artificial Intelligence – It’s Complicated


In this article, I will look at how Artificial Intelligence (AI) can help improve cybersecurity practices in an environment of ever-increasing threats, and discuss the role of AI in alleviating the perennial talent shortage in the field. Remember that the current wave of AI, driven by advances in deep learning, started around 2015, but the talent shortages in cybersecurity precede that. I also caution that, if we are not careful, AI can be a double-edged sword when it comes to cybersecurity.

By Kashyap Kompella, CEO at RPA2AI Research

Let me start with a flashback. About a decade ago, I used to audit the information security practices and cybersecurity preparedness of large global enterprises. Time and again, the weakest link in their preparedness was the talent shortage on security teams, and this held true across organizations and geographies. If that was the situation in the pre-smartphone, pre-IoT age, imagine the current scenario, when the threat landscape has become that much more complex!

Exponential growth in digital infrastructure and connected devices

Every year, millions of connected cars, hundreds of millions of wearable and IoT devices, plus more than a hundred billion lines of new software code are added to the existing digital infrastructure of our world. No doubt, digital technologies and smart devices have vastly improved customer experience, increased business agility, and ushered in an era of rapid digital innovation. But at the same time, we must acknowledge that from a cybersecurity point of view, there are now that many more threat surfaces and attack vectors.

If we had talent shortages in the pre-digital era, imagine the extent of the shortage now. Not surprisingly, industry surveys year after year reach the same conclusion – that there simply are not enough cybersecurity experts to staff the roles required to navigate enterprises through this complex threat landscape and to safeguard our digital infrastructures and systems.

Cost to enterprises of attacks has gone up considerably

Compounding this problem are a few other factors related to the cost of attacks, data privacy concerns, regulatory compliance requirements, and fines for non-compliance or failure to adequately safeguard user data. So much so that cybersecurity insurance is one of the fastest growing segments of the insurance industry!

As more and more of our lives, from commerce to citizen services, move online, the cost of security breaches has gone up – not just from a regulatory point of view, but also in terms of brand reputation and lost business opportunities. There are instances of CEOs having lost their jobs because of cybersecurity breaches. Cybersecurity is now firmly a C-suite and boardroom issue.

What excites cybersecurity leaders and CXOs about AI?

At the current scale, because of the complexity and diversity of enterprise technology estates and infrastructures, traditional, manual cybersecurity approaches are coming apart at the seams. Given the talent shortages discussed above, it is as if cybersecurity leaders are fighting battles with their hands tied. The potential of AI to automate repetitive tasks and free overworked teams to focus on value-added, proactive analysis is very attractive in this context.

AI Use Cases in Cybersecurity

In addition to AI’s potential for automation, there is also a great deal of interest in exploring the use of AI to improve the current practice of cybersecurity. Note that by AI here, I refer more specifically to one of its branches, machine learning.

The majority of machine learning use cases in cybersecurity rely on supervised machine learning techniques, where human cyber analysts initially train the application using existing labeled data. Use cases for unsupervised machine learning, where there is no such human-labeled training, are still emerging and largely experimental. This is true of use cases outside the realm of cybersecurity as well.

With that context, here are some examples where AI is being used to improve the current approach to cybersecurity:

Intrusion Detection: Machine learning helps detect and defend against intrusions, going beyond simple rules-based logic. Once typical behavior is learned by the AI (based on factors such as the number of access attempts, the frequency of queries, and the amount of data per query), outliers are automatically flagged as suspicious without the need for human intervention.
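
As a minimal sketch of this kind of outlier flagging, the snippet below learns a baseline (mean and standard deviation) over hourly access-attempt counts and flags anything beyond three standard deviations. The traffic numbers and the threshold are illustrative assumptions, not a production detector:

```python
from statistics import mean, stdev

def flag_outliers(counts, threshold=3.0):
    """Flag indices whose value deviates from the learned baseline by > threshold sigmas."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and abs(c - mu) / sigma > threshold]

# 23 hours of normal traffic, then a sudden burst of access attempts
hourly_attempts = [12, 15, 11, 14, 13, 12, 16, 15, 14, 13, 12, 11,
                   15, 14, 13, 12, 14, 15, 13, 12, 11, 14, 13, 480]
print(flag_outliers(hourly_attempts))  # → [23], the burst is flagged
```

Real systems model many features at once and use richer algorithms, but the principle of learning "normal" and flagging deviations is the same.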

Malware Detection: Typically, new malware is manually created by bad actors, but once that is done, the creation of subsequent variants (intended to evade detection) is automated. Enhancing traditional signature-based malware detection with machine learning techniques can identify such future versions and variants and prevent their spread.
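
One way to catch such variants, which defeat exact signature (hash) matching, is fuzzy similarity over byte n-grams. The byte strings below are made up for illustration; real engines use far richer features and learned models:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """Sliding byte n-grams serve as a fuzzy fingerprint of a file."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity of two files' n-gram sets (0.0 = unrelated, 1.0 = identical)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

known   = b"MZ\x90payload:steal-credentials;beacon=evil.example;xor-key=7"
variant = b"MZ\x90payload:steal-credentials;beacon=evil.example;xor-key=9"
benign  = b"MZ\x90hello world, this is a perfectly ordinary program image"

print(similarity(known, variant))  # close to 1.0: the variant is caught
print(similarity(known, benign))   # close to 0.0: unrelated file
```

A variant that changes a few bytes still shares almost all of its n-grams with the known sample, even though its exact hash signature is new.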

Discovery of code vulnerabilities: This is a relatively new application area, where machine learning is used to scan vast amounts of code and automate the process of identifying any potential vulnerabilities (before the hackers do).
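
As a toy illustration of how such learning might work, the sketch below scores a snippet by a naive log-likelihood ratio over tokens seen in hypothetical, hand-made "vulnerable" versus "safe" training examples. Real tools train on far richer representations mined from large corpora of patched vulnerabilities:

```python
import math
from collections import Counter

# Hypothetical labeled training snippets (a real system would learn from
# thousands of examples, e.g. code before and after security patches).
vulnerable = ["eval(user_input)", "os.system(cmd + user_input)", "pickle.loads(blob)"]
safe       = ["int(user_input)", "shlex.quote(cmd)", "json.loads(blob)"]

def tokens(snippet):
    return snippet.replace("(", " ").replace(")", " ").replace(".", " ").split()

def model(snippets):
    counts = Counter(t for s in snippets for t in tokens(s))
    return counts, sum(counts.values())

vuln_counts, vuln_total = model(vulnerable)
safe_counts, safe_total = model(safe)

def risk_score(snippet):
    """Positive = looks more like the vulnerable examples (add-one smoothing)."""
    return sum(
        math.log((vuln_counts[t] + 1) / (vuln_total + 1))
        - math.log((safe_counts[t] + 1) / (safe_total + 1))
        for t in tokens(snippet)
    )

print(risk_score("eval(request_data)") > risk_score("json.loads(request_data)"))  # True
```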

Enhanced Threat Intelligence: By combining traditional threat intelligence (i.e. using a list of all known threats to date) and using machine learning to detect new threats, better overall threat detection rates can be achieved.
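
In its simplest form, this combination layers an ML anomaly score behind a known-indicator lookup; the feed entries, score, and threshold below are illustrative assumptions:

```python
KNOWN_BAD = {"198.51.100.7", "203.0.113.9"}  # hypothetical threat-intelligence feed

def assess(source_ip, anomaly_score, threshold=0.8):
    """Known indicators are flagged outright; unknown sources fall through to the ML score."""
    if source_ip in KNOWN_BAD:
        return True, "known indicator"
    if anomaly_score > threshold:
        return True, "anomalous behavior"
    return False, "clean"

print(assess("198.51.100.7", 0.1))  # (True, 'known indicator')
print(assess("192.0.2.5", 0.95))    # (True, 'anomalous behavior')
print(assess("192.0.2.5", 0.2))     # (False, 'clean')
```

The list catches everything already known; the learned score extends coverage to threats the list has never seen.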

Fraud Detection: Fraudulent transactions and activity can be flagged and prevented in real time by detecting patterns and identifying deviations from expected baseline behavior. Anomaly detection, as this technique is commonly known, is one of the best-known applications of machine learning. Manually sifting through vast volumes of event logs to identify outliers is practically impossible; it is a task best left to AI.
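
A per-account version of this baseline-deviation idea can be sketched in a few lines. The purchase history and three-sigma threshold are invented for illustration; real fraud systems score many signals (merchant, geography, device, timing) at once:

```python
from statistics import mean, stdev

def looks_fraudulent(history, amount, z_threshold=3.0):
    """Flag a transaction that deviates sharply from this account's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

purchases = [42.0, 38.5, 51.0, 45.0, 40.0, 47.5, 44.0, 39.0]  # typical card spend
print(looks_fraudulent(purchases, 46.0))    # False: within normal range
print(looks_fraudulent(purchases, 2500.0))  # True: flagged for review
```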

As you can see from the above discussion, these use cases are not entirely new; practitioners have been doing these things for a long time. The difference is that AI is now being applied to these existing use cases to make them more robust and more effective. In this manner, enterprises can reduce the time taken to identify, analyze, and respond to threats by complementing and extending existing approaches with AI.

But AI can also increase the threat surface. AI systems, just like other IT systems, come with their own vulnerabilities. Attacks on AI systems mostly involve confusing the underlying machine learning model and bypassing what the AI system is supposed to do. For example, generative adversarial networks (GANs, a type of artificial neural network technique) can be used to fool facial recognition security systems. GANs can even be used to attack speech applications and subvert voice biometric systems. Another example: by fooling the AI system in a subtle way, a malware file can be made to be incorrectly classified as a safe file. As AI applications are more widely adopted, such risks will also increase. These risks first need to be understood before they can be mitigated. This also means that cybersecurity specialists need a very good understanding of how such applications work and of their susceptibility to adversarial attacks, and they need to become well-versed in machine learning technologies.
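
To make the malware-misclassification example concrete, here is a toy linear classifier being evaded by small, greedy feature perturbations. The weights and features are invented, and real attacks on deep models are far more sophisticated, but the principle of nudging inputs until the verdict flips is the same:

```python
# Toy linear model: score = w · x; score > 0 means "malware".
weights  = [0.9, -0.2, 0.7, 0.4]   # hypothetical learned weights
features = [1.0, 0.0, 0.8, 0.5]    # feature vector of an actual malware file

def classify(x):
    return sum(w * xi for w, xi in zip(weights, x)) > 0

def evade(x, step=0.1, budget=30):
    """Greedily push each feature against its weight's sign until misclassified."""
    x = list(x)
    for _ in range(budget):
        if not classify(x):
            return x
        for i, w in enumerate(weights):
            x[i] -= step if w > 0 else -step
    return x

adversarial = evade(features)
print(classify(features), classify(adversarial))  # True False
```

Each perturbed feature moves only a little, yet the cumulative effect pushes the file across the decision boundary and it is waved through as safe.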

AI can be weaponized by malicious actors. Another note of caution is in order. AI is a dual-use technology, which means it can be used for both good and evil. In the context of cybersecurity, we need to realize that the same AI technologies are available to malevolent actors, who are becoming adept at using AI and have started to employ it in a variety of ways. One such example is spear phishing, where emails are personalized using AI to maximize the chance of victims opening them and clicking through to unsafe links and sites. Not only that, hackers are also choosing their victims based on the likelihood of them “converting” – just like a regular marketer using AI.

Such risks become even more heightened in “work from home” and remote working scenarios, where the workforce is much more likely to be distributed and outside the organizational security perimeter. Mitigating such risks is going to be a big area of concern for cybersecurity teams, and a first step would be to hire AI experts into those teams.

We have discussed a wide range of topics and themes, but in the final analysis, cybersecurity is a constant cat-and-mouse game. There is no definite end point; rather, it is a continuous cycle of identifying, preventing, and guarding against emerging threats and new risks. Artificial intelligence does not change this fundamental dynamic of cybersecurity, but it can provide an edge to enterprises that wield it smartly. Ultimately, the question boils down to whether your enterprise harnesses artificial intelligence technologies effectively – or whether the hackers leverage AI better than you do.

About the Author

Kashyap Kompella is the CEO of RPA2AI Research, a global technology industry analyst firm. He is also the co-author of the bestseller “Practical Artificial Intelligence: An Enterprise Playbook.”

Disclaimer

CISO MAG did not evaluate/test the products mentioned in this article, nor does it endorse any of the claims made by the writer. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same. CISO MAG does not guarantee the satisfactory performance of the products mentioned in this article.
