Industry Insights

Assessing AI Threats: Revolution or Evolution?

April 15, 2024

We need to talk about “AI threats”. 

Yesterday, Lloyds – the British insurer – released a report on how GenAI is “transforming the cyber landscape”. 

This is the latest in a long list of headlines over the past few months, which have demonstrated either growing threats against LLMs or how AI is enabled threats: 

These all make great headlines, but how new are these as threats? Who should be concerned? More importantly, what does this mean for security teams?

TLDR: These headline threats are mostly just headlines right now. For the majority of us, it’s an evolution of what security should already be doing rather than a complete revolution. Beyond that, many of the ‘new’ AI threats are only relevant to those building their own LLMs. 

Threats Against Large Language Models

Two of the most popular frameworks for understanding GenAI threats are Mitre Atlas and OWASP’s Top Ten for LLMs

Atlas is the sister of Mitre’s ATT&CK framework and borrows plenty from it. At the time of writing, there were 56 techniques defined. This is a helpful framework but it does apply more to companies developing their own AI models. Overwhelmed security teams shouldn’t feel the need to try and understand and worry about all the risks identified here unless their company is building and deploying their own LLMs.

OWASP Top Ten for LLMs OWASP is not dissimilar to Mitre Atlas in its applicability to security teams; the top ten threats are specific to those building their own GenAI applications. Prompt injection, data poisoning, and model theft are real problems, but they largely don’t apply to many security teams. 

As with any cyber threat, the extent to which you should worry about it ought to depend on your own threat model.

AI Enabled: Use of Generative AI Tools for Planning Campaigns

Remember the story of WormGPT, advertised on hacking forums last year? The compelling offering of the tool was less about generating zero-days, and more about creating incredibly convincing phishing emails. 

This has been reflected in what has been observed “in the wild”. In February, Microsoft and OpenAI produced a joint piece of research called ‘Staying ahead of threat actors in the age of AI’. As part of this research, they observed a handful of nation-state actors using LLMs to enhance reconnaissance, social engineering, and scripting support.

The one story that stands out, of course, is the successful deepfake attack against a Hong Kong bank in February. This may well be a sign of what is to come, but we expect conventional attacks to still be the norm in 2024.

In short, attackers are using GenAI in a very similar way to defenders; as a productivity booster, rather than a hacking tool. Attackers are not “weaponizing” AI; they’re just using it. 

Will we see more convincing phishing emails? Probably. Will the planning stage of attacks be shortened? Probably. Is this a fundamental shift? Probably not.

AI Enabled: Insider Threats

There has been a fair amount of chatter around the impact of GenAI on insider threats, mostly from insider risk management providers. According to Code42’s 2024 Data Exposure Report, there has been a  “28% average increase in monthly insider-driven data exposure, loss, leak, and theft events” since 2011, although the correlation to AI is unclear. 

This week, the latest edition of Return on Security revealed some interesting insights on this topic. Faced with the question “How do you think the usage of AI will affect insider threats”, only 27% of respondents thought that AI would make insider threats worse.  

Source: Return on Security: https://www.returnonsecurity.com/p/security-funded-134

While this doesn’t appear to be a huge threat right now, this can develop quickly. In particular, we should keep an eye on the impact of Copilot for Microsoft Studio, which has the potential to present an awful lot of sensitive data in the wrong hands. More on that in a future blog!

Looking Beyond the Threats: Data Privacy the Number One Concern

So what should we focus our attention on, if not threats against LLMs and AI enabled threats? 

The one constant challenge that comes up time and time again is around data privacy and regulations. There’s almost too much to reference here, so here’s a snapshot: 

  1. 87% are concerned their employees may inadvertently expose sensitive data to competitors by inputting it into GenAI (Code 42, 2024 Data Exposure Report)
  2. 85.7% of respondents listed “data loss” as top AI risk (Harmonic Survey)
  3. 77% of the respondents cited regulation, compliance, and data privacy as key barriers to the rapid employment of generative AI. (MIT Technology Review)
  4. “Uneasiness over data security, privacy, or compliance” is listed as the top barrier to adopting GenAI”. (Gartner, ‘Crossing the Chasm: Tech Provider Plans for Generative AI in 2024’) 

Rather than new threats, it’s data privacy that is a real challenge for most organizations; one that’s about people and processes as much as it is about technology. 

Although it’s not a revolution, there’s plenty to improve here. First, we must learn from the mistakes made with SaaS and Shadow IT. Locking down access with broad blocks drives users underground. Instead, get close to users. Understand their needs, and find secure ways to enable them. 

Second, traditional data protection tools are ill-equipped to prevent the vast swathes of unstructured sensitive data from being shared outside the business. We need to find new ways to protect our crown jewels. 

Summary

For the majority of security teams, the impact of GenAI on threat detection will be about attackers improving what they are already doing. This means better phishing campaigns, refining scripts, and gathering information.

While this clearly lowers the barriers to entry and will increase attacker productivity, we need to make sure we don’t over-rotate on shiny new GenAI threats.

For most organizations, it’s about protecting sensitive data, adhering to regulations, and refining the controls already in place.

We’ll soon be publishing some actionable guidance on best practices for overcoming these data privacy challenges, so stay tuned!

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn bore about Harmonic Security's apporach.
Michael Marriott