If there’s one thing that’s remained constant this year, it’s this: employees are doing whatever they can to use the latest AI tools.
They’re not waiting for policy or training. They’re discovering new tools through Slack threads, newsletters, and Chrome extensions, and giving them a go before you’ve even heard of them. And as they do, they’re bouncing between work and personal accounts, uploading everything from spreadsheets to customer contracts, often without even realising the risk.
Some of that experimentation is brilliant. It’s how teams innovate and move quickly. But it creates a real challenge for anyone trying to protect sensitive data in real time.
30X Increase in GenAI Site Coverage
The average company is now dealing with 254 different AI apps across its user base. That’s not a typo. Two hundred and fifty-four.
This month at Harmonic we’ve taken a big step forward in helping security teams keep pace. We’ve expanded our coverage to 30 times more GenAI tools than before. That includes the well-known ones, sure. But also the odd and obscure. The niche tools that pop up one week and are somehow embedded in daily workflows the next.
Detect Sensitive Content in Files
And it’s not just about detecting prompts anymore. We can now scan for sensitive data across all key file types: PDFs with customer contracts, spreadsheets with financial data, text files containing source code or access keys. If someone uploads them into an AI-enabled tool, you’ll know about it immediately.
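To make the idea concrete, here’s a minimal sketch of what pattern-based scanning of a text file can look like. This is illustrative only, not Harmonic’s detection engine; the patterns, file handling, and thresholds below are simplified assumptions, and a real product would layer on much richer detection.

```python
import re
from pathlib import Path

# Illustrative patterns only (assumptions, not Harmonic's rules).
# A production scanner would add entropy checks, ML classifiers,
# and parsers for binary formats like PDFs and spreadsheets.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*\S{16,}"),
}

def scan_file(path: Path) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) hits for one text file."""
    hits = []
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

if __name__ == "__main__":
    # "upload.txt" is a placeholder for a file a user is about to upload.
    for name, lineno in scan_file(Path("upload.txt")):
        print(f"sensitive content: {name} at line {lineno}")
```

The point of the sketch is the shape of the problem: detection has to happen per file, before the upload completes, and with patterns precise enough not to bury you in false positives.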
Here’s the reality. People aren’t doing this with bad intent. Often they just want to rewrite a document or analyse some figures quickly. But intent doesn’t stop data from leaking. And legacy DLP tools still struggle to pick this up without overwhelming you with noise.
Personal Accounts Still Fly Under the Radar
Another growing concern is personal account usage. Nearly half of all sensitive data exposures (45.4%) happen through personal accounts.
We’ve made it easier to respond to that. With our latest update, you can now tailor what happens based on the account someone is logged into when an alert fires. If someone’s using a sanctioned work account on ChatGPT Enterprise, you might decide to just monitor. But if they’re on a personal Gmail account using a tool you haven’t approved, you can set different actions in motion.
You’re no longer stuck with one-size-fits-all controls. You can shape your response to fit the real context.
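To illustrate what account-aware responses might look like, here’s a hypothetical sketch. The field names, action labels, and matching logic are invented for illustration; they are not Harmonic’s actual configuration schema.

```python
from dataclasses import dataclass

# Hypothetical alert context (assumed fields, not Harmonic's API).
@dataclass
class AlertContext:
    tool: str             # e.g. "ChatGPT Enterprise"
    account_type: str     # "work" or "personal"
    tool_sanctioned: bool  # whether the tool is on the approved list

def choose_action(ctx: AlertContext) -> str:
    """Pick a response based on the account the user is signed into."""
    if ctx.account_type == "work" and ctx.tool_sanctioned:
        return "monitor"           # sanctioned work account: observe only
    if ctx.account_type == "personal" and not ctx.tool_sanctioned:
        return "block_and_notify"  # unapproved tool on a personal account
    return "warn_user"             # everything in between: nudge the user

print(choose_action(AlertContext("ChatGPT Enterprise", "work", True)))  # monitor
print(choose_action(AlertContext("SomeNicheAI", "personal", False)))    # block_and_notify
```

The design choice this captures: the same data, in the same tool, can warrant different responses depending on whose account it flows through.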
Why This Matters
What we’ve seen across our customers is that shadow AI rarely starts with malice. It starts with people just trying to get work done. The risk is real, but locking everything down often drives usage underground. It doesn’t stop it.
The answer lies in visibility, context, and control that adapts to real working patterns. This update gives you exactly that. Broader coverage. File-level protection. Smarter account-aware policies.
You’re not just reacting. You’re keeping pace.
If you’d like a walkthrough, we’re always happy to show you how this works in practice.