Quick thoughts after RSA.
It was hard to go more than five feet last week without running into someone talking about multi-agent workflows, agentic AI, or quantum. But under all the buzz, there’s a quieter, more grounded reality setting in for enterprise security teams.
I spoke with 30 or 40 CISOs last week. The consensus? Securing agentic workflows is a million miles away from the risks and realities teams face today.
Most security teams are well into the AI journey but have hit stumbling blocks with visibility and controls. It’s not sustainable, and employees aren’t waiting for security to catch up.
Everyone is on the same AI journey
When it comes to AI adoption, the requirements are really crystallizing. Everybody is somewhere along the same journey.
Step 1: Create an AI Policy
Step one on the enterprise GenAI journey is always the same: write a policy. Nearly every organization has done it by now. The issue is that most of those policies aren’t enforced, are barely communicated, and were often generated by ChatGPT itself. They’re designed to check a box or protect against legal exposure, not to meaningfully reduce risk.
Usually they say something like, “Don’t paste sensitive data into public tools.” That’s fine in theory. But no one reads these policies, let alone follows them.
And they definitely don’t stop a well-meaning employee from uploading M&A information to a personal Gamma account on Friday afternoon.
Step 2: Put Your Steering Committee in Place
Once the policy is in place, most organizations form a steering committee. That’s usually step two. A small group meets regularly to discuss risks, priorities, and where AI should or shouldn’t be used.
But they quickly run into a visibility problem. They don’t know what tools employees are using or how they are using them.
Step 3: Visibility
Lack of AI visibility is the number one complaint we hear from CISOs. They can’t tell which apps are GenAI-enabled, what their employees are using them for, or how sensitive data is flowing through them.
Most teams turn to their SASE tools by default. But those weren’t built for this kind of visibility. They can’t show you what prompts are being entered or what the use case actually is. Instead, they’ve created a fictional, out-of-date ‘GenAI’ category of a couple of hundred chat apps. That category is already obsolete: almost every SaaS app, from HubSpot to Grammarly to DocuSign, now lists GenAI providers among its subprocessors. Meanwhile, new tools like Gamma and GenSpark are not chat apps at all, yet they are GenAI-based and have tens of millions of users. So what distinction are SASE tools actually making? There is none to be made.
So security teams are stuck doing the work manually. Digging through logs, chasing down obscure tools, and trying to assess risk app by app. It’s taking three to four days per tool to triage.
And with an average of 254 AI apps in use across the enterprise, that works out to hundreds of days of triage. It’s just not sustainable.
Step 4: Controls
The final and most important step is to put sensible controls and protections around the use of AI.
Interestingly, I think there’s actually scope to go very fast here if organizations want to. With most organizations, we end up restricting the following (a rough sketch of what this can look like as policy follows the list):
- Use of China-based apps (DeepSeek, Baidu Chat, Manus, Kimi Moonshot, ERNIE, etc.)
- Use of personal accounts (employees logging in with Gmail)
- Use of free tiers (where prompts may be used for model training)
- File uploads to high-risk sites
- Movement of highly sensitive IP and customer data
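To make that concrete, here is a rough sketch of what a control set like this could look like expressed as policy-as-code. Everything in it (the rule names, the fields, the matching shape) is an illustrative assumption, not any real product’s configuration schema.

```typescript
// Illustrative only: a hypothetical policy-as-code shape for the controls above.
// The types and values are assumptions for the sake of the example.
type Action = "block" | "warn" | "allow";

interface GenAIRule {
  name: string;
  match: {
    apps?: string[];                          // specific GenAI apps
    accountType?: "personal" | "corporate";   // which identity was used to log in
    tier?: "free" | "enterprise";             // free tiers often train on your data
    activity?: "prompt" | "file_upload";      // what the user is doing
    dataLabels?: string[];                    // e.g. sensitive IP, customer data
  };
  action: Action;
}

const genAIPolicy: GenAIRule[] = [
  { name: "Block China-based apps",
    match: { apps: ["DeepSeek", "Baidu Chat", "Manus", "Kimi Moonshot", "ERNIE"] },
    action: "block" },
  { name: "No personal accounts",
    match: { accountType: "personal" },
    action: "block" },
  { name: "Warn on free tiers (training risk)",
    match: { tier: "free" },
    action: "warn" },
  { name: "No file uploads to high-risk sites",
    match: { activity: "file_upload" },
    action: "block" },
  { name: "Protect sensitive IP and customer data",
    match: { dataLabels: ["intellectual_property", "customer_data"] },
    action: "block" },
];
```

The point is less the syntax and more that these five controls are small, concrete, and enforceable, which is why organizations can move quickly on them if their tooling cooperates.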
Unfortunately, current tools are surprisingly bad at this.
Limited viable options for GenAI controls
At this stage, security teams look at two traditional security options they often already have at their disposal:
- Turn on SASE DLP. This floods teams with false positives and misses context. It didn’t work before GenAI and doesn’t work now. It’s a compliance tick box, not a security measure.
- Label all your data. Platforms like Microsoft Purview sound good in theory, but in practice labeling is complex, time-consuming, and a tall order for most companies to pull off.
Once they realise those approaches do not work, they are left to pick from two bad choices:
- Approve one enterprise tool (Copilot, Gemini, or ChatGPT) and block everything else. This sounds simple but breaks quickly. AI categories are too broad, and employees find workarounds. They use their phones, personal laptops, or shadow IT. They get frustrated and complain that security is once again a blocker rather than an enabler. We are back to being ‘the department of no’. Security teams end up in ‘exception hell’ with a long backlog of manual app approval requests.
- YOLO mode. The least sustainable path. You send out a policy, buy some enterprise licenses, maybe block a few tools like DeepSeek, and hope nothing bad happens. At least you’re not a blocker, but who knows where the data is flying and how big a risk you are running.
None of these give you visibility into what employees are actually doing with GenAI tools or how sensitive the outputs might be. And none help users make better decisions in real time.
Harmonic is designed to complement your SASE
We’re not asking anyone to rip out their SASE. But we do believe something new is needed alongside it: something designed for how employees actually use GenAI. Something you can put live in a POC in under 30 minutes.
Harmonic sits in the browser, right where prompts are being typed and files are being uploaded. It understands context, flags risky behavior, and nudges employees before the data leaves the enterprise boundary.
And because it’s powered by language models, it doesn’t rely on regex or brittle policy rules. It understands nuance. That means 96 percent fewer false positives than traditional DLP, with latency under 200 milliseconds. Fast enough to be imperceptible to users. Accurate enough not to be ignored.
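For readers who want to picture the mechanics, here is a minimal sketch of the general browser-side pattern: a content script that catches a prompt before it is submitted and asks an LLM-backed classifier whether it contains sensitive data. The classifyPrompt function and the /classify endpoint are hypothetical stand-ins; this illustrates the pattern in general, not Harmonic’s actual implementation.

```typescript
// Sketch of the general browser-side pattern, not any real product's code.
// classifyPrompt and its endpoint are hypothetical; any LLM-backed classifier
// service could stand in here.
async function classifyPrompt(text: string): Promise<{ sensitive: boolean; reason: string }> {
  const res = await fetch("https://dlp.example.internal/classify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

// Content script: intercept form submissions in a GenAI web app and nudge
// the user before the prompt leaves the browser.
document.addEventListener(
  "submit",
  async (event) => {
    const form = event.target as HTMLFormElement;
    const prompt = (form.querySelector("textarea")?.value ?? "").trim();
    if (!prompt) return;

    event.preventDefault(); // hold the submission until the check completes
    const verdict = await classifyPrompt(prompt);

    if (verdict.sensitive) {
      // Nudge rather than silently block: the user can still choose to proceed.
      const proceed = window.confirm(
        `This prompt appears to contain sensitive data (${verdict.reason}). Send anyway?`
      );
      if (!proceed) return;
    }
    form.submit(); // re-submit once the user has been informed
  },
  true // capture phase, so the check runs before the page's own handlers
);
```

The nudge-first design matters: blocking silently trains employees to route around security, while an in-context prompt gives them the chance to make a better decision on the spot.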
I posted last month about how, soon after deployment, one of our clients reduced data leakage by 72% while boosting GenAI adoption by 300%. It’s time to make CISOs the heroes, leading the charge in enabling the business with GenAI.