If you're wondering how many AI tools employees are really using at work, you're not alone. It turns out the answer is: a lot more than most IT and security teams realize.
Our new research report, The AI Tightrope: Balancing Innovation and Exposure in the Enterprise, digs into the latest GenAI usage trends. In Q1 2025, we analyzed over 176,000 AI prompts and thousands of file uploads from a sample of 8,000 enterprise users. The findings were eye-opening: the average company interacted with 254 distinct AI applications, and that figure doesn't even count tools accessed via mobile apps or APIs.
That’s not a typo. Two hundred and fifty-four.
This is what tool sprawl looks like in the era of generative AI. From ChatGPT and Claude to niche domain-specific apps and fast-growing tools out of China, employees are experimenting, iterating, and (increasingly) doing it on their own terms.
The Rise of Shadow AI: When Governance Stops at the Login Screen
One of the most concerning trends to emerge is how often employees are using personal accounts to interact with AI platforms. According to Harmonic’s research, 45.4% of sensitive AI interactions came from personal email accounts. Of those, 57.9% were Gmail addresses.
This means sensitive content (everything from legal documents to source code) is being routed through accounts that sit entirely outside corporate control.
This isn’t just a hypothetical risk. In fact, 21% of all sensitive data captured in the study was submitted to ChatGPT’s free tier, where prompts can be retained and used for training purposes. And while companies may assume their internal AI policy has things locked down, it’s clear that employees are finding workarounds because they want the productivity benefits of AI, and many don’t realize the security implications of how they’re accessing it.
Why Are Employees Using Personal Accounts for AI Tools?
The answer is simple: it’s frictionless.
Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast. If the official tools aren’t easy to access or if they feel too locked down, they’ll use whatever’s available. And what’s available is often whatever tab they had open last: ChatGPT, Gemini, Claude, Perplexity, or a Chrome extension someone in their Slack channel recommended.
In many cases, they’re not trying to be reckless. They’re just trying to get work done.
But the result is that organizations are facing an enormous governance gap.
Are Employees Using Too Many AI Tools?
Yes, at least from a security and visibility standpoint.
The average of 254 AI-enabled apps per company doesn't just represent diversity; it represents chaos for governance and risk teams. Many of these apps are completely unsanctioned. Some are connected to cloud-based services with unclear data retention policies. A few are built overseas, with questionable compliance with regional data privacy laws.
Among the more eyebrow-raising findings from Q1:
- 7% of users accessed Chinese-built AI tools, including DeepSeek, Ernie Bot, and Qwen Chat, whose data handling policies are often unclear or subject to state control.
- Image files accounted for 68.3% of uploads to ChatGPT, suggesting a growing comfort with uploading multimedia content into LLMs, regardless of policy.
- Standard document types like .docx, .pdf, and .xlsx continue to flow freely into public models, even when they contain proprietary business data.
Shadow AI and the Challenges of LLM Governance
If you're researching “shadow AI usage in enterprises” or looking for real-world data on “LLM tool governance,” this is where things get tricky.
Many companies have written GenAI usage policies. But few have the tooling to enforce them, especially when activity is happening in personal Gmail accounts or through browser-based tools that don't show up in conventional endpoint monitoring.
The net result: companies are sitting on a false sense of security.
They think risk is under control because they've issued guidelines. But when nearly half of all sensitive AI interactions are happening outside managed environments, that confidence is unfounded.
Why Personal Account AI Use Matters
There’s a significant difference between an employee using ChatGPT in a corporate-sanctioned, enterprise-controlled workspace, and that same employee uploading sensitive files to ChatGPT Free under a personal Gmail login. The latter is a blind spot for compliance teams, a nightmare for legal review, and a real-time data loss risk.
Even if an organization is trying to control what gets sent to public LLMs, the moment the interaction moves to a personal account, there is no oversight. There’s no logging, no data retention management, and no real way to know what was shared.
So What Can Companies Do?
Blocking AI tools altogether doesn’t work. Employees will find a way, just like they did with shadow IT during the early SaaS era.
Instead, organizations need to move beyond policy and focus on enforcement and behavior shaping at the point of use. That means:
- Real-time detection of sensitive data in AI prompts and file uploads, even when they originate from personal accounts in corporate browsers (a minimal sketch of this idea follows the list).
- Browser-level visibility and enforcement, since this is where most GenAI tools are accessed.
- Employee-friendly interventions that nudge users toward safer choices, rather than punishing them after the fact.
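To make the first point concrete, here is a minimal, illustrative sketch of what point-of-use detection can look like. This is not Harmonic's implementation; the pattern names, regexes, and example prompt are hypothetical stand-ins for the kinds of checks a browser-level control might run on prompt text before anything is submitted.

```typescript
// Illustrative sketch: scan prompt text for common sensitive-data markers
// before it leaves the browser. Patterns and names here are hypothetical
// examples, not the detection logic described in the report.
const SENSITIVE_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/,
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/,              // classic AWS access key ID format
  usSsn: /\b\d{3}-\d{2}-\d{4}\b/,                    // US Social Security number
  privateKeyHeader: /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/,
};

interface Finding {
  kind: string;
  match: string;
}

// Return a list of sensitive-data findings in a prompt.
function scanPrompt(prompt: string): Finding[] {
  const findings: Finding[] = [];
  for (const [kind, pattern] of Object.entries(SENSITIVE_PATTERNS)) {
    const match = prompt.match(pattern);
    if (match) {
      findings.push({ kind, match: match[0] });
    }
  }
  return findings;
}

// Example: nudge the user toward a safer choice instead of silently blocking.
const draft = "Summarize this contract for customer jane.doe@example.com";
const findings = scanPrompt(draft);
if (findings.length > 0) {
  console.warn(
    `This prompt appears to contain: ${findings.map(f => f.kind).join(", ")}. ` +
    "Consider removing it or switching to the sanctioned enterprise workspace."
  );
}
```

Simple pattern matching like this is only a starting point; production tools typically layer on classification and context. But the principle is the same: catch the risk at the point of use, not after the fact.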
There’s a way to enable AI without opening the floodgates to uncontrolled data exposure. But this all starts by acknowledging how employees are really using these tools.
Final Thoughts
If you're looking to understand the true scale of GenAI tool usage, or researching how employees are using AI tools outside IT control, the Q1 2025 research paints a clear picture: this isn’t a fringe issue. It’s mainstream. It’s growing. And it’s happening in nearly every enterprise, whether or not there’s a formal AI policy in place.
The time for passive monitoring is over. It’s not enough to know that ChatGPT is popular. You need to know who’s using it, what they’re uploading, and whether they’re using personal or corporate accounts to do it.
Snag a copy of our full research findings here.