A Year After the DeepSeek Launch
1 in 12 employees used China-based AI tools in the last month, but Kimi Moonshot is far ahead of the other Chinese AI tools hiding in your enterprise
7.95% of employees in the average enterprise used at least one Chinese GenAI tool in the past 30 days. That's nearly 1 in 12 workers sending data to platforms with minimal transparency around retention and training.
Worse still, the kinds of data shared with these tools are alarming: Code tops the list, followed by Financial Projections, Legal Discourse, M&A Data, and PII.
And here's the twist most security teams miss: DeepSeek isn't your biggest shadow AI problem. It's not even close.
Kimi Moonshot dominates usage volume at roughly 3.5x DeepSeek's traffic. But sensitive data exposure tells the opposite story: DeepSeek accounts for only 25% of usage yet 55% of sensitive exposure, driven largely by its popularity with coders. Your employees use Kimi far more, but DeepSeek is where the most sensitive data ends up.
Let’s dig into the data.
The DeepSeek Anniversary: What's Changed in 12 Months
One year ago, DeepSeek's R1 model rewrote the AI industry's rules. Within days, it overtook ChatGPT as the most downloaded iOS app and triggered a roughly 17% drop in Nvidia's share price, wiping out $593 billion in market value.
For security teams, that moment sparked a challenge that has only intensified: shadow adoption of Chinese AI tools is outpacing the policies meant to govern it.
Shadow AI Data: Which China-Based Tools Employees Actually Use
Usage Volume: Kimi Moonshot Leads
2025 enterprise AI usage for China-based tools:
- Kimi Moonshot — ~700k interactions
- DeepSeek — ~200k interactions
- Youdao — ~150k interactions
- Tencent Hunyuan — ~100k interactions
- Kling AI, Baidu Chat, ERNIE Bot, Manus, Qwen Chat, ChatGLM — lower volumes
If your China AI policy focuses only on DeepSeek, you're missing where most activity happens.
Sensitive Data Exposure: DeepSeek Dominates
The ranking flips for sensitive data:
- DeepSeek — ~4k instances (55% of total)
- Kimi Moonshot — ~2.8k instances
- Tencent Hunyuan — ~350 instances
- Youdao — ~300 instances
DeepSeek has 25% of usage but 55% of sensitive exposure. Its strength in coding tasks likely drives disproportionate risk.
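To make that gap concrete, here is a quick back-of-the-envelope rate comparison using the rounded figures above (the counts are approximations, not exact telemetry):

```python
# Rough per-interaction sensitive-exposure rates from the approximate figures above.
usage = {"DeepSeek": 200_000, "Kimi Moonshot": 700_000}
sensitive = {"DeepSeek": 4_000, "Kimi Moonshot": 2_800}

for tool in usage:
    rate = sensitive[tool] / usage[tool]
    print(f"{tool}: ~{rate:.1%} of interactions contain sensitive data")

# DeepSeek: ~2.0% of interactions contain sensitive data
# Kimi Moonshot: ~0.4% of interactions contain sensitive data
```

On a per-interaction basis, a DeepSeek prompt is roughly five times more likely to contain sensitive data than a Kimi prompt.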
What Sensitive Data Types Are Being Exposed to Chinese AI
- Code — dominant category (~2k instances)
- Financial Projections (~1k instances)
- Legal Discourse (~900 instances)
- M&A Data (~450 instances)
- Investment Performance, PII, Billing/Payment, Proprietary Business, Sales Pipeline, Settlements — all present
Code leads because developers and "vibe coders" unintentionally leak source code, API keys, and system architecture into foreign-hosted models.
The China-Based AI Landscape Beyond DeepSeek
DeepSeek grabbed headlines. Kimi quietly captured traffic.
Kimi Moonshot emerged as the dominant tool, backed by $1 billion+ from Alibaba and Tencent. In November 2025, Moonshot released Kimi K2 Thinking, a 1-trillion-parameter model.
Manus made headlines in December when Meta acquired it for $2+ billion. Meta will wind down Chinese operations, but historical data exposures don't disappear when ownership changes.
Baidu Chat, ERNIE Bot, and Alibaba's Qwen continue expanding reach among developers seeking Western alternatives.
Common appeal: powerful, free or cheap, browser-based, no IT required.
Common risk: minimal data handling transparency.
DeepSeek Security Vulnerabilities and Privacy Concerns
Security Issues Discovered
Exposed Database: Wiz Research found a publicly accessible ClickHouse database with one million+ records including chat logs and API keys.
Unencrypted Transmission: NowSecure found the iOS app transmitting device data without encryption.
Weak Cryptography: SecurityScorecard identified hardcoded encryption keys and outdated algorithms.
ByteDance Integration: Multiple ByteDance libraries enable data collection and remote behavior changes post-installation.
Global Regulatory Response to DeepSeek
Taiwan banned it January 27, 2025. Texas followed the next day. NASA and US Navy issued internal bans. Italy blocked access over GDPR concerns. Australia, Canada, Netherlands, and South Korea imposed restrictions.
By February 2025, US Representatives introduced the "No DeepSeek on Government Devices Act."
DeepSeek Data Privacy Reality
Data stored on servers in China. Subject to Chinese legal jurisdiction. Policy permits using content to "provide, maintain, develop, and improve" services. Many interpret this as training on inputs.
For enterprises with proprietary software or regulated data: unacceptable exposure.
Why China-Based AI Tools Create Enterprise Risk
Data Sovereignty Concerns
Content falls under Chinese jurisdiction regardless of user location. China's cybersecurity laws allow government data access.
Training and Retention Opacity
Most platforms don't specify retention limits or whether inputs train models. Even Kimi's API terms permit using content to "improve services."
Compliance Gaps
GDPR, CCPA, HIPAA require controlling how sensitive data is processed. Platforms without clear compliance documentation create liability.
NIST's CAISI found both DeepSeek and Kimi K2 exhibit measurably higher alignment with CCP positions on sensitive topics. That's relevant for organizations concerned about information integrity.
Why Blocking Shadow AI Fails
Blocking everything Chinese rarely works.
When organizations implement restrictive policies, employees find workarounds: personal devices, personal networks, tomorrow's new tool. With 665 AI applications in our dataset and new tools emerging constantly, comprehensive block lists are operationally impossible.
Blanket blocking also destroys value. Asia-Pacific teams lose regional workflows. Developers lose open-source learning. Translation use cases (18.3% of all AI traffic) get pushed to higher-risk tools.
Over-blocking doesn't reduce AI usage. It reduces visibility.
How to Govern China-Based AI Tools Effectively
1. Establish Shadow AI Visibility
You can't govern what you can't see. Monitor which China-based tools employees use, usage frequency, sensitive data transmission, and highest-adoption teams.
Don't assume DeepSeek is your biggest problem. Our data shows Kimi Moonshot likely has far more traffic.
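As a starting point, the sketch below shows the kind of log review this implies. The domain list and CSV log format are illustrative assumptions, not an authoritative inventory; in practice this logic lives in your secure web gateway, CASB, or DLP tooling.

```python
import csv
from collections import Counter

# Illustrative (and deliberately incomplete) mapping of domains to China-based
# AI tools. Treat this as an assumption to replace with a curated inventory.
CHINA_AI_DOMAINS = {
    "chat.deepseek.com": "DeepSeek",
    "kimi.moonshot.cn": "Kimi Moonshot",
    "hunyuan.tencent.com": "Tencent Hunyuan",
    "tongyi.aliyun.com": "Qwen Chat",
}

def summarize_proxy_log(path: str) -> Counter:
    """Count requests per (tool, user) from a CSV proxy export with 'user' and 'host' columns."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = CHINA_AI_DOMAINS.get(row["host"].strip().lower())
            if tool:
                hits[(tool, row["user"])] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users of China-based AI tools in the export.
    for (tool, user), count in summarize_proxy_log("proxy_export.csv").most_common(10):
        print(f"{tool:<16} {user:<24} {count} requests")
```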
2. Create Explicit China AI Policies
Don't rely on general shadow AI policies for jurisdictional concerns. Specify which platforms are prohibited, which allow non-sensitive use, what data types can never be transmitted, and how to request exceptions.
Explain the "why." Employees respond better to data sovereignty rationale than blanket bans.
3. Block Sensitive Data, Not Applications
Deploy real-time detection for code, credentials, PII, and confidential information. Warn users before sensitive submissions. Reserve hard blocks for high-risk combinations like API keys to non-approved tools.
Protect against data exposure while preserving autonomy for low-risk use.
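A minimal sketch of what pattern-based pre-submission checks can look like is below. The regexes are illustrative; production DLP engines combine many more signals (entropy checks, classifiers, document fingerprints) than a handful of patterns.

```python
import re

# Illustrative patterns only; replace with your detection engine's rule set.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Email address (possible PII)": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found before the prompt is sent."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

warnings = check_prompt("Debug this call for me, the key is AKIAABCDEFGHIJKLMNOP")
if warnings:
    print("Warn before sending:", ", ".join(warnings))
    # -> Warn before sending: AWS access key
```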
4. Provide Approved AI Alternatives
Employees use Chinese tools because they're free, fast, and capable. Removing them without alternatives drives shadow AI underground.
Provide enterprise coding assistants with credential detection. Offer ChatGPT Enterprise, Claude, or Gemini with clear guidelines. Make approved tools genuinely better.
5. Target Developer Education
Code dominates sensitive exposure. Engineering teams need training on IP risk from foreign-hosted models, credential extraction from prompts, approved tool options, and environment variables over hardcoded secrets.
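The environment-variable point is small but concrete; the variable name below is illustrative:

```python
import os

# Anti-pattern: a hardcoded secret lives in source files, diffs, and any prompt
# that pastes this code into an external AI tool.
# API_KEY = "sk-live-..."  # never do this

# Preferred: load the secret from the environment at runtime, so the code can be
# shared (or pasted) without exposing the credential itself.
API_KEY = os.environ["PAYMENTS_API_KEY"]  # variable name is illustrative
```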
6. Monitor Continuously
The landscape shifts fast. Manus went from Chinese startup to Meta subsidiary in weeks. Track new applications entering your environment, measure exposure trends, assess intervention effectiveness, adjust as needed.
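One lightweight way to watch the trend side of this is a simple period-over-period comparison of exposure counts; the numbers below are placeholders, not real telemetry.

```python
# Month-over-month change in sensitive-data exposures per tool (placeholder counts).
previous = {"DeepSeek": 3_600, "Kimi Moonshot": 2_500, "Manus": 40}
current = {"DeepSeek": 4_000, "Kimi Moonshot": 2_800, "Manus": 310}

for tool in sorted(current, key=lambda t: current[t] - previous.get(t, 0), reverse=True):
    delta = current[tool] - previous.get(tool, 0)
    pct = (delta / previous[tool] * 100) if previous.get(tool) else float("nan")
    print(f"{tool:<14} {delta:+5d} exposures ({pct:+.0f}% vs. last month)")
```

A tool whose absolute numbers are still small but growing several hundred percent month over month is often the one worth intervening on first.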
Key Statistics: China-Based AI Enterprise Risk
Shadow AI Adoption
- 7.95% of employees used a China-based AI tool (30-day analysis)
- Kimi Moonshot leads usage at ~3.5x DeepSeek traffic
- 21 distinct China-based AI applications tracked
Sensitive Data Exposure
- DeepSeek leads exposure despite lower usage
- Code is the #1 exposed data type
- Financial Projections and Legal Discourse rank second and third
Regulatory Action
- 7+ countries banned or restricted DeepSeek for government use
- Multiple US states restricted usage (Texas, New York, Virginia)
- NASA and US Navy issued bans within weeks of launch
Market Context
- 96.88 million monthly active DeepSeek users (April 2025)
- $1 billion+ raised by Moonshot AI at $4 billion valuation
- $2+ billion Meta paid for Manus (December 2025)
Looking Ahead: China AI Governance as Permanent Program
DeepSeek's anniversary marks an inflection point, not an endpoint. Chinese AI development is accelerating. The technical gap continues to narrow. The Manus acquisition shows how fast the landscape shifts.
China-based AI governance must become a permanent program, not a one-time response to headlines.
Success means moving beyond reactive blocking toward proactive, data-aware governance: understanding where sensitive information flows (hint: it's not just DeepSeek), implementing contextual controls that protect without destroying value, and adapting continuously.



