Industry Insights

AI Governance and Control in the Healthcare Industry

September 25, 2025

AI Adoption in the Healthcare Industry

AI in healthcare is booming. Last year, the share of physicians using AI tools rose to 66%, up from 38% in 2023. This year's figure is likely to be even higher.

It’s easy to understand why, given industry pressures such as staffing shortages, administrative complexity, and the drive for greater efficiency and precision.

The Growing Risk of Shadow AI in Healthcare

Despite the clear benefits of AI adoption, security teams are becoming increasingly concerned about the data privacy risks associated with unapproved AI tools. A warning from NHS England demanded that hospitals and GPs cease using these "non-compliant" AI tools, which could breach data protection rules and compromise patient safety.

In most cases, this behavior is driven by a genuine desire to work more efficiently. However, employees adopting off-the-shelf AI tools without official approval or oversight from their organizations (a practice known as Shadow AI) raises real data privacy concerns.

Harmonic Security research found key risks associated with Shadow AI use. Specifically:

  • A staggering 26.3% of all employee generative AI usage still goes through the free, consumer-facing version of ChatGPT.
  • Nearly 8% of employees (7.95%) are using at least one Chinese-based generative AI tool, raising geopolitical and data sovereignty concerns.
  • Nearly 22% of all files uploaded and 4.37% of prompts contain sensitive content, including intellectual property and source code. 

In a healthcare context, this means that Protected Health Information (PHI), proprietary clinical trial data, and other critical information are being fed into systems with unknown security and governance protocols.

Simply blocking access to AI rarely works, either. The productivity gains these tools offer provide too strong an incentive for workers to give them up. Blocking access on the corporate network more often drives employees to personal devices or to circumventing controls.

Protecting Sensitive Healthcare Data

There’s a staggering amount of sensitive data flowing through healthcare systems that needs protection. Of course, protected health information (PHI) is the most obvious category, and HIPAA sets clear requirements for keeping it secure. That means making sure you have a solid Business Associate Agreement (BAA) in place with any vendor that touches PHI, and holding them accountable for safeguarding it.

But it goes far beyond PHI. Clinical trial data, electronic health records (EHRs), and other research datasets can just as easily make their way into AI tools if you don’t have proper guardrails, creating significant privacy, compliance, and reputational risks if mishandled. You also need to look at a vendor’s broader data handling practices, such as how they train their models, what security certifications they maintain, and whether they have clear approaches to monitoring and mitigating bias.
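To make the idea of a guardrail concrete, here is a minimal, purely illustrative sketch of a pre-submission check that flags PHI-like patterns before a prompt leaves the network. The pattern names and regexes are assumptions for illustration only; real PHI detection requires far more than regexes (names, dates, free-text diagnoses, and context), and this is not a description of any particular product.

```python
import re

# Illustrative patterns only -- not a complete or reliable PHI detector.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I),     # medical record number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # patient contact email
}

def flag_phi(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """A prompt is cleared for an external AI tool only if nothing was flagged."""
    return not flag_phi(text)
```

For example, `flag_phi("Patient MRN: 12345678, follow up Tuesday")` would flag the record number, while a prompt containing only de-identified text passes. In practice a policy engine would act on these flags (block, redact, or warn) rather than silently dropping the request.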

Navigating New Regulatory Frameworks

The challenge doesn’t stop with today’s privacy rules. AI regulations are adding a whole new layer of complexity. The EU AI Act and Colorado AI Act both introduce requirements that impact healthcare use cases directly. For example:

  • Biometric categorization now comes with strict obligations, including retention and deletion policies and obtaining explicit consent.

  • Eligibility determination for healthcare services is formally classified as a high-risk use case, meaning systems must be validated for fairness and accessibility.

  • AI-assisted emergency triage must include human oversight to avoid over-reliance on probabilistic outputs for life-or-death decisions.

As AI usage in healthcare grows, so do the data privacy and compliance risks. Organizations that get ahead of this now with clear policies, strong vendor vetting, and proactive governance will be better positioned to safely unlock AI’s potential without putting their data at risk.

AI Usage Control for the Healthcare Industry

Harmonic Security enables you to gain complete visibility and control over AI usage across your enterprise.

  • Discover Shadow AI: Instantly understand what AI is in use, including personal, free, and unsanctioned accounts, and even apps hosted in restricted regions like China.
  • Protect Your Most Sensitive Data: Easily set policies to prevent the exposure of your most critical data types, including PHI, Electronic Health Records (EHR), and Clinical Trial Data.
  • Ensure Compliance: Our platform helps you proactively address the requirements of regulations like the EU AI Act and the Colorado AI Act by providing granular control and audit trails.

By leveraging a platform that understands and enforces your data policies, you can empower your teams to use AI for innovation while confidently protecting sensitive patient information from the start. 

To learn more about Harmonic Security’s solutions for healthcare, get in touch with our team: harmonic.security/get-demo

Request a demo

Team Harmonic