
Harmonic Product Update: September 2025

September 30, 2025

GenAI adoption keeps accelerating, and with it come two consistent asks from security teams. The first is detection functionality that keeps pace with all the new ways people are using AI. The second is reporting that makes it easy to cut through noise and show where AI is being used, which teams are at risk, and how patterns change over time.

This month’s Harmonic Product Update delivers on both. We have added new ways to detect sensitive information, including custom keyword detections and healthcare-specific PHI models, along with expanded reporting features that make insights easier to find and share. Together, these updates make it simpler to protect data and surface the information you need most.

Upgraded Detection Features to Prevent GenAI Data Leaks

Custom Keyword Detections

Most Harmonic detections today are powered by small language models trained to understand context, like whether a nine-digit number is a Social Security number, a phone number or just random numeric input. This context awareness drastically reduces alert noise and false positives, which is something our customers consistently appreciate.

At the same time, we’ve also heard that in some situations accuracy is not about context at all; it is about certainty. There are terms that should never be shared under any condition, like project codenames, customer lists or the name of a confidential initiative.

With keyword detections, you can define those sensitive keywords and apply the same set of outcomes available with our Harmonic default detections. That means you can monitor usage, warn employees in real time, or block sharing altogether if those terms are seen in an AI tool. 

This gives you a precise way to protect information unique to your business and ensure it stays where it belongs. 
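The contrast with context-aware models is easy to see in miniature: exact keyword matching needs no interpretation, just a match and an outcome. The sketch below is purely illustrative; the keywords, outcome names, and function are hypothetical examples of the pattern, not Harmonic's actual implementation or API.

```python
# Hypothetical sketch: exact-match keyword detection with per-keyword
# outcomes (monitor, warn, block). All names here are made up for
# illustration and do not reflect Harmonic's internal design.

KEYWORD_POLICIES = {
    "project aurora": "block",       # confidential initiative codename
    "customer-master-list": "warn",  # internal asset name
    "codename falcon": "monitor",    # track usage only
}

def check_prompt(prompt: str) -> list[tuple[str, str]]:
    """Return (keyword, outcome) pairs for every policy keyword found."""
    text = prompt.lower()
    return [(kw, outcome) for kw, outcome in KEYWORD_POLICIES.items()
            if kw in text]

hits = check_prompt("Summarize the Project Aurora roadmap for me")
# hits == [("project aurora", "block")]
```

Unlike a model-based detection, a match here is deterministic: if the term appears, the configured outcome fires, which is exactly the certainty these terms call for.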

Health Information Detection Models

In healthcare, the upside of GenAI is huge, but so is the risk. We have consistently heard the same question from healthcare organizations: How do we experiment with AI tools without putting PHI at risk?

This month we are introducing three new healthcare detection models designed specifically to safeguard protected health information (PHI):

  • Clinical Trial Data: safeguarding sensitive research records.
  • Electronic Health Records (EHRs): ensuring patient history data stays protected.
  • General PHI: catching personally identifiable health data more broadly.

Built with healthcare teams in mind, these models make it possible to explore the benefits of GenAI while keeping sensitive patient, research and health data protected. Each model is off by default, so you can align activation with your internal compliance process.

For a deeper dive into this space, check out our new blog on AI Governance and Control in the Healthcare Industry.

Enhanced AI Usage Reporting

Expanded Filtering and Saved Views

Every customer uses Harmonic a little differently. Some want to track AI usage by department. Others care about specific tools or risky behaviors. But the common thread is clear: nobody wants to dig through data every time they need answers.

To make this easier, we’ve added expanded filtering and saved views across the platform.

You can now filter detection data by a wide range of attributes, such as application type, alert severity, user group, or time range. And once you’ve set it up the way you like, you can save that view for quick access later. This update helps you get straight to what you care about, without starting from scratch each time.

This is only the beginning of our expanded detections and reporting, and we look forward to sharing more in the future. If you want to see how this would look for your organization, request time with the team here: https://www.harmonic.security/get-demo


Madeline Miller