Introducing RiskRubric: A New Standard for Evaluating AI Model Safety

August 4, 2025

Today, we’re proud to announce the launch of RiskRubric: a public, continuously updated leaderboard for assessing the safety and risk posture of AI models. We believe it will be a valuable tool for the community.

Built in collaboration with the Cloud Security Alliance, Haize Labs, and Noma, RiskRubric is a free community tool designed to bring structure and transparency to a fast-moving, high-stakes space: public LLMs.

As enterprises race to adopt AI, the conversation often centers on tools and prompts. But the underlying models themselves—their training data, safety practices, and red-teaming exposure—are rarely scrutinized with the consistency that sensitive use cases demand. This becomes even more important as organizations explore a wide mix of open-source models, including those from DeepSeek and other frontier labs.

We’ve seen firsthand how difficult it is to understand which models meet basic thresholds for reliability, security, and privacy. And that lack of insight leaves buyers, builders, and regulators at a disadvantage. You can’t enforce policies if you don’t trust the foundation.


The RiskRubric Leaderboard

That’s where RiskRubric comes in.

It’s an open, automated leaderboard that evaluates public models across six core categories: 

  • Transparency
  • Reliability
  • Security
  • Privacy
  • Safety
  • Reputation

Every model receives a report card based on a standardized rubric, drawing from public documentation, metadata, and red-teaming results. No vendor submission required. No spin.
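
To make the report-card idea concrete, here is a minimal sketch of how per-model scores across these six categories could be represented and rolled up. Everything in it, the field names, the 0-100 scale, and the unweighted average, is an illustrative assumption on our part rather than RiskRubric's actual schema.

```python
from dataclasses import dataclass

# Illustrative only: RiskRubric's actual schema and scoring are not
# described in this post; field names and the 0-100 scale are assumptions.
CATEGORIES = (
    "transparency", "reliability", "security",
    "privacy", "safety", "reputation",
)

@dataclass
class ReportCard:
    model: str
    scores: dict[str, int]  # assumed 0-100 per category

    def overall(self) -> float:
        # Unweighted mean across the six categories (our assumption;
        # the real rubric may weight categories differently).
        return sum(self.scores[c] for c in CATEGORIES) / len(CATEGORIES)

card = ReportCard(
    model="example-llm",
    scores={"transparency": 82, "reliability": 74, "security": 68,
            "privacy": 71, "safety": 79, "reputation": 85},
)
print(f"{card.model}: overall {card.overall():.1f}")  # example-llm: overall 76.5
```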

A Community-Driven Effort

We’re especially excited about how community-driven this effort is. Each partner brings a different strength: CSA’s leadership on AI governance, Noma’s red teaming and risk taxonomy, Haize Labs’ infrastructure, and Harmonic’s focus on practical, secure GenAI adoption. Together, we’ve built something that we hope becomes foundational to the AI ecosystem.

RiskRubric isn’t a compliance checkbox; it’s a living tool. It offers mitigation guidance. It’s free. And it’s meant to evolve as the field does.

At Harmonic, we focus on protecting sensitive data flowing into AI and AI-enabled SaaS tools. From the most mature organizations, those building their own models, we often hear questions about how to assess model safety.

Our response? Collaborate with some other awesome AI security companies to provide genuinely useful tools for the community.

Securing GenAI workflows starts with protecting sensitive data, but it doesn’t end there. Trust also depends on understanding the models we rely on. RiskRubric makes that possible, and we’re honored to help lead the charge.

Check out the leaderboard, explore the scores, and let us know what you think.

