Industry Insights

Understand Users to Inform Your AI Usage Policy: Why Security Teams Should Build User Personas

April 15, 2024

There is no doubt that the number of generative AI (GenAI) tools available to organizations already feels endless. From code generation and marketing content creation to chatbots for employee support, the possibilities with GenAI are as impressive as they are frightening.

However, as with all new technologies, using GenAI in a meaningful, productive and secure way is much easier said than done. It is critical for organizations to understand their different types of end users: people with different skills, levels of understanding and willingness to use GenAI tools for good.

I have grappled with challenges like these before, from trying to control “Shadow IT” to rolling out traditional Data Loss Prevention tools. I have always found that writing a well-worded, human-readable policy is the first step in getting things under control.

In this article I am going to detail the types of GenAI user personas and, more importantly, the security risks associated with them. I will also provide some thoughts on what you could include in your organization's AI Use Policies.

Because at the end of the day, SaaS product teams build user personas, and so should your security teams.

The Skeptic

The skeptic is someone who is aware of GenAI but does not fully understand or trust the technology.

Much like what we have seen with the concept of the Metaverse, a skeptic sees the GenAI movement as a fad, a technology that will fade away in time.

That being said, they are not against using GenAI tools. In fact, their skepticism drives them to seek answers and see the benefits for themselves.

With this comes some risk. In the search for answers, they may try as many GenAI tools as they can, dipping a toe in the water and likely feeding company data into each one just to judge whether the tool produces useful output.

Appealing to the Skeptic: It is important to educate these users on how to use GenAI tools safely, and which tools are at their disposal for testing.

A great starting point for writing a GenAI usage policy is to educate users on GenAI at a high level, provide context as to why the policy is needed, and explain how it protects the organization.

The TrailBlazer

A TrailBlazer is the opposite of the Skeptic. They have fully drunk the GenAI Kool-Aid and use as many tools as possible to increase their efficiency (or, dare I say it, to do as little work themselves as possible?!).

Often these are the end users who like to try and test new technologies, are well read on the topic of GenAI and have a strong awareness of the leading tools on the market.

However, with that enthusiasm and excitement comes an enormous amount of tool sprawl. New accounts are created for every tool they want to test, and data is thrown in willy-nilly without a real understanding of the foundation models in the background.

Appealing to the TrailBlazer: I would recommend that these users are given access only to company-approved GenAI tools and, again, plenty of policy, training and guidance on how to use those tools properly.

Your policy should include guidance on where users can find company-approved GenAI tools. A Tech Radar for GenAI tools is a really useful resource for TrailBlazers to explore.

The Unknowing

As with all technology change, there are some people who just don’t follow it. Maybe it’s because they don’t care, or maybe they are simply too busy working to look into the topic.

Unknowing users just don’t have a clue about the GenAI movement. They are unaware of ChatGPT, DALL-E or the other headline grabbers of recent months.

For an organization wanting to embrace GenAI tools and increase adoption, these are tough nuts to crack. They often require more hands-on training or workshops to open their eyes to new possibilities, without scaring them into thinking robots will replace their jobs.

You’d think this would be completely risk free, right?

Well, in my opinion, no. These employees may be stuck using legacy applications or processes that are rife with security vulnerabilities. They would much rather send company data to their personal email accounts to work on while on holiday than ask ChatGPT for a nudge in the right direction to get the job done before they get some sun.

Appealing to the Unknowing: You may wish to include examples of acceptable and successful use of AI tools in your GenAI policy to encourage positive and secure adoption.

The Adopter

Finally, Adopters are the users who have embraced the movement and use GenAI tools effectively. Their knowledge and understanding of the benefits and drawbacks is strong enough to let them make the best choice about when to use GenAI and when to stick with the good old ways.

Organizations should seek to identify these users and enable them as a force for good: evangelists who bring the other personas along with them.

Naturally, this is where we hit the real risks of GenAI adoption, and much of that risk is the unknown. There may be a loss of company IP if code or details of internal processes are passed to third parties. It could also mean that users end up training competitors' models.

As a final consideration for Adopters, there could be a large shift in team budgets as GenAI tools begin to hike their prices and lock premium features behind a paywall.

Appealing to the Adopter: Your policy should support these users by providing guidance on what data can and cannot be used in the variety of GenAI tools available.

It’s all well and good knowing these types of GenAI users exist, but one of the foundational steps in adopting GenAI successfully is to identify and monitor the use of AI tools, and the people using them. That’s where tools like Harmonic Security can help.

Summary

There are a variety of GenAI user types, each with their own security risks, and each requiring a GenAI policy that provides guidance on how to mitigate or control those risks.

Be sure to include:

  • Education and context on what GenAI is and how it can help or hurt the organization;
  • A list of approved GenAI tools, or better still a Tech Radar, to make it easy to understand;
  • Examples of successful usage of GenAI;
  • A clear list of acceptable and unacceptable use of GenAI tools, with a focus on what data can or cannot be used. 

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Ed Merrett