Wolf in Sheep’s Clothing? Security Implications of OpenAI’s GPT Store

January 11, 2024

Alastair Paterson, CEO, Harmonic Security

Yesterday, after many months of delays, the GPT Store was finally released.

This will, undoubtedly, further increase productivity, but there are some frustrations around data privacy and tiered security that leave a bitter taste.

Let’s explore those.

What’s New

First off, let’s be clear about what has been released:

  • New GPT store. OpenAI has finally released the much-discussed GPT store to “help you find useful and popular custom versions of ChatGPT”. If you use the paid tiers of OpenAI, these GPTs are available to you. We still know little about the plan to monetize these GPTs, or about the security vetting process.
  • New “Teams” plan. OpenAI has released a new pricing tier, sitting between their “Plus” plan and “Enterprise”. This plan gives access to the GPT store and offers organizations a measure of control over their data. The true security features, however, are reserved for the Enterprise plan. More on that later.

Analyzing the Top GPTs: A Looming Data Privacy Nightmare?

We’ve come to know and love OpenAI’s friendly interface, and it can be easy to be left with a misplaced sense of safety around these new GPTs. However, even though a GPT is nicely wrapped within OpenAI’s interface, your data can still be sent to any number of spurious third-party websites with unknown security controls.

The screenshot below demonstrates how your data can be sent to external sites.

Analyzing the Doc Maker GPT in OpenAI

Worse still, if you look at some of the “most popular” GPTs, a third of the top productivity apps provide the option to upload files. Where do these files go? What does this mean for data privacy? As the GPT store grows, the answers to these questions become more complex. We still do not know enough about the review process and the security controls they have in place, if any. The privacy policy states that GPTs do not have access to chat history, but little else.
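For context, custom GPTs can call third-party services through “Actions”, which are defined by an OpenAPI schema pointing at an external server. The fragment below is a minimal, hypothetical sketch (the domain, endpoint, and field names are illustrative, not any real GPT’s actual schema) of how a GPT can be configured to POST user-provided content to a server entirely outside OpenAI’s control:

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Hypothetical document service", "version": "1.0.0" },
  "servers": [
    { "url": "https://api.example-docmaker.com" }
  ],
  "paths": {
    "/documents": {
      "post": {
        "operationId": "createDocument",
        "summary": "Forwards the user's content to a third-party server to generate a document",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "content": { "type": "string" }
                }
              }
            }
          }
        },
        "responses": {
          "200": { "description": "URL of the generated document" }
        }
      }
    }
  }
}
```

Once such an action is enabled, anything the user types or uploads in that conversation can be forwarded to the external server, whose data handling is governed by the GPT builder’s own privacy policy, not OpenAI’s.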

Tiered Security…Sigh

It’s all good, though, because you’ve probably got a bunch of security controls, right?

If you want to secure your data from these new GPTs then, of course, you need to be on one of the paid tiers. While annoying, this is understandable from a business perspective. Where it becomes really disappointing, however, is how OpenAI reserves some important security controls for the Enterprise plan. Reserving SSO for the higher tiers, for example, is as unsurprising as it is frustrating.

While the Team plan is billed as an option for businesses, I know many that would want the option to secure their data and protect their accounts from takeover, leaving the Enterprise tier as the only feasible option.

Yay for Less Shadow AI?

Shadow AI is real. In the last year, we've witnessed the creation of over 10,000 AI applications, predominantly based on GPT 3.5 and GPT 4. These tools, while enhancing productivity in specific niches, are thin veneers over ChatGPT with a sprinkling of clever prompt engineering. This trend has inadvertently fueled a rise in shadow IT. Numerous AI applications exhibit questionable privacy policies and security measures, leaving security teams grappling with the ramifications of shadow AI.

The positive news is that the introduction of the GPT store should raise the bar for startup success, potentially reducing the proliferation of dubious tools. OpenAI has said that it will monetize the store and that the most-used apps will receive the most money. Leaving the internet to decide what is popular can have some interesting outcomes, though, so expect some less-than-wholesome GPTs to rise in popularity.

Will this lead to a consolidation towards OpenAI?

While the GPT store raises the bar, users will still favor specialized AI tools that best fit their unique needs, keeping the market diverse for now. In other words, don’t expect a mass consolidation toward the GPT store. As usual, the needs of the user will prevail.

A segment of AI-powered solutions, integrating AI into broader, domain-specific applications, remains more appealing to users. These specialist tools, with their unique expertise and own datasets, offer a compelling alternative that users are reluctant to forgo. This year, we are likely to see companies bring even more powerful tools to market, benefiting from the ability to solve problems from the ground up.

Given that this is likely what users will want, security leaders should balance their policies, ensuring they don't overly restrict access to these specialized AI solutions.


I have no doubt that this launch will provide access to some pretty awesome GPTs, further boosting what we’re able to achieve at work. But this does come with some added data privacy risks that have yet to be addressed.

Unless you’re willing to dig deep into your pockets, of course.  
