Michael Marriott, Product Marketing
Yesterday, Italy’s data protection authority, Garante, issued a notice to OpenAI for potential violations of the EU General Data Protection Regulation (GDPR). This event is not isolated; it reflects an increasing global focus on the regulatory challenges posed by artificial intelligence (AI).
Here, we explore the implications of these developments for technology companies and their use of generative AI tools, against the backdrop of existing data protection laws.
Garante's action, echoing a previous 30-day data processing ban on OpenAI, signals a heightened regulatory interest in AI. TechCrunch highlights concerns rooted in several GDPR articles, ranging from the principles of personal data processing to specific conditions related to children's consent and the necessity of Data Protection Impact Assessments for high-risk processing activities. (We should note that there’s not been confirmation of the allegations, nor a response from OpenAI at the time of writing.)
This scrutiny is not limited to Italy. Poland's Office for Personal Data Protection (UODO) is investigating a complaint about ChatGPT, indicating a broader European concern over AI and privacy.
The regulatory landscape is adapting as we learn more about the risks of AI. In Europe, the imminent EU AI Act, with its risk-based approach, promises to directly impact companies working with Large Language Models (LLMs) and other AI technologies.
Across the Atlantic, the US is also moving forward. The Biden administration's Executive Order and the proposed "AI Bill of Rights" hint at future federal privacy legislation, although its final form remains uncertain.
At regional levels, laws like Illinois’ Biometric Information Privacy Act and Canada’s Directive on Automated Decision-Making are setting precedents for AI usage, particularly concerning consent and automated decision-making in the public sector.
There are plenty of risks associated with AI to unpack, and these laws are trying to balance meaningful data privacy legislation with fostering innovation.
GenAI tools need data. The models underlying these tools need lots of data to train on, which – as we have seen – can lead to compliance headaches.
At the same time, these tools are asking employees to upload documents, spreadsheets, and other files that may easily contain sensitive data that can land companies in hot water. Our own research, for example, found that 40% of AI apps require some sort of data upload.
Because of this, we should note that existing data protection laws like GDPR, CCPA, HIPAA, and various state-specific acts remain crucial for security and compliance teams. In fact, these regulations should remain the immediate focus. (If you want to dig into specifics about GDPR and data privacy, scroll down to the bottom of the page.)
These laws, with their specific provisions on personal data, automated decision-making, and data protection assessments, lay the foundational framework within which AI systems must operate.
While we’re all excitedly following to see what governments do, the industry is not waiting idly. Initiatives like the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a fantastic start
OWASP’s Top 10 for LLM is another great initiative that includes measures for protecting against challenges like prompt injection and data leakage.
Finally, Gartner has been advocating for greater efforts around trust, risk and security management for AI (known as AI TRiSM), for several years. More recently they have made this even more specific towards generative AI to cater to the current needs of security leaders.
Garante's GDPR warning is just one example of the increasing complexity in AI compliance.
While it can be hard to keep updated with these emerging frameworks and understand what this means for us, there are some practical things security leaders can do:.