After more than a year of investigation, the Italian privacy regulator, il Garante per la protezione dei dati personali, issued a €15 million fine against OpenAI for violating privacy rules. The violations include the lack of an appropriate legal basis for collecting and processing the personal data used to train its genAI models, inadequate information to users about the collection and use of their personal data, and the absence of measures to collect children’s data lawfully. The regulator also required OpenAI to run a campaign informing users about how the company uses their data and how the technology works. OpenAI has announced that it will appeal the decision. This action obviously impacts OpenAI and other genAI providers, but the most significant long-term impact will fall on the companies that use genAI models and systems from OpenAI and its competitors, a group that likely includes your company. So here’s what to do about it:
Job #1: Obsess about third-party risk management
Using technology built without due regard for the protection and fair use of personal data raises significant regulatory and ethical questions. It also increases the risk of privacy violations in the information generated by the model itself. Organizations understand the challenge: in Forrester’s surveys, decision-makers consistently list privacy concerns as a top barrier to genAI adoption in their firms.
However, there is more on the horizon: the EU AI Act, the first comprehensive and binding set of rules for governing AI risks, establishes a range of obligations for AI and genAI providers and for the companies that use those technologies. By August 2025, providers of general-purpose AI (GPAI) models and systems must comply with specific requirements, such as sharing with users a list of the sources used to train their models, testing results, and copyright policies, and providing instructions on the correct implementation and expected behaviour of the technology. Users of the technology must vet their third parties carefully and collect all the relevant information and instructions they need to meet their own regulatory requirements. This effort should cover both genAI providers and technology providers that have embedded genAI in their tools. This means: 1) carefully mapping technology providers that leverage genAI (see the inventory sketch below); 2) reviewing contracts to account for the effective use of genAI in the organization; and 3) designing a multi-faceted third-party risk management process that captures critical aspects of compliance and risk management, including technical controls.
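To make the mapping exercise in step 1 concrete, here is a minimal sketch in Python of what a third-party genAI inventory record could capture. The field names (e.g., `training_data_sources_disclosed`, `copyright_policy_url`) are illustrative assumptions, not a prescribed schema; the point is to track, per vendor, the documentation you expect GPAI providers to share under the EU AI Act and the status of your own contract review and controls.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GenAIVendorRecord:
    """Illustrative inventory entry for a third party that provides or embeds genAI.

    Field names are hypothetical; adapt them to your own TPRM tooling.
    """
    vendor_name: str
    product: str
    embeds_genai: bool                       # genAI embedded in a broader tool
    gpai_provider: bool                      # vendor is a GPAI model/system provider
    # Documentation the EU AI Act expects GPAI providers to share with users:
    training_data_sources_disclosed: bool = False
    testing_results_received: bool = False
    copyright_policy_url: Optional[str] = None
    usage_instructions_received: bool = False
    # Your own controls:
    contract_reviewed_for_genai: bool = False
    technical_controls: List[str] = field(default_factory=list)

    def open_gaps(self) -> List[str]:
        """Return the compliance artifacts still missing for this vendor."""
        gaps = []
        if self.gpai_provider and not self.training_data_sources_disclosed:
            gaps.append("training data sources not disclosed")
        if self.gpai_provider and not self.testing_results_received:
            gaps.append("testing results not received")
        if self.gpai_provider and self.copyright_policy_url is None:
            gaps.append("copyright policy missing")
        if not self.usage_instructions_received:
            gaps.append("usage/implementation instructions missing")
        if not self.contract_reviewed_for_genai:
            gaps.append("contract not yet reviewed for genAI use")
        return gaps


# Example: a hypothetical vendor that embeds genAI in a SaaS support tool.
vendor = GenAIVendorRecord(
    vendor_name="ExampleSaaS",
    product="Support copilot",
    embeds_genai=True,
    gpai_provider=False,
    usage_instructions_received=True,
)
print(vendor.open_gaps())
```

Running the example flags the contract review as an open gap, which is the kind of output a multi-faceted TPRM process can feed into its remediation workflow.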
Job #2: Prepare for deeper privacy oversight
From a privacy perspective, companies using genAI models and systems must prepare to answer some difficult questions about the use of personal data in genAI models, questions that go well beyond training data. Regulators might soon ask about companies’ ability to respect users’ privacy rights, such as data deletion (aka “the right to be forgotten”), data access and rectification, consent, and transparency requirements, as well as other key privacy principles such as data minimization and purpose limitation. Regulators recommend that companies use anonymization and privacy-preserving technologies, like synthetic data, when training and fine-tuning models; a simplified redaction sketch follows this paragraph. Firms must also: 1) evolve data protection impact assessments to address both traditional and emerging AI privacy risks; 2) ensure they understand and govern structured and unstructured data accurately and efficiently so they can enforce data subject rights (among other things) at all stages of model development and deployment; and 3) carefully assess the legal basis for using customers’ and employees’ personal data in their genAI projects and update their consent and transparency notices accordingly.
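As a simplified illustration of the anonymization point, the sketch below redacts a few obvious identifiers from text before it is added to a fine-tuning dataset. It assumes a regex-based approach and is not a compliance control on its own: real deployments would rely on dedicated PII detection and de-identification tooling (names and other quasi-identifiers, for instance, need NER-based detection), and the patterns shown here are illustrative only.

```python
import re

# Illustrative-only patterns; real PII detection needs dedicated tooling and review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    is added to a training or fine-tuning dataset."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


record = "Contact Maria at maria.rossi@example.com or +39 02 1234 5678."
print(redact(record))
# -> "Contact Maria at [EMAIL] or [PHONE]."
# Note that the name "Maria" is untouched: quasi-identifiers need richer tooling.
```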
Forrester can help: Here’s what to read, and if you have questions, let’s talk!
If you have questions about this topic, the EU AI Act, or the governance of personal data in the context of your AI and genAI projects, read my research (How To Approach The EU AI Act and A Privacy Primer On Generative AI Governance) and schedule a guidance session with me. I would love to talk to you.