Key Points
- EU regulators are scrutinizing Google’s AI model, PaLM2, for potential GDPR violations related to data privacy.
- The inquiry aims to determine whether PaLM2’s data processing could significantly threaten individual rights and freedoms within the EU.
- The investigation is part of broader efforts by EU regulators to ensure AI systems adhere to strict data privacy standards.
- Ireland’s Data Protection Commission has previously taken actions against companies like X and Meta for AI-related data privacy concerns.
European Union regulators announced on Thursday that they are investigating Google’s artificial intelligence model, PaLM2, for potential violations of the bloc’s stringent data privacy regulations. The inquiry is being led by Ireland’s Data Protection Commission, which serves as Google’s primary regulator in the EU because the company’s European headquarters is located in Dublin.
The commission is examining whether Google has properly assessed if PaLM2’s data processing activities pose a “high risk to the rights and freedoms of individuals” within the EU. This inquiry is part of a broader effort by regulators across the 27-nation bloc to ensure that AI systems, including large language models like PaLM2, comply with the General Data Protection Regulation (GDPR).
Large language models, such as PaLM2, are AI systems trained on vast amounts of data; they underpin generative AI services such as email summarization. Concerns about how these models handle personal data are prompting regulatory scrutiny. Google did not immediately respond to requests for comment regarding the investigation.
The Irish Data Protection Commission has previously taken action against other tech giants over AI-related data privacy issues. Earlier this month, it secured an agreement from Elon Musk's social media platform X to permanently cease processing user data for its AI chatbot Grok. That agreement followed a legal battle in which the commission sought a High Court order to restrict or prohibit X from processing personal data found in its users' public posts.
Similarly, Meta Platforms recently paused its plans to use content posted by European users to train the latest version of its large language model, following pressure from Irish regulators. The decision came after "intensive engagement" between Meta and the watchdog, underscoring the growing regulatory focus on data privacy in AI systems.
In another instance, Italy’s data privacy regulator temporarily banned ChatGPT last year due to data privacy breaches, demanding that OpenAI, the chatbot’s maker, address several concerns to lift the ban. These actions underscore the increasing vigilance of EU regulators in monitoring AI compliance with data privacy laws.