EU Investigates Google’s AI Model PaLM2 Over Data Privacy Concerns

Key Points

  • EU regulators are scrutinizing Google’s AI model, PaLM2, for potential GDPR violations related to data privacy.
  • The inquiry aims to determine whether PaLM2’s data processing could significantly threaten individual rights and freedoms within the EU.
  • The investigation is part of broader efforts by EU regulators to ensure AI systems adhere to strict data privacy standards.
  • Ireland’s Data Protection Commission has previously taken actions against companies like X and Meta for AI-related data privacy concerns.

European Union regulators announced on Thursday that they are investigating Google’s artificial intelligence model, PaLM2, for potential violations of the bloc’s stringent data privacy regulations. The inquiry is being led by Ireland’s Data Protection Commission, which serves as Google’s primary regulator in the EU because the company’s European headquarters is located in Dublin.

The commission is examining whether Google has properly assessed if PaLM2’s data processing activities pose a “high risk to the rights and freedoms of individuals” within the EU. This inquiry is part of a broader effort by regulators across the 27-nation bloc to ensure that AI systems, including large language models like PaLM2, comply with the General Data Protection Regulation (GDPR).

Large language models, such as PaLM2, are AI systems trained on vast amounts of data and serve as the foundation for generative AI services, enabling capabilities like email summarization. Concerns about how these models handle personal data are prompting regulatory scrutiny. Google did not immediately respond to requests for comment regarding the investigation.

The Irish Data Protection Commission has previously taken action against other tech giants over data privacy issues related to AI. Earlier this month, it secured an agreement from Elon Musk’s social media platform X to permanently cease processing user data for its AI chatbot, Grok. This action followed a legal battle in which the commission sought a High Court order to restrict or prohibit X from processing personal data found in its users’ public posts.

Similarly, Meta Platforms recently paused its plans to use content posted by European users to train the latest version of its large language model, following pressure from Irish regulators. The decision came after “intensive engagement” between Meta and the watchdog, highlighting the growing regulatory focus on data privacy within AI systems.

In another instance, Italy’s data privacy regulator temporarily banned ChatGPT last year due to data privacy breaches, demanding that OpenAI, the chatbot’s maker, address several concerns to lift the ban. These actions underscore the increasing vigilance of EU regulators in monitoring AI compliance with data privacy laws.
