Google’s AI Model Under European Investigation



Written by: Kiara Fabbri, Multimedia Journalist

Fact-Checked by: Justyn Newman, Head Content Manager

Ireland's Data Protection Commission (DPC) announced today an investigation into Google to determine whether the company complied with EU data protection law while developing its AI model, Pathways Language Model 2 (PaLM 2).

PaLM 2 is a large language model used in various AI services, including email summarization. Google has stated it will cooperate with the inquiry, as noted by the AP.

The investigation will assess whether Google should have performed a Data Protection Impact Assessment (DPIA) to evaluate the potential risks to individuals’ rights and freedoms from its AI technologies.

This investigation is part of the DPC’s broader efforts to ensure compliance with data protection rules in the AI sector across Europe. Cross-border processing, which involves handling data across multiple EU countries or affecting individuals in several nations, is under particular scrutiny.

Generative AI tools, known for producing convincing yet false information and for accessing personal data, pose significant legal risks, as noted by TechCrunch. The DPC is responsible for ensuring Google's compliance with the General Data Protection Regulation (GDPR).

To that end, the DPC can impose fines of up to 4% of the global annual revenue of Google's parent company, Alphabet, for violations, as reported by TechCrunch.

Google has developed a range of generative AI tools, including its Gemini series of large language models (formerly Bard), used for applications such as enhancing web search through AI chatbots, notes TechCrunch.

Central to these tools is Google's PaLM 2, a foundational LLM launched at last year's I/O developer conference, said TechCrunch.

Last month, Elon Musk's X also faced scrutiny from European regulators over its use of user data for AI training. The DPC launched an investigation after receiving complaints that X was feeding user data into its Grok AI technology without obtaining proper consent. While X has agreed to limit its data processing, it has not faced any sanctions.

This investigation is part of the DPC’s broader efforts to regulate the use of personal data in AI development across the European Union. The EU’s recent adoption of the Artificial Intelligence Act marks a significant step towards establishing a regulatory framework for AI technologies within the bloc.
