European Union Privacy Regulator Investigates Google’s Artificial Intelligence Model


Ireland’s Data Protection Commission expresses concerns about Google’s Pathways Language Model 2.

The EU’s privacy watchdog is investigating how Google uses personal data in developing one of its artificial intelligence models.

Ireland’s Data Protection Commission (DPC), the regulatory body for companies based in Ireland within the EU, announced on Sept. 12 that it was looking into compliance with GDPR rules by Google’s Pathways Language Model 2, also known as PaLM2.

The regulator said it is working with its counterparts across the European Economic Area to oversee how the personal data of EU users is handled in the development of AI models and systems.

The examination will focus on whether Google has assessed whether PaLM2’s data processing poses a “high risk to the rights and freedoms of individuals” within the EU, the commission said.

According to Google, PaLM2 is an advanced language model with enhanced multilingual, reasoning, and coding capabilities that expands on the company’s preceding research in machine learning and AI.

This model, which was initially trained on a diverse range of datasets including webpages, source code, and other sources, can perform tasks like language translation, mathematical calculations, answering queries, and coding, among other functions.

Earlier this year, the tech company stated that PaLM2 would be integrated into over 25 new products and features, including email and Google Docs services, as part of the ongoing trend to embrace and expand the use of AI.

Other Firms Halt Plans to Train AI on User Data

The Irish watchdog has previously raised concerns with other major tech companies about the use of EU users’ data to train generative AI models.

Earlier this month, the DPC welcomed the resolution of a case involving Elon Musk’s social media platform X, in which the company agreed to permanently stop using European users’ data to train its AI chatbot, Grok.

In a statement on Sept. 4, the DPC said that before reaching the agreement with X, it had significant concerns that the processing of EU users’ personal data for Grok training posed risks to individuals’ rights and freedoms.

It was the first time the DPC had taken such action, using its powers under Section 134 of the Data Protection Act 2018.

In the same announcement, the DPC said it is continuing to address issues arising from the use of personal data in AI models across the industry, and that it has requested an opinion from the European Data Protection Board (EDPB) to initiate discussion and bring clarity to this complex area.

The EDPB has been asked to consider, among other matters, the extent to which personal data, both first-party and third-party, is processed at various stages of the training and operation of an AI model.

In a separate development in June, the DPC said Meta Platforms had also paused plans to use content from European users to train the latest version of its large language model, following discussions between the regulator and the social media firm.

The Epoch Times has reached out to representatives from Google and the Data Protection Commission for additional comments.

Reuters and the Associated Press contributed to this report.
