
Search engines instructed to block AI-generated child sexual abuse material


Providers are now required to review and enhance their artificial intelligence capabilities to ensure that restricted material does not appear in search results.

In Australia, search engine providers such as Google, DuckDuckGo, and Bing must prevent users from accessing deepfake child sexual abuse videos and images through embedded artificial intelligence tools, following the implementation of an enforceable online safety code.

Introduced by the eSafety Commissioner, the Internet Search Engine Services Code sets out minimum compliance measures for search engine providers.

Search engine providers are now responsible for preventing users from finding child exploitation, pro-terror, and extreme crime and violence material by ensuring it does not appear in search results.

Under the code, referred to as the “search code,” providers must assess and enhance their artificial intelligence functionality to restrict the display of such material, similar to the measures required for algorithmic optimization.

Since September 2023, the eSafety Commissioner has been in discussions with providers about implementing new measures to limit the functionality of generative AI applications built on language and multimodal foundation models.

Five other online safety codes have been in force since last year, covering social media services, app distribution services, hosting services, internet carriage services, and equipment suppliers and device manufacturers. The commissioner is currently drafting codes for online messaging services and photo storage services.


Australia’s eSafety Commissioner Julie Inman Grant (AAP Image/Mick Tsikas)

Code Offers Protection Against the Worst Offenders

eSafety Commissioner Julie Inman Grant said the search engine code aims to prevent the “worst of the worst” offenders from accessing disturbing images or sharing them with international pedophile networks and terrorist groups.

Ms. Inman Grant said an earlier 2023 version of the code did not establish sufficiently robust community safeguards, and it had to be updated to address Google and Bing's integration of generative AI into their search engine services.

Companies whose AI-enhanced search platforms breach the code face penalties of up to $780,000 per day.

Speaking to AAP, University of NSW AI Institute chief scientist Toby Walsh expressed concern that AI tools could be weaponized to generate offensive and illegal content, creating challenges for law enforcement agencies combating cybercrime.

He noted that while the code may not completely solve the problem, it is a crucial starting point.


