There are countless sexual AI apps accessible on smartphones, warns eSafety Commissioner


The eSafety Commissioner highlighted how free, easy-to-use apps enable perpetrators to inflict immense harm on their victims.

The prevalence of sexual AI (artificial intelligence) apps on smartphones has made such offending easier to commit, a parliamentary committee has been told.

During a recent committee hearing on a new sexual deepfakes bill, eSafety Commissioner Julie Inman Grant pointed to the ready availability of malicious apps of this kind on app stores.

She specifically cited apps that openly advertise the ability to alter images of girls using AI.

“Shockingly, thousands of open-source AI apps like these have proliferated online, offering free and easy accessibility to smartphone users,” Ms. Inman Grant informed the Legal and Constitutional Affairs Legislation Committee.

“This makes it effortless and cost-free for perpetrators, with victims bearing immeasurable and lasting devastation.”

“Given their primary function to sexualize, humiliate, demoralize, denigrate, and create child sexual abuse material of girls, it begs the question of why these apps are permitted to exist at all.”

Concerns regarding Open-Source AI Apps

eSafety raised concerns that open-source sexual AI apps were using elaborate monetisation strategies and gaining traction on mainstream social media platforms, particularly among younger users.

Citing a recent study, Ms. Inman Grant noted a 2,408 percent surge in referral links to non-consensual pornographic deepfake websites on Reddit and X (formerly Twitter) in 2023 alone.

“We are also wary of the implications of multimodal generative AI, such as the creation of hyper-realistic synthetic child sexual abuse material through text prompt to video, as well as precise voice cloning and manipulated chatbots that could enhance grooming, sextortion, and other forms of sexual exploitation of minors on a large scale,” she added.

To address the risks posed by such apps, the Commissioner said her agency had put mandatory standards before parliament to strengthen regulation of the issue.

She also argued that technology companies should bear responsibility for minimising risks on their platforms.

“It will be the responsibility of AI companies to take further steps to mitigate the risk that their platforms are utilized to produce highly harmful content, such as synthetic child sexual abuse material and deepfaked image-based abuse involving minors,” Ms. Inman Grant emphasized.

“These stringent safety standards will also be applicable to platform libraries hosting and distributing these apps.

“These companies must enforce robust terms of service and implement clear reporting mechanisms to ensure that the apps they host are not exploited for abusing, humiliating, and denigrating children.”

Challenges in Law Enforcement Efforts

While authorities are working to address AI-related risks, Ms. Inman Grant noted that technological advancements posed significant challenges for law enforcement agencies.

“It should be noted that deepfake detection tools considerably lag behind the freely available tools created to perpetuate deepfakes,” she remarked.

“These deepfakes are becoming so realistic that distinguishing them with the naked eye is increasingly difficult.”

The Commissioner further stated that AI-generated deepfakes were overwhelming investigators and support hotlines, as such material is produced and shared faster than it can be reported, triaged, and analysed.

eSafety’s Informal Approach to Sexual Abuse Material

Ms. Inman Grant said that despite the availability of formal channels, eSafety often takes an informal approach when addressing sexual abuse material.

Under current laws, the online content regulator can informally request that online service providers remove illegal or restricted content.

“We have achieved a 90 percent success rate in removing image-based abuse content, mainly from overseas websites,” she said.

“We opt for these informal pathways because of their speed; the quicker we remove harmful content, the more relief we provide to the victim.”

Since the enactment of the Online Safety Act 2021, eSafety has issued 10 formal warnings, 13 remedial directions, and 34 removal notices to entities within Australia.
