Australia Urged by Experts to Address AI Risks ‘Without Delay’


Ahead of the federal election, a group of AI experts has urged the next Australian Parliament to establish an AI safety institute.

More than 100 AI experts and concerned individuals have signed an open letter calling for the next Australian government to create an AI safety institute (AISI) to address AI risks before it is too late.

The letter has been submitted to political parties in preparation for the May federal election.

AI experts are pushing politicians to fulfill the commitments made by the Australian government at the Seoul AI Summit in May 2024.

According to the Seoul Declaration, participant countries pledged to establish or expand AI safety institutes, research programs, and other relevant institutions, including supervisory bodies.

Despite this, Australia is the sole signatory that has not yet established an AISI.

Greg Sadler, CEO of Good Ancestors Policy and coordinator of Australians for AI Safety, highlighted the concerning precedent set by Australia making specific commitments without following through.

Toby Ord, a senior researcher at Oxford University and a board member of the Centre for the Governance of AI, warned that Australia risks losing control over AI systems.

He emphasized that establishing an Australian AI Safety Institute would enable the country to play a significant role globally in guiding this crucial technology.

The letter also pointed out the lack of funding dedicated to understanding and addressing AI risks, despite significant investments made to enhance AI capabilities.

The letter emphasized the need for independent technical expertise within the government to engage in global AI risk research and to ensure that regulations and policies align with Australia’s requirements.

Mandatory AI Guardrails

The letter also called for the introduction of an “AI Act” that mandates AI developers and deployers in the country to incorporate mandatory guardrails into their products.

While the government has consulted with the sector on safe and responsible AI and received advice on implementing mandatory guardrails for high-risk systems, experts believe that the next parliament must take action now.

Paul Salmon, a professor at the University of the Sunshine Coast and founder of the Centre for Human Factors and Sociotechnical Systems, expressed support for the AI Act, stating that it would effectively manage AI risks.

He stated, “We are running out of time to ensure that all AI technologies are safe, ethical, and beneficial to humanity.”

Yanni Kyriacos, Director of AI Safety Australia and New Zealand, highlighted the absence of a legal framework in the country to assure Australians about the safety of adopting AI.

He added, “Establishing robust assurance builds trust. While we are excited about the potential of AI, not enough effort is being made to address real safety concerns, which is why Australians remain skeptical about embracing AI.”

A 2024 University of Queensland study found that participants’ top priority was the government’s role in preventing dangerous and catastrophic outcomes from AI, with eight out of ten respondents believing that preventing AI-induced extinction should be a global priority.

The survey respondents’ top concerns included AI acting against human interests (misalignment), misuse of AI by malicious actors, and job displacement caused by AI.

Furthermore, nine out of ten respondents expressed the need for the government to establish a new regulatory body for AI.
