Critics caution that new bill will lead to two levels of scrutiny for ‘misinformation’
An Australian Senate committee is currently investigating the potential impacts of the proposed Misinformation Bill.
Debate surrounding Australia’s proposed Misinformation and Disinformation Bill intensified during a Senate Committee hearing on Oct. 11, where various stakeholders expressed concerns about transparency, fairness, and the possibility of overreach.
The bill, aimed at combating harmful misinformation on digital platforms, has raised questions about censorship, the role of fact-checkers, and whether it unfairly targets certain sectors of the media.
Critics, including community broadcasters and the Institute of Public Affairs (IPA), argue that the bill could result in a two-tiered system where media outlets receive lighter scrutiny compared to the general public.
The IPA presented findings from a survey conducted from May 17 to 19, where 49 percent of respondents supported increased debate or freedom of speech, while only 39 percent backed government or social media company censorship.
Interestingly, younger Australians, aged 18 to 24, displayed the strongest support for free speech, with only 8 percent favoring government censorship and 17 percent supporting censorship by social media companies.
A significant 59 percent of this age group believed that more debate and free speech were effective ways to deal with misinformation.
According to the IPA, the exemptions in the proposed legislation would establish a two-tiered system in which professional media organizations face fewer consequences for spreading false information than ordinary citizens.
The IPA, represented by Deputy Executive Director Daniel Wild, argued during the inquiry that exemptions for certain media outlets in the bill create an imbalance.
Larina Alick, a lawyer for Nine Entertainment, also argued that media outlets should not be held to the same level of scrutiny as social media platforms, pointing out that professionally produced content adheres to editorial standards, unlike the often-unchecked information found on user-generated platforms.
Critics also questioned the bill’s reliance on industry-drafted codes, expressing concerns that such measures could undermine public scrutiny and transparency.
Under the bill, the Australian Communications and Media Authority (ACMA) would not directly regulate content posted on social media platforms; instead, the companies would establish industry codes of conduct that they must adhere to.
If ACMA is dissatisfied with how a code is being enforced, it can create its own. ACMA already possesses the power, under the Broadcasting Services Act, to investigate individuals suspected of spreading misinformation.
Fact-Checking and Potential Biases
One contentious aspect of the bill is its reliance on fact-checking organizations, which critics argue are biased and lack transparency.
The IPA presented an analysis of 187 fact-checking articles related to the recent Voice referendum, revealing that 91 percent of the scrutinized claims targeted opponents of the proposal, while only 9 percent focused on its proponents.
Wild stated that these fact-checking organizations have shown bias, cautioning that empowering such entities under the new law could further dampen public debate.
‘Censor First, Ask Questions Later’
One key concern raised by multiple stakeholders was the bill’s provision for heavy penalties, which could lead social media companies to over-censor in order to avoid violating the law.
Alice Dawkins, executive director of Reset Tech Australia, noted that tech companies like Meta have become less transparent about their approach to combating misinformation, complicating regulatory efforts.
Dawkins stressed the importance of platforms providing more data to regulators and researchers to properly monitor misinformation trends.
Criticism has also been directed at the bill’s lack of clarity regarding what constitutes “harmful content.”
Regulating Non-Human Actors and Bots
The challenge of regulating bots and other non-human actors in the digital space remains unresolved.
Bots play a significant role in the spread of misinformation, and while the bill aims to address this issue, questions remain about ACMA and digital platforms’ capacity to effectively regulate such activity.
Dawkins expressed concerns about ACMA’s ability to collect and analyze data on bot activity from tech companies, prompting questions about the necessity of stricter regulations.
Fears of Regulatory Burden for Community Media
Another point of contention raised during the inquiry was the potential for double regulation of community broadcasters.
Reece Kinnane, head of advocacy for the Community Broadcasting Association of Australia (CBAA), highlighted the lack of clear exemptions for community broadcasters under the Broadcasting Services Act (BSA).
Kinnane argued that community broadcasters, already regulated by ACMA, should not face additional scrutiny. He emphasized the crucial role played by over 500 community radio and television stations in delivering local news, particularly for Indigenous, ethnic, and regional communities.
Despite being overseen by ACMA, the bill could subject these broadcasters to penalties for spreading misinformation, which Kinnane found concerning. He urged the Senate to ensure that the bill does not disadvantage community broadcasters compared to larger media outlets.