
Less Than 50% of Companies Feel Confident in AI Data Quality: Findings From Google Study


Organizations are uncovering new vulnerabilities and weaknesses, particularly related to the quality of their data.

Despite growing interest in artificial intelligence (AI) in Australia, organizations are becoming increasingly concerned about the technology’s weaknesses, particularly around data accuracy.

A recent report on AI trends, released by Google on July 23, surveyed hundreds of business and IT leaders about their objectives and strategies for using generative AI.

The findings showed that fewer than half of respondents (44 percent) are fully confident in their organization’s data quality, while another 11 percent reported even lower confidence.

Furthermore, just over half of respondents (54 percent) rate their organizations as only somewhat mature in data governance, while just 27 percent regard their organizations as extremely or very mature in this area.

Meanwhile, more than two-thirds (69 percent) of employees admitted to disregarding their organization’s cybersecurity guidelines in the past year.

These concerns come even as search interest in AI peaked in May, rising 20 percent in the April–June period compared with the first quarter of the year.

“This explosion of new technology has its drawbacks, too,” the report observed, noting that organizations are uncovering new vulnerabilities and weaknesses, particularly in the quality of their data.

The report stressed that simply applying large language models (LLMs) to data is “not enough”; the models must be “grounded in good quality enterprise data or risk hallucinations.”

LLMs, the models behind AI chatbots, are machine learning systems that understand and generate human language after being trained on vast amounts of text data.

AI hallucinations occur when LLMs generate incorrect or misleading information but present it as factual.

‘Full of Hallucinations’

American AI expert Susan Aaronson has raised similar concerns, stating that datasets produced by AI are usually inaccurate.

Speaking at an event hosted by the United States Studies Centre on July 9, Ms. Aaronson, a research professor of international affairs at George Washington University, questioned the benefits of AI, describing it as “full of hallucinations.”

“It is a risk-based system,” she explained. “There is no federal law [in the U.S.] saying that AI can be misused. People will misuse it.”

She pointed to the childcare benefits scandal in the Netherlands as an example: a self-learning algorithm used by the tax authority wrongly accused around 26,000 parents of fraudulent benefit claims, leading to financial hardship and even suicides.

The scandal forced the Dutch government to resign in 2021.

A recent Australian Senate inquiry echoed these concerns, hearing calls for guidelines and limits on AI use from groups including media outlets, voice actors, and lawyers.

Adobe Asia Pacific’s public sector strategy director, John Mackenney, highlighted concerns around the misappropriation of image, voice, and style, and stressed the need for trustworthy AI models.

The inquiry is expected to hand down its findings in September, while Australia’s national AI expert advisory group is weighing regulations for high-risk AI deployments.


