World News

Government Report Identifies AI Threats Such as ‘Data Poisoning’ and ‘Manipulation Attacks’


The Australian Signals Directorate has articulated these threats in their new AI guidelines alongside international partners, including the US.

An artificial intelligence (AI) report released by the Australian Signals Directorate warns that the technology can “intentionally  or inadvertently cause harm.”

The publication, produced by the Australian Cyber Security Centre with international partners, including the United States, warned that AI presented both opportunities and threats.

Threats included data poisoning of an AI model, input manipulation attacks, generative AI hallucinations, privacy and intellectual property concerns, and model stealing attacks.

The report noted government, academia, and industry have a role to play managing AI technology, including via regulation and governance.

“While AI has the potential to increase efficiency and lower costs, it can also intentionally or inadvertently cause harm,” the report states (pdf).

The threats were not outlined to deter from AI use, but to help AI stakeholders engage with the technology securely, it said.

Related Stories

Artificial Intelligence Could Enhance Your Travel Experience

Supreme Court’s John Roberts Urges ‘Caution’ on Using Artificial Intelligence

Describing data poisoning, the publication highlighted this tactic, which involves manipulating an AI’s training data to teach the model “incorrect patterns.”

This can lead to the AI model “misclassifying data” or producing “biased, inaccurate or malicious” outputs.

“Any organisational function that relies on the integrity of the AI system’s outputs could be negatively impacted by data poisoning,” the publication states.

“An AI model’s training data could be manipulated by inserting new data or modifying existing data; or the training data could be taken from a source that was poisoned to begin with. Data poisoning may also occur in the model’s fine-tuning process.”

Manipulation attacks like prompt injection can also be a threat, the report highlighted. This involves malicious instructions or hidden commands being implemented into an AI system.

“Prompt injection can allow a malicious actor to hijack the AI model’s output and jailbreak the AI system. In doing so, the malicious actor can evade content filters and other safeguards restricting the AI system’s functionality,” the report noted.

In addition, the study highlighted that generative AI systems can hallucinate. This occurs when a generative AI, such as a chatbot, processes incomplete or incorrect patterns and generates completely false information.

“Organisational functions that rely on the accuracy of generative AI outputs could be negatively impacted by hallucinations, unless appropriate mitigations are implemented,” the authors noted.

Organisations also needed to be careful about the information they shared with generative AI systems due to privacy and intellectual property concerns.

Information provided to AI systems could be incorporated into the system’s training data, influencing outputs to prompts from outside the organisation, the report explained.

Finally, the publication warned of the risk of model stealing attacks, where a malicious actor provided inputs to an AI system and used the outputs to create a replica.

The authors note model stealing is a “serious intellectual property concern.”

“For example, consider an insurance company that has developed an AI model to provide customers with insurance quotes,” the report said.

“If a competitor were to query this model to the extent that it could create a replica of it, it could benefit from the investment that went into creating the model, without sharing in its development costs.”

International collaborators in the report included the United States, United Kingdom, Canada, New Zealand, Germany, Israel, Japan, Norway, Singapore, and Sweden.

In the United States, the FBI, National Security Agency, and Cybersecurity and Infrastructure Security Agency collaborated with authors on the report.

The study encouraged organisations to evaluate AI’s benefits and risks and consider cybersecurity implications of the technology.

Organisations were encouraged to consider cybersecurity frameworks, privacy and data protection obligations, privileged access, multi-authentication, backups of the AI system, supply chains of AI systems, health checks of the AI system and staff interaction.

People walk past an AI sign at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (The Canadian Press/Ryan Remiorz)
People walk past an AI sign at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (The Canadian Press/Ryan Remiorz)

Pope Francis Joins Calls for Regulation

Meanwhile, Pope Francis has warned of the dangers of AI after he was a a victim of a “deepfake photo.” Speaking at the 58th World Communications Day, the pope called for more regulation of the technology including an international treaty.

He raised concerns about the creation of deepfake images and fake audio messages using AI technology.

“The development of systems of artificial intelligence, to which I devoted my recent message for the World Day of Peace, is radically affecting the world of information and communication, and through it, certain foundations of life in society,” the pope said.

“We need but think of the long-standing problem of disinformation in the form of fake news which today can employ deepfakes, namely the creation and diffusion of images that appear perfectly plausible but false (I too have been an object of this), or of audio messages that use a person’s voice to say things which that person never said.”

The pope said like every other product of human intelligence and skill, the “algorithms are not neutral.” He called on the international community to adopt a “binding” treaty that regulates AI.

“I once more appeal to the international community to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms,” he said.



Source link

TruthUSA

I'm TruthUSA, the author behind TruthUSA News Hub located at https://truthusa.us/. With our One Story at a Time," my aim is to provide you with unbiased and comprehensive news coverage. I dive deep into the latest happenings in the US and global events, and bring you objective stories sourced from reputable sources. My goal is to keep you informed and enlightened, ensuring you have access to the truth. Stay tuned to TruthUSA News Hub to discover the reality behind the headlines and gain a well-rounded perspective on the world.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.