Experts Sound the Alarm on Cyberattacks That Can ‘Poison’ AI Systems

A recent study conducted by computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators has exposed the vulnerability of artificial intelligence (AI) and machine learning (ML) systems to deliberate manipulation, commonly referred to as “poisoning.”

The findings reveal that these systems can be intentionally misled, posing significant challenges to their developers, who currently lack foolproof defense mechanisms.

Poisoning AI

The study, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” is part of NIST’s broader initiative to support the development of reliable AI. The goal is to assist AI developers and users in understanding potential attacks and adopting effective mitigation strategies. 

It emphasizes that while certain defense mechanisms are available, none offers an absolute guarantee of risk mitigation. Apostol Vassilev, a computer scientist at NIST and one of the publication’s authors, highlights the importance of addressing the full range of attack techniques and methodologies applicable to all types of AI systems.

The study encourages the community to innovate and develop more robust defenses against potential threats.

The integration of AI systems into various aspects of modern society, such as autonomous vehicles, medical diagnoses, and customer interactions through online chatbots, has become commonplace.

These systems rely on extensive datasets for training, exposing them to diverse scenarios and enabling them to predict responses in specific situations. However, a major challenge arises from the lack of trustworthiness in the data itself, which may be derived from websites and public interactions, according to the research team.  

Bad actors can manipulate this data during an AI system’s training phase, potentially leading the system to exhibit undesirable behaviors. For instance, chatbots may learn to respond with offensive language when prompted with carefully crafted malicious inputs.

Attacks on AI

The study categorizes four major types of attacks on AI systems: evasion, poisoning, privacy, and abuse attacks. The team observes that evasion attacks seek to modify inputs after the deployment of an AI system, thereby influencing its response.

Poisoning attacks, on the other hand, occur during the training phase by introducing corrupted data that alters the behavior of the AI. Privacy attacks aim to extract sensitive information about the AI or its training data, while abuse attacks involve injecting incorrect information from compromised sources to deceive the AI.
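To make the evasion idea concrete, here is a minimal, hypothetical sketch, not taken from the NIST publication, that perturbs an input to an already-trained model in the spirit of the well-known fast gradient sign method. The toy dataset, the scikit-learn logistic regression model, and the epsilon value are all illustrative assumptions chosen for clarity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative setup: a toy dataset and a simple "deployed" model.
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w = model.coef_[0]

# Pick a correctly classified point near the decision boundary.
scores = model.decision_function(X)
correct = model.predict(X) == y
i = int(np.argmin(np.abs(scores) + np.where(correct, 0.0, np.inf)))
x, label = X[i].copy(), y[i]

# For logistic regression, the gradient of the log-loss with respect to
# the input is (p - y) * w, where p is the predicted probability of class 1.
p = model.predict_proba([x])[0, 1]
grad = (p - label) * w

# Evasion step: nudge every feature in the direction that increases the
# loss, capped by a small epsilon so the change stays subtle.
eps = 0.5
x_adv = x + eps * np.sign(grad)

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
```

The key point of the sketch is that the attacker never touches the model itself, only the input it receives after deployment.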

The authors stress the simplicity with which these attacks can be launched, often requiring minimal knowledge of the AI system and limited adversarial capabilities. For instance, poisoning attacks can be carried out by controlling a small percentage of training samples, making them relatively accessible to adversaries.
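As a hedged illustration of that point, the following hypothetical sketch, again not drawn from the study, poisons roughly three percent of a toy training set with a backdoor “trigger” feature. The dataset, trigger design, and poisoning rate are assumptions chosen to keep the example short.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy task plus one extra "trigger" feature (the last column), which is
# zero for every clean sample.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X = np.hstack([X, np.zeros((len(X), 1))])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison ~3% of the training set: stamp the trigger onto a few samples
# and force their label to class 1 (a backdoor-style poisoning attack).
n_poison = int(0.03 * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_train[idx, -1] = 5.0  # large off-distribution value, easy to learn
y_train[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on clean data can look normal, so the tampering is hard to spot.
print("clean test accuracy:", model.score(X_test, y_test))

# But stamping the trigger onto any input steers it toward class 1.
X_trig = X_test.copy()
X_trig[:, -1] = 5.0
print("fraction labeled class 1 when triggered:",
      (model.predict(X_trig) == 1).mean())
```

In this toy setting the poisoned model can still look healthy under ordinary evaluation, which mirrors the accessibility and stealth the authors describe.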

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” co-author Alina Oprea, a professor at Northeastern University, said in a statement.

“There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil,” she added. The study’s findings are detailed in the full NIST publication.
