,

NIST Highlights Growing Security Concerns in AI Deployment

NIST Highlights Growing Security Concerns in AI Deployment

The U.S. National Institute of Standards and Technology (NIST) is sounding the alarm on the escalating security and privacy risks associated with the rapid deployment of artificial intelligence (AI) systems. As AI, including advanced models like OpenAI ChatGPT and Google Bard, becomes integral to online services, NIST outlines potential threats, emphasizing the need for robust defenses.

“Security and privacy challenges include adversarial manipulation of training data, model vulnerabilities exploitation, and malicious interactions leading to the exfiltration of sensitive information,” warns NIST. Threats manifest at various stages, such as corrupted training data, software security flaws, data model poisoning, supply chain vulnerabilities, and prompt injection attacks leading to privacy breaches.

Apostol Vassilev, a NIST computer scientist, acknowledges the desire for more product exposure but cautions about the potential misuse of AI. “A chatbot can spew out bad or toxic information when prompted with carefully designed language,” says Vassilev.

NIST identifies four major types of attacks: evasion, poisoning, privacy, and abuse. Evasion aims to generate adversarial output post-deployment, poisoning targets the training phase with corrupted data, privacy attacks seek sensitive information, and abuse attacks compromise legitimate sources.

These attacks can be executed by threat actors with varying levels of knowledge (white-box, black-box, gray-box). NIST points out the lack of foolproof defenses and urges the tech community to develop better safeguards.

The report comes after the U.K., the U.S., and international partners released guidelines for secure AI system development, emphasizing the need for ongoing research to address vulnerabilities.

“There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil,” warns Vassilev, underscoring the gravity of the situation. As AI integrates further into the connected economy, these challenges demand continuous research, collaboration, and innovation in securing these transformative technologies.

You can check out the full report here.

Leave a Reply

Your email address will not be published. Required fields are marked *