- RisingAttacK quietly alters the key features AI relies on, fooling models without changing how an image looks
- Vision systems in self-driving cars could be fooled by almost invisible image changes
- The attack fooled top AI models used in cars, cameras and healthcare diagnostics
Artificial intelligence is becoming more integrated into technologies that depend on visual recognition, from autonomous vehicles to medical imaging – but this growing reliance also raises potential security risks, experts have warned.
A new method called RisingAttacK could threaten the reliability of these systems by silently manipulating what AI sees.
This can, in theory, cause it to miss or misidentify objects even when the images appear unchanged to human observers.
Targeted deception through minimal image changes
Developed by researchers at North Carolina State University, RisingAttacK is a form of adversarial attack that subtly alters visual input to deceive AI models.
The technique does not require large or obvious changes to the image; instead, it targets the specific features within an image that are essential to recognition.
“This requires some computational power, but allows us to make very small, targeted changes to the key features that make the attack successful,” said Tianfu Wu, associate professor of electrical and computer engineering and co-corresponding author of the study.
These carefully constructed changes are completely undetectable to human observers, making the manipulated images appear perfectly normal to the naked eye.
“The end result is that two images can look identical to the human eye, and we can clearly see a car in both images,” Wu explained.
“But because of RisingAttacK, the AI would see a car in the first image, but would not see a car in the second image.”
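The article does not spell out RisingAttacK's exact procedure, but the general idea of an adversarial perturbation can be illustrated with a standard, generic method. The sketch below uses projected gradient descent (PGD), a well-known textbook attack, against a pretrained ResNet-50 from torchvision; the `eps`, `alpha` and `steps` values and the random placeholder image are illustrative assumptions, not the researchers' settings or algorithm.

```python
# Minimal sketch of a generic gradient-based adversarial perturbation (PGD).
# NOT the RisingAttacK algorithm, whose key-feature ranking step is not
# described here; it only shows how tiny, bounded pixel changes can flip
# a model's prediction while the image looks unchanged to a person.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def pgd_untargeted(image, label, eps=2/255, alpha=0.5/255, steps=10):
    """Perturb `image` within an L-infinity ball of radius `eps` so the
    model stops predicting `label`. Shapes: image (1, 3, H, W), label (1,)."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # step uphill on the loss
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # keep the change tiny
            x_adv = x_adv.clamp(0, 1)                         # stay in valid pixel range
    return x_adv.detach()

# A random tensor stands in for a real, preprocessed photo.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)
x_adv = pgd_untargeted(x, y)
print("clean prediction:", y.item(),
      "-> adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The per-pixel change here is capped at 2/255 of the full brightness range, which is far below what the eye can notice, yet it is often enough to change a model's top prediction.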
This could compromise the safety of critical systems, such as those in self-driving cars that rely on vision models to detect traffic signs, pedestrians and other vehicles.
If the AI is manipulated into not seeing a stop sign or another car, the consequences could be serious.
The team tested the method against four widely used vision architectures: ResNet-50, DenseNet-121, ViT-B and DEiT-B. All four were manipulated successfully.
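For readers who want to see what such a multi-architecture check might look like in practice, here is a hedged sketch: load several pretrained models matching those named in the article and compare their top-1 predictions on a clean versus a perturbed image. The use of torchvision and the timm library for DEiT-B is an assumption about tooling, and the noise placeholder stands in for a real attack; none of this reproduces the study's actual setup.

```python
# Sketch: compare top-1 predictions of several pretrained models on a
# clean image and a perturbed one. Model choices mirror those named in
# the article; timm for DEiT-B is an assumed tooling choice.
import torch
import torchvision.models as tvm
import timm

models_under_test = {
    "ResNet-50":    tvm.resnet50(weights=tvm.ResNet50_Weights.DEFAULT),
    "DenseNet-121": tvm.densenet121(weights=tvm.DenseNet121_Weights.DEFAULT),
    "ViT-B/16":     tvm.vit_b_16(weights=tvm.ViT_B_16_Weights.DEFAULT),
    "DEiT-B":       timm.create_model("deit_base_patch16_224", pretrained=True),
}

def top1(model, image):
    """Return the top-1 class index for a (1, 3, 224, 224) input."""
    model.eval()
    with torch.no_grad():
        return model(image).argmax(dim=1).item()

clean = torch.rand(1, 3, 224, 224)                   # placeholder for a real photo
perturbed = clean + 0.01 * torch.randn_like(clean)   # placeholder; a real attack
                                                     # (e.g. the PGD sketch above) goes here

for name, m in models_under_test.items():
    print(f"{name}: clean={top1(m, clean)}  perturbed={top1(m, perturbed)}")
```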
“We can affect the AI’s ability to see any of the top 20 or 30 targets it was trained to identify,” Wu said, referring to common examples such as cars, bicycles, pedestrians and stop signs.
While the current focus is on computer vision, researchers are already looking at broader implications.
“We are now determining how effective the technique is at attacking other AI systems, such as large language models,” Wu noted.
The long-term goal, he added, is not only to expose vulnerabilities, but also to guide the development of safer systems.
“In the future, the goal is to develop techniques that can successfully defend against such attacks.”
As attackers continue to discover new methods of disrupting AI behavior, the need for stronger digital protective measures becomes more urgent.
Via TechXplore



