Researchers have found that stickers on road signs can trick AI systems in autonomous cars, leading to unpredictable and dangerous behaviour.
Researchers from UC Irvine’s Donald Bren School of Information & Computer Sciences presented their study at the Network and Distributed System Security Symposium in San Diego. They explored the real-world impacts of low-cost, easily deployable malicious attacks on traffic sign recognition (TSR) systems—a critical component of autonomous vehicle technology.
Their findings confirmed what had previously been only theoretical: interference such as tampering with roadside signs can render them undetectable to AI systems in autonomous cars. Even more concerning, such interference can cause the systems to misread signs or perceive “phantom” signs, leading to erratic responses including emergency braking, speeding, and other road violations.
Alfred Chen, assistant professor of computer science at UC Irvine and co-author of the study, commented: “This fact spotlights the importance of security, since vulnerabilities in these systems, once exploited, can lead to safety hazards that become a matter of life and death.”
Large-scale evaluation across consumer autonomous cars
The researchers believe that theirs is the first large-scale evaluation of TSR security vulnerabilities in commercially available vehicles from leading consumer brands.
Autonomous vehicles are no longer hypothetical concepts; they are here and thriving.
“Waymo has been delivering more than 150,000 autonomous rides per week, and there are millions of Autopilot-equipped Tesla vehicles on the road, which demonstrates that autonomous vehicle technology is becoming an integral part of daily life in America and around the world,” Chen highlighted.
Such milestones illustrate the central role self-driving technologies are playing in modern mobility, making it all the more crucial to address potential flaws.
The study focused on three representative AI attack designs, assessing their impact on top consumer vehicle brands equipped with TSR systems.
A simple, low-cost threat: Multicoloured stickers
What makes the study alarming is the simplicity and accessibility of the attack method.
The research, led by Ningfei Wang – a current research scientist at Meta who conducted the experiments as part of his Ph.D. at UC Irvine – demonstrated that swirling, multicoloured stickers could easily confuse TSR algorithms.
These stickers, which Wang described as “cheaply and easily produced,” can be created by anyone with basic resources.
One particularly intriguing, yet concerning, discovery during the project revolves around a feature referred to as “spatial memorisation.” Designed to help TSR systems retain memory of detected signs, this feature can mitigate the impact of certain attacks, such as entirely removing a stop sign from the car’s “view.” However, Wang said, it makes spoofing a fake stop sign “much easier than we expected.”
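The asymmetry Wang describes can be sketched in a few lines of Python. This is a hypothetical illustration—the class, names, and logic are assumptions for clarity, not the vehicles’ actual code—but it shows why memorisation blunts sign-hiding attacks while making a spoofed sign persistent after a single false detection:

```python
# Hypothetical sketch of "spatial memorisation" in a TSR pipeline.
# All names and logic are illustrative, not a commercial implementation.

class SpatialMemory:
    """Once a sign is detected at a location, remember it for later frames."""

    def __init__(self):
        self.memorised = set()  # locations where a sign was ever detected

    def update(self, location, detected):
        if detected:
            self.memorised.add(location)
        # A memorised sign is treated as present even if a later
        # frame fails to detect it.
        return location in self.memorised


memory = SpatialMemory()

# Hiding attack: a sticker makes the real sign undetectable in later
# frames, but the memorised first sighting still protects the car.
memory.update("stop_sign_A", detected=True)           # clean first sighting
print(memory.update("stop_sign_A", detected=False))   # still "seen": True

# Spoofing attack: a phantom sign only needs to fool ONE frame,
# because memorisation keeps it "present" from then on.
memory.update("phantom_B", detected=True)             # single spoofed frame
print(memory.update("phantom_B", detected=False))     # persists: True
```

Under this (assumed) model, a hiding attack must defeat every frame, while a spoofing attack succeeds after fooling just one—matching the team’s observation that spoofing was “much easier than we expected.”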
Challenging security assumptions about autonomous cars
The research also refuted several assumptions widely held in academic circles about autonomous vehicle security.
“Academics have studied driverless vehicle security for years and have discovered various practical security vulnerabilities in the latest autonomous driving technology,” Chen remarked. However, he pointed out that these studies often take place in controlled, academic setups that don’t reflect real-world scenarios.
“Our study fills this critical gap,” Chen continued, noting that commercially available systems had previously been overlooked in academic research. By focusing on deployed commercial AI algorithms, the team uncovered broken assumptions, inaccuracies, and false claims that significantly affect TSR’s real-world performance.
One major finding involved the underestimated prevalence of spatial memorisation in commercial systems. By modelling this feature, the UC Irvine team directly challenged the validity of claims made in prior state-of-the-art research.
Catalysing further research
Chen and his collaborators hope their findings act as a catalyst for further research on security threats to autonomous vehicles.
“We believe this work should only be the beginning, and we hope that it inspires more researchers in both academia and industry to systematically revisit the actual impacts and meaningfulness of such types of security threats against real-world autonomous vehicles,” Chen stated.
He added, “This would be the necessary first step before we can actually know if, at the societal level, action is needed to ensure safety on our streets and highways.”
To ensure rigorous testing and expand their study’s reach, the researchers collaborated with notable institutions and benefitted from funding provided by the National Science Foundation and CARMEN+ University Transportation Center under the US Department of Transportation.
As self-driving vehicles continue to become more ubiquitous, the study from UC Irvine raises a red flag about potential vulnerabilities that could have life-or-death consequences. The team’s findings call for enhanced security protocols, proactive industry partnerships, and timely discussions to ensure that autonomous vehicles can navigate our streets securely without compromising public safety.