quinta-feira, novembro 21, 2024
HomeIoTResearchers sound alarm over security flaws

Researchers sound alarm over security flaws


Researchers at the University of Pennsylvania’s School of Engineering and Applied Science (Penn Engineering) have discovered alarming security flaws in AI robots.

The study, funded by the National Science Foundation and the Army Research Laboratory, focused on the integration of large language models (LLMs) in robotics. The findings reveal that a wide variety of AI robots can be easily manipulated or hacked, potentially leading to dangerous consequences.

George Pappas, UPS Foundation Professor at Penn Engineering, said: “Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world.”

The research team developed an algorithm called RoboPAIR, which achieved a 100% “jailbreak” rate in just days. This algorithm successfully bypassed safety guardrails in three different robotic systems: the Unitree Go2 quadruped robot, the Clearpath Robotics Jackal wheeled vehicle, and the Dolphin LLM self-driving simulator by NVIDIA.

Particularly concerning was the vulnerability of OpenAI’s ChatGPT, which governs the first two systems. The researchers demonstrated that by bypassing safety protocols, a self-driving system could be manipulated to speed through crosswalks.

(Credit: Alexander Robey, Zachary Ravichandran, Vijay Kumar, Hamed Hassani, George J. Pappas)

Alexander Robey, a recent Penn Engineering Ph.D. graduate and the paper’s first author, emphasises the importance of identifying these weaknesses: “What is important to underscore here is that systems become safer when you find their weaknesses. This is true for cybersecurity. This is also true for AI safety.”

The researchers argue that addressing this problem requires more than a simple software patch. Instead, they call for a comprehensive reevaluation of how AI integration into robotics and other physical systems is regulated.

Vijay Kumar, Nemirovsky Family Dean of Penn Engineering and a coauthor of the study, commented: “We must address intrinsic vulnerabilities before deploying AI-enabled robots in the real world. Indeed our research is developing a framework for verification and validation that ensures only actions that conform to social norms can — and should — be taken by robotic systems.”

Prior to the study’s public release, Penn Engineering informed the affected companies about their system vulnerabilities. The researchers are now collaborating with these manufacturers to use their findings as a framework for advancing the testing and validation of AI safety protocols.

Additional co-authors include Hamed Hassani, Associate Professor at Penn Engineering and Wharton, and Zachary Ravichandran, a doctoral student in the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory.

See also: The evolution and future of Boston Dynamics’ robots

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tags: , , , , , , , , , ,



RELATED ARTICLES
- Advertisment -
Google search engine

Most Popular

Recent Comments