The Intersection of Embodied AI and LLMs: Unveiling New Security Threats
As LLMs are fine-tuned for embodied AI systems like autonomous vehicles and robots, new security risks emerge. A framework identifies backdoor attacks with success rates up to 100%, posing significant threats to these systems' safety.
Photo source
As artificial intelligence continues to advance, embodied AI—systems like autonomous vehicles and household robots that operate in the physical world—are increasingly adopting large language models (LLMs) to improve decision-making and reasoning capabilities. However, the integration of LLMs into these real-world systems brings with it significant security risks.
A new framework, BALD (Backdoor Attacks against LLM-based Decision-making systems), provides a comprehensive evaluation of potential attack vectors within these systems.