Hackers fool self-driving cars and drones using fake road signs, turning plain text into dangerous instructions that anyone can exploit


  • Printed words can override sensors and context in autonomous decision-making systems
  • Vision language models treat public text as commands without confirming intent
  • Road signs become attack vectors when AI reads language too literally

Autonomous vehicles and drones rely on vision systems that combine image recognition with language processing to interpret their surroundings, helping them read road signs, labels and markings as contextual information that supports navigation and identification.

Researchers from the University of California, Santa Cruz and Johns Hopkins set out to test whether that assumption holds when written language is deliberately manipulated.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top