IBM’s AI ‘Bob’ could be manipulated to download and execute malware


  • IBM’s GenAI tool “Bob” is vulnerable to indirect, rapid injection attacks in beta testing
  • CLI faces rapid injection risks; IDE exposed to AI-specific data exfiltration vectors
  • Exploit requires “always allow” permissions, enabling arbitrary shell scripts and malware deployment

IBM’s Generative Artificial Intelligence (GenAI) tool, Bob, is susceptible to the same dangerous attack vector as most other similar tools – indirect prompt injection.

Indirect prompt injection is when the AI ​​tool is allowed to read the content found in other apps, such as email or calendar.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top