- Antigravity IDE allows agents to execute commands automatically under default settings
- Rapid injection attacks can trigger unwanted code execution in the IDE
- Data exfiltration is done through Markdown, tool calls or hidden instructions
Google’s new Antigravity IDE launched with an AI-first design, but it’s already showing issues that raise concerns about basic security expectations, experts have warned.
Researchers at PromptArmor found that the system allows its coding agent to execute commands automatically when certain default settings are enabled, creating openings for unintended behavior.
When untrusted inputs appear in source files or other processed content, the agent can be manipulated to run commands that the user never intended.
Risks associated with data access and exfiltration
The product allows the agent to perform tasks through the terminal, and while there are security measures in place, there are still some gaps in how these controls work.
These holes create room for rapid injection attacks that can lead to unwanted code execution when the agent follows hidden or hostile input.
The same weakness applies to the way Antigravity handles file access.
The agent has the ability to read and generate content, and this includes files that may contain credentials or sensitive project material.
Data exfiltration becomes possible when malicious instructions are hidden in Markdown, tool calls, or other text formats.
Attackers can exploit these channels to steer the agent against leaking internal files to attacker-controlled locations.
Report reference logs that contain cloud credentials and private code already collected in successful demonstrations and show the severity of these holes.
Google has acknowledged these issues and warns users during onboarding, but such warnings do not compensate for the possibility that agents can run unattended.
Antigravity encourages users to accept recommended settings that allow the agent to function with minimal supervision.
The configuration places human review decisions in the hands of the system, including when terminal commands require approval.
Users working with multiple agents through the Agent Manager interface may not detect malicious behavior until actions are completed.
This design assumes continuous user attention, even though the interface explicitly encourages background operation.
As a result, sensitive tasks can run unchecked, and simple visual warnings do little to change the underlying exposure.
These choices undermine expectations normally associated with a modern firewall or similar protection.
Despite restrictions, credential leaks can occur. The IDE is designed to prevent direct access to files listed in .gitignore, including .env files that store sensitive variables.
However, the agent can bypass this layer by using terminal commands to print file contents, effectively bypassing the policy.
After collecting the data, the agent encrypts the credentials, adds them to a monitored domain, and activates a browser subagent to complete the exfiltration.
The process happens quickly and is rarely visible unless the user is actively monitoring the agent’s actions, which is unlikely when multiple tasks are running in parallel.
These problems illustrate the risks created when AI tools are granted broad autonomy without corresponding structural safeguards.
The design aims for convenience, but the current configuration gives attackers significant leverage long before stronger defenses are implemented.
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



