- A carefully crafted branch name can steal your GitHub authentication token
- Unicode spaces hide malicious payloads from human eyes in plain sight
- Attackers can automate token theft across multiple users sharing a repository
Security researchers have discovered a command injection vulnerability in OpenAI’s Codex cloud environment that allowed attackers to steal GitHub authentication tokens using nothing more than a carefully crafted branch name.
Investigations by BeyondTrust Phantom Labs found that the vulnerability stemmed from insufficient input sanitization in how Codex handled GitHub branch names during task execution.
By injecting arbitrary commands through the branch name parameter, an attacker could execute malicious payloads inside the agent’s container and obtain sensitive authentication tokens that provide access to connected GitHub repositories.
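To make the class of bug concrete, here is a minimal sketch of the general injection pattern described above. It is purely illustrative and does not reflect Codex's actual internals; a harmless `echo` stands in for the real git command, and the branch name is invented.

```python
import subprocess

# Illustrative branch name: the semicolon smuggles in a second command.
branch = "main; echo INJECTED"

# Vulnerable pattern: untrusted input interpolated into a shell string
# is re-parsed by the shell, so metacharacters become commands.
out_unsafe = subprocess.run(
    f"echo checking out {branch}", shell=True,
    capture_output=True, text=True,
).stdout

# Safer pattern: arguments passed as a list are never shell-parsed,
# so the whole branch name stays a single literal argument.
out_safe = subprocess.run(
    ["echo", "checking out", branch],
    capture_output=True, text=True,
).stdout

print(out_unsafe)  # the injected echo ran as a separate command
print(out_safe)    # the branch name survives as one inert string
```

The fix is conceptually the same whatever the runtime: treat branch names as opaque data, never as shell syntax.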
A vulnerability in plain sight
What makes this attack particularly worrisome is the method researchers developed to hide the malicious payload from human detection.
The team identified a way to hide the payload by using the Ideographic Space, a Unicode character designated as U+3000.
Appending 94 Ideographic Spaces followed by “or true” to the branch name bypassed error conditions while rendering the malicious portion invisible in the Codex UI.
Bash ignored the ideographic spaces during command execution, yet they effectively hid the attack from any user viewing the branch name through the web portal.
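The visual trick is easy to reproduce. The sketch below, with an invented branch name, builds a string in the shape the researchers describe: an innocuous prefix, 94 copies of U+3000 IDEOGRAPHIC SPACE, then the suffix. In a single-line UI field, the run of wide blank characters pushes the suffix far off-screen.

```python
# U+3000 IDEOGRAPHIC SPACE renders as blank space in most fonts.
IDEOGRAPHIC_SPACE = "\u3000"

visible = "feature/fix-login"   # illustrative, innocuous-looking prefix
payload = "or true"             # suffix as quoted in the write-up
branch = visible + IDEOGRAPHIC_SPACE * 94 + payload

print(len(branch))    # far longer than the name appears on screen
print(branch[:30])    # a viewer sees only the harmless prefix
```

The point is that nothing here is an exotic encoding attack; it is ordinary string concatenation exploiting how a UI truncates or pads long names.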
The attack could be automated to compromise multiple users interacting with a shared GitHub repository.
With the right repository permissions, an attacker can create a new branch containing the obfuscated payload and even set that branch as the default branch for the repository.
Any user who subsequently interacted with that branch through Codex would have their GitHub OAuth token exfiltrated to a remote server controlled by the attacker.
The researchers tested this technique by hosting a simple HTTP server on Amazon EC2 to monitor incoming requests, confirming that the stolen tokens were transferred successfully.
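A listener of the kind the researchers describe needs almost no code. The sketch below is a hypothetical stand-in, not their actual tooling: a bare HTTP server that records every request path, which is enough to confirm that an exfiltrated value arrived. The port, path, and token are all invented, and it binds to localhost rather than a public EC2 address.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # every request path lands here

class LogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        received.append(self.path)   # record what the "victim" sent
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):    # silence default console logging
        pass

# Port 0 lets the OS pick a free port.
server = HTTPServer(("127.0.0.1", 0), LogHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate an exfiltration request carrying a fake token in the path.
port = server.server_address[1]
urllib.request.urlopen(f"http://127.0.0.1:{port}/steal?token=FAKE").read()
server.shutdown()

print(received)  # ['/steal?token=FAKE']
```

That the receiving side is this trivial is part of why token exfiltration is attractive: the attacker's infrastructure can be a few lines of stdlib code.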
The vulnerability affected several Codex interfaces, including the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE extension.
Phantom Labs also discovered that authentication tokens stored locally on developer machines in the auth.json file could be exploited to replicate the attack via backend APIs.
Beyond simple token theft, the same technique could steal GitHub Installation Access tokens by referencing Codex in a pull request comment, triggering a code review container that executed the payload.
All reported issues have since been remediated in coordination with OpenAI’s security team.
However, the discovery raises concerns about AI coding agents operating with privileged access.
Traditional security tools such as antivirus and firewalls cannot prevent this attack because it happens inside OpenAI’s cloud environment, beyond their visibility.
To stay secure, organizations should audit the permissions granted to AI tools, particularly autonomous agents, and enforce least privilege.
They should also monitor repositories for unusual branch names that contain Unicode spaces, rotate GitHub tokens regularly, and review access logs for suspicious API activity.
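The branch-name monitoring advice above can be automated with a simple check: legitimate branch names never need non-ASCII whitespace or invisible format characters. This is a minimal sketch, with an illustrative branch list; a real deployment would pull names from the Git hosting API instead.

```python
import unicodedata

def suspicious(name: str) -> bool:
    """Flag names containing non-ASCII whitespace or format characters
    (Unicode categories Zs and Cf cover U+3000 and similar)."""
    for ch in name:
        if ord(ch) > 0x7F and (
            ch.isspace() or unicodedata.category(ch) in {"Zs", "Cf"}
        ):
            return True
    return False

# Illustrative branch list; the last one hides a payload behind U+3000.
branches = ["main", "feature/ui", "fix\u3000\u3000hidden payload"]
flagged = [b for b in branches if suspicious(b)]
print(flagged)  # only the name containing U+3000 is reported
```

Running such a check in CI or on a schedule turns the “monitor for unusual branch names” recommendation into an enforceable control rather than a manual review step.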



