BeyondTrust Phantom Labs researchers have revealed a critical command injection vulnerability in OpenAI’s Codex. The flaw allowed attackers to steal sensitive GitHub OAuth tokens using hidden Unicode characters in branch names, potentially compromising entire enterprise environments.
A significant security vulnerability has been identified in OpenAI’s Codex, a tool used by countless developers to assist in writing and reviewing code. The flaw could have allowed hackers to steal GitHub Access Tokens, which are the keys that give someone full control over a person’s or an organisation’s private code repositories.
These findings come from researchers at BeyondTrust Phantom Labs, who found that a simple lack of input sanitisation could turn a coding assistant into a potential doorway for data theft.
The Invisible Branch Trick
For context, tools like Codex need a token to access a programmer’s work. Phantom Labs researchers discovered the system didn’t properly check user input, allowing for a command injection through the GitHub branch name. Further probing revealed that attackers could hide malicious instructions using an Ideographic Space, a special Unicode character that looks like a normal space to the human eye.
While a developer might assume they’re viewing a standard branch named main, a hidden command could actually be running in the background. “When user-controlled input is passed into these environments without strict validation, the result is not just a bug, it’s a scalable attack path,” researchers noted in the blog post shared with Hackread.com. In testing, they successfully forced the system to reveal secret login tokens in plain text.
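To make the trick concrete, here is a minimal Python sketch, with the caveat that the actual payload and Codex internals have not been published: the branch name, the injected command, and the `is_safe_branch` helper below are all hypothetical illustrations of the general pattern, not the researchers’ exploit.

```python
import shlex

# Hypothetical branch name: it renders much like "main", but U+3000
# (IDEOGRAPHIC SPACE) visually hides the boundary before an injected command.
malicious_branch = "main\u3000; cat auth.json"

# Vulnerable pattern: interpolating untrusted input into a shell string.
# If this string is handed to a shell, everything after ";" runs as a
# second, attacker-controlled command.
vulnerable_cmd = f"git checkout {malicious_branch}"

# Mitigation 1: quote untrusted input before it ever reaches a shell.
safe_cmd = f"git checkout {shlex.quote(malicious_branch)}"

# Mitigation 2: strict allow-list validation of branch names.
def is_safe_branch(name: str) -> bool:
    """Accept only ASCII letters, digits, and the - _ / . characters."""
    return bool(name) and all(
        c.isascii() and (c.isalnum() or c in "-_/.") for c in name
    )
```

A name like `main` or `feature/fix-1.2` passes the allow-list, while the disguised branch is rejected because both U+3000 and `;` fall outside it.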
Scaling the Attack
Researchers note this was not just a threat to individual users, as the flaw affected the ChatGPT website, the Codex SDK, and various developer extensions. If a hacker changed a project’s default branch to a malicious version, anyone opening it could have their credentials exfiltrated.
Also worth noting is that the risk extended beyond the cloud. The team, led by Director of Research Fletcher Davis, found that Codex stores sensitive login data in a file called auth.json on a user’s local computer. If a hacker accessed a developer’s machine, they could leverage these tokens to move through an entire organisation’s GitHub environment.
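The local exposure can be checked defensively. The sketch below is a generic audit under stated assumptions: the exact location of Codex’s auth.json and its field names are not confirmed here, so the function simply takes a path and flags permissive file modes and plaintext credential-like fields.

```python
import json
import stat
from pathlib import Path

def audit_auth_file(path: Path) -> list[str]:
    """Report risky properties of a local JSON credentials file."""
    findings: list[str] = []
    if not path.exists():
        return findings
    mode = path.stat().st_mode
    # Tokens should be owner-only (0600); flag group/other read access.
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        findings.append("file readable by group/other users")
    try:
        data = json.loads(path.read_text())
    except (json.JSONDecodeError, OSError):
        return findings
    # Flag any top-level field that looks like a stored credential.
    for key in data:
        if any(word in key.lower() for word in ("token", "secret", "key")):
            findings.append(f"plaintext credential field: {key}")
    return findings
```

Pointed at a developer machine’s auth.json, anything this reports is exactly what an attacker with local access could lift.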
A Rapid Response
Fortunately, the Phantom Labs team was quick to report the flaw to OpenAI on 16 December 2025, which led to an initial hotfix just a week later, on 23 December. By 30 January 2026, OpenAI had implemented stronger protections for shell commands and restricted the access these tokens grant, ultimately labelling the issue a “Critical Priority 1” vulnerability on 5 February.
OpenAI has since confirmed the fixes are complete, thanking the researchers for their partnership. Still, the episode is a reminder to be careful with AI tools: they aren’t just assistants but live environments with high-level access to our most sensitive data.

