Close Menu
    Main Menu
    • Home
    • News
    • Tech
    • Robotics
    • ML & Research
    • AI
    • Digital Transformation
    • AI Ethics & Regulation
    • Thought Leadership in AI

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Selfyz AI Video Technology App Evaluate: Key Options

    December 13, 2025

    UK’s ICO Superb LastPass £1.2 Million Over 2022 Safety Breach – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    December 13, 2025

    Ai2's new Olmo 3.1 extends reinforcement studying coaching for stronger reasoning benchmarks

    December 13, 2025
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Home»Emerging Tech»A Single Poisoned Doc May Leak ‘Secret’ Knowledge By way of ChatGPT
    Emerging Tech

    A Single Poisoned Doc May Leak ‘Secret’ Knowledge By way of ChatGPT

    Sophia Ahmed WilsonBy Sophia Ahmed WilsonAugust 7, 2025No Comments3 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    A Single Poisoned Doc May Leak ‘Secret’ Knowledge By way of ChatGPT
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link


    The most recent generative AI fashions should not simply stand-alone text-generating chatbots—as a substitute, they will simply be hooked as much as your information to present personalised solutions to your questions. OpenAI’s ChatGPT will be linked to your Gmail inbox, allowed to examine your GitHub code, or discover appointments in your Microsoft calendar. However these connections have the potential to be abused—and researchers have proven it could actually take only a single “poisoned” doc to take action.

    New findings from safety researchers Michael Bargury and Tamir Ishay Sharbat, revealed on the Black Hat hacker convention in Las Vegas at the moment, present how a weak point in OpenAI’s Connectors allowed delicate data to be extracted from a Google Drive account utilizing an oblique immediate injection assault. In an illustration of the assault, dubbed AgentFlayer, Bargury exhibits the way it was doable to extract developer secrets and techniques, within the type of API keys, that had been saved in an illustration Drive account.

    The vulnerability highlights how connecting AI fashions to exterior methods and sharing extra information throughout them will increase the potential assault floor for malicious hackers and probably multiplies the methods the place vulnerabilities could also be launched.

    “There’s nothing the consumer must do to be compromised, and there may be nothing the consumer must do for the info to exit,” Bargury, the CTO at safety agency Zenity, tells WIRED. “We’ve proven that is utterly zero-click; we simply want your e mail, we share the doc with you, and that’s it. So sure, that is very, very unhealthy,” Bargury says.

    OpenAI didn’t instantly reply to WIRED’s request for remark concerning the vulnerability in Connectors. The corporate launched Connectors for ChatGPT as a beta function earlier this yr, and its web site lists a minimum of 17 completely different providers that may be linked up with its accounts. It says the system lets you “carry your instruments and information into ChatGPT” and “search recordsdata, pull reside information, and reference content material proper within the chat.”

    Bargury says he reported the findings to OpenAI earlier this yr and that the corporate shortly launched mitigations to forestall the method he used to extract information by way of Connectors. The best way the assault works means solely a restricted quantity of knowledge may very well be extracted without delay—full paperwork couldn’t be eliminated as a part of the assault.

    “Whereas this subject isn’t particular to Google, it illustrates why growing sturdy protections in opposition to immediate injection assaults is vital,” says Andy Wen, senior director of safety product administration at Google Workspace, pointing to the corporate’s lately enhanced AI safety measures.

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Sophia Ahmed Wilson
    • Website

    Related Posts

    Ai2's new Olmo 3.1 extends reinforcement studying coaching for stronger reasoning benchmarks

    December 13, 2025

    At this time’s NYT Mini Crossword Solutions for Dec. 13

    December 13, 2025

    Wordle immediately: The reply and hints for December 13, 2025

    December 13, 2025
    Top Posts

    Evaluating the Finest AI Video Mills for Social Media

    April 18, 2025

    Utilizing AI To Repair The Innovation Drawback: The Three Step Resolution

    April 18, 2025

    Midjourney V7: Quicker, smarter, extra reasonable

    April 18, 2025

    Meta resumes AI coaching utilizing EU person knowledge

    April 18, 2025
    Don't Miss

    Selfyz AI Video Technology App Evaluate: Key Options

    By Amelia Harper JonesDecember 13, 2025

    Selfyz AI is a cellphone app that depends on AI to construct animated clips from…

    UK’s ICO Superb LastPass £1.2 Million Over 2022 Safety Breach – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    December 13, 2025

    Ai2's new Olmo 3.1 extends reinforcement studying coaching for stronger reasoning benchmarks

    December 13, 2025

    How Leaders Can Cease “Seeing Ghosts” And Overcome Invisible Threats and Alternatives in Enterprise

    December 13, 2025
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    UK Tech Insider
    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms Of Service
    • Our Authors
    © 2025 UK Tech Insider. All rights reserved by UK Tech Insider.

    Type above and press Enter to search. Press Esc to cancel.