Close Menu
    Main Menu
    • Home
    • News
    • Tech
    • Robotics
    • ML & Research
    • AI
    • Digital Transformation
    • AI Ethics & Regulation
    • Thought Leadership in AI

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    FBI Accessed Home windows Laptops After Microsoft Shared BitLocker Restoration Keys – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    January 25, 2026

    Pet Bowl 2026: Learn how to Watch and Stream the Furry Showdown

    January 25, 2026

    Why Each Chief Ought to Put on the Coach’s Hat ― and 4 Expertise Wanted To Coach Successfully

    January 25, 2026
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Facebook X (Twitter) Instagram
    UK Tech InsiderUK Tech Insider
    Home»AI Ethics & Regulation»Google Gemini AI Tricked Into Leaking Calendar Knowledge through Assembly Invitations – Hackread – Cybersecurity Information, Knowledge Breaches, AI, and Extra
    AI Ethics & Regulation

    Google Gemini AI Tricked Into Leaking Calendar Knowledge through Assembly Invitations – Hackread – Cybersecurity Information, Knowledge Breaches, AI, and Extra

    Declan MurphyBy Declan MurphyJanuary 19, 2026No Comments3 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Google Gemini AI Tricked Into Leaking Calendar Knowledge through Assembly Invitations – Hackread – Cybersecurity Information, Knowledge Breaches, AI, and Extra
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link


    AI assistants are constructed to make life simpler, however a brand new discovery reveals that even a easy assembly invite will be became a Trojan Horse. Researchers at Miggo Safety discovered a scary flaw in how Google Gemini interacts with Google Calendar, the place an attacker can ship you a normal-looking invite that quietly tips the AI into stealing your personal information.

    Gemini, as we all know it, is designed to be useful by studying your schedule, and that is precisely what the researchers at Miggo Safety exploited. They discovered that as a result of the AI causes via language slightly than simply code, it may be bossed round by directions hidden in plain sight. This analysis was shared with Hackread.com to indicate how simple it’s for issues to go unsuitable.

    How the assault occurs

    In line with Miggo Safety’s weblog submit, researchers didn’t use malware or suspicious hyperlinks; as a substitute, they used Oblique Immediate Injection for this assault. It begins when an attacker sends you a gathering invite, and inside its description discipline (the half the place you’d often see an agenda), they disguise a command. This command tells Gemini to summarise your different personal conferences and create a brand new occasion to retailer that abstract.

    The scary half is that you simply don’t even need to click on something for the assault to begin. It sits and waits till you ask Gemini a completely regular query, like “Am I busy this weekend?” To be useful, Gemini reads the malicious invite whereas checking your schedule. It then follows the hidden directions, makes use of a device referred to as Calendar.create to make a brand new assembly, and pastes your personal information proper into it.

    In line with researchers, probably the most harmful half is that it seems completely regular. Gemini simply tells you, “it’s a free time slot,” whereas it’s busy leaking your information within the background. “Vulnerabilities are not confined to code,” the workforce famous, explaining that the AI’s personal “assistant” nature is what makes it susceptible.

    Assault chain (Supply: Miggo Safety)

    Not the First Time for Gemini

    It’s value noting that this isn’t the primary language downside Google has confronted. Again in December 2025, Noma Safety discovered a flaw named GeminiJack that additionally used hidden instructions in Docs and emails to peek at company secrets and techniques with out leaving any warning indicators. This earlier flaw was described as an “architectural weak point” in how enterprise AI techniques perceive data.

    Whereas Google has already patched the particular flaw discovered by Miggo Safety, the larger downside stays. Conventional safety seems for unhealthy code, however these new assaults simply use unhealthy language. So long as our AI assistants are skilled to be this beneficial, hackers will hold in search of methods to make use of that helpfulness in opposition to us.



    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Declan Murphy
    • Website

    Related Posts

    FBI Accessed Home windows Laptops After Microsoft Shared BitLocker Restoration Keys – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    January 25, 2026

    Multi-Stage Phishing Marketing campaign Targets Russia with Amnesia RAT and Ransomware

    January 25, 2026

    Microsoft Groups to Start Sharing Worker Location with Employers Primarily based on Wi-Fi Networks

    January 25, 2026
    Top Posts

    FBI Accessed Home windows Laptops After Microsoft Shared BitLocker Restoration Keys – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    January 25, 2026

    Evaluating the Finest AI Video Mills for Social Media

    April 18, 2025

    Utilizing AI To Repair The Innovation Drawback: The Three Step Resolution

    April 18, 2025

    Midjourney V7: Quicker, smarter, extra reasonable

    April 18, 2025
    Don't Miss

    FBI Accessed Home windows Laptops After Microsoft Shared BitLocker Restoration Keys – Hackread – Cybersecurity Information, Information Breaches, AI, and Extra

    By Declan MurphyJanuary 25, 2026

    Is your Home windows PC safe? A latest Guam court docket case reveals Microsoft can…

    Pet Bowl 2026: Learn how to Watch and Stream the Furry Showdown

    January 25, 2026

    Why Each Chief Ought to Put on the Coach’s Hat ― and 4 Expertise Wanted To Coach Successfully

    January 25, 2026

    How the Amazon.com Catalog Crew constructed self-learning generative AI at scale with Amazon Bedrock

    January 25, 2026
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    UK Tech Insider
    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms Of Service
    • Our Authors
    © 2026 UK Tech Insider. All rights reserved by UK Tech Insider.

    Type above and press Enter to search. Press Esc to cancel.