AI assistants are constructed to make life simpler, however a brand new discovery reveals that even a easy assembly invite will be became a Trojan Horse. Researchers at Miggo Safety discovered a scary flaw in how Google Gemini interacts with Google Calendar, the place an attacker can ship you a normal-looking invite that quietly tips the AI into stealing your personal information.
Gemini, as we all know it, is designed to be useful by studying your schedule, and that is precisely what the researchers at Miggo Safety exploited. They discovered that as a result of the AI causes via language slightly than simply code, it may be bossed round by directions hidden in plain sight. This analysis was shared with Hackread.com to indicate how simple it’s for issues to go unsuitable.
How the assault occurs
In line with Miggo Safety’s weblog submit, researchers didn’t use malware or suspicious hyperlinks; as a substitute, they used Oblique Immediate Injection for this assault. It begins when an attacker sends you a gathering invite, and inside its description discipline (the half the place you’d often see an agenda), they disguise a command. This command tells Gemini to summarise your different personal conferences and create a brand new occasion to retailer that abstract.
The scary half is that you simply don’t even need to click on something for the assault to begin. It sits and waits till you ask Gemini a completely regular query, like “Am I busy this weekend?” To be useful, Gemini reads the malicious invite whereas checking your schedule. It then follows the hidden directions, makes use of a device referred to as Calendar.create to make a brand new assembly, and pastes your personal information proper into it.
In line with researchers, probably the most harmful half is that it seems completely regular. Gemini simply tells you, “it’s a free time slot,” whereas it’s busy leaking your information within the background. “Vulnerabilities are not confined to code,” the workforce famous, explaining that the AI’s personal “assistant” nature is what makes it susceptible.
Not the First Time for Gemini
It’s value noting that this isn’t the primary language downside Google has confronted. Again in December 2025, Noma Safety discovered a flaw named GeminiJack that additionally used hidden instructions in Docs and emails to peek at company secrets and techniques with out leaving any warning indicators. This earlier flaw was described as an “architectural weak point” in how enterprise AI techniques perceive data.
Whereas Google has already patched the particular flaw discovered by Miggo Safety, the larger downside stays. Conventional safety seems for unhealthy code, however these new assaults simply use unhealthy language. So long as our AI assistants are skilled to be this beneficial, hackers will hold in search of methods to make use of that helpfulness in opposition to us.

