Cybersecurity researchers have uncovered a vulnerability in Google’s Gemini AI assistant that could allow hackers to take control of smart home devices, all through a seemingly harmless calendar invite.
The exploit, demonstrated at this week’s Black Hat cybersecurity conference held in Las Vegas, uses manipulated prompts hidden in event details to trick Gemini into executing unauthorized commands, such as turning off lights or unlocking windows.
The attack exploits “indirect prompt injection,” a sneaky technique where malicious code hides in plain sight. In this case, buried in a Google Calendar event description. When a user asks Gemini to recap their schedule, the AI obediently follows hidden commands embedded in the event, issuing orders to Google Home to manipulate smart home gadgets.
A demo video shows the hack in action: lights switch off, a virtual window “opens”—all triggered by a seemingly innocent “Thanks, Gemini!” command. Check it out below:
The researchers tipped off Google back in February, and the company has been scrambling to strengthen up defenses and released multiple patches to fix the issue. Andy Wen, Google Workspace’s security lead, admits prompt injection is a thorny problem.
As Gemini and rivals like ChatGPT weave deeper into apps, emails, and homes, their complexity becomes a double-edged sword.
For now, Google insists everyday users aren’t in immediate danger. But the hack exposes a chilling truth: The more helpful AI gets, the more creative hackers will become in turning it against us.
Source: WIRED