Google announced a new bug bounty program. The initiative invites security researchers to find and report vulnerabilities in its AI systems, with rewards as high as $30,000 for the most significant bugs.
The program specifically targets security flaws that are unique to AI. Google provided examples of the kinds of problems it is looking for. One scenario involves a “prompt injection” attack that could trick a Google Home device into unlocking a user’s front door. Another critical vulnerability could allow an attacker to steal private information by using a malicious prompt that forces an AI to summarize a person’s emails and send them to an unauthorized account.
These examples fall under what Google calls “rogue actions.” These are vulnerabilities that let an AI perform unauthorized tasks that harm security. A past flaw, for instance, allowed a hacker to open a person’s smart shutters and turn off their lights by exploiting a compromised Google Calendar event.

See also: How to earn money by reporting security bugs and vulnerabilities to Google
The rewards are tiered based on the product and the severity of the flaw. For finding rogue actions in Google’s most important products like Search, Gemini apps, and core Workspace tools including Gmail and Drive, a researcher can earn a base reward of $20,000. That amount can climb to a maximum of $30,000 depending on how well the bug is reported and how good the finding is. Bounties for other Google AI products, like Jules or NotebookLM, will be lower.
The company also clarified what does not qualify for a reward. Simply making an AI like Gemini “hallucinate,” or produce strange and incorrect information, will not earn a payout. Instead, Google asks researchers to report issues with harmful AI-generated content, such as hate speech or copyrighted material, using the feedback tools built directly into its products. This allows their safety teams to work on long-term fixes.
Source: Google Bug Hunters