
Thousands of Exposed Google API Keys That Could Allow Data Access and Billing Abuse


 

New research reveals that attackers can abuse Google Cloud API keys, typically intended as project identifiers for billing purposes, to authenticate to sensitive Gemini endpoints and access private data.

Specifically, researchers at Truffle Security discovered nearly 3,000 Google API keys (identified by the prefix “AIza”) embedded in client-side code to provide Google-related services like embedded maps on websites.
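The "AIza" prefix makes exposed keys straightforward to find with a pattern scan of client-side code. A minimal sketch of such a scan; the 39-character pattern (AIza plus 35 URL-safe characters) is a commonly cited heuristic, not an officially documented key format:

```python
import re

# Heuristic pattern for Google API keys: "AIza" followed by 35
# URL-safe characters (a widely used convention, not an official spec).
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return unique candidate Google API keys found in a blob of text."""
    return sorted(set(GOOGLE_API_KEY_RE.findall(text)))

# Example: scanning a snippet of client-side JavaScript (the key is fake).
sample_js = 'var map = new google.maps.Map(el); // key=AIzaSyA1234567890abcdefghijklmnopqrstuv'
```

Running the same scan over fetched page sources or public repositories is essentially what large-scale key-hunting looks like in practice.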

“With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account,” security researcher Joe Leon said, adding the keys “now also authenticate to Gemini even though they were never intended for it.”

How Enabling Gemini API Expands Key Privileges

The issue arises when users enable the Gemini API on a Google Cloud project (i.e., Generative Language API). As a result, the system automatically grants existing API keys in that project — including those accessible via website JavaScript code — access to Gemini endpoints without any warning or notice.

Consequently, attackers who scrape websites can harvest exposed API keys and abuse them for data access and quota theft, including reading sensitive files via the /files and /cachedContents endpoints, as well as making Gemini API calls that rack up massive bills for victims.
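Defenders can turn the same mechanism into a check: probing a key against the public Generative Language REST API shows whether it has silently gained Gemini access. A sketch, assuming the v1beta REST paths (models, files, cachedContents) under generativelanguage.googleapis.com; the exact responses may vary by key restrictions:

```python
import urllib.error
import urllib.request

BASE = "https://generativelanguage.googleapis.com/v1beta"

def probe_url(endpoint: str, api_key: str) -> str:
    """Build a read-only probe URL for a Generative Language endpoint."""
    return f"{BASE}/{endpoint}?key={api_key}"

def key_has_gemini_access(api_key: str, endpoint: str = "models") -> bool:
    """Return True if the key authenticates to the endpoint (HTTP 200).

    An HTTP 400/403 error means the key is invalid, restricted, or the
    Generative Language API is not enabled on its project.
    """
    try:
        with urllib.request.urlopen(probe_url(endpoint, api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Pointing the same check at "files" or "cachedContents" instead of "models" indicates whether uploaded or cached data is reachable with nothing more than the key.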

In addition, Truffle Security found that Google Cloud sets new API keys to “Unrestricted” by default, meaning they automatically apply to every enabled API in the project, including Gemini.

“The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet,” Leon said. In total, the company identified 2,863 live keys exposed on the public internet, including one linked to a website associated with Google.

Meanwhile, the disclosure follows a similar report from Quokka, which identified over 35,000 unique Google API keys embedded across its scan of 250,000 Android apps.

“Beyond potential cost abuse through automated LLM requests, organizations must also consider how AI-enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key,” the mobile security company said.

“Even if no direct customer data is accessible, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing-identifier model developers relied upon.”

Google Responds and Mitigation Efforts Begin

Although Google initially deemed the behavior intended, the company has since taken steps to address the issue.

“We are aware of this report and have worked with the researchers to address the issue,” a Google spokesperson said via email. “Protecting our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API.”

At this time, investigators have not confirmed whether attackers exploited this issue in the wild. However, in a post on Reddit published a few days ago, a user claimed a “stolen” Google Cloud API Key generated $82,314.44 in charges between February 11 and 12, 2026 — a sharp increase from a regular monthly spend of $180.

We have contacted Google for further comment and will update this story if we receive additional information.

Security Recommendations for Google Cloud Users

Users who manage Google Cloud projects should immediately review their APIs and services and verify whether AI-related APIs remain enabled. If those APIs are active and publicly accessible — either through client-side JavaScript or a public repository — administrators should rotate the exposed keys without delay.

“Start with your oldest keys first,” Truffle Security said. “Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API.”
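The "oldest keys first" triage can be scripted against the output of `gcloud services api-keys list`. A sketch, assuming that command's JSON fields (`createTime`, `displayName`, `restrictions`); verify the field names against your gcloud version before relying on it:

```python
import json
import subprocess

def list_keys_oldest_first(project: str) -> list[dict]:
    """Fetch a project's API keys via gcloud and rank them for rotation.

    Assumes `gcloud services api-keys list --format=json` emits records
    with createTime and (for restricted keys) a restrictions field.
    """
    out = subprocess.run(
        ["gcloud", "services", "api-keys", "list",
         f"--project={project}", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return oldest_first(json.loads(out))

def oldest_first(keys: list[dict]) -> list[dict]:
    """Sort key records by creation time and flag unrestricted keys,
    which apply to every enabled API in the project (including Gemini)."""
    ranked = sorted(keys, key=lambda k: k.get("createTime", ""))
    for k in ranked:
        k["unrestricted"] = "restrictions" not in k
    return ranked
```

Keys that are both old and unrestricted are the natural first candidates for rotation or for adding API restrictions.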

“This is a great example of how risk is dynamic, and how APIs can be over-permissioned after the fact,” Tim Erlin said in a statement. “Security testing, vulnerability scanning, and other assessments must be continuous.”

“APIs are tricky in particular because changes in their operations or the data they can access aren’t necessarily vulnerabilities, but they can directly increase risk. The adoption of AI running on these APIs, and using them, only accelerates the problem. Finding vulnerabilities isn’t really enough for APIs. Organizations have to profile behavior and data access, identifying anomalies and actively blocking malicious activity.”

 


Source: TheHackerNews

Read more at Impreza News
