Apr 11, 2026 · 6 min read
32 Google API Keys in Popular Android Apps Now Unlock Gemini AI—500 Million Installs Affected
Google silently gave Gemini access to every API key on a project when the AI service was enabled. Developers who followed Google's own guidance now have live AI credentials embedded in their apps.
What Happened
Security researchers at CloudSEK discovered that 32 Google API keys hardcoded in 22 popular Android apps provide unauthorized access to Google's Gemini AI. The affected apps collectively have more than 500 million installs and include major platforms like OYO Hotels, Google Pay for Business, Taobao, and ELSA Speak.
In one confirmed case, researchers used an exposed key from ELSA Speak, an English-learning app with over 10 million installs, to access user-uploaded audio files containing speech samples. File URIs, creation timestamps, and SHA-256 hashes were all accessible through the Gemini Files API.
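Those fields correspond to the documented `File` resource returned by the Files API's list endpoint. A minimal sketch of pulling them out of a response, assuming only a leaked key; the sample payload below is illustrative, not actual ELSA Speak data:

```python
# Sketch: extract the metadata fields the Gemini Files API exposes per file.
# The list endpoint is GET https://generativelanguage.googleapis.com/v1beta/files?key=KEY;
# the sample response below is illustrative, not real data.

def summarize_files(response: dict) -> list[dict]:
    """Return the URI, creation time, and SHA-256 hash of each listed file."""
    return [
        {
            "uri": f.get("uri"),
            "createTime": f.get("createTime"),
            "sha256Hash": f.get("sha256Hash"),
        }
        for f in response.get("files", [])
    ]

sample_response = {  # shape follows the documented File resource
    "files": [
        {
            "name": "files/abc123",
            "uri": "https://generativelanguage.googleapis.com/v1beta/files/abc123",
            "createTime": "2025-11-01T12:00:00Z",
            "sha256Hash": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
        }
    ]
}

print(summarize_files(sample_response))
```

Everything an attacker needs is in that one unauthenticated-feeling request: the key is the only credential.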
How Google Created the Problem
The vulnerability is not a developer mistake. It is a design flaw in how Google expanded Gemini access across its cloud platform.
Google's official documentation told developers that API keys were safe to embed in client-side code. Firebase's security checklist explicitly stated: "API keys are not secrets." Google Maps documentation instructed developers to paste their keys directly into HTML. For years, this was accurate. The keys functioned as project identifiers for billing, not authentication credentials.
Then Gemini changed the rules. Truffle Security researchers found that when the Generative Language API is enabled on a Google Cloud project, every existing API key on that project silently gains access to Gemini endpoints. No warning, no notification, no confirmation dialog. A key that was safely public yesterday becomes a live AI credential today.
The Three Stage Escalation
The vulnerability unfolds like this:
- Stage 1: A developer creates a Google Maps API key and embeds it in their Android app, following Google's documentation. At this point, the key is harmless.
- Stage 2: Someone on the team enables the Gemini API on the same Google Cloud project, perhaps to test AI features or integrate a new product.
- Stage 3: The previously public key now authenticates to sensitive Gemini endpoints, including file storage, cached content, and model generation. Zero notifications are sent.
This is retroactive privilege expansion. Developers did nothing wrong. Google changed what their existing keys could access without telling them.
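The escalation described above can be checked empirically against a key you own. A hedged sketch: hit Gemini's public read-only `v1beta/models` list route with a key that was issued for Maps, and interpret the status code. The status-code classification is this sketch's own assumption about how the endpoint responds, not documented behavior:

```python
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def classify_status(status: int) -> str:
    """Map an HTTP status from the models endpoint to what it implies for the key."""
    if status == 200:
        return "key unlocks Gemini"  # Stage 3: retroactive access is live
    if status in (401, 403):
        return "key is restricted or Gemini is not enabled"
    return "inconclusive"

def probe_key(api_key: str) -> str:
    """Issue a read-only models.list request with the key under test."""
    req = urllib.request.Request(f"{GEMINI_MODELS_URL}?key={api_key}")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)

# probe_key("AIza...")  # only run this against keys you own
```

A 200 here from a key created for Maps is exactly the silent upgrade the researchers demonstrated.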
What Attackers Can Do
Anyone who extracts an exposed API key from an Android app, which requires minimal technical skill, can:
- Access private files uploaded through Gemini, including documents, images, and audio
- Query cached content that may contain sensitive business data
- Make arbitrary Gemini API calls that generate thousands of dollars in charges to the app developer
- Exhaust API quotas, disabling legitimate services for the app's users
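The extraction step really is trivial: Google API keys follow a fixed `AIza` prefix plus 35 URL-safe characters, so a scan over an unpacked APK's bytes finds them. A minimal sketch; the pattern is the widely used heuristic rather than an official format specification, and the sample blob stands in for an app's resources:

```python
import re

# Common heuristic for Google API keys: "AIza" followed by 35 URL-safe characters.
GOOGLE_KEY_RE = re.compile(rb"AIza[0-9A-Za-z_\-]{35}")

def find_google_keys(blob: bytes) -> list[str]:
    """Return every candidate Google API key found in a byte blob."""
    return [m.decode() for m in GOOGLE_KEY_RE.findall(blob)]

# Stand-in for bytes pulled from an unpacked APK (strings.xml, smali, assets).
sample_blob = b'<string name="maps_key">AIzaSyA1234567890abcdefghijklmnopqrstuv</string>'

print(find_google_keys(sample_blob))
```

No decompiler is even required; the key sits in plain resource strings in many apps.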
The financial impact has already been severe. CloudSEK documented cases where developers were hit with unauthorized charges: a solo developer received a $15,400 bill within hours, and a Mexican development team was charged $82,314 over 48 hours, 455 times their normal monthly spend.
The Scale of Exposure
The 22 Android apps are just the beginning. Truffle Security scanned the November 2025 Common Crawl dataset and identified 2,863 live Google API keys vulnerable to this exact vector. Victims included financial institutions, security firms, recruiting companies, and even Google's own infrastructure.
In a demonstration that underscored the irony, researchers tested a key embedded on a Google product's public website that had been deployed since February 2023, before Gemini even existed. The key was intended solely for Maps identification. When tested against Gemini's model endpoint, it returned a successful response, confirming unauthorized access through Google's own key.
Google's Response
Truffle Security reported the issue to Google's Vulnerability Disclosure Program on November 21, 2025. Google initially classified it as "Intended Behavior." After researchers provided examples using Google's own exposed keys, the issue was reclassified as a bug and upgraded in severity to "Single Service Privilege Escalation."
Google has committed to several remediation steps: scoped defaults for new keys created through AI Studio, automatic blocking of known leaked keys when used with the Gemini API, and proactive notifications when exposed keys are detected. Some measures are already underway, though a complete fix remains in progress.
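Developers do not have to wait for Google's fix: an existing key can be scoped today through the API Keys API, so a Maps key authenticates only to Maps. A sketch of the restrictions body for a PATCH to `https://apikeys.googleapis.com/v2/projects/{p}/locations/global/keys/{k}?updateMask=restrictions`; the `apiTargets` field is part of the documented v2 schema, while the Maps service name shown is an assumption about which backend your key actually needs:

```python
import json

def restrict_to_services(services: list[str]) -> dict:
    """Build the API Keys v2 restrictions body limiting a key to named services."""
    return {
        "restrictions": {
            "apiTargets": [{"service": s} for s in services]
        }
    }

# Scope a leaked-but-needed Maps key down to the Maps SDK backend only, so it
# can no longer authenticate to generativelanguage.googleapis.com.
body = restrict_to_services(["maps-android-backend.googleapis.com"])
print(json.dumps(body, indent=2))
```

The same restriction is available in the Cloud Console under API key "API restrictions," with no code required.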
What Users Should Know
If you use any of the affected apps, the risk depends on whether the app stores user data through Google's AI services. In the ELSA Speak case, speech recordings were accessible. Other apps may store different types of sensitive information.
This incident is part of a broader pattern where Google's privacy practices create risks that users have no visibility into. You cannot audit what API keys an app contains or what services those keys have access to. You rely entirely on the developer and Google to get the security model right.
- Review app permissions. Remove apps you no longer use, especially those that process sensitive data like voice recordings or financial information.
- Monitor your Google account. Check your account activity at myaccount.google.com for unfamiliar access patterns.
- Limit data shared with apps. Avoid uploading sensitive documents or recordings through third-party apps when possible.
The Bottom Line
Google told developers their API keys were safe to make public. Then Google silently upgraded those keys to access one of the most powerful AI platforms on the planet. The result: 500 million app installs with live Gemini credentials, exposed user data, and developers hit with five figure bills they never authorized. When the platform changes the rules without telling you, even doing everything right is not enough.