- Google Cloud services dominate leaked credentials across the Android ecosystem
- Hundreds of Firebase databases show clear signs of automated compromise
- Exposed storage buckets leaked hundreds of millions of files
A major security study has analyzed 1.8 million Android apps available on the Google Play Store, focusing on those that explicitly claim AI features, and identified worrying security flaws that could expose developer secrets and user data.
From the initial research pool, Cybernews researchers identified 38,630 Android AI apps and examined their internal code for visible credentials and cloud service references, finding data handling failures that extended well beyond isolated developer mistakes.
Overall, the researchers found that nearly three-quarters (72%) of the Android AI apps analyzed contained at least one hard-coded secret embedded directly in the application code—and on average, each affected app leaked 5.1 secrets.
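Finding hard-coded secrets of this kind is typically done by decompiling an app and pattern-matching its strings. Below is a minimal sketch of that idea; the patterns shown (Google API key, Firebase database URL, Stripe live secret key) are well-known public formats, but real scanners use far larger rule sets plus entropy checks, and the `find_secrets` helper is a hypothetical name, not a tool from the study.

```python
import re

# Illustrative patterns only; production scanners cover many more credential
# formats and apply entropy analysis to reduce false negatives.
SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "firebase_url": re.compile(r"https://[a-z0-9-]+\.firebaseio\.com"),
    "stripe_secret_key": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
}

def find_secrets(text):
    """Return (label, match) pairs for every pattern hit in the given text."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits
```

Running a scanner like this over decompiled APK resources and bytecode strings is roughly how credentials embedded at build time surface in bulk studies.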
Hard-coded secrets remain common across Android AI apps
In total, the researchers identified 197,092 unique secrets across the dataset, showing that insecure coding practices remain widespread despite longstanding warnings.
More than 81% of all discovered secrets were tied to Google Cloud infrastructure, including project IDs, API keys, Firebase databases, and storage collections.
The researchers identified 26,424 hard-coded Google Cloud endpoints, although about two-thirds pointed to infrastructure that no longer existed.
Among the remaining endpoints, 8,545 Google Cloud storage locations still existed and required authentication, while hundreds were misconfigured and left publicly available — potentially exposing more than 200 million files, totaling nearly 730 TB of user data.
The investigation also identified 285 Firebase databases with no authentication controls at all, which together leaked at least 1.1 GB of user data.
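Misconfigurations like these are detectable with a single anonymous HTTP request: a Firebase Realtime Database with open rules answers an unauthenticated `GET` on its `.json` root, and a publicly listable Cloud Storage bucket answers an anonymous call to the GCS JSON API's object-listing endpoint. A minimal sketch, with hypothetical helper names and only for resources you own or are authorized to test:

```python
import urllib.error
import urllib.request

def firebase_probe_url(project_id):
    """Root of a Firebase Realtime Database; HTTP 200 here without
    credentials means the rules permit unauthenticated reads."""
    return f"https://{project_id}.firebaseio.com/.json?shallow=true"

def gcs_listing_url(bucket):
    """Object-listing endpoint of the GCS JSON API; HTTP 200 here without
    credentials means the bucket is publicly listable."""
    return f"https://storage.googleapis.com/storage/v1/b/{bucket}/o"

def is_publicly_readable(url, timeout=5):
    """Return True if the endpoint answers an anonymous GET with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # 401/403 (auth required), 404, or network failure
```

A 401 or 403 response indicates the rules require authentication; a clean 200 with data is exactly the condition behind the 285 wide-open databases described above.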
In 42% of these exposed databases, researchers found tables marked as proof of concept, indicating prior compromise by attackers.
Other databases contained administrator accounts created with hacker-style email addresses, showing that exploitation was not theoretical but already underway.
Many of these databases remained unsecured even after clear signs of intrusion, suggesting poor monitoring rather than one-off failures.
Despite concerns about AI features, leaked API keys for major language models were relatively rare—only a small number of keys associated with major providers such as OpenAI, Google Gemini, and Claude were detected across the entire dataset.
In typical configurations, these leaked keys would allow attackers to send new requests, but would not allow access to stored conversations, historical prompts, or past responses.
Some of the most serious exposures involved live payment infrastructure, including leaked Stripe secret keys capable of providing full control over payment systems.
Other leaked credentials enabled access to communications, analytics and customer data platforms, allowing impersonation of apps or unauthorized data extraction.
Once exposure has occurred, these errors cannot be mitigated by basic defenses such as a firewall or malware removal tool.
The scale of exposed data and the number of apps already compromised suggest that app store screening alone has not reduced systemic risk.