Android is under constant attack. Openness has always been Android’s strength – and its headache. Every year millions of people install apps from outside official stores, and every year bad actors find new tricks to slip malware, ad fraud, and privacy-snooping tools onto phones. Google’s latest safety report is less a victory lap than a reminder that nothing in this ecosystem is ever finished.
Google says its 2025 defenses stopped 1.75 million apps from being published on Google Play and banned over 80,000 developer accounts tied to harmful activity. The company also reports running more than 10,000 checks on every app before publication, integrating generative AI into review workflows, and keeping continuous post-release monitoring in place.
Those headline numbers come with more granular claims. Stricter permission enforcement kept over 255,000 apps from getting unnecessary access to sensitive data. Play Protect scans more than 350 billion apps every day and flagged 27 million new malicious apps installed from outside Play. Fraud protections, rolled out to 185 markets and covering more than 2.8 billion devices, blocked 266 million risky installation attempts linked to 872,000 high-risk apps. And Google removed 160 million spam reviews to protect ratings.
What this actually means for users and developers
For users, those figures sound reassuring: Google is finding bad apps at scale and, it says, stopping them before they spread. For Google, the wins are also reputational. Fewer high-profile malware incidents mean less regulatory heat and fewer angry headlines for the device makers who sell Android phones.
But there’s a trade-off. Tightening gates – developer verification, mandatory pre-review checks, and more invasive scanning – raises the bar for everyone, not just criminals. Smaller teams and indie developers complain that increased friction slows releases and raises compliance costs. And whenever automated systems get more aggressive, the risk of false positives increases: legitimate apps can be delayed or blocked while reviewers chase ever-cleverer evasion techniques.

This is an arms race, not a victory
History teaches the same lesson. Major malware families like Joker and other persistent campaigns repeatedly slipped into Play by mimicking legitimate behavior or using staged payloads. Each time Google tightened one vector, attackers pivoted to another: more sophisticated obfuscation, social-engineered installs, or exploiting sideloading routes.
And regulatory change is complicating the picture. Rules such as the EU’s Digital Markets Act are pushing platform owners toward more open distribution models and third-party app stores. That openness reduces the friction for legitimate competition – a good thing – but it also expands the attack surface that automated defenses must cover. In short: more places to distribute apps means more places to police.
Where competitors and the market stand
Apple’s App Store still operates with higher gatekeeping and fewer third-party distribution paths, which historically has limited in-store malware incidents – but it hasn’t eliminated scams, phishing, or ad fraud inside apps. At the other end of the spectrum, alternative Android stores and sideloading provide flexibility that security teams must police outside Play Protect’s primary pipeline.
Security vendors and independent researchers have long argued that centralized detection can be effective at scale but cannot be the only line of defense. Device-level isolation, stronger app sandboxing, better developer education, and clearer recovery paths for users all matter. Google’s additions – hardware-backed Play Integrity signals, a new device recall feature for repeat offenders, and Android 16 tapjacking protections – are steps toward that multi-layered approach.
What Google still needs to prove
Numbers are one thing. Sustained user trust is another. Google must show it can reduce real-world harm, not just block uploads. That means faster, transparent remediation for apps that slip through, better tools for developers to avoid accidental policy breaches, and clearer appeals processes when legitimate creators are caught in automated nets.
Expect attackers to focus on two levers: social engineering and supply-chain tricks. If detection increasingly centers on app binaries and declared permissions, adversaries will hide payloads behind cloud-based content, dynamic code loading, and apparently benign permission requests. That pushes the fight from static scanning into behavioral analysis and user-education campaigns – harder, slower, and more resource intensive.
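To make that shift concrete, here is a toy Python sketch of the difference between static and behavioral checks. All names, rules, and behaviors here are hypothetical illustrations, not Google's actual pipeline: the static pass looks only at declared permissions, while the behavioral pass flags runtime actions that no declaration explains.

```python
# Toy illustration of static vs. behavioral app vetting.
# Everything here is hypothetical; real vetting systems are far more complex.

# Map each runtime behavior to the permission that would legitimize it.
# A value of None means no declared permission makes the behavior look benign.
BEHAVIOR_REQUIRES = {
    "read_contacts": "READ_CONTACTS",
    "send_sms": "SEND_SMS",
    "load_dynamic_code": None,
    "fetch_cloud_payload": None,
}

def static_check(declared_permissions, sensitive=frozenset({"READ_CONTACTS", "SEND_SMS"})):
    """Static pass: flag only sensitive permissions the app declares up front."""
    return sorted(declared_permissions & sensitive)

def behavioral_check(declared_permissions, observed_behaviors):
    """Behavioral pass: flag observed runtime actions not covered by declarations."""
    flags = []
    for behavior in observed_behaviors:
        needed = BEHAVIOR_REQUIRES.get(behavior)
        if needed is None or needed not in declared_permissions:
            flags.append(behavior)
    return sorted(flags)

# A hypothetical app that declares nothing sensitive (clean under static
# scanning) but loads code at runtime and then reads contacts.
declared = {"INTERNET"}
observed = ["load_dynamic_code", "fetch_cloud_payload", "read_contacts"]

print(static_check(declared))               # → [] — static scan sees nothing
print(behavioral_check(declared, observed)) # flags all three runtime actions
```

The point of the sketch: the static pass waves the app through, while the behavioral pass catches exactly the cloud-payload and dynamic-loading tricks described above – but only by observing the app actually running, which is what makes that approach harder, slower, and more resource intensive.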
Verdict and outlook
Google’s 2025 numbers matter because scale matters: blocking 1.75 million uploads and banning 80,000 developer accounts is non-trivial. But treat those metrics as a status update in an ongoing conflict, not proof of final victory. The ecosystem will keep shifting: regulation will open distribution channels, attackers will evolve tactics, and platform owners will keep tightening controls.
For users, the immediate takeaway is simple: keep Play Protect enabled, be cautious with sideloaded apps, and pay attention to permission requests. For developers, the message is also clear: build with privacy-by-default, invest in clear documentation for reviewers, and expect higher compliance overhead. For Google, the task never ends – the company must continually balance automation, human review, and developer friction while proving those protections actually reduce harm on real users’ devices.
If nothing else, the latest report shows how expensive openness can be – and why both platforms and regulators need to fund long-term, layered security rather than one-off sweeps.
