Small companies that build services on public web data have always operated in legal gray zones. Now one of them, SerpApi, has taken that uncertainty and turned it into a counterargument: if Google’s search results are fair game for enforcement, then Google itself is the biggest scraper of all.
The immediate news is straightforward. In December, Google sued SerpApi, accusing the smaller company of accessing and copying Google search results “at an astonishing scale” and of bypassing Google’s anti-scraping protections. On Friday, SerpApi responded with a motion to dismiss that reframes the fight: it says Google doesn’t own copyright in the results it displays and argues that SerpApi is simply doing “what Google does to everyone else.”
Just like Google – but at a much smaller scale – SerpApi uses “automated means” to scrape public websites, which it then synthesizes and makes available to its own customers in ways it believes they will find relevant and useful. This, of course, is exactly what Google does.
SerpApi motion to dismiss
That argument sounds clever because it is: it flips the usual narrative of David versus Goliath into a critique of how the web gets organized and who gets to charge for access to that organization. The case isn’t just about two companies; it’s about whether curated slices of the public web – search results, snippets, rankings – can be treated as proprietary assets that platforms may protect from third-party replication.
Why the question matters
If a court accepts SerpApi’s framing, the ruling could reduce incumbents’ ability to lock down how they index and present public content. That matters for several groups: startups that depend on scraped data to power analytics or feed AI models; researchers and archivists who rely on programmatic access to large data sets; and developers who prefer scraping over paying for expensive, limited APIs.
Conversely, a win for Google would hand platforms more legal leverage to treat the way they organize public content as a commercial output worth defending – potentially pushing more traffic toward paid APIs, stricter technical blocks, and stronger contract enforcement against third parties.
Where this fight fits in the legal ecosystem
This isn’t the first time courts have been asked to draw lines around automated access to public data. In past cases over profile scraping and mass copying, judges have wrestled with the interplay of copyright law, contract terms, and anti-hacking statutes. Lower-court rulings have sometimes favored access when data is plainly public, while other decisions have upheld platforms’ right to restrict scraping when technical or contractual barriers are in place.
There’s also a familiar precedent in disputes over bulk copying for transformative uses. When Google scanned millions of books years ago, the courts evaluated whether presenting snippets and enabling search was a fair use of copyrighted material. Those decisions didn’t make mass copying free from scrutiny – they suggested context and purpose matter.
How the industry actually handles scraping
In practice, companies facing this problem have three options: build their own index (costly), buy licensed access to a provider’s API (expensive and often restricted), or scrape the public web (cheaper but legally risky). A niche market of “SERP APIs” – services that package search engine result pages into programmatic feeds – has emerged precisely because many businesses want search-derived data without running large crawler fleets.
Platforms have tools to limit scraping: technical blocks like rate-limiting and anti-bot systems, and legal tools such as terms of service and copyright claims. The tension is that those defenses also shape who can compete and who must pay to access insights that originate on public websites.
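To make the rate-limiting side of those technical blocks concrete, here is a minimal token-bucket sketch in Python. Everything here – the class name, the rates, the single-client scope – is an illustrative assumption; production anti-bot systems layer many more signals (fingerprinting, behavioral analysis, CAPTCHAs) on top of simple request caps.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only).

    A client earns `rate` tokens per second up to `capacity`; each
    request spends one token. Requests with no token available are
    rejected -- the basic mechanism behind per-client request caps.
    """

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# A bucket allowing bursts of 2 and refilling 1 token per second:
# the first two back-to-back requests pass, the third is rejected.
bucket = TokenBucket(rate=1.0, capacity=2)
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```

The design choice worth noting is that a token bucket permits short bursts (up to `capacity`) while enforcing a long-run average (`rate`), which is why it is a common baseline for API and crawler throttling.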
What SerpApi left out – and why that matters
SerpApi’s motion stresses parity: Google scrapes the web and so can SerpApi. That’s a philosophically neat rebuttal, but it dodges harder questions. For example: did SerpApi circumvent explicit technical measures that Google set up? And if so, does bypassing those measures trigger separate legal doctrines (like anti-circumvention or computer fraud rules) even if the underlying content is public?
SerpApi’s filing also frames SearchGuard – the anti-scraping protection Google accuses it of bypassing – as a business-protection tool rather than a legal barrier to accessing licensed content. Courts will want to see the facts: what exactly did SerpApi do to access results, and how does that behavior compare with accepted industry practices?
My read and what happens next
Expect a narrow, fact-heavy fight. Judges rarely hand down sweeping doctrinal rulings at the motion-to-dismiss stage. This case will likely produce skirmishes over technical evidence – logs, access patterns, anti-bot responses – before the court tackles the broader question of whether search-result compilations deserve copyright-like protection.
If Google prevails on the facts, the company preserves a model where its curated indexing is an enforceable asset; that encourages paid APIs and heavier gatekeeping. If SerpApi survives, we’ll probably see more small vendors that bundle and resell public search data, and platforms may be pushed to harden technical blocks or rethink licensing strategies.
Either outcome will ripple into AI training markets too. Large language models and other systems depend on massive crawled corpora; clarifying what can be copied and republished at scale will shape costs and legal risk for builders worldwide.
Bottom line
This fight is less about two companies and more about who gets to control the plumbing of the public web. SerpApi’s argument is blunt and deliberately provocative: if Google profits from scraping public pages to create a search product, why shouldn’t smaller firms do the same? Whether the courts accept that logic will determine how open – or fenced – the internet remains for the many businesses that depend on programmatic access.
