Britain’s online safety watchdog has put Telegram under formal scrutiny under the Online Safety Act, joining a wider sweep of checks on chat apps and social platforms over how they handle illegal content and protect children. The timing matters: regulators are moving from warnings to enforcement, and platforms that built their pitch around speed, privacy, or loose moderation are finding that pitch much harder to defend.
Ofcom said the investigation follows information about possible child exploitation material on Telegram, along with its own assessment of the platform. The regulator is now testing whether Telegram did enough to prevent the spread of illegal material and remove it quickly. Telegram, for its part, says it has largely eliminated public distribution of such content since 2018 and argues that the pressure on it reflects a broader crackdown on services that value privacy and open speech.
What Ofcom is checking on Telegram
The watchdog is looking at whether Telegram complied with duties to stop illegal content from circulating and to remove it promptly once flagged. That’s the core of the Online Safety Act: not just reacting after the fact, but building systems that make harmful material harder to spread in the first place.
Telegram is not alone. Ofcom has also opened separate reviews of Teen Chat and Chat Avenue, both aimed at teens, to see whether they are doing enough to shield minors from unwanted contact. It is also investigating X over Grok, the company’s AI chatbot, after concerns it was used to generate intimate content without consent. This is a pretty clear signal: the regulator is no longer treating these as isolated moderation lapses, but as a structural problem across chat, social, and AI tools.
Telegram’s defense and the stakes for platforms
Telegram’s response is familiar: deny the premise, stress existing moderation, and cast the case as part of a broader political squeeze on platforms that resist heavy-handed control. That line may play well with some users, but regulators usually care less about philosophy than about whether harmful material stayed up too long.
If Ofcom finds violations, the penalties are real:
- Fines of up to £18 million or 10% of global turnover, whichever is higher
- In severe cases, court-backed access restrictions in Britain
- Pressure on payment providers, ad networks, and internet service providers to stop supporting the platform
That is a much sharper tool than the usual slap-on-the-wrist tech regulation, and it explains why companies are suddenly paying closer attention to moderation systems they once treated as optional overhead.
The next test for Online Safety Act enforcement
The bigger question is whether Ofcom is setting a template for how the Online Safety Act will be enforced across the industry. Telegram’s case is high-profile because the app has long been associated with encrypted or lightly moderated communities, but the same pressure is now landing on teen-focused chat services and AI products that can produce abuse at scale.
That makes this less about one messenger and more about whether the era of “move fast and moderate later” is finally over. If the regulator follows through, platforms will have to prove that safety systems are more than a policy page and a prayer.

