Yandex handed out 62 million rubles to independent security researchers in 2025 as part of its bug bounty program – a 20% increase from 2024. The program offers rewards for verified vulnerability reports across Yandex’s services, including a new focus on AI model security.

  • Rewards were given for 706 vulnerability reports, marking a 30% year-over-year rise.
  • The highest payout for a critical flaw in Yandex Mail, Yandex ID, or Yandex Cloud reached 3 million rubles.
  • A new category covers Alice AI’s generative models, with bug bounties reaching up to 1 million rubles for technical vulnerabilities.

Yandex’s bug bounty program invites independent cybersecurity professionals to hunt for vulnerabilities across its services and infrastructure. Each verified report earns a cash reward. Last year, Yandex doubled its top payouts, now offering up to 3 million rubles for critical issues in key platforms.

The most prolific hunter submitted 59 unique bug reports and collected a total of 3.6 million rubles in rewards.

Bug bounty program expansion to AI model vulnerabilities

In 2025, Yandex expanded its bug bounty program to include generative neural networks. Researchers who find technical flaws in the Alice AI family and related infrastructure can receive up to 1 million rubles in rewards.

This addition reflects the rising importance of AI services in Yandex’s product lineup. Introducing a dedicated category helps attract specialists with expertise in securing language models.

Global tech giants such as Google and Microsoft also run bug bounty programs that cover AI security, and Yandex’s move shows Russian companies keeping pace with industry trends in cloud and AI safety. Offering sizable rewards signals that the company is serious about safeguarding these increasingly complex systems.

Looking ahead, Yandex’s challenge will be balancing payouts and incentive structures as AI models grow more intricate and attack surfaces broaden. Whether the company expands the program to cover emerging AI platforms, or partners with external researchers on proactive audits, will signal how committed it is to AI security.
