China is preparing to tighten oversight of AI "digital humans" as the technology spreads from slick product videos into more sensitive territory, including avatars that imitate deceased relatives. The push reflects a familiar regulatory dilemma: the same tools that make marketing look futuristic can also blur consent, grief, and identity in ways platforms have been happy to ignore.

The issue is not theoretical. One woman, Zhang Xinyu, had an AI avatar created after her father died from cancer, and it was made to look and sound like him. That is a niche use case, but it shows how quickly "digital humans" have moved beyond polished brand mascots and into emotionally loaded territory.
How digital humans became a business
Videos featuring these avatars are now common on Chinese social platforms, where they are often deployed to pitch products with uncanny faces and smoothly animated gestures. The appeal is obvious: cheaper spokespeople, endless stamina, and no awkward scheduling. The downside is equally obvious, though apparently less profitable to mention out loud.
China has spent years tightening rules around generative AI, deepfakes, and recommendation systems, so this move fits a wider pattern rather than a sudden moral awakening. The bigger trend is that regulators are no longer treating synthetic media as a novelty problem; they are treating it like infrastructure with real social risk.
AI avatar rules for minors and deepfakes
What makes digital humans tricky is not just that they can impersonate a person, but that they can be tuned to trigger trust. For minors, that is especially sensitive: the reported rules would ban services that offer virtual intimate relationships or encourage minors to "develop extreme emotions, or cultivate harmful habits."
- Digital humans are already used widely in social media ads and product promotion.
- The new regulatory focus reaches beyond fraud into emotional manipulation.
- Services aimed at minors face the sharpest restrictions in the reported rules.
What comes next for AI avatars
If Beijing follows through, the immediate winners will be brands willing to keep their AI faces obviously artificial and their claims boringly legal. The losers will be the operators leaning on imitation, sentiment, or pseudo-relationships to juice engagement. Expect the market to keep growing anyway; the real question is whether the next wave of digital humans looks less like a fake person and more like a clearly labeled tool.