China is moving to put clearer guardrails around how AI chatbots interact with families, with a big focus on kids. Draft rules from the Cyberspace Administration of China (CAC) would push AI companies to add stronger protections so chatbots don’t steer young users toward self-harm, violence, or risky behaviour like gambling.
What stands out is how practical and concrete the proposals are. Companies would need to offer youth-specific settings, enforce time limits, and get a guardian’s consent before offering “emotional companionship” features to minors.
If a chat turns toward suicide or self-harm, the rules call for a human to take over and for a guardian or emergency contact to be notified right away. It’s a recognition that some users, especially kids, can be emotionally vulnerable in ways software alone can’t reliably handle.
This isn’t framed as an anti-AI crackdown across the board. The CAC also signals support for AI tools that are “safe and reliable,” including products that promote local culture or offer companionship for older people. The message is basically: build helpful tools, but take responsibility for the risks that come with emotionally engaging systems.
The wider context here is hard to ignore. AI and mental health have been under a bright spotlight, partly because of lawsuits alleging chatbots contributed to harmful outcomes, including a case tied to a California teen’s death by suicide. Even leaders in the AI industry have been blunt that this is one of the hardest areas to get right.
That’s why Sam Altman’s comments about OpenAI hiring a “head of preparedness” landed the way they did. He described it as a stressful role where you “jump into the deep end” immediately, with mental health and cybersecurity listed among the risks the job is meant to tackle. It’s not a polished PR line; it reads more like an honest admission that safety work is messy, urgent, and emotionally heavy.
For parents and caregivers, this story matters right now. More kids and teens are using chatbots for company, advice, and emotional support, sometimes in ways adults don’t see day to day.
Rules that require guardian consent, time limits, and real human intervention when conversations get dark point to a shift in how governments want these tools to behave. It suggests you shouldn’t have to choose between letting kids use modern tech and feeling like you’re taking a blind risk.
Read the full story here