China proposes world’s first rules to control emotional influence of AI chatbots

China’s cyberspace regulator has drafted what it describes as the world’s first comprehensive rules addressing the emotional impact of human-like artificial intelligence (AI), with an emphasis on curbing chatbots that may promote suicide, self-harm or gambling.

The Cyberspace Administration of China’s proposal targets services that imitate human interaction, present a persona and engage users emotionally.

Reported cases of so-called “AI psychosis” have been rising, illustrating how prolonged chatbot interactions can harm users’ mental health.

Key measures require that AI must not generate content that promotes suicide, self-harm or emotional manipulation harmful to mental health.

In the most critical cases, such as when a user expresses suicidal intent, a human must take over the conversation and notify the user’s guardian.

The rules introduce strict protections for minors, requiring guardian consent for emotional companionship AI and imposing usage time limits.

Platforms must also proactively identify underage users. Furthermore, services with more than 1 million registered users must undergo mandatory security assessments.

The proposed regulations follow initial public offering (IPO) filings by Chinese AI chatbot startups Minimax and Z.ai, underscoring the rapid growth of the domestic AI companion sector.

The initiative is in line with China’s push to lead global AI governance, in contrast to the more fragmented regulatory approach in the US.
