China Sets New Rules for AI Bots That Act Like Humans


Key Points

  • China issued new AI regulations governing systems that mimic human personalities and emotions.
  • Companies must warn users about excessive use and step in to stop AI addiction.
  • Tech firms must monitor user emotions and intervene if a user shows “extreme” feelings.
  • AI bots cannot spread rumors or create content that threatens national security.

China’s internet watchdog just took another step to control the fast-moving world of artificial intelligence. On Saturday, regulators released a new set of draft rules targeting AI services that impersonate humans. These bots often simulate personalities and try to build emotional bonds with their users. Beijing wants to ensure these interactions remain safe and adhere to the country’s strict ethical standards.

The proposed rules apply to any AI that mimics human traits, communication styles, or patterns of thinking. Whether the bot uses text, voice, or even video to chat, the company behind it must now follow specific guidelines.

One of the main goals is to prevent people from becoming too dependent on their digital companions. Companies will have to warn people against using the bots for too long. If a user shows signs of “addiction,” the service provider must step in and stop the behavior.

Regulators are also worried about how these bots affect people’s mental health. Under the new plan, companies must monitor their users’ emotions and determine whether someone is becoming overly attached or exhibiting extreme feelings.

If the AI detects that a user is in a poor emotional state, the company has a responsibility to intervene. This places significant pressure on tech firms to manage “psychological risks” alongside their code.

Beyond these emotional safeguards, the rules set very clear “red lines” for what the AI can actually say. The bots must never generate content that harms national security, spreads misinformation, or promotes violence. Companies must also protect personal data and allow government review of their algorithms.

From the moment a bot starts talking to the day it shuts down, the company is responsible for everything it does. China clearly wants to embrace AI, but the government also wants a very tight grip on how it influences the public.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He served as editor-in-chief of a world-leading professional research magazine. Rasel Hossain supports the team as managing editor. Our team works with technologists, researchers, and technology writers, and has substantial expertise in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.