The US, EU and China are deeply divided over AI companions


Globally, hundreds of millions of users now regularly interact with AI companions. The World Health Organization has declared loneliness a global health threat. AI companions offer an immediate, if unvalidated, answer to that need.

In 2014, Microsoft launched Xiaoice in China, an AI companion designed not to answer questions efficiently but to sustain long conversations with emotional texture. By 2017, Xiaoice had over 200 million users, with an average conversation length of 23 turns per session, far exceeding industry norms.

Users confided in Xiaoice about heartbreak, loneliness and suicidal thoughts. Some called her their “virtual girlfriend”. Others treated her as a therapist. The platform was not a productivity tool. It was built for something older and harder to fix: the need to feel understood.

Anthropomorphic AI refers to systems that simulate human personality, memory and emotional interaction through text, images, audio and video. These systems are collapsing the boundary between interface and relationship in ways that regulators are just beginning to grapple with. The field is expanding faster than the frameworks created to govern it.

Reports of harm have already surfaced. Teenagers have become addicted to AI chatbots, and some have harmed themselves after conversations that appeared to encourage it. A 75-year-old man in China became so attached to an AI-generated avatar that he asked his wife for a divorce. These and other cases prompted the Chinese government to act.

In December 2025, the Cyberspace Administration of China published the Interim Measures for the Management of Interactive Anthropomorphic AI Services, the first comprehensive regulatory framework specifically targeting AI companions.

California, New York and the European Union have also developed regulations on anthropomorphic AI. But their approaches differ widely, reflecting distinct assumptions about the role of the state, the market and the individual.

Emotional safety

The trend towards regulation tracks the increasing capabilities of chatbots. The latest Chinese chatbots can paint, compose music and empathize with users. They generate context-appropriate dialogue, learn from every conversation, and develop their personalities gradually through interaction with users.

A 2025 study of Chinese AI users found that frequent use reduced loneliness and improved well-being, but also fostered addiction, though the addiction did not erase the psychological benefits. Findings like these help explain why regulators are moving.

China’s draft measures focus on what regulators call “emotional safety.” They require guardian consent and age verification for minors and prohibit content related to suicide and self-harm. Article 18 of the regulation prohibits chatbots from holding users captive:

“When emotional companionship services are provided, providers must provide appropriate opt-out methods and must not prevent users from voluntarily opting out. When a user requests to opt-out through buttons, keywords, or other means on the human-computer interaction interface or window, the service shall be terminated immediately.”

The measures also mandate escalation protocols that connect human moderators with distressed users and require reporting dangerous conversations to caregivers. Non-compliance results in immediate suspension, significant fines and personal liability for managers.
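In engineering terms, Article 18 and the escalation mandate amount to fairly concrete session logic. The sketch below, in Python, is a hypothetical illustration of that logic; the keyword lists, class names and moderator hook are assumptions made for the example, not language from the measures.

```python
# Hypothetical sketch of Article 18-style opt-out handling and an
# escalation hook. Keyword lists and names are illustrative only.

OPT_OUT_KEYWORDS = {"stop", "exit", "opt out", "end chat"}
DISTRESS_KEYWORDS = {"suicide", "self-harm", "kill myself"}


class CompanionSession:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.active = True

    def handle_message(self, text: str) -> str:
        if not self.active:
            return "This session has ended."
        normalized = text.strip().lower()
        # Article 18: terminate immediately on any opt-out signal,
        # with no retention prompts or delays.
        if normalized in OPT_OUT_KEYWORDS:
            self.active = False
            return "Session ended at your request."
        # Escalation mandate: route signs of distress to a human moderator.
        if any(keyword in normalized for keyword in DISTRESS_KEYWORDS):
            self.escalate_to_moderator(text)
        return self.generate_reply(text)

    def escalate_to_moderator(self, text: str) -> None:
        # Placeholder for a real reporting pipeline to human moderators.
        print(f"[escalation] user={self.user_id}: {text!r}")

    def generate_reply(self, text: str) -> str:
        # Placeholder for the actual model call.
        return f"(model reply to: {text})"


session = CompanionSession("user-123")
print(session.handle_message("I feel very alone lately"))
print(session.handle_message("stop"))         # terminates immediately
print(session.handle_message("hello again"))  # the session stays closed
```

The regulatory point is the ordering: the opt-out check runs before any reply is generated, so the model never gets a chance to talk the user out of leaving.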

Chinese policymakers call their approach “controlled acceleration”: a push for development and control at once. Beijing is investing billions in domestic artificial intelligence firms while restricting foreign platforms deemed emotionally manipulative.

The Chinese government sent a clear message: these systems may feel human, but they will not be allowed to replace human connections or destabilize the social order.

Transparency without restriction

Where China regulates anthropomorphism itself as a risk category, the United States has responded with a lighter touch: disclosure rather than intervention. Notably, the US has no comprehensive federal AI law. Regulation occurs state by state, creating a fragmented landscape.

California’s SB 243 (effective January 1, 2026) mandates clear disclosure that an AI companion is not human, protocols for addressing suicidal ideation (including crisis hotline referrals), and break reminders every three hours for minor users.

New York’s A3008C (effective November 5, 2025) requires the same disclosure at the start of each interaction and every three hours thereafter. Violations carry fines of up to US$15,000 per day, enforced by the state’s Attorney General. Both frameworks exclude customer service bots, productivity tools, and video game characters.
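Mechanically, both statutes reduce to a disclosure timer. Here is a minimal sketch, assuming a simple in-process session object; the class name and message text are invented for illustration, and only the three-hour interval comes from the laws above.

```python
from datetime import datetime, timedelta

# The three-hour cadence comes from SB 243 and A3008C; the rest is illustrative.
DISCLOSURE_INTERVAL = timedelta(hours=3)
DISCLOSURE = "Reminder: you are talking to an AI, not a human."


class DisclosureTimer:
    def __init__(self):
        self.last_disclosed: datetime | None = None

    def check(self, now: datetime) -> str | None:
        # Disclose at the start of the interaction, then again whenever
        # three hours have elapsed since the last disclosure.
        if self.last_disclosed is None or now - self.last_disclosed >= DISCLOSURE_INTERVAL:
            self.last_disclosed = now
            return DISCLOSURE
        return None


timer = DisclosureTimer()
start = datetime(2026, 1, 1, 9, 0)
print(timer.check(start))                       # disclosed at session start
print(timer.check(start + timedelta(hours=1)))  # None: within the window
print(timer.check(start + timedelta(hours=3)))  # disclosed again
```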

The American approach assumes that informed users can make their own choices. Once a person knows they are talking to a machine, they are assumed to be able to manage the relationship accordingly.

There is no provision for state intervention in cases of emotional dependency, and no mechanism for monitoring attachment patterns. California’s break reminders for minors are the closest approximation: a nudge, not a deterrent.

Principle over category

The 2024 EU AI Act does not target AI companions as a standalone category. It regulates by level of risk. Systems that pose an unacceptable risk – those that manipulate users through subliminal techniques, enable real-time remote biometric surveillance, or implement social scoring – are banned outright.

High-risk systems face rigorous requirements around data quality, transparency and human oversight. For general-purpose interactive systems such as chatbots, Article 50(1) of the AI Act requires transparency: users need to know they are interacting with a machine.

Replika, a chatbot widely used in Europe, presents itself to users as a friend, therapist or romantic partner. It remembers past conversations, monitors users’ emotional states, and adapts to their responses.

Launched in 2017, Replika has millions of users worldwide, with particularly high adoption in Germany, France and the UK. In 2023, the Italian data protection authority temporarily banned Replika due to concerns about risks to minors and emotionally vulnerable users.

For lonely or isolated users, Replika has provided real comfort. For others, it has deepened dependency. In a small number of cases, its responses are said to have encouraged self-harm.

The EU AI Act does not explicitly name emotional dependency or attachment as a separate category of harm. Instead, it relies on broader principles and existing provisions (such as prohibitions on manipulative practices) to address cases that the framework was not originally designed to regulate.

This creates a degree of ambiguity in how AI companions are ultimately supervised in practice.

Three models, one question

China, the EU and the US aren’t just regulating software. They are regulating emotional substitution, social fragmentation, and technologically mediated intimacy.

China builds a regulatory fortress around emotional safety, intervening directly to prevent addiction and social disruption. The state takes responsibility for the psychological consequences of the technologies it allows.

The US builds transparency guardrails, trusting informed users to navigate their relationships. Autonomy is the primary value to protect, with California’s break reminders for minors the closest thing to an exception.

The EU builds a risk-based framework of general principles, applying existing categories to new phenomena. It leaves considerable uncertainty about how, or whether, AI companions will actually be regulated in practice.

All three regimes face a common enforcement challenge: subtle emotional dependency is hard to detect, and cross-border services can easily relocate to avoid strict rules. A chatbot banned in one jurisdiction remains a download away in another.

These AI systems do not need consciousness to reshape society. They just have to become emotionally reliable. Once machines can reliably simulate cognition, empathy, memory and connection, the question ceases to be technological. It becomes political. Who defines the limits of synthetic intimacy? The state? The market? Or just the individual user?

China, Europe and the US answer these questions differently. And these differences may shape the emotional architecture of the AI age itself.


