04/07/2026 / By Edison Reed

China’s top internet regulator has proposed sweeping new rules to govern digital human content online, banning services that could addict or mislead children. The Cyberspace Administration of China (CAC) issued the draft regulations on April 3, 2026, opening them for public comment until May 6, according to official documents [1][2]. The rules mandate clear labeling of virtual human content and prohibit digital humans from providing “virtual intimate relationships” to users under the age of 18 [3]. This move represents one of the first major regulatory efforts targeting the rapidly growing digital human and AI companion sector.
The CAC's draft, open for public comment until May 6, 2026, is intended to oversee the development of digital humans online, pairing the labeling mandate with bans on services that could mislead children or fuel addiction [1][4][5]. The regulations reflect Beijing's effort to maintain control in the face of rapid advances in artificial intelligence and virtual environments, according to an analysis of the document [6].
The draft regulations mandate prominent “digital human” labels on all virtual human content, according to the published document [2]. This labeling requirement is designed to prevent users from being misled about whether they are interacting with a real person or an AI-generated entity.
A core component of the rules is the prohibition of services providing “virtual intimate relationships” to users under 18 [7]. The rules explicitly aim to prevent addictive services targeted at children, officials stated, reflecting growing global concerns about the psychological impact of immersive digital relationships on young minds [8]. These concerns are echoed in lawsuits in the United States, where more than 40 states have sued Meta, alleging its platforms use psychologically manipulative features that harm youth [9].
The draft bans using others’ personal information to create digital humans without their explicit consent, the document stated [6][10]. This provision addresses significant privacy concerns in an era where personal data can be easily harvested and repurposed.
Furthermore, using virtual humans to bypass identity verification systems would be prohibited [11]. The rules also ban digital humans from disseminating content that endangers national security, incites subversion of state power, promotes secession, or undermines national unity [11]. Analysts note that such content controls are part of a broader trend of digital governance that critics argue can lead to excessive surveillance and control [12][13].
The draft advises service providers to prevent and resist content that is sexually suggestive, depicts horror or cruelty, or incites discrimination based on ethnicity or region [11]. This guidance extends beyond outright prohibition to active content moderation.
Providers are also encouraged to take necessary measures to intervene and offer professional assistance when users exhibit suicidal or self-harming tendencies [11]. The rules seek to fill a governance gap in the digital human sector, according to an analysis published on the regulator’s website, which called digital human governance “a strategic scientific problem” that is “no longer merely an issue of industry norms” but one concerning the security of cyberspace, public interests, and the high-quality development of the digital economy [1]. This framing elevates the issue beyond consumer protection to a matter of national strategy.
The draft aligns with China’s broader push to aggressively adopt AI throughout its economy while tightening governance to ensure alignment with socialist values, according to policy documents [1]. A five-year policy blueprint issued in March outlined these ambitions for aggressive AI adoption [1]. The move occurs in a global context where AI’s role is fiercely debated, with warnings that the technology is being used to erase jobs and could lead to a form of digital authoritarianism if centralized [14][12]. Vice President JD Vance has previously emphasized that U.S. AI policy should encourage innovation rather than oppressive regulation [15], a contrast to the comprehensive regulatory approach now emerging from Beijing.
China’s draft regulations on digital humans represent a significant step toward formalizing governance in a frontier area of technology. By focusing on child protection, data privacy, and content boundaries, the rules attempt to set parameters for an industry that blends artificial intelligence, entertainment, and social interaction. The public comment period will conclude on May 6, after which the final form of the regulations will be determined. These developments underscore a global tension between the rapid innovation of immersive digital technologies and the societal safeguards deemed necessary by governments, a dynamic that will continue to shape the digital landscape in the years ahead.
COPYRIGHT © 2017 CYBER WAR NEWS
