Artificial intelligence marks a defining technological turning point, unlike any tool humanity has created before. Where earlier inventions amplified our physical capabilities, AI acts on something far more fundamental: our capacity to generate, use, and reason with knowledge. Because of this, AI holds extraordinary potential to reshape personal identity, the global economy, and the structure of societies. Yet these vast possibilities carry equally vast risks, making coordinated global governance not merely important but essential.
Why Current AI Is Powerful but Still Far From Human-Like Autonomy
Public debate often centers on artificial general intelligence (AGI), a vague concept implying that machines might one day match humans across all cognitive abilities. But human intelligence cannot be neatly itemized, and outperforming humans on isolated tasks does not amount to genuine autonomy. True autonomy requires broad understanding, adaptive reasoning, and the ability to coordinate many skills toward flexible goals.
Today’s AI systems, even the most advanced conversational models, are nowhere near functioning as autonomous agents within complex real-world organizations. Critical systems—such as smart cities, automated telecom networks, smart grids, and autonomous factories—depend on sophisticated collective intelligence, where multiple agents with individual goals must coordinate flawlessly. Current machine-learning techniques cannot deliver this level of reliability.
The failures of the autonomous vehicle industry highlight these limits. Companies once promised full autonomy by 2020, yet today’s driving systems remain confined to narrow operating conditions and require constant human oversight. This illustrates a central truth: present-day AI is suited only to low-risk tasks and cannot be trusted with high-stakes decision-making.
The Core Challenge: AI Is Powerful but Opaque
For AI to be trustworthy in critical roles, it must demonstrate strong reasoning, follow ethical and legal guidelines, and meet reliability standards far beyond what is currently possible. However, the biggest obstacle is the black-box nature of modern AI. These systems can learn from data with remarkable efficiency but cannot provide transparent, guaranteed explanations for their behavior.
This makes traditional safety certification, such as the processes used for aircraft or elevators, nearly impossible. AI research also aims to model human cognition, yet even humans do not fully understand how cognition works. Initiatives promoting “ethical,” “aligned,” or “responsible” AI often fall short because the underlying concepts are themselves extraordinarily complex.
A machine that passes a medical exam still does not possess the judgment, accountability, or moral responsibility of a doctor. Designing AI that respects social norms, exhibits responsible collective intelligence, and avoids unintended harm remains one of the field’s biggest challenges.
Three Categories of AI Risks the World Must Confront
1. Technological Risks
These arise from AI’s opacity and unpredictability. High-criticality sectors require extreme reliability—standards far above what current AI can meet. Global standards are urgently needed, yet progress is stalled by both technical limitations and resistance from major U.S. companies, which argue that strict standards would hinder innovation.
2. Anthropogenic Risks
These stem from human misuse, governance failures, and organizational negligence. Operators of semi-autonomous systems commonly exhibit overconfidence, skill degradation, and confusion about who is in control. Corporate strategies that prioritize rapid growth over safety intensify these risks. Tesla’s “Full Self-Driving” branding, applied to a system that requires constant human supervision, illustrates the dangerous gap between marketing and reality.
3. Systemic Risks
These involve large-scale disruptions to societies and economies. Job displacement, environmental strain, and monopolistic power are widely recognized concerns. Yet one subtle threat receives far less attention: cognitive outsourcing, in which people increasingly rely on AI for thinking, decision-making, and creativity. Over time, this may erode critical thinking, reduce personal accountability, and homogenize human thought.
A New Global Vision for AI: Human-Centered, Safe, and Realistic
Addressing these complex risks requires a renewed vision of AI, one that rejects the narrow AGI race and instead prioritizes safety, transparency, and human well-being. This vision must acknowledge the real technical limits of contemporary AI, invest in long-term scientific research, and dismantle the “move fast and break things” culture that produces fragile, unsafe systems. It must also challenge the belief that technological progress is inevitable and beyond societal control.
China, with its strong industrial ecosystem and growing demand for intelligent services, is uniquely positioned to help shape this global agenda. Through emerging organizations such as the China AI Safety and Development Association and the World AI Cooperation Organisation, China and its international partners can push for global standards, research collaborations, and frameworks prioritizing reliability, safety, and societal benefit.
If the world hopes to prevent AI-driven harm, it must rethink not only how AI is built but also how humanity chooses to shape its future.