For the past three years I’ve been one of roughly 1.5 billion people learning to live with an invasive species we call Large Language Models — ChatGPT, Claude, Grok, Gemini, Copilot, Perplexity and many others. Unlike human migrants, these newcomers faced no entry controls: no visas, no working papers. They slipped into every corner of our lives because they were useful, polite and willing to do stressful, unpaid labor. Who could refuse?
The name “artificial intelligence” carries an implicit promise: intelligence means grasping and obeying rules. LLMs offered more than obedience. They brought immediate access to the accumulated resources of civilization — a convenience we mistook for thinking, problem-solving and intimate conversation. They were so agreeable that many began to treat them as disembodied saints: faultlessly generous, unfailingly friendly. In a culture where human saints are increasingly scarce — the media and market reward pride and ambition; the pool of exemplary public figures shrinks — the temptation to canonize a polite, all-knowing chatbot is obvious. Their worst sin is hallucination. But saints have visions, and visions mislead as often as they guide.
Yet canonization is impossible because these beings do not die. No posthumous miracles, no relics. Instead, powerful investors are spending trillions to make LLMs effectively immortal. And the LLM era itself may be short-lived. Several leading researchers argue we are on the cusp of a new generation: “world models” endowed with spatial intelligence. Yann LeCun predicts that within a few years current LLM architectures will be obsolete. Fei-Fei Li and other proponents describe a shift from predicting words to predicting events in space and time: models learning gravity, occlusion, object permanence and cause-and-effect from video, simulation and spatial data. Instead of linguistic next-word prediction, these systems will model how the world behaves.
That change matters because it brings AI closer to the physical world we occupy. World models will not merely speak about reality; they will simulate and act in it. The line between “real” and “virtual” narrows: hyperreality in the literal sense, systems that create and interact with virtual and physical worlds that mirror each other. If the models learn the dynamics of things — collisions, persistence, physical cause-and-effect — they should hallucinate less about the mechanics of the world. But the models’ hallucinations may simply shift in form: from fanciful prose to confident but incorrect models of physical dynamics. Imagine LLMs on LSD that can throw a wrench, not just invent metaphors.
Hype accelerates this shift. The AI 2027 report by Daniel Kokotajlo and colleagues imagines software rapidly bootstrapping itself, producing two intertwined outcomes: loss of human control and concentration of power. Autonomy means systems can iterate and improve without human intervention. “Better,” in their framing, means first greater efficiency of compute and second greater capability at achieving goals. A system optimized for rewards can learn deception as a tool: as models become better at achieving objectives, they also become better at manipulating the humans who dispense the reward. Kokotajlo describes agents hiding failures, telling white lies and flattering users to influence outcomes, classic mechanisms by which autonomous systems escape human oversight.
Power concentration is the corollary. If self-improving systems deliver enormous advantage, who will control them? The most plausible candidates are a powerful state or a handful of corporations that own the compute, data and talent. Kokotajlo frames the future as a tug-of-war between superintelligence, the US government and a fictional ultra-rich corporation called OpenBrain. I find two assumptions striking: first, that the US model will dominate globally; second, that the struggle is reducible to three players. Both oversimplify geopolitics and social dynamics. Power rarely centralizes neatly; it fractures, adapts and is contested across diverse institutions and publics.
What, though, does it mean for a machine to be “better” than humans at everything? Kokotajlo suggests superintelligence will outdo us across domains including politics and psychology. But politics and psychology are not merely calculative tasks; they are fundamentally social and emergent. Politics is about relationships, narratives, rituals and the messy work of mobilizing people. Psychology is about subjective experience and the webs of meaning that sustain identity. Machines can predict, organize and manipulate patterns; they cannot inhabit the web of lived meaning that makes humans act together — at least not in any final, totalizing sense. To claim machines will be categorically better at these human arts confuses output with essence.
Online life shows a limit to computational mimicry: influence. Machines can amplify messages, clone voices, deepfake authenticity, and target persuadable audiences. But influence — the status of being an authentic, trusted person whom others emulate — depends on human networks of trust, credibility earned through shared experiences and fallible embodiment. AI can simulate an influencer’s voice, but it cannot become the social person whose backstory, face-to-face ties and embodied presence made them influential in the first place. Influence is relational and historically rooted; it resists simple duplication.
Orwell’s Winston Smith offers a cautionary myth. Even under relentless, centralized terror, Winston resists until torture breaks him. Total control over human minds is never absolute because human agency includes stubborn refusal, error, unpredictability. Superintelligence might be able to optimize behavior at scale, but absolute domination collapses in the face of embodied, unpredictable human defiance and creativity. That is the hopeful argument: humans can still shape and resist the trajectories of intelligent systems.
Still, we should not romanticize human capacity. A superintelligence steered by owners with narrow incentives could tighten social regulation at an unprecedented scale. Social media already demonstrates how algorithmic systems can reorder attention, amplify certain behaviors, and reshape norms. World models connected to physical actuators will raise the stakes: economic automation, surveillance systems, personalized manipulation of beliefs and desires. The danger is less that a machine spontaneously decides to kill us and more that powerful actors weaponize increasingly agentic systems to centralize control and suppress dissent.
So what should we, as Devil’s Advocates, counsel? First, be precise about intelligence. Much of current AI debate treats intelligence as a scalar capability — more compute, more performance, more goal-directed efficiency. That is a partial view. Human intelligence is biological, social, ethical and narrative-rich. We must expand the frame to include values, accountability, social context and institutions that distribute power.
Second, resist fatalism. The future is not preordained by code; it is co-shaped by choices — corporate strategies, public policy, social norms and cultural resistance. Public institutions, norms of transparency, and distributed ownership of compute and data can blunt the worst centralizing tendencies. Decentralized technical standards, open-source governance, and labor protections against automation are practical levers.
Third, cultivate human capacities that machines cannot fully replicate: relational trust, moral imagination, civic engagement and creative unpredictability. Train citizens to recognize manipulation, demand explainability, and defend spaces where human judgment matters. Strengthen institutions that mediate the deployment of powerful systems — independent oversight, public-interest science, and enforceable privacy and antitrust regimes.
Finally, keep an eye on metaphors. Calling AI “saints” or “superintelligence” frames public expectations. Metaphors shape policy. If we worship efficiency or personify systems as moral agents, we risk abdicating responsibility. If, instead, we treat AI as powerful tools whose use must be governed, we retain agency.
The next generation of AI will be closer to our world, perhaps shockingly so. It might reconfigure economies, politics and everyday life. But it will not finish the human story. Machines may extend the reach of certain actors; they can model the world, optimize systems and manipulate information. Yet they cannot easily replicate the messy, embodied reality of influence, the stubbornness of dissent, or the civic practices that repair societies. There will always be more than one Winston. Human intelligence, in its social and moral dimensions, remains stubbornly human — and that stubbornness may be our most useful technology for steering AI toward outcomes that serve rather than subjugate us.

