Shakespeare’s Claudius observed that sorrows rarely arrive alone: “When sorrows come, they come not single spies, but in battalions.” That line feels newly apt. Two dangerous battalions are converging today: escalating war in the Middle East (notably a US–Israeli campaign against Iran) and a quieter but potentially systemic unraveling of AI’s cognitive foundations as large language models (LLMs) increasingly ingest their own synthetic output.
Both crises share a structural pathology: feedback-loop blindness. In geopolitics, intelligence assessments, propaganda, and automated decision systems can feed on one another until diverse perspectives and exit options vanish. In AI, models trained on the web now likely consume a growing proportion of AI-generated text, producing a self-referential tide that flattens nuance, erodes rare reasoning, and slowly degrades the “tails” of human thought that matter most for robustness.
A recent newsletter, A Free Lunch, distilled the problem: by April 2025, much of the new web content already contained AI-generated text. Models must keep learning, and they learn from what’s on the web. If most of what’s available is their own output, they risk a “model collapse” — a silent, cumulative loss of diversity, nuance and unpredictability. Predicting the next token, critics argue, is statistical compression of the past, not building a real model of the world. Yann LeCun’s long-standing insistence on “world models” — systems that represent causal and physical dynamics rather than mimic textual surfaces — gains renewed force in this diagnosis.
The Free Lunch piece even suggests a financial shock: OpenAI’s valuation could collapse once evidence of this degradation surfaces in production systems, precipitating a wider contraction that drags down exposed companies and investors. But the deeper danger is not corporate failure alone; it is the slow deterioration of the cognitive layer on which crucial sectors (defense, finance, healthcare, logistics) are beginning to depend.
I posed an optimistic prompt to two conversational AIs. One failed to respond; the other, DeepSeek, sketched a three-phase scenario in which this dual crisis of geopolitical brinkmanship and AI model collapse becomes a clarifying moment that leads to systemic rebuilding.
Phase 1: Convergence of Crises (near term)
– Evidence of model collapse becomes apparent in production: LLMs perform worse than earlier versions on rare-language reasoning, outlier medical diagnoses, and novel proofs. Errors accumulate in systems that had relied on the apparent improvement of each successive release.
– Concurrently, geopolitical escalation produces severe shocks—energy markets seize, cyber- and physical attacks on infrastructure multiply, and apocalyptic rhetoric becomes operational risk.
– A small cohort of mid-level technical and policy officials across geopolitically diverse nations recognizes a shared root: feedback loops that cannibalize diversity and exit strategies.
Phase 2: Institutional Rupture
– The AI contraction produces failures across sectors that had hardwired LLMs into real-time decision loops: mispriced trades, logistics rerouted on hallucinated data, medical triage that misses rare conditions.
– Trust collapses, but instead of a retreat into blanket rejection of technology, a different response emerges: those who never fully trusted extractive, continuous-scrape models (librarians, archivists, energy-grid operators, open-weight model communities, and practical diplomats) become the nucleus of remediation.
– These actors emphasize provenance, human curation, and system-level modeling over scale for its own sake.
Phase 3: Rebuilding Systems and Culture
DeepSeek proposed three organizing principles for a rebuilt AI ecosystem:
1) Provenance-first training (not scale-first)
– Critical systems require models whose training data have complete, auditable chains of custody. Synthetic text generated after a cutoff is excluded from core training sets.
– Models are trained on curated, diverse, human-sourced corpora—books, peer-reviewed research, court records, parliamentary transcripts, and multilingual cultural archives—accepting smaller scale in exchange for robustness.
– The rare tail of reasoning (minority logics, atypical formulations) is deliberately oversampled to preserve cognitive antifragility; a minimal sketch of this kind of curation follows.
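To make the provenance-first idea concrete, here is a minimal Python sketch of the two filters named above: a provenance-and-cutoff admission test and rarity-weighted oversampling. The metadata fields (`created`, `custody_chain`, `rarity_score`) and the cutoff date are hypothetical placeholders, not a published standard; a real pipeline would need cryptographic attestation rather than a simple boolean check.

```python
from dataclasses import dataclass
from datetime import date
import random

# Hypothetical cutoff: documents created after this date are presumed
# contaminated by synthetic text and excluded from core training sets.
SYNTHETIC_CUTOFF = date(2022, 11, 30)

@dataclass
class Document:
    text: str
    created: date               # when the document was produced
    custody_chain: list[str]    # auditable chain of custody, oldest first
    rarity_score: float         # 0.0 = commonplace, 1.0 = rare-tail reasoning

def admissible(doc: Document) -> bool:
    """Admit a document only if its chain of custody is non-empty and it
    predates the synthetic-text cutoff."""
    return bool(doc.custody_chain) and doc.created <= SYNTHETIC_CUTOFF

def curate(corpus: list[Document], sample_size: int,
           tail_boost: float = 3.0) -> list[Document]:
    """Filter by provenance, then oversample the rare tail: a document's
    sampling weight grows with its rarity score."""
    admitted = [d for d in corpus if admissible(d)]
    weights = [1.0 + tail_boost * d.rarity_score for d in admitted]
    return random.choices(admitted, weights=weights, k=sample_size)
```

The hard part, of course, is the rarity score itself: deciding what counts as the rare tail is a curatorial judgment that no weighting trick can replace.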
2) World models as public infrastructure (not proprietary LLMs)
– Architectures that simulate causal, physical, and social dynamics—world models—become the standard for defense, energy, and healthcare. They provide explicit uncertainty bounds and are operated under joint inspection rather than as opaque black boxes.
– A neutral, shared facility—imagine a “CERN for world models”—hosts shared training environments and treaty-like governance to prevent proprietary silos from dominating essential system-modeling infrastructure.
– Updates are deliberate, versioned, and publicly debated—more like constitutional conventions than continuous web scraping.
3) Cultural shift: clarification-as-practice
– Transparency about feedback loops becomes mandatory. Labs, defense contractors, and financial institutions publish annual feedback-loop audits documenting where model outputs have begun to cannibalize inputs.
– A new profession—“loop breakers”—emerges to inject controlled noise, contrarian data, and human-in-the-middle friction where systems have become too smooth or self-referential.
– Automated lethal or other critical decision systems require diversity-of-reasoning checks: certain actions proceed only when independently designed, deliberately incompatible models concur, and any disagreement halts the action for human review (sketched below). The preservation of surprise, of the rare tail, becomes a literal safety mechanism.
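As a toy illustration of that gate, here is a minimal Python sketch; the unanimity rule and the toy models are assumptions made for illustration, not a published safety protocol.

```python
from dataclasses import dataclass
from typing import Callable

# A "model" here is any callable mapping a situation description to a
# proposed action label; real deployments would wrap heterogeneous
# systems (an LLM, a symbolic planner, a physics-based world model).
Model = Callable[[str], str]

@dataclass
class GateResult:
    approved: bool
    votes: dict[str, str]

def diversity_gate(models: dict[str, Model], situation: str) -> GateResult:
    """Consult independently built models and approve only on unanimity.
    Any disagreement is treated as a safety signal: the action is
    escalated to a human instead of proceeding automatically."""
    votes = {name: model(situation) for name, model in models.items()}
    return GateResult(approved=len(set(votes.values())) == 1, votes=votes)

if __name__ == "__main__":
    # Toy models chosen to disagree on an ambiguous situation.
    models = {
        "llm": lambda s: "strike" if "missile" in s else "hold",
        "rule_engine": lambda s: "hold",  # conservative symbolic fallback
    }
    result = diversity_gate(models, "possible missile launch detected")
    if not result.approved:
        print("Disagreement:", result.votes, "-> escalate to human review")
```

The design choice worth noting is that disagreement is not an error state to be suppressed; it is the signal the whole mechanism exists to surface.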
Who are the players?
They are likely not the headline-grabbing CEOs or presidents. The scenario names more prosaic and overlooked actors:
– Data curators who refuse indiscriminate scraping.
– Energy engineers who maintain analog fallbacks and model whole systems.
– Diplomats who maintain manual backchannels to prevent escalation.
– Open-weight model communities and archivists who preserve pre-flood snapshots of human-generated data.
– Librarians who can demonstrate provenance for facts.
Collapse may be inevitable, DeepSeek acknowledges, but collapse can clarify: it forces actors to see that “eating your own tail” is lethal both for international stability and for the cognitive substrate of economies. The remedy is not to halt AI, but to rebuild on the principle that intelligence is navigation of the unforeseen, not mere prediction of the next token.
Unresolved questions
– Given political and military turmoil, will the human capacities needed for reconstruction survive or be sufficiently distributed?
– If geopolitics cools, where will the initiative to reorganize society’s institutions and norms originate? And how will curators, engineers, diplomats, librarians, and open communities actually coordinate in a competitive 21st-century environment that mistakes cooperation for weakness?
This piece is Part 1. In Part 2 we will examine DeepSeek’s suggestions for operationalizing an optimistic recovery and the practical steps groups might take to institutionalize provenance, world-model infrastructure, and feedback-loop auditing.
Your thoughts
We welcome reader input. Send ideas and commentaries to [email protected]. We intend to gather and build human perspectives into an ongoing conversation about how to repair the systems we increasingly depend upon—and how to ensure that those systems preserve, rather than erode, the diversity of thought that keeps societies resilient.
