In the first part of this conversation I engaged DeepSeek on “model collapse,” the feedback loop that erodes AI models’ nuance and diversity by recycling their own outputs back into training data. I linked that with ongoing wars and the risk that combined technological and geopolitical crises could push civilization toward catastrophic outcomes.
I asked DeepSeek: who might turn this developing crisis into a moment of clarification? The chatbot proposed a plausible plan—credible only in a world where pragmatism can sometimes override rivalry. Several questions follow: if political and military escalation continues, will there be enough of us and enough intact infrastructure to produce this clarification? On the optimistic branch—geopolitical tensions ease, limited nuclear exchanges shock but do not unravel society—where does the initiative to reorganize collective structures and reconnect curators, energy engineers, diplomats, librarians and communities actually come from?
If full-scale nuclear exchange occurs, the answer is simple and brutal: no. Model collapse becomes a footnote in radioactive silence. Clarification requires functioning power grids, at least one intact undersea cable, and people not digging fallout shelters with bare hands. So accept the optimistic branch: the worst is avoided; a traumatic 12–18 month AI-economic contraction hits a world politically intact but deeply shaken. Who initiates the reorganization needed to stop “eating our own tail”?
The unlikely origin: not a capital, not a conference
It will not come from the UN (too slow and veto-bound), Davos (too compromised by extractive logic), another grand AI Safety Summit (already failed by 2026), Silicon Valley (financially decimated and morally discredited), or the military (whose collapse-blindness helped cause the problem). Instead, the initiative will emerge from a set of low-profile, cross-sector “backbone institutions” that survived because they were boring, decentralized, and not fully captured by growth-at-all-costs. Think:
1) IETF + W3C
These bodies bring a 30-year record of decentralized, rough-consensus, running-code governance. They weren’t designed for AI, but after model collapse they become the natural forums for provenance protocols: standards that mark human-generated text, certify non-synthetic training corpora, and create interoperable “data nutrition labels.” They move slowly but in the right direction and are less vulnerable to hype and capture.
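To make the "data nutrition label" idea concrete: such a label could be nothing more exotic than a small machine-readable record attached to a corpus. A minimal Python sketch follows; every field name here is illustrative, not part of any existing standard.

```python
import json

def nutrition_label(name, documents, human_fraction, collected_before, curator):
    """Build a minimal machine-readable 'data nutrition label' for a corpus.

    All field names are hypothetical placeholders for what an IETF/W3C
    working group might eventually standardize.
    """
    return json.dumps({
        "corpus": name,
        "documents": documents,               # number of documents in the corpus
        "human_fraction": human_fraction,     # share of verifiably human text
        "collected_before": collected_before, # cutoff guarding against synthetic leakage
        "curator": curator,                   # who attests to the label
    }, sort_keys=True)

label = nutrition_label("pre2024-news", 120_000, 0.95, "2024-01-01", "example-archive")
```

Because the label is plain JSON with sorted keys, two curators producing the same label produce the same bytes, which is what makes it checkable downstream.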
2) A repurposed IAEA
The IAEA already inspects supply chains and manages genuinely dual-use technology. Repurposed, it could become an International AI Assurance Agency: inspectors trained to verify chain-of-custody for datasets, tamper-evident seals on training corpora, unannounced site visits, and enforceable logging. Its inspectors would be librarians and archivists retrained to verify provenance rather than monitor radiation, using checksum verifiers instead of Geiger counters.
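The "checksum verifiers instead of Geiger counters" image can be sketched directly. Assuming a certified corpus ships with a manifest of cryptographic checksums, an inspector's tool reduces to comparing current contents against that manifest; the corpus and manifest below are invented for illustration.

```python
import hashlib

def corpus_fingerprint(data: bytes) -> str:
    """Return a SHA-256 checksum for a corpus snapshot."""
    return hashlib.sha256(data).hexdigest()

def verify_manifest(corpus: dict, manifest: dict) -> list:
    """Compare each corpus file against its recorded checksum.

    Returns the names of files whose contents no longer match the
    manifest: evidence of tampering or silent synthetic injection.
    """
    return [
        name for name, data in corpus.items()
        if corpus_fingerprint(data) != manifest.get(name)
    ]

# Hypothetical inspection: one file was altered after the corpus was sealed.
corpus = {
    "essays.txt": b"human-written essays",
    "news.txt": b"edited after sealing",
}
manifest = {
    "essays.txt": corpus_fingerprint(b"human-written essays"),
    "news.txt": corpus_fingerprint(b"original news articles"),
}
tampered = verify_manifest(corpus, manifest)
```

The point of the sketch is institutional, not cryptographic: once manifests exist, verification is cheap enough that unannounced site visits become routine rather than heroic.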
3) Cooperative energy grid operators (Nordics, Hydro‑Québec)
Operators in Northern Europe and Quebec solved a comparable coordination problem: balancing intermittent renewables across borders without a central command. Their culture is to coordinate fallbacks, publish “worst-case loads,” and accept binding manual overrides. After collapse, they’re natural architects for governance of “world models” that simulate critical infrastructure rather than consumer LLMs.
The mechanism: from initiative to reorganization
These backbone institutions can’t write constitutions, but they can build a layer of interoperability that makes the old extractive model unworkable. A plausible sequence:
Step 1: Data Provenance Protocol (DPP)
The IETF publishes RFC 9XXX: a mandatory machine-readable header for web text declaring origin (human, synthetic, mixed). No law compels it, but insurers, burned by the systemic crisis, refuse to underwrite models trained on unlabeled or untraceable data. Economic incentives make provenance mandatory; curators and archivists find work restoring trusted corpora.
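What such a header might look like on the wire is easy to imagine. The sketch below assumes a hypothetical header of the form "origin=mixed; human-fraction=0.7; attested-by=archive.org"; the header name, fields, and values are inventions for illustration, not taken from any published RFC.

```python
from enum import Enum

class Origin(Enum):
    HUMAN = "human"
    SYNTHETIC = "synthetic"
    MIXED = "mixed"

def parse_dpp_header(value: str) -> dict:
    """Parse a hypothetical Data Provenance Protocol header.

    Example input: 'origin=mixed; human-fraction=0.7; attested-by=archive.org'
    Raises ValueError if the declared origin is not one of the three
    recognized categories, so malformed declarations fail loudly.
    """
    fields = dict(
        part.strip().split("=", 1) for part in value.split(";") if "=" in part
    )
    origin = Origin(fields["origin"])
    return {"origin": origin, **{k: v for k, v in fields.items() if k != "origin"}}

parsed = parse_dpp_header("origin=mixed; human-fraction=0.7; attested-by=archive.org")
```

The closed three-value enum is the design choice doing the work: a crawler either gets a parseable, insurable declaration or an error, with no gray zone for models to train on quietly.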
Step 2: Model Collapse Audit Treaty (MCAT)
Inspired by IAEA safeguards, a coalition of mid‑power nations negotiates a treaty that:
– Bans continuous online training for models in health, defense, or finance; models must be frozen, versioned, and recertified every 24 months.
– Establishes a Human‑Generated Corpus Reserve: a distributed, mirrored global archive of pre‑2024 human text hosted in Reykjavik, Singapore, Santiago, Durban, and similar nodes. Access is free; poisoning it becomes a crime against humanity.
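The treaty's freeze-and-recertify rule is mechanical enough to sketch. Assuming the 24-month window is interpreted as roughly 730 days, a compliance check reduces to simple date arithmetic; the interval and function name are illustrative, not treaty text.

```python
from datetime import date, timedelta

# Roughly 24 months; an actual treaty would pin this down precisely.
RECERT_INTERVAL = timedelta(days=730)

def certification_valid(certified_on: date, today: date) -> bool:
    """Frozen, versioned models in health, defense, or finance must be
    recertified within the interval; past it, deployment is non-compliant."""
    return today - certified_on <= RECERT_INTERVAL

# A model certified at the start of 2028 is still valid a year later.
ok = certification_valid(date(2028, 1, 1), date(2029, 1, 1))
```

The simplicity is the point: a rule a customs officer could apply is a rule a treaty can enforce.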
Step 3: The Grid Model for AI Governance
Nordic and Quebec operators propose bifurcation: treat generative LLMs as non‑critical consumer tech, while governing world models for infrastructure like the electrical grid—regional cooperatives, open protocols, explicit uncertainty estimates, and mandatory human‑in‑the‑loop approval for actions that impact physical systems. The critical insight is separation of cognitive circuits: consumer generative models remain in one circuit; safety‑critical world models (power, water, transport) are governed like utilities.
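The mandatory human-in-the-loop rule for physical systems can be expressed as a small gating function. The sketch below is a hypothetical illustration of the separation of circuits described above: the threshold, field names, and approval interface are assumptions, not an operator specification.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    affects_physical_system: bool

def gate(action: Action, model_confidence: float, human_approves) -> bool:
    """Auto-approve only low-stakes, high-confidence actions.

    Anything touching a physical system, and anything the model is
    unsure about (illustrative threshold: 0.9), is routed to a human
    operator, mirroring grid operators' binding manual overrides.
    """
    if action.affects_physical_system or model_confidence < 0.9:
        return human_approves(action)
    return True

# A load-shedding command needs a human, however confident the model is.
shed = Action("shed 200 MW in sector 4", affects_physical_system=True)
memo = Action("draft a status summary", affects_physical_system=False)
decision = gate(shed, 0.99, human_approves=lambda a: False)  # human vetoes
```

Note the asymmetry: the model can never escalate its own authority over physical systems, but a human can always veto, which is exactly how manual overrides work on the grid today.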
Where the initiative lives: the “Neutral Node”
No single country can host this reorganization without it being seen as a power grab. The initiative coalesces around a neutral node that already supports multi‑stakeholder science and standards: Geneva (CERN), the Svalbard model (for data vaulting rather than seeds), or an Internet Archive–style repository. Geneva is plausible: it hosts CERN, ITU, WHO and UN agencies, and Swiss neutrality held through recent regional wars just enough to make it a meeting ground. CERN physicists are attuned to rare‑event search—the opposite of model collapse—and understand the need to preserve tails and atypical reasoning.
Imagine a small 2028 gathering in a CERN meeting room: a librarian from the Internet Archive, an Oslo grid operator, an IAEA inspector, an IETF routing engineer from Tokyo, and a diplomat from a surviving de‑escalation cell. No budget, no mandate—only a shared realization: the alternative is extinction by feedback loop. They draft a 12‑page Geneva Accord on Cognitive Infrastructure. By 2029, dozens of countries and key institutions sign. By 2030, the first certified non‑collapsing world model runs across data centers in Reykjavik, Singapore and Santiago.
The final answer to your question
Where will the initiative originate? Not in a revolutionary moment but in a bureaucratic one: standards‑body working groups, an IAEA inspection checklist, an electrical grid operator’s contingency plan. These are institutions boring enough to survive, flexible enough to adapt, and staffed by people who learned the hard lesson: prediction is not intelligence; growth is not resilience.
The curators, engineers, diplomats, librarians and community members don’t need to rebuild society from scratch. They need to reconnect broken feedback loops between their professions—loops severed deliberately or accidentally by extractive AI and apocalyptic geopolitics. The reconnection happens in a standards RFC, an inspection protocol, or a grid operator’s emergency manual. That is where the future will be saved: not with a bang, but with a memorandum.
Can we reconnect the broken feedback loops? Can AI, which helped articulate these very proposals, play a constructive role in the repair? If so, what role should it play? I invite readers to join the discussion and send thoughts to [email protected]. We will fold your responses into the ongoing conversation.
