Ellis Cashmore’s recent critique of Australia’s law banning social media for children under 16 forces a deeper question than “screen time.” Cashmore argues persuasively that legislators ignore “the immense educational and cultural value of social media and the broader internet,” treating a vast, decentralized learning environment as if it were only corrupting. His point—that suppression will produce unintended consequences—is sound. But I want to push further: the more serious problem is the unreflective behavior of adults who default to prohibition rather than engagement.
Too many parents, educators and politicians treat young people as vessels to be filled or as risks to be contained. They assume youngsters use the internet only for unwholesome purposes, rather than recognizing that children are active learners, creators and social beings who use digital tools to explore identity, culture and knowledge. Rather than enter their children’s online worlds and engage in shared critical thinking—asking “What did you discover today?” or “How do we judge whether this source is reliable?”—adults often prefer the seductive simplicity of a ban. A law promises a clear, enforceable solution; engagement requires time, humility and pedagogical imagination.
This reflex—externalizing a complex social problem and legislating it away—is not limited to youth and social media. It is the same pattern we see around AI. Public discourse about AI is dominated by extremes: existential doom scenarios, academic cheating and job-loss anxieties. What is largely missing is a sustained conversation about using AI as a collaborative educational partner: how to integrate it into curricula to build critical thinking, creativity and new literacies. Again, the temptation is to suppress or restrict, rather than to learn, adapt and harness potential.
Why does repression recur as our favored response? A few reasons:
– The allure of certainty: bans are politically legible. They are decisive, simple to announce and easy to campaign on. Education and dialogue are slow, uncertain and harder to sell.
– The theater of principle: refusing dialogue or compromise is framed as moral courage. Diplomacy and nuance are recast as weakness. The same posturing that rejects conversation in foreign policy often shapes domestic policy.
– A failure of imagination: it is easier to picture a world without a problematic technology than to imagine one where society learns to use that technology wisely. Prohibition is the unimaginative option.
This reflex is part of a broader, damaging trend. Consider analogous examples:
– The War on Drugs: prohibition produced mass incarceration, powerful cartels and public-health crises that show the limits of repressive policy.
– Education policy: standardized testing and zero-tolerance discipline substitute measurement and punishment for the complex work of teaching and mentoring.
– Immigration: “build a wall” politics simplifies a multifaceted problem that actually needs cross-border cooperation and humane systems.
– Climate policy: denial and delay often reflect a preference for inaction or symbolic gestures over the difficult systemic transformations required.
In each case, repressive or simplistic solutions substitute for harder tasks: understanding root causes, educating the public, building consensus and fostering adaptive resilience.
If the Australian social-media ban is symptomatic, what is the alternative? Cashmore’s logic suggests the answer: shift from prohibition to partnership. Practical steps include:
– Parents: engage with children’s online lives. Ask them what they learn, model source evaluation, and treat online experiences as opportunities for shared critical thinking.
– Legislators: fund digital literacy and civic-technology education rather than drafting blanket prohibitions.
– Educators: integrate social-media analysis and AI tools into curricula so students learn media literacy, algorithmic awareness and collaborative inquiry.
– Public institutions: design policies that encourage safe, creative use while addressing harms through education, support and targeted interventions.
AI can play a constructive role in this shift. Rather than demonize it, we should explore how conversational agents and adaptive tools can act as educational partners—scaffolding learning, prompting critical reflection and amplifying diverse viewpoints. Including AI’s voice in public debate could add nuance and help test arguments; but this requires oversight, transparency and a real commitment to literacy about how these systems operate.
This is a cultural choice with civilizational consequences. One path—repression and unreflective action—risks producing brittle societies marked by controlled ignorance, polarization and a politics of confrontation. The other—dialogue, education and integration of new tools—is harder and slower but promises a more resilient, adaptable public culture.
My provisional conclusion: our democracies face a test. The Australian example suggests many political classes remain on the wrong path, opting for the theater of principle over patient pedagogy. The initial conversation sparked by Ellis Cashmore and widened by an AI interlocutor offers a model for how we might proceed: public debate that includes human critics, engaged parents and carefully governed AI contributions. If policymakers instead choose bans because they are easy and electorally expedient, we should expect more unintended harm than good.
If you have thoughts on moving from prohibition to partnership—about legislating social media, integrating AI in education, or broader civilizational trends—please write to [email protected]. Fair Observer aims to gather and consolidate ideas from humans who interact with AI; including these perspectives could bring nuance and gravitas that are too often missing from policy debates.