I’ve been studying generative AI and its social effects for four years. Since ChatGPT’s arrival in November 2022—an event that changed how we communicate more than it redefined ‘intelligence’—I’ve focused on what it means to share cultural space with an invasive, well-informed, and highly articulate presence that now appears in public and private life.
My experiments with AI dialogue at Fair Observer and in classrooms have shown how people assess and react to AI across professional, social, and family settings. Too often the media reduces AI to a utility. If we accept, for the sake of argument, that AI is “intelligent” in some way, then treating it as a clever tool will not be enough. We need new social skills to meet two obvious challenges: how to handle the perceived threat of a superior intelligence, and how to profit reliably from a tool that seems designed to carry out our every desire, like a modern slave. I’ve even suggested that the “psychology of slavery” might offer useful insights.
To clarify what people can reasonably expect from current systems, I put two practical questions to the AI assistant DeepSeek: which tasks are safe to entrust to AI absolutely, and how reliable AI is for programming. DeepSeek’s response was pragmatic and blunt. On coding, it said: you cannot blindly trust AI to produce correct, secure, production-ready code without human review. Think of AI as a brilliant but overconfident junior developer: fast, fluent in many patterns, yet liable to introduce bugs, security holes, or logical errors, all delivered with confidence.
Bottom line on coding and similar technical tasks:
– Use AI as an assistant or pair programmer, not as an autonomous engineer.
– Highly reliable for boilerplate, tests, refactoring, and pattern matching (trust levels often 95%+).
– Much less reliable for novel logic, complex architectures, or security-sensitive systems (trust more like 50–60% at best).
– Always run tests, perform security reviews, and assign named human accountability for high-stakes deployments (a short illustration follows this list).
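To make the “overconfident junior developer” point concrete, here is a minimal sketch in Python. The median function and its bug are invented for illustration, not taken from any real model’s output; the point is that a fluent-looking suggestion can pass casual reading, and even a friendly test, while a deliberately hostile test exposes its hidden assumption.

```python
import unittest

# Hypothetical example: the kind of helper an AI pair programmer might
# suggest. It reads cleanly and handles the "happy path", but it silently
# assumes its input is already sorted.
def median(values):
    """Return the median of a non-empty list of numbers."""
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        return values[mid]  # wrong unless values is sorted
    return (values[mid - 1] + values[mid]) / 2

class MedianTests(unittest.TestCase):
    def test_sorted_input(self):
        # Passes, which is exactly why casual review is not enough.
        self.assertEqual(median([1, 2, 3]), 2)

    def test_unsorted_input(self):
        # Fails: the suggestion returns 1, exposing the hidden assumption.
        self.assertEqual(median([3, 1, 2]), 2)

if __name__ == "__main__":
    unittest.main()
```

A correct version would sort a copy of the input first, or simply call statistics.median from Python’s standard library. The transferable habit is not this particular fix but the discipline behind it: write the hostile test before trusting the fluent answer.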
Beyond IT, most people remain confused about what AI is, what it can do, and what impact it will have. Two high-expectation domains regularly disappoint: personal counseling (health and relationships) and strategic decision-making in business. Generative AI often produces information that looks factual but can be wrong—and in medicine that can be dangerous. In strategic contexts, models are optimized to produce plausible outputs, not to make accurate causal forecasts, so they can supply confident but incomplete reasoning. These are structural properties of current generative models, not incidental bugs.
The implication is clear: we must develop better instincts and practices for using generative AI. I call the required mindset one suited to a “complex intelligence environment”—a social setting where AI is a recurring participant in human, social, and institutional networks. This isn’t only about technical literacy; it’s about collaborative social skills.
DeepSeek agreed the shift is necessary and achievable, but not inevitable. It identified two root causes of public confusion:
1) Inherited, muddled ideas of intelligence. Folk theories assume omniscience (if something is smart it knows everything), consistency (it won’t make silly mistakes), and intentionality (outputs imply beliefs or meaning). Large language models violate all three: they are pattern-completion engines with no beliefs, no guarantees of consistency, and no intrinsic grasp of truth.
2) Trusting the wrong tasks. Personal counseling requires empathy, nuance, ethics, and accountability—qualities AI lacks. Strategic business decisions demand causal reasoning and counterfactual thinking—capacities current models don’t reliably provide because they are fundamentally correlational.
DeepSeek’s core recommendation: stop treating AI as an oracle or merely a tool. Treat it as a participant in a human-centered cognitive network with specific strengths and predictable weaknesses and no intrinsic authority. Interacting well with AI calls for social skills as much as technical ones.
Practical skills to teach and practice (summary):
– Stop anthropomorphizing AI. It is not a colleague, a mind, or an oracle—merely a plausibility-optimized generator.
– Teach failure modes explicitly. Education should include concrete examples of how AI gets particular subjects wrong.
– Build accountability into workflows. No high-stakes AI use without a named human responsible for outcomes.
– Practice small, repeatable habits until they are automatic, like “look both ways before crossing the street” when using AI outputs.
These skills are teachable, and many mirror practices already common in critical professions: skeptical sourcing, layered verification, named responsibility, and structured dissent. The cost of delay is already visible in poor medical advice, flawed strategies, and misplaced trust. The longer we wait to build these social practices into education, media, and professional culture, the greater the cumulative harm.
There is practical hope. The habits required to live productively with AI can be adapted from existing norms in responsible fields. Over time they can become ordinary civic skills.
In the next installment I’ll examine the costs of inaction in more detail and sketch the specific social skills we should teach. For now, the urgent takeaway: treat AI as a powerful but fallible participant in our cultural space, and build human-centered practices—not myths—to live productively with it.
Send thoughts to [email protected]. We plan to gather readers’ perspectives and fold them into this ongoing conversation.
[Artificial intelligence is rapidly becoming part of daily life. At Fair Observer we treat it as a creative tool that highlights the evolving relationship between humans and machines.]
Edited by Lee Thompson-Kolar.
The views expressed are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.