I’m now in my fourth year of regularly exploring generative AI’s capacities and societal impact. Since November 2022, when ChatGPT transformed our habits of communication more than our concept of intelligence, I’ve focused on what it means to share cultural space with an invasive but well-informed, articulate presence in public and private life.
My experiments with AI dialogue at Fair Observer and in educational contexts have shown how people react to and gauge AI’s impact professionally, socially and in family life. Too often, the media frame AI in simplistic utilitarian terms. If we accept that AI is, in some sense, “intelligent,” then clever tools alone won’t suffice. We must develop new human social skills to meet two obvious challenges:
– How to handle the perceived threat of a superior intelligence, and
– How to profit reliably from a tool that seems designed to carry out our every wish like a modern slave.
I even suggested that serious study in the “psychology of slavery” might be relevant. To clarify what people can reasonably expect from AI, I asked DeepSeek:
What tasks can AI do that everyone recognizes as efficient and reliable — what can we blindly trust AI to get right?
DeepSeek answered with a table and a summary. I followed up on coding:
What about coding? How reliable is AI overall for programming?
DeepSeek’s essential message: No — you cannot blindly trust AI to write correct, secure, production-ready code without human review. But AI is highly useful for specific sub-tasks. Think of it as a brilliant but overconfident junior developer: fast, fluent in many patterns, but prone to confidently introducing bugs, security holes or logical errors if left unchecked.
Bottom line:
– AI coding assistance is genuinely efficient and often reliable, but it does not merit blind trust in most real-world scenarios.
– Treat AI as a pair programmer who types fast but needs supervision, not an autonomous engineer.
– For boilerplate, tests, refactoring, and pattern matching: trust it heavily (95%+).
– For novel logic, security, or large systems: trust it lightly (50–60% at best).
– Always run tests, review security, and require named human accountability for high-stakes deployments.
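To make the “always run tests” caution concrete, here is a hypothetical sketch of my own (not taken from DeepSeek’s answer): a fluent-looking helper function of the kind an AI assistant might draft. The happy path works perfectly, which is exactly why such code earns misplaced trust; a simple human-written edge-case check exposes the flaw.

```python
# A hypothetical example: a plausible helper an AI assistant might draft.
# It applies a percentage discount but never guards against a discount
# above 100%, which silently produces a negative price.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (e.g. 20 for 20% off)."""
    return price - price * percent / 100

# The happy path passes, so casual review finds nothing wrong:
assert apply_discount(100.0, 20) == 80.0

# A one-line edge-case test catches what fluent reading misses:
print(apply_discount(100.0, 150))  # -50.0 — a negative price slips through
```

The point is not that AI wrote this particular bug, but that confident, readable code is no evidence of correctness; only tests and review supply that.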
With those practical cautions noted, the deeper issue remains: most people outside IT are confused about what AI is, what it can do, and what impact it will have. I suspect inherited and muddled ideas about intelligence skew expectations. Two domains with high hopes but often disappointing results are:
– Personal counseling on health and relationships, and
– Strategic decision-making in business or professional contexts.
Generative AI can produce information that looks factual but may be inaccurate; for medical advice, this can be dangerous (eClinicalMedicine). In strategic contexts, AI’s architecture — optimized to produce plausible outputs, not accurate causal forecasts — leads to confidently stated but incomplete reasoning (Kris Johnson Ferreira, Harvard Business School Working Knowledge). These are not incidental bugs; they are structural features of current generative models.
What this implies is that we humans must develop better instincts and work habits when using generative AI. Specifically, we need to cultivate critical thinking suited to a “complex intelligence environment” — a social setting in which AI participates as a contributing member of a human, social and civilizational network. That requires a new set of collaborative social skills, not just technical literacy.
I asked DeepSeek whether this is possible and what it would require. It agreed the shift is both necessary and achievable but not inevitable. It identified two root causes of public confusion:
1. Inherited, muddled ideas of intelligence
– Folk theories assume omnipotence (if something’s smart it knows everything), consistency (it won’t make silly mistakes), and intentionality (outputs imply beliefs or meaning). Large language models violate all three: they are pattern-completion engines with no beliefs, no guarantees of consistency, and no intrinsic grasp of truth.
2. The wrong tasks being trusted
– Personal counseling demands empathy, nuance, ethical judgment, and accountability — none of which AI possesses.
– Strategic business decisions require causal reasoning, counterfactual thinking, and understanding hidden constraints — strengths AI lacks because it is fundamentally correlational, not causal.
DeepSeek restated my proposal: stop treating AI as an oracle or a mere tool and instead treat it as a participant in a human-centered cognitive network, one with specific strengths, predictable weaknesses and no intrinsic authority. Interacting well with AI calls for social skills as much as technical ones.
Its list of what those skills might include was extensive; for now I’ll present DeepSeek’s “final answer” in brief:
Is it possible to develop collaborative social skills for living with AI? Yes — but only if we act deliberately:
1. Stop anthropomorphizing AI — it is not a colleague, an oracle, or a mind; it’s a plausibility-optimized generator.
2. Teach failure modes explicitly — curricula should include “here’s how AI gets this subject wrong.”
3. Design accountability into workflows — no high-stakes AI use without a named human responsible for outcomes.
4. Practice specific skills until they are automatic, like looking both ways before crossing the street.
DeepSeek emphasized that these skills are teachable but currently not being taught widely. The cost of delay is already visible in poor medical advice, flawed strategies, and misplaced trust. The longer we wait, the higher the cumulative harm.
There is practical hope: many of the skills resemble existing habits in critical professions — skeptical sourcing, layered verification, named responsibility, and structured dissent — and can be adapted for widespread practice. But they require systemic change across education, media, and professional culture.
We will explore the parameters of the cost and the specific skills in the next part of this conversation. For now, the urgent takeaway is simple: treat AI as a powerful, fallible participant in our cultural space and build human-centered practices — not myths — to live productively with it.
Your thoughts: write to [email protected]. We intend to gather and incorporate readers’ perspectives into this ongoing dialogue.
[Artificial Intelligence is rapidly becoming part of daily life. At Fair Observer we see it as a creative tool that reveals the complex relationship between humans and machines.]
Lee Thompson-Kolar edited this piece.
The views expressed are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

