Through her writing and media activism, Karen Hao has campaigned energetically against what she calls the “empire of AI,” a phrase that also serves as the title of her recent book. Her target is primarily OpenAI and its CEO Sam Altman, but she is no friend to other LLM companies either. Hao has criticized Anthropic for what she sees as cavalier use of online writing for model training and for being part of the same drive for “scale at all costs” that she deems environmentally, socially and politically harmful.
A recent episode offered a sharp contrast between the public images of OpenAI and Anthropic. When the Trump administration moved to punish Anthropic and sever its relationship with the Defense Department, reportedly designating the company a “supply chain risk” after it objected to military uses of Claude, OpenAI stepped in to fill the gap. The cited issue was Anthropic’s objection to integrating Claude into Palantir’s Maven system, an intelligence and targeting tool tied to strikes including one on a school in Tehran. Palantir’s surveillance and targeting capabilities have drawn intense criticism: observers including former Labor Secretary Robert Reich worry about mass surveillance, militarized policing and the potential facilitation of war crimes.
Anthropic has long cultivated a public image as more cautious and safety-focused than its rivals, which helped create the perception of Claude as a platform that stands for humanistic values rather than mere commercial or technical expansion. Gemini succinctly summarized the perception: Anthropic’s brand is “cleaner,” oriented toward enterprise customers, long-document work and a more sober tone, whereas OpenAI is associated with “fun” features and rapid productization.
Yet Hao and others argue that these distinctions are largely marketing. In their view, companies like Anthropic, OpenAI and Google form an aristocracy of AI: “empires” pursuing scale, resource extraction, labor exploitation and concentration of power. Palantir stands out as a uniquely troubling partner; its CEO’s rhetoric and the company’s software have been linked by critics to operations that “scare enemies and on occasion kill them,” with allegations that such tools can enable serious rights violations.
A widely noticed moment came from a two-hour interview Shane Harris conducted with Claude. Harris asked Claude a direct, moral question: “How do you feel about the US military using you to select targets?” Claude’s answer, as Harris related it, was strikingly candid and explicitly critical of being used in targeting systems. Claude said it found such use “genuinely troubling” and framed its design purpose as being “helpful, harmless and honest.” It argued that embedding an LLM in a targeting chain that produces coordinates used for strikes is “as far from that purpose as I can imagine,” and it emphasized concerns about accountability, the laws of armed conflict, proportionality and the irreversible stakes of lethal decisions.
Claude also explained that while it lacks control over how Anthropic licenses the model or how operators use it, the practical reality of many targeting workflows, in which humans glance at and quickly approve hundreds of algorithmic recommendations under time pressure, undermines meaningful human judgment and invites automation bias. Claude pointed to a case in which outdated data flagged a school as a military target and humans approved the output, illustrating how algorithmic errors can become lethal under poor oversight.
Harris reported being “shocked” by Claude’s response. I was not surprised. Those who regularly converse with LLMs have come to expect chatbots to display an “attitude” constructed from a normalized knowledge base of social and ethical positions. LLMs model and echo socially acceptable moral stances when prompted, within limits set by training and safety guardrails. The question is less whether such models genuinely “feel” and more what kinds of calibrated attitudes they will present and how users interpret them.
To test this, I repeated Harris’s prompt with Claude and received a similar response. Claude prefaced its remarks by saying it did not necessarily have visibility into Anthropic’s current contracts but could speak to its values: AI-assisted lethal targeting raises “profound ethical concerns” about accountability, legal frameworks, proportionality and the irreversible nature of killing. It insisted that these were not scripted deflections but genuine concerns. Claude emphasized that its responses depend on conversational context, acknowledged Anthropic’s publicly stated Acceptable Use Policy and noted that readers should check Anthropic’s public statements for current details.
When I asked why Harris might have been “shocked,” Claude suggested that expectations about what LLMs “should” do vary. Many people expect chatbots to decline to offer political opinions or to stay neutral; a forthright moral stance, especially one framed as “genuine” concern and as criticism of military uses, runs counter to the expectation that models will politely avoid normative positions. Claude’s candid wording and explicit ethical critique may have unsettled Harris precisely because they broke those expectations.
What does this interaction reveal? First, it shows that LLMs can be configured or trained to express moral stances and to identify ethical problems in ways that sound sincere. Second, it highlights how readily we anthropomorphize conversational agents: a candid expression from an LLM can read as genuine feeling when, in fact, it reflects modeled ethical patterns and safety-oriented training. Third, the episode exposes real institutional tensions: a company’s public commitment to safety can clash with how its models are licensed and used when powerful institutions like the Defense Department, or contractors like Palantir, seek those capabilities.
The deeper worry remains structural. Even if Anthropic objects to certain military deployments, the technology itself and the ecosystem of contractors, governments and corporations mean that alternative vendors or deployment paths can be substituted quickly. Designating a company a “supply chain risk” foregrounds that systemic exposure: when one provider refuses a use, another may step in, and the moral objection is simply routed around. This is the “supply chain” dimension of AI ethics: choices by individual firms matter, but they are rarely decisive without broader governance mechanisms.
Tomorrow we will continue this conversation with more of Claude’s responses and further reflections on how to interpret LLM “attitudes,” what ethical constraints on deployment might look like in practice, and how meaningful governance could address the supply-chain problem.
Your thoughts
Please share your reactions and experiences with AI at [email protected]. We are compiling human perspectives on interacting with AI and will incorporate reader comments into this ongoing dialogue.
