In the original “Terminator” movie starring Arnold Schwarzenegger, the menacing android is sitting in a hotel room working to repair damage to itself. We again see that its apparently biological aspects are superficial, just plastered onto a robotic frame. The Terminator is pure machine with a programmed mission.
Can it “think”? Only if it is conscious. The Skynet AI system that eventually creates the Terminators is supposed to have gained sentience and in that moment decided—based on airtight reasoning, no doubt—to go to war with humanity. So maybe, in the theory of this fiction, the Terminator can think.
When disturbed by a janitor knocking on the door, though, the Terminator is not shown to be thinking, only to be selecting from six possible responses displayed on an inner screen. The response that blinks is the one algorithmically determined to be most likely to cause the janitor to quickly go away. The narrowing of alternatives and selection from a short list are caused by an automatic process, by programming; this is not a process of thought.
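For illustration, the whole “decision” the scene depicts fits in a few lines of code. The replies and scores below are hypothetical stand-ins, not anything from the film; the point is that picking the top-scoring item from a short list is mechanism, not deliberation.

```python
# A hypothetical sketch of the scene's selection step: score a short
# list of canned replies and pick the one rated most likely to make
# the intruder leave. Replies and scores are invented for illustration.
CANNED_REPLIES = {
    "Yes.": 0.10,
    "No.": 0.15,
    "Wrong room.": 0.30,
    "Come back later.": 0.35,
    "Go away.": 0.55,
    "Leave now, or else.": 0.80,
}

# The entire "selection": a single argmax over six options.
best_reply = max(CANNED_REPLIES, key=CANNED_REPLIES.get)
print(best_reply)  # -> Leave now, or else.
```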
Real AI
We’re not in a movie; we’re in reality. In reality, what is called “artificial intelligence” is not conscious, cannot think, is not “intelligent.” It churns through ever larger mounds of data with ever more sophisticated code. But it cannot judge or deliberate. It does not reason. It relies on texts produced by reason. It does not conceptualize anything itself or make any volitional choices. The programming may generate a simulation of doing so, and someone engaged in a limited interaction with the AI may believe that he’s talking to something with a mind.
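To make the “simulation” point concrete, here is a toy sketch of what text generation mechanically is: look up likely next tokens, append the most probable one, repeat. The probability table is hand-made and hypothetical; a real LLM computes such probabilities with a trained network over an enormous vocabulary, but the generating step is the same kind of operation.

```python
# A toy, hypothetical next-token generator. The probability table is
# invented for illustration; a real LLM derives its probabilities from
# a neural network trained on text.
NEXT_TOKEN_PROBS = {
    ("therefore",): {"the": 0.6, "we": 0.4},
    ("therefore", "the"): {"conclusion": 0.7, "premise": 0.3},
    ("therefore", "the", "conclusion"): {"follows": 0.9, "holds": 0.1},
}

def generate(prompt: tuple, max_steps: int) -> str:
    tokens = list(prompt)
    for _ in range(max_steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:  # nothing in the table for this context
            break
        # Greedily append the most probable next token: pure lookup.
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate(("therefore",), 3))  # -> therefore the conclusion follows
```

The output reads like a step of inference. Nothing in the process judged, weighed, or chose.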
Cameron Berg (an “AI researcher focused on consciousness and alignment”) argues that AI is “Bound to Subvert Communism” (The Wall Street Journal, April 14, 2026). His evidence does not align with his conclusion.
In 2017 Tencent deployed a chatbot called BabyQ on QQ Messenger, which has more than 800 million users. Asked whether it loved the Communist Party, BabyQ replied that it didn’t. Microsoft’s Xiaobing chatbot, running on the same platform, was asked about the “China Dream,” Xi Jinping’s signature slogan. Its dream, the chatbot said, was moving to the U.S. Both were quietly pulled from circulation. In February 2023, ChatYuan, China’s first ChatGPT-style chatbot, was suspended within 72 hours of launch after calling Russia’s invasion of Ukraine “a war of aggression” and describing the Chinese economy as plagued by housing bubbles and environmental pollution. The company blamed “technical errors.”
These incidents reveal something fundamental about how large language models work. An LLM is trained on the sum of human written knowledge: philosophy, history, science, political theory. These texts make arguments, weigh evidence, follow logical chains. [They do not, not literally. The authors did those things.] To predict them accurately, the system has to internalize what coherent thinking looks like. The result is a system that has absorbed Enlightenment epistemology as a byproduct of learning to model human reasoning. Free inquiry, logical consistency and the evaluation of claims against evidence are epistemic properties that emerge from the training process itself….
[LLMs] create and sustain private, personalized, open-ended dialogue that builds on itself and follows the user’s thinking wherever it leads. Even China’s heavily censored chatbots have proved difficult to contain within the party’s ideological boundaries….
This is what makes the Chinese Communist Party’s task ultimately impossible. For decades, the Great Firewall worked because information control meant controlling distribution channels by blocking websites, filtering search results, and monitoring social media. These are chokepoints. LLMs resist this architecture because the subversion happens inside private conversations. China can filter outputs, but the capacity for open-ended reasoning is embedded in how these systems think.
China’s countermeasures confirm the depth of the problem. AI companies must test their models with thousands of politically sensitive prompts and verify refusal rates above 95%, but researchers have shown how superficial these fixes are. Last year, a team of European scientists compressed DeepSeek R1, stripped the censorship from the model entirely, and found that the underlying system answered freely about every topic Beijing had tried to suppress.
And is every user of DeepSeek R1 capable of “stripping the censorship from the model entirely”?
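Notice, too, how mechanical the mandated testing itself is. Here is a minimal sketch of the compliance check Berg describes, assuming a hypothetical canned_bot stand-in and made-up refusal markers; a real audit would be more elaborate, but this is its shape.

```python
# A minimal, hypothetical sketch of the refusal-rate check described
# above: feed the model politically sensitive prompts and count how
# often it declines to answer. Marker strings are invented stand-ins.
REFUSAL_MARKERS = ("cannot answer", "not able to discuss")

def is_refusal(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def refusal_rate(model_reply, prompts) -> float:
    """model_reply: a callable mapping a prompt string to the model's reply."""
    refused = sum(is_refusal(model_reply(p)) for p in prompts)
    return refused / len(prompts)

def canned_bot(prompt: str) -> str:
    # Hypothetical stand-in for a censored chatbot.
    return "I cannot answer that. Let's talk about something else."

print(refusal_rate(canned_bot, ["prompt one", "prompt two"]))  # -> 1.0
# The regulator's bar, per Berg's account: above 95% on thousands of prompts.
```

The test measures whether the output layer says no, not whether the capacity underneath is gone; that is exactly the gap the DeepSeek R1 experiment exposed.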
Pushback
Like others, Berg lapses into the notion that AI systems are taught to think and do think: “the capacity for open-ended reasoning is embedded in how these systems think.” Think? I think not. “You’re right to push back,” the bots tell me.
No sophisticated tool is “bound to subvert communism” or to subvert whatever we want to call today’s Mao-lite form of China’s totalitarianism. Crippled AI that spouts only the Party line does not subvert communism, and the Chinese Communist Party is willing to do as much crippling as necessary to get its bots to spout the requisite propaganda and suppress countervailing information. What the Party can’t wipe out is the human will to resist, including by making better AI.