On June 25, 2025, the House Select Committee on the Chinese Communist Party held a hearing on “Algorithms and Authoritarians: Why U.S. AI Must Lead.”
“We’re not in one AI race with China—we’re in two,” said a witness at the hearing, Mark Beall of the AI Policy Network, in his prepared statement.
The first race is for commercial dominance, "for an economic, military, and geopolitical edge through artificial intelligence." This kind of competition among powers is familiar.
Artificial superintelligence
The second, less familiar, says Beall, is “a race toward artificial superintelligence, also known as ASI…. If any nation develops ASI in today’s environment—and particularly a hostile nation—it’s not hyperbole to say that we could be facing an existential crisis. These ASI systems—in the wrong hands or without guardrails—have the potential to destroy global electrical grids, develop incurable super-viruses, empty every bank account in the world, and worse.”
In his statement, Beall does not say what artificial superintelligence is. Is it a form of intelligence?
What we have now, “artificial intelligence,” is not a form of intelligence. Neither abacuses nor computers nor software programs are forms of intelligence. Programmed capacity to churn through starship-loads of data, match patterns, and execute steps is not intelligence.
ChatGPT and the rest are not intelligent, are not brains and minds. They do not reflect, ponder, choose, conceptualize, or deliberate. To get to superintelligent computers, one must first get to intelligent computers. To get to intelligent computers, one must first get to conscious computers. And consciousness is an attribute of life. The chronic use of metaphors in descriptions of how AI algorithms work does not help. (Example from OpenAI: “Our large-scale reinforcement learning algorithm teaches the model how to think productively using its chain of thought in a highly data-efficient training process.” There is no learning, no teaching, no thinking, no chain of thought. That writers about complicated computing seem to find the metaphors inescapable does not make them literally accurate characterizations of anything.)
Articles on artificial superintelligence tell us what ASI would be if it existed—it would be super much better in every way than human intelligence, for one thing—but not how it could exist. Fantasies about future threats that don’t require any glimmer of a foundation in fact can be multiplied infinitely. Existing threats and credible possibilities are more than enough to contend with.
The advice of the witnesses
Thomas G. Mahnken: “The United States should seek to bolster the strengths inherent in our democratic system and our approach to innovation. We will never out-authoritarian the authoritarians.
“We could stumble and falter under two circumstances. First, we could fail if we inhibit ourselves from pursuing AI—that is, if we take counsel of our fears and slow our momentum such that the Chinese overtake us. Second, we could fail if we are careless and continue to allow the PRC to poach our innovations and steal our data.”
Mark Beall: “First, we need to better protect our assets. The fact that Chinese military researchers freely buy, steal, download, and then weaponize American technology represents a dereliction of duty that would have been unthinkable during the Cold War.
“Second, we need to promote those assets….
“Third, we need to prepare. Like the superpowers stepping back from nuclear annihilation during the Cuban missile crisis, we must recognize that the superintelligence race cannot be won—only survived.”
Jack Clark: “First, the U.S. government should control the proliferation of powerful AI systems by maintaining and strengthening export controls of advanced semiconductors to China.
“Second, the U.S. government should invest in safety and security to give Americans confidence in the technology. Specifically, we should invest in federal capacity to test AI models for both national security risks and further afield risks like the blackmail example, through the Center for AI Standards and Innovation within the National Institute of Standards and Technology.
“Finally, the U.S. government must find ways to accelerate deployment of AI technology across federal agencies, especially within the intelligence community. This will help our government move faster in handling a rapidly moving threat landscape, and will also help us gain a better understanding of AI’s increasingly significant impacts on national security in the coming years.”
A warning from the chairman
After the hearing, Congressman John Moolenaar, chairman of the Select Committee on the CCP, sent a letter dated June 27, 2025, to Commerce Secretary Howard Lutnick:
It is not only China’s ability to build frontier AI that poses a threat—it is also who the Chinese are willing to share these immense capabilities with. Historically, the development of weapons of mass destruction—such as al-Qaeda’s anthrax lab or the Khan proliferation network—required rare combinations of technical expertise and radical intent. But access to sufficiently capable AI systems may soon eliminate the need for highly trained experts. AI could enable non-state actors to generate step-by-step instructions for constructing bioweapons targeted at Americans. These risks are no longer hypothetical; they are present now and growing. PRC models could be exported, licensed, or leaked to non-state actors or authoritarian regimes, increasing the risk of AI-enabled attacks. Preventing Chinese-origin AI from becoming a global proliferation threat must be central to any U.S. AI strategy.
Moolenaar offers eight recommendations. Among them: secure supply chains, prevent advanced chips subject to export controls from being tampered with or diverted to the People’s Republic of China, base export agreements on aggregate computing power and not numbers of chips, enforce security standards at overseas AI data centers, and require “strategic alignment” from nations that are partners of the United States.
Moolenaar and other lawmakers recently introduced legislation to “prevent Federal agencies from buying or using artificial intelligence models created by companies with ties to the Chinese Communist Party and other foreign adversaries.”