There is a well-known board game called “go.” In 2010 or 2018, depending on your source, a company called GoLaxy was founded and automated go play, something like a chess program.
By 2018, this software was beating human champions and was soon judged to be “far ahead of the best human players.”
If you would like to learn the game or test your skill, you can download GoLaxy’s go from the Google Play store. The store’s “Data safety” page assures us that:
● “The developer says that this app doesn’t collect or share any user data.”
● “The developer says this app doesn’t share user data with other companies or organizations.”
● “The developer says this app doesn’t collect user data.”
Please note that the developer is partly owned by the communist Chinese government and has military and other state contracts.
Go to GoPro?
Somehow, by some logic, this game bot led to the creation of an AI open-source intelligence aggregator. GoLaxy’s success against go masters encouraged the company to create a project called GoPro, which makes AI tools “that monitor, influence and manipulate narratives online.”
See the connection? Me neither. Over here, there’s a harmless little program that plays board games and over there, there’s a malignant program that tailors propaganda against Beijing’s targeted victims.
A cynic would call GoPro’s derivation from GoLaxy a cover story. It’s about as likely as the developers of Angry Birds taking to building jet fighters.
GoPro seems to be a stand-alone effort. A nefarious one.
Last month, GoLaxy and GoPro created a stir when two Vanderbilt University professors, Brett Goldstein and Brett Benson, documented how GoLaxy is “emerging as a leader in technologically advanced, state-aligned influence campaigns, which deploy humanlike bot networks and psychological profiling to target individuals.” According to the researchers’ commentary in The New York Times:
GoLaxy has already deployed its technology in Hong Kong and Taiwan, and the documents suggest it may be preparing to expand into the United States. A.I.-driven propaganda is no longer a hypothetical future threat. It is operational, sophisticated and already reshaping how public opinion can be manipulated on a large scale.
A representative of GoLaxy said the company focused on services for business intelligence and denied it had developed a bot network or psychological profiling tools targeting individuals. The company also denied being under the authority of any government agency or organization.
What sets GoLaxy apart is its integration of generative A.I. with enormous troves of personal data. Its systems continually mine social media platforms to build dynamic psychological profiles. Its content is customized to a person’s values, beliefs, emotional tendencies and vulnerabilities. According to the documents, A.I. personas can then engage users in what appears to be a conversation—content that feels authentic, adapts in real-time and avoids detection. The result is a highly efficient propaganda engine that’s designed to be nearly indistinguishable from legitimate online interaction, delivered instantaneously at a scale never before achieved.
The New York Times separately reported that according to the documents leaked to Vanderbilt, “GoLaxy tracks and collects information on more than 2,000 American political and public figures, 4,000 right-wing influencers and supporters of President Trump, in addition to journalists, scholars and entrepreneurs.”
AI personas
The most interesting part of the story is that GoLaxy feeds the profiles it develops by mining social media “through their AI personas to engage in conversations that feel sufficiently authentic and human enough to largely evade the protections companies put in place to limit or identify AI activity” (emphasis added).
Tell us more about these “AI personas.” Are they chatbots?
So many people today are getting dangerously involved with chatbots for love, chatbots for therapy. As of December 2024, users of Character.ai were spending “an average of 93 minutes a day chatting with bots, which is 18 minutes longer than the average user spent on TikTok—the gold standard for social media addiction.”
Picture our senior public servants using chatbots as advisors and experts, or as love interests or therapists. The blood runs cold. Even worse: picture AI programs standing in for our senior public servants themselves. It’s happening now.
But all is not lost.
James Mulvenon, who has studied Chinese information operations, says that the question is “whether the Chinese can actually do the things they say they can. Information operations are harder than they sound. There are not a lot of good examples of success.”
But there are enough examples to scare the right people.
Synthetic messaging
Former NSA director General Paul Nakasone, for instance, is eager to compete with Beijing in this arena: “We need the private sector to help out with synthetic messaging. We need to do it faster, more efficiently and at scale.”
So this game has a name: “synthetic messaging.” And this game has prizes for bureaucrats: tax money, budgets, head count, etc.
But “synthetic messaging” faces the problem of obtaining measurable results. It’s the old problem of advertising generally: as the quip attributed to department store magnate John Wanamaker goes, “Half my advertising spend is wasted; the trouble is, I don’t know which half.”
The consumer manipulation inherent in advertising…well, we’re used to it and we discount it. Now, though, the manipulation is automated, and Beijing’s computer is playing against you and your government. We may not be go masters, but with a little discernment and critical thinking, we should be able to hold our own. □
James Roth works for a major defense contractor in Virginia.