A June paper reviewed by Reuters revealed that six Chinese researchers from three institutions used Meta's Llama 2 13B large language model to develop "ChatBIT." Two of the three institutions fall under the Academy of Military Science (AMS), the People's Liberation Army's (PLA) leading research body.
The Reuters review, supplemented by other academic papers and analyst commentary, indicated that ChatBIT was "optimized for dialogue and question-answering tasks in the military field." According to the paper, the military-focused model "outperformed some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4."
Sunny Cheung, an associate fellow at the Jamestown Foundation who specializes in China’s emerging and dual-use technologies, including AI, said, “It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes.”
Unsupervised use of Meta’s open-source AI models
Llama is one of several Meta AI offerings openly available to developers and users alike. However, the open-source label comes with key restrictions, including a clause that prohibits deploying the models for "military, warfare, nuclear industries or applications, espionage." The restrictions also bar the creation of AI-generated content that can "incite and promote violence."
Despite these restrictions, Meta has no practical means of controlling how its publicly available LLMs are used. Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview that "any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy."
The June paper, whose authors include Geng Guotong and Li Weiwei of the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, also sketched ChatBIT's trajectory: "In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis but also to strategic planning, simulation training, and command decision-making."
The US has grown increasingly wary of China's access to American open-source LLMs for advancing its defense capabilities and research. In October 2023, President Joe Biden signed an executive order addressing the "substantial security risks" that accompany American AI innovation. Another academic paper released in June described China's use of Llama for "intelligence policing."
William Hannas, lead analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), told Reuters: “There is too much collaboration going on between China’s best scientists and the US’s best AI scientists for them to be excluded from developments.”
A definitive assessment of China's recent efforts to leverage American AI infrastructure has yet to emerge. However, the long-standing dilemma of open-source AI misuse persists, adding strain to already fractured US-China relations.