The People’s Republic of China recently released a “position paper” outlining the country’s views on the control of military artificial intelligence. We’ve gone over it thoroughly and have come to the following conclusion: it’s gibberish.
First and foremost, when a global superpower produces official government paperwork explaining its views on the use of artificial intelligence for military applications, you want to know whether it plans to create lethal autonomous weapons systems (LAWS).
China’s position paper makes no mention of limiting the use of machines that can select and fire on targets on their own. Instead, it uses obfuscating language to dance around the subject.
Per the paper:
In terms of law and ethics, countries need to uphold the common values of humanity, put people’s well-being front and center, follow the principle of AI for good, and observe national or regional ethical norms in the development, deployment and use of relevant weapon systems.
There are presently no laws, norms, or restrictions governing the creation or use of military LAWS in either the United States or the People’s Republic of China.
Background: While the paper’s rhetoric may be vacuous, its contents provide a wealth of information.
According to Megha Pardhi, a research analyst writing for Asia Times, the paper was intended to signal that China wants to “be perceived as a responsible state,” and that it is worried about how its progress in the field compares with that of other superpowers.
According to Pardhi:
Beijing is likely talking about regulation out of fear either that it cannot catch up with others or that it is not confident of its capabilities. Meanwhile, formulating a few commonly agreeable rules on weaponization of AI would be prudent.
What we think: According to military experts, the PRC’s battlefield AI ambitions aren’t solely focused on LAWS.
Colonel Yuan-Chou Jin, a former director of the Army Command Headquarters’ Intelligence Division and an associate professor at the Graduate Institute of China Military Affairs Studies, compared the PRC’s planned AI application to the Third Reich’s deployment of Blitzkrieg tactics during World War II.
Per an article authored by Yuan-Chou Jin in The Diplomat:
Looking back in history, Wehrmacht highlighted the Blitzkrieg in its frontal attack to beat the rivals based upon its relative advantage of speed during WWII.
For Chinese familiar with martial arts, a relevant well-known phrase captures the same: “There is no impregnable defense, but for the swiftness.” Speed in history has been strongly featured as a critical factor that determines the outcome of war.
That is exactly the case of AI. One of the advantages of AI is to speed up military decision making. More specifically, AI is particularly fit for blitz tactics. In the scenario of the PLA waging a war against Taiwan, distance makes instant U.S. reinforcement difficult.
During WWII, Blitzkrieg tactics worked because they overwhelmed the opponent. They entailed striking opponents’ airbases to render their planes ineffective while still on the ground, then sending high-speed armour crashing through formerly “impassable” terrain to catch unprepared infantry off guard.
In the closing act of every Blitzkrieg attack, the lagging German infantry moved onto the battlefield as the armour moved out, seeking out and destroying any remaining pockets of resistance.
The tactic’s success was driven by two key factors: speed and decisiveness. And, arguably, the only way to speed up Blitzkrieg would have been to delegate target acquisition and elimination to artificial intelligence.
Although neither the colonel’s article nor the PRC’s position paper specifically mentions LAWS, what they don’t say is what actually matters.
Both China and the United States have every reason to suspect, and fear, that the other is actively developing LAWS.