For months now, with the rise of ChatGPT and other chatbots, artificial intelligence and its prospects have attracted worldwide attention. Far less attention, however, has been paid to the application of artificial intelligence in armed conflict and its impact on civilians. On the front lines of the Russia-Ukraine war, the extensive use of aerial drones, uncrewed surface vessels, loitering munitions, smart mines, and similar weapons may herald a future in which wars are fought with autonomous weapon systems.
On August 4, a Ukrainian uncrewed surface vessel attacked a Russian amphibious landing ship moored at the naval port of Novorossiysk, seriously damaging it. A recent series of attacks on government buildings in Moscow has also been linked to Ukrainian drones. The Russian army, for its part, has deployed large numbers of one-way attack ("suicide") drones and loitering munitions on the land battlefield, to considerable effect.
According to an August 10 report in The Jerusalem Post, the British data analytics firm GlobalData recently released a report stating that "although public interest in AI has surged, largely because of the release of ChatGPT more than half a year ago, militaries around the world have long been attentive to the future battlefield applications of AI, particularly autonomous weapon systems capable of deep learning."
AI and autonomous weapon systems
An autonomous weapon system is a weapon system with autonomy in its "critical functions": it can select (search for or detect, identify, and track) and attack (intercept, use force against, neutralize, damage, or destroy) targets without human intervention.
Autonomous weapon systems select targets and apply force against them without human intervention. After being initially activated or launched by a human, an autonomous weapon system initiates or triggers an attack on its own, based on environmental information received from its sensors and matched against a generalized "target profile". This means the user does not choose, and may not even know, the specific target or the precise time and place of the resulting use of force.
Although fully autonomous combat robots do not yet exist at the current level of military technology, the trend toward unmanned conflict is already unmistakable. The loitering munition, which has featured prominently in recent wars and conflicts, is a new class of munition with the potential to evolve into an autonomous weapon system. Such a weapon can cruise over a target area, "standing by" while it searches, and then strike once a target is located. In the 2020 Nagorno-Karabakh conflict between Azerbaijan and Armenia, loitering munitions were used in large numbers, profoundly shaping the course of the fighting and drawing close attention from countries worldwide. In the ongoing Russia-Ukraine conflict, both sides likewise rely heavily on loitering munitions: the United States has supplied Ukraine with the Switchblade and Phoenix Ghost, while Russia has made extensive use of the Lancet and KUB-BLA. Abundant video and photographic evidence shows that these weapons have inflicted serious losses on personnel and equipment on both sides.
In existing national arsenals, many remote-controlled weapon systems have autonomous "modes", but they can operate autonomously only for brief periods. These weapons are also highly constrained in the tasks they perform, the types of targets they can attack, and the environments in which they can be used, and most existing systems are monitored in real time by human operators.
As military technology advances, however, future autonomous weapon systems may gain greater freedom of action in selecting targets, operate under looser constraints of space and time, and cope with rapidly changing situations.
According to a report by Defense One, the US military is increasingly interested in using artificial intelligence, especially machine learning, to control autonomous weapons. The report notes that machine learning software gains its "experience" from data: it builds its own model for a given task and devises strategies to accomplish it, in effect programming itself to some degree. The resulting model, however, is usually a "black box" that humans find extremely difficult to predict, understand, or explain, and it is hard to test how, and on what basis, a machine learning system arrives at a particular assessment or decision. Some machine learning systems continue to "learn" during use, through so-called "active", "online", or "self-learning", which means their task models change over time.
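To make the "online learning" point concrete, here is a minimal, hypothetical sketch in Python using scikit-learn's SGDClassifier, whose partial_fit method updates a model incrementally. The data and the drift are entirely synthetic and stand in for no real system; the sketch only illustrates how a model's output for the very same input can change as it keeps learning from new data, which is why such systems are hard to test once and then certify.

```python
# Minimal sketch of "online" learning: the model is updated incrementally
# with partial_fit, so its task model (here, a decision boundary) keeps
# changing over time. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")  # logistic regression fit by SGD

classes = np.array([0, 1])
probe = np.array([[-1.0, -1.0]])  # one fixed input, re-evaluated after each update

for step in range(4):
    # Each new batch drifts: class 1 moves steadily toward class 0,
    # standing in for a stream of new, shifting sensor data.
    x0 = rng.normal(loc=-3.0, scale=0.5, size=(100, 2))
    x1 = rng.normal(loc=3.0 - 1.5 * step, scale=0.5, size=(100, 2))
    X, y = np.vstack([x0, x1]), np.array([0] * 100 + [1] * 100)

    model.partial_fit(X, y, classes=classes)  # incremental "online" update
    print(f"step {step}: prediction for the same probe = {model.predict(probe)[0]}")
```

Run over the four steps, the classifier's answer for the identical probe point can flip as the incoming data shifts, a toy version of the "task model changes with time" problem described above.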
ICRC: the international community urgently needs to strengthen discussion
As early as 2015, the International Committee of the Red Cross (ICRC) pointed out that the pace of technological development and military interest in weapon-system autonomy made it urgent for the international community to consider the legal and ethical implications of such weapons.
In an article published on its website on July 24, the ICRC argued that the way autonomous weapon systems operate risks harming those affected by armed conflict, civilians and combatants alike, and carries a danger of escalation. Such systems also challenge compliance with the rules of international law, including international humanitarian law, in particular the rules on the conduct of hostilities that protect civilians. Moreover, substituting sensors, software, and machine processes for human decisions over life and death raises fundamental ethical concerns.
The dialogue around military uses of artificial intelligence therefore needs to incorporate the principles of international humanitarian law laid down in the Geneva Conventions and their Additional Protocols. The international community should take a human-centered approach to the use of artificial intelligence in conflict-affected areas, and new safeguards should be established to reinforce existing protections and reduce other risks.
At present, international discussion of lethal autonomous weapon systems centers on whether they can reliably distinguish combatants from non-combatants, and whether they blur responsibility for acts of killing. Some non-governmental organizations and advocacy groups, such as the Campaign to Stop Killer Robots, have urged the international community to ban lethal autonomous weapon systems outright; others argue instead for strengthening oversight under international and domestic law.
According to an earlier report in The Guardian, at a related meeting in 2019 many representatives of developing countries agreed that lethal autonomous weapon systems should be banned outright or placed under strict regulation, but Britain, the United States, Russia, Australia, and Israel opposed this. Britain's stated reason was that its armed forces do not currently field fully autonomous weapon systems and that the relevant discussion remains insufficient, so it did not support a premature "preventive ban".