Securing a nuclear second-strike capability is the basis of the deterrence strategy that has so far kept any potential attacker from launching a nuclear strike: “Whoever shoots first dies second”.
To be able to react when their second-strike capability is threatened, the nuclear powers have developed and deployed computer-based early warning and decision systems designed to detect an attack early enough to launch their own missiles before the incoming warheads strike. This is known as a “launch-on-warning” strategy. Although the time available for a decision after an attack warning has shrunk to a few minutes in recent years, the final decision is still left to humans, not least because of the error-proneness of such systems.
The end of the INF Treaty (Intermediate-Range Nuclear Forces) has triggered a new arms race, including hypersonic missiles, which shorten this time span even further. So little time remains for humans to analyse and evaluate these alerts that, increasingly, artificial intelligence (AI) systems are being developed for this purpose. While AI systems can be quite useful for reconnaissance, we consider leaving the final decision to counterattack to a computer ethically unacceptable. Even AI systems cannot deliver reliable results in such applications, because the underlying data is uncertain, vague and incomplete. Automatic recognition results are therefore valid only with a certain probability and can be wrong.
Given the uncertainty of the data, humans also base their decisions on contextual knowledge about the political situation and their assessment of the opponent, a task further complicated by the potential end of the Open Skies Treaty. In one alarm case, for example, the operating crew of the American early warning system concluded that it had to be a false alarm because the Soviet head of state was on a state visit to the United States at the time. A serious false alarm in the Soviet early warning system occurred in 1983, when only the courageous intervention of commander Stanislav Petrov prevented a nuclear catastrophe.
For machine decisions too, contextual knowledge of the world political situation must be included in the evaluation of alarms, and this knowledge is likewise uncertain, vague and incomplete. The result of an analysis by an AI system is therefore correct only with a certain statistical probability.
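The following short Python sketch, with purely hypothetical numbers not taken from the original text, illustrates why this matters: because real attacks are extremely rare, even a detector with very high accuracy produces alarms that are almost certainly false (the well-known base-rate effect).

```python
# Illustrative sketch: Bayes' rule applied to alarm evaluation.
# All numbers below are hypothetical assumptions for illustration.

def posterior_attack_probability(prior, hit_rate, false_alarm_rate):
    """P(attack | alarm) via Bayes' rule."""
    p_alarm = hit_rate * prior + false_alarm_rate * (1.0 - prior)
    return (hit_rate * prior) / p_alarm

prior = 1e-6             # assumed P(attack) on any given evaluation
hit_rate = 0.999         # assumed P(alarm | attack): detector is very sensitive
false_alarm_rate = 1e-4  # assumed P(alarm | no attack): one false alarm in 10,000

p = posterior_attack_probability(prior, hit_rate, false_alarm_rate)
print(f"P(attack | alarm) = {p:.3%}")  # about 0.99%: the alarm is almost
                                       # certainly false despite the
                                       # detector's high nominal accuracy
```

Under these assumed figures, fewer than one alarm in a hundred would correspond to a real attack, which is precisely why an alarm alone, whether evaluated by a human or a machine, cannot justify a counterattack.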
New technologies usually give rise to new risks and fears. Parts of the population often perceive dangers that are irrational or do not exist in reality: in the 19th century, fast train journeys were considered dangerous; today the same is said of autonomous vehicles or robots. Objectively, safety has increased through the use of new technologies, and many experts believe that autonomous driving will significantly reduce the risk of accidents. In developing innovations into reliable technologies, the repeated cycle of trial and error is essential: the misclassification of a white truck trailer as an open road by an early Tesla driver-assistance system, which led to a serious accident, has since been corrected and has resulted in better and more robust classification methods. In such cases, the risks are manageable. Early warning and decision systems, however, cannot be tried out in this way because of the catastrophic consequences: even the first mistake has the potential to cause irreparable damage to society and its ecology on a scale that threatens the continued existence of our modern society.
The survival of humanity as a whole should never depend on the decision of a single person or a machine, and the danger of an accidental nuclear war cannot be reduced by greater use of artificial intelligence methods. Given the uncertain and incomplete data, neither humans nor machines can reliably evaluate incoming alarm messages in such a short time span.
Responsible for this text: Prof. Dr. (Ph. D.) grad. Ing. Joerg Siekmann