Karl Hans Bläsius was already intensively engaged with the subject of accidental nuclear war in the 1980s (see Activities in the 1980s). He represented the subject area of “knowledge-based systems” (a branch of artificial intelligence, AI) in teaching and research at Trier University of Applied Sciences and gained extensive experience in AI programming in the course of several company start-ups. Computer science and AI play an increasingly important role in early warning systems. Karl Hans Bläsius is co-author of several recent publications on the risk of accidental nuclear war (see https://atomkrieg-aus-versehen.de/en/artikel/).
Leo Ensel is a conflict researcher and intercultural trainer who has worked for years on paths to détente between East and West. In numerous publications he has called for a “Broad Coalition of Reason” for de-escalation in the new East-West conflict. Such confidence-building measures are also important in connection with false alarms in early warning systems, since the evaluation of an alarm message can depend decisively on mutual trust. Leo Ensel knew Stanislav Petrov personally and visited him at his home near Moscow. Petrov’s prudent response to a missile attack report in 1983 may have prevented a nuclear war (see https://de.wikipedia.org/wiki/Stanislaw_Jewgrafowitsch_Petrow). Leo Ensel also met Mikhail Gorbachev twice (most recently in August 2019) and talked with him about the risks of nuclear war.
Markus Patenge is a theologian and one of the authors of the position paper “Outlawing Nuclear Weapons as the Start of Nuclear Disarmament” by the German Commission Justitia et Pax. The paper presents well-founded arguments for why the theory of deterrence cannot protect us from the use of nuclear weapons in the long term. Markus Patenge represents the paper publicly, for example in discussions with military representatives of NATO. He also serves as an officer for the area of peace.
Uwe Werner Schierhorn holds a degree in mechanical engineering and has been active in the peace movement for many years. On the subject of accidental nuclear war, he investigates above all “near-accidents”, i.e. situations involving missile attack reports, and has compiled an extensive list of such incidents. As a reserve lieutenant, he seeks contact with military and political decision-makers in order to talk with them about accidental nuclear war. He is a member of the „Gesellschaft für Sicherheitspolitik“ (Society for Security Policy) and a board member of the „Darmstädter Signal“, a working group of critical soldiers. Furthermore, he is a consultant for peace education in schools (Protestant Church) and a citizen-radio editor at Welle-Rhein-Erft (Radio Erft) and Medienwerkstatt Bonn (Radio Bonn-Rhein-Sieg), with a focus on peace and security programmes.
Jörg Siekmann was already intensively engaged with the subject of accidental nuclear war in the 1980s (see Activities in the 1980s). He is one of the pioneers of AI development in Germany and was honored in 2019 by the Gesellschaft für Informatik as one of the ten most influential minds in German AI research. He is a co-founder of the DFKI (German Research Center for Artificial Intelligence). Thanks to his extensive scientific work in the field of AI, he is well placed to assess the limits and possible problems of AI applications in early warning systems. Jörg Siekmann is co-author of several recent publications on the risk of nuclear war (see https://atomkrieg-aus-versehen.de/en/artikel/).
Bernhard Taureck is a philosopher who engages intensively with situations of conflict and war, including their historical dimensions. In his current book “Drei Wurzeln des Krieges” (“Three Roots of War”) he examines the essential characteristics of war from a philosophical point of view. From a war-critical perspective, he also addresses current threats of terror and of the use of nuclear weapons, as well as the remaining possibilities for conflict avoidance. He fears that war is currently re-emerging as a dangerous human illusion.
Ingo J. Timm is a (business) computer scientist with a focus on artificial intelligence. He researches human-centred AI systems in which production and processes follow the rhythm of the human being. Thanks to his extensive scientific work in the field of AI and his responsible roles on expert committees, he is well placed to assess the limits and possible problems of AI applications in early warning systems. He is co-author of recent publications on the risk of nuclear war. Ingo J. Timm is co-spokesman of the Artificial Intelligence Department of the Gesellschaft für Informatik e.V., a member of the Zukunftsforum Öffentliche Sicherheit e.V. (an initiative of members of the German Bundestag), and a member of the Ethics Commission of the Senate of the University of Trier.