Employment of AI in Decisions on the Use of Force

Research output: Contribution to book/Conference proceedings/Anthology/Report › Chapter in book/Anthology/Report › Contributed

Abstract

The use of force between states is expressly prohibited by the UN Charter, though it may be permitted in exceptional cases, notably in self-defence. AI, a multi-purpose technology that affects military operations at every level, raises manifold new legal challenges. It suffers from technical shortcomings and lacks transparency in its decision-making, which often leads to unpredictable outputs. As a result, today’s AI must not be used without human control. Even with human control, however, significant dangers remain, such as humans placing too much trust in the AI, or the AI making decisions too quickly for humans to monitor them effectively. States are therefore obliged to develop and deploy their AI systems only with the utmost care. To counteract the difficulties of human–machine interaction, AI’s susceptibility to error and a state’s responsibility for violations of international law must be highlighted. A cautious and sceptical approach towards AI remains necessary.

Details

Original language: English
Title of host publication: Research Handbook on Warfare and Artificial Intelligence
Editors: Robin Geiß, Henning Lahmann
Publisher: Edward Elgar
Pages: 136-160
Number of pages: 25
ISBN (electronic): 9781800377400
Publication status: Published - 1 Jan 2024
Peer-reviewed: No

External IDs

Scopus: 85209849368

Keywords

  • Artificial intelligence
  • Attribution
  • Due diligence
  • Jus contra bellum
  • Meaningful human control
  • Use of force