Employment of AI in Decisions on the Use of Force
Research output: Contribution to book/Conference proceedings/Anthology/Report › Chapter in book/Anthology/Report › Contributed
Abstract
The use of force between states is expressly prohibited by the UN Charter. However, it may be permitted in exceptional cases, notably in self-defence. AI, a multi-purpose technology that affects military operations at every level, raises manifold new legal challenges. AI suffers from technical shortcomings and lacks transparency in its decision-making, which often leads to unpredictable outcomes. As a result, today’s AI must not be used without human control. Even with human control, however, significant dangers remain, such as humans placing too much trust in the AI, or the AI making decisions too quickly for humans to monitor them effectively. States are therefore obliged to develop and deploy AI systems only with the utmost care. Given the difficulties of human-machine interaction and AI’s susceptibility to error, a state’s responsibility for violations of international law must be highlighted. A cautious and sceptical approach towards AI remains necessary.
Details
| Original language | English |
| --- | --- |
| Title of host publication | Research Handbook on Warfare and Artificial Intelligence |
| Editors | Robin Geiß, Henning Lahmann |
| Publisher | Edward Elgar |
| Pages | 136-160 |
| Number of pages | 25 |
| ISBN (electronic) | 9781800377400 |
| Publication status | Published - 1 Jan 2024 |
| Peer-reviewed | No |
External IDs
| Scopus | 85209849368 |
| --- | --- |
Keywords
- Artificial intelligence
- Attribution
- Due diligence
- Jus contra bellum
- Meaningful human control
- Use of force