ARTIFICIAL INTELLIGENCE IN WARFARE: EXPLORING CRIMINAL RESPONSIBILITY UNDER INTERNATIONAL HUMANITARIAN LAW
Abstract
The increasing deployment of Artificial Intelligence (AI) in warfare has significant implications for International Humanitarian Law (IHL). As AI systems become more autonomous, attributing criminal responsibility for war crimes committed with them grows increasingly complex. This study examines the challenges that AI in warfare poses to the principles of criminal responsibility under IHL. It analyzes existing legal frameworks, identifies their limitations, and explores the scenarios in which AI systems may be used in warfare, including autonomous weapons, cyber operations, and intelligence gathering. A primary obstacle to attributing criminal responsibility for AI-related war crimes is the absence of clear guidelines on human oversight and control; the study therefore argues that the doctrines of command responsibility and superior responsibility may need to be adapted to account for the use of AI systems. It also examines the potential liability of states and non-state actors for AI-related war crimes, considering the implications of AI use for the principles of distinction and proportionality, as well as the potential consequences for civilians and civilian objects. The study concludes by proposing a framework for attributing criminal responsibility for AI-related war crimes under IHL, emphasizing the need for further research and clear guidelines to ensure accountability for the use of AI in warfare.