By Izabela Zaluska
Jovana Davidovic, associate professor in the Department of Philosophy in the College of Liberal Arts and Sciences, received a $1 million grant from the Research Council of Norway to advance ethical risk management practices for AI-enabled weapons.

The three-year project, “Ethical Risk Management for AI-Enabled Weapons: A Systems Approach,” aims to develop ethical risk and governance practices that minimize the harms of using AI-enabled weapons systems and AI decision support. The grant is for 11.5 million Norwegian kroner, about $1.03 million.
“The field of AI-enabled weapons is moving at a breakneck speed,” Davidovic said. “The use of AI models has exploded over the last five to 10 years, and with it, so have the risks of misuse, abuse, failure, bias, and ultimately, unjustified harm to civilians.”
Davidovic’s career has focused on the ethics of war, specifically on questions about discrimination and proportionality in war. The questions she has asked throughout her career have been complicated by the advent of AI-enabled tools for wartime decision support and AI-enabled weapons.
Davidovic noted how CLAS has supported her research, including during a year she spent as a research fellow at the United States Naval Academy working on these issues and through her involvement with the recently formed Interdisciplinary Consortium for the Study of War and Genocide.
The current project, funded by the Research Council of Norway, builds on Davidovic’s last five years of research on the ethics of AI-enabled weapons. It is housed at the Peace Research Institute Oslo (PRIO), where Davidovic is a senior researcher.
Among the project’s goals is developing a practice-sensitive ethical framework and toolkit for mitigating ethical risks in the use of AI-enabled weapons.
“We hope this project can play a role in getting a clearer picture of all the various ways in which we can, throughout the lifecycle of algorithms (development to use), make good decisions that will minimize the risk of harm to civilians,” Davidovic said.
The project requires extensive interaction with the defense industry, weapons developers, and policymakers, Davidovic said. The grant will support fieldwork and interviews with developers to better understand exactly how decisions get made across the entire lifecycle of an AI weapon or AI decision support tool, from development and testing to procurement, fielding, and deployment.
Davidovic and her team will also communicate their findings with policymakers, governments, and weapons manufacturers. The main partners on the project are the Norwegian Ministry of Defense, the Norwegian Ministry of Foreign Affairs, the Norwegian Red Cross, and Georgetown University.
“Our partners will contribute to developing the framework, but centrally they will contribute by helping facilitate conversations in policy circles and government circles, helping to socialize our findings, including at United Nations meetings and side-events, Munich Security Conferences, and other venues,” Davidovic said.