Ingvild Bode

Associate Professor
Department of Political Science and Public Management


What are your research interests?
I am interested in understanding the dynamics of how international norms emerge and change. These are important processes to examine because norms guide the behaviour of actors, such as states, by providing standards of appropriate behaviour. Within this broader field, I concentrate on norms in the area of peace and security, especially use-of-force norms, and how these may change in the context of weapons systems featuring emerging technologies such as Artificial Intelligence (AI). While states are obligated to use force in adherence to international law, it can take years or even decades for states to agree on specific regulations for new weapons systems. In the meantime, such weapons systems are already being developed, tested, and deployed in practice – and these practices can shape norms from the bottom up. My research investigates these processes in the context of weaponised AI.

How did you become interested in your field of research?

My interest was triggered by empirical observations: many international norms and policies shape the international agenda, but we know comparatively little about where they come from or how this process unfolds. Originally, I was interested in armed drones and how the availability of this technology changed how states use force, for example in the targeted killing of terrorist suspects. I became interested in AI-driven technologies because of their revolutionary potential to shape our societies – in the context of war, but also beyond.
What research question would you above all like to find the answer to? And why is that?
The question I aim to answer is: how do practices of developing, testing, and using weapons systems with autonomous features change international norms? Capturing these dynamics is vital because of the significant effect such systems may have on how states conduct war. Analysing this process can also tell us something about the wider societal implications of the AI revolution by mapping how practices shape social norms.

What impact do you expect your research to have on the surrounding society?
The weaponisation of AI in the form of weapons systems with autonomous features is a concerning development, as such systems may completely change the nature of warfare. This is because they come with a decrease in, or even a functional loss of, human control over the use of force. This raises ethical, political, and legal concerns that come down to two questions. First, can these systems actually deliver what they are expected to do? For example, can autonomous weapons systems even be used in adherence to international law? Answers to this question are often highly speculative about what the technology is, or will be, capable of doing. Second, and more fundamentally: should autonomous weapons systems be employed to use force at all? “Delegating” kill decisions to AI “outsources” decisions that should weigh heavily on human consciences and threatens human dignity. The introduction of such systems should therefore be subject to fundamental ethical and moral reasoning. The aim of my research is to encourage critical engagement with weaponised AI in wider society – and to question the seemingly inevitable “progress” of AI-driven applications. My research also contributes to the ongoing regulatory debate on lethal autonomous weapons systems at the UN-CCW in Geneva by engaging with key stakeholders.