Talk | Why robots should learn to say “I don’t know”: Uncertainty-awareness and safety for physical AI
By Associate Professor Iñigo Iturrate, The Maersk Mc-Kinney Moller Institute, SDU Robotics
Time: 14.15-15.15
Place: Campus Odense, U58
Artificial intelligence (AI) has seen unprecedented development in the last few years and has become part of our everyday vernacular and routine, assisting with tasks such as text and image generation. Increasingly, AI is also being applied to control physical systems such as robots. Yet these domains of application are fundamentally different and should be treated as such: the direct impact of a nonsensical sentence or image is merely an inconvenience to the user, whereas an error committed by an AI-controlled robot carries consequences in the physical world. The importance of this becomes clear when we consider that many robotics use cases fall within safety- and performance-critical domains, such as industrial manufacturing or surgery. In these use cases, not only do errors carry catastrophic consequences, but their allowable magnitude is several orders of magnitude smaller than for most image or text processing tasks.
This talk will examine current research applications of AI systems to robotics use cases in (de/re)manufacturing and surgery. It will motivate the need for physical AI systems with a built-in capability to quantify their own uncertainty, as well as for certifiable safety and performance guarantees for such systems. Taking this as a starting point, we will see examples of how these ideas can be applied in practice to real AI systems and consider the associated challenges. Lastly, we will discuss ethical issues around autonomous AI-controlled robots.