Computer Vision for Robots

Motivation

Cameras are the eyes of a robot, allowing it to perceive its environment. Computer vision concerns the algorithms that interpret the data coming from cameras and other sensors. At SDU Robotics we develop methods for solving a range of computer vision problems that arise in robot technology, and we collaborate closely across research areas to integrate our solutions into real-life robotic applications.

Current work

Visual recognition and pose estimation. Object recognition is a long-standing problem in computer vision, and reliable part localization would unlock a huge potential for e.g. robotic automation. We are currently developing methods for fast and reliable object recognition in both 2D images and 3D data, spanning classical computer vision methods as well as newer methods based on deep learning.
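As a minimal sketch of one classical building block in this area (illustrative, not necessarily the group's specific method): given a set of 3D point correspondences between a model and a scene, the rigid pose can be recovered with the SVD-based Kabsch/Umeyama algorithm.

```python
import numpy as np

def estimate_pose(src, dst):
    """Estimate the rigid transform (R, t) aligning src to dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Uses the SVD-based Kabsch/Umeyama method.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: transform a random point set and recover the pose.
rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R_est, t_est = estimate_pose(pts, pts @ R_true.T + t_true)
```

In practice the correspondences would come from feature matching (or a learned network) and be filtered with a robust estimator such as RANSAC; the closed-form step above is the same.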

Visual inspection. Many assembly processes benefit from one or more inspection stages at which the quality of the assembly so far can be assessed. We are currently developing methods for training a visual system to recognize one or more failure types at each step of an assembly process. We employ deep learning techniques trained on images of the correct result and of the different failure types, which should allow for an easy-to-use system.
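The classification step can be sketched as follows (a hypothetical toy version, assuming feature vectors that would in practice come from a deep network): each inspection image is assigned to the nearest class mean, where the classes are "ok" plus one or more failure types.

```python
import numpy as np

def fit_class_means(features_by_label):
    """Compute one mean feature vector per class label."""
    return {label: np.mean(feats, axis=0)
            for label, feats in features_by_label.items()}

def classify(means, feature):
    """Return the label whose class mean is closest to the feature."""
    return min(means, key=lambda label: np.linalg.norm(feature - means[label]))

# Stand-in 2D features; real ones would be high-dimensional network outputs.
train = {
    "ok":            [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
    "missing_screw": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
means = fit_class_means(train)
result = classify(means, np.array([0.8, 0.2]))  # -> "ok"
```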

3D sensor modelling. The foundation for many of the methods we are developing is a good visual signal from the sensor in use. A challenging aspect is reconstructing 3D data from difficult surfaces, e.g. shiny or reflective materials. We are researching new camera models and new ways to design sensors that can reliably extract relevant depth information for many different types of objects.
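The simplest camera model underlying such work is the pinhole model: a 3D point projects to a pixel through the intrinsics, and a depth pixel can be back-projected to a 3D point. A minimal sketch (with assumed, illustrative intrinsic values):

```python
# Pinhole camera model: project and back-project a point.
fx = fy = 500.0          # focal lengths in pixels (assumed values)
cx, cy = 320.0, 240.0    # principal point (assumed values)

def project(X, Y, Z):
    """3D camera-frame point -> pixel coordinates (u, v)."""
    return fx * X / Z + cx, fy * Y / Z + cy

def back_project(u, v, Z):
    """Pixel coordinates plus depth -> 3D camera-frame point."""
    return (u - cx) * Z / fx, (v - cy) * Z / fy, Z

# Round trip: project a point, then recover it from its pixel and depth.
u, v = project(0.1, -0.05, 2.0)
X, Y, Z = back_project(u, v, 2.0)
```

Real sensors add lens distortion and, for structured-light or stereo devices, a second projection; the research described above extends such models to handle surfaces where the depth signal itself is unreliable.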

Future work

Few-shot learning and simulated images for visual inspection. To reduce the number of real-world images needed for training, and thereby make our visual inspection approach even more widely applicable, we will investigate technologies that exploit generic structures (few-shot learning) and that use rendered images as a proxy for real-world situations.
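A hedged sketch of the few-shot idea (the prototype-based scheme of prototypical networks, with random stand-in embeddings rather than real network outputs): from only a few labelled support examples per class, build one prototype per class and classify a query by cosine similarity.

```python
import numpy as np

def prototypes(support, labels):
    """Average the support embeddings of each class into one prototype."""
    out = {}
    for lab in set(labels):
        feats = np.stack([s for s, l in zip(support, labels) if l == lab])
        out[lab] = feats.mean(axis=0)
    return out

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, protos):
    """Assign the query to the most similar class prototype."""
    return max(protos, key=lambda lab: cosine(query, protos[lab]))

# A 2-way, 2-shot episode with stand-in embeddings.
support = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0]),
           np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.9, 0.1])]
labels = ["scratch", "scratch", "dent", "dent"]
protos = prototypes(support, labels)
```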

Unsupervised and synthetic learning for visual recognition. We will also investigate learning from synthetic data for object recognition and pose estimation, allowing users of these methods to skip the lengthy process of acquiring training examples for our algorithms. Some of the key topics that we will address are domain randomization and domain transfer. To further improve the learned visual representations, we will also focus on unsupervised and semi-supervised learning, using both image and 3D data.
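Domain randomization can be sketched as follows (parameter names and ranges are made up for illustration): when rendering synthetic training images, nuisance parameters such as lighting, textures, and camera pose are sampled broadly, so that the real world later appears to the model as just another variation.

```python
import random

def sample_scene_params(rng):
    """Sample one randomized rendering configuration (illustrative ranges)."""
    return {
        "light_intensity":   rng.uniform(0.2, 2.0),
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "texture_id":        rng.randrange(50),       # random distractor texture
        "camera_jitter_m":   rng.uniform(0.0, 0.05),  # perturb camera position
        "object_yaw_deg":    rng.uniform(0.0, 360.0),
    }

# Generate configurations for a synthetic training set; each would be
# passed to a renderer to produce one labelled image.
rng = random.Random(42)
dataset_params = [sample_scene_params(rng) for _ in range(1000)]
```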

For more information, contact Associate Professor Anders Glent Buch.