Overall Objectives

Historically, the research activities of the Lagadic team have been concerned with visual servoing, visual tracking, and active vision. Visual servoing consists of using the information provided by a vision sensor to control the movements of a dynamic system. This research topic lies at the intersection of robotics, automatic control, and computer vision. These fields have been the subject of fruitful research for many years and are particularly interesting because of their very broad scientific and application spectrum. Within this spectrum, we focus on the interaction between visual perception and action. This topic is important because it provides an alternative to the traditional Perception-Decision-Action cycle: it is indeed possible to link perception and action more closely by directly integrating the measurements provided by a vision sensor into closed-loop control laws. Our objective is thus to design strategies that couple perception and action from images, for applications in robotics, computer vision, virtual reality, and augmented reality.
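As a concrete illustration of this coupling, the classical image-based visual servoing scheme computes a camera velocity directly from the error between measured and desired image features, v = -λ L⁺ (s - s*), where L is the interaction matrix of the features. The following is a minimal sketch of this textbook control law for point features, with illustrative function and variable names; it is not the team's actual implementation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Textbook interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z,  0.0,      x / Z,  x * y,       -(1.0 + x**2),  y],
        [ 0.0,     -1.0 / Z,  y / Z,  1.0 + y**2,  -x * y,        -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Camera velocity v = -gain * L^+ (s - s*) for a set of point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Drive four observed image points toward a centered square pattern.
current = [(0.10, 0.12), (-0.08, 0.11), (-0.09, -0.10), (0.11, -0.09)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(current, desired, depths=[1.0] * 4)
print(v)  # 6-vector: translational and angular camera velocity
```

At each iteration of the loop, the features s are re-measured in the current image and the resulting velocity is sent to the robot, so the image error itself closes the control loop.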

This objective is significant, first of all, because of the variety and great number of potential applications to which our work can lead (see Section 4.1). Secondly, it is significant because of the scientific challenges these problems raise: modeling visual features that represent the interaction between action and perception in an optimal way, taking complex environments into account, and specifying high-level tasks. We also work on new problems raised by imaging systems such as omnidirectional vision sensors and ultrasound probes. Finally, we are interested in revisiting traditional computer vision problems, such as 3D localization, through the visual servoing approach.

Since the arrival of Patrick Rives and his students in April 2012, Lagadic has been located both in Rennes and in Sophia Antipolis, and the group now also focuses on building consistent representations of the environment that can be used to trigger and execute robot actions. In its broadest sense, perception requires detecting, recognizing, and localizing elements of the environment, given the limited sensing and computational resources available on the embedded system. Perception is a fundamental issue both for the implementation of reactive behaviors, as traditionally studied in the group, and for the construction of the representations used at the task level. Simultaneous Localization and Mapping (SLAM) is thus now one of our research areas.
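To make the SLAM problem concrete, the deliberately simplified sketch below (a hypothetical one-dimensional example, not the group's software) runs a linear Kalman-filter SLAM with a single landmark: the state stacks the robot and landmark positions, odometry drives the prediction step, and each range measurement to the landmark corrects both estimates jointly.

```python
import numpy as np

# State: [robot position, landmark position]; both are estimated jointly.
x = np.array([0.0, 0.0])          # initial guess
P = np.diag([0.01, 100.0])        # landmark initially unknown (large variance)
Q, R = 0.02, 0.05                 # odometry and range noise (variances)

F = np.eye(2)                     # motion model: landmark is static
H = np.array([[-1.0, 1.0]])       # measured range = landmark - robot (1D)

def step(x, P, u, z):
    # Prediction: move the robot by odometry u; only its variance grows.
    x = x + np.array([u, 0.0])
    P = F @ P @ F.T + np.diag([Q, 0.0])
    # Correction: fuse the measured range z to the landmark.
    y = z - (x[1] - x[0])                  # innovation
    S = H @ P @ H.T + R
    K = (P @ H.T) / S                      # Kalman gain (2x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Robot advances 1 m per step toward a landmark at 10 m.
true_robot, true_lm = 0.0, 10.0
rng = np.random.default_rng(0)
for _ in range(8):
    true_robot += 1.0
    u = 1.0 + rng.normal(0.0, Q**0.5)      # noisy odometry
    z = (true_lm - true_robot) + rng.normal(0.0, R**0.5)
    x, P = step(x, P, u, z)
print(x)   # robot and landmark estimates converge toward (8, 10)
```

Real SLAM systems of course handle many landmarks, nonlinear motion and observation models, and data association, but the structure, alternating prediction from proprioception and correction from exteroception, is the same.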

Among the sensory modalities, computer vision, range finders, and odometry are of particular importance and interest for mobile robots because of their availability and extended range of applicability, while ultrasound images and force measurements are both required for our medical robotics applications. The fusion of the complementary information provided by different sensors is thus also a central issue for environment modeling, robot localization, control, and navigation.
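As a minimal illustration of such fusion (with illustrative numbers, not a particular Lagadic system), two independent estimates of the same quantity can be combined by inverse-variance weighting, the optimal linear fusion rule under independent Gaussian noise:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent Gaussian estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (w_a + w_b)                 # fused variance is always smaller
    return var * (w_a * est_a + w_b * est_b), var

# E.g. robot x-position: odometry says 4.9 m (drifty), vision says 5.2 m (noisy).
pos, var = fuse(4.90, 0.20, 5.20, 0.05)
print(pos, var)   # ~5.14 m, variance 0.04: closer to the more reliable sensor
```

The fused variance is smaller than either input variance, which is precisely why combining a smooth but drifting sensor such as odometry with a noisier but absolute one such as vision pays off.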

Much of the processing must be performed in real time, with a good degree of robustness so as to accommodate the large variability of the physical world. The computational efficiency and well-posedness of the methods we develop are thus constant concerns of the group.