2024 Activity Report - Project-Team RAINBOW
RNSR: 201822637G - Research center: Inria Centre at Rennes University
- In partnership with: CNRS, Institut national des sciences appliquées de Rennes, Université de Rennes
- Team name: Sensor-based Robotics and Human Interaction
- In collaboration with: Institut de recherche en informatique et systèmes aléatoires (IRISA)
- Domain: Perception, Cognition and Interaction
- Theme: Robotics and Smart environments
Keywords
Computer Science and Digital Science
- A5.1.2. Evaluation of interactive systems
- A5.1.3. Haptic interfaces
- A5.1.7. Multimodal interfaces
- A5.1.9. User and perceptual studies
- A5.4.4. 3D and spatio-temporal reconstruction
- A5.4.6. Object localization
- A5.4.7. Visual servoing
- A5.6. Virtual reality, augmented reality
- A5.6.1. Virtual reality
- A5.6.2. Augmented reality
- A5.6.3. Avatar simulation and embodiment
- A5.6.4. Multisensory feedback and interfaces
- A5.9.2. Estimation, modeling
- A5.10.1. Design
- A5.10.2. Perception
- A5.10.3. Planning
- A5.10.4. Robot control
- A5.10.5. Robot interaction (with the environment, humans, other robots)
- A5.10.6. Swarm robotics
- A5.10.7. Learning
- A6.4.1. Deterministic control
- A6.4.3. Observability and Controlability
- A6.4.4. Stability and Stabilization
- A6.4.5. Control of distributed parameter systems
- A6.4.6. Optimal control
- A8.2.3. Calculus of variations
- A9.2. Machine learning
- A9.5. Robotics
- A9.7. AI algorithmics
- A9.9. Distributed AI, Multi-agent
Other Research Topics and Application Domains
- B2.5. Handicap and personal assistances
- B2.5.1. Sensorimotor disabilities
- B2.5.2. Cognitive disabilities
- B2.5.3. Assistance for elderly
- B5.1. Factory of the future
- B5.6. Robotic systems
- B8.1.2. Sensor networks for smart buildings
- B8.4. Security and personal assistance
1 Team members, visitors, external collaborators
Research Scientists
- Paolo Robuffo Giordano [Team leader, CNRS, Senior Researcher]
- François Chaumette [INRIA, Senior Researcher, HDR]
- Alexandre Krupa [INRIA, Senior Researcher, HDR]
- Claudio Pacchierotti [CNRS, Researcher, HDR]
- Esteban Restrepo [CNRS, Researcher, from Nov 2024]
- Marco Tognon [INRIA, ISFP]
Faculty Members
- Marie Babel [INSA RENNES, Professor]
- Vincent Drevelle [Univ. Rennes, Associate Professor]
- Maud Marchal [INSA RENNES, Professor]
- Éric Marchand [Univ. Rennes, Professor]
Post-Doctoral Fellow
- Tommaso Belvedere [CNRS, Post-Doctoral Fellow, from Jun 2024]
PhD Students
- Jose Eduardo Aguilar Segovia [INRIA]
- Lorenzo Balandi [INRIA]
- Maxime Bernard [CNRS]
- Szymon Bielenin [INRIA, from Oct 2024]
- Antoine Bout [INSA RENNES]
- Pierre-Antoine Cabaret [INRIA]
- Nicola De Carli [CNRS, until Apr 2024]
- Jessé De Oliveira Santana Alves [Univ. Rennes, from Nov 2024]
- Mael Gallois [INRIA, from Sep 2024]
- Glenn Kerbiriou [INTERDIGITAL, until Apr 2024]
- Ines Lacote [INRIA, until May 2024]
- Theo Le Terrier [INSA RENNES]
- Emilie Leblong [POLE ST HELIER]
- Maxime Manzano [INSA RENNES]
- Antonio Marino [Univ. Rennes]
- Paul Mefflet [Haption, CIFRE, from Feb 2024]
- Phillip Maximilian Mehl [INRIA, from Feb 2024]
- Lendy Mulot [INSA RENNES]
- Thibault Noel [INRIA, from Oct 2024]
- Thibault Noel [CREATIVE, until Sep 2024]
- Erwan Normand [Univ. Rennes]
- Mandela Ouafo Fonkoua [INRIA]
- Jim Pavan [INSA RENNES, from Oct 2024]
- Mattia Piras [INRIA]
- Lluis Prior Sancho [INRIA, from Nov 2024]
- Leon Raphalen [CNRS]
- Sara Rossi [INSA RENNES, from Feb 2024]
- Lev Smolentsev [INRIA, until Mar 2024]
- Ali Srour [CNRS, until Sep 2024]
- John Thomas [INRIA, until Apr 2024]
Technical Staff
- Riccardo Belletti [INRIA, from Apr 2024 until Jul 2024]
- Tommaso Belvedere [CNRS, from Mar 2024 until May 2024]
- Alessandro Colotti [INRIA, Engineer]
- Gianluca Corsini [CNRS, Engineer]
- Nicola De Carli [CNRS, Engineer, from May 2024]
- Louise Devigne [INSA RENNES, Engineer, from Sep 2024]
- Louise Devigne [INRIA, Engineer, until Aug 2024]
- Samuel Felton [INRIA, Engineer]
- Marco Ferro [CNRS, Engineer]
- Guillaume Gicquel [INSA RENNES, Engineer]
- Fabien Grzeskowiak [INSA RENNES, Engineer]
- Glenn Kerbiriou [INSA RENNES, Engineer, from Apr 2024]
- Romain Lagneau [INRIA, Engineer]
- Paul Mefflet [CNRS, Engineer, until Feb 2024]
- François Pasteau [INSA RENNES]
- Esteban Restrepo [CNRS, Engineer, until Oct 2024]
- Olivier Roussel [INRIA, Engineer]
- Fabien Spindler [INRIA, Engineer]
- Sebastien Thomas [INRIA, Engineer]
- Thomas Voisin [INRIA]
Interns and Apprentices
- Riccardo Belletti [INRIA, Intern, until Feb 2024]
- Martin Bichon Reynaud [ENS RENNES, Intern, until Jan 2024]
- Valeria Braglia [INRIA, Intern, from Feb 2024 until Aug 2024]
- Emanuele Buzzurro [INRIA, Intern, from Feb 2024 until Jul 2024]
- Giulio Franchi [INRIA, Intern, from Nov 2024]
- Tom Goalard [ENS Rennes, from Oct 2024]
- Nicolas Martinet [CNRS, Intern, until May 2024]
- Ilaria Pasini [INRIA, Intern, from Dec 2024]
- Francesca Porro [INRIA, Intern, from Sep 2024]
Administrative Assistant
- Hélène de La Ruée [Univ. Rennes]
Visiting Scientists
- Massimiliano Bertoni [UNIV PADUA, until Mar 2024]
- Marco Cognetti [UNIV TOULOUSE III, from Jun 2024 until Jul 2024]
- Alessia Ivani [UNIV PISA, from Jun 2024 until Sep 2024]
- Matteo Lanzarini [UNIV BOLOGNA, from Feb 2024 until Jul 2024]
- Julien Mellet [UNIV NAPLES, from Apr 2024 until May 2024]
- Hiroki Ota [NAIST, from Oct 2024]
- Francesca Pagano [UNIV NAPLES, from Jun 2024 until Jun 2024]
- Francesca Pagano [UNIV NAPLES, until Feb 2024]
- Andrea Pupa [UNIV MODENA, from Jun 2024 until Jun 2024]
- Danilo Troisi [UNIV PISA, until Apr 2024]
2 Overall objectives
The long-term vision of the Rainbow team is to develop the next generation of sensor-based robots able to navigate and/or interact in complex unstructured environments together with human users. Clearly, the word “together” can have very different meanings depending on the particular context: for example, it can refer to mere co-existence (robots and humans share some space while performing independent tasks), human-awareness (the robots need to be aware of the human state and intentions for properly adjusting their actions), or actual cooperation (robots and humans perform some shared task and need to coordinate their actions).
One could perhaps argue that the two goals of robot autonomy and human intervention are somehow in conflict, since higher robot autonomy should imply lower (or no) human intervention. However, we believe that our general research direction is well motivated since: (i) despite the many advancements in robot autonomy, complex and high-level cognitive-based decisions are still out of reach; in most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will most probably remain so for the next decades; (ii) robots are extremely capable of autonomously executing specific and repetitive tasks with great speed and precision and of operating in dangerous or remote environments, while humans possess unmatched cognitive capabilities and world awareness that allow them to take complex and quick decisions; moreover, the cooperation between humans and robots is often an implicit constraint of the robotic task itself (consider for instance assistive robots supporting injured patients during their physical recovery, or human augmentation devices), so it is important to study proper ways of implementing this cooperation; (iii) finally, safety regulations can require the presence at all times of a person in charge of supervising and, if necessary, taking direct control of the robotic workers. For example, this is a common requirement in all applications involving tasks in public spaces, like autonomous vehicles in crowded spaces, or even UAVs flying in civil airspace such as over urban or populated areas.
Within this general picture, the Rainbow activities will be particularly focused on the case of (shared) cooperation between robots and humans by pursuing the following vision: on the one hand, empower robots with a large degree of autonomy, allowing them to effectively operate in non-trivial environments (e.g., outside completely defined factory settings); on the other hand, include human users in the loop so that they remain in (partial and bilateral) control of some aspects of the overall robot behavior. We plan to address these challenges from the methodological, algorithmic and application-oriented perspectives. The Rainbow activities will be articulated along three supporting axes (Optimal and Uncertainty-Aware Sensing; Advanced Sensor-based Control; Haptics for Robotics Applications) that are meant to develop methods, algorithms and technologies for realizing the central theme of Shared Control of Complex Robotic Systems.
3 Research program
3.1 Main Vision
The vision of Rainbow (and the foreseen applications) calls for several general scientific challenges: a high level of autonomy for complex robots in complex (unstructured) environments, forward interfaces letting an operator give high-level commands to the robot, backward interfaces informing the operator about the robot status, and user studies for assessing the best interfacing, which will clearly depend on the particular task/situation. Within Rainbow we plan to tackle these challenges at different levels of depth:
- the methodological and algorithmic side of the sought human-robot interaction will be the main focus of Rainbow. Here, we will be interested in advancing the state of the art in sensor-based online planning, control and manipulation for mobile/fixed robots. For instance, while classically most control approaches (especially sensor-based ones) have been essentially reactive, we believe that less myopic strategies based on online/reactive trajectory optimization will be needed for the future Rainbow activities. The core ideas of Model Predictive Control (also known as Receding Horizon control) or, in general, numerical optimal control methods will play a role in the Rainbow activities, allowing the robots to reason/plan over some future time window and better cope with constraints (a minimal sketch of this receding-horizon principle is given after this list). We will also consider extending classical sensor-based motion control/manipulation techniques to more realistic scenarios, such as deformable/flexible objects (“Advanced Sensor-based Control” axis). Finally, it will also be important to spend research efforts in the field of Optimal Sensing, in the sense of generating (again) trajectories that optimize the state estimation problem in the presence of scarce sensory inputs and/or non-negligible measurement and process noise, which is especially relevant for mobile robots (“Optimal and Uncertainty-Aware Sensing” axis). We also aim at addressing the case of coordination between a single human user and multiple robots where, clearly, the autonomy part plays an even more crucial role (no human can control multiple robots at once, thus a high degree of autonomy will be required of the robot group for executing the human commands);
- the interfacing side will also be a focus of the Rainbow activities. As explained above, we will be interested in both the forward (human → robot) and backward (robot → human) interfaces. The forward interface will be mainly addressed from the algorithmic point of view, i.e., how to map the few degrees of freedom available to a human operator (usually on the order of 3-4) into complex commands for the controlled robot(s). This mapping will typically be mediated by an “AutoPilot” onboard the robot(s) for autonomously assessing whether the commands are feasible and, if not, how to least modify them (“Advanced Sensor-based Control” axis).
The backward interface will, instead, mainly consist of visual/haptic feedback for the operator. Here, we aim at exploiting our expertise in using force cues for informing an operator about the status of the remote robot(s). However, the sole use of classical grounded force-feedback devices (e.g., the typical force-feedback joysticks) will not be enough, due to the different kinds of information that will have to be provided to the operator. In this context, the recent interest in wearable haptic interfaces is very relevant and will be investigated in depth (these include, e.g., devices able to provide vibrotactile information to the fingertips, wrist, or other parts of the body). The main challenges in these activities will be the mechanical conception (and construction) of suitable wearable interfaces for the tasks at hand, and the generation of force cues for the operator: the force cues will be a (complex) function of the robot state, therefore motivating research in algorithms for mapping the robot state into a few variables (the force cues) (“Haptics for Robotics Applications” axis);
- the evaluation side, which will assess the proposed interfaces with user studies or acceptability studies with human subjects. Although this activity will not be a main focus of Rainbow (complex user studies are beyond the scope of our core expertise), we will nevertheless devote some effort to reaching a reasonable level of user evaluation by applying standard statistical analysis based on psychophysical procedures (e.g., randomized tests and ANOVA statistical analysis). This will be particularly true for the activities involving smart wheelchairs, which are intended to be used by human users and operate inside human crowds. Therefore, we will be interested in gaining some level of understanding of how semi-autonomous robots (a wheelchair in this example) can predict the human intention, and how humans react to a semi-autonomous mobile robot.
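As anticipated in the first item of this list, the receding-horizon principle can be made concrete in a few lines. The following is a minimal sketch only, assuming a toy 2D double-integrator robot with bounded acceleration and a quadratic goal-reaching cost; the horizon length, gains, and dynamics are illustrative assumptions, not one of the team's planners.

```python
# Minimal receding-horizon (MPC) sketch: at every step, optimize an input
# sequence over a short horizon, apply only the first input, then replan.
import numpy as np
from scipy.optimize import minimize

DT, N, U_MAX = 0.1, 15, 1.0          # time step, horizon, acceleration bound

def rollout(x0, u_flat):
    """Integrate the double-integrator dynamics over the horizon."""
    x, traj = x0.copy(), []
    for u in u_flat.reshape(N, 2):
        x = x.copy()
        x[:2] += DT * x[2:]          # position update
        x[2:] += DT * u              # velocity update
        traj.append(x)
    return np.array(traj)

def cost(u_flat, x0, goal):
    traj = rollout(x0, u_flat)
    # Distance to the goal along the horizon plus a small control effort term
    return np.sum((traj[:, :2] - goal) ** 2) + 1e-2 * np.sum(u_flat ** 2)

def mpc_step(x0, goal, u_warm):
    """Solve the horizon problem; return the first input and the warm start."""
    bounds = [(-U_MAX, U_MAX)] * (2 * N)       # input (actuation) constraints
    res = minimize(cost, u_warm, args=(x0, goal), bounds=bounds)
    return res.x.reshape(N, 2)[0], res.x

# Closed loop: replan at every step from the current (sensed) state
x, goal, u_warm = np.array([0., 0., 0., 0.]), np.array([2., 1.]), np.zeros(2 * N)
for _ in range(50):
    u0, u_warm = mpc_step(x, goal, u_warm)
    x[:2] += DT * x[2:]
    x[2:] += DT * u0
```

Constraints beyond simple input bounds would enter as additional constraints of the optimization problem; the point here is only the replan/apply-first-input loop.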

Figure 1: An illustration of the prototypical activities foreseen in Rainbow, in which a human operator is in partial (and high-level) control of single/multiple complex robots performing semi-autonomous tasks.
Figure 1 depicts in an illustrative way the prototypical activities foreseen in Rainbow. On the right-hand side, complex robots (dual manipulators, humanoids, single/multiple mobile robots) need to perform some task with a high degree of autonomy. On the left-hand side, a human operator gives some high-level commands and receives visual/haptic feedback aimed at informing her/him as well as possible about the robot status. Again, the main challenges that Rainbow will tackle to address these issues are (in order of relevance): methods and algorithms, mostly based on first-principle modeling and, when possible, on numerical methods for online/reactive trajectory generation, for endowing the robots with high autonomy; design and implementation of visual/haptic cues for interfacing the human operator with the robots, with special attention to novel combinations of grounded/ungrounded (wearable) haptic devices; user and acceptability studies.
3.2 Main Components
Hereafter, a summary description of the four axes of research in Rainbow.
3.2.1 Optimal and Uncertainty-Aware Sensing
Future robots will need a large degree of autonomy for, e.g., interpreting the sensory data to accurately estimate the robot and world state (possibly including the human users), and for devising motion plans able to take into account many constraints (actuation, sensor limitations, environment), including the state estimation accuracy (i.e., how well the robot/environment state can be reconstructed from the sensed data). In this context, we will work on:
- trajectory optimization strategies able to maximize some norm of the information gain gathered along the trajectory (with the available sensors). This can be seen as an instance of Active Sensing, with the main focus on online/reactive trajectory optimization strategies able to take into account several requirements/constraints (sensing/actuation limitations, noise characteristics). We will also be interested in the coupling between optimal sensing and the concurrent execution of additional tasks (e.g., navigation, manipulation);
- formal methods for guaranteeing the accuracy of localization/state estimation in mobile robotics, mainly exploiting tools from interval analysis. The interest of these methods lies in their ability to provide possibly conservative but guaranteed bounds on the best accuracy one can obtain with a given robot/sensor pair; they can thus be used for planning purposes or for system design (choice of the best sensors for a given robot/task);
- localization/tracking of objects with poor/unknown or deformable shape, which will be of paramount importance for allowing robots to estimate the state of “complex objects” (e.g., human tissues in medical robotics, elastic materials in manipulation) and to control their pose/interaction with the objects of interest.
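To make the notion of “information gain along a trajectory” concrete, below is a minimal sketch assuming range-only measurements to known beacons with Gaussian noise: candidate paths are scored by the smallest eigenvalue of the accumulated Fisher information (an E-optimality criterion), and the most informative one is selected. The beacon positions, noise level, and candidate paths are illustrative assumptions.

```python
# Minimal active-sensing sketch: score trajectories by the information they
# gather about the robot position from range measurements to fixed beacons.
import numpy as np

BEACONS = np.array([[0.0, 0.0], [4.0, 0.0]])   # assumed beacon positions
SIGMA = 0.1                                     # range noise std (assumed)

def fisher_information(traj):
    """Accumulate the position Fisher information along a trajectory."""
    J = np.zeros((2, 2))
    for p in traj:
        for b in BEACONS:
            d = p - b
            u = d / np.linalg.norm(d)   # gradient of the range w.r.t. p
            J += np.outer(u, u) / SIGMA ** 2
    return J

def score(traj):
    """E-optimality: the worst-case (smallest) information direction."""
    return np.linalg.eigvalsh(fisher_information(traj))[0]

# Two candidate paths between the same endpoints: a straight line and a
# detour that improves the measurement geometry w.r.t. the beacons
t = np.linspace(0, 1, 20)[:, None]
straight = (1 - t) * np.array([1.0, 2.0]) + t * np.array([3.0, 2.0])
detour = straight + np.column_stack(
    [np.zeros(20), 1.5 * np.sin(np.pi * t[:, 0])])
best = max([straight, detour], key=score)
```

A full optimal-sensing scheme would optimize over a parameterized trajectory under sensing/actuation constraints instead of comparing two fixed candidates, but the scoring principle is the same.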
3.2.2 Advanced Sensor-based Control
One of the main competences of the previous Lagadic team has been, generally speaking, the topic of sensor-based control, i.e., how to exploit (typically onboard) sensors for controlling the motion of fixed/ground robots. The main emphasis has been in devising ways to directly couple the robot motion with the sensor outputs in order to invert this mapping for driving the robots towards a configuration specified as a desired sensor reading (thus, directly in sensor space). This general idea has been applied to very different contexts: mainly standard vision (from which the Visual Servoing keyword), but also audio, ultrasound imaging, and RGB-D.
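At its core, the classical image-based visual servoing law drives the feature error e = s - s* to zero through the pseudo-inverse of the interaction matrix linking feature motion to camera motion. Below is a minimal sketch for normalized 2D point features, using their standard interaction matrix; the feature depths Z are assumed known here, while in practice they are estimated.

```python
# Minimal image-based visual servoing (IBVS) sketch: v = -lambda L^+ (s - s*)
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a 2D point feature (camera-frame twist)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera twist (v, omega) driving the features s toward s_star."""
    L = np.vstack([interaction_matrix(x, y, z)
                   for (x, y), z in zip(s, Z)])   # stacked interaction matrix
    e = (s - s_star).ravel()                      # visual error
    return -lam * np.linalg.pinv(L) @ e           # exponential error decrease

# Four point features: current positions, desired positions, assumed depths
s = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = 0.5 * s                    # desired: features closer to the center
v = ibvs_velocity(s, s_star, Z=np.full(4, 1.0))
```

The same structure carries over to the other modalities mentioned above (audio, ultrasound imaging, RGB-D): what changes is the feature vector s and the corresponding interaction matrix.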
The use of sensors for controlling the robot motion will clearly be a central topic of the Rainbow team too, since (especially onboard) sensing is a main characteristic of any future robotics application (which should typically operate in unstructured environments, and thus mainly rely on its own ability to sense the world). We naturally aim at building on the previous Lagadic experience in sensor-based control to propose new advanced ways of exploiting sensed data for, roughly speaking, controlling the motion of a robot. In this respect, we plan to work on the following topics:
- “direct/dense methods”, which directly exploit the raw sensory data when computing the control law for positioning/navigation tasks. The advantage of these methods is the little need for data pre-processing, which can minimize feature extraction errors and, in general, improve the overall robustness/accuracy (since all the available data is used by the motion controller);
- sensor-based interaction with objects of unknown/deformable shape, for gaining the ability to manipulate, e.g., flexible objects from the acquired sensed data (e.g., controlling online a needle being inserted in a flexible tissue);
- sensor-based model predictive control, by developing online/reactive trajectory optimization methods able to plan feasible trajectories for robots subject to sensing/actuation constraints, exploiting (onboard) sensing for continuously replanning (over some future time horizon) the optimal trajectory. These methods will play an important role when dealing with complex robots affected by complex sensing/actuation constraints, for which pure reactive strategies (as in most of the previous Lagadic works) are not effective. Furthermore, the coupling with the aforementioned optimal sensing will also be considered;
- multi-robot decentralized estimation and control, with the aim of devising again sensor-based strategies for groups of multiple robots needing to maintain a formation or perform navigation/manipulation tasks. Here, the challenges come from the need of devising “simple” decentralized and scalable control strategies under complex sensing constraints (e.g., when using onboard cameras, limited field of view, occlusions). The need of locally estimating global quantities (e.g., a common frame of reference, or global properties of the formation such as connectivity or rigidity) will also be a line of active research.
3.2.3 Haptics for Robotics Applications
In the envisaged shared cooperation between human users and robots, the typical sensory channel (besides vision) exploited to inform the human users is most often the force/kinesthetic one (in general, the sense of touch and of forces applied to the human hand or limbs). Therefore, a part of our activities will be devoted to studying and advancing the use of haptic cueing algorithms and interfaces for providing feedback to the users during the execution of some shared task. We will consider:
- multi-modal haptic cueing for general teleoperation applications, by studying how to convey information through the kinesthetic and cutaneous channels. Indeed, most haptic-enabled applications typically only involve kinesthetic cues, e.g., the forces/torques that can be felt by grasping a force-feedback joystick/device. These cues are very informative about, e.g., preferred/forbidden motion directions, but are also inherently limited in their resolution, since the kinesthetic channel can easily become overloaded (when too much information is compressed in a single cue). In recent years, the rise of novel cutaneous devices able to, e.g., provide vibrotactile feedback on the fingertips or skin has proven a viable solution for complementing the classical kinesthetic channel. We will then study how to combine these two sensory modalities for different prototypical application scenarios, e.g., 6-dof teleoperation of manipulator arms, virtual fixtures approaches, and remote manipulation of (possibly deformable) objects;
- in the particular context of medical robotics, the problem of providing haptic cues for typical medical robotics tasks, such as semi-autonomous needle insertion and robot surgery, by exploring the use of kinesthetic feedback for rendering the mechanical properties of the tissues and of vibrotactile feedback for providing guiding information about pre-planned paths (with the aim of increasing the usability/acceptability of this technology in the medical domain);
- finally, in the context of multi-robot control, the use of the haptic channel for providing information about the status of multiple robots executing a navigation or manipulation task. In this case, the problem is (even more) how to map (or compress) information about many robots into a few haptic cues. We plan to use specialized devices, such as actuated exoskeleton gloves able to provide cues to each fingertip of a human hand, or to resort to “compression” methods inspired by hand postural synergies for providing coordinated cues representative of a few (but complex) motions of the multi-robot group, e.g., coordinated motions (translations/expansions/rotations) or collective grasping/transporting.
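As a simple illustration of how a robot state can be mapped into a few haptic cues, the sketch below renders obstacle proximity as a saturated kinesthetic repulsive force (for a grounded device) and a scalar task-progress signal as a vibration amplitude (for a wearable cutaneous device). All gains, thresholds, and signals are illustrative assumptions, not the team's tuned cueing policies.

```python
# Minimal haptic cueing sketch: robot state -> kinesthetic + cutaneous cues
import numpy as np

D_SAFE, K_REP, F_MAX = 1.0, 2.0, 5.0   # assumed safety distance, gain, clamp

def kinesthetic_cue(obstacles, robot_pos):
    """Repulsive force cue (N) summed over obstacles inside the safety zone."""
    f = np.zeros(2)
    for o in obstacles:
        d_vec = robot_pos - o
        d = np.linalg.norm(d_vec)
        if d < D_SAFE:
            f += K_REP * (1 / d - 1 / D_SAFE) * d_vec / d
    n = np.linalg.norm(f)
    return f if n <= F_MAX else f * F_MAX / n   # saturate for device safety

def vibrotactile_cue(progress):
    """Map a task-progress signal in [0, 1] to a vibration amplitude."""
    return float(np.clip(1.0 - progress, 0.0, 1.0))

force = kinesthetic_cue([np.array([0.5, 0.0])], np.array([0.0, 0.0]))
vib = vibrotactile_cue(progress=0.3)
```

The open design questions listed above (which information goes to which channel, and how to compress the state of many robots into few cues) sit precisely in the two mapping functions of this sketch.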
3.2.4 Shared Control of Complex Robotic Systems
This final and main research axis will exploit the methods, algorithms and technologies developed in the previous axes for realizing applications involving complex semi-autonomous robots operating in complex environments together with human users. The leitmotiv is to realize advanced shared control paradigms, which essentially aim at blending robot autonomy and user's intervention in an optimal way for exploiting the best of both worlds (robot accuracy/sensing/mobility/strength and human's cognitive capabilities). A common theme will be the issue of where to “draw the line” between robot autonomy and human intervention: obviously, there is no general answer, and any design choice will depend on the particular task at hand and/or on the technological/algorithmic possibilities of the robotic system under consideration.
A prototypical envisaged application, exploiting and combining the previous three research axes, is as follows: a complex robot (e.g., a two-arm system, a humanoid robot, a multi-UAV group) needs to operate in an environment exploiting its onboard sensors (in general, vision as the main exteroceptive one) and deal with many constraints (limited actuation, limited sensing, complex kinematics/dynamics, obstacle avoidance, interaction with difficult-to-model entities such as surrounding people, and so on). The robot must then possess quite a large autonomy for interpreting and exploiting the sensed data in order to estimate its own state and the environmental one (“Optimal and Uncertainty-Aware Sensing” axis), and for planning its motion in order to fulfil the task (e.g., navigation, manipulation) while coping with all the robot/environment constraints. Therefore, advanced control methods able to exploit the sensory data to the fullest, and able to cope online with constraints in an optimal way (by, e.g., continuously replanning and predicting over a future time horizon), will be needed (“Advanced Sensor-based Control” axis), with a possible (and interesting) coupling with the sensing part for optimizing, at the same time, the state estimation process. Finally, a human operator will typically be in charge of providing high-level commands (e.g., where to go, what to look at, what to grasp and where) that will then be autonomously executed by the robot, with possible local modifications because of the various (local) constraints. At the same time, the operator will also receive online visual-force cues informative of, in general, how well her/his commands are executed and whether the robot would prefer or suggest other plans (because of the local constraints that are not of the operator's concern). This information will have to be visually and haptically rendered with an optimal combination of cues that will depend on the particular application (“Haptics for Robotics Applications” axis).
4 Application domains
The activities of Rainbow obviously fall within the scope of Robotics. Broadly speaking, our main interest is in devising novel/efficient algorithms (for estimation, planning, control, haptic cueing, human interfacing, etc.) that are general and applicable to many different robotic systems of interest, depending on the particular application/case study. For instance, we plan to consider:
- applications involving remote telemanipulation with one or two robot arms, where the arm(s) will need to coordinate their motion for approaching/grasping objects of interest under the guidance of a human operator;
- applications involving single and multiple mobile robots for spatial navigation tasks (e.g., exploration, surveillance, mapping). In the multi-robot case, the high redundancy of the multi-robot group will motivate research in autonomously exploiting this redundancy for facilitating the task (e.g., optimizing the self-localization or the environment mapping) while following the human commands and, vice-versa, for informing the operator about the status of the multi-robot group. In the single-robot case, the possible combination with some manipulation devices (e.g., arms on a wheeled robot) will motivate research into remote tele-navigation and tele-manipulation;
- applications involving medical robotics, in which the “manipulators” are replaced by the typical tools used in medical applications (ultrasound probes, needles, cutting scalpels, and so on) for semi-autonomous probing and intervention;
- applications involving a direct physical “coupling” between human users and robots (rather than a “remote” interfacing), such as the case of assistive devices used for easing the life of people with disabilities. Here, we will be primarily interested in, e.g., safety and usability issues, and also touch some aspects of user acceptability.
These directions are, in our opinion, very promising since current and future robotics applications are expected to address more and more complex tasks: for instance, it is becoming mandatory to empower robots with the ability to predict the future (to some extent) by also explicitly dealing with uncertainties in sensing or actuation; to safely and effectively interact with human supervisors (or collaborators) for accomplishing shared tasks; to learn or adapt to dynamic environments from little prior knowledge; to exploit the environment (e.g., obstacles) rather than avoiding it (a typical example is a humanoid robot in a multi-contact scenario for facilitating walking on rough terrains); to optimize the onboard resources for large-scale monitoring tasks; and to cooperate with other robots either by direct sensing/communication, or via some shared database (the “cloud”).
While no single lab can reasonably address all these theoretical/algorithmic/technological challenges, we believe that our research agenda can give some concrete contributions to the next generation of robotics applications.
5 Highlights of the year
- C. Pacchierotti nominated IEEE RAS Distinguished Lecturer for the field of haptics
- P. Robuffo Giordano's term as IEEE RAS Distinguished Lecturer for Multi-Robot Systems has been renewed for 2025-2027
- C. Pacchierotti invited to give a keynote at ICRA 2024 in Yokohama, Japan.
- Project ANR PRC MATES, led by the team, has been accepted.
- M. Babel carried the Paralympic flame in the relay of innovations for people with disabilities, in connection with her academic chair.
- M. Marchal was a keynote speaker at ISMAR 2024 in Seattle, USA; VRST 2024 in Trier, Germany and SCA 2024 in Montreal, Canada.
- M. Tognon received the IROS 2024 Toshio Fukuda Young Professional Award for his contributions to aerial robotics.
5.1 Awards
6 New software, platforms, open data
6.1 New software
6.1.1 HandiViz
- Name: Driving assistance of a wheelchair
- Keywords: Health, Persons attendant, Handicap
- Functional Description: The HandiViz software proposes a semi-autonomous navigation framework for a wheelchair relying on visual servoing. It has been registered at the APP (“Agence de Protection des Programmes”) as an INSA software (IDDN.FR.001.440021.000.S.P.2013.000.10000) and is under GPL license.
- Contact: Marie Babel
- Participants: François Pasteau, Marie Babel
- Partner: INSA Rennes
6.1.2 ViSP
- Name: Visual servoing platform
- Keywords: Computer vision, Robotics, Visual servoing (VS), Visual tracking
- Scientific Description: Since 2005, we develop and release ViSP [1], an open-source library available from https://visp.inria.fr. ViSP, standing for Visual Servoing Platform, allows prototyping and developing applications using the visual tracking and visual servoing techniques at the heart of the Rainbow research. ViSP was designed to be independent from the hardware, simple to use, expandable and cross-platform. ViSP allows designing vision-based tasks for eye-in-hand and eye-to-hand systems from the most classical visual features used in practice. It involves a large set of elementary positioning tasks with respect to various visual features (points, segments, straight lines, circles, spheres, cylinders, image moments, pose...) that can be combined together, and image processing algorithms that allow the tracking of visual cues (dots, segments, ellipses...), 3D model-based tracking of known objects, or template tracking. Simulation capabilities are also available.
ViSP also provides an open-source dynamic simulator called FrankaSim, based on CoppeliaSim and ROS, for the Panda robot from Franka Robotics [2]. The simulator, fully integrated in the ViSP ecosystem, features a dynamic model that has been accurately identified from a real robot, leading to more realistic simulations. Conceived as a multipurpose research simulation platform, it is well suited for visual servoing applications as well as, in general, for any pedagogical purpose in robotics. All the software, models and CoppeliaSim scenes presented in this work are publicly available under the free GPL-2.0 license.
A module dedicated to deep neural networks (DNNs) is also available to facilitate image classification and object detection. This module can run inference with the convolutional networks Faster-RCNN, SSD-MobileNet, ResNet 10, Yolo v3, Yolo v4, Yolo v5, Yolo v7, Yolo v8 and Yolo v11, which simultaneously predict object boundaries and prediction scores at each position.
A new module dedicated to the visual tracking of an object using its model has just been introduced. Called RBT, for Render-Based Tracker, it enables complex objects to be localized in real time by robustly combining geometric features, color-based features and depth-map features in the minimization process.
[1] E. Marchand, F. Spindler, F. Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion", P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005. URL: https://hal.inria.fr/inria-00351899v1
[2] A. A. Oliva, F. Spindler, P. Robuffo Giordano and F. Chaumette. FrankaSim: A Dynamic Simulator for the Franka Emika Robot with Visual-Servoing Enabled Capabilities. In: ICARCV 2022 - 17th International Conference on Control, Automation, Robotics and Vision. Singapore, Singapore, 11th Dec. 2022, pp. 1-7. URL: https://hal.inria.fr/hal-03794415
- Functional Description: ViSP provides simple ways to integrate and validate new algorithms with already existing tools. It follows a module-based software engineering design where data types, algorithms, sensors, viewers and user interaction are made available. Written in C++, ViSP is based on open-source cross-platform libraries (such as OpenCV) and builds with CMake. Several platforms are supported, including OSX, iOS, Windows and Linux. The ViSP online documentation eases learning. More than 307 fully documented classes organized in 18 different modules, with more than 475 examples and 114 tutorials, are proposed to the user. ViSP is released under a dual licensing model: it is open-source, with a GNU GPLv2 or GPLv3 license, and a professional edition license that replaces the GNU GPL is also available.
- URL: https://visp.inria.fr
- Contact: Fabien Spindler
- Participants: Romain Lagneau, Éric Marchand, Fabien Spindler, François Chaumette, Olivier Roussel
6.1.3 DIARBENN
- Name: Obstacle avoidance through sensor-based servoing
- Keywords: Servoing, Shared control, Navigation
- Functional Description: DIARBENN's objective is to provide an obstacle avoidance solution adapted to a mobile robot such as a powered wheelchair. Through shared control, the system progressively corrects the trajectory, if necessary, when approaching an obstacle, while respecting the user's intention.
- Contact: Marie Babel
- Participants: Marie Babel, François Pasteau, Sylvain Guegan
- Partner: INSA Rennes
6.2 New platforms
The platforms described in the next sections are labeled by the University of Rennes and are part of the French Research Infrastructure ROBOTEX 2.0, labeled by the French Ministry of Research.
6.2.1 Robot Vision Platform
Participants: François Chaumette, Éric Marchand, Fabien Spindler [contact].
We are using an industrial robot built by Afma Robots in the nineties to validate our research in visual servoing and active vision. This robot is a 6-DoF gantry robot whose end-effector can be fitted with a gripper and an RGB-D camera (see Fig. 2). This equipment is mainly used to validate visual servoing and real-time tracking algorithms.
In 2024, this platform was used to validate experimental results in 1 accepted publication 32.

Figure 2: Our gantry robot.
6.2.2 Mobile Robots
Participants: Marie Babel, François Pasteau, Fabien Spindler [contact].
To validate our research on the personally assisted living topic (see Sect. 7.3.2), we have three electric wheelchairs, one from Permobil, one from Sunrise and the last from YouQ (see Fig. 3.a). The wheelchair is controlled through a plug-and-play system inserted between the joystick and the low-level control of the wheelchair. Such a system lets us acquire the user's intention through the joystick position and control the wheelchair by applying corrections to its motion. The wheelchairs have been fitted with cameras, ultrasound and time-of-flight sensors to perform the servoing required for assisting people with disabilities. A wheelchair haptic simulator completes this platform for developing new human interaction strategies in a virtual reality environment (see Fig. 3.b).
Moreover, for fast prototyping of algorithms in perception, control and autonomous navigation, the team uses a Pioneer 3DX from Adept (see Fig. 3.c). This platform is equipped with the various sensors needed for autonomous navigation and sensor-based control.
In 2024, these robots were used to obtain experimental results presented in 4 papers 26, 19, 50, 51.
Figure 3: (a) our wheelchairs from Permobil, Sunrise and YouQ; (b) our wheelchair simulator; (c) our Pioneer P3DX mobile robot equipped with a camera mounted on a pan-tilt head.
6.2.3 Advanced Manipulation Platform
Participants: Alexandre Krupa, Claudio Pacchierotti, Paolo Robuffo Giordano, François Chaumette, Fabien Spindler [contact].
This platform consists of two Panda lightweight 7-DoF arms from Franka Emika equipped with torque sensors in all seven axes. An electric gripper, a camera, a soft hand from qbrobotics or a Reflex TakkTile 2 gripper from RightHand Labs (see Fig. 4.b) can be mounted on the robot end-effector (see Fig. 4.a). A force/torque sensor from Alberobotics is also attached to one of the robots' end-effectors to provide greater accuracy in torque control.
Two Adept 6 DoF arms (one Viper 650 robot and one Viper 850 robot) and a 6 DoF Universal Robots UR5, which can also be fitted with a force sensor and a camera, complete the platform.
This setup is mainly used to manipulate deformable objects and to validate our activities in coupling force and vision for controlling robot manipulators (see Section 7.3.1) and in controlling the deformation of soft objects (Sect. 7.1.7). Other haptic devices (see Section 7.2) can also be coupled to this platform.
In 2024, 3 papers 18, 34, 35 and 2 PhD theses 68, 70 were published that include experimental results obtained with this platform.
Figure 4: (a) our Franka robot equipped with the Pisa SoftHand grasping a box; (b) the Reflex TakkTile 2 gripper grasping a yellow ball; (c) the five arms composing the platform, with a Franka in the foreground, our two Viper robots in the second plane, and, in the background, a UR5 arm and our second Franka robot.
6.2.4 Unmanned Aerial Vehicles (UAVs)
Participants: Gianluca Corsini [contact], Paolo Robuffo Giordano, Marco Tognon, Claudio Pacchierotti, Pierre Perraud, Fabien Spindler.
Rainbow is involved in several activities concerning the conception, modelling, control and perception of single and multiple aerial robots (ARs). Two indoor flying arenas are used to carry out the related experimental activities. The first arena is relatively small (3 m x 5 m x 1.8 m high) and is equipped with 11 Vicon cameras for motion capture. The second one, spanning a larger volume (about 9 m x 9 m x 2.5 m high), is equipped with 14 Qualisys cameras; however, the latter room is only available from January to August. Compared to the former, the larger arena gives us the possibility to fly multiple drones at the same time, thanks to the larger volume and the greater coverage offered by the larger number of cameras.
In these flying arenas, we operate several ARs that have been heavily customized by: reprogramming from scratch the low-level firmware running on the onboard electronics (comprising the flight and motor controllers); equipping each robot with an onboard computer (for instance a Jetson or a NUC board) running Linux Ubuntu and the TeleKyb3 software framework; and adding RealSense RGB-D cameras for onboard visual odometry and visual servoing.
TeleKyb3 is an open-source framework based on the Genom3 software tool developed at LAAS in Toulouse. It features a modular and formal structure tailored to code reusability, high performance and middleware abstraction. TeleKyb3 comprises a set of algorithms dedicated to the localization, navigation and low-level control of aerial robots in maneuvering and physical-interaction tasks.
The aerial robotic platform of the team includes quadrotors and hexarotors (see Fig. 5.a and 5.b, respectively), which have been designed in-house at both the mechanical and electronic levels. While the quadrotors have a standard (in jargon, collinear) propeller orientation, the hexarotors have their motors tilted w.r.t. the main body. This property grants them maneuvering capabilities that cannot be replicated by conventional commercial drones with collinear rotors.
For most of the mechanical components, we rely on custom parts realized by 3D printing and by water-jet cutting and milling of carbon-fiber material. From the electronic standpoint, our robots feature a Mikrokopter-based flight controller running custom firmware. However, due to the unavailability and aging of the latter board, a newer flight controller, named Paparazzi, has been adopted, and the firmware has been adapted to the new board. The Paparazzi flight controller is part of an open-source and open-hardware project started at ENAC in Toulouse. This board has been chosen as it fits our needs well: it comprises more precise onboard sensors and enough programmable peripherals to communicate with the other electronic modules and sensors.
Smaller commercial drones, namely BitCraze Crazyflies, have been added to this robotic platform. Given the tiny dimensions of these drones and the limited space available for experiments, these robots are perfect candidates for research on the control and perception of teams of multiple robots.
Among the different successful experiments, we can mention visual servoing using ViSP to position a drone w.r.t. a target and to manipulate deformable objects (e.g., a cable), accurate positioning of a swarm of (more than 5) robots, and contact-based interaction with flat surfaces (for instance, drawing on a whiteboard) by means of a hexarotor equipped with a rigid end-effector.
In 2024, 2 papers 33, 60 and 1 PhD thesis 68 contain experimental results obtained with this platform.
Figure 5: (a) one of our quadrotors; (b) our hexarotor with tilted propellers.
6.2.5 Interactive interfaces and systems
Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Maud Marchal, Marie Babel, Fabien Spindler [contact].
Interactive technologies enable communication between artificial systems and human users. Examples of such technologies are haptic interfaces and virtual reality headsets.
Various haptic devices are used to validate our research in, e.g., shared control and extended reality. We design wearable haptic devices to provide user feedback, and we also use some off-the-shelf devices. We have a Virtuose 6D device from Haption (see Fig. 6.a), used as the master device in many of our shared control activities. An Omega 6 from Force Dimension (see Fig. 6.b) and devices from Ultrahaptics (see Fig. 6.c) complete this platform, which can be coupled to the other robotic platforms.
Similarly, in order to augment the immersiveness of virtual scenarios, we make use of virtual and augmented reality headsets. We have HTC Vive headsets for VR and Microsoft HoloLens headsets for AR interactions (see Fig. 6.d).
In 2024, this platform was used to obtain experimental results presented in 1 paper 17.
Figure 6: (a) our Virtuose 6D haptic device; (b) our Omega 6 haptic device; (c) the Ultraleap STRATOS device; (d) our Microsoft HoloLens 2 AR headset.
6.2.6 Portable immersive room
Participants: François Pasteau, Fabien Grzeskowiak, Marie Babel [contact].
To validate our research on assistive robotics and its applications in virtual conditions, we recently acquired a portable immersive room that can be easily deployed in different rehabilitation structures in order to conduct clinical trials. The system has been designed by the Trinoma company and funded by the Interreg ADAPT project.
In 2024, this platform was used to prepare the next clinical trials, which will be conducted during 2025.
Figure 7: A person sitting on the wheelchair simulator placed in the portable immersive room.
7 New results
7.1 Advanced Sensor-Based Control
7.1.1 Integrated Robust Planning and Control for Uncertain Robots
Participants: Tommaso Belvedere, Ali Srour, Paolo Robuffo Giordano.
The goal of this research activity is to propose an integrated approach for robust planning and control of robots whose models are affected by uncertainty in some of their parameters. The results are based on the notion of closed-loop state sensitivity developed in our group over several years, and are also related to the ANR project CAMP (Sect. 9.4.9).
Over the past years, we have developed several trajectory optimization algorithms leveraging the notions of closed-loop “state sensitivity” and “input sensitivity” and derived quantities. In particular, exploiting these metrics, we were able to construct tubes enveloping the bundle of perturbed trajectories given an uncertainty model (a range of variation for the parameters around a nominal value); a minimal sketch of this sensitivity propagation is given after the list below. During this year we have continued working on this subject with the following contributions:
- in 60 we performed an extensive experimental validation of sensitivity-aware trajectory planning for a quadrotor UAV with uncertain aerodynamic coefficients, mass and location of the center of mass. A benefit of the proposed approach is that it can be applied to any controller for the robot, even when the controller is given and cannot be changed. To show this point, we made use of the popular PixHawk controller onboard the quadrotor, as it is a very common control strategy used by many groups. The results clearly showed the benefits of the proposed robust planning. This paper received the Best Paper Award at the ICUAS 2024 conference;
- in 34, the approach proposed in 60 has been extended to the case of torque-controlled manipulator arms. Compared to the quadrotor case, a 7-DoF torque-controlled manipulator has a much more complex dynamical model (with many more parameters), and the same goes for the employed control strategy (computed torque with an integral term in our case). Nevertheless, we were able to show that our sensitivity-aware trajectory planning can also be used in this case for manipulation tasks. An experimental campaign has been performed with the manipulator handling payloads with different (and unknown) inertial parameters, showing the effectiveness of the proposed approach in reducing the effects of model uncertainties during motion;
- in 10, the approach has been applied to a fully-actuated hexarotor by also introducing the new concept of sensitivity w.r.t. the initial conditions (besides the parameters), which may also be uncertain because of, e.g., imperfect state estimation. Furthermore, a better formalization and algorithm for computing the tubes of perturbed trajectories has been proposed, improving over the previous method (in the sense of more accurately enveloping the bundle of perturbed state/input trajectories). Finally, we also considered the effects of additional unmodeled dynamics, treated as an uncertain parameter with a given range of variation. The experiments on the hexarotor clearly showed the ability of the proposed framework to produce intrinsically robust motion plans by minimizing the effects of uncertainties in the parameters and in the initial conditions;
- together with Simon Wasiela and other colleagues at LAAS-CNRS, we proposed in 36 an extension of the SAMP motion planner previously developed in 73, meant to produce robust global plans by emphasizing the generation of trajectories with low sensitivity to model uncertainty. The high computational cost of the uncertainty tubes was a bottleneck in 73; in 36 we addressed this problem by proposing a novel framework that first uses a Gated Recurrent Unit (GRU) neural network to provide fast and accurate estimates of the uncertainty tubes, and then minimizes these tubes at given points along the trajectory. The approach was experimentally validated on a 3D quadrotor in two challenging scenarios: navigation through a narrow window, and an in-flight “ring catching” task with a perch. The experimental results demonstrated the robustness of the approach, also in dealing with such a complex perching task for a quadrotor.
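As announced above, the quantity underlying these works can be sketched compactly: for closed-loop dynamics dx/dt = f(x, p), the state sensitivity Pi = dx/dp obeys dPi/dt = (df/dx) Pi + df/dp along the nominal trajectory, and |Pi| times the parameter range gives a first-order tube radius around it. The 1D mass under PD control below is an assumed toy system, not one of the systems of 60, 34, 10 or 36.

```python
# Minimal closed-loop state-sensitivity propagation (Euler integration)
import numpy as np

M_NOM, DP = 1.0, 0.2                 # nominal mass and uncertainty range
KP, KD, DT, T = 4.0, 2.0, 0.01, 5.0  # PD gains, time step, duration

def f(x, m):
    """Closed-loop dynamics: mass m under PD control toward the origin."""
    pos, vel = x
    return np.array([vel, (-KP * pos - KD * vel) / m])

x = np.array([1.0, 0.0])             # nominal state
Pi = np.zeros(2)                     # sensitivity dx/dm, zero at t = 0
tube = []
for _ in range(int(T / DT)):
    u = -KP * x[0] - KD * x[1]       # control input along the nominal motion
    A = np.array([[0.0, 1.0],
                  [-KP / M_NOM, -KD / M_NOM]])    # df/dx (closed loop)
    dfdp = np.array([0.0, -u / M_NOM ** 2])       # df/dm
    Pi = Pi + DT * (A @ Pi + dfdp)   # sensitivity ODE
    x = x + DT * f(x, M_NOM)         # nominal trajectory
    tube.append((x[0], abs(Pi[0]) * DP))   # position +/- first-order radius
```

In the works above, this propagation runs inside a trajectory optimizer, which shapes the reference so that the resulting tubes (in state and input) are as tight as possible.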
7.1.2 UWB beacon navigation of assisted power wheelchair
Participants: Vincent Drevelle, Marie Babel, François Pasteau, Theo Le Terrier.
Ultra-wideband (UWB) radio is an emerging technology for indoor localization and object tracking applications. Contrary to vision sensors, these sensors are low-cost, non-intrusive and easy to install on a wheelchair. They provide time-of-flight ranging between fixed beacons and mobile sensors. However, multipath or non-line-of-sight (NLOS) propagation can perturb range measurements in a cluttered indoor environment.
We designed a robust wheelchair positioning method, based on an extended Kalman filter with outlier identification and rejection. The method fuses UWB ranges with low-cost gyro and wheelchair joystick commands to estimate the orientation and position of the wheelchair. A demonstration of autonomous navigation in an apartment of the Pôle Saint-Hélier rehabilitation center was shown to practitioners and power wheelchair users during the Ambrougerien project demo day.
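A minimal sketch of the outlier identification and rejection idea in such a filter is given below: each UWB range innovation is gated on its squared Mahalanobis distance before being fused, so that NLOS/multipath ranges are simply discarded. The state layout, noise values and gate threshold are illustrative assumptions, not the implemented filter (which also fuses the gyro and joystick commands in its prediction step).

```python
# Minimal gated EKF range update for beacon-based positioning
import numpy as np

R_UWB, GATE = 0.05 ** 2, 9.0         # range variance and chi-square gate

def ekf_range_update(x, P, beacon, z):
    """One EKF update with a range z to a fixed beacon, with outlier gating."""
    d_vec = x[:2] - beacon
    d = np.linalg.norm(d_vec)
    H = np.zeros((1, len(x)))
    H[0, :2] = d_vec / d             # Jacobian of the range w.r.t. position
    innov = z - d                    # innovation (measured minus predicted)
    S = (H @ P @ H.T)[0, 0] + R_UWB  # innovation variance
    if innov ** 2 / S > GATE:        # Mahalanobis test: reject NLOS/multipath
        return x, P
    K = (P @ H.T / S).ravel()        # Kalman gain
    x = x + K * innov
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P

x, P = np.array([1.0, 1.0, 0.0]), np.eye(3) * 0.5   # [x, y, yaw]
x, P = ekf_range_update(x, P, beacon=np.array([0.0, 0.0]), z=1.5)
```

Gating keeps the filter consistent at the price of occasionally discarding valid ranges; the threshold trades off these two effects.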
Then, a robust set-membership positioning approach was developed, based on interval constraint propagation. It explicitly accounts for the fact that multipath and NLOS propagation result in range measurements that exceed the actual distance. A reliable pose domain is then computed for the wheelchair at each measurement epoch.
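The set-membership idea admits an equally compact sketch: since NLOS and multipath can only inflate a range, a measured range z upper-bounds the true distance, so the constraint ||p - b|| <= z can be used to contract a box guaranteed to enclose the wheelchair position. The basic forward-backward contractor below is an illustrative toy (it omits, e.g., measurement noise bounds and the empty-box test used to detect inconsistency).

```python
# Minimal interval contractor for the constraint (x-bx)^2 + (y-by)^2 <= z^2
import math

def contract_disk(box, beacon, z):
    """Contract box = ((xlo, xhi), (ylo, yhi)) against a disk of radius z."""
    (xlo, xhi), (ylo, yhi) = box
    bx, by = beacon
    dx, dy = (xlo - bx, xhi - bx), (ylo - by, yhi - by)  # centered intervals

    def sq_min(a):
        """Minimum of t^2 over the interval a."""
        lo, hi = a
        return 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)

    # Backward step on dx^2 + dy^2 <= z^2: each squared term is bounded by
    # the budget left by the smallest possible value of the other one
    r = math.sqrt(max(0.0, z * z - sq_min(dy)))
    dx = (max(dx[0], -r), min(dx[1], r))
    r = math.sqrt(max(0.0, z * z - sq_min(dx)))
    dy = (max(dy[0], -r), min(dy[1], r))
    return ((dx[0] + bx, dx[1] + bx), (dy[0] + by, dy[1] + by))

# One 2.5 m range to a beacon at the origin contracts a 4 m x 4 m box
box = contract_disk(((0.0, 4.0), (0.0, 4.0)), beacon=(0.0, 0.0), z=2.5)
```

Intersecting the contractions induced by all available ranges (and iterating) yields the guaranteed pose domain mentioned above.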
7.1.3 Equilibria of image-based visual servoing
Participants: Alessandro Colotti, François Chaumette.
This study was done in the scope of the ANR Sesame project (see 9.4.6). We developed a method able to determine the complete set of equilibria (i.e., the global minimum, local minima, and saddle points) of image-based visual servoing when the Cartesian coordinates of image points are used as inputs of the control scheme 16.
7.1.4 Visual servo of the orientation of an Earth observation satellite
Participants: Maxime Robic, Eric Marchand, François Chaumette.
This study was done in the scope of the BPI Lichie project (see 9.4.8). Its goal was to control the orientation of a satellite to track particular objects on the Earth. This year, we considered how to avoid motion-blur effects in the images acquired by the camera while it is gazing at a potentially moving object 32.
7.1.5 Multi-sensor-based control for accurate and safe assembly
Participants: John Thomas, François Pasteau, François Chaumette.
This study was also done in the scope of the BPI Lichie project (see 9.4.8). Its goal was to design sensor-based control strategies coupling vision and proximetry data for ensuring precise positioning while avoiding obstacles in dense environments 35, 70.
7.1.6 Visual Exploration of an Indoor Environment
Participants: Thibault Noël, Eric Marchand, François Chaumette.
This study is done in collaboration with the Creative company in Rennes. It is devoted to the exploration of indoor environments by a mobile robot for a complete and accurate reconstruction of the environment 26.
7.1.7 Shape servoing of soft objects using Finite Element Model
Participants: Mandela Ouafo Fonkoua, Alexandre Krupa, François Chaumette.
This study takes place in the context of the BIFROST project (see Section 9.1.1). In 18, we proposed a visual control framework for accurately positioning feature points belonging to the surface of a 3D deformable object at desired 3D positions, by acting on a set of manipulated points using a robotic manipulator. This framework considers the dynamic behavior of the object deformation; that is, we do not assume that the object is at its static equilibrium during the manipulation. By relying on a coarse dynamic Finite Element Model (FEM), we successfully formulated the analytical relationship expressing the motion of the feature points as a function of the 6-degrees-of-freedom motion of a robot gripper. From this modeling step, a novel closed-loop deformation controller was designed. To be robust against model approximations, the whole shape of the object is tracked in real time using an RGB-D camera, thus allowing any drift between the object and its model to be corrected on the fly. Experimental results demonstrated that our approach can drive feature points of a deformable object to desired positions very rapidly (in less than 5 seconds), i.e., very far from a (simplified) quasi-static regime. Our methodology thus makes it possible to take into account the inertial properties of soft materials during rapid motions.
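The controller of 18 explicitly accounts for the dynamic FEM behavior of the object; the sketch below only illustrates the simpler, quasi-static Jacobian-based servoing idea that underlies such deformation control. The FEM predictor `fem_predict` is a hypothetical stand-in for a simulator mapping the gripper pose to the feature-point positions.

```python
# Quasi-static deformation-servoing sketch: invert a (numerically estimated)
# deformation Jacobian to drive feature points to their desired positions.
import numpy as np

def estimate_jacobian(fem_predict, q, eps=1e-4):
    """Finite-difference Jacobian of feature positions w.r.t. gripper pose."""
    s0 = fem_predict(q)
    J = np.zeros((s0.size, 6))
    for i in range(6):
        dq = np.zeros(6)
        dq[i] = eps
        J[:, i] = (fem_predict(q + dq) - s0) / eps
    return J

def deformation_servo_step(fem_predict, q, s_star, lam=0.8):
    """One step of the Jacobian-based deformation servo on the gripper pose."""
    e = fem_predict(q) - s_star          # feature positioning error
    J = estimate_jacobian(fem_predict, q)
    return q - lam * np.linalg.pinv(J) @ e

# Toy stand-in for the FEM prediction: features depend linearly on the pose
A = np.random.default_rng(0).normal(size=(6, 6))
fem_toy = lambda q: A @ q
q = deformation_servo_step(fem_toy, np.zeros(6), s_star=np.ones(6))
```

The dynamic formulation of 18 replaces this static input-output map with the FEM dynamics, which is what allows the reported sub-5-second convergence far from the quasi-static regime.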
7.1.8 Multi-Robot Control, Localization and Estimation
Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Esteban Restrepo, Nicola De Carli, Antonio Marino.
Systems composed by multiple robots are useful in several applications where complex tasks need to be performed. Examples range from target tracking, to search and rescue operations and to load transportation. We have been very active over the last years on the topics of coordination, estimation, localization and control of multiple robots under the possible guidance of a human operator (see, e.g., Sect. 9.4.10), and we recently started to explore the use of machine learning for replicating, or replacing, more analytical control/estimation strategies (with benefits in terms of reduced computational power and communication load).
During this year we have produced the following contributions:
- in 15 we proposed an observer scheme to estimate, in a common frame, the positions and yaw orientations of a group of quadrotors from body-frame relative position measurements. The state of the robots is represented by their position and yaw orientation, and the graph representing the sensing interactions among the robots is directed; it is only required to be weakly connected, in addition to satisfying certain persistency of excitation conditions. The proposed scheme consists of three distinct estimation strategies coupled together, for which we were able to draw strong conclusions on the stability of the whole system (a cascade of three estimators) and validate the method via numerical simulations. This work is significant since it solves in a principled and rigorous way a longstanding problem: how to build, in a decentralized way, a coherent localization in a common frame from partial body-frame measurements. For instance, in the previous work 72 an analogous problem was solved in a more heuristic way, without any formal guarantee of convergence for the whole estimation pipeline, whereas in 15 we were finally able to provide a full formal characterization of the filter convergence;
- in 41 we revisited the topic of connectivity maintenance in the light of modern distributed QP-based control. The proposed framework is primarily motivated by the distributed implementation of Control Barrier Functions (CBFs), whose primary objective is to make minimal adjustments to a nominal controller while ensuring constraint satisfaction (a minimal CBF-QP sketch is given after this list). By improving over some limitations of the current state of the art, we were able to apply distributed CBFs to the problem of global connectivity maintenance in the presence of communication and sensing constraints. This improves on typical connectivity maintenance algorithms, which are based on distributed gradient descent of potential functions that can be hard to tune in practice (in particular w.r.t. the number of robots in the group). The proposed CBF formulation is instead much cleaner and easier to tune, with better numerical properties for actual implementations;
- in 58 we proposed a distributed strategy to achieve biconnectivity, instead of simple connectivity, for a group of robots, which allows the establishment/deletion of interaction links as well as the addition/removal of agents at any time while guaranteeing that the connectivity, and thus the functionality, of the team is always preserved. Indeed, in the context of open multi-robot systems, that is, when the number of robots in the team is not fixed, merely preserving connectivity of the current graph does not prevent the loss of connectivity after a robot joins/leaves the group. The proposed approach is completely distributed and embeds into a single gradient-based control law multiple constraints and requirements: (i) limited inter-robot communication ranges, (ii) limited field of view, (iii) desired inter-agent distances, and (iv) collision avoidance. Numerical simulations illustrate the effectiveness of our approach;
- in 23 we studied the conditions for input-to-state stability (ISS) and incremental input-to-state stability (δISS) of Gated Graph Neural Networks (GGNNs). Indeed, GNNs excel in predicting and analyzing graphs, and recurrent GNN models can solve time-dependent problems and have been shown to provide a useful tool for analyzing and designing multi-agent algorithms. In 23 we showed that this recurrent version of Graph Neural Networks (GNNs) can be expressed as a distributed dynamical system and, as a consequence, can be analyzed using model-based techniques to assess its stability and robustness properties. The stability criteria thus found can then be exploited as constraints during the training process to enforce the internal stability of the neural network. These findings are demonstrated in two distributed control examples, flocking and multi-robot motion control, showing that using these conditions increases the performance and robustness of the gated GNNs.
- in 24 we proposed an end-to-end trajectory planning algorithm tailored to multi-UAV systems, generating collision-free trajectories in environments populated with both static and dynamic obstacles by leveraging point cloud data. Our approach consists of a two-branch neural network fed with sensing and localization data, able to communicate intermediate learned features among the agents. One network branch crafts an initial collision-free trajectory estimate, while the other devises a neural collision constraint for subsequent optimization, ensuring trajectory continuity and adherence to physical actuation limits. Extensive simulations in challenging cluttered environments, involving up to 25 robots and 25% obstacle density, show a collision avoidance success rate in the range of 85–100%. We also introduced a saliency map computation method acting on the point cloud data, which offers qualitative insights into the proposed methodology.
- in 52 we have instead proposed the Liquid-Graph Time-constant (LGTC) network, a continuous graph neural network (GNN) model for the control of multi-agent systems, based on the recent Liquid Time Constant (LTC) network. We analyzed its stability leveraging contraction analysis and proposed a closed-form model that preserves the model contraction rate and does not require solving an ODE at each iteration. Compared to discrete models like the Gated Graph Neural Networks (GGNNs) mentioned above, the higher expressivity of the proposed model guarantees remarkable performance while reducing the large number of communicated variables normally required by GNNs, thus mitigating one of their drawbacks. We evaluated our model on a distributed multi-agent control case study (flocking), taking into account variable communication range and scalability under non-instantaneous communication.
- in 54 we revisited the classical problem of connectivity maintenance for a UAV group. Differently from typical model-based connectivity-maintenance approaches, the proposed technique uses machine learning to attain significantly better scalability in terms of the number of UAVs that can be part of the robotic team. It uses Supervised Deep Learning (SDL) with Artificial Neural Networks (ANNs), so that each robot can extrapolate the actions necessary for keeping the team connected in one computation step, regardless of the size of the team. We compared the performance of our proposed approach against a state-of-the-art model-based connectivity-maintenance algorithm when managing teams of two, four, six, and ten aerial mobile robots. The results showed that our approach keeps the computational cost almost constant as the number of drones increases, reducing it significantly with respect to model-based techniques. For example, our SDL approach needs 83% less time than a state-of-the-art model-based connectivity-maintenance algorithm when managing a team of ten drones.
- in 45 we used machine learning techniques to address a very different problem in the multi-robot context: accurate tracking of micro-scale robots for minimally invasive surgery applications (this work is part of the Horizon Europe REGO project coordinated by our team). Indeed, accurately tracking the position of moving agents at the micro-scale remains a significant challenge, particularly for multi-agent systems operating in cluttered and unknown environments. To address this issue, we introduced a graph-based multi-agent 3D tracking algorithm for a micro-agent control system. This algorithm integrates image information with the control inputs used to navigate the micro-agents. We combined Convolutional Neural Networks and Graph Neural Networks to effectively extract features from image sources, and combine them with historical data and control inputs. The primary novelty of this algorithm is its ability to make predictions when the target is occluded in the 2D detection results. The proposed system achieved a tracking error of 0.15 mm, outperforming standard model-based tracking techniques.
- in 31 we solved the tracking-in-formation problem for a group of underactuated autonomous marine vehicles interconnected over a directed topology. The agents are subject to hard inter-agent constraints, i.e., connectivity maintenance and collision avoidance, and to soft constraints, specifically on the non-negativity of the surge velocity, as well as to constant disturbances in the form of unknown ocean currents. The control approach is based on input-output feedback linearization for marine vehicles and on the edge-based framework for multi-agent consensus under constraints. High-fidelity simulations are provided to illustrate our results.
- in 63 we proposed an adaptive control strategy for the simultaneous estimation of topology and synchronization in complex dynamical networks with unknown, time-varying topology. We introduce two auxiliary networks: one satisfies the persistent excitation condition to facilitate topology estimation, while the other, a uniformly delta persistently exciting network, ensures the boundedness of both the weight estimation and synchronization errors, assuming bounded time-varying weights and derivatives. A relevant numerical example demonstrates the effectiveness of our method.
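To make the QP-based safety-filtering idea referenced above concrete, below is a minimal, self-contained sketch of the standard CBF safety filter that such connectivity-maintenance schemes build on. The single-integrator dynamics, the toy range-keeping barrier, and the gain are illustrative assumptions; this is not the distributed formulation of 41.

```python
import numpy as np

def cbf_safety_filter(u_nom, grad_h, h, alpha=1.0):
    """Minimal CBF safety filter for a single-integrator robot.

    Solves  min ||u - u_nom||^2  s.t.  grad_h . u >= -alpha * h(x),
    i.e., the QP keeps h(x) >= 0 while deviating minimally from u_nom.
    With one affine constraint, the QP admits the closed form below.
    """
    a = grad_h                    # constraint normal: a . u >= b
    b = -alpha * h
    if a @ u_nom - b >= 0.0:      # nominal input is already safe
        return u_nom
    # Otherwise, project u_nom onto the constraint boundary a . u = b.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Toy example: keep robot i within sensing range R of neighbor j,
# with barrier h(x) = R^2 - ||p_i - p_j||^2 (a crude connectivity proxy).
R = 2.0
p_i, p_j = np.array([1.5, 0.0]), np.array([0.0, 0.0])
h = R**2 - np.sum((p_i - p_j)**2)
grad_h = -2.0 * (p_i - p_j)        # gradient of h w.r.t. p_i
u_nominal = np.array([1.0, 0.0])   # nominal control pushing i away from j
print(cbf_safety_filter(u_nominal, grad_h, h))  # outward motion is damped
```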
7.1.9 Safe Control of Mobile Manipulators
Participant: Tommaso Belvedere.
Mobile robots working among humans must be controlled so as to ensure the safety of both the humans and the robot; in particular, it is essential to avoid collisions. This requires a combination of strategies to safely control the robot despite the inherently unpredictable nature of humans, and to reliably estimate human motion from sensor measurements. Orthogonally to this, the safety of the robot also depends on its ability to maintain balance. In fact, wheeled mobile robots (i.e., without a fixed base) need to adapt their motion to ensure that the wheels always remain in contact with the ground, to avoid a potentially catastrophic tip-over. This is particularly important when the robot is navigating over non-flat ground or when dynamically manipulating the environment. Two papers on these topics have been produced this year in collaboration with Sapienza University of Rome, Italy.
- In 40, a vision-based control scheme is developed to allow safe navigation among a human crowd. The method leverages Control Barrier Functions (CBFs) to generate robot movements that safely avoid humans. Its main contribution is the human detection and crowd prediction pipeline, which uses the YOLO-v8 model to detect humans and subsequently estimates their motion through Kalman filters (a minimal sketch of such a filter is given after this list). Moreover, a strategy exploiting the pan-tilt action of the camera is devised to maximize the human detection reliability, significantly improving the success rate when navigating in a tightly crowded environment. This paper received the Best Paper Award at the HFR 2024 conference.
- In 64, we have proposed a real-time optimization-based controller which ensures that the robot is able to maintain balance when picking up and carrying heavy objects. It leverages the concept of the Zero Moment Point to describe the conditions of dynamic balance essential when fast movements are required, and CBFs to deviate minimally from the desired motion. It also proposes an extension of CBFs that allows for input-level constraints in discrete time, while maintaining the useful properties of CBFs.
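As an illustration of the detection-and-prediction pipeline in 40 above, the following sketch implements a standard constant-velocity Kalman filter of the kind commonly paired with visual detectors for pedestrian tracking. The state layout, time step, and noise covariances are assumptions chosen for illustration, not the values used in the paper.

```python
import numpy as np

dt = 0.1  # assumed camera/detection period [s]
# State: [x, y, vx, vy]; constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],        # the detector measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)               # assumed process noise
R = 0.10 * np.eye(2)               # assumed detection noise

def kf_step(x, P, z):
    """One predict/update cycle given a YOLO-style position detection z."""
    # Predict: propagate the pedestrian state with the CV model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measured 2D position.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)               # initial state and covariance
for z in [np.array([0.1, 0.0]), np.array([0.2, 0.01])]:
    x, P = kf_step(x, P, z)
print("estimated position/velocity:", x)
```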
7.1.10 Whole-body predictive control of Humanoid Robots
Participant: Tommaso Belvedere.
While interest in humanoid robots is on the rise and the first industrial deployments are emerging, many challenges related to their complexity still hinder the use of modern optimization-based control methods. Historically, in fact, the most popular approaches used simplified models to reduce the number of optimization variables. This has several disadvantages and limitations that can only be overcome through the use of full models reflecting the dynamic and kinematic capabilities of such robots. In 37, we proposed an efficient scheme based on Model Predictive Control that is capable of exploiting the full capabilities of a humanoid robot. It leverages an ad-hoc formulation of the dynamics that allows the real-time solution of the related optimization problem and the study of its feasibility region. This region is then exploited to actively improve the robustness of the system against disturbances. The proposed method is shown to outperform the baseline (which uses a simplified model) in robustness and dynamic locomotion capabilities, while maintaining the ability to run in real time at frequencies above 100 Hz.
7.2 Haptic Cueing for Robotic Applications and Virtual Reality (VR)
We coordinated a special issue on this topic 42.
7.2.1 Wearable haptics for human-centered robotics, Virtual Reality (VR), and Augmented Reality (AR)
Participants: Claudio Pacchierotti, Maud Marchal, Eric Marchand, Lisheng Kuang.
We have been working on wearable haptics for several years now, both from the hardware (design of interfaces) and software (rendering and interaction techniques) points of view. This line of research continued this year.
In 21, we present a versatile 4-DoF hand wearable haptic device tailored for VR. Its adaptable design accommodates various end-effectors, facilitating a wide spectrum of tactile experiences. Comprising a fixed upper body attached to the hand's back and interchangeable end-effectors on the palm, the device employs articulated arms actuated by four servo motors. The work outlines its design, kinematics, and a positional control strategy enabling diverse end-effector functionality. Through three distinct end-effector demonstrations mimicking interactions with rigid, curved, and soft surfaces, we showcase its capabilities. Human trials in immersive VR confirm its efficacy in delivering immersive interactions with varied virtual objects, prompting discussions on additional end-effector designs.
In 22, we present a 4-DoF wearable haptic device for the palm, able to provide the sensation of interacting with slanted surfaces and edges. It is composed of a static upper body, secured to the back of the hand, and a mobile end-effector, placed in contact with the palm. They are connected by two articulated arms, actuated by four servo motors housed on the upper body and along the arms. The end-effector is a foldable flat surface that can make/break contact with the palm to provide pressure feedback, move sideways to provide skin stretch and tangential motion feedback, and fold to elicit the sensation of interacting with different curvatures. We also present a position control scheme for the device, which is then quantitatively evaluated.
In 46, we introduced a 7-DoF hand-mounted haptic device. It is composed of a parallel mechanism characterized by eight legs with an articulated diamond-shaped structure, in turn connected to an origami-like shape-changing end-effector. The device can render surface and edge touch simulations as well as apply normal, shear, and twist forces to the palm. The paper presented the device's mechanical structure, a summary of its kinematic model, actuation control, and a preliminary device evaluation, characterizing its workspace and force output.
In 27, we addressed key challenges in virtual object manipulation in AR, including limited visual occlusion and the absence of haptic feedback. We investigated the role of visuo-haptic rendering of the hand as sensory feedback through two experiments. The first examined six visual hand renderings, showing the user's hand via an AR avatar. The second evaluated visuo-haptic feedback, comparing two vibrotactile techniques applied at four delocalized hand positions, combined with the two most effective visual renderings from the first experiment. The results revealed that vibrotactile feedback near the contact point enhanced perceived effectiveness, realism, and usefulness, while contralateral hand rendering, though disliked, achieved the best performance.
In 56, we investigated whether such wearable haptic augmentations are perceived differently in AR vs. VR and when touching with a virtual hand instead of one's own hand. We first designed a system for real-time rendering of vibrotactile virtual textures without constraints on hand movements, integrated with an immersive visual AR/VR headset. We then conducted a psychophysical study with 20 participants to evaluate the haptic perception of virtual roughness textures on a real surface touched directly with the finger (1) without visual augmentation, (2) with a realistic virtual hand rendered in AR, and (3) with the same virtual hand in VR. On average, participants overestimated the roughness of haptic textures when touching with their real hand alone and underestimated it when touching with a virtual hand in AR, with VR in between. Exploration behavior was also slower in VR than with the real hand alone, although the subjective evaluation of the texture was not affected.
In 55, we investigated the perception of simultaneous visual and haptic texture augmentation of real tangible surfaces touched directly with the fingertip in AR, using a wearable vibrotactile haptic device worn on the middle phalanx. When sliding on a tangible surface with an AR visual texture overlay, vibrations are generated based on data-driven texture models and finger speed to augment the haptic roughness perception of the surface. In a user study with twenty participants, we investigated the perception of the combination of nine representative pairs of visuo-haptic texture augmentations. Participants integrated roughness sensations from both visual and haptic modalities well, with haptics predominating the perception, and consistently identified and matched clusters of visual and haptic textures with similar perceived roughness.
In 59, we reported preliminary results in the design and implementation of an integrated system that includes dynamic simulation of the interaction with deformable objects and tissues, a VR environment with finger motion tracking, and haptic feedback provided by a wearable device. In addition, it explores the challenges and advances in integrating such technologies with a focus on creating realistic tactile experiences. It also addresses the complexity of combining hardware and software components, proposing some solutions to overcome integration problems.
We have also written a survey on the topic of cutaneous haptic feedback for human-centered robotic teleoperation 29. The article presents an overview on cutaneous haptic interaction followed by a review of the literature on cutaneous/tactile feedback systems for robotic teleoperation, categorizing the considered systems according to the type of cutaneous stimuli they can provide to the human operator. It ends with a discussion on the role of cutaneous haptics in robotics and the perspectives of the field.
7.2.2 Affective and persuasive haptics for Virtual Reality (VR)
Participants: Claudio Pacchierotti, Daniele Troisi.
Affective and persuasive haptics in Virtual Reality (VR) constitute an evolving frontier that explores the integration of tactile feedback to evoke emotional responses and influence user behavior within virtual environments. By leveraging haptic technologies, these systems aim to create immersive experiences that go beyond visual and auditory stimuli, introducing touch as a compelling tool for emotional engagement and persuasion.
In 61, we investigated the influence of contact force applied to the human's fingertip on the perception of hot and cold temperatures, studying how variations in contact force may affect the sensitivity of cutaneous thermoreceptors or their interpretation. A psychophysical experiment involved 18 participants exposed to cold (20 °C) and hot (38 °C) thermal stimuli at varying contact forces, ranging from gentle (0.5 N) to firm (3.5 N) touch. Results show a tendency to overestimate hot temperatures (hot feels hotter than it really is) and underestimate cold temperatures (cold feels colder than it really is) as the contact force increases. This result might be linked to the increase in the fingertip contact area that occurs as the contact force between the fingertip and the plate delivering the stimuli grows.
In 57, we investigated the influence of thermal haptic feedback on stress during a cognitive task in virtual reality. We hypothesized that cool feedback would help reduce stress in such a task where users are actively engaged. We designed a haptic system using Peltier cells to deliver thermal feedback to the left and right trapezius muscles. A user study was conducted on 36 participants to investigate the influence of different temperatures (cool, warm, neutral) on users' stress during mental arithmetic tasks. Results show that the impact of the thermal feedback depends on the participant's temperature preference. Interestingly, a subset of participants (36%) felt less stressed with cool feedback than with neutral feedback, but had similar performance levels, and expressed a preference for the cool condition. Emotional arousal also tended to be lower with cool feedback for these participants.
In 43, we conducted a study on thermal feedback during simulated social interactions with a virtual agent. We tested three conditions: warm, cool, and neutral. Results showed that warm feedback positively influenced users' perception of the agent and significantly enhanced persuasion and thermal comfort. Multiple users reported the agent feeling less 'robotic' and more 'human' during the warm condition. Moreover, multiple studies have previously shown the potential of vibrotactile feedback for social interactions. A second study thus evaluated the combination of warmth and vibrations for social interactions. The study included the same protocol and three similar conditions: warmth, vibrations, and warm vibrations. Warmth was perceived as more friendly, while warm vibrations heightened the agent's virtual presence and persuasion. These results encourage the study of thermal haptics to support positive social interactions.
7.2.3 Mid-Air Haptic Feedback
Participants: Claudio Pacchierotti, Maud Marchal, Thomas Howard, Guillaume Gicquel, Lendy Mulot.
In the framework of the H2020 projects H-Reality and E-TEXTURE, we have been working to develop novel mid-air haptics paradigms that can convey the rich spectrum of touch sensations of the real world, motivating the need to develop new, natural interaction techniques. Both projects ended in 2022, but we have continued to work on this exciting subject.
In 25, we propose the use of non-coplanar ultrasound mid-air haptic (UMH) devices for providing simultaneous tactile feedback to both hands during bimanual VR manipulation. We discuss coupling schemes and haptic rendering algorithms for providing bimanual haptic feedback in two-handed interactions with virtual environments. We then present two human participant studies, assessing the benefits of bimanual ultrasound haptic feedback in a two-handed grasping and holding task and in a shape exploration task. Results suggest that the use of multiple non-coplanar UMH devices could be an interesting approach for enriching unencumbered haptic manipulation in virtual environments.
In 53, we formalized a pipeline for computing the intersection between a user's hand and a 3D virtual object. Together with state-of-the-art sampling strategies, this forms an end-to-end design process for rendering 3D objects with UMH. A user study demonstrated the significant impact of intersection strategy design choices on the perception of 3D object properties, specifically infill density. We illustrated that different strategies can alter the perception of how hollow or filled an object is, which can be challenging to render in mid-air. By providing a standardized way to report and study 3D object rendering with UMH, this work aimed to facilitate and motivate further exploration of perceptual effects via UMH technologies.
We also organized a special issue of the IEEE Trans. on Haptics around this topic 28.
7.2.4 Encounter-Type Haptic Devices
Participants: Claudio Pacchierotti, Lisheng Kuang, Elodie Bouzbib.
Encounter-Type Haptic Displays (ETHDs) provide haptic feedback by positioning a tangible surface for the user to encounter. This allows users to freely elicit haptic feedback from a surface during a virtual simulation. ETHDs differ from most current haptic devices, which rely on an actuator always in contact with the user.
In 13, we introduced PalmEx, aiming to enhance haptic exoskeleton gloves in VR by incorporating palmar force-feedback, a crucial but often lacking aspect. Our approach, demonstrated through a self-contained hardware system, integrates a palmar contact interface into hand exoskeletons, enhancing grasping sensations and manual haptic interactions in VR. By extending existing taxonomies, we evaluated PalmEx's capabilities for virtual object exploration and manipulation. Technical assessments optimized virtual-physical interaction delays, followed by a user study (N=12) examining PalmEx's design space. Findings highlighted PalmEx's superior rendering capabilities for realistic grasping in VR, emphasizing the significance of palmar stimulation. This innovation offers an affordable solution to augment high-end consumer hand exoskeletons, addressing the deficiency in in-hand haptic sensations.
7.2.5 Multimodal Cutaneous Haptics to Assist Navigation and Interaction in VR
Participants: Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, François Pasteau, Maud Marchal, Claudio Pacchierotti, Marie Babel.
Within the Inria Challenge project 9.4.7, we investigated the use of cutaneous haptics for aiding the navigation of people with sensory disabilities. In particular, we studied the ability of vibrotactile sensations and tap stimulations to convey haptic motion and sensory illusions 67.
In 38, 65, we presented a handheld multi-actuator haptic device, which provides localized vibrotactile feedback in a small form factor. To isolate the vibrations generated by the different actuators, we designed an original 3D-printed deformable structure integrated into the handle. We evaluated the benefits of our isolation structure in a vibrometry study, comparing the proposed version to a rigid structure. Finally, we showcased the use of the proposed handle in a virtual navigation task, showing its capabilities for applications where multiple and distinct haptic stimuli need to be provided to the user's hand.
In 39, we presented the design and experimental evaluation of haptic rendering techniques for navigating using localized vibrotactile stimuli provided by the multi-actuator haptic handle. We presented two haptic rendering schemes, which were then used in combination with three navigation strategies to guide users along a path. We evaluated these techniques in a user study where 18 participants walked in an 88 m room, following haptic cues displayed by the handle.
In 48, we designed a haptic handle composed of a cylindrical soft plastic casing, which houses five custom voice-coil actuators distributed around the handle. We carried out a human subject study enrolling 14 participants to investigate the impact of uni-manual vs. bi-manual conditions and to identify the most effective tactile patterns in a navigation assistance scenario. We tested the use of either vibration bursts or pressure “taps” to convey different directions of motion, relying on the concept of the apparent haptic motion illusion. Results show that the proposed technique is an effective approach for providing navigational cues. We identified specific patterns that were highly effective, in both uni- and bi-manual conditions, in conveying directional instructions towards the front (93.7%), the back (90.5%), the left (97.2%), and the right (84.5%).
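The apparent haptic motion illusion exploited above is typically obtained by overlapping successive actuator bursts with a suitable stimulus onset asynchrony (SOA). The sketch below schedules actuator onsets using the empirical linear SOA rule popularized in the tactile apparent-motion literature (SOA ≈ 0.32 × burst duration + 47.3 ms); the constants and burst duration are assumptions for illustration and are not the parameters used in 48.

```python
def apparent_motion_schedule(n_actuators, burst_ms=80.0):
    """Onset times (ms) for successive actuator bursts so that the
    taps/vibrations are perceived as one continuous motion across the
    handle (apparent haptic motion illusion).

    Uses the empirical rule SOA = 0.32 * duration + 47.3 (ms), under
    which consecutive bursts overlap in time; the constants are an
    assumption borrowed from the apparent-tactile-motion literature.
    """
    soa = 0.32 * burst_ms + 47.3
    return [i * soa for i in range(n_actuators)]

# Five actuators around the handle, 80 ms bursts:
print(apparent_motion_schedule(5))  # onsets: 0.0, 72.9, 145.8, ...
```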
In 47, we evaluated the capacity of tactile patterns delivered by haptic handles to guide users walking with a walker while asked to follow a predefined path. We conducted a user study with 18 participants who used two haptic handles mounted on a walker in actual walking conditions. We implemented three types of vibrotactile guidance patterns: uni-manual (one handle), bi-manual (two handles), and dual (combining one-handle and two-handle patterns), used depending on the direction. We also compared vibration and tapping stimulation modes to test their potential influence on the guiding strategy and the user's preference. Results showed no significant effect of the strategy or the stimulation mode on the accuracy of following the target path. However, they showed that bi-manual conditions yielded a higher satisfaction rate, felt mentally less demanding to users, improved the confidence in succeeding in the task, and increased the navigation speed.
In 14, we focused on advancing Virtual Reality (VR) manipulation by exploring enhanced haptic feedback. While tangible objects offer realistic haptic sensations, their static properties limit adaptability to virtual interactions. Contrastingly, vibrotactile feedback presents dynamic cues, such as impacts or textures, yet current VR controllers offer limited vibration patterns. This study investigated spatializing vibrotactile cues within tangible objects to broaden the range of sensations and interactions in VR. Through perception studies, we assessed the feasibility and advantages of leveraging multiple actuators for rendering schemes. Results indicated discernible vibrotactile cues from localized actuators and reveal their benefits for specific rendering methods, underscoring the potential for enriched VR experiences.
7.2.6 Digital Twins for robotics and industrial training
Participant: Claudio Pacchierotti.
Among the most recent enabling technologies, Digital Twins (DTs) emerge as data-intensive network-based computing solutions in multiple domains—from Industry 4.0 to Connected Health. A DT works as a virtual system for replicating, monitoring, predicting, and improving the processes and the features of a physical system – the Physical Twin (PT), connected in real-time with its DT. Such a technology, based on advances in fields like the Internet of Things (IoT) and machine learning, proposes novel ways to face the issues of complex systems as in Human-Robot Interaction (HRI) domains.
In 30, we investigated the correlation between fine motor skill training in VR, haptic feedback, and physiological arousal. Designing a buzzwire task with a custom vibrotactile attachment for Geomagic Touch, we conducted a controlled experiment with 73 participants across three feedback conditions: visual/kinesthetic, visual/vibrotactile, and visual-only. Results showed performance improvement across all conditions post-training, with no reported changes in self-efficacy or perceived presence and task load. Interestingly, arousal levels remained consistent across feedback conditions, yet positive performance changes correlated with higher arousal levels. These findings suggest haptic feedback's potential to influence arousal, prompting further exploration to enhance VR-based motor skill training.
7.3 Shared Control Architectures
7.3.1 Shared Control for Remote Manipulation
Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Marco Ferro, Leon Raphalen, Paul Mefflet.
As teleoperation systems become more sophisticated and flexible, the environments and applications where they can be employed become less structured and predictable. This desirable evolution toward more challenging robotic tasks requires an increasing degree of training, skill, and concentration from the human operator. In this respect, shared control algorithms have been investigated as one of the main tools for designing complex but intuitive robotic teleoperation systems, helping operators carry out increasingly difficult robotic applications such as assisted vehicle navigation, surgical robotics, brain-computer interface manipulation, and rehabilitation. Indeed, this approach makes it possible to share the available degrees of freedom of the robotic system between the operator and an autonomous controller.
Along this line of research, in the context of the Horizon Europe Rego project (Sect. 9.3.1), we are starting to investigate how to employ shared control strategies for allowing a human operator to control a group of micro-robots for drug delivery and micro-assembly.
In 17, we presented the experimental evaluation of a haptic shared control teleoperation framework for the locomotion of multiple microrobots, relying on a kinesthetic haptic interface and a custom electromagnetic system. Six combinations of haptic and shared control strategies were evaluated in a safe 3D navigation scenario in a cluttered environment. 18 participants were asked to steer two spherical magnetic microrobots among obstacles to reach a predefined goal, under different conditions. For each condition, participants were provided with different obstacle avoidance and navigation guidance cues. Results show that providing assistance in avoiding obstacles guarantees safer performance, regardless of whether the assistance is autonomous or delivered through a haptic repulsive force.
In 62, we presented a novel approach for enabling a human operator to effectively control the motion of multiple robots. Leveraging a data-driven shared control approach, we enabled a single user to control the 9 degrees of freedom related to the pose and shape of a swarm. Our methodology was evaluated through an experimental campaign conducted in simulated 3D environments featuring a narrow cylindrical path, which could represent, e.g., blood vessels or industrial pipes. Subjective measures of cognitive load were assessed using a post-experiment questionnaire, comparing different levels of autonomy of the system. Results show substantial reductions in operator cognitive load when compared to conventional teleoperation techniques, accompanied by enhancements in task performance, including reduced completion times and fewer instances of contact with obstacles.
7.3.2 Shared Control of a Wheelchair for Navigation Assistance
Participants: Louise Devigne, François Pasteau, Marie Babel.
Power wheelchairs allow people with motor disabilities to have more mobility and independence. In order to improve access to mobility for people with disabilities, we previously designed a semi-autonomous assistive wheelchair system which progressively corrects the trajectory as the user manually drives the wheelchair and smoothly avoids obstacles.
As part of the Ambrougerien project 9.5.3, the Ultra Wide Band navigation algorithm described in 7.1.2 has been coupled with the obstacle avoidance solution for powered wheelchairs previously developed in our team. This setup has been clinically tested with 80 users and provided autonomous indoor navigation in narrow environments with integrated safety features (see Fig. 8).
Moreover, the Cirris laboratory at Université Laval (Canada) has acquired and set up our collision avoidance wheelchair kit, composed of 48 sensors and a control board running the shared control algorithm. This will enable us to conduct international multicentric clinical trials and find new use cases for this solution within the rehabilitation process.
Finally, we strengthened our collaboration with the STMicroelectronics company, giving us the opportunity to have early access to their imaging and sensor portfolio, which includes the same sensors we use in our shared control solution for wheelchairs.

Figure 8: Demonstration of shared-control collision avoidance with UWB-based autonomous navigation in a narrow environment.
7.3.3 Multisensory power wheelchair simulator
Participants: Sylvain Guegan, Louise Devigne, François Pasteau, Marie Babel.
Power wheelchairs are one of the main solutions for people with reduced mobility to maintain or regain autonomy and a comfortable and fulfilling life. However, driving a power wheelchair in a safe way is a difficult task that often requires training methods based on real-life situations. Although these methods are widely used in occupational therapy, they are often too complex to implement and unsuitable for some people with major difficulties.
In this context, we collaborated with clinicians to develop a Virtual Reality based power wheelchair simulator. This simulator is an innovative training tool adapted to any type of situation and impairment. It relies on a modular and versatile workflow enabling easy interfacing not only with any virtual display, but also with any user interface, such as wheelchair controllers or feedback devices. A clinical trial has demonstrated the relevance of the simulator 19.
To increase users' perception of presence and decrease cybersickness, we proposed a novel motion cueing algorithm (MCA) to accommodate the 4 DoF of the motion platform. This novel MCA is currently being tested with clinicians in Rennes.
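For context on what a motion cueing algorithm does, the following minimal sketch illustrates the classical washout principle: high-pass filtering the simulated accelerations so that a limited-stroke platform renders motion onsets and then "washes out" the sustained component. It is a generic first-order illustration under assumed parameters, not the 4-DoF MCA developed for the simulator.

```python
import numpy as np

def washout_filter(acc, dt=0.01, tau=2.0):
    """First-order high-pass ('washout') filter applied to a simulated
    acceleration signal: transient accelerations pass through to the
    motion platform, while sustained ones decay back toward zero so the
    platform can return to its neutral pose within its limited stroke.
    """
    alpha = tau / (tau + dt)
    out = np.zeros_like(acc)
    for k in range(1, len(acc)):
        out[k] = alpha * (out[k - 1] + acc[k] - acc[k - 1])
    return out

t = np.arange(0.0, 5.0, 0.01)
acc = np.where(t > 1.0, 1.0, 0.0)   # sustained 1 m/s^2 step at t = 1 s
cue = washout_filter(acc)
print(cue[100:105], cue[-1])        # onset is rendered, then decays
```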
This multisensory power wheelchair simulator has been duplicated and set up at the Cirris laboratory at Université Laval (Canada) to develop new usage scenarios and perform multicentric clinical studies.

Participant driving in a virtual environment with our simulator.
7.3.4 Integrating social interaction in a VR power wheelchair driving simulator
Participants: Emilie Leblong, Fabien Grzeskowiak, Sebastien Thomas, François Pasteau, Anne-Hélène Olivier, Marie Babel.
Navigating in the city while driving a powered wheelchair, in a complex and dynamic environment made of various interactions with other humans, can be challenging for a person with disabilities. Learning how to drive a powered wheelchair thus remains a major issue for the clinical teams prescribing these technical mobility aids. Immersive environments provide opportunities to learn and transfer skills to real life. This opens up new areas of application, such as rehabilitation, where people with neurological disabilities can learn to drive a power wheelchair through immersive simulators.
To promote the transfer of skills from virtual to real settings, the use of such a platform requires the deployment of ecologically valid, interactive, populated virtual environments. However, these environments are currently devoid of pedestrians, even though the question of social interaction in the framework of inclusive urban mobility is fundamental. Hence, to expose these specific users to daily-life interaction situations, it is important to ensure realistic interactions with the virtual humans that populate the simulated environment. While non-verbal pedestrian-pedestrian interactions have been extensively studied, understanding pedestrian-wheelchair user interactions during locomotion is still an open research area.
Our objective is then to better understand how pedestrians and powered wheelchair (PWC) users interact, in order to improve dynamic virtual environments by including virtual humans that faithfully reproduce the modeled behaviors in reaction to the simulator user in a handicap situation. We thus investigated the regulation of interpersonal distance (i.e., proxemics) between a pedestrian and a PWC user in real and virtual situations. We designed two experiments in which 1) participants had to reach a goal by walking (respectively, driving a PWC) while avoiding a static PWC confederate (respectively, a standing confederate), and 2) participants had to walk to a goal and avoid a static confederate seated in a PWC, in real and virtual conditions 50. Our results showed that interpersonal distances were significantly different depending on whether the pedestrian avoided the power wheelchair user or vice versa. We also showed an influence of the orientation of the person to be avoided. We proposed a proof of concept by adapting existing microscopic crowd simulation algorithms to consider the specificity of pedestrian-PWC user interactions.
7.3.5 Upper-limb exoskeleton for reach-to-grasp assistance for power wheelchair users
Participants: Marie Babel, Maxime Manzano, Mael Gallois, Sylvain Guégan, Elise Larribeau, Charles Pontonnier.
Wearable Upper-Limb (UL) assistive robots are designed to increase autonomy and social participation for people with UL impairments, as they assist with tasks involved in Activities of Daily Living (ADLs). When an active device is coupled with a power wheelchair, it is usually controlled through push-buttons located near the wheelchair joystick, thus preventing bimanual tasks and requiring a large mental load to perform complex UL trajectories. Therefore, there is a need for strategies to detect user intent and dispense with manual control, allowing a distinctive, intuitive control of devices with multiple active degrees of freedom.
Assistive devices should thus be designed with daily-life use and broad adoption by end users in mind. In this context, it is necessary to tackle usability challenges by properly detecting and acting in accordance with user intent, while also minimizing the device installation complexity. While using force/torque sensors is advantageous for detecting user intent compared to EMG interfaces, it remains difficult to correctly translate the detected intent into actuator motions. Focusing on upper-limb assistive robots, the user's voluntary force is commonly used with a controller based on an admittance approach, which leads to relatively poor reactivity and requires the user to exert force throughout the movement, which can cause fatigue, particularly for people with upper-limb impairments. We therefore proposed a Force-Triggered (FT) controller which can initiate and maintain movement from short force impulses alone 51. The user's voluntary forces are retrieved from the total interaction forces by subtracting the passive component measured beforehand, during a calibration phase. An experiment was performed with one participant without impairment, equipped with an upper-limb exoskeleton prototype designed from the recommendations of physical medicine therapists. This preliminary work has highlighted the potential of the proposed FT controller. It also provides directions for future work and for clinical trials with end users to assess the usability of the proposed FT approach, used alone or in the form of a hybrid controller combining FT and admittance strategies.
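To illustrate the Force-Triggered principle of 51, i.e., initiating and maintaining motion from short force impulses rather than sustained effort, here is a minimal state-machine sketch. The thresholds, the constant-velocity assistance, and the calibration step are assumptions for illustration, not the controller of the paper.

```python
def voluntary_force(f_measured, f_passive):
    """Subtract the passive interaction component (recorded beforehand
    during a calibration phase, as in [51]) from the measured force."""
    return f_measured - f_passive

class ForceTriggeredController:
    """Start motion on a short force impulse, keep moving at a constant
    assistance velocity, stop on an opposing impulse. All thresholds
    and the assistance velocity are illustrative assumptions."""
    def __init__(self, f_on=3.0, f_off=-3.0, v_assist=0.05):
        self.f_on, self.f_off, self.v_assist = f_on, f_off, v_assist
        self.moving = False

    def update(self, f_vol):
        if not self.moving and f_vol > self.f_on:
            self.moving = True               # impulse along motion direction
        elif self.moving and f_vol < self.f_off:
            self.moving = False              # opposing impulse stops motion
        return self.v_assist if self.moving else 0.0

ctrl = ForceTriggeredController()
for f in [0.0, 4.0, 0.2, 0.1, -4.0, 0.0]:    # voluntary force samples [N]
    print(ctrl.update(f))                     # 0, 0.05, 0.05, 0.05, 0, 0
```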
In addition, patients with neurological diseases (multiple sclerosis, stroke, etc.) experience a reduction in their force generation capacities, but also potentially in their motor control, from sensory integration to force production. The assistance system needs to be individualized according to the variability of impairment and force generation capacity. Thus, understanding and quantifying these capacities through measurement and modeling is of primary importance to enhance the control of such systems. We therefore explored the possibility of assessing the joint torque capacities of patients presenting post-stroke sequelae or multiple sclerosis impairments, to use them as guidance in a shared control scheme 49.
7.4 Aerial Physical Interaction
7.4.1 Manipulation of a deformable wire by two UAVs
Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette.
This study takes place in the context of the CominLabs MAMBO project (see Section 9.5.1). In 33, we proposed a visual servoing approach for manipulating a suspended flexible cable attached between two quadrotor drones. We designed a leader-follower control strategy, where a human operator controls the rigid motion of the cable by teleoperating one drone (the leader), while the second drone (the follower), equipped with an onboard RGB-D camera, performs a shape visual servoing task to autonomously apply a desired deformation to the cable. The proposed cable shape visual servoing approach controlling the follower drone has the advantage of relying on a simple geometrical model of the cable (a parabola) that only requires the knowledge of its length. A robust image processing pipeline was developed for detecting and tracking the cable shape in real time from the data provided by the onboard RGB-D camera. An additional robotic task, performed simultaneously with the shaping task, was also designed to autonomously maintain the best visibility of the cable in the field of view of the onboard camera by controlling the yaw angular motion of the follower drone through visual servoing. Experimental results demonstrated the effectiveness of the proposed visual control approach in shaping a flexible cable into a desired shape. In addition, we demonstrated experimentally that such a system can be used to perform an aerial transport task by grasping an object fitted with a hook using the cable, then moving and releasing it at another location 68.
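The parabolic cable model mentioned above can be illustrated as follows: in its vertical plane, the hanging cable is approximated by a parabola fitted to the points detected by the RGB-D camera. The least-squares fit below is a minimal sketch of this modeling step on synthetic, noise-free data; the actual approach of 33 additionally exploits the known cable length within the visual servoing law.

```python
import numpy as np

def fit_parabola(x, z):
    """Least-squares fit of z = a*x^2 + b*x + c to cable points
    expressed in the cable's vertical plane (x along the span,
    z vertical). Returns the parabola coefficients (a, b, c)."""
    A = np.stack([x**2, x, np.ones_like(x)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic sag between two attachment points 2 m apart:
x = np.linspace(0.0, 2.0, 50)
z_true = 0.4 * (x - 1.0)**2 - 0.4           # lowest point at mid-span
a, b, c = fit_parabola(x, z_true)
sag = -(c - b**2 / (4 * a))                  # vertex depth below z = 0
print(a, b, c, sag)                          # recovers a sag of 0.4 m
```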
7.4.2 Estimation of Interaction forces
Participants: Marco Tognon, Massimiliano Bertoni, Lluis Prior.
Together with Massimiliano Bertoni, we are investigating how to estimate interaction forces for aerial manipulators using a standard camera and a deformable end-effector. This approach leverages visual data to observe deformations, eliminating the need for heavy and expensive force-torque sensors. Additionally, in collaboration with Lluis Prior, we are exploring how to equip aerial manipulators with skin-like sensors to provide them with a sense of touch. These advancements aim to enhance the capabilities of aerial manipulators, enabling more precise and robust interaction with their environment.
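A minimal sketch of the idea behind this vision-based force estimation, under the strong simplifying assumption (made here only for illustration; the actual work may rely on a richer deformation model) that the deformable end-effector behaves as a linear spring: after a stiffness calibration, the interaction force can be read off the visually measured tip deflection.

```python
import numpy as np

# Calibration (offline): apply known forces, record the tip deflections
# observed by the camera, and fit a linear stiffness model F = k * d.
deflections = np.array([0.000, 0.002, 0.004, 0.006])  # m, from vision
forces = np.array([0.0, 0.5, 1.0, 1.5])               # N, ground truth
k, _ = np.polyfit(deflections, forces, 1)             # stiffness [N/m]

def estimate_force(tip_rest, tip_measured):
    """Estimate the contact force magnitude from the visually tracked
    displacement of the deformable end-effector tip (linear model)."""
    d = np.linalg.norm(tip_measured - tip_rest)
    return k * d

print(estimate_force(np.array([0.0, 0.0]), np.array([0.003, 0.0])))  # ~0.75 N
```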
7.4.3 Controlled Shaking of Trees With an Aerial Manipulator
Participant: Marco Tognon.
The work in 20 presents a control strategy for shaking flexible systems, such as trees, using an aerial manipulator. Applications include fruit harvesting and environmental monitoring. The proposed method relies on self-excited oscillations to induce vibrations at the system’s natural frequency, maximizing amplification without requiring prior knowledge of system parameters.
A simplified 1-degree-of-freedom model, derived via the Rayleigh–Ritz method, analyzes the dynamic interaction between the aerial manipulator and the tree. Indoor experiments validate the approach, showing accurate predictions of vibration frequency and amplitude, with errors as low as 3.82% and 3.61%, respectively.
This study demonstrates UAVs’ potential for remote interaction with flexible structures, enabling tasks in agriculture and environmental science that typically require large, ground-based equipment.
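The self-excitation principle of 20 can be illustrated numerically: feeding back a bounded force in phase with the measured velocity pumps energy into a 1-DoF oscillator until it settles on a limit cycle near its natural frequency. The modal parameters and the relay-on-velocity law below are illustrative assumptions, not the controller of the paper.

```python
import numpy as np

# 1-DoF tree model (Rayleigh-Ritz style): m*x'' + c*x' + k*x = u
m, c, k = 1.0, 0.2, 25.0           # assumed modal mass/damping/stiffness
U = 1.0                             # bounded excitation force amplitude
dt, T = 1e-3, 20.0

x, v = 0.01, 0.0                    # small initial deflection
amplitude = 0.0
for _ in range(int(T / dt)):
    u = U * np.sign(v)              # self-excitation: push along velocity
    a = (u - c * v - k * x) / m
    v += a * dt
    x += v * dt
    amplitude = max(amplitude, abs(x))

# The limit-cycle frequency is close to the natural frequency sqrt(k/m):
print("natural frequency [rad/s]:", np.sqrt(k / m))
print("steady oscillation amplitude [m]: ~", round(amplitude, 3))
```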
7.4.4 Design and Control of Aerial Manipulators for Physical Interaction Tasks
Participants: Marco Tognon, Lorenzo Balandi, Phillip Maximilian Mehl, Lluis Prior, Mattia Piras, Giulio Franchi.
The design and control of novel aerial manipulators aim to improve precision, stability, and efficiency in mobile robotic applications, particularly for aerial manipulation tasks. Aerial manipulators often face challenges due to reaction forces and torques induced by motion, impacting performance and control complexity.
The work in 71 proposes a 2-degree-of-freedom (DOF) planar manipulator designed with force-balancing principles to minimize reaction forces and torques. The manipulator integrates two four-bar linkages optimized using counter-masses and extended links, resulting in a dynamically balanced structure. Theoretical analyses and simulations highlight a 59% reduction in reaction torques and lower actuation requirements compared to an unbalanced design. The balanced mechanism maintains a constant reaction force vector caused by gravity, enhancing its suitability for mobile platforms, including aerial robots. This work demonstrates the potential of force-balanced manipulators for precise and stable operations, especially in aerial applications where disturbances must be minimized. Future developments may focus on extending this approach to dynamic interactions and load-handling tasks.
The work in 44 proposes a hybrid motion/force control strategy that leverages passive dynamics to improve alignment and stability during surface interactions. Unlike traditional approaches relying solely on active control, this method selectively disables angular motion control along specific axes and enables direct force control, allowing the system to passively align with the surface through rotational dynamics. Theoretical analysis introduces two key conditions—friction-enforcing and rotation-enforcing—which guide hardware design and control implementation to guarantee stable contact and alignment. The framework is validated through real-world experiments using a fully-actuated aerial vehicle, demonstrating its ability to maintain full contact and stability across differently oriented surfaces. This work highlights the potential of passive dynamics for aerial manipulators, enabling robust physical interaction without excessive control complexity. Future research may focus on extending this approach to dynamic and uneven surfaces, further enhancing its applicability in industrial and inspection tasks.
The work in 12 presents Geranos, a novel multirotor aerial robot designed for transporting and assembling poles with high precision and autonomy. The system features a ring-shaped structure that allows it to grasp poles at their center of mass, minimizing inertial effects. A two-part gripping mechanism combines passive centering and self-locking clamps to secure the load without requiring continuous actuation. Geranos uses a tilted-rotor configuration with four primary propellers for vertical lift and four auxiliary propellers for lateral precision. This setup enables full position and attitude control, allowing the robot to move sideways without tilting, ensuring precise placement even for long and heavy poles. Experimental demonstrations highlight Geranos’ ability to autonomously stack 2-meter-long poles with sub-5 cm placement accuracy, showcasing its suitability for applications in construction and infrastructure deployment. Future work will focus on scaling the design for larger payloads and enabling outdoor operations through visual perception and GPS-based localization.
7.4.5 Cooperative Multi-Aerial Robot Manipulation
Participants: Marco Tognon, Szymon Bielenin, Nicola De Carli, Valeria Braglia, Riccardo Belletti, Emanuele Buzzurro.
We are currently developing a novel control framework for cooperative aerial transportation tasks using multiple UAVs. Our approach focuses on a Distributed Nonlinear Model Predictive Control (DNMPC) framework, specifically designed for teams of underactuated aerial robots connected to a payload via cables. Unlike traditional centralized methods, which require high computational resources and extensive communication, we adopt a partition-based distributed optimization approach. This enables each UAV to handle only a subset of the optimization problem, significantly reducing complexity and communication overhead. The framework leverages an Alternating Direction Method of Multipliers (ADMM) algorithm, ensuring scalability as the number of robots increases. The distributed NMPC generates optimal references for low-level controllers, enabling precise trajectory tracking of both the position and orientation (full pose) of the payload. This architecture supports real-time execution, making it suitable for dynamic environments. We validate our framework through simulations and real-world experiments using the Fly-Crane system, where three UAVs equipped with dual cables cooperate to manipulate a load. Results demonstrate the robustness of the approach, showing accurate tracking performance even under constraints, while achieving reduced computational and communication demands. Future developments will explore integrating dynamic load models to further enhance precision and extending the framework to larger fleets of UAVs, maintaining its scalability and adaptability to real-world scenarios.
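The ADMM machinery underlying this distributed NMPC can be illustrated on a toy consensus problem: each "UAV" minimizes a local quadratic cost over its own copy of a shared decision variable (e.g., a payload reference), and the ADMM iterations drive all copies to agreement. This is an assumption-laden simplification for illustration; the actual framework partitions a full nonlinear MPC problem.

```python
import numpy as np

# Each agent i holds a local quadratic cost (x - a_i)^2 on its own copy
# x_i of a shared decision variable; consensus ADMM enforces x_i = z.
a = np.array([1.0, 2.0, 6.0])       # local targets of the three agents
rho = 1.0                            # ADMM penalty parameter
x = np.zeros(3)                      # local copies (one per agent)
lam = np.zeros(3)                    # dual variables
z = 0.0                              # consensus variable

for _ in range(50):
    # Local step (runs in parallel, one per agent):
    #   argmin_x (x - a_i)^2 + lam_i*(x - z) + (rho/2)*(x - z)^2,
    # which has the closed form below.
    x = (2 * a + rho * z - lam) / (2 + rho)
    # Coordination step: average the local copies (plus scaled duals).
    z = np.mean(x + lam / rho)
    # Dual ascent on the consensus constraint x_i = z.
    lam = lam + rho * (x - z)

print(z, x)   # all copies converge to mean(a) = 3.0
```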
7.5 Sensor design for physical interaction and shared control
7.5.1 Capacitive and pressure sensor through one-shot 3D printing
Participants: Marie Babel, José Eduardo Aguilar-Segovia, Sylvain Lefebvre.
Measuring interaction forces between robots and humans is a major challenge in physical human-robot interaction. Conventional force/torque sensors suffer from bulkiness, high cost, and stiffness, which limit their use in soft robotics. Additive manufacturing paves the way for augmenting parts with sensors fabricated in situ, i.e., directly within the part, using functional materials deposited at the same time as the structural materials. However, achieving this goal at low cost remains challenging. The design of parametric capacitive sensors that can be embedded in complex designs is thus of major interest. In particular, such sensors can be manufactured on multi-material extrusion 3D printers using commercially available non-conductive and conductive thermoplastic polyurethane. We propose to design a parameterized foam-like structure sandwiched between two conductive plates, in order to tune both the mechanical and capacitive responses of the sensor. The effect of changing the parameters of the foam-like structure on the sensor behavior is investigated. Conductive traces and shields are directly integrated within the components, alongside all the structural elements that form the designs. The devices are fully functional immediately after fabrication, requiring no additional processing or assembly.
In this context, we introduced in 11 a novel torque sensor manufactured with material extrusion technology. Our approach relies on capacitive structures, which are at the same time the deformable and the sensing parts of the sensor, making it very compact. The sensor characteristics can be modulated thanks to the material extrusion technology. We conducted experiments on a dedicated test bench to characterize the proposed torque sensor. From the characterization results, we implemented a torque estimator based on the deformation angle estimate computed from capacitance changes. The proposed torque sensor is able to measure torques within a ±2.5 N·m range for deformation angle velocities of up to 35 degrees/s. It is also able to measure its deformation angle with a maximum error of 0.4°. The accuracy of our sensor makes it suitable for ensuring fine control in physical human-robot interaction applications.
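As an illustration of the estimation chain of 11, the sketch below maps a capacitance change to a deformation angle and then to a torque using simple calibration fits; the data points and polynomial orders are invented for illustration, whereas the paper's calibration is identified on its dedicated test bench.

```python
import numpy as np

# Assumed calibration data from a test bench: capacitance change (pF)
# versus deformation angle (deg), and angle (deg) versus torque (N.m).
dC  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # pF
ang = np.array([0.0, 2.0, 4.1, 6.3, 8.6])      # deg
tau = np.array([0.0, 0.6, 1.2, 1.8, 2.4])      # N.m

# Fit low-order polynomials for both stages of the estimator.
cap_to_angle = np.polynomial.Polynomial.fit(dC, ang, deg=2)
angle_to_tau = np.polynomial.Polynomial.fit(ang, tau, deg=1)

def estimate_torque(delta_capacitance):
    """Two-stage estimate: capacitance change -> deformation angle
    -> torque, mirroring the structure (not the numbers) of [11]."""
    angle = cap_to_angle(delta_capacitance)
    return angle_to_tau(angle), angle

torque, angle = estimate_torque(2.5)
print(f"angle ~ {angle:.2f} deg, torque ~ {torque:.2f} N.m")
```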
8 Bilateral contracts and grants with industry
8.1 Bilateral contracts with industry
8.1.1 IRT JV Happy2
Participant: François Chaumette.
No Inria Rennes 13521, duration: 72 months.
F. Chaumette has been on secondment (at 20%) at IRT Jules Verne in Nantes since 2018. This year, he was involved in the Happy 2 project with Airbus and Naval Group, providing his expertise in visual servoing and developing basic software with ViSP.
8.1.2 Trasys/NRB
Participants: Romain Lagneau, Fabien Spindler, François Chaumette.
No Inria Rennes 2023000390, duration: 19 months.
This project started in May 2023. It is in collaboration with the Trasys/NRB company in Belgium. Its goal is to develop an embedded vision-based localization system with respect to satellite parts.
8.1.3 Sopra-Steria
Participants: François Pasteau, Marie Babel, Sylvain Guegan, Fabien Grzeskowiak.
INSA Rennes, duration: 12 months.
This project, funded by Sopra Steria, aimed to design new assistive robotics and to support clinical trial activities.
8.2 Bilateral grants with industry
8.2.1 Creative
Participants: Thibault Noël, François Chaumette, Eric Marchand.
No Inria Rennes 2022000032, duration: 36 months.
This project funded by Creative started in October 2021. It supports Thibault Noël's PhD who benefits from a CIFRE grant (see Section 7.1.6).
8.2.2 IRT JV Perform
Participant: François Chaumette.
No Inria Rennes 16107, duration: 36 months.
This project funded by IRT Jules Verne started in November 2021. It is carried out in cooperation with Stéphane Caro from LS2N in Nantes to support Sophie Rousseau's PhD at IRT Jules Verne on sensor-based control and vibration reduction of cable-driven parallel robots.
8.2.3 Human-centered shared control for robotic telemanipulation at different scales
Participants: Claudio Pacchierotti, Paolo Robuffo Giordano.
CIFRE convention no 2023/1119. Duration: 36 months.
This project is funded by Haption (Laval, France) and supports the PhD of Paul Mefflet on shared control for tele-manipulation.
9 Partnerships and cooperations
9.1 International initiatives
9.1.1 Participation in other International Programs
BIFROST
Participants: Mandela Ouafo Fonkoua, Alexandre Krupa, François Chaumette, Fabien Spindler.
- Title: A Visual-Tactile Perception and Control Framework for Advanced Manipulation of 3D Compliant Objects
- Duration: July 2021 - December 2025
- Coordinator: Sintef Ocean (Norway)
- Partners:
  - Sintef Ocean (Norway)
  - MIT (USA)
- Inria contact: Alexandre Krupa
- Summary:
This project is granted by The Research Council of Norway. Its main objective is to develop a visual-tactile perception and control framework for advanced manipulation of 3D compliant objects. The Rainbow group is in charge of elaborating novel visual servoing approaches fusing visual and tactile feedback for dexterous manipulation of soft objects.
9.2 International research visitors
9.2.1 Visits to international teams
Research stays abroad
- Alexandre Krupa spent one week at the SINTEF Ocean Institute in Trondheim, Norway, as part of the BIFROST collaborative project (August 2024).
- Marco Tognon spent one week at the Department of Engineering Cybernetics, NTNU - Norwegian University of Science and Technology, in Trondheim, Norway, hosted by Prof. Kostas Alexis, as part of the mobilité Asgard 2024 program (June 2024).
9.3 European initiatives
9.3.1 Horizon Europe
REGO
Participants: Claudio Pacchierotti, Paolo Robuffo Giordano, Marco Ferro.
REGO project on cordis.europa.eu
- Title: Cognitive robotic tools for human-centered small-scale multi-robot operations
- Duration: From October 1, 2022 to September 30, 2026
- Partners:
  - INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
  - CENTRE HOSPITALIER UNIVERSITAIRE DE RENNES (CHU RENNES), France
  - UNIVERSITEIT TWENTE (UNIVERSITEIT TWENTE), Netherlands
  - SCUOLA SUPERIORE DI STUDI UNIVERSITARI E DI PERFEZIONAMENTO S ANNA (SSSA), Italy
  - FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA (IIT), Italy
  - HAPTION SA (HAPTION), France
  - CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
  - HELMHOLTZ-ZENTRUM DRESDEN-ROSSENDORF EV (HZDR), Germany
- Inria contact: Claudio Pacchierotti
- Coordinator: Claudio Pacchierotti
- Summary:
Robots are still often regarded as large machines with links, gears, and electric motors, autonomously interacting with the surrounding environment. Despite the great research efforts in robotics and human-robot interaction (HRI), the way we design, use, and control robots has not fundamentally changed in the past 20 years. We see in small-scale wireless multi-robot systems and cognitive HRI a revolutionary answer to the limitations of today's robots. Instead of large, tethered machines that are difficult for the human user to control, REGO proposes an innovative set of AI-powered, modular, micro-sized swarms of robots. They are wirelessly steered by electromagnetic fields, are able to react to other external stimuli, and are naturally controlled by humans through intuitive dexterous interfaces and interaction techniques. Taking advantage of AI multi-robot control strategies, these robots can team up and collaborate to fulfill complex tasks in a robust and unprecedentedly flexible way. By exploiting multisensory interaction techniques and cognitive shared control, the operator will achieve an unparalleled level of seamless interaction and continuous collaboration with the robotic team. According to the application at hand, the robotic team will feature different task-specific characteristics (e.g., biocompatibility for medical procedures, biodenitrification for cleaning water, the ability to carry drugs to fight infections) and be dispatched through various delivery systems, including a stimuli-responsive milli-scale wireless robotic carrier developed within the project. To achieve this revolution, REGO will develop magnetic multi-robot motion control systems, autonomous swarm control techniques for micro-sized robots, human-robot haptic-centered interfaces, and cognitive shared-control techniques. REGO enables the next generation of AI-powered interactive small-scale multi-robot systems, with increased capabilities to work with each other and their human operators.
GuestXR
Participant: Claudio Pacchierotti.
GuestXR project on cordis.europa.eu
- Title: GuestXR: A Machine Learning Agent for Social Harmony in eXtended Reality
- Duration: From January 1, 2022 to December 31, 2025
- Partners:
  - INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
  - UNIWERSYTET WARSZAWSKI (UNIWARSAW), Poland
  - VIRTUAL BODYWORKS SL (Virtual Bodyworks S.L.), Spain
  - UNIVERSITEIT MAASTRICHT, Netherlands
  - UNIVERSITAT DE BARCELONA (UB), Spain
  - FUNDACIO EURECAT (EURECAT), Spain
  - REICHMAN UNIVERSITY (REICHMAN UNIVERSITY), Israel
  - CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
  - G.TEC MEDICAL ENGINEERING GMBH (G.TEC MEDICAL ENGINEERING GMBH), Austria
- Inria contact: Anatole Lécuyer
- Coordinator:
- Summary: Immersive online social spaces will soon become ubiquitous, but social media offers a warning that we need to heed. User content is the “lifeblood of social media”, yet it often stimulates antisocial interaction and abuse, ultimately posing a danger to vulnerable adults, teenagers, and children. In the VR space, this is backed up by the experience of current virtual shared spaces: while they have many positive aspects, they have also become a space full of abuse. Our vision is to develop GuestXR, a socially interactive multisensory platform that uses eXtended Reality (virtual and augmented reality) as the medium to bring people together for immersive, synchronous face-to-face interaction with positive social outcomes. The critical innovation is the intervention of artificial agents that learn over time to help the virtual social gathering realise its aims. This agent, which we refer to as “The Guest”, exploits Machine Learning to learn how to facilitate the meeting towards specific outcomes. Underpinning this is neuroscience and social psychology research on group behaviour, which will deliver rules to Agent-Based Models (ABM). The combination of AI with immersive systems (including haptics and immersive audio), virtual and augmented reality will be a hugely challenging research task, given the vagaries of social meetings and individual behaviour. Several proof-of-concept applications will be developed during the project, including a conflict resolution application in collaboration with the UN. A strong User Group made up of a diverse range of stakeholders from industry, academia, government and broader society will provide continuous feedback. An Open Call will be held to bring in artistic support and additional use cases from wider society. Significant work is dedicated to ethics “by design”, to identify problems and look eventually towards an appropriate regulatory framework for such socially interactive systems.
9.4 National initiatives
9.4.1 Equipex+ Tirrex
Participants: Fabien Spindler, François Chaumette.
n° Inria Rennes OIP 03-22-01, duration: 8 years.
This large national project devoted to open robotics platforms started in December 2021. Rainbow is responsible for the manipulation axis, for which a new M4 platform (Multi-arm Multi-sensor Mobile Manipulator) will be designed and installed in our lab. This year, we concluded the negotiations with potential suppliers and selected PAL Robotics as the provider. The M4 platform should be delivered in April 2025. Rainbow is also a member of the axis devoted to aerial robotics.
9.4.2 PEPR O2R
Participants: Maud Marchal, Marie Babel.
duration: 8 years.
The Organic Robotics program aims to develop responsible and socially acceptable robotics. This PEPR will intensify the multidisciplinary approach of the community (digital sciences, life sciences, engineering, environmental and social sciences) in a strategy that radically differs from the current vision of robotics and its limitations. The Organic Robotics program therefore aims to initiate a shift in robotics, enabling the creation of a new generation of robots capable of interacting and working in symbiosis with humans. We propose to consider the robot no longer as an automation machine, but as a tool, in line with those that humans have created, used and optimized in order to explore and act on their environment. More efficient, modular, reconfigurable and adaptive, organic robots will become an extension of humans. The PEPR O2R started in September 2023. M. Marchal contributed to the proposal writing and is a member of the executive committee.
Marie Babel participates in the ASSISTMOV project, which aims to design innovative robotic assistance through an upper-limb exoskeleton. She is a member of the steering committee of the ASSISTMOV project.
9.4.3 PEPR O2R - AS2 structuring action
Participants: Alexandre Krupa, Fabien Spindler.
- Title: PEPR O2R - AS2 structuring action “Robot motion with physical interactions and social adaptation”
- Duration: January 2024 - December 2031
- Coordinator: LAAS-CNRS (Toulouse)
- Partners: Gepetto (LAAS), IDH (LIRMM), Willow (Inria), Auctus (Inria), Rainbow (Inria), Robioss (PPRIME), CERCA (CNRS), ICNA (ONERA), Habiter le Monde (UPJV)
- Inria contact: Alexandre Krupa
- Summary: In the context of the national PEPR O2R robotic exploration program, Alexandre Krupa is involved in the AS2 structuring action. This structuring action aims to rethink the problem of motion generation in robotic systems, taking a global approach and redefining research objectives in conjunction with the Human and Social Sciences. Within this AS2 action, the Rainbow group is involved in the development of multi-sensor strategies for the control of physically interacting robotic systems. Since January 2024, Alexandre Krupa has been co-supervising a PhD student based at LIRMM with Philippe Fraisse (LIRMM) and Andrea Cherubini (LS2N), whose thesis focuses on the robotic manipulation of deformable objects using the LIRMM dual-arm robot BAZAR.
9.4.4 PEPR eNSEMBLE
Participant: Maud Marchal.
duration: 8 years.
The purpose of eNSEMBLE (Future of Digital Collaboration) is to fundamentally redefine digital tools for collaboration. To address this challenge, a paradigm shift in the design of collaborative systems is needed, comparable to the one that saw the advent of personal computing. To collaborate in a fluid and natural way while taking advantage of computer capabilities, collaboration and sharing must become native features of computer systems, in the same way that files or applications are today. To achieve this goal, we need to invent mixed (i.e., physical and digital) collaboration spaces that do not simply replicate the physical world in virtual environments, enabling co-located and/or geographically distributed teams to work together smoothly and efficiently. The PEPR eNSEMBLE started in September 2023. M. Marchal contributed to the writing of the proposal.
9.4.5 ANR Marsurg
Participants: Eric Marchand, François Chaumette, Fabien Spindler.
n° Inria 16162, duration: 48 months.
This project started in September 2021. It involves a consortium managed by ISIR (Paris) with Pixee Medical and the Rainbow group. It aims at researching markerless augmented reality solutions for orthopedic surgery.
9.4.6 ANR Sesame
Participants: François Chaumette, Alessandro Colotti.
n° Inria 13722, duration: 70 months.
This project started in January 2019. It involves a consortium managed by LS2N (Nantes) with LIP6 (Paris) and the Rainbow group. It aims at analysing singularity and stability issues in visual servoing.
9.4.7 Inria Challenge DORNELL
Participants: Marie Babel, Claudio Pacchierotti, Maud Marchal, François Pasteau, Sylvain Guegan, Louise Devigne, Marco Aggravi, Inès Lacôte, Pierre-Antoine Cabaret, Lisheng Kuang.
- Title: DORNELL: A multimodal, shapeable haptic handle for mobility assistance of people with disabilities
- Duration: November 2020 - December 2024
- Coordinators: Marie Babel, Claudio Pacchierotti
- Partners:
  - Potioc Inria team
  - MFX Inria team
  - LGCGM (Rennes)
  - Centre de rééducation Pôle Saint Hélier (Rennes)
  - ISIR (Paris)
  - Institut des jeunes aveugles (Yzeure)
- Inria contact: Marie Babel, Claudio Pacchierotti
- Summary: While technology helps people to compensate for a broad set of mobility impairments, visual perception and/or cognitive deficiencies still significantly affect their ability to move safely and easily. We propose an innovative multisensory, multimodal, smart haptic handle that can be easily plugged onto a wide range of mobility aids, including white canes, precanes, walkers, and power wheelchairs. Specifically fabricated to fit the needs of a person, it provides a wide set of ungrounded tactile sensations (e.g., pressure, skin stretch, vibrations) in a portable and plug-and-play format – bringing haptics to assistive technologies all at once. The project will address important scientific and technological challenges, including the study of multisensory perception, the use of new materials for multimodal haptic feedback, and the development of a haptic rendering API to adapt the feedback to different assistive scenarios and users' wishes. We will co-design DORNELL with users and therapists, driving our development by their expectations and needs.
9.4.8 BPI Lichie
Participants: John Thomas, François Chaumette.
n° Inria 14876, duration: 72 months.
This project started in March 2020. It involves a consortium managed by Airbus Defence and Space (Toulouse) with many companies, Onera and Inria. It aims at designing a new constellation of satellites with on-board imaging facilities. Robotics for the assembly of the satellites is also studied. As for Rainbow, this project funded the PhDs of Maxime Robic and John Thomas (see Sections 7.1.5 and 7.1.4).
9.4.9 ANR CAMP
Participants: Paolo Robuffo Giordano, Fabien Spindler, Ali Srour, Tommaso Belvedere, Salvatore Marcellini.
- Title: Intrinsically-Robust and Control-Aware Motion Planning for Robots in Real-World Conditions
- Duration: October 2020 - June 2026
- Coordinator: P. Robuffo Giordano
- Partners:
  - LAAS (Toulouse)
  - Univ. Twente (Netherlands)
- Inria contact: P. Robuffo Giordano
- Summary: An effective way of dealing with the complexity of robots operating in real (uncertain) environments is the paradigm of “feedforward/feedback” or “planning/control”: in a first step, a suitable nominal trajectory (feedforward) for the robot states/controls is planned exploiting the available information (e.g., a model of the robot and of the environment); in a second step, a feedback controller tracks this nominal trajectory online to compensate for deviations. While there has been an effort in proposing “robust planners” or more “global controllers” (e.g., Model Predictive Control (MPC)), a truly unified approach that fully exploits the techniques of the motion planning and control/estimation communities is still missing, and the existing state of the art has several important limitations, namely (1) lack of generality, (2) lack of computational efficiency, and (3) poor robustness. In this respect, the ambition of CAMP is to (1) develop a general and unified “intrinsically-robust and control-aware motion planning framework” able to address all the above-mentioned issues, and to (2) demonstrate the applicability of this new framework to real robots in real-world challenging tasks. In particular, we envisage two robotics demonstrators for showing at best the effectiveness and generality of our methodology: (1) an indoor pick-and-place/assembly task involving a 7-dof torque-controlled arm for a first validation in “controlled conditions”, and (2) an outdoor cooperative mobile manipulation task involving an aerial manipulator (a quadrotor UAV equipped with an onboard arm) and a skid-steering mobile robot with an onboard arm for a final validation in much less favorable experimental conditions (see Sect. 7.1.1). A toy illustration of this two-step decomposition is sketched below.
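The following minimal Python sketch is purely illustrative (it is not the CAMP framework itself; the integrator dynamics, gains, and disturbance model are invented assumptions). It plans a nominal trajectory offline and then tracks it online with a feedback law that rejects a disturbance unknown to the planner:

    # Illustrative sketch of the planning/control paradigm (assumed toy model).
    import numpy as np

    dt, T = 0.01, 2.0
    t = np.arange(0.0, T, dt)

    # Step 1 (planning): nominal trajectory for a 1D integrator from x=0 to x=1,
    # using a smooth polynomial profile; its derivative is the feedforward input.
    s = t / T
    x_ref = 3 * s**2 - 2 * s**3
    u_ff = np.gradient(x_ref, dt)

    # Step 2 (control): feedback tracks the nominal trajectory despite a
    # disturbance that the planner did not model.
    k_p = 5.0                                  # proportional gain (assumed)
    x = 0.0
    for i in range(len(t)):
        disturbance = 0.2 * np.sin(4 * t[i])   # unmodeled perturbation
        u = u_ff[i] + k_p * (x_ref[i] - x)     # feedforward + feedback
        x += (u + disturbance) * dt            # integrator dynamics
    print(f"final state: {x:.3f} (target: 1.000)")

Without the feedback term, the disturbance would accumulate and the final state would drift away from the target; the combination of the two terms is what the "feedforward/feedback" paradigm refers to.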
9.4.10 ANR MULTISHARED
Participants: Paolo Robuffo Giordano, Claudio Pacchierotti, Vincent Drevelle, Nicola De Carli, Maxime Bernard, Esteban Restrepo.
- Title: Shared-Control Algorithms for Human/Multi-Robot Cooperation
- Duration: September 2020 - October 2025
- Coordinator: P. Robuffo Giordano
- Inria contact: P. Robuffo Giordano
- Summary: The goal of the Chaire AI MULTISHARED is to significantly advance the state of the art in multi-robot autonomy and human/multi-robot interaction, allowing a human operator to intuitively control the coordinated motion of a multi-UAV group navigating in remote environments. A strong emphasis is placed on the division of roles between the multi-robot autonomy (in controlling its motion/configuration and in online decision-making) and the human intervention/guidance, which provides high-level commands to the group while remaining fully aware of the group status via VR and haptics technology (see Sect. 7.1.8).
9.4.11 ANR JCJC AirHandyBot
Participants: Marco Tognon, Paolo Robuffo Giordano, Lorenzo Balandi, Mattia Piras, Maximilian Mehl, Gianluca Corsini, Fabien Spindler.
- Title: Aerial Robots for True Manipulation of Dynamic and Uncertain Environments
- Duration: November 2023 - October 2026
- Coordinator: M. Tognon
- Inria contact: M. Tognon
- Summary: One of the main goals of robotics is to realize autonomous systems that can help human operators in tasks that are hard and dangerous (e.g., in elevated areas). It is therefore important to conceive robots that can perform physical work, executing complex tasks that require interaction with the environment and the manipulation of objects. In particular, aerial robots able to interact with the environment would open the door to new applications in dangerous and hardly accessible areas, such as manipulation of objects, contact-based inspection, and construction. Aiming to show the feasibility of Aerial Physical Interaction (APhI), previous works focused on the design and control of aerial manipulators. However, current investigations and applications are still limited to simple interaction tasks, involving limited contact behaviors with static and rigid surfaces, moreover performed in known and structured environments. To deploy aerial manipulators in real scenarios, they must be able to perform more complex manipulation tasks in less structured situations. Because of the application, scientific interest, and possible future impact of APhI, AirHandyBot aims to enhance the APhI capabilities of highly dynamical aerial manipulators by considering: (i) manipulation tasks on movable and articulated objects, relying on onboard sensors only; (ii) real scenarios characterized by disturbances and uncertainties due to system modeling errors, noisy and imprecise measurements coming from lightweight onboard sensors, imprecise actuation models due to complex aerodynamic effects, and partially unknown environments. The investigation, including fundamental theoretical results, real experiments and practical demonstrations, will focus on the design of new conception, modeling and control methods to make aerial robots much more precise, robust and safe while performing physical interaction tasks in real environments. This will allow aerial robots to become, in the future, valuable companions of human operators.
9.4.12 AeX AEROTouch
Participants: Marco Tognon, Paolo Robuffo Giordano, Lluis Prior, Gianluca Corsini, Fabien Spindler.
- Title: Aerial Robots with the Sense of Touch
- Duration: November 2023 - October 2026
- Coordinator: M. Tognon
- Inria contact: M. Tognon
- Summary: Researchers are trying to make aerial robots perform physical work. Current methodologies show promising results, but they fail in real scenarios, mostly because of inaccurate visual perception. Inspired by nature, this project investigates how to also provide aerial robots with the sense of touch and how to use it for improving their manipulation capabilities.
9.5 Regional initiatives
9.5.1 CominLabs MAMBO
Participants: Lev Smolentsev, Alexandre Krupa, François Chaumette, Paolo Robuffo Giordano, Fabien Spindler.
- Title: Manipulation of Soft Bodies with Multiple Drones
- Duration: October 2020 - December 2024
- Coordinator: LS2N (Nantes)
- Partners:
  - LS2N (Nantes)
- Inria contact: Alexandre Krupa
- Summary: This project was funded by the Labex CominLabs. It was led by the ARMEN team at LS2N (Nantes) and involved the collaboration of the Rainbow Project-Team. Its objective was to propose a scientific framework for allowing the manipulation of an object by the combined action of two drones. The proposed solution was to manipulate a deformable body (a slender beam or a cable) attached between the two drones in order to grasp an object on the floor and move it to another location. In the scope of this project, the Rainbow group was involved in the elaboration and experimental validation of new approaches for controlling the two drones by visual servoing, using data provided by an onboard RGB-D camera (see Section 7.4.1). A generic sketch of the visual servoing law underlying this family of approaches is given below.
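For readers unfamiliar with visual servoing, the following minimal Python sketch shows the classical image-based control law v = -lambda * L^+ (s - s*) on which this family of approaches builds. It is a generic textbook form, not the project's actual controller, and the feature values and interaction matrix entries are placeholder assumptions:

    # Generic image-based visual servoing step (placeholder values, assumed).
    import numpy as np

    lam = 0.5                               # convergence gain (assumed)
    s_star = np.array([0.0, 0.0, 0.10])     # desired image features
    s = np.array([0.05, -0.02, 0.12])       # currently measured features
    L = np.array([[-1.0,  0.0,  0.05],      # interaction matrix mapping camera
                  [ 0.0, -1.0, -0.02],      # velocity to feature motion
                  [ 0.0,  0.0, -1.00]])     # (placeholder entries)

    e = s - s_star                          # visual error
    v = -lam * np.linalg.pinv(L) @ e        # camera velocity command
    print("camera velocity command:", v)

Applying this velocity command in closed loop drives the measured features s toward their desired values s*, which is what steers the camera (here, the drone-mounted RGB-D sensor) toward the desired configuration.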
9.5.2 CominLabs EM-ART
Participants: Marco Ferro, Claudio Pacchierotti, Paolo Robuffo Giordano.
- Title: Electromagnetic artificial human: paradigm shift in dosimetry for 5G and beyond
- Duration: June 2022 - December 2025
- Coordinator: IETR (Rennes)
- Inria contact: Claudio Pacchierotti
- Summary: The growth of mobile data traffic driven by wireless user terminals and data-intensive applications has led to a surge in demand for ultra-low latency and ultra-high data rates. This has prompted the wireless industry to explore underused spectrum above 6 GHz for the development of 5G/6G mobile communications. However, the shift to higher microwave frequencies poses challenges for exposure assessment due to limitations in conventional dosimetry techniques. EM-ART aims to address this by proposing a novel approach for accurate, realistic, and high-sensitivity dosimetry measurements at frequencies relevant to 5G/6G. The goal is to address public concerns about environmental safety and facilitate the certification of emerging millimeter-wave technologies in 5G devices.
9.5.3 Ambrougerien
Participants: Marie Babel, François Pasteau, Vincent Drevelle, Theo Le Terrier.
- Title: Autonomie, MoBilité et fauteuil ROUlant robotisé : GEolocalisation indoor et Recharge IntelligENte
- Duration: December 2020 - December 2024
- Coordinator: DK Innovation (Plérin)
- Partners:
  - INSA Rennes
  - Hoppen (Rennes)
  - Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
- Inria contact: Marie Babel
- Summary: This project started in December 2020 and is supported by the Brittany region and Rennes Métropole. AMBROUGERIEN aims at supporting the independence of people in electric wheelchairs. A dedicated interface allows the wheelchair to move autonomously to secure the transfer and to return to an intelligent induction recharging base. Information on the internal state of the wheelchairs facilitates fleet management.
9.5.4 Academic Chair IH2A
Participants: Marie Babel, François Pasteau, Vincent Drevelle, Theo Le Terrier, Louise Devigne, Emilie Leblong, Anne-Hélène Olivier, Claudio Pacchierotti, Maud Marchal, Fabien Grzeskowiak, Maxime Manzano, Mael Gallois.
- Title: Academic Chair on Innovations, Handicap, Autonomy and Accessibility (IH2A)
- Duration: December 2020 - December 2024
- Coordinator: Marie Babel
- Partners:
  - INSA Rennes
  - Université de Rennes
  - Université Rennes 2
  - Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
  - CHU Pontchaillou Rennes
  - M2S
- Inria contact: Marie Babel
- Summary: This research chair (Innovations, Handicap, Autonomy and Accessibility - IH2A) aims to propose the most appropriate technological solutions to compensate for sensorimotor impairments that limit people's mobility and autonomy in everyday tasks and leisure activities. The IH2A Chair aims to structure these activities from a social, scientific and clinical point of view and to be an effective and innovative tool for the development of large-scale research in this field. The creation of a new type of multidisciplinary research and of innovative collaborative experiments will allow the clinical and scientific validation of the technical assistance offered, while ensuring the accessibility of the solutions deployed.
9.5.5 Hubert
Participants: Marie Babel, François Pasteau, Vincent Drevelle, Fabien Grzeskowiak.
- Title: Hubert
- Duration: January 2023 - December 2025
- Coordinator: BA Healthcare (Pacé)
- Partners:
  - INSA Rennes
  - CIMTECH (Pacé)
  - Centre de médecine physique et de réadaptation Pôle Saint Hélier (Rennes)
- Inria contact: Marie Babel
- Summary: The aim of this project is to create a range of robotized walkers for geriatric use in health and social care establishments, to give residents or patients greater independence. Inspired by existing prototypes at BA Healthcare, this new range will be aimed at two types of user: those who still have some, but reduced, mobility, and those with little or no mobility. The aim is to offer users an aid to mobility and to the transition from sitting to standing.
10 Dissemination
10.1 Promoting scientific activities
10.1.1 Scientific events: organisation
General chair, scientific chair
- M. Marchal has been General Chair of Eurohaptics 2024, Lille, France, July 2024.
- M. Marchal was a co-organizer of the joint days of the GdR IG-RV and AFIHM, Lille, France, May 2024.
- M. Babel has been co-general chair of the Journées Nationales du GdR Robotique, Paris, France, 2024.
Member of the organizing committees
- C. Pacchierotti has been Program Co-Chair of the Eurohaptics Conference, Lille, France, 2024.
10.1.2 Scientific events: selection
Chair of conference program committees
- C. Pacchierotti has been part of the Conference Editorial Board, Eurohaptics Conference, Lille, France, 2024.
- M. Tognon has been an area chair of the Robotics: Science and Systems conference, Delft, Netherlands, 2024.
Member of the conference program committees
- M. Marchal was a Supercommittee member of IEEE VR 2024, Orlando, US.
- M. Marchal was an International Program committee member of IEEE ISMAR 2024, Seattle, US.
- M. Marchal was an International Program committee member of ACM/Eurographics SCA 2024, Montreal, Canada.
Reviewer
- P. Robuffo Giordano: IEEE ICRA (1)
- F. Chaumette: IEEE IROS (2), ICSTCC (1)
- A. Krupa: IEEE ICRA (1)
- C. Pacchierotti: IEEE ICRA (3), ICUAS (1), IEEE Humanoids (4), IEEE IROS (3)
- E. Restrepo: IEEE CDC (2), ACC (1)
- M. Tognon: IEEE IROS (1), ICRA (1)
- V. Drevelle: IEEE IROS (1)
- M. Marchal: IEEE ISMAR (4), ACM Siggraph (2)
10.1.3 Journal
Member of the editorial boards
- P. Robuffo Giordano is Editor of the IEEE Transactions on Robotics
- C. Pacchierotti is Associate Editor of the IEEE Transactions on Haptics
- C. Pacchierotti is Associate Editor of the International Journal of Robotics Research
- M. Tognon is Associate Editor of the IEEE Transactions on Robotics
- M. Babel is Associate Editor of Springer Social Robotics and IEEE Robotics and Automation Letters
- M. Marchal is Associate Editor-In-Chief of IEEE Transactions on Visualization and Computer Graphics
- M. Marchal is Associate Editor of IEEE Transactions on Haptics
- M. Marchal is Associate Editor of ACM Transactions on Applied Perception
- M. Marchal is Associate Editor of Computers & Graphics
Reviewer - reviewing activities
- F. Chaumette: IEEE Transactions on Industrial Electronics (1)
- A. Krupa: IEEE T-RO (1)
- C. Pacchierotti: IEEE Transactions on Human-Machine Systems (1), Journal of Field Robotics (2), Advanced Intelligent Systems (1), IEEE Transactions on Haptics (4), IEEE Transactions on Visualization and Computer Graphics (3), SPJ Research (1), Science Advances (2), IEEE Robotics and Automation Letters (1), IEEE Transactions on Robotics (1), Science Robotics (1)
- E. Restrepo: IEEE T-RO (2), IEEE TAC (3), IEEE TCNS (1), IEEE TCST (1), Automatica (1), IEEE CYB (1)
- M. Marchal: ACM Trans. on Graphics (1)
10.1.4 Invited talks
- P. Robuffo Giordano. “How to Review a Scientific Paper: Some Guidelines”. IEEE RAS Young Reviewers Program (YRP) Event@ICRA24, May 2024
- P. Robuffo Giordano. “Intrinsic Robust Planning for Uncertain Robots”. Sapienza PhD ABRO Lectures, Sapienza Univ, Rome, Italy, June 2024
- P. Robuffo Giordano. “Recent Advances in Shared Control for Telemanipulation and Tele-navigation at the Macro and Micro scale”. IROS 2024 Workshop on Multisensory Transparency-Augmented Teleoperation in Extreme Environments, October 2024
- P. Robuffo Giordano. “Intrinsic Robust Planning for Uncertain Robots”. RPC Doctoral Day, LS2N, Nantes, France, November 2024
- P. Robuffo Giordano. “Intrinsic Robust Planning for Uncertain Robots”. ONERA, Toulouse, France, December 2024
- C. Pacchierotti. “Immersive Virtual Reality and Wearable Haptics.” Workshop on “Enabling Haptic Interaction in Extended Reality: Challenges, Directions, and Opportunities” at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Seattle, USA, 2024.
- C. Pacchierotti. “Haptics for biomedical applications.” Seminar for the M2 course on Biomedical Engineering, University of Twente, Enschede, The Netherlands, 2024.
- C. Pacchierotti. “Cutaneous haptics in human-centered robotics.” Seminar for the Robotics and Mechatronics (RaM) group, University of Twente, Enschede, The Netherlands, 2024.
- C. Pacchierotti. “Cutaneous haptics in human-centered robotics and immersive interaction.” University of Lisbon, Lisbon, Portugal, 2024.
- C. Pacchierotti. “RĔGO: Cognitive robotic tools for human-centered small-scale multi-robot operations.” Workshop on “Enabling artificial agents to communicate with humans through touch” at Eurohaptics, Lille, France, 2024.
- C. Pacchierotti. “Beyond force feedback: the role of cutaneous haptics in human-centered robotics.” Keynote presentation at the IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024.
- C. Pacchierotti. “The potential of haptic feedback for medical robotics: from robot-assisted surgery to microrobotics.” Workshop on “Translational Research in Medical Robotics: From Lab Bench to Clinical Use” at IEEE ICRA, Yokohama, Japan, 2024.
- C. Pacchierotti. “Wearable haptics for immersive experiences.” Keio University Graduate School of Media Design, Tokyo, Japan, 2024.
- C. Pacchierotti. “Haptics for human-centered robotics and Virtual Reality.” Seminar for the Dept. of Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar, online, 2024.
- E. Restrepo. “Open Multi-Robot Systems: Towards Resilient Robotic Teams.” Seminar for the ARS Control group, University of Modena and Reggio Emilia, Reggio Emilia, Italy, 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. University of Tokyo, Tokyo, Japan. May 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. Norwegian University of Science and Technology, Trondheim, Norway. June 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. Polytechnic of Turin, Turin, Italy. September 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. Laboratoire des Sciences du Numérique de Nantes (LS2N) CNRS, Nantes, France. December 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. University of Strasbourg, Strasbourg, France. December 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. IDSIA USI-SUPSI Istituto Dalle Molle di studi sull'intelligenza artificiale, Lugano, Switzerland. December 2024.
- M. Tognon. “Advancements in aerial physical interaction: design, control and collaborations”. Polytechnic of Milan, Milan, Italy. December 2024.
- M. Babel. “Co-design and evaluation of robotic mobility assistance for people with disabilities: from need to use”. LIG keynote speech, Grenoble, October 2024.
- M. Babel. “Co-conception et évaluation clinique d'un simulateur multisensoriel de conduite de fauteuil roulant électrique” (Co-design and clinical evaluation of a multisensory power wheelchair driving simulator). IFRATH, Paris, May 2024.
- M. Marchal. “Exploring novel haptic modalities within mixed environments”, Keynote speaker, IEEE International Symposium on Mixed and Augmented Reality, Seattle, US, October 2024.
- M. Marchal. “Playing with tangibles in Virtual Reality”, Keynote speaker, ACM Symposium on Virtual Reality Software and Technology, Trier, Germany, October 2024.
- M. Marchal. “Is data the only lever for designing interactive simulations?”, Keynote speaker, ACM/Eurographics Symposium on Computer Animation, Montreal, Canada, August 2024.
- M. Marchal. “(Im)possible deformations in robotics: historical evolution”. Keynote speaker, Summer School on deformable robotics, Lille, July 2024.
10.1.5 Leadership within the scientific community
- Marie Babel serves as the Deputy Director of the GdR Robotique
- C. Pacchierotti is Distinguished Lecturer of the IEEE Robotics & Automation Society for the field of haptics (Region 8: Europe, Middle East and Africa, 2024 – 2026).
- C. Pacchierotti is Co-Chair of the IEEE Technical Committee on Telerobotics (2023 – 2026).
- C. Pacchierotti is Senior Chair of the IEEE Technical Committee on Haptics (2021 – 2024).
- C. Pacchierotti is the co-organizer (co-animateur) of scientific theme 4 “Human-centered robotics” (TS4 “Robotique centrée sur l'humain”, 2024 onwards) of the GdR Robotique.
- P. Robuffo Giordano is an elected member of Section 07 of the Comité National de la Recherche Scientifique.
- M. Tognon is the co-organizer (co-animateur) of scientific theme 3 “Heterogeneity and Complexity” (TS3 “Hétérogénéité et Complexité”, 2024 onwards) of the GdR Robotique.
- M. Tognon is Co-Chair of the IEEE Technical Committee on Aerial Robotics and UAVs (2024 onwards).
- F. Chaumette serves as a member of the Scientific Council of the Mathematics and Computer Science Department of INRAE. He is also a founding member of the Scientific Council of the GdR Robotique.
- M. Marchal serves as Deputy Director of the GdR IG-RV.
- M. Marchal is a member of the Eurographics steering committee and a member of the diversity board.
- M. Marchal is a member of the ISMAR Steering Committee.
10.1.6 Scientific expertise
- P. Robuffo Giordano served as an expert/reviewer for the euRobotics “Georges Giralt” award for the best European PhD thesis in robotics. He was a reviewer for the H2020 projects ACROBA and SIMAR, and for EU Consolidator Grants. He was a member of the HCERES committee evaluating the Heudiasyc lab, and a committee member of the Chaire professeur junior (CPJ) CNRS “Véhicules autonomes et transport”, Heudiasyc, France. Finally, he is a member of the Steering Committee (Comité de Pilotage) of the PEPR “Accélération en Robotique”.
- F. Chaumette served as the Chair of the 2024 IEEE ICRA Best Paper Award in Robot Vision committee. He presided over the Steering Committee of the AIST/CNRS JRL lab at Tsukuba, held in May 2024. He also served on the selection committee for a Professor position at Université Clermont Auvergne and for an Assistant Professor position at Université Picardie Jules Verne, as well as on the selection committee for two INRAE Researcher positions.
- M. Babel is the vice-president of the Comité d'évaluation of the ANR (CE33 - Interaction, robotique). Since 2017, she has served as an expert for the International Mission of the French Research Ministry (MEIRIES). She also serves as a member of the Selection and Validation Committee of the Pôle Images et Réseaux.
- M. Marchal was the IEEE VGTC Best VR Dissertation award chair for 2024. She was also a member of the Best Eurohaptics Dissertation award committee, as well as a member of the “Rising Stars in Computer Graphics” program committee.
10.1.7 Research administration
- V. Drevelle is a member of the laboratory council of IRISA.
- A. Krupa is the president of the CUMIR (“Commission des Utilisateurs des Moyens Informatiques pour la Recherche”) of Centre Inria de l'Université de Rennes since February 2023.
- C. Pacchierotti and F. Spindler are part of the Comité de centre (2023 – 2026), Centre Inria de l’Université de Rennes.
- F. Chaumette is a member of the Inria COERLE committee (in charge of the ethical aspects of all Inria research) since 2019.
- E. Marchand is head of the Matisse Doctoral School (ED 601, Université de Rennes).
- M. Marchal is an elected member of the Scientific Council of INSA Rennes and the Council of the IRISA component of INSA Rennes.
10.2 Teaching - Supervision - Juries
10.2.1 Teaching
François Chaumette:
- Master SIVOS: “Visual Servoing”, 9 hours, M2, Université de Rennes
- Master ENS: “Visual servoing”, 6 hours, M1, Ecole Nationale Supérieure de Rennes;
- Master ESIR3: “Visual servoing”, 9 hours, M2, Ecole supérieure d'ingénieurs de Rennes.
Alexandre Krupa:
- Master ESIR3: “Ultrasound visual servoing”, 9 hours, M2, Esir Rennes
Eric Marchand:
- Master Esir2: “BINP”, 9 hours, M1, Esir Rennes
- Master Esir2: “Computer vision: geometry”, 24 hours, M1, Esir Rennes
- Master Esir3: “Robotics Vision 1”, 12 hours, M2, Esir Rennes
- Master Esir3: “Robotics Vision 2”, 7 hours, M2, Esir Rennes
- Master SIVOS: “Geometric Computer Vision”, 8 hours, M2, Université de Rennes
- Master ENS: “Computer vision”, 6 hours, M2, ENS Rennes
- Master MIA: “Augmented reality”, 4 hours, M2, Université de Rennes
Marie Babel:
- Master INSA2: “Robotics”, 26 hours, M1, INSA Rennes
- Master INSA1: “Concepts de la logique à la programmation”, 20 hours, L3, INSA Rennes
- Master INSA1: “Langage C”, 12 hours, L3, INSA Rennes
- Master INSA2: “Computer science project”, 30 hours, M1, INSA Rennes
- Master INSA1: “Practical studies”, 16 hours, L3, INSA Rennes
- Master INSA2: “Image analysis”, 26 hours, M1, INSA Rennes
- Master INSA1: “Remedial math courses”, 50 hours, L3, INSA Rennes
- Master INSA 1: “Probability”, 14 hours, L3, INSA Rennes
- Master INSA: tutoring and support for students with disabilities, 30 hours, INSA Rennes
- Master SIVOS: “Mechatronics for healthcare”, 12 hours, M2, ENS Rennes
Claudio Pacchierotti:
- Master “Artificial Intelligence & Advanced Visual Computing”: “INF644 – Virtual/Augmented Reality & 3D Interactions”, 6 hours, M2, École Polytechnique
- Master SIF: “Virtual Reality and Multi-Sensory Interaction”, 4 hours, M2, IRISA.
Maud Marchal:
- Master INSA1: “Computer Graphics”, 20 hours, M1, INSA Rennes.
- Master INSA1: “Complexity and algorithms”, 26 hours, L3, INSA Rennes.
- Master INSA2: “Human-Computer Interaction”, 15 hours, M2, INSA Rennes.
- Master INSA1: “Software design for medical applications”, 6 hours, M1, INSA Rennes.
- Master SIF: “Computer Graphics”, 20 hours, M2, Univ. Rennes.
Vincent Drevelle:
- Master Info IL: “Artificial intelligence”, 12 hours, M1, Université de Rennes
- Master Info IA: “Artificial intelligence algorithmics”, 10.5 hours, M1, Université de Rennes
- Licence Info: “Principles of computer systems”, 52 hours, L1, Université de Rennes
- Licence Miage: “Computer programming”, 66 hours, L3, Université de Rennes
- Licence Info: “Advanced computer programming”, 28.5 hours, L3, Université de Rennes
- Master EEEA SE: “Localization, Multisensor data fusion”, 21 hours, M2, Université de Rennes
- Master Info IL: “Mobile robotics”, 33 hours, M2, Université de Rennes
Fabien Spindler:
- Master SIVOS: “Geometric Computer Vision and Visual Servoing”, 12 hours, M2, Université de Rennes
Paolo Robuffo Giordano:
- Master 2 SPIA HCR: “Energy-Based Modeling and Control for Robotics”, 12 hours, M2, Ecole Nationale Supérieure de Rennes
Marco Tognon:
- Master 2 Optimal Control: “Optimal Control for Aerial Vehicles”, 2 hours, M2, Polytechnic of Milan, Milan, Italy
- Master 2 DRONE: “Introduction to Aerial Physical Interaction”, 2 hours, M2, Ecole Centrale de Nantes, Nantes, France
- International Summer School of Automatic Control (MOBROB), Grenoble, France
- Master ENS: “Robotics 2”, 33 hours, M1, Ecole Nationale Supérieure de Rennes
- Master 2 SPIA HCR: “Energy-Based Modeling and Control for Robotics”, 6 hours, M2, Ecole Nationale Supérieure de Rennes
François Pasteau:
- Master INSA2: “Robotics”, 26 hours, M1, INSA Rennes
- Master INSA1: “Concepts de la logique à la programmation”, 12 hours, L3, INSA Rennes
- Master INSA1: “Practical studies”, 8 hours, L3, INSA Rennes
- Master INSA2: “Image analysis”, 6 hours, M1, INSA Rennes
- Master INSA2: “Internet of Things”, 12 hours, M2, ENS Rennes
10.2.2 Supervision
- Ph.D. completed: Ali Srour, “Robust Planning for Robotic Systems”, defended in December 2024, supervised by Paolo Robuffo Giordano, Marco Cognetti (LAAS-CNRS) and Antonio Franchi (Univ. Twente) [69]
- Ph.D. completed: Pierre-Antoine Cabaret, “Design of multiactuator haptic devices and rendering methods for navigation and virtual interactions”, defended in December 2024, supervised by Marie Babel, Maud Marchal, and Claudio Pacchierotti.
- Ph.D. completed: Inès Lacote, “Investigating the Apparent Haptic Motion Illusion to Provide Navigation Guidance in a Handle”, defended in June 2024, supervised by Maud Marchal and Claudio Pacchierotti [67]
- Ph.D. completed: Glenn Kerbiriou, “Data-driven Eye Region Reconstruction, Modeling and Animation”, defended in June 2024, supervised by Maud Marchal and Quentin Avril (InterDigital).
- Ph.D. completed: John Thomas, “Assembly Task in Congested Area using Sensor-based Control”, defended in April 2024, supervised by François Chaumette [70].
- Ph.D. completed: Nicola De Carli, “Active Perception and Localization for Multi-Robot Systems”, defended in April 2024, supervised by Paolo Robuffo Giordano and Paolo Salaris (Univ. Pisa) [66]
- Ph.D. completed: Lev Smolentsev, “Shape visual servoing of a suspended cable”, defended in March 2024, supervised by Alexandre Krupa and François Chaumette [68].
- Ph.D. in progress: Jessé Alves, “Human-robot interaction and shared control for robotic telemanipulation”, started in November 2024, supervised by Claudio Pacchierotti
- Ph.D. in progress: Lluis Prior, “Integrate tactile sensing in aerial robots by evaluating and adapting tactile sensors for flight conditions”, started in November 2024, supervised by Marco Tognon
- Ph.D. in progress: Szymon Bielenin, “Robust and Agile Transportation of Cable-Suspended Loads with Multi-Drone Systems”, started in October 2024, supervised by Marco Tognon, Paolo Robuffo Giordano and Claudio Pacchierotti
- Ph.D. in progress: Paul Mefflet, “Advanced Shared Control Techniques for Tele-Manipulation”, started in March 2024, supervised by Paolo Robuffo Giordano
- Ph.D. in progress: Sara Rossi, “Tactile haptic perception and rendering for robotic prosthesis”, started in February 2024, supervised by Maud Marchal and Claudio Pacchierotti
- Ph.D. in progress: Maximilian Mehl, “Learning-based Control of an Aerial Manipulator for Complex Manipulation Tasks”, started in February 2024, supervised by Marco Tognon
- Ph.D. in progress: Théo Le Terrier, “Sensor-based control of a power wheelchair with interval set-membership methods”, started in October 2023, supervised by Marie Babel and Vincent Drevelle.
- Ph.D. in progress: Lorenzo Balandi, “Full-Body Design and Control of an Aerial Manipulator for Advance Physical Interaction”, started in October 2023, supervised by Marco Tognon and Paolo Robuffo Giordano
- Ph.D. in progress: Léon Raphalen, “Human-centered shared control of multi-robot systems at the microscale”, started in September 2023, supervised by Claudio Pacchierotti
- Ph.D. in progress: Lendy Mulot, “Design of coupling schemes for vibrotactile rendering in virtual reality”, started in October 2022, supervised by Maud Marchal and Claudio Pacchierotti
- Ph.D. in progress: Mandela Ouafo Fonkoua, “Visual perception and visual servoing for dexterous robotic manipulation of compliant objects”, started in October 2022, supervised by Alexandre Krupa and François Chaumette.
- Ph.D. in progress: Antonio Marino, “Machine learning techniques for the control of multi-robot systems”, started in September 2022, supervised by Paolo Robuffo Giordano and Claudio Pacchierotti
- Ph.D. in progress: Sophie Rousseau, “Sensor-based control and vibration reduction for cable-driven parallel robots”, started in November 2021, supervised by Stéphane Caro (LS2N), François Chaumette, and Nicolo Pedemonte (IRT Jules Verne)
- Ph.D. in progress: Thibault Noël, “Exploration of indoor environments”, started in October 2021, supervised by Eric Marchand and François Chaumette
- Ph.D. in progress: Maxime Bernard, “Shared control for multi-robot systems”, started in October 2021, supervised by Paolo Robuffo Giordano and Claudio Pacchierotti
- Ph.D. in progress: Erwan Normand, “Étude de la perception et de la manipulation d'objets virtuels en réalité augmentée à l'aide de dispositifs haptiques portables” (Study of the perception and manipulation of virtual objects in augmented reality using wearable haptic devices), started in October 2021, supervised by Maud Marchal, Eric Marchand and Claudio Pacchierotti
- Ph.D. in progress: Maxime Manzano, “Reach-to-grasp in activities of daily living: shared control of a multisensory assistive device to compensate for upper extremity neuromuscular disorders”, started in September 2022, supervised by Marie Babel and Sylvain Guégan
- Ph.D. in progress: Jose Eduardo Aguilar Segovia, “Design of haptic feedback using innovative shapeable materials”, started in October 2022, supervised by Marie Babel, Sylvain Lefebvre (MFX team, Nancy) and Sylvain Guégan
- Ph.D. in progress: Emilie Leblong, “Taking into account social interactions in a virtual reality power wheelchair driving simulator: promoting learning for inclusive mobility”, started in October 2020, supervised by Marie Babel and Anne-Hélène Olivier (Virtus team)
- Ph.D. in progress: Maël Gallois, “Shared control by muscle mapping for upper limb assistance in neuromuscular and neurodegenerative pathologies”, started in September 2024, supervised by Marie Babel, Charles Pontonnier (Combo team), Nicolas Vignais (Université Rennes 2/M2S) and Sylvain Guégan (LGCGM)
- Ph.D. in progress: Jim Pavan, “Design of 3D interaction techniques for interacting with deformable surfaces in mixed reality using multimodal haptics”, started in October 2024, supervised by Maud Marchal.
10.2.3 Juries
HdR and PhD juries
- P. Robuffo Giordano: Pedro Roque (PhD, Opponent, KTH, Sweden), Florian Sansou (PhD, Reviewer, ENAC), Gabriel Quere (PhD, President, ENSTA Paris), Marco Tognon (HDR, President, IRISA), Fabio Morbidi (HDR, jury member, University of Amiens)
- C. Pacchierotti: Charlélie Saudrais (PhD reviewer, Sorbonne Université)
- F. Chaumette: Hugo Bildstein (PhD, reviewer, LAAS), Stéphane Caron (HDR, member, ENS-PSL), Oumanayma Bounou (PhD, reviewer, ENS-PSL)
- M. Babel: Lucas Quesada (PhD, reviewer, University of Paris Saclay), Erwan Landais (PhD, reviewer, University of Bordeaux), Niranjan Deshpande (PhD, jury member, Grenoble Alpes University), Alexis Poignant (PhD, reviewer, Sorbonne University), Camille Marques Alves (PhD, reviewer, Lorraine University), Gaëtan Courtois (PhD, reviewer, Université Polytechnique Hauts-de-France), John Thomas (PhD, President, Rennes University), Assem Sadek (PhD, President, INSA Lyon), Huiseok Moon (PhD, President, Paris Est Creteil University)
- E. Marchand: S. Guégan (HDR, jury member, INSA Rennes)
- M. Marchal: Lisa Bachini (PhD, President, Aix Marseille University), Detjon Brahimaj (PhD, Reviewer, Univ. Lille), Benjamin Delbos (PhD, Reviewer, INSA Lyon), Zhuzhi Fan (PhD, member, Univ. Grenoble Alpes), Valérian Faure (PhD, Reviewer, ENSAM), Nicolas Fourrier (PhD, President, Centrale Nantes), Jean Jouve (PhD, President, Univ. Grenoble Alpes), Anderson Maciel (Habilitation, Reviewer, Univ. Lisbon, Portugal), Martin Maunsbach (PhD, Reviewer, Univ. Copenhagen, Denmark), Chloé Paliard (PhD, Reviewer, Telecom Paris), Nicolas Pelegrin (PhD, Reviewer, Aix Marseille University), Camille Truong-Allie (PhD, President, Mines-Paris-PSL), Mohamed Younes (PhD, President, Univ. Rennes), Jing Zhang (PhD, member, Univ. Côte d'Opale)
10.3 Popularization
10.3.1 Participation in Live events
- M. Babel, F. Pasteau, L. Devigne, T. Voisin, S. Thomas, F. Grzeskowiak, M. Manzano, P.A. Cabaret, J.E. Aguilar Segovia, M. Gallois: During the Fête de la Science, within the Village des Sciences at INSA Rennes, they hosted a booth for school students, featuring demonstrations of mobility assistance devices for people with disabilities, in October 2024.
- M. Babel, C. Pacchierotti, T. Voisin, L. Devigne: During the TechInn'Vitré event, 23-25 February 2024, they hosted a booth for school students and the general audience, featuring demonstrations of mobility assistance devices for people with disabilities and innovative 3D printing.
- F. Spindler and M. Ouafo Fonkoua hosted the Inria stand at the Humanoids 2024 conference held in Nancy from 21 to 24 November 2024. They presented a number of robotics and computer vision demonstrations illustrating several results from the team's research, for instance [18].
11 Scientific production
11.1 Major publications
- [1] Connectivity-Maintenance Teleoperation of a UAV Fleet with Wearable Haptic Feedback. IEEE Transactions on Automation Science and Engineering, June 2020, pp. 1-20. (HAL, DOI)
- [2] Determination of All Stable and Unstable Equilibria for Image-Point-Based Visual Servoing. IEEE Transactions on Robotics, vol. 40, July 2024, pp. 3406-3424. (HAL, DOI)
- [3] Deformation Control of a 3D Soft Object using RGB-D Visual Servoing and FEM-based Dynamic Model. IEEE Robotics and Automation Letters, vol. 9, no. 8, August 2024, pp. 6943-6950. (HAL, DOI)
- [4] Equilibria, Stability, and Sensitivity for the Aerial Suspended Beam Robotic System Subject to Parameter Uncertainty. IEEE Transactions on Robotics, vol. 39, no. 5, October 2023, pp. 3977-3993. (HAL, DOI)
- [5] Humanoid robots in aircraft manufacturing. IEEE Robotics and Automation Magazine, vol. 26, no. 4, December 2019, pp. 30-45. (HAL, DOI)
- [6] Vision-Based Reactive Planning for Aggressive Target Tracking while Avoiding Collisions and Occlusions. IEEE Robotics and Automation Letters, vol. 3, no. 4, October 2018, pp. 3725-3732. (HAL, DOI)
- [7] Online Optimal Perception-Aware Trajectory Generation. IEEE Transactions on Robotics, 2019, pp. 1-16. (HAL, DOI)
- [8] A Shared-control Teleoperation Architecture for Nonprehensile Object Transportation. IEEE Transactions on Robotics, June 2021, pp. 1-15. (HAL, DOI)
- [9] Altering the Stiffness, Friction, and Shape Perception of Tangible Objects in Virtual Reality Using Wearable Haptics. IEEE Transactions on Haptics (ToH), vol. 13, no. 1, January 2020, pp. 167-174. (HAL, DOI)
11.2 Publications of the year
International journals
International peer-reviewed conferences
Conferences without proceedings
Doctoral dissertations and habilitation theses
Reports & preprints
11.3 Cited publications
- [72] Decentralized Connectivity Maintenance for Quadrotor UAVs with Field of View Constraints. In: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS'23), Detroit, United States, IEEE, October 2023, pp. 11111-11118. (HAL, DOI)
- [73] A Sensitivity-Aware Motion Planner (SAMP) to Generate Intrinsically-Robust Trajectories. In: IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, 2023. (HAL)