
2025 Activity Report: Project-Team ACENTAURI

RNSR: 202124072D

Creation of the Project-Team: 2021 May 01

Each year, Inria research teams publish an Activity Report presenting their work and results over the reporting period. These reports follow a common structure, with some optional sections depending on the specific team. They typically begin by outlining the overall objectives and research programme, including the main research themes, goals, and methodological approaches. They also describe the application domains targeted by the team, highlighting the scientific or societal contexts in which their work is situated.

The reports then present the highlights of the year, covering major scientific achievements, software developments, or teaching contributions. When relevant, they include sections on software, platforms, and open data, detailing the tools developed and how they are shared. A substantial part is dedicated to new results, where scientific contributions are described in detail, often with subsections specifying participants and associated keywords.

Finally, the Activity Report addresses funding, contracts, partnerships, and collaborations at various levels, from industrial agreements to international cooperations. It also covers dissemination and teaching activities, such as participation in scientific events, outreach, and supervision. The document concludes with a presentation of scientific production, including major publications and those produced during the year.

Keywords

Computer Science and Digital Science

  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A6.2.3. Probabilistic methods
  • A6.2.4. Statistical methods
  • A6.2.5. Numerical Linear Algebra
  • A6.2.6. Optimization
  • A6.4.2. Stochastic control
  • A6.4.3. Observability and Controllability
  • A6.4.4. Stability and Stabilization
  • A6.4.6. Optimal control
  • A7.1.4. Quantum algorithms
  • A8.2. Optimization
  • A8.3. Geometry, Topology
  • A8.11. Game Theory
  • A9.2. Machine learning
  • A9.2.1. Supervised learning
  • A9.2.3. Reinforcement learning
  • A9.2.4. Optimization and learning
  • A9.2.5. Bayesian methods
  • A9.2.6. Neural networks
  • A9.2.8. Deep learning
  • A9.5. Robotics and AI
  • A9.6. Decision support
  • A9.10. Hybrid approaches for AI
  • A9.12.4. 3D and spatio-temporal reconstruction
  • A9.12.5. Object tracking and motion analysis
  • A9.12.7. Visual servoing

Other Research Topics and Application Domains

  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B7.2. Smart travel
  • B7.2.1. Smart vehicles
  • B7.2.2. Smart road
  • B8.2. Connected city

1 Team members, visitors, external collaborators

Research Scientists

  • Ezio Malis [Team leader, Inria, Senior Researcher, HDR]
  • Philippe Martinet [Inria, Senior Researcher, HDR]
  • Patrick Rives [Inria, Emeritus, HDR]

Post-Doctoral Fellows

  • Minh Quan Dao [Inria, Post-Doctoral Fellow, until Aug 2025]
  • Siddharth Singh Savner [Inria, Post-Doctoral Fellow]

PhD Students

  • Mohamed Mahmoud Ahmed Maloum [SAFRAN]
  • Emmanuel Alao [CNRS]
  • Matteo Azzini [UNIV COTE D'AZUR]
  • Ayan Barui [UNIV COTE D'AZUR]
  • Shamik Basu [Inria, from Oct 2025]
  • Kaushik Bhowmik [Inria, CHROMA, co-supervision]
  • Thomas Campagnolo [NXP]
  • Enrico Fiasche [UNIV COTE D'AZUR]
  • Monica Fossati [UNIV COTE D'AZUR]
  • Gires Fotsing Takam [Inria, from Oct 2025]
  • Stefan Larsen [Inria]
  • Fabien Lionti [Inria, until Oct 2025]
  • Diego Navarro Tellez [CEREMA, until Nov 2025]
  • Andrea Pagnini [Inria]
  • Mathilde Theunissen [LS2N, co-supervision]

Technical Staff

  • Mohamed Malek Aifa [Inria, Engineer]
  • Erwan Amraoui [Inria, Engineer]
  • Marie Aspro [Inria, Engineer]
  • Jon Aztiria Oiartzabal [Inria, Engineer]
  • Nicolas Chleq [Inria, Engineer]
  • Matthias Curet [Inria, Engineer, until Apr 2025]
  • Andres Gomez Hernandez [Inria, Engineer]
  • Pierre Joyet [Inria, Engineer]
  • Pardeep Kumar [Inria, Engineer]
  • Fabien Lionti [Inria, Engineer, from Dec 2025]
  • Quentin Louvel [Inria, Engineer, until Oct 2025]
  • Diego Navarro Tellez [CEREMA, Engineer, from Dec 2025]
  • Louis Verduci [Inria, Engineer]

Interns and Apprentices

  • Souhail Benomar [Inria, Intern, from May 2025 until Aug 2025]
  • Enrico Dondero [Inria, Intern, from Mar 2025 until Aug 2025]

Administrative Assistants

  • Marylene Fontana [Inria, from Nov 2025]
  • Nathalie Nordmann [Inria, from Jul 2025 until Oct 2025]
  • Stephanie Verdonck [Inria, until Jun 2025]

Visiting Scientists

  • Jose Francisco Ambriz Gutierrez [IPN MEXICO, from Mar 2025 until May 2025]
  • Rafael Eric Murrieta Cid [CIMAT, until Aug 2025]
  • Ramses Adalid Reyes Beltran [CIMAT, from Feb 2025 until Jun 2025, Visiting student]

2 Overall objectives

The goal of ACENTAURI is to study and develop intelligent, autonomous and mobile robots that collaborate with each other to achieve challenging tasks in dynamic environments. The team focuses on perception, decision and control problems for multi-robot collaboration by proposing an original hybrid model-driven / data-driven approach to artificial intelligence and by studying efficient algorithms. The team targets robotic applications like environment monitoring and transportation of people and goods. In these applications, several robots will share multi-sensor information, possibly coming from the infrastructure. The team will demonstrate the effectiveness of the proposed approaches on real robotic systems like Autonomous Ground Vehicles (AGVs) and Unmanned Aerial Vehicles (UAVs) together with industrial partners.

The scientific objectives we want to achieve are to develop:

  • robots that are able to perceive unstructured and changing environments (in space and time) in real time through their sensors, and that are able to build large-scale semantic representations taking into account the uncertainty of interpretation and the incompleteness of perception. The main scientific bottlenecks are (i) how to go beyond purely geometric maps to reach a semantic understanding of the scene and (ii) how to share these representations between robots having different sensorimotor capabilities so that they can collaborate on a common task.
  • autonomous robots, in the sense that they must be able to accomplish complex tasks by taking high-level cognitive decisions without human intervention. The robots evolve in an environment possibly populated by humans, possibly in collaboration with other robots or communicating with the infrastructure (collaborative perception). The main scientific bottlenecks are (i) how to anticipate unexpected situations created by unpredictable human behavior using the collaborative perception of robots and infrastructure and (ii) how to design robust sensor-based control laws to ensure robot integrity and human safety.
  • intelligent robots, in the sense that they must (i) decide their actions in real time on the basis of the semantic interpretation of the state of the environment and of their own state (situation awareness), (ii) manage uncertainty in sensing, control and the dynamic environment, (iii) predict in real time the future states of the environment taking into account their security and human safety, and (iv) acquire new capacities and skills, or refine existing skills, through learning mechanisms.
  • efficient algorithms able to process large amounts of data and solve hard problems in robotic perception, learning, decision and control. The main scientific bottlenecks are (i) how to design new efficient algorithms to reduce processing time on ordinary computers and (ii) how to design new quantum algorithms to reduce computational complexity, in order to solve problems that cannot be solved in reasonable time with ordinary computers.

3 Research program

The research program of ACENTAURI focuses on intelligent autonomous systems, which must be able to sense, analyze, interpret, know and decide what to do in the presence of a dynamic, living environment. Defining a robotic task in such an environment requires setting up a framework where interactions between the robot or the multi-robot system, the infrastructure and the environment can be described from a semantic level down to a canonical space, at different levels of abstraction. This description will be dynamic and based on the use of sensory memory and short/long-term memory mechanisms. This requires expanding and developing (i) the knowledge on the interaction between robots and the environment (using either model-driven or data-driven approaches), (ii) the knowledge on how to perceive and control these interactions, (iii) situation awareness, and (iv) hybrid architectures (again either model-driven or data-driven) for monitoring the global process during the execution of the task.

Figure 1 gives an overview of the global system, highlighting the core topics. For the sake of simplicity, we decompose our research program into three axes related to Perception, Decision and Control. However, note that these axes are highly interconnected (e.g. there is a duality between perception and control) and all problems should be addressed in a holistic approach. Moreover, Machine Learning is in fact transversal to all the robot's capacities. Our objective is the design and development of a parameterizable architecture for Deep Learning (DL) networks incorporating a priori model-driven knowledge. We plan to do this by choosing specialized architectures depending on the task assigned to the robot and on the input (from standard to future sensor modalities). These DL networks must be able to encode spatio-temporal representations of the robot's environment. Indeed, the tasks we are interested in involve the evolution of the environment over time, since the data coming from the sensors may vary in time even for static elements of the environment. We are also interested in developing a novel network for situation awareness applications (mainly in the fields of autonomous driving and proactive navigation).

Figure 1

The image is a diagram illustrating the interaction between the environment, hardware, and software in a robotic system. On the left, the environment includes elements like landscape, seasons, weather, humans, animals, and robots. These elements interact with the hardware, which consists of infrastructure (with sensors and devices) and robots (sensors and actuators). On the right, the software section shows machine learning processes, divided into perception, decision, and control phases, further categorized into various functions like localization, mapping, awareness, planning, and efficient algorithms. The diagram highlights how data flows from the environment through the hardware to the software, guiding robotic actions and decisions.

Figure 1: Intelligent autonomous mobile robot system overview highlighting the core research axes and the transversal methodologies, Machine Learning and Efficient Algorithms.

Another transversal issue concerns the efficiency of the algorithms involved. Either we must process a large amount of data (for example, with a standard full HD camera (1920x1080 pixels) the data size to process is around 5 Terabits/hour), or the problem is hard to solve even when the underlying graph is planar: for example, path optimization problems for multiple robots are NP-complete. A particular emphasis will be given to efficient numerical analysis algorithms (in particular for optimization), which are omnipresent in all research axes. We will also explore a completely different and radically new methodology with quantum algorithms. Several quantum basic linear algebra subroutines (BLAS) (Fourier transforms, finding eigenvectors and eigenvalues, solving linear equations) exhibit exponential quantum speedups over their best known classical counterparts. This quantum BLAS (qBLAS) translates into quantum speedups for a variety of algorithms including linear algebra, least-squares fitting, gradient descent and Newton's method. The quantum methodology is completely new to the team, therefore the practical interest of pursuing such a research direction will have to be validated in the long term.
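The 5 Terabits/hour figure can be checked with a short back-of-the-envelope computation (assuming uncompressed 24-bit RGB frames at 30 frames per second; these parameters are assumptions not stated above):

```python
# Back-of-the-envelope data rate for an uncompressed full HD camera stream.
# Assumed parameters (not stated in the report): 24-bit RGB pixels, 30 fps.
width, height = 1920, 1080
bits_per_pixel = 24            # 3 channels x 8 bits
fps = 30
seconds_per_hour = 3600

bits_per_frame = width * height * bits_per_pixel            # ~49.8 Mbit/frame
bits_per_hour = bits_per_frame * fps * seconds_per_hour     # ~5.4 Tbit/hour

print(f"{bits_per_hour / 1e12:.2f} Terabits/hour")
```

Under these assumptions the stream is about 5.37 Terabits/hour, consistent with the "around 5 Terabits/hour" order of magnitude quoted above.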

The research program of ACENTAURI is decomposed into the following three research axes:

3.1 Axis A: Augmented spatio-temporal perception of complex environments

The long-term objective of this research axis is to build accurate and composite models of large-scale environments that mix metric, topological and semantic information. Ensuring the consistency of these various representations during robot exploration, and merging/sharing observations acquired from different viewpoints by several collaborative robots or by sensors attached to the infrastructure, are very difficult problems. This is particularly true when different sensing modalities are involved and when the environments are time-varying. A recent trend in Simultaneous Localization And Mapping is to augment low-level maps with a semantic interpretation of their content. Indeed, the semantic level of abstraction is the key element that will allow us to build the robot's environmental awareness (see Axis B). For example, so-called semantic maps have already been used in mobile robot navigation to improve path planning methods, mainly by providing the robot with the ability to deal with human-understandable targets. New studies to derive efficient algorithms for manipulating the hybrid representations (merging, sharing, updating, filtering) while preserving their consistency are needed for long-term navigation.
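At the lowest (purely metric) level, consistent merging of observations from several robots can be illustrated by the standard log-odds fusion of occupancy grids. This toy sketch assumes independent observations expressed in a common reference frame; the hybrid metric/topological/semantic representations discussed above are far richer than this:

```python
import numpy as np

# Toy illustration of merging maps from two robots at the metric level:
# occupancy probabilities fuse consistently by ADDING log-odds, which makes
# the merge order-independent. Assumes independent observations and a common
# reference frame; this is only the lowest layer of a hybrid map.

def to_log_odds(p):
    return np.log(p / (1.0 - p))

def to_prob(l):
    return 1.0 / (1.0 + np.exp(-l))

# Two robots observe the same 4-cell corridor with different confidence.
# A probability of 0.5 means "no information" (log-odds 0).
grid_a = np.array([0.9, 0.5, 0.2, 0.5])   # robot A's occupancy estimates
grid_b = np.array([0.8, 0.5, 0.5, 0.1])   # robot B's occupancy estimates

fused = to_prob(to_log_odds(grid_a) + to_log_odds(grid_b))
```

Cells seen as occupied by both robots become more confidently occupied, while cells one robot never observed keep the other robot's estimate unchanged.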

3.2 Axis B: Situation awareness for decision and planning

The long-term objective of this research axis is to design and develop a decision-making module that is able to (i) plan the mission of the robots (global planning), (ii) generate the sub-tasks (local objectives) necessary to accomplish the mission based on Situation Awareness, and (iii) plan the robot paths and/or sets of actions to accomplish each subtask (local planning). Since we have to face uncertainties, the decision module must be able to react efficiently in real time based on the available sensor information (on-board or attached to an IoT infrastructure) in order to guarantee the safety of humans and things. For some tasks it is necessary to coordinate a multi-robot system (centralized strategy), while for others each robot evolves independently with its own decentralized strategy. In this context, Situation Awareness is at the heart of an autonomous system in order to feed the decision-making process, but it can also be seen as a way to evaluate the performance of the global process of perception and interpretation in order to build a safe autonomous system. Situation Awareness is generally divided into three parts: perception of the elements in the environment (see Axis A), comprehension of the situation, and projection of future states (prediction and planning). When planning the mission of the robot, the decision-making module will first assume that the configuration of the multi-robot system is known in advance, for example one robot on the ground and two robots in the air. In our long-term objectives, however, the number of robots and their configurations may evolve according to the application objectives to be achieved, particularly in terms of performance, but also to take into account the dynamic evolution of the environment.

3.3 Axis C: Advanced multi-sensor control of autonomous multi-robot systems

The long-term objective of this research axis is to design multi-sensor (on-board or attached to an IoT infrastructure) based control of potentially multi-robot systems for tasks where the robots must navigate in a complex dynamic environment, including the presence of humans. This implies that the controller design must not only deal explicitly with uncertainties and inaccuracies in the models of the environment and of the sensors, but must also consider constraints to cope with unexpected human behavior. To deal with uncertainties and inaccuracies in the model, two strategies will be investigated. The first strategy is to use Stochastic Control techniques that assume known probability distributions on the uncertainties. The second strategy is to use system identification and reinforcement learning techniques to deal with differences between the models and the real systems. To deal with unexpected human behavior, we will investigate Stochastic Model Predictive Control (MPC) techniques and Model Predictive Path Integral (MPPI) control techniques in order to anticipate future events and take optimal control actions accordingly. A particular emphasis will be given to the theoretical analysis (observability, controllability, stability and robustness) of the control laws.
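The MPPI principle mentioned above (sample perturbed control sequences, roll them out through the dynamics, and average them with exponentiated-cost weights) can be sketched on a toy 1D double integrator. This is an illustrative sketch, not the team's controller; the dynamics, cost, horizon, noise level and temperature are all arbitrary choices:

```python
import numpy as np

# Minimal MPPI sketch on a 1D double integrator (state: position, velocity).
# Illustrative toy only: all parameters below are arbitrary assumptions.

def dynamics(state, u, dt=0.1):
    x, v = state
    return np.array([x + v * dt, v + u * dt])

def cost(state, u, target=1.0):
    x, v = state
    return (x - target) ** 2 + 0.1 * v ** 2 + 0.01 * u ** 2

def mppi_step(state, u_nominal, n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: sample noisy control sequences around the nominal one,
    roll them out, and average them with softmin (exponentiated-cost) weights."""
    rng = np.random.default_rng(rng)
    horizon = len(u_nominal)
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon))
    candidates = u_nominal[None, :] + noise
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        s = state.copy()
        for t in range(horizon):
            s = dynamics(s, candidates[k, t])
            costs[k] += cost(s, candidates[k, t])
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    return weights @ candidates  # weighted average control sequence

# Receding-horizon loop: drive the toy system toward position 1.0.
state = np.array([0.0, 0.0])
u_seq = np.zeros(20)
for _ in range(50):
    u_seq = mppi_step(state, u_seq, rng=0)
    state = dynamics(state, u_seq[0])               # apply first control only
    u_seq = np.roll(u_seq, -1); u_seq[-1] = 0.0     # warm-start next iteration
```

Only the first control of each optimized sequence is applied, and the shifted sequence warm-starts the next iteration, which is what makes the scheme usable in a real-time receding-horizon loop.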

4 Application domains

ACENTAURI focuses on two main applications in order to validate our research using the robotics platforms described in section 7.2. We are aware that ethical questions may arise when addressing such applications. ACENTAURI follows the recommendations of the Inria ethical committee, for example on confidentiality issues when processing data (GDPR).

4.1 Environment monitoring with a collaborative robotic system

The first application that we will consider concerns monitoring the environment using an autonomous multi-robot system composed of ground robots and aerial robots (see Figure 2). The ground robots will patrol following a planned trajectory and will collaborate with the aerial drones to perform tasks in structured (e.g. industrial sites), semi-structured (e.g. presence of bridges, dams, buildings) or unstructured environments (e.g. agricultural space, forest space, destroyed space). In order to provide deported perception to the ground robots, one aerial drone will be in operation while the second one recharges its batteries on the ground vehicle. Coordinated and safe autonomous take-off and landing of the aerial drones will be a key factor to ensure continuity of service over a long period of time. Such a multi-robot system can be used to localize survivors in case of disaster or rescue, to localize and track people or animals (for surveillance purposes), to follow the evolution of vegetation (or even invasions of insects or parasites), to follow the evolution of structures (bridges, dams, buildings, electrical cables) and to control actions in the environment, for example in agriculture (fertilization, pollination, harvesting, ...), in forests (rescue) and on land (planning firefighting). Successful achievement of such applications requires building a representation of the environment and localizing the robots in the map (see Axis A in section 3.1), re-planning the tasks of each robot when unpredictable events occur (see Axis B in section 3.2) and controlling each robot to execute the tasks (see Axis C in section 3.3).
Depending on the application field, the scale and the difficulty of the problems to be solved increase. In the Smart Factories field, we have a relatively small environment, mostly structured, highly instrumented with sensors, and with the possibility to communicate. In the Smart Territories field, we have large semi-structured or unstructured environments that are not instrumented. To set up demonstrations of this application, we intend to collaborate with industrial partners and local institutions. For example, we plan to set up a collaboration with the Parc Naturel Régional des Préalpes d'Azur to monitor the evolution of fir trees infested by bark beetles.

Figure 2

The image shows three drones flying above a terrain map, each with a visual field indicated by a colored cone pointing downwards. The map is divided into sections with red and blue lines outlining different areas. There is a rover depicted on the map, possibly conducting some task. The terrain appears to be a combination of green, brown, and yellow areas, representing various types of land. The drones are connected by green lines, suggesting they are working together to cover the mapped area. (Description generated on December 17, 2025 by Albert AI with the model Mistral-Small-3.2-24B)

Figure 2: Environment monitoring with a collaborative robotic system composed of aerial and ground robots.

4.2 Transportation of people and goods with autonomous connected vehicles

The second application that we will consider concerns the transportation of people and goods with autonomous connected vehicles (see Figure 3). ACENTAURI will contribute to the development of Autonomous Connected Vehicles (e.g. Learning, Mapping, Localization, Navigation) and the associated services (e.g. towing, platooning, taxi). We will develop efficient algorithms to select on-line connected sensors from the infrastructure in order to extend and enhance the embedded perception of a connected autonomous vehicle. In cities, there are situations where visibility is very bad for historical reasons, or simply occasionally because of traffic congestion, service delivery (trucks, buses) or roadworks. There are also situations where the danger is greater and where a connected system or intelligent infrastructure can help to enhance perception and thus reduce the risk of accidents (see Axis A in section 3.1). In ACENTAURI, we will also contribute to the development of assistance and service robotics by re-using the same technologies required in autonomous vehicles. By adding the social level in the representation of the environment, and by using techniques of proactive and social navigation, we will give the robot the possibility to adapt its behavior in the presence of humans (see Axis B in section 3.2). ACENTAURI will study sensing technology on SDVs (Self-Driving Vehicles) used for material handling to improve efficiency and safety as products are moved around Smart Factories. These types of robots have the ability to sense and avoid people, as well as unexpected obstructions, in the course of doing their work (see Axis C in section 3.3). The ability to automatically avoid these common disruptions is a powerful advantage that keeps production running optimally.
To set up demonstrations of this application, we will continue the collaboration with industrial partners (Renault) and with the Communauté d'Agglomération Sophia Antipolis (CASA). Experiments with two autonomous Renault Zoe cars will be carried out in a dedicated space lent by CASA. Moreover, we propose, with the help of the Inria Service d'Expérimentation et de Développement (SED), to set up a demonstration of an autonomous shuttle to transport people in the future extended Inria/Univ. Côte d'Azur site.

Figure 3

The image shows an intersection with a cyclist and two cars, one red and one silver. The vehicles are equipped with communication technology, depicted by green signal waves. The cyclist is riding on a designated bike lane, while the silver car is moving forward and the red car is waiting at a crosswalk where a pedestrian stands. The roads have clear lane markings and directional arrows. The signals suggest the vehicles are communicating to enhance safety at the intersection. (Description generated on December 17, 2025 by Albert AI with the model Mistral-Small-3.2-24B)

Figure 3: Transportation of people and goods with autonomous connected vehicles in human-populated environments.

5 Social and environmental responsibility

ACENTAURI is concerned with reducing the environmental footprint of its activities and is involved in several research projects related to environmental challenges.

5.1 Footprint of research activities

The main footprint of our research activities comes from travel and power consumption (computers and computer cluster). Concerning travel, after the limitation due to the COVID-19 pandemic, it has increased again, but we make our best efforts to prioritize videoconferencing. Concerning power consumption, besides classical actions to reduce the waste of energy, our research focuses on efficient optimization algorithms to minimize the computation time of the computers on board our robotic platforms.

5.2 Impact of research results

We proposed several projects related to environmental challenges. We give below two examples of projects that have recently finished and one that is ongoing.

One current project concerns autonomous vehicles in agricultural applications, in collaboration with INRAE Clermont-Ferrand in the context of the PEPR "Agroécologie et numérique". We aim to develop robotic approaches for the realization of new farming practices, capable of acting as a lever for agroecological practices (see NINSAR Project in section 10.4.4).

6 Highlights of the year

6.1 Team progression

This year has been dedicated to the stabilization of our team at its nominal size. In particular we have:

  • welcomed two new members to our team, each bringing unique skills, experiences, and perspectives.
  • organized one workshop in Korea in the context of the AISENSE associated team with the AVELAB of KAIST.
  • strengthened industrial transfer by initiating three startup creation projects:
    • Marie Aspro and Matthias Curet, in collaboration with NAVAL GROUP, maturation phase at Inria Startup Studio (ISS) starting in February 2026.
    • Enrico Fiasché, maturation phase at Inria Startup Studio (ISS) starting in March 2026.
    • Diego Navarro, in collaboration with CEREMA, maturation phase at ISS starting in June 2026.

We organized a two-day team seminar in Sophia Antipolis to foster scientific discussion and collaborations between team members (see Figure 4) and invited our main industrial collaborators (NXP, SAFRAN, NAVAL GROUP) to discuss future collaborations.

Figure 4

Picture of the ACENTAURI team at the seminar in Sophia Antipolis

Figure 4: The ACENTAURI team at the seminar in Sophia Antipolis

We have also continued a biweekly robotics seminar involving both Inria (HEPHAISTOS and ACENTAURI) and I3S (OSCAR, ROBOTVISION, ...) robotics teams in order to disseminate information about the latest advancements, trends, and research in robotics.

6.2 Awards

  • Mathilde Theunissen won both the jury prize and the audience prize for her MT180 (Ma Thèse en 180 secondes) presentation on "Multi-robot localization and navigation for infrastructure monitoring" at the Journée des Jeunes Chercheuses et Jeunes Chercheurs en Robotique in October 2025.

7 Latest software developments, platforms, open data

This year, the work focused primarily on setting up and using robotic platforms to produce datasets. The platforms were deployed and operated to collect data more efficiently and in a structured way. This approach resulted in reliable datasets tailored to the needs of the team's collaborative projects.

7.1 Latest software developments

7.1.1 OPENROX

  • Keywords:
    Robotics, Library, Localization, Pose estimation, Homography, Mathematical Optimization, Computer vision, Image processing, Geometry Processing, Real time
  • Functional Description:

    Cross-platform C library for real-time robotics:

    - sensor calibration
    - visual identification and tracking
    - visual odometry
    - lidar registration and odometry

  • News of the Year:

    Several modules have been added:

    - multispectral visual servoing
    - camera-lidar calibration
    - lidar SLAM

    Python and C++ plugins have been developed to use OPENROX in ROS2 nodes.

  • Contact:
    Ezio Malis
  • Partner:
    Robocortex
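The homography-based identification and tracking in OPENROX rests on standard projective geometry. As an illustration of that geometry only (this is not OPENROX code, and it omits the normalization and robust estimation a real tracker needs), a minimal direct linear transform (DLT) estimate of a homography from four point correspondences:

```python
import numpy as np

# Minimal DLT estimate of the 3x3 homography H mapping points p to p',
# from >= 4 correspondences. Illustrative only: no point normalization,
# no outlier rejection, unlike a production visual-tracking library.

def homography_dlt(src, dst):
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two rows per correspondence from the cross-product constraint.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Check on a known homography (pure scale + translation).
H_true = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, -1.0], [0.0, 0.0, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H_est = homography_dlt(src, dst)
```

With exact correspondences the recovered matrix matches the true homography up to numerical precision; with noisy image points one would normalize the coordinates and use a robust estimator.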

7.1.2 MAT - Multisensor acquisition tools

  • Keywords:
    Sensors, Multi-Cameras, Python, C++, C, Sensor Calibration, 3D reconstruction
  • Functional Description:
    Tools developed for the usage of a multisensor system consisting of a LiDAR together with multiple cameras.
  • Contact:
    Ezio Malis
  • Participants:
    Erwan Amraoui, Marie Aspro, Ezio Malis

7.1.3 Generic tools for presence detection on a mesh with CGAL

  • Keywords:
    3D, Algorithm, C++, CGAL, Mesh, Mesh refinement, Python, Point cloud, Anomaly detection
  • Functional Description:
    A set of generic tools for mesh pre-processing (conversion, subdivision, ray tracing, reverse ray tracing), mesh face detection (point-to-face association), and more general evaluation tasks (classification, clustering, coloring) of a mesh with respect to a point cloud registered onto it.
  • Contact:
    Marie Aspro
  • Participants:
    Marie Aspro, Erwan Amraoui, Pierre Alliez, Ezio Malis
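The point-to-face association mentioned in the functional description can be sketched in a brute-force way. This is not the team's CGAL-based tool: as a simplification, the distance from a point to a face is measured to the face centroid, whereas a real implementation would use an exact point-to-triangle distance accelerated by an AABB tree:

```python
import numpy as np

# Brute-force point-to-face association: assign each point of a registered
# cloud to a face of a triangle mesh. Simplified sketch (NOT the CGAL tool):
# face distance is approximated by distance to the face centroid.

def associate_points_to_faces(vertices, faces, points):
    centroids = vertices[faces].mean(axis=1)             # (n_faces, 3)
    # Pairwise squared distances from each point to each face centroid.
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)                             # nearest face index

# Toy mesh: two triangles forming a unit square in the z = 0 plane.
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
points = np.array([[0.9, 0.2, 0.05],    # hovers over the first triangle
                   [0.1, 0.9, 0.05]])   # hovers over the second triangle
labels = associate_points_to_faces(vertices, faces, points)
```

Once each point carries a face label, the evaluation tasks listed above (classification, clustering, coloring of the mesh with respect to the cloud) reduce to aggregating points per face.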

7.2 New platforms

Participants: Ezio Malis, Philippe Martinet, Nicolas Chleq, Pierre Joyet, Quentin Louvel, Malek Aifa, Louis Verduci, Jon Aztiria Oiartzabal.

7.2.1 ICAV platform

The ICAV platform has been funded by the PUVSOPHIA project (CASA, PACA Region and the French state), self-funding, the Digital Reference Center of Univ. Côte d'Azur, and Academy 1 of Univ. Côte d'Azur. We now have two autonomous vehicles, one instrumented vehicle, many sensors (Real-Time Kinematic GPS, lidars, cameras), communication devices (C-V2X, IEEE 802.11p), and one standalone localization and mapping system.

The ICAV platform is composed of (see Figure 5):

  • ICAV1 is an old-generation ZOE. It was bought fully robotized and instrumented. It is equipped with a Velodyne VLP16 lidar, a low-cost Inertial Measurement Unit (IMU) and GPS, three cameras and one embedded computer. ICAV1 was scrapped in 2025.
  • ICAV2 is a new-generation ZOE which was instrumented and robotized in 2021. It is equipped with a Velodyne VLP16 lidar, a low-cost IMU and GPS, three cameras, two solid-state RS-M1 lidars, one embedded computer and one NVIDIA Jetson AGX Xavier.
  • ICAV3 will be instrumented with different lidars and a multi-camera system (LADYBUG5+).
  • A ground-truth RTK system. An RTK GPS base station has been installed and a local server configured inside the Inria Center. Each vehicle is equipped with an RTK GPS receiver connected to the local server in order to achieve centimeter-level localization accuracy.
  • A standalone localization and mapping system, composed of a Velodyne VLP16 lidar, a low-cost IMU and GPS, and one NVIDIA Jetson AGX Xavier.
  • A vehicle-to-everything (V2X) communication system based on the C-V2X and IEEE 802.11p technologies.
  • Different lidar sensors (Ouster OS2-128, RS-LIDAR16, RS-LIDAR32, RS-Ruby) and one multi-camera system (LADYBUG5+).

The main applications‌​‌ of this platform are:​​

  • datasets acquisition
  • localization, Mapping,​​​‌ Depth estimation, Semantization
  • autonomous‌ navigation (path following, parking,‌​‌ platooning, ...), proactive navigation​​ in shared space
  • situation​​​‌ awareness and decision making‌
  • V2X communication
  • autonomous landing‌​‌ of UAVs on the​​ roof.
Figure 5.a
 
Figure 5.b
 
Figure 5.c
   


Figure 5:​​​‌ Overview of ICAV platform‌ (ICAV1, ICAV2-1, ICAV3)

ICAV2 has been used by Maria Kabtoul to demonstrate the effectiveness of autonomous car navigation in a crowd.

 

7.2.2 Indoor autonomous​ mobile platform

The mobile robot platform has been funded by the MOBI-DEEP project in order to demonstrate autonomous navigation capabilities in cluttered and crowded environments. This platform is composed of (see Figure 6):

  • one omnidirectional mobile robot (Scout Mini with mecanum wheels from AgileX)
  • one NVIDIA Jetson AGX Xavier for deep learning algorithm implementation
  • one general-purpose laptop
  • one Robosense RS-LIDAR16
  • one Ricoh Z1 360° camera
  • one RealSense D455 RGB-D camera
Figure 6

The​​ image shows a robotic​​​‌ platform labeled Scout Mini​ with omni-wheels for mobility.​‌ It is equipped with​​ various devices: a 360-degree​​​‌ camera at the top,​ an RS-LiDAR sensor, a​‌ stereo/depth camera, an onboard​​ PC, an AX3000 WiFi​​​‌ router, and an NVIDIA​ Jetson AGX Xavier. The​‌ robot is designed for​​ advanced navigation and data​​​‌ processing tasks.

Figure 6​: Overview of MOBI-DEEP​‌ platform

The main applications​​ of this platform are:​​​‌

  • indoor datasets acquisition
  • localization,​ Mapping, depth estimation, Semantization​‌
  • proactive navigation in shared​​ space
  • pedestrian detection and​​​‌ tracking.

This platform was used in the MOBI-DEEP project to integrate different contributions from the consortium. It is used to demonstrate new results on social navigation.

7.2.3 Outdoor​​ autonomous mobile platform

The mobile robot platform has been funded by the NINSAR project in order to demonstrate autonomous navigation capabilities in unstructured environments. This platform is composed of (see Figure 7):

  • one mobile robot (Hunter 2 from AgileX)
  • one general-purpose laptop
  • one Robosense RS-RUBY
Figure 7.a
 
Figure 7.b
     

The image​​ shows a robotic platform​​​‌ labeled Hunter 2. It​ is equipped with various​‌ devices: an RS-RUBY sensor,​​ an onboard PC, an​​​‌ AX3000 WiFi router. The​ robot is designed for​‌ advanced navigation and data​​ processing tasks.


Figure 7​: Overview of NINSAR​‌ platform

The main applications​​ of this platform are:​​​‌

  • outdoor datasets acquisition
  • localization,​ Mapping
  • control, state estimation​‌

This platform is used​​ in NINSAR project for​​​‌ control and state estimation​ testing.

7.2.4 E-Wheeled platform​‌

E-WHEELED is an AMDT Inria project (2019-2022) coordinated by Philippe Martinet. The aim is to provide mobility to things by implementing connectivity techniques. It made an Inria expert engineer (Nicolas Chleq) available to ACENTAURI in order to demonstrate the proof of concept on a small-size demonstrator (see Figure 8). Due to COVID-19, the project was delayed.

Figure 8

The image shows two​‌ robotic platforms side by​​ side. Each platform has​​​‌ a pair of large​ black wheels with a​‌ five-spoke design. Atop each​​ platform is a green​​​‌ circuit board with a​ stack of black components.​‌ The platforms are supported​​ by a metal frame,​​​‌ and various colored wires​ connect the components on​‌ the boards.

Figure 8​​: Overview of E-wheeled​​​‌ platform

7.2.5 Moving Living​ Lab global platform

The Moving Living Lab platform has been funded by the AgrifoodTEF project (an H2020 project). It has been designed for physical field testing (navigation algorithms, monitoring of health and growth, sensor and robot testing) and dataset acquisition on real agricultural sites.

Moving​​ Living Lab (MLL) platform​​​‌ is composed of different‌ elements (see Figure 9‌​‌):

  • MLL is a moving laboratory. It has been fully equipped with a server, an RTK-GPS base, a private 5G network, WiFi, and three office desks. It is energy-autonomous, with an electric generator, and has a trailer to transport the robots.
  • SUMMIT-HM is a customized and updated version of the Summit-XL offroad mobile robot from Robotnik. It is an instrumented robot with many sensors (RTK GPS, Lidar, camera, IMU, communication devices (WiFi, 5G)) and one embedded NVIDIA Jetson AGX Orin.
  • VERSATYL is a UAV from the Skydrone company. It has been customized with an instrumented payload carrying many sensors (RTK GPS, Lidar, camera, IMU, communication devices (WiFi, 5G)) and one embedded NVIDIA Jetson AGX Orin.
  • Matrice 300 RTK is a UAV from DJI. It has been equipped with a multispectral camera and one embedded NVIDIA Jetson AGX Xavier.
  • A ground-truth RTK system. An RTK GPS base station has been installed and a local server configured inside the MLL. Each robot is equipped with an RTK GPS receiver connected to the local server in order to achieve centimeter-level localization accuracy.

The‌ main applications of this‌​‌ platform are:

  • datasets acquisition​​
  • localization, Mapping, Simultaneous Localization​​​‌ And Mapping (SLAM)
  • autonomous‌ navigation (path planning and‌​‌ tracking, Geofencing), proactive navigation​​ in shared space.
Figure 9.a
 
Figure 9.b
 
Figure 9.c
 

This​​​‌ picture illustrates the robotic‌ platforms of the ACENTAURI‌​‌ team


Figure 9: Overview of the Moving Living Lab platform (MLL, SUMMIT-HM, VERSATYL, Matrice 300 RTK)

7.2.6 UAV arena Dronix platform

The UAV (Unmanned Aerial Vehicle) Arena (called Dronix) is a fixed and reconfigurable platform owned by the ACENTAURI team at Inria. It was co-funded by the European project AgrifoodTEF.

The volume of Dronix is 5 m x 6 m x 7 m. It is a specialized platform designed for the development, testing and demonstration of mobility algorithms for UAVs and Autonomous Ground Robots (AGRs) in a controlled indoor environment. It can be considered as a fully controlled facility for preliminary testing of UAV and AGR functionalities.

Dronix is based on the Qualisys Motion Capture and Tracking System and uses 12 Qualisys Miqus M3 cameras to localize (i.e., estimate the position and orientation of) and track any moving object equipped with markers within a dedicated volume. The Qualisys Track Manager software provides high-frequency localization of these objects with 0.1 mm accuracy and can be connected to a robotic system in real time as a ground-truth source and/or a real-time localization system.
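When the motion-capture stream serves as a ground-truth source, a typical use is scoring an estimated trajectory against the Qualisys one. A minimal sketch (assuming the two trajectories are already time-synchronized and expressed in the same frame up to a translation; function name is illustrative):

```python
import numpy as np

def absolute_trajectory_error(gt_xyz, est_xyz):
    """RMSE of position error after removing the constant offset between the
    two trajectories (translation-only alignment; a full evaluation would
    also estimate the rotation, e.g. with the Umeyama method)."""
    gt, est = np.asarray(gt_xyz, float), np.asarray(est_xyz, float)
    offset = gt.mean(axis=0) - est.mean(axis=0)   # align centroids
    err = gt - (est + offset)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```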

The main applications of this platform are:

  • data collection via UAVs and AGRs mounted with different sensors
  • testing and validation of the calibration of different sensors
  • building ground-truth localization with the 12 cameras at millimeter (mm) accuracy
  • testing and validation of localization, mapping, SLAM, and navigation algorithms for UAVs, AGRs, and their collaboration.

Dronix platform is​ presented in Figure 10​‌.

Figure 10.a
 
Figure 10.b
 

Overview of Dronix​​ platform.


Figure 10:​ Overview of Dronix platform.​‌

It is composed of different elements:

  • Motion capture cameras Miqus M3 (resolution 1824 x 1088; normal mode: 2 MP at 340 fps; high-speed mode: 0.5 MP at 650 fps)
  • Dedicated Qualisys Motion​ Analysis System
  • Multi object​‌ tracking with specific 3D​​ markers

7.3 Open data​​​‌

7.3.1 Robforisk Dataset

  • Contributors:​
    Louis Verduci, Enrico Fiasché,​‌ Pierre Joyet, Philippe Martinet​​
  • Description:

    Forests in the Mediterranean region are highly susceptible to biotic stresses, particularly insect infestations. Traditionally, these attacks are detected through visual inspections, which often occur after significant damage has already been inflicted. In this context, the scenario of our project has been to acquire and calibrate data on early indicators of tree vulnerability using multispectral imaging mounted on drones. The acquired and processed data are available through the website of the ROBFORISK project. The ROBFORISK dataset consists of:

    • 2D RGB images acquired​​​‌ in forest in 2024​
    • 2D multispectral images acquired​‌ in forest in 2024​​
    • 2D map indicators processed​​​‌ and published in 2025​

    The multispectral camera is integrated onboard a DJI Matrice 300 equipped with an NVIDIA Jetson Xavier. Another UAV, a DJI Mavic 3, offers a dual-camera sensor including a Hasselblad Micro 4/3 camera and a 28x hybrid zoom camera. All the data are georeferenced.

    Figure​​​‌ 11 illustrates the setup​ of the UAV with​‌ the multispectral camera (Silios​​ Toucan).

    Figure 11

    The sensor setup​​​‌ in Robforisk.

    Figure 11​: The sensor setup​‌ in Robforisk.
  • Project link:​​ https://project.inria.fr/robforisk/
  • Publications:
    Public
  • Contact:​​​‌
    Philippe Martinet

7.3.2 STAIRS​ Jasmin Dataset

  • Contributors:
    Louis​‌ Verduci, Géraldine Groussier, Philippe​​ Martinet
  • Description:

    The collaboration is carried out with INRAE Sophia and concerns the evaluation of the impact of trichogramma on crop pests. The study is based on the comparison of data before and after the releases. Inria takes care of the multispectral acquisitions, performed at low altitude in order to construct vegetation indices. INRAE takes care of the pest counts and flower health indicators, carried out in order to study the influence of different types of trichogramma. The STAIRS Jasmin dataset consists of:

    • 2D multispectral images​​​‌ acquired in Jasmin plot​ in 2025
    • 2D map​‌ indicators processed and published​​ in 2025

    The multispectral camera is integrated onboard a DJI Matrice 300 equipped with an NVIDIA Jetson Orin Nano.

    Figure 12 illustrates the​ setup of the UAV​‌ with the multispectral camera​​ (Silios Toucan).

    Figure 12

    Jasmin Data​​​‌ collection in Stairs.

    Figure​ 12: Jasmin Data​‌ collection.
  • Publications:
    internal
  • Contact:​​
    Philippe Martinet

7.3.3 STAIRS​​​‌ Vineyard Dataset

  • Contributors:
    Louis​ Verduci, Cédric Cosset, Philippe​‌ Martinet
  • Description:

    The collaboration is carried out with a wine producer who owns many vineyards. The goal is to study and follow the development and health of the vines. The STAIRS vineyard dataset consists of:

    • 2D multispectral images acquired‌​‌ in vineyard in 2025​​
    • 2D map indicators processed​​​‌ and published in 2025‌

    The multispectral camera is integrated onboard a DJI Matrice 300 equipped with an NVIDIA Jetson Orin Nano.

    Figure 13‌​‌ illustrates the setup of​​ the UAV with the​​​‌ multispectral camera (Silios Toucan).‌

    Figure 13

    The UAV equipped with the multispectral camera used in the vineyard.

    Figure 13:​​​‌ The sensor setup in‌ Stairs.
  • Publications:
    internal
  • Contact:‌​‌
    Philippe Martinet

7.3.4 ANNAPOLIS​​ Dataset

  • Contributors:
    Quentin Louvel,​​​‌ Nicolas Chleq, Louis Verduci,‌ Kaushik Bhowmik, Philippe Martinet‌​‌
  • Description:

    The ANNAPOLIS dataset is a real-world environment dataset for the development and evaluation of methods for autonomous driving in the presence of pedestrians and Personal Light Electric Vehicles (PLEVs).

    The ANNAPOLIS dataset consists‌​‌ of:

    • dense 3D point clouds acquired in August 2025 (static, simulating Road Side Units (RSUs))
    • dense 3D point clouds acquired in August 2025 (embedded in the autonomous vehicle)
    • stereo RGB image sequences with camera poses, acquired in August 2025
    • GPS information from PLEVs/pedestrians
    • aerial monocular view of the dynamic scene

    The sensors used are either static (one LiDAR) or integrated onboard an electric Renault Zoe car (the ZOEcar) modified for autonomous driving. Figure 14 illustrates the setup of the LiDAR and of the ZOEcar with onboard sensors at the Azur Arena site. The specific sensors used are:

    • Static Robosense Ruby (80​​ planes)
    • IDS GV5280 stereo​​​‌ camera, onboard the ZOEcar.‌
    • Spectra SP90 GNSS RTK-GPS receiver, onboard the ZOEcar.
    • F9P RTK-GPS sensor, on​​​‌ pedestrians and PLEV.
    • Xsens‌ Mti-100 IMU, onboard the‌​‌ ZOEcar.
    Figure 14

    The sensor setup​​ at the Azur Arena​​​‌ site.

    Figure 14:‌ The sensor setup at‌​‌ the Azur Arena site.​​

    The ANNAPOLIS dataset was​​​‌ acquired over the course‌ of more than two‌​‌ years (and is still​​ being updated). The dense​​​‌ point clouds are obtained‌ by fusing multiple scans‌​‌ from the LiDAR.

    The dataset contains the following information:

    • Position of the‌ pedestrian and PLEV
    • Position‌​‌ of Autonomous vehicle
    • 3D​​ point cloud from RSU​​​‌
    • 3D point cloud from‌ Autonomous vehicle
  • Publications:
    Internal‌​‌
  • Contact:
    Philippe Martinet

7.3.5​​ OCA Dataset

  • Contributors:
    Quentin​​​‌ Louvel, Stefan Larsen, El‌ Moustapha Mouaddib, Patrick Rives,‌​‌ Ezio Malis
  • Description:

    The OCA dataset is a real-world changing-environment dataset for the development and evaluation of methods for long-term localization and monitoring. It has been created in the context of the SAMURAI project, using a dense 3D reference model from high-end sensors, and navigation and monitoring with robots using low-end sensors.

    The OCA​​​‌ dataset consists of:

    • 2‌ dense 3D point clouds‌​‌ with color and intensity​​ data, acquired in May​​​‌ 2023 and January 2025.‌
    • 50 stereo RGB‌​‌ image sequences with camera​​ poses, acquired in May​​​‌ 2023, January 2025, March‌ 2025 and August 2025.‌​‌

    The sensors used are either handheld (the LiDAR) or integrated onboard an electric Renault Zoe car (the ZOEcar) modified for autonomous driving. Figure 15 illustrates the setup of the LiDAR and of the ZOEcar with onboard sensors at the Observatoire de la Côte d'Azur (OCA) site. The specific sensors used are:

    • Leica RTC360​​​‌ LiDAR scanner stand with​ integrated high dynamic range​‌ (HDR) RGB camera.
    • IDS​​ GV5280 stereo camera, onboard​​​‌ the ZOEcar.
    • Spectra SP90 GNSS RTK-GPS receiver, onboard the ZOEcar.
    • Xsens Mti-100​​ IMU, onboard the ZOEcar.​​​‌
    Figure 15

    The sensor setup at​ the OCA site.

    Figure​‌ 15: The sensor​​ setup at the OCA​​​‌ site.

    The OCA dataset was acquired over the course of more than two years (and is still being updated). The dense point clouds are obtained by fusing multiple scans from the LiDAR. Each point is colored by the RGB camera on top of the LiDAR stand. The image sequences are obtained from the stereo cameras mounted on the ZOEcar, and the car is manually driven around in loops on the OCA site, passing through the area represented by the dense point clouds. Camera poses are obtained by fusing vehicle odometry, IMU and GPS measurements, using a Kalman filter to remove noise and provide accurate Ground Truth (GT) trajectories. LiDAR scans from onboard the ZOEcar, as well as drone footage, have also been acquired at the OCA, but are not officially part of the dataset yet.
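The odometry/IMU/GPS fusion can be sketched, in a deliberately simplified 1D form, as a predict/correct Kalman filter (illustrative values; the actual pipeline fuses full 6-DoF measurements):

```python
def kf_fuse(odom_vel, gps_pos, dt=0.1, q=0.01, r=0.05):
    """Minimal 1D Kalman filter: predict position with odometry velocity,
    correct with a GPS position measurement. A simplified sketch only;
    the real system fuses full 6-DoF odometry, IMU and RTK-GPS."""
    x, P = 0.0, 1.0                           # state (position) and its variance
    estimates = []
    for v, z in zip(odom_vel, gps_pos):
        x, P = x + v * dt, P + q              # prediction step (odometry)
        K = P / (P + r)                       # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P   # correction step (GPS)
        estimates.append(x)
    return estimates
```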

    The dataset​‌ images contain the following​​ changes and different viewing​​​‌ conditions:

    • semi-static pedestrians, cars, scooters, and objects like park benches and traffic cones
    • vegetation changes, mostly colors
    • illumination conditions such as morning light, sunny daylight, daylight with strong sun flares, mixed sunny and cloudy daylight, overcast daylight, and evening light
  • Publications:​​​‌
    The dataset has been​ used to validate the​‌ experimental results presented in​​ 26.
  • Contact:
    Ezio​​​‌ Malis
  • Release contributions:
    The​ dataset has not been​‌ released to the public​​ yet. It is used​​​‌ by the partners of​ the SAMURAI project.

7.3.6​‌ PhraseStereo: The First Open-Vocabulary​​ Stereo Image Segmentation Dataset​​​‌

  • Contributors:
    Thomas Campagnolo, Ezio​ Malis, Gaetan Bahl, Philippe​‌ Martinet
  • Description:
    PhraseStereo is a novel dataset for phrase grounding segmentation in stereo image pairs. It contains 77,262 stereo images and 345,486 phrase-region annotations, with multiple annotations per image pair; see Figure 16 for an example. By enabling models to leverage stereo geometry, PhraseStereo can facilitate more accurate segmentation of referred objects and regions. It provides a foundation for exploring multimodal architectures that integrate vision and language in a stereo context, paving the way for advances in both geometric reasoning and semantic understanding.
    Figure 16

    Example of​​ image pairs and phrase-region​​​‌ annotations.

    Figure 16:​ Example of image pairs​‌ and phrase-region annotations.
  • Publications:​​
    The dataset has been​​​‌ published in 22.​
  • Contact:
    Ezio Malis
  • Release​‌ contributions:
    The dataset has not yet been released to the public.

7.3.7​ CARLA Lidar Dataset

  • Contributors:​‌
    Matteo Azzini, Ezio Malis,​​ Philippe Martinet
  • Description:
    We used the CARLA simulator to create a dataset of noise-free point clouds with known ground-truth poses in the five different maps shown in Figure 17. For each map, a vehicle travels along random trajectories, covering different environments, such as residential and geometrically structured areas or places with vegetation that make it more difficult to extract planes. The simulated 3D LiDAR has 64 horizontal planes, a 360 deg horizontal field of view, a 28.8 deg vertical field of view and a 100 m range. The point clouds are acquired every 100 ms without noise and without motion distortion. Each data collection session is then processed to add Gaussian noise with zero mean and standard deviations of 0.03 m, 0.05 m and 0.10 m to the point clouds, in order to simulate real-world sensor noise and evaluate the robustness of the proposed method under different noise conditions. The final result is a small dataset of 20 sequences (5 maps with 4 noise levels each), with lengths ranging from 800 m to 3 km. These sequences can be used to evaluate LiDAR odometry or SLAM methods in a controlled environment, allowing a systematic analysis of their performance across various noise levels and environmental complexities.
    Figure 17

    A bird's-eye view of the five maps from the CARLA simulator.

    Figure 17: A bird's-eye view of the five maps from the CARLA simulator.
  • Publications:‌​‌
    The dataset has been​​ used to validate the​​​‌ experimental results presented in‌ 20.
  • Contact:
    Ezio‌​‌ Malis
  • Release contributions:
    The dataset has not yet been released to the public.
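The noise-injection step used to build the sequences amounts to adding zero-mean Gaussian perturbations per point; a minimal sketch (function name and fixed seed are illustrative):

```python
import numpy as np

def add_sensor_noise(points, sigma, seed=0):
    """Perturb an (N, 3) point cloud with zero-mean isotropic Gaussian noise,
    emulating real LiDAR noise at e.g. sigma = 0.03, 0.05 or 0.10 m."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

Running it with sigma in {0.03, 0.05, 0.10} on one noise-free cloud yields the three noisy variants of a sequence.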

7.3.8 Mixed Signals:‌​‌ A Diverse Point Cloud​​ Dataset for Heterogeneous LiDAR​​​‌ V2X Collaboration

  • Contributors:
    Minh-Quan‌ Dao, Ezio Malis
  • Description:‌​‌
    Mixed Signals is a comprehensive V2X dataset developed in collaboration with Ecole Centrale de Nantes, Cornell University, the University of Sydney and the Ohio State University. The dataset features 45.1K point clouds and 240.6K bounding boxes collected from three connected autonomous vehicles (CAVs) equipped with two different LiDAR sensor configurations, plus a roadside unit with dual LiDARs. The dataset provides point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training. Figure 18 illustrates examples of high-quality annotated dynamic objects in the dataset.
    Figure 18

    Visualization of‌ object tracks in Mixed‌​‌ Signals. Dynamic objects display​​ smooth trajectories, while static​​​‌ objects maintain consistent poses‌ over time, highlighting the‌​‌ high quality of our​​ annotations.

    Figure 18:​​​‌ Visualization of object tracks‌ in Mixed Signals. Dynamic‌​‌ objects display smooth trajectories,​​ while static objects maintain​​​‌ consistent poses over time,‌ highlighting the high quality‌​‌ of our annotations.
  • Publications:​​
    The dataset has been​​​‌ published in 29.‌
  • Contact:
    Ezio Malis
  • Release‌​‌ contributions:
    The dataset has​​ been released to the​​​‌ public at https://­mixedsignalsdataset.­cs.­cornell.­edu/

8‌ New results

8.1 Context‌​‌ aware autonomous navigation

Participants:​​ Monica Fossati, Philippe​​​‌ Martinet, Ezio Malis‌.

Achieving full autonomy in urban settings remains challenging due to the dynamic and unpredictable behavior of road users. We address these challenges in 25 by integrating high-definition (HD) maps, specifically those based on the Lanelet2 format, with real-time perception data to dynamically characterize the space around a road agent. This integration leverages the graph-based structure, semantic richness, and modularity of Lanelet2 maps to provide a comprehensive, context-aware representation of the environment. The goal is to view the road from the user's perspective, extracting navigation-relevant information to support adaptive and proactive decision-making, enhancing the vehicle's situational awareness and its ability to navigate complex urban scenarios safely and efficiently. This leads to a more robust understanding of the vehicle's context in urban navigation. It is an innovative approach that accurately describes the space around road users at the lanelet level, using multiple reachability criteria and integrating real-time data on the agent's kinematics and constraints.
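The lanelet-level reachability idea can be illustrated with a toy breadth-first search over a routing graph (a hypothetical stand-in for a Lanelet2 routing graph; the actual criteria also integrate the agent's kinematics and constraints):

```python
from collections import deque

def reachable_lanelets(graph, start, max_hops):
    """Lanelets reachable from `start` within `max_hops` transitions.
    `graph` maps lanelet id -> list of adjacent lanelet ids (successors
    and lane changes), a toy stand-in for a Lanelet2 routing graph."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue                      # horizon reached on this branch
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen
```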

8.2 Multi-Spectral Visual Servoing​

Participants: Enrico Fiasché,​‌ Siddharth Savner, Philippe​​ Martinet, Ezio Malis​​​‌.

Multispectral sensors, which measure multiple wavelength bands beyond the standard red, green, and blue channels, capture richer information than conventional RGB cameras. Such enriched data is especially valuable in visual servoing, where robot control critically depends on image content. However, leveraging multiple spectral bands (typically around a dozen) directly within real-time visual servoing constitutes a significant challenge. The only prior work tackled this problem using a pixel selection strategy based on image gradients 3. This work introduces a learning-based framework to enhance Multi-Spectral Visual Servoing (MSVS) by fusing data from multispectral cameras into a single, robust representation for control. An autoencoder is employed to compress multispectral inputs into a noise-attenuated 2D image, which is then used within a standard rule-based Direct Visual Servoing (DVS) scheme. Experiments both with simulated data and with a real robot in complex and unstructured environments show that the proposed learning-based fusion maintains stable convergence and improves positioning accuracy under noisy conditions while preserving computational efficiency.
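As a rough, linear stand-in for the autoencoder fusion (illustrative only; the actual method learns a nonlinear compression), one can project each pixel's spectrum onto its first principal component:

```python
import numpy as np

def fuse_bands_pca(cube):
    """Collapse an (H, W, B) multispectral cube into a single 2D image by
    projecting each pixel's B-band spectrum onto the first principal
    component. A linear toy stand-in for the learned autoencoder fusion."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                      # center the spectra
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(H, W)         # project onto 1st component
```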

8.3​ Efficient and accurate closed-form​‌ solution to pose estimation​​ from 3D correspondences

Participants:​​​‌ Ezio Malis, Jana​ Vrablikova [Inria, AROMATH],​‌ Laurent Busé [Inria, AROMATH]​​.

Computing the pose from 3D data acquired in two different frames is of high importance for several robotic tasks like odometry, SLAM and place recognition. The pose is generally obtained by solving a least-squares problem given point-to-point, point-to-plane or point-to-line correspondences. The non-linear least-squares problem can be solved by iterative optimization or, more efficiently, in closed form by using solvers of polynomial systems. The main contribution of our work is to integrate Sylvester forms 40 with the hidden-variable formulation of the resultant 10 in order to obtain new resultant-based methods that operate in degrees 7 and 8, significantly reducing the size of the elimination matrices compared to the degree 9 approach previously proposed. We give the theoretical foundations of our approach, relying on the concept of saturation of an ideal, and prove its validity. More specifically, other key contributions are (i) a detailed analysis of the rank of certain linear systems, which allows us to prove the existence of our new elimination matrices, and (ii) a construction of Sylvester forms tailored to our setting, providing structural results on their coefficients that ease their evaluation. To our knowledge, this is the first application of Sylvester forms to a large variety of pose estimation problems, and the first demonstration that such forms can be used to derive faster, more compact closed-form solvers without sacrificing accuracy. This establishes a new connection between advanced elimination theory and practical computer vision algorithms.
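For the point-to-point case alone there is a classical closed-form baseline via SVD (Arun's method); the sketch below is that textbook baseline for reference, not the resultant-based solvers described above, which also cover point-to-plane and point-to-line correspondences:

```python
import numpy as np

def rigid_pose_from_points(P, Q):
    """Least-squares rigid pose (R, t) with Q ≈ R @ P.T + t per point,
    given (N, 3) correspondences, via the classical SVD solution."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```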

8.4 Lidar‌​‌ Odometry

Participants: Matteo Azzini​​, Ezio Malis,​​​‌ Philippe Martinet.

Nowadays, lidar technology is widely exploited thanks to its capability of providing a highly accurate 3D point cloud. In particular, in the context of autonomous navigation, the sensor data can be used to localize a vehicle in the environment. In this context, it is important to reduce as much as possible the unavoidable drift, especially over long trajectories. Traditional localization methods rely mainly on approaches inspired by the ICP algorithm, trying to extract reliable features in the incoming point cloud and performing a unidirectional matching with the reference point cloud. With such an approach, not all the available information from two successive scans is exploited. Indeed, one could choose the symmetrical approach of extracting reliable features in the reference point cloud and performing a unidirectional matching with the current point cloud. In 20, we proposed BALO, a novel point-to-plane lidar odometry, which exploits the symmetrical nature of the problem. By performing a bidirectional matching, the method is able to balance the error inherent in the feature extraction phase and the noise from the data. The proposed method is evaluated on synthetic data from the CARLA simulator and on real data from the KITTI dataset. The results show that BALO outperforms the state-of-the-art point-to-plane methods and is equivalent to the best point-to-point approaches across different real scenarios. Furthermore, on synthetic data in periurban scenarios, the proposed method showed higher accuracy and robustness to simulated noise, proving the potential superiority of point-to-plane correspondences over point-to-point ones, as expected from the theoretical point of view.
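The symmetric matching idea can be illustrated with a toy mutual nearest-neighbor test between two small clouds (illustrative only; BALO performs bidirectional point-to-plane matching on extracted features, not raw mutual nearest neighbors):

```python
import numpy as np

def bidirectional_matches(src, ref):
    """Mutual nearest-neighbor pairs between two (N, 3) clouds: a pair is
    kept only if each point is the other's nearest neighbor. Brute-force
    O(Ns * Nr), for illustration only."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)  # (Ns, Nr)
    fwd = d.argmin(axis=1)            # src -> ref nearest neighbor
    bwd = d.argmin(axis=0)            # ref -> src nearest neighbor
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```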

8.5​​​‌ Dense-direct visual-SLAM

Participants: Diego‌ Navarro Tellez, Ezio‌​‌ Malis, Raphael Antoine​​ [CEREMA], Philippe Martinet​​​‌.

In the context of the ROADAI project, we proposed a comprehensive framework based on direct Visual Simultaneous Localization and Mapping (V-SLAM) to observe an infrastructure for an inspection task. The precise positioning of data measurements (such as ground-penetrating radar) is crucial for environmental observations. However, in GPS-denied environments near large structures, the GPS signal can be severely disrupted or even unavailable. To address this challenge, we focus on the accurate localization of drones using vision sensors and SLAM systems. Traditional SLAM approaches may lack robustness and precision, particularly when cameras lose perspective near structures. We propose a new framework that combines feature-based and direct methods to enhance localization precision and robustness 30, 15. A novel Dense Visual SLAM method has been tailored for close-range localization with respect to surfaces using unmanned aerial vehicles (UAVs) in GPS-degraded conditions. Our method uses a custom registration method to enable realistic rendering with dense maps, designed for close-range visual odometry and surface modeling. The system operates in two steps: first, the UAV performs an exploratory flight with a stereo camera to build a dense map, modeling surfaces as ellipsoids; second, the system exploits the map to generate reference data, enabling dense visual odometry (DVO) in close proximity to the surfaces without the need for stereo data. Experiments in realistic simulated environments and on real scenarios demonstrate the system's capability to localize the drone with 16 cm accuracy at a distance of 2 m from the surface, outperforming existing state-of-the-art approaches.

8.6​​ Reliable Risk Assessment and​​​‌ Management in autonomous driving​

Participants: Emmanuel Alao,​‌ Lounis Adouane [UTC Compiegne]​​, Philippe Martinet.​​​‌

A risk assessment and management unit requires predicting and simulating multiple road scenarios. This has led to the development of multiple hypotheses and prediction algorithms for estimating the future states of road users. The uncertainty has further escalated due to the introduction of Personal Light Electric Vehicles (PLEVs) like electric scooters and bikes. Previously, we proposed and validated an overall probabilistic multi-controller approach included in a global reliable risk assessment and management system architecture. It introduces a decision-making and control strategy using a multi-level motion optimization method 18 that captures the uncertainties in the motion of the surrounding agents using a Fusion of stochastic Predictive Inter-distance Profiles (F-sPIDP). Using F-sPIDP as a continuous multi-risk assessment metric, it is possible to project the uncertainties in the motion of the traffic agents onto the predicted inter-distance between the Autonomous Vehicle and the surrounding agents. F-sPIDP extends the concept of Predictive Inter-Distance Profile (PIDP) to stochastic PIDP (sPIDP) to account for the stochastic uncertainty in the predicted state of the agents. Then, the problem introduced by multimodal prediction is addressed by performing a fusion of multiple stochastic PIDPs 19. In particular, an optimal trajectory is selected from the set of possible maneuvers the Autonomous Vehicle can perform using a combination of safe global trajectory sampling and F-sPIDP. Subsequently, control actions that minimize collision risk and respect the dynamics of the Autonomous Vehicle are computed using a local control optimization method 17, 13.
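
A much-simplified illustration of a stochastic inter-distance bound follows (the real F-sPIDP fuses several stochastic profiles; here we only show how a predicted position covariance can be projected onto the ego-agent line of sight to obtain a conservative distance profile — function name and the factor `k` are ours):

```python
import numpy as np

def spidp_lower_bound(ego_traj, agent_mean, agent_cov, k=2.0):
    """Simplified stochastic inter-distance profile: for each prediction step,
    the expected ego-agent distance minus k standard deviations of the
    distance along the line of sight (a conservative risk bound)."""
    diff = agent_mean - ego_traj                      # (T, 2) relative positions
    dist = np.linalg.norm(diff, axis=1)
    los = diff / dist[:, None]                        # line-of-sight unit vectors
    # variance of the distance, projected from the position covariance
    var = np.einsum('ti,tij,tj->t', los, agent_cov, los)
    return dist - k * np.sqrt(var)
```

A maneuver would then be rejected whenever this lower-bound profile drops below a safety inter-distance at any prediction step.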

8.7 Parameter and state​‌ estimation for nonlinear vehicle​​ dynamics

Participants: Fabien Lionti​​​‌, Nicolas Gutowski [LERIA​ Angers], Sébastien Aubin​‌ [DGA], Philippe Martinet​​.

We address the challenges of simulating vehicle dynamics over long horizons using limited, noisy data collected during testing. We propose a robust, physics-informed framework for vehicle system identification and state estimation, focusing particularly on the dynamic behavior of military vehicles. Three major research directions are explored: (1) robust trajectory-based model identification using a multi-step loss function 27 resilient to real-world disturbances; (2) hybridization of data-driven and model-based methods to integrate physical priors with machine learning techniques 14; and (3) state estimation through Moving Horizon Estimation 28 and the design of physics-informed virtual sensors for internal state reconstruction. These contributions aim to enhance vehicle performance evaluation and safety analysis during testing, and provide foundational tools for future decision-support systems and simulation-driven validation strategies.
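
The multi-step idea of direction (1) can be contrasted with independent one-step losses in a short sketch (illustrative only; the model `f`, its parameters, and the quadratic penalty are placeholders, not the loss of 27):

```python
import numpy as np

def multistep_loss(params, f, x0, controls, measured):
    """Multi-step identification loss: roll the model forward over the whole
    horizon from x0 and penalize the accumulated trajectory error, rather
    than independent one-step errors (more robust to noisy states)."""
    x, err = x0, 0.0
    for u, y in zip(controls, measured):
        x = f(x, u, params)          # model rollout, never reset to the data
        err += np.sum((x - y) ** 2)
    return err / len(measured)
```

Because the state is never reset to the (noisy) measurements, parameters that only fit single transitions but drift over long horizons are penalized.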

8.8 A Novel​​​‌ 3D Model Update Framework‌ for Long-Term Autonomy

Participants:‌​‌ Stefan Larsen, Ezio​​ Malis, El Mustafa​​​‌ Mouaddib, Patrick Rives‌.

Accurate digital representations of large and complex environments have many crucial applications for autonomous localization and monitoring. Recent methods using high-end sensors can create large, dense 3D representations, but this is typically a costly task that cannot be done frequently due to time and budget constraints. However, many robotic tasks, such as localization, require accurate and up-to-date reference models to perform over time. This becomes a critical problem in environments like (peri-)urban areas, which are constantly affected by periodic changes from weather, vegetation and illumination, dynamic objects like cars and pedestrians, and construction work. Without consistent updates of the environment representation, scene changes may have significant impacts on the long-term performance of autonomous systems. Instead of performing specific missions to update the 3D representation of dynamic environments, we proposed in 26 a more efficient approach that uses limited query image data obtained by agents during previous generic missions in the area. The main contribution of the novel framework is the ability to segment and locate both new and missing objects from only a few observations, to provide consistent updates to a dense reference model. Experiments with a new changing-outdoor dataset demonstrate the effectiveness of the model update framework, and show how model updates can be used to improve the accuracy of state-of-the-art visual localization over time.

8.9 Multi-robots localization​​​‌ and navigation for infrastructure‌ monitoring

Participants: Mathilde Theunissen‌​‌, Isabelle Fantoni,​​ Ezio Malis.

In the context of the ANR SAMURAI project, we studied the interest of leveraging robot formation control to improve localization precision. Precise localization is crucial for accurate mobile robot task execution. In obstructed or indoor environments where the Global Navigation Satellite System (GNSS) is unavailable, localization performance degrades significantly. To address this, wireless sensor networks can be deployed. Among the various available technologies, Ultra-wideband (UWB) sensors stand out as an attractive solution due to their low cost, robustness to multipath errors and low power consumption. In 32 we proposed a theoretical framework for designing a multi-robot formation equipped with UWB sensors to localize a target robot. In the presence of noisy range measurements, the accuracy of the target robot's pose estimation is highly dependent on the chosen formation geometry. Unlike existing works, we account for the heterogeneous standard deviations of range measurements across different UWB transmitter-receiver pairs. We establish new optimality conditions for formation geometries and conduct a sensitivity analysis of optimal formations under robot positioning errors. In a 2D setting, we derive necessary and sufficient conditions for both optimality and robustness to robot positioning uncertainty. Experimental results confirm the heterogeneous standard deviations of UWB range measurements and validate the target robot's confidence ellipse model. An experimental comparison of formation geometries, optimized with and without considering heterogeneous noise, emphasizes the importance of accounting for the heterogeneous standard deviations of range measurements. In addition, we experimentally demonstrate that robust formation geometries improve the target robot's confidence ellipse in the presence of positioning errors.
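
The dependence of the confidence ellipse on formation geometry and on heterogeneous noise can be sketched through the Fisher information of the range measurements (a simplified 2D, position-only illustration under Gaussian range noise; not the framework of 32):

```python
import numpy as np

def target_covariance(anchors, target, sigmas):
    """Covariance (CRLB) of a 2D target position estimated from UWB ranges,
    with heterogeneous noise: each anchor-target pair has its own sigma.
    Rows of the Jacobian are the unit line-of-sight vectors."""
    u = target - anchors
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # geometry term
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)        # heterogeneous weights
    fim = u.T @ W @ u                                  # Fisher information matrix
    return np.linalg.inv(fim)                          # confidence-ellipse matrix
```

Optimizing the formation then amounts to choosing anchor positions that shrink this matrix; with heterogeneous `sigmas`, the optimal geometry differs from the equal-noise case.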

8.10 Improving​ Vulnerable Road-Users Detection through​‌ Hybrid Collaborative Perception and​​ Detection Refinement

Participants: Minh-Quan​​​‌ Dao, Ezio Malis​, Selma Oubouabdellah [LS2N,​‌ Nantes], Elwan Héry​​ [LS2N, Nantes], Vincent​​​‌ Fremont [LS2N, Nantes],​ Julien Moreau [Heudiasyc, Compiegne]​‌.

In the context of the ANR ANNAPOLIS project, we studied how to ensure the safety of autonomous vehicles in complex urban environments by increasing the accuracy of 3D object detection. While LiDAR sensors provide reliable depth information, their effectiveness is limited by sparsity at long distances and occlusions, particularly in intersection scenarios. Collaborative perception addresses these challenges by enabling information sharing among vehicles and infrastructure sensors, with intermediate fusion offering a balance between communication efficiency and detection accuracy. However, existing collaborative perception frameworks exhibit a notable performance gap between detecting vehicles and vulnerable road users such as cyclists and pedestrians. In 31, we proposed a novel hybrid collaboration framework designed to reduce this gap. Our method leverages late-stage information from communicating agents to augment the ego agent's point cloud, then applies a standard intermediate fusion strategy, followed by a refinement stage that further improves the detection accuracy of various objects. Experiments on the Mixed Signals dataset demonstrate that our approach sets a new state-of-the-art in the detection of vulnerable road users in urban V2X scenarios.

8.11 Mixed Signals: A​​ Diverse Point Cloud Dataset​​​‌ for Heterogeneous LiDAR V2X​ Collaboration

Participants: Minh-Quan Dao​‌, Ezio Malis,​​ Vincent Frémont [LS2N, Nantes]​​​‌, Julie Stephany Berrio​ Perez [University of Sidney]​‌, Mao Shan [University​​ of Sidney], Stewart​​​‌ Worrall [University of Sidney]​, Katie Luo [Cornell​‌ University], Zhenzhen Liu​​ [Cornell University], Mark​​​‌ Campbell [Cornell University],​ Kilian Weinberger [Cornell University]​‌, Bharath Hariharan [Cornell​​ University], Wei-Lun Chao​​​‌ [The Ohio State University]​.

Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. However, existing V2X datasets are limited in scope, diversity, and quality. To address these gaps, we contributed to Mixed Signals 29, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from three connected autonomous vehicles (CAVs) equipped with two different configurations of LiDAR sensors, plus a roadside unit with dual LiDARs. The dataset provides point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training. We provide a detailed statistical analysis of the quality of our dataset and extensively benchmark existing V2X methods on it. Mixed Signals is ready to use, with precise alignment and consistent annotations across time and viewpoints. We hope our work advances research in the emerging, impactful field of V2X perception.

8.12 A Novel​​​‌ Framework For Robust Collaborative‌ Perception Against Adversarial Agents‌​‌

Participants: Minh-Quan Dao,​​ Ezio Malis.

Autonomous​​​‌ vehicles predominantly rely on‌ LiDARs to accurately detect‌​‌ objects in their surrounding​​ environments. Due to their​​​‌ reliance on the detection‌ of light beams reflected‌​‌ from the surface of​​ objects, LiDARs are vulnerable​​​‌ to occlusion, which is‌ frequent when navigating complex‌​‌ traffic in urban areas.​​ Collaborative perception addresses the​​​‌ challenge of occlusion by‌ enabling vehicles and infrastructure,‌​‌ such as Road-Side Units​​ (RSUs), to share information​​​‌ and enhance detection capabilities.‌ Existing methods are categorized‌​‌ into Early, Intermediate, and​​ Late collaboration. Early collaboration​​​‌ shares raw point clouds,‌ Intermediate collaboration exchanges Bird's-Eye‌​‌ View (BEV) representations, while​​ Late collaboration transmits object​​​‌ detection results, each offering‌ different trade-offs between performance‌​‌ and communication efficiency. While​​ enabling vehicles to perceive​​​‌ beyond their sensing capacity,‌ collaborative perception introduces vulnerabilities‌​‌ to adversarial attacks, which​​ can degrade detection performance.​​​‌ Prior defenses focus solely‌ on Intermediate collaboration, neglecting‌​‌ more practical Late collaboration​​ approaches. In 23,​​​‌ we proposed a robust‌ two-stage Late collaboration framework‌​‌ that leverages secure RSUs​​ to evaluate and filter​​​‌ exchanged messages before fusion‌ at the ego vehicle.‌​‌ Our method is robust​​ against (i) a large​​​‌ number of spurious detections‌ that an adversarial agent‌​‌ sends to others and​​ (ii) the presence of​​​‌ multiple adversarial agents.

8.13‌ Connectivity and coordination in‌​‌ heterogeneous multi-robot systems

Participants:​​ Enrico Fiasché, Philippe​​​‌ Martinet, Ezio Malis‌.

Ensuring connectivity and coordination in heterogeneous multi-robot systems (MRS) navigating in complex environments is a critical challenge, especially when communication constraints and obstacles cause robots to become lost or disconnected. We present a novel approach 24 integrating Model Predictive Control (MPC) with Generalized Connectivity Maintenance (GCM) to enable real-time path adaptation while preserving connectivity. We introduce a decentralized decision-making framework that enables robots to recover lost members dynamically. When reconnection is infeasible, the system adapts the mission to continue while accounting for disconnected robots. Our method is evaluated through extensive simulations, showing its scalability and effectiveness in maintaining connectivity and ensuring mission success. Additionally, we propose a new evaluation metric that comprehensively assesses system performance, considering connectivity, coordination, and mission success in challenging environments.
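
Connectivity-maintenance approaches such as GCM typically monitor the algebraic connectivity of the communication graph. A minimal sketch of that quantity (illustrative only, not the controller of 24):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue (lambda_2) of the graph Laplacian.
    The communication graph is connected iff lambda_2 > 0, so connectivity-
    maintenance controllers keep lambda_2 above a safety margin."""
    A = np.asarray(adj, float)
    L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]
```

In an MPC setting, candidate trajectories whose predicted graphs drive this value toward zero are penalized or discarded.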

8.14 Trajectory forecasting‌ in urban environments

Participants:‌​‌ Kaushik Bhowmik, Philippe​​ Martinet, Anne Spalanzani​​​‌.

We propose a novel framework, Candidate Graph-Net (CG-Net) 21, which improves trajectory prediction in urban road intersection scenarios by encoding the available candidate centerlines at the current location of the target agent. The interaction encoder in CG-Net is inspired by human behavior. It is modeled using a bipartite graph attention network to predict the trajectory of the target agent. At each time step, the agent embedding in the interaction encoder attends to nearby agents and surrounding scene elements simultaneously. This enables the model to learn how to prioritize interactions between nearby agents and the environment map.
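
The idea of an agent embedding attending to its neighbours can be sketched as a single scaled dot-product attention step (a bare illustration without the learned projections or the bipartite structure of CG-Net; all names are ours):

```python
import numpy as np

def neighbor_attention(agent_q, neighbor_k, neighbor_v):
    """Single-head attention of one agent embedding over nearby agents and
    map elements: softmax of scaled dot-product scores, then a weighted
    average of the neighbour values."""
    d = agent_q.shape[-1]
    scores = neighbor_k @ agent_q / np.sqrt(d)
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ neighbor_v
```

The attention weights `w` are what lets the model prioritize some neighbours (or map elements) over others at each time step.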

8.15 Stereo​‌ embedding natural-language-driven open-vocabulary semantic​​ segmentation

Participants: Thomas Campagnolo​​​‌, Ezio Malis,​ Philippe Martinet, Gaetan​‌ Bael (NXP).

Recent advances in phrase grounding are largely limited to single-view images, neglecting the rich geometric cues available in stereo vision. To overcome this limitation, we introduced PhraseStereo in 22, the first dataset that brings phrase-region segmentation to stereo image pairs. PhraseStereo builds upon the PhraseCut dataset by leveraging GenStereo to generate accurate right-view images from existing single-view data, enabling the extension of phrase grounding into the stereo domain. This new setting introduces unique challenges and opportunities for multimodal learning, particularly in leveraging depth cues for more precise and context-aware grounding. By providing stereo image pairs with aligned segmentation masks and phrase annotations, PhraseStereo lays the foundation for future research at the intersection of language, vision, and 3D perception, encouraging the development of models that can reason jointly over semantics and geometry.

8.16 Fast Quantum-based Keypoint​ Matching

Participants: Ayan Barui​‌, Ezio Malis,​​ Philippe Martinet.

Matching features is a fundamental process in computer vision (CV) applications like object detection, image recognition, scene understanding, 3D reconstruction, localization and many others. Modern challenges require processing huge amounts of input data, and the process may be extremely time-consuming. To address these challenges, the emergence of quantum computer vision, especially via adiabatic quantum computing, is a promising research direction. At the same time, it is cumbersome to encode image features and to design an algorithm that takes into account practical constraints such as image noise and executes a use-case scheme on a quantum computer. We proposed a hybrid algorithm based on universal gate quantum computing which uses a modified version of Grover's algorithm to match features that are exact as well as inexact, to cover the practicalities of real scenarios. The results demonstrate scalability and a clear strategy to extract features, encode them into quantum states and use the quadratic speedup of Grover's algorithm in matching. Experiments performed on the IBM Qiskit platform with real images show the applicability of our approach on actual quantum computers.
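
The quadratic speedup can be made concrete with the standard Grover iteration count and success probability (textbook amplitude-amplification formulas, not our modified matching circuit):

```python
import numpy as np

def grover_iterations(n_candidates, n_matches):
    """Optimal number of Grover iterations for M marked items among N:
    r = floor(pi/4 * sqrt(N/M)), i.e. O(sqrt(N/M)) oracle calls versus
    the ~N/M expected trials of classical search."""
    return int(np.floor(np.pi / 4 * np.sqrt(n_candidates / n_matches)))

def success_probability(n_candidates, n_matches, r):
    # After r iterations the marked amplitude is sin((2r+1) * theta),
    # with sin(theta) = sqrt(M/N).
    theta = np.arcsin(np.sqrt(n_matches / n_candidates))
    return np.sin((2 * r + 1) * theta) ** 2
```

For example, finding one matching feature among 1024 candidates needs only 25 Grover iterations to succeed with probability above 0.99.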

8.17​‌ Segment-Safe Control Barrier Functions​​ for Model Predictive Control​​​‌

Participants: Andrea Pagnini,​ Ezio Malis.

Ensuring the safe operation of autonomous robotic systems is a fundamental challenge, as safety, stability, and performance objectives often conflict. In discrete-time control, safety constraints are usually enforced at discrete sampling instants, leaving the system potentially unsafe between samples. This issue can lead to collisions or unsafe behavior, particularly in scenarios involving small obstacles, fast dynamics, or long sampling intervals. A promising solution to ensure safety is Control Barrier Functions (CBFs). We investigated a novel Segment-Safe Control Barrier Function (SSCBF) integrated into a discrete-time MPC framework (MPC-SS). The SSCBF extends discrete-time CBF theory, providing a formal guarantee of safety along the line segment connecting consecutive predicted states. This linear approximation results in improved safety for the system, while avoiding overconservatism. The method is applied to obstacle avoidance problems, providing a practical choice of SSCBF constraints for both static and dynamic obstacles. Numerical validation has been conducted on a 2D double integrator and a nonlinear quadrotor UAV, showing the effectiveness of the proposed approach even in cases where the system's true dynamics deviate significantly from the linear segment evolution. The safety and performance of the proposed method have been compared with other CBF-based approaches through theoretical analysis and simulations, showing its advantages over existing methods.
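
The gap between sample-instant safety and segment safety can be illustrated with a toy check (a sampled approximation for illustration only; the SSCBF itself provides a formal guarantee along the segment, not a sampling test, and all names here are ours):

```python
import numpy as np

def dcbf_ok(h, x, x_next, gamma=0.5):
    # Standard discrete-time CBF condition, enforced only at sample instants.
    return h(x_next) - h(x) >= -gamma * h(x)

def segment_safe(h, x, x_next, n=20):
    """Segment check in the spirit of the SSCBF: require safety along the
    straight line between consecutive predicted states, not only at them."""
    lam = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1 - lam) * x + lam * x_next
    return bool(np.all([h(p) >= 0 for p in pts]))
```

With a barrier `h(p) = ||p|| - 1` (unit obstacle at the origin), a step from one side of the obstacle to the other satisfies the sample-instant condition yet is clearly unsafe between samples, which is exactly the case the segment constraint rules out.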

9​​​‌ Bilateral contracts and grants‌ with industry

9.1 Bilateral‌​‌ contracts with industry

Acentauri​​ is in charge of​​​‌ four research contracts.

9.1.1‌ Naval Group

Usine du‌​‌ Futur (2022-2025)

 

Participants: Ezio​​ Malis, Philippe Martinet​​​‌, Erwan Emraoui,‌ Marie Aspro, Pierre‌​‌ Alliez (Inria, TITANE).​​

The context is that of the factory of the future for Naval Group in Lorient, for submarines and surface vessels. As input, we have a digital model (for example of a frigate), the equipment assembly schedule and measurement data (images or lidar). Most of the components to be mounted are supplied by subcontractors. As output, we want to monitor the assembly site to compare the "as-designed" with the "as-built". The challenge of the contract is a need for coordination on the construction sites for planning decisions. It is necessary to be able to follow the progress of a real project and check its conformity using a digital twin. Currently, since verification must be done on board, inspection rounds are required to validate the progress as well as the mountability of the equipment: for example, the cabin and the fasteners must be in place, with holes for the screws, etc. These rounds are time-consuming and accident-prone, not to mention the constraints of the site, for example the temporary lack of electricity or the numerous temporary assembly and safety installations.

The outcome of the project has been a software tool to monitor the progress of a real project and check its conformity using a digital twin (see Sections 7.1.2 and 7.1.3). It has been tested and validated in a real environment at the Naval Group shipyard in Lorient. Marie Aspro will continue to valorize this work in a startup at Inria Startup Studio.

 

La​​​‌ Fontaine (2022-2025)

 

Participants: Ezio‌ Malis, Philippe Martinet‌​‌, Hasan Yilmaz,​​ Pierre Joyet.

The context is that of decision support for a collaborative autonomous multi-agent system with a common objective. The multi-agent system tries to get around "obstacles" which, in turn, try to prevent it from reaching its goals. As part of a collaboration with Naval Group, we wish to study a certain number of issues related to the optimal planning and control of cooperative multi-agent systems. The objective of this contract is therefore to identify and test methods for generating trajectories responding to a set of constraints dictated by the interests, the modes of perception, and the behavior of these actors. The first problem to study is that of the strategy to adopt during the game. The strategy consists in defining "the set of coordinated actions, skillful operations, maneuvers with a view to achieving a specific objective". In this framework, the main scientific issues are (i) how to formalize the problem (often as the optimization of a cost function) and (ii) how to be able to define several possible strategies while keeping the same tools for implementation (tactics).

The second​ problem to study is​‌ that of the tactics​​ to be followed during​​​‌ the game in order​ to implement the chosen​‌ strategy. The tactic consists​​ in defining the tools​​​‌ to execute the strategy.​ In this context, we​‌ study the use of​​ techniques such as MPC​​​‌ (Model Predictive Control) and​ MPPI (Model Predictive Path​‌ Integral) which make it​​ possible to predict the​​​‌ evolution of the system​ over a given horizon​‌ and therefore to take​​ the best action decision​​​‌ based on knowledge at​ time t.
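
The MPPI update mentioned above can be sketched in a few lines (the standard information-theoretic weighting rule; names, shapes and the temperature `lam` are illustrative, not this contract's implementation):

```python
import numpy as np

def mppi_update(u_nom, rollout_cost, noise, lam=1.0):
    """One MPPI step: weight the sampled control perturbations by the softmax
    of their (negated, temperature-scaled) rollout costs and average them
    into the nominal control sequence."""
    c = rollout_cost - rollout_cost.min()             # shift for numerical stability
    w = np.exp(-c / lam)
    w /= w.sum()
    return u_nom + np.einsum('k,kti->ti', w, noise)   # weighted perturbation
```

Here `noise[k]` is the k-th sampled perturbation of the control sequence over the horizon, and `rollout_cost[k]` is the predicted cost of simulating the system under it; low-cost samples dominate the update.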

The third problem is that of combining the proposed approaches with those based on AI, and in particular machine learning. Machine learning can intervene both in the choice of the strategy and in the development of tactics. The possibility of simulating a large number of games could allow the training of a neural network whose architecture remains to be designed.

The outcome of the project has been a software tool for the optimal planning and control of cooperative multi-agent systems. The software has been successfully tested in simulated scenarios defined by Naval Group.

9.1.2​ NXP

Participants: Ezio Malis​‌, Philippe Martinet,​​ Thomas Campagnolo, Gaetan​​​‌ Bael [NXP].

As​ part of a research​‌ collaboration between the ACENTAURI​​ team at Inria Sophia​​​‌ Antipolis and NXP Semiconductors​, we are interested​‌ in building autonomous devices​​ such as robots, drones​​​‌ or vehicles that have​ to navigate through various​‌ dynamic indoor and outdoor​​ environments, such as homes,​​​‌ factories or cities.

The objective of the CIFRE PhD thesis of Thomas Campagnolo is to set up a complete perception system based on a generic spatio-temporal multi-level representation of the scene (geometrical, semantic, topological, ...) that will provide the information needed by an ontology of navigation tasks and directions originating from various modalities (sound, text, images, other systems). The geometric representation will be provided by a state-of-the-art SLAM algorithm, while the PhD subject will focus on extracting semantic and topological information. Semantics and topology will be extracted using a data-based approach, and an abstraction toolbox (graph-based) will be developed to make the connection with ontologies on one side and with the task to be done on the other side.

The​​ PhD will address different​​ contexts with increasing complexity,​​​‌ starting by defining a‌ particular sensing system and‌​‌ a representation of the​​ natural dynamic environment, and​​​‌ using state-of-the-art algorithms to‌ assess the situation at‌​‌ each time of evolution​​ and to evaluate the​​​‌ different actions in a‌ given horizon of time.‌​‌ The different contexts will​​ concern various environments such​​​‌ as homes, factories, fields‌ or cities.

9.1.3 SAFRAN‌​‌

Participants: Ezio Malis,​​ Philippe Martinet, Mohamed​​​‌ Mahmoud Ahmed Maloum,‌ Ahmed Nasreddine Benaichouche [SAFRAN]‌​‌.

The objective of the CIFRE PhD thesis of Mohamed Mahmoud Ahmed Maloum is to study the ability of deep neural networks to address the SLAM problem by leveraging multiple sensor modalities to take advantage of each of them. The challenge lies in the ability to find a common representation space for the different modalities while maintaining a representation of the robot's poses in the SE3 space.

The architecture to​​ be developed should take​​​‌ advantage of attention mechanisms‌ (developed in Transformers) to‌​‌ weight the measurements from​​ different sensors (images, inertia)​​​‌ based on the robot’s‌ state (proprioceptive information: inertia)‌​‌ as well as the​​ environment (exteroceptive information: vision).​​​‌ The balance between real-time‌ performance and accuracy, along‌​‌ with robustness in dynamic,​​ uncertain, and complex environments,​​​‌ are important factors to‌ consider in the study.‌​‌

In the context of​​ the thesis, the methodology​​​‌ followed will be hybrid‌ in nature, aiming to‌​‌ leverage both prior knowledge​​ from physics and data-driven​​​‌ insights. To this end,‌ the approach proposed will‌​‌ combine deep neural networks​​ with traditional pose estimation​​​‌ methods to calculate visual‌ odometry.

10 Partnerships and‌​‌ cooperations

10.1 International initiatives​​

10.1.1 Associate Teams in​​​‌ the framework of an‌ Inria International Lab or‌​‌ in the framework of​​ an Inria International Program​​​‌

AISENSE
  • Title:
    Artificial intelligence‌ for advanced sensing in‌​‌ autonomous vehicles
  • Duration:
    2023​​ -> 2025
  • Coordinator:
    Seung-Hyun​​​‌ Kong (skong@kaist.ac.kr)
  • Partners:
    • Korea‌ Advanced Institute of Science‌​‌ and Technology Daejeon (Corée​​ du Sud)
  • Inria contact:​​​‌
    Ezio Malis
  • Summary:
    The main scientific objective of the collaboration project is to study how to build a long-term perception system in order to acquire situation awareness for the safe navigation of autonomous vehicles. The perception system will perform the fusion of different sensor data (lidar and vision) in order to localize a vehicle in a dynamic peri-urban environment, to identify and estimate the state (position, orientation, velocity, ...) of all possible moving agents (cars, pedestrians, ...), and to get high-level semantic information. To achieve such objectives, we will compare different methodologies. On the one hand, we will study model-based techniques, for which the rules are pre-defined according to a given model, and which need little data to be set up. On the other hand, we will study end-to-end data-based techniques: a single neural network for the aforementioned multiple tasks (e.g., detection, localization, and tracking) to be trained with data. We think that a deep analysis and comparison of these techniques will help us to study how to combine them in a hybrid AI system where model-based knowledge is injected into neural networks and where neural networks can provide better results when the model is too complex to be explicitly handled. This problem is hard to solve since it is not clear which is the best way to combine these two radically different approaches. Finally, the perception information will be used to acquire situation awareness for safe decision making.

10.2 International​​​‌ research visitors

10.2.1 Visits​ of international scientists

Other​‌ international visits to the​​ team
Raphael Murrieta
  • Status​​​‌
    Professor
  • Institution of origin:​
    Centro de Investigación en​‌ Matemáticas (CIMAT)
  • Country:
    Mexico​​
  • Dates:
    September 1st 2024​​​‌ - August 31st 2025​
  • Context of the visit:​‌
    Collaboration on robot motion​​ planning with sensory feedback​​​‌ and learning.
  • Mobility program/type​ of mobility:
    sabbatical

10.3​‌ European initiatives

10.3.1 Digital​​ Europe

Agrifood-TEF (2023-2027)

 

Participants:​​​‌ Philippe Martinet, Ezio​ Malis, Nicolas Chleq​‌, Pardeep Kumar,​​ Matthias Curet, Malek​​​‌ Aifa, Jon Aztiria​ Oiartzabal, Andres Gomez​‌ Hernandez.

The AGRIFOOD-TEF project is co-funded by the European Union and the different countries involved. It is organized in three national nodes (Italy, Germany, France) and five satellite nodes (Poland, Belgium, Sweden, Austria and Spain), and offers its services to companies and developers from all over Europe who want to validate their robotics and artificial intelligence solutions for agribusiness under real-life conditions of use, speeding up their transition to the market.

The​​​‌ main objectives are:

  • to foster sustainable and efficient food production by empowering innovators with the validation tools needed to bridge the gap between their brightest ideas and successful market products.
  • to provide services​​​‌ that help assess and​ validate third party AI​‌ and Robotics solutions in​​ real-world conditions aiming to​​​‌ maximize impact from digitalization​ of the agricultural sector.​‌

Five impact sectors propose​​ tailor-made services for the​​​‌ testing and validation of​ AI-based and robotic solutions​‌ in the agri-food sector​​

  • Arable farming: testing and validation of robotic, selective weeding and geofencing technologies to enhance autonomous driving vehicle performance and therefore decrease farmers' reliance on traditional agricultural inputs.
  • Tree​‌ crop: testing and validation​​ of AI solutions supporting​​​‌ optimization of natural resources​ and inputs (fertilizers, pesticides,​‌ water) for Mediterranean crops​​ (Vineyards, Fruit orchards, Olive​​​‌ groves).
  • Horticulture: testing and​ validation of AI-based solutions​‌ helping to strike the​​ right balance of nutrients​​​‌ while ensuring the crop​ and yield quality. 
  • Livestock​‌ farming: testing and validation​​ of AI-based livestock management​​​‌ applications and organic feed​ production improving the sustainability​‌ of cows, pigs and​​ poultry farming.
  • Food processing:​​​‌ testing and validation of​ standardized data models and​‌ self-sovereign data exchange technologies,​​ providing enhanced traceability in​​​‌ the production and supply​ chains.

Inria will put in place a Moving Living Lab going to the field in order to provide three kinds of services: data collection with mobile ground robots or aerial robots, evaluation of mobility algorithms with mobile ground robots or aerial robots, and sensor/robot testing functionalities.

10.3.2 Other european​ programs/initiatives

Participants: Philippe Martinet​‌, Ezio Malis,​​ Enrico Dondero.

The team is part of euROBIN, the Network of Excellence on AI and robotics that was launched in 2022. The Master 2 internship of Enrico Dondero has been funded by the euROBIN project to work in collaboration with the LARSEN team in Nancy.

10.4‌​‌ National initiatives

10.4.1 ANR​​ project ANNAPOLIS (2022-2026)

Participants: Philippe Martinet, Ezio Malis, Emmanuel Alao, Kaushik Bhowmik, Monica Fossati, Minh Quan Dao, Nicolas Chleq, Quentin Louvel.

AutoNomous Navigation Among Personal mObiLity devIceS: INRIA (ACENTAURI, CHROMA), LS2N, HEUDIASYC. We are involved in augmented perception using Road Side Units for PPMP detection and tracking, attention map prediction, and autonomous navigation in the presence of PPMPs.

10.4.2 ANR project SAMURAI​​​‌ (2022-2026)

Participants: Ezio Malis, Philippe Martinet, Patrick Rives, Nicolas Chleq, Quentin Louvel, Stefan Larsen, Mathilde Theunissen, Matteo Azzini.

ShAreable Mapping using heterogeneoUs sensoRs for collAborative robotIcs: INRIA (ACENTAURI), LS2N, MIS. The aim of the SAMURAI project is to design new approaches for the navigation of a multi-robot system (e.g. AGVs and UAVs) in a dynamic environment using heterogeneous sensors, in order to considerably increase the capability of these systems and simplify their implementation (reduction of preparation time and costs). The scientific objectives of the project are: (i) to build shareable maps of a dynamic environment using heterogeneous sensors (lidar, vision, IMU, GPS, …) mounted on several robots; (ii) to utilize the map to perform environment monitoring using collaborative robots equipped with sensors different from those used to build the shareable map; (iii) to update the map using the data collected by the robots with limited sensor capability during their monitoring task. The developed approaches will be validated experimentally on a scenario concerning the monitoring of infrastructures in a peri-urban environment (roads, bridges, buildings, ...) using ground and aerial robots.

10.4.3 ANR project TIRREX​​​‌ (2021-2029)

Participants: Philippe Martinet, Ezio Malis.

TIRREX is an EQUIPEX+ project funded by the ANR and coordinated by N. Marchand. It is composed of six thematic axes (XXL, Humanoid, Aerial, Autonomous Land, Medical, Micro-Nano) and three transverse axes (Prototyping & Design, Manipulation, and Open Infrastructure). ACENTAURI is involved in:

  • The Autonomous Land axis (ROBt), coordinated by P. Bonnifait and R. Lenain, covers autonomous vehicles and agricultural robots.
  • The Aerial axis is coordinated by I. Fantoni and F. Ruffier.

10.4.4 PEPR project‌​‌ NINSAR (2023-2026)

Participants: Philippe Martinet, Ezio Malis.

In the framework of the PEPR Agroecology and Digital, ACENTAURI is leading the coordination (R. Lenain (INRAE), P. Martinet (INRIA), Yann Perrot (CEA)) of the NINSAR project (New ItiNerarieS for Agroecology using cooperative Robots), accepted in 2022. It gathers 17 research teams from INRIA (3), INRAE (4), CNRS (7), CEA, UniLasalle and UEVE.

10.4.5 Defi Inria-Cerema ROAD-AI‌​‌ (2021-2025)

Participants: Ezio Malis, Philippe Martinet, Diego Navarro [Cerema], Pierre Joyet.

The aim of the Inria-Cerema ROAD-AI defi (2021-2025) is to invent the asset-maintenance methods for infrastructures that could be operated in the coming years, offering a significant qualitative leap over traditional methods. Data collection is at the heart of the integrated management of road infrastructure and engineering structures, and could be simplified by deploying fleets of autonomous robots. Indeed, robots are becoming an essential tool in a wide range of applications; among these, data acquisition has attracted increasing interest due to the emergence of a new category of robotic vehicles capable of performing demanding tasks in harsh environments without human supervision.

10.4.6 DGA ASTRID​‌ project ASCAR (2024-2027)

Participants: Ezio Malis, Andrea Pagnini, Tarek Hamel [I3S Sophia Antipolis].

The ASCAR project will​​ exploit natural invariance and/or​​​‌ equivariance properties in Autonomous​ Robotic Systems by developing​‌ design principles and methods​​ tailored for systems with​​​‌ symmetries. More specifically, the​ project will establish i)​‌ a new paradigm of​​ Guidance and Control for​​​‌ Autonomous Systems that seamlessly​ integrates, in a unified​‌ framework, modeling, control, and​​ optimization design procedures, ii)​​​‌ a framework for Navigation​ that integrates situation awareness​‌ for the analysis and​​ design of efficient and​​​‌ reliable state observers for​ general systems with symmetries,​‌ and iii) a new​​ paradigm and new tools​​​‌ for robust sensor-based control.​

10.4.7 3IA Institute

Ezio Malis holds a senior chair from 3IA Côte d'Azur (Interdisciplinary Institute for Artificial Intelligence). The topic of his chair is “Autonomous robotic systems in dynamic and complex environments”. Ezio Malis has been nominated to serve on the Scientific Council of the 3IA Institute, and he is the scientific head of the fourth research axis, entitled “AI for smart and secure territories”.

11 Dissemination

11.1​‌ Promoting scientific activities

11.1.1​​ Scientific events: organization

Member​​​‌ of the organizing committees​
  • Philippe Martinet has been Editor of the following conferences:
    • IROS 2025 (161 papers, 20 Associate Editors).
  • Ezio Malis has been Associate Editor of the following conferences:
    • IROS 2025 (8 papers).
    • IV 2025 (3 papers).
  • Philippe Martinet has been Associate Editor of the following conferences:
    • ITSC 2025 (5 papers).
    • IV 2025 (3 papers).
  • Philippe Martinet has been co-organizer of the IROS 2025 Workshop on Safety of Intelligent and Autonomous Vehicles: Formal Methods vs. Machine Learning approaches for reliable navigation (SIAV-FM2L).

11.1.2 Scientific events:​ selection

Member of the​‌ conference program committees
  • Ezio Malis has been a member of the program committee of the following conferences:
    • International Conference on Robotics, Computer Vision and Intelligent Systems (ROBOVIS).
    • International Conference on Computer Vision Theory and Applications (VISAPP).
    • International Conference on Informatics in Control, Automation and Robotics (ICINCO).
    • International Joint Conference on Artificial Intelligence (IJCAI).
Reviewer​​​‌
  • Ezio Malis has been​ reviewer of the following​‌ conferences:
    • CVPR (3 papers).​​
    • ICRA (2 papers).

11.1.3​​​‌ Journal

Member of the​ editorial boards
  • Ezio Malis is Associate Editor of IEEE Robotics and Automation Letters in the area “Vision and Sensor-Based Control” (6 papers).
  • Ezio Malis is Editor of the Young Professionals Column in the IEEE Robotics & Automation Magazine.
  • Philippe Martinet is a member of the Intelligent Service Robotics (Springer) Advisory Editorial Board.

11.1.4 Invited‌ talks

  • Ezio Malis gave the following invited talks:
    • "Adaptive Learning to Improve Hybrid Visual Odometry for Intelligent Vehicle Applications", Workshop on Data-Driven Learning for Intelligent Vehicle Applications at the Intelligent Vehicles Conference in June 2025.
    • "Hybrid AI: Integration of Rule-Driven and Data-Driven Approaches for Safer Autonomous Driving", 1st Workshop on Safe and Trustworthy Autonomous Driving at the Intelligent Vehicles Conference in June 2025.
  • Philippe Martinet gave the following invited panel talk:
    • "Heterogeneous Multi-robot applications", at the panel "Internet and Future networking" during Infoware/ICAS 2025 in Lisbon, Portugal.

11.1.5 Leadership within the​​​‌ scientific community

  • Ezio Malis‌ has been the head‌​‌ of the incubator "Quantum​​ Algorithms for Robotics" at​​​‌ the GDR Robotique.
  • Philippe Martinet is a member of the Advisory Board of the Atlas project in Luxembourg (9 PhDs).

11.1.6 Scientific expertise

  • Ezio Malis has been a member of the jury for the Best PhD Award of the GDR Robotique (written reports on 2 PhD theses).
  • Philippe Martinet has been reviewer for one proposal to the "Bienvenue Bretagne - 2025" call.
  • Philippe Martinet has been reviewer for one proposal to the F.R.S.-FNRS in Belgium.
  • Philippe Martinet has been reviewer for one proposal to the CORE Multi-Annual Thematic Research Programme of the F.N.R. in Luxembourg.

11.1.7​​​‌ Research administration

  • Philippe Martinet is the:
    • coordinator of the PEPR project NINSAR,
    • coordinator of the ANR project ANNAPOLIS,
    • coordinator of the regional project EPISUD,
    • co-coordinator of the INRIA/INRAE project ROBFORISK,
    • local coordinator of the European project AGRIFOOD-TEF.
  • Philippe Martinet is a member of the project management committee (called PSG) and leader of the largest work package (WP1, physical testing) at the consortium level of the European project AGRIFOOD-TEF (Section 10.3.1).
  • Ezio Malis is the:
    • coordinator of the ANR project SAMURAI,
    • local coordinator of the Defi Inria-Cerema ROAD-AI,
    • local coordinator of the DGA ASTRID project ASCAR.
  • Ezio Malis is:
    • a member of the BECP (Bureau des comités de projets) at Centre Inria d'Université Côte d'Azur,
    • the scientific leader of the Inria - Naval Group partnership,
    • a member of the scientific council of Institut 3IA Côte d'Azur and the scientific head of the fourth research axis, entitled “AI for smart and secure territories”.
  • Ayan Barui has been in charge of organizing the Biweekly Robotics Seminar (10 seminars).
  • Andrea Pagnini has been the social media manager for the LinkedIn account of the team.
  • Thomas Campagnolo has been the manager of the website and YouTube channel of the team.

11.2​​​‌ Teaching - Supervision -‌ Juries - Educational and‌​‌ pedagogical outreach

11.2.1 Teaching​​

  • Ezio Malis has taught 28 hours of Signal Processing in the ROBO 3 program at Polytech Nice.
  • Ezio Malis has taught 20 hours of Robotic Vision in the ROBO 3 program at Polytech Nice.
  • Ezio Malis has taught 20 hours of Robotic Vision in the Master 2 ISC-Robotique program at Université de Toulon.

11.2.2​​​‌ Supervision

The team has hosted two Master 2 students:

  • Souhail Benomar. Subject: "Precise 3D Semantic Segmentation For Standalone Navigation". Supervisors: Stefan Larsen and Ezio Malis.
  • Enrico Dondero. Subject: "Further comparisons of advanced control methods for navigation of a mobile platform in a human environment". Supervisors: Philippe Martinet and Ezio Malis.

The permanent team members​‌ supervised the following PhD​​ students:

  • Diego Navarro (04/11/2025): "Precise localization and control of an autonomous multi-robot system for long-term infrastructure inspection", Defi Inria-Cerema ROAD-AI. PhD supervisors: Ezio Malis, R. Antoine and Philippe Martinet. 38
  • Matteo Azzini (1/10/2022 - 12/12/2025): "Lidar-vision fusion for robust robot localization and mapping". PhD supervisors: Ezio Malis and Philippe Martinet. 34
  • Enrico Fiasché (1/10/2022 - 3/12/2025): "Modeling and control of a heterogeneous and autonomous multi-robot system". PhD supervisors: Philippe Martinet and Ezio Malis. 35
  • Stefan Larsen (1/10/2022 - 10/12/2025): "Detection of changes and update of environment representation using sensor data acquired by multiple collaborative robots". PhD supervisors: Ezio Malis, El Mustapha Mouaddib (MIS Amiens) and Patrick Rives. 36
  • Mathilde Theunissen (1/11/2022 - 2/12/2025): "Multi-robot localization and navigation for infrastructure monitoring". PhD supervisors: Isabelle Fantoni and Ezio Malis. 39
  • Fabien Lionti (1/10/2022 - 21/10/2025): "Dynamic behavior evaluation by artificial intelligence: Application to the analysis of the safety of the dynamic behavior of a vehicle". PhD supervisors: Philippe Martinet, N. Gutowski (LERIA, Angers) and S. Aubin (DGA-TT, Angers). 37
  • Emmanuel Alao (1/10/2022 - 5/12/2025): "Probabilistic risk assessment and management architecture for safe autonomous navigation". PhD supervisors: L. Adouane (Heudiasyc, Compiègne) and Philippe Martinet. 33
  • Kaushik Bhowmik (started on 1/05/2023): "Modeling and prediction of pedestrian behavior on bikes, scooters or hoverboards". PhD supervisors: Anne Spalanzani and Philippe Martinet.
  • Monica Fossati (started on 1/10/2023): "Safe navigation in urban environments". PhD supervisors: Philippe Martinet and Ezio Malis.
  • Thomas Campagnolo (started on 1/09/2024): "Embedded Machine Learning Solutions for Autonomous Navigation". PhD supervisors: Ezio Malis and Philippe Martinet.
  • Ayan Barui (started on 1/11/2024): "Quantum algorithms for vision-based robot localization". PhD supervisors: Ezio Malis and Philippe Martinet.
  • Andrea Pagnini (started on 1/12/2024): "Optimal and efficient sensor-based control of aerial drones". PhD supervisor: Ezio Malis.
  • Shamick Basu (started on 1/10/2025): "Hybrid AI for sensor-referenced control of robots". PhD supervisor: Ezio Malis.
  • Gires Fotsing Takam (started on 1/10/2025): "Synthesis of coordinated robotic behaviors for agroecology: Application to pixel cropping". PhD supervisors: Philippe Martinet, Eric Lucet and Roland Lenain.

11.2.3​​​‌ Juries

  • Ezio Malis has been a member of the jury for the HDR of Claire Dune (COSMER, Université de Toulon).
  • Ezio Malis has been reviewer and member of the jury for the PhD of Antonio Marino (Centre Inria d'Université de Rennes).
  • Philippe Martinet has been a member of the jury for the HDR of Olivier Kermorgant (ARMEN, LS2N, Université de Nantes).
  • Philippe Martinet has been a member of the jury and reviewer for the PhD of Louis Damberger (Institut Pascal, Université Clermont-Auvergne).
  • Philippe Martinet has been a member of the jury for the PhD of Kai Zhang (ENSTA Paris).

11.3 Popularization

11.3.1 Other science outreach activities

  • Ezio Malis has been the chair of the Young Professionals Committee of the IEEE Robotics and Automation Society (4 events organized at ICRA, CASE, HUMANOIDS and IROS).

12 Scientific production

12.1​​​‌ Major publications

12.2 Publications of the​​​‌ year

International journals

International peer-reviewed​ conferences

Doctoral dissertations​ and habilitation theses

12.3​ Cited publications

  • [40] Laurent Busé, Marc Chardin and Navid Nemati. "Multigraded Sylvester forms, duality and elimination matrices". Journal of Algebra 609, 2022, pp. 514-546.