Keywords
Computer Science and Digital Science
- A3.4.1. Supervised learning
- A3.4.2. Unsupervised learning
- A3.4.5. Bayesian methods
- A3.4.7. Kernel methods
- A6. Modeling, simulation and control
- A6.1. Methods in mathematical modeling
- A6.1.1. Continuous Modeling (PDE, ODE)
- A6.1.2. Stochastic Modeling
- A6.1.5. Multiphysics modeling
- A6.2. Scientific computing, Numerical Analysis & Optimization
- A6.2.1. Numerical analysis of PDE and ODE
- A6.2.4. Statistical methods
- A6.2.6. Optimization
- A6.2.7. High performance computing
- A6.3. Computation-data interaction
- A6.3.1. Inverse problems
- A6.3.2. Data assimilation
- A6.3.3. Data processing
- A6.3.4. Model reduction
- A6.3.5. Uncertainty Quantification
- A6.5.2. Fluid mechanics
Other Research Topics and Application Domains
- B3. Environment and planet
- B3.3. Geosciences
- B4. Energy
- B4.3. Renewable energy production
- B5.2.1. Road vehicles
- B5.2.3. Aviation
- B5.2.4. Aerospace
- B5.5. Materials
1 Team members, visitors, external collaborators
Research Scientists
- Pietro Marco Congedo [Team leader, INRIA, Senior Researcher, HDR]
- Olivier Le Maitre [CNRS, Senior Researcher]
Post-Doctoral Fellows
- Nicolas Leoni [Inria, until Jul 2022]
- Lalaina Rakotondrainibe [INRIA]
PhD Students
- Meryem Benmahdi [DASSAULT SYSTEMES, CIFRE]
- Michele Capriati [INSTITUT VKI]
- Marius Duvillard [CEA]
- Sanae Janati Idrissi [CEA, from Oct 2022]
- Zachary Jones [Inria, from Oct 2022]
- Malo Pocheau [BAÑULS DESIGN]
Interns and Apprentices
- Sicheng Mao [Inria, from Mar 2022 until Aug 2022]
- Jean Pecquet [Inria, from May 2022 until Aug 2022]
Administrative Assistant
- Hanadi Dib [INRIA]
External Collaborator
- Giulio Gori [ECOLE POLYT. MILAN]
2 Overall objectives
Computational approaches in science and engineering rely on numerical tools to produce effective, robust, and high fidelity predictions through the simulation of complex physical systems. The design and development of simulation tools encompass numerous aspects, ranging from the initial mathematical formulation of the problem to its actual numerical resolution, including the design of numerical algorithms suited to computational architectures of modern supercomputers, in particular massively parallel machines.
To fully achieve the promises of numerical simulations in sciences and engineering, it is essential to assess and improve their predictive capabilities continuously. Obvious improvements concern the modeling aspects (higher fidelity) and numerical efficiency (to enable higher resolution). However, as computational capabilities progress, it is becoming more and more evident that accounting for the various uncertainties involved in the simulation process is critical. The reason is that the accurate simulation of a complex system has practical utility if, and only if, one can prescribe with sufficient precision the system investigated. In other words, obtaining high-fidelity predictions on a system different from the one targeted presents limited interest. The problem here is that, except for purely academic situations, specifying precisely all the properties and forcings applied to a complex system is impossible. Whether the precise definition of the system is impossible because of inherent variabilities, lack of knowledge, or imprecise calibration procedures (experimental setups and measurements are inherently inexact), totally eliminating uncertainty sources is not an option. As a result, the simulation should account for these uncertainties and quantify their impact on the predictions (similarly to the characterization of experimental errors) in order to assess objectively the truthfulness of the simulation and enable fully informed decision making. As a matter of fact, reliable numerical predictions require both sophisticated physical models and the systematic and comprehensive treatment of inherent uncertainties, including calibration and validation procedures. Coarsely, prediction errors result from physical simplifications in the mathematical model, numerical errors arising from the discretization and numerical methods (solvers), and uncertainties in the definition of the model to be solved (input uncertainties).
Uncertainty management procedures are often tailored to the particular problem and application considered. In our experience, it is hard to conceive a systematic a priori approach suitable for all problems. Most often, the UQ analysis consists in the gradual (re)definition and extension of its objectives, which can be somewhat vague initially. It is, therefore, crucial to have a large portfolio of diverse numerical methods to quickly propose and apply suitable treatments in response to the evolving understanding and needs as they emerge during the analysis.
The global objective of the research proposed within Platon is to develop advanced numerical methods and practices in simulations, integrating as much as possible the uncertainty management. Here, uncertainty management encompasses multiple uncertainty tasks: a) uncertainty characterization (the construction and identification of uncertainty models), b) uncertainty propagation (computation of the model-based prediction uncertainty), c) uncertainty reduction (by inference, data assimilation, conception of new experiments either physical or numerical,...) and d) uncertainty treatment in decision-making processes (sensitivity analysis, risk management, robust optimization,...). Note that one should not perceive these different uncertainty tasks as reflecting an ordered sequence of analysis steps. On the contrary, our vision and experience value a strong interaction between all these tasks which, ideally, must be visited in an order commanded by the initial information, the progress of the analysis and the resources available.
Progressing on all these tasks constitutes a significant challenge as they involve a diversity of thematics and skills. This difficulty is prominent in the context of large scale simulations, where practitioners and researchers tend to be highly specialized in specific aspects (modeling, numerical schemes, parallel computing,...). Further, more massive simulations are often confused with better predictions, and they overshadow the importance of uncertainties. At the same time, high simulation costs usually prevent applying straightforward uncertainty analyses since, for a fixed budget, one often prefers a simulation at the highest affordable resolution rather than an uncertainty analysis involving possibly less resolved simulations. However, this preference is most often not based on an objective assessment of the situation. In contrast, we believe that using complex models and fairly exploiting the predictions of large scale simulations requires suitable uncertainty management procedures. Further, we are convinced of the importance of a research effort encompassing as much as possible all uncertainty tasks, to ensure the coherence and mutual relevance of the methods developed. Such an effort, focusing on uncertainty management rather than on a particular application, will be critical to improving the predictive capabilities of simulation tools and to addressing industrial and societal needs.
Therefore, the main objectives of the team will be:
- Propose new methods and approaches for uncertainty management.
- Develop these methods into numerical tools applicable to large scale simulations.
- Apply and demonstrate the impact of uncertainty management in real applications with industrial and academic partners.
To achieve these objectives, we rely on the expertise and past research of the permanent members, which cover most of the uncertainty tasks (propagation, inference, reduction, optimization,...), although not in a comprehensive way so far. The development of new predictive simulation tools also relies on collaborations, mainly within the international academic network that we have established over the past 15 years and within the Centre de Mathématiques Appliquées de l'École Polytechnique. The development of useful uncertainty management frameworks applicable to large scale simulations demands constant interaction with end-users (engineers, practitioners, researchers); we rely on our current network of industrial partners and EPICs and extend it progressively.
3 Research program
The Team's approach to research will be bottom-up: starting from new ideas and concepts to address both existing (known) and emerging (anticipated or not) problems. The latter point, concerning emerging problems, is particularly important in a quickly evolving research area with constant improvement of the methodological and computational capabilities. The research thrust will be structured along two principal directions: a methodological axis and an applications axis.
3.1 UQ methodologies and tools
The Team will continuously work on developing original UQ representations and algorithms to deal with complex and large scale models having high dimensional input parameters with complex influences. We plan to organize our core research activities along different methodological UQ developments related to the challenges discussed above.
3.1.1 Surrogate modeling for UQ
Challenges. Surrogate models are crucial to enable the solution of both forward and backward UQ problems. Several alternative approaches, such as Polynomial Chaos, Gaussian Processes, and tensor format approximations, have been proposed and developed over the last decades. These approaches have been successfully applied in many different domains. Still, surrogate models for UQ management face many remaining limitations that require significant research to handle large scale simulation-based studies and to account for complex dependencies. These limitations concern multiple aspects, including the complexity related to the dimensionality of the input parameters, the definition of suitable basis representations, the complexity of the surrogate construction, and the control of the surrogate error.
Proposed actions. Platon will pursue long-term efforts in continuity with previous developments, such as the improvement of advanced sparse grid methods, sparsity-promoting strategies and low-rank methods. Besides these generic developments, a first research axis will focus on the construction of surrogates for multi-physics problems (fluids, structures, chemistry,...) simulated by a system of coupled solvers. Classical surrogate methods consider the system of solvers as a single entity, and their construction requires complete simulations at a high cost. In contrast, we propose a divide-to-simplify strategy, using a surrogate of each constitutive solver, which reduces the input dimensionality of the local models and enables parallel construction and more flexible control of the computational effort. We will have to derive suitable estimates of the error contributions of the individual solvers and procedures for selecting new computer experiments to optimally reduce the error. A second research axis on surrogate models will concern complexity reduction using transformation methods. Transformations can act on the input or output spaces of the model. In the first case, dimensionality reduction is achieved by finding low dimensional subspaces of the input space that convey most of the output variability. Platon will extend these methodologies to incorporate non-linear subspaces and alternative importance measures, in particular to account for the surrogate's final usage (goal-oriented reduction). For the reduction of the output, we will consider generalizations of the preconditioning approach, which transforms the model output to a form admitting a much simpler surrogate and implicitly enforcing physical constraints. Here, the main challenges will be the automatic selection of the transformation from a dictionary and the design of computer experiments in this context (see below).
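To fix ideas, the sketch below illustrates the baseline surrogate technique mentioned above: a Polynomial Chaos expansion fitted by least-squares regression on a toy two-parameter model. It is a self-contained Python illustration, not Platon's tooling; the model, polynomial degree, and sample sizes are arbitrary choices made for the example.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Toy model with two uniform inputs on [-1, 1]^2 standing in for a solver.
def model(X):
    return np.exp(0.5 * X[:, 0]) * np.sin(2.0 * X[:, 1])

# Total-degree Legendre basis up to degree p (tensorized 1D polynomials).
p = 4
multi_indices = [(i, j) for i in range(p + 1) for j in range(p + 1) if i + j <= p]

def basis_matrix(X):
    cols = []
    for i, j in multi_indices:
        ci = np.zeros(i + 1); ci[i] = 1.0    # coefficients selecting P_i
        cj = np.zeros(j + 1); cj[j] = 1.0
        cols.append(legendre.legval(X[:, 0], ci) * legendre.legval(X[:, 1], cj))
    return np.column_stack(cols)

# Least-squares regression for the PC coefficients from a random design.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
coef, *_ = np.linalg.lstsq(basis_matrix(X), model(X), rcond=None)

# Validate the surrogate on fresh samples.
Xt = rng.uniform(-1.0, 1.0, size=(1000, 2))
err = np.linalg.norm(basis_matrix(Xt) @ coef - model(Xt)) / np.linalg.norm(model(Xt))
print(f"relative validation error of the PC surrogate: {err:.2e}")
```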
3.1.2 Uncertainty model, information theory and inference
Challenges. Uncertainty management in simulation is still in its infancy, and the control of the whole process, from the definition of the uncertainty model to the design of new simulations or experiments for uncertainty reduction, still faces multiple challenges. Most past works on UQ have focused on forward-propagation and inverse problems when, in contrast, input uncertainty models and uncertainty reduction strategies in general have received much less attention.
Proposed actions. The uncertainty model directly affects the conclusions of UQ analyses (e.g., sensitivity analyses, estimation of failure probabilities, rare events). Therefore, it is crucial to propose uncertainty models that consistently and objectively integrate all available information and expert knowledge. Platon will explore the application of the maximum entropy principle, likelihood maximization and moment matching methods for the construction of uncertainty models in engineering problems.
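As a hedged illustration of the maximum entropy principle named above, the sketch below constructs the maximum entropy density on [0, 1] matching two prescribed moments by minimizing the convex dual objective; the moment values and discretization are invented for the example.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Prescribed information: first two moments of an uncertain input on [0, 1].
moments = np.array([0.3, 0.15])            # E[x], E[x^2] (illustrative values)
x = np.linspace(0.0, 1.0, 2001)
phi = np.vstack([x, x**2])                 # moment functions

def dual(lam):
    # The max-ent density is p(x) ~ exp(lam . phi(x)); its convex dual
    # objective is log Z(lam) - lam . mu, minimized over the multipliers.
    logZ = np.log(trapezoid(np.exp(lam @ phi), x))
    return logZ - lam @ moments

res = minimize(dual, x0=np.zeros(2), method="BFGS")
p = np.exp(res.x @ phi)
p /= trapezoid(p, x)                       # normalize the density
print("matched moments:", trapezoid(phi * p, x))   # should be close to `moments`
```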
For the inverse problem, the Team will continue its efforts in Bayesian inference toward better treatment of the model error in the calibration procedure.
Concerning uncertainty reduction, a central question is the prediction of the improvement toward the specific objective brought by a new simulation (computer experiment). Platon will investigate different strategies for the design of experiments (DoE) based on measures of the improvement, such as entropy reduction, besides the classical reduction of variance.
The DoE in inference consists in proposing new physical experiments to optimally reduce the posterior uncertainty. Optimizing the information gain leads to expensive numerical procedures, and suitable model error and noise models are critical to ensure the robustness of these optimal DoE procedures when applied to real-life data. Platon will work on approximation and reduction methods for optimal DoE to enable applications in large-scale engineering problems, and on extending the optimization to the reduction of uncertainty in general model predictions, not just in the model parameters.
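The expense of optimizing the information gain can be seen on even the simplest case. The sketch below uses the standard nested Monte Carlo estimator of the expected information gain on a toy linear-Gaussian experiment; the model, noise level, and sample sizes are assumptions made for illustration, not the team's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2                                 # measurement noise std (assumed)

def expected_information_gain(d, n_outer=500, n_inner=500):
    # Nested Monte Carlo estimator of EIG(d) = E[log p(y|theta,d) - log p(y|d)]
    # for the toy experiment y = theta * d + noise, with prior theta ~ N(0, 1).
    theta = rng.normal(size=n_outer)                    # outer prior samples
    y = theta * d + sigma * rng.normal(size=n_outer)    # simulated data
    log_lik = -0.5 * ((y - theta * d) / sigma) ** 2     # constants cancel below
    theta_in = rng.normal(size=(n_inner, 1))            # inner prior samples
    evidence = np.mean(np.exp(-0.5 * ((y - theta_in * d) / sigma) ** 2), axis=0)
    return np.mean(log_lik - np.log(evidence))

# More informative designs (larger |d|) yield larger gains; the exact value
# in this linear-Gaussian setup is 0.5 * log(1 + (d / sigma)**2).
for d in [0.1, 0.5, 1.0, 2.0]:
    print(f"d = {d:3.1f}: EIG ~ {expected_information_gain(d):.3f}")
```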
3.1.3 Multi-fidelity, Multi-level and optimization under uncertainty
Challenges. Multi-fidelity and Multi-level (MF&L) methods have been proposed to reduce the cost of surrogate model construction or of the estimation of statistics, by relying on simulators of different complexity (in the modeled physics, the discretization, or both). Although these methods have proved effective, particularly in the context of expensive simulations, existing algorithms must be adapted to other tasks. MF&L strategies are also missing in Robust Optimization (RO) and Reliability-Based Optimization (RBO), where one has to evaluate the objective accurately, typically some statistics of the model output (moments, quantiles,...).
Proposed actions. The Team Platon will explore MF&L approaches and the design of computer experiments to obtain the best estimation at the lowest cost (or for a prescribed computational budget) for nontrivial goals, specifically optimization and reliability problems where the accuracy needed is not uniform, possibly unknown a priori, and to be estimated as the construction proceeds.
In RO and RBO, our research will focus on the estimation of robustness and reliability measures with tunable fidelity, to adapt the convergence of the statistics to the advancement of the optimization procedure. Platon will include MF&L in the so-called bounding-box approach to track the level of error in the statistical estimates. Another research axis will focus on alternative estimation methods, e.g., Quantile Bayesian regression, to include MF&L features.
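The multilevel idea underlying MF&L estimators can be summarized in a few lines: spend many samples on cheap coarse models and few on expensive fine ones, telescoping the corrections between levels. The sketch below does this for a toy model hierarchy (truncated Taylor approximations of the exponential); the hierarchy and the sample allocation are invented for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Toy hierarchy: the level-l "solver" approximates g(x) = exp(x) by a
# truncated Taylor series; accuracy (and nominal cost) grow with the level.
def solver(x, level):
    return sum(x**k / math.factorial(k) for k in range(2 * level + 2))

# Multilevel telescoping: E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}], estimated
# with many cheap samples on coarse levels and few expensive ones on fine levels.
samples_per_level = [100_000, 10_000, 1_000, 100]
estimate = 0.0
for level, n in enumerate(samples_per_level):
    x = rng.uniform(0.0, 1.0, size=n)       # independent samples on each level
    if level == 0:
        estimate += np.mean(solver(x, 0))
    else:
        estimate += np.mean(solver(x, level) - solver(x, level - 1))

print(f"MLMC estimate of E[exp(X)]: {estimate:.5f} (exact: {np.e - 1:.5f})")
```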
3.1.4 HPC and UQ problems
Challenges. Both intrusive and non-intrusive UQ methods are associated with large computational costs, ranging from several to millions of times the cost of a deterministic solution, depending on the problem and task considered. This situation is a significant obstacle to the deployment of UQ analyses in large scale simulations, and computational aspects have long been central. However, works concerning the exploitation of High-Performance Computing platforms with massive parallelism are still scarce, beyond the trivial parallelism of some sampling methods (e.g., Monte Carlo). Further, past efforts have concerned the formulation of the stochastic problem and relied on existing advanced solution methods (e.g., Domain Decomposition, linear algebra libraries, parallelism). However, few works have fully considered exploiting stochastic structures and HPC aspects to design novel computational strategies fully dedicated to UQ problems.
Proposed actions. Platon will continue to develop solvers for the resolution of the multiple large systems resulting from the discretization of sampled stochastic problems. In particular, we shall focus on linear and non-linear (Newton-like) solvers exchanging information (Krylov spaces) between successive solves to improve the convergence rates of iterative methods. Besides the extension to non-linear problems, the work will focus on implementation aspects and consider communication strategies when several instances of the random system are solved in parallel.
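The following sketch conveys the flavor of information reuse across sampled solves, with a simplified stand-in for Krylov-subspace recycling: each new sampled system is warm-started by a Galerkin projection onto previous solution snapshots. The operator and sampling are toy choices, not the team's solvers.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

rng = np.random.default_rng(3)
n = 200
L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # 1D Laplacian
grid = np.linspace(0.0, 1.0, n)
b = np.ones(n)
snapshots = []                               # solutions of previous samples

for sample in range(8):
    xi = rng.uniform()
    A = L + diags(0.5 * (1.0 + xi * grid))   # sampled SPD operator
    # Stand-in for Krylov-space recycling: warm-start each solve with the
    # Galerkin projection of the new system onto past solution snapshots.
    x0 = None
    if snapshots:
        V = np.column_stack(snapshots)
        y, *_ = np.linalg.lstsq(V.T @ (A @ V), V.T @ b, rcond=None)
        x0 = V @ y
    it = [0]
    x, info = cg(A, b, x0=x0, callback=lambda xk: it.__setitem__(0, it[0] + 1))
    snapshots.append(x)
    print(f"sample {sample}: CG iterations = {it[0]}")
```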
Platon will continue to develop specific domain decomposition methods for stochastic problems, and to propose effective stochastic preconditioners exploiting the independence of the local (uncertain) sub-problems. An additional but critical point concerns the association of adaptive mesh refinement (AMR, in space) with multiple resolution analysis (MRA, in the parameters) methods. Few works have solved UQ problems with deterministic AMR, and combining the two adaptive approaches within a parallel framework remains challenging; progress in this direction would enable efficient intrusive solvers for conservation laws.
4 Application domains
In this section, we provide some examples of UQ problems with industrial interests. We believe they are illustrative of how we envision interactions and knowledge transfer with industrial partners. These examples involve industrial and academic partnerships, with active projects and contracts.
4.1 Simulation of space objects
Challenges. The French aerospace industry is facing enormous technological challenges in a highly competitive market. We focus on two relevant problems, i.e., the design of the booster of the Ariane 6 launch vehicle and the atmospheric reentry of space vehicles or satellites. The launch vehicle's structure sustains severe mechanical and thermal stresses during the ignition stage, which are challenging to model accurately. Therefore, the design still relies heavily on experimental measurements and safety margins, whereas a better account of model uncertainty would help improve the design procedures. Concerning atmospheric reentry, recent regulations require that the reentry of a human-made end-of-life space object be accompanied by a rigorous assessment of the risk to human assets. The risk evaluation requires sequences of complex numerical simulations accounting for the multi-physics phenomena occurring during the reentry of a space object, e.g., fluid-structure interactions and heat transfer. Further, these simulations are inaccurate because they rely on overly simplified models (e.g., a reliable model of fragmentation is not available yet) and partial knowledge of the reentry conditions.
4.2 Predictive simulation of complex flows in nuclear reactors
Challenges. In the nuclear field, a systematic issue is that the calibration and validation of the mathematical model use experimental data measured on devices that are scaled versions of the actual design. One expects the scaled models to exhibit the same physics as the actual design, although the two operate in different conditions. Because of prohibitive computational costs, only parts of the reactor can be simulated with computational-fluid-dynamics (CFD) models. An open question is then how to accurately estimate the global prediction error associated with the resulting numerical model. The long-term objective in this field is to perform a so-called up-scaling approach, integrating simulations of different parts of the reactor with available experiments on scaled and actual designs, to improve the global predictive capability of the simulation and support decisions regarding new experiments.
4.3 Robust design of ORC turbines for renewable energy sources
Challenges. Organic Rankine Cycles (ORCs) are of key importance in renewable energy systems. The thermodynamic properties of organic fluids present technological advantages for low-grade heat sources, e.g., geothermal, solar, or industrial waste heat. The use of these systems in different physical locations worldwide and with different heat source conditions implies a large variability in the turbine's operating conditions. For this reason, ORC manufacturers are highly interested in evaluating the variability in the system efficiency and, eventually, in the robust design of the turbines. Moreover, the molecular complexity of organic fluids requires sophisticated thermodynamic models. Nevertheless, the scarcity of experimental data makes the calibration of both thermodynamic models and parameters (e.g., the acentric factor, among other critical properties) difficult, as well as the inference of a suitable turbulence model.
4.4 Uncertainty and inference in geosciences
Challenges. Uncertainty and inference are crucial in geosciences, where every prediction is affected by lack of knowledge, imprecise calibration, and model error. It is essential to make the best use of the available information and to objectively account for the actual state of knowledge. Besides, depending on the application, experimental observations can be very scarce or highly abundant, and models can be crude or highly sophisticated, such that different methods are needed to adapt to the context. Further, these methods should ideally consider all sources of error (data error, calibration uncertainty, model error, numerical error) globally to balance them and ensure that resources are properly allocated to improve the prediction. For these reasons, Platon will continue to work on methodologies for applications in geosciences.
4.5 Research plan
Most of the actions proposed above are either initiated or planned to start shortly. They are organized and structured around Ph.D. and Post-Doc research activities and will not exceed the duration of the project. Apart from these actions, we will continuously conduct more exploratory research activities to improve, for instance, the treatment of (structural) model errors in uncertainty management, assess the potential application of machine learning algorithms to UQ, and advance toward holistic management of uncertainties.
5 Social and environmental responsibility
5.1 Impact of research results
Pollution reduction in commercial aircraft
Within the EU H2020 programme, the Clean Sky 2 Joint Undertaking (CS2 JU) aims at meeting the following high-level goals with respect to energy efficiency and environmental performance:
- CO2 and fuel burn: -20% to -30% (2025/2035)
- NOx: -20% to -40% (2025/2035)
- Population exposed to noise / noise footprint impact: up to -75% (2035)
To reach such goals, the CS2 JU is developing, among others, a demonstrator of an Advanced Rear End (ARE). The ARE demonstrator aims to integrate the conceptual design, structural and systems architectures, materials, technologies and industrial processes associated with a rear fuselage and empennage. The aim is to optimise all of these elements for application in the next generation of commercial aircraft, which should provide:
- 20% weight reduction
- 20% recurring-cost reduction
- 50% lead-time reduction
- 1.5% aircraft fuel-burn reduction
In the EU project MONNALISA, funded by the CS2 JU, Platon is developing a physics-based, low-order numerical method matching the results of wind-tunnel tests and high-fidelity simulations of Large Passenger Aircraft tails, by leveraging high-fidelity numerical simulations, high-quality wind-tunnel tests, and open-source, advanced mid-fidelity models, all under the control of UQ.
Renewable energy sources
Platon is involved in the development of advanced numerical tools to simulate Organic Rankine Cycles (ORCs), which are of key importance in renewable energy systems. Specifically, we are working on the inference of thermodynamic model parameters for complex molecular compounds, using experimental data from the world's first facility of its kind at Politecnico di Milano. Secondly, we are developing a robust optimization framework for the shape design of ORC turbines. We hope to apply these methodologies to real-case scenarios in collaboration with manufacturers within the H2020-MSCA-ITN NICE project (submitted this year).
6 Highlights of the year
6.1 Successful projects
Platon received funding from two different European projects approved in 2022: Nextair (HORIZON-CL5-2021-D5-01, Grant agreement ID: 101056732) and Traces (HORIZON-MSCA-2021-DN-01, Grant agreement ID: 101072551).
The team received funding from ANR (ANR-Labcom) to create a joint laboratory with the SME Bañulsdesign. The project will start in 2023.
7 New software and platforms
7.1 New software
7.1.1 Stocholm
- Name: Stocholm
- Keyword: Uncertainty quantification
- Functional Description: Stocholm is a numerical library that allows the team to respond swiftly to potential partners and draft UQ solutions addressing new questions. It includes Polynomial Chaos construction, manipulation, and algebra; adaptive sparse grid methods for integration, interpolation, and projection in high dimension; stochastic multi-resolution analysis tools with error estimators; advanced regression methods with regularization techniques and Gaussian process modeling; sampling methods with LHS, QMC and Markov Chain Monte Carlo algorithms; a Bayesian inference framework and fast density estimation methods; and Bayesian optimization algorithms with robust and multi-objective strategies.
- Release Contributions: We will continue integrating existing tools and new ones into the Stocholm library (C++), the most general one, to allow for maximum interoperability of the constitutive utilities. Having a unique library shared by the whole group also presents some interest for students and new researchers joining the Team, as they can benefit from the others' experience.
- Contact: Pietro Marco Congedo
8 New results
8.1 Research axis 1: Uncertainty Quantification and Inference
Participants: P.M. Congedo, O. Le Maître, M. Capriati, N. Leoni, M. Duvillard, G. Gori, M. Benmahdi, S. Idrissi, J. Pecquet.
Project-team positioning
Many research groups around the world, and in France, are presently working on Uncertainty Quantification (UQ) and inference problems. For instance, the US has created and continues to expand large multi-disciplinary groups to address UQ challenges in the energy and military domains through its national laboratories (Sandia, Oak Ridge, LLNL,...). These groups aim at providing generic methods and tools (mostly software) for the resolution of UQ problems (for example, the Dakota code from Sandia-Albuquerque) faced by other research groups from diverse application domains. Other countries support smaller initiatives, including the CEA (civil and military) in France. Several large industrial groups, such as Bosch, EADS, or EdF, are also deploying UQ methodologies and tools (for example, the OpenTURNS code from EADS/EDF) through dedicated R&D units or services, responding to the demands of other services. These UQ activities have often emerged in well-established groups working in specific application domains (e.g., fluid dynamics, solid mechanics, electromagnetics, chemistry, material sciences, earth sciences, life sciences,...), in response to UQ aspects related to these particular domains. We cite G. Iaccarino (Uncertainty Quantification Lab within the Center for Turbulence Research, Stanford University), Y. Marzouk (Aerospace Computational Design Laboratory, MIT) and K. Willcox (Institute for Computational Engineering and Sciences, University of Texas). The situation is globally similar in applied mathematics, where several groups develop advanced UQ methods within a broader research area (e.g., stochastic numerics, statistics, numerical analysis,...), sometimes with only a distant connection to engineering domains. For example, we can mention the research groups of M. Giles (Oxford), I. Bilionis (Purdue University), J. Garnier (École Polytechnique), and R. Abgrall (University of Zurich).
The objective of Platon is to team up participants whose main interest is the development of UQ methodologies. While primarily targeting our current applications, our objective is to propose new applications through collaborations and progressive team development, while maintaining UQ as the project's identity. This strategy gives the Team a somewhat unique position within the national and international research landscapes. As far as computational mechanics and engineering are concerned, no group has been created with UQ management as its principal working area.
The identity of Platon is thus to be contrasted with initiatives, including within Inria, which may have a UQ component but within different methodological contexts and not as a central activity. For instance, some teams (e.g., SIERRA, TAO, SELECT, MODAL) develop statistical methods for data analysis, machine learning, and the treatment of large databases. Overall, the problems targeted by Platon are usually too costly, with high parametric dimension and few experimental data, so existing statistical methods cannot be reused "as is" and require dedicated approaches.
On the application side, there are already Inria teams working on CFD applications, some even incorporating uncertainty quantification and sensitivity analysis activities. We mention here AIRSEA, which focuses on oceanic and atmospheric flows, CARDAMOM on free-surface hydraulics, and ACUMES on unsteady models in traffic flow and biology. In contrast to our project, all these efforts primarily address challenges in their respective application areas.
Scientific achievements
Our research activity features two main axes. The first is related to methodological developments, while the second is oriented toward UQ problems of industrial interest.
In the following, we first describe the main contributions from a methodological point of view, followed by more application-oriented findings.
The first contribution concerns a computer model calibration technique inspired by the well-known Bayesian framework of Kennedy and O'Hagan [21]. We tackle the full Bayesian formulation, where model parameters and model discrepancy hyperparameters are estimated jointly, and reduce the problem dimensionality by introducing a functional relationship, in what we call the Full Maximum a Posteriori (FMP) method. This method also eliminates the need for a true value of the model parameters, which caused identifiability issues in the KOH formulation. When the joint posterior is approximated as a mixture of Gaussians, the FMP calibration is proved to avoid some pitfalls of the KOH calibration, namely missing some probability regions and underestimating the posterior variance. We then present two numerical examples where both model error and measurement uncertainty are estimated together. Using the solution of the full Bayesian problem as a reference, we show that the FMP results are accurate, robust, and avoid the need for high-dimensional Markov Chains for sampling.
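For intuition only, the sketch below contrasts full marginalization of a hyperparameter with an FMP-like profiling of it, on a deliberately tiny grid-based toy problem. It is not the method of [21] nor the team's implementation; the model, priors, and grids are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy calibration: observations = sin(theta_true * x) + noise, with the
# noise/discrepancy scale sigma treated as an unknown hyperparameter.
x = np.linspace(0.0, 1.0, 20)
data = np.sin(2.0 * x) + 0.1 * rng.normal(size=x.size)   # theta_true = 2

def log_joint(theta, log_sigma):
    sigma = np.exp(log_sigma)
    resid = data - np.sin(theta * x)
    return (-0.5 * np.sum((resid / sigma) ** 2) - x.size * np.log(sigma)
            - 0.5 * theta**2 / 25.0)          # wide Gaussian prior on theta

thetas = np.linspace(0.5, 3.5, 300)
log_sigmas = np.linspace(-4.0, 0.0, 200)
lj = np.array([[log_joint(t, s) for s in log_sigmas] for t in thetas])

# Reference: marginalize the hyperparameter. FMP-like: profile it, i.e. plug
# in its most probable value for each theta, avoiding the extra dimension.
for name, post in [("marginalized", np.exp(lj - lj.max()).sum(axis=1)),
                   ("profiled (FMP-like)", np.exp(lj.max(axis=1) - lj.max()))]:
    post = post / post.sum()
    print(f"{name:20s}: posterior mean of theta = {np.sum(thetas * post):.3f}")
```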
The second contribution concerns the first systematic quantification of the numerical error and the uncertainty-induced variability for the simulation of hypersonic flows [7]. The numerical simulation of hypersonic atmospheric entry flows is a challenging problem. The prediction of quantities of interest, such as surface heat flux and pressure, is strongly influenced by the mesh quality when using conventional second-order spatially accurate schemes, while also depending on boundary conditions, which may generally suffer from uncertainty. In this work, a mesh-convergence study using grid adaptation tools is coupled with surrogate-based approaches to Uncertainty Quantification. The illustrative example is the simulation of the EXPERT vehicle of the European Space Agency employing the US3D solver. First, we show the benefits of using mesh adaptation to simulate hypersonic flows under uncertainty. On the one hand, this practice reduces the numerical uncertainty associated with each prediction and, on the other hand, allows us to obtain a more reliable surrogate model for Uncertainty Quantification by preventing non-physical heat flux values. Secondly, we perform a sensitivity analysis to compare the numerical uncertainty associated with a given mesh with the UQ-induced variability for a specific quantity of interest. In the case considered, the impact of the numerical uncertainty turned out to be at least one order of magnitude smaller than the quantity of interest variability. This result indicates the possibility of using coarse, adapted meshes for future UQ studies.
The third contribution focuses on a non-linear regression method applied to the simulation of in-flight ice accretion under parametric uncertainty [8]. A preliminary accuracy assessment, achieved by comparing numerical predictions against experimental observations, confirms the robustness and predictiveness of the computerized icing model. Besides, sensitivity analyses highlight the variance of the targeted outputs with respect to the different uncertain inputs. In rime icing conditions, a predominant role is played by the uncertainty affecting the airfoil angle of attack, the cloud liquid water content and the droplets' mean volume diameter. In glaze icing conditions, the sensitivity analysis shows instead that the output variability is mainly due to the ambient temperature uncertainty. The results expose a major criticality of standard uncertainty quantification techniques. The issue is inherent to the approximation of the full icing model behavior in domain regions scarcely affected by ice build-up. To mitigate the issue, a novel method is proposed and applied.
Applications in Uncertainty Quantification and Inference
Concerning applications in aerospace, we calibrated a carbon nitridation model for a broad span of surface temperatures from existing plasma wind tunnel measurements, accounting for experimental and parametric uncertainties [18]. A chemical non-equilibrium stagnation line model is proposed to simulate the experiments and obtain recession rates and CN densities, the measured model outputs. First, we establish the influence of the experimental boundary conditions and nitridation parameters on the simulated observations through a sensitivity analysis. Results show that these quantities are mostly affected by the efficiency of nitridation reactions at the gas-surface interface. We then perform model calibrations for each experimental condition and compare them based on the experimental data used. This allows us to check the consistency of the experimental dataset. Using only the trustworthy experimental data, we perform a calibration of the Arrhenius law parameters for nitridation efficiencies considering all available experimental conditions jointly, allowing us to compute nitridation efficiencies even for surface temperatures for which no reliable experimental data are available. The stochastic Arrhenius law agrees well with most of the data in the literature. This result constitutes the first nitridation model extracted from plasma wind tunnel experiments with accurate uncertainty estimates.
Another contribution concerns the simulation of the thermal management of Lithium-ion batteries [17], which is a key element for the widespread adoption of electric vehicles. In this study, we illustrate the validation of a data-driven numerical method that quickly evaluates the behavior of the immersion cooling of a Lithium-ion battery pack. First, we illustrate an experiment using an immersion-cooling battery pack setup, where the temperature, voltage and electrical current evolution of the Li-ion batteries are monitored. The impact of different charging/discharging cycles on the thermal behavior of the battery pack is investigated. Secondly, we introduce a numerical model that simulates the heat transfer and electrical behavior of an immersion-cooling Battery Thermal Management System (BTMS). The deterministic numerical model is compared against the experimental temperature measurements. Then, we perform a Bayesian calibration of the multi-physics input parameters using the experimental measurements directly. The informative distributions resulting from this process are used to validate the model in different experimental conditions and to reduce the uncertainty in the model's temperature predictions. Finally, the learned input distributions and the numerical model are used to design the system under realistic conditions representing racing car operation. A Sobol'-indices-based sensitivity analysis is performed to obtain further insight into the behavior of the BTMS.
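Sensitivity analyses via Sobol' indices recur in several of the studies above. The self-contained sketch below shows the standard pick-freeze Monte Carlo estimators on the classical Ishigami test function; it is a generic illustration, not the battery model of [17].

```python
import numpy as np

rng = np.random.default_rng(5)

def ishigami(X):
    # Classical sensitivity-analysis benchmark with known Sobol' indices.
    return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
            + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # "pick-freeze": swap column i
    fABi = ishigami(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var         # first-order index (Saltelli)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-effect index (Jansen)
    print(f"x{i + 1}: S1 ~ {S1:.3f}, ST ~ {ST:.3f}")
```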
Collaborations
For many years, we have maintained long-term partnerships with KAUST, the von Karman Institute for Fluid Dynamics (VKI), Politecnico di Milano and CEA.
With KAUST, we are working on new stochastic particle tracking methods to identify and track oil spills in open waters, combining satellite images and uncertainties in predicted currents. We also develop new assimilation schemes, inference methods for fractional diffusion models, and the selection and reduction of observations. There are several joint publications and exchanges of students.
With VKI, we work on UQ methods and inverse problems for atmospheric re-entry and ablation problems. In terms of production, there are several joint publications and one joint PhD (M. Capriati).
With Politecnico di Milano, we have several activities in the aeronautical and energy fields. We work on the characterization of thermodynamic models with Bayesian approaches, on the uncertainty of turbulence models for RANS aerodynamic simulations, and on multi-fidelity approaches. We are currently involved in the EU CS2 MONNALISA project and in the EU TRACES project. We also have a strong collaboration with Giulio Gori, a former member of Platon and now Assistant Professor at Politecnico di Milano.
With CEA Saclay, we have had a long-term collaboration for four years. Nicolas Leoni defended his thesis this year, and a new PhD student started in October 2022 (Sanae Idrissi).
External support
- CleanSky2 MONNALISA Project (2020-2022)
- MSCA Doctoral Network TRACES Project (2022-2026)
- Industrial contracts with CEA
- Industrial contract with 3DS
Self assessment
In addition to developing methods-oriented research, we proposed UQ methods tailored to specific applications in collaboration with other academic and industrial partners. This action has allowed us to position ourselves with high-impact papers in many application areas.
A weakness may be finding a balance between two different axes. The first axis concerns the development of high-level research from a methodological point of view, while the second involves collaborations with industrial partners within research contracts and European projects. We think that the team's current size is not well suited, in the long term, to this double effort. For this reason, the recruitment of new forces seems mandatory to sustain a good balance between these two main axes of research.
8.2 Research axis 2: Solvers, Numerical Schemes and HPC
Personnel
Participants: P.M. Congedo, O. Le Maître, M. Duvillard, M. Benmahdi.
Project-team positioning
Research on solvers, numerical schemes, and HPC algorithms specifically dedicated to UQ problems is scarce. Indeed, advanced sampling and stochastic estimation procedures, the subject of intensive ongoing research, rely on state-of-the-art deterministic solvers to generate the solution samples. To our knowledge, there is no research group (within or outside Inria) focusing entirely on the computational aspects of UQ problems. Groups producing computational utilities for UQ (e.g., Sandia's Dakota, OpenTURNS) focus on the sampling part (statistical treatment), and the efficient generation of the samples is left to the user. In recent years, a few works have concerned Galerkin solvers, their preconditioning, and the adaptation of domain decomposition methods (DDM) for (usually elliptic) stochastic PDEs. We can mention activities in Manchester (preconditioning), Munich and Lausanne (DDM), and Bath (solvers for multi-level methods). In Platon, we try to exploit the structure of stochastic problems to propose new strategies for their resolution (Galerkin method) or for the generation of solution samples. These strategies can consist of adapting deterministic solvers to factorize the computational effort over multiple samples or, on the contrary, of defining entirely new solution procedures to better exploit parallel methods in stochastic problems, beyond the independent resolution of independent samples. Our objective is to produce parallel and scalable methods for large-scale stochastic problems.
It becomes more and more critical to devise solution methods tailored to the stochastic problem as the numerical complexity of the underlying deterministic problem increases. For elliptic problems, the availability of highly efficient deterministic solvers has somewhat limited the research on stochastic solvers. The situation is different for models based on fractional diffusion operators (in space or time), where the numerical difficulty of these operators has virtually prevented any work on problems with stochastic fractional orders and diffusion coefficients. A few years ago, KAUST (Omar Knio) and KFUPM (Kassem Mustapha) initiated a research program on fractional diffusion models. Platon is involved in this program to deal with the stochastic extensions. Several new numerical schemes and algorithms to solve deterministic fractional diffusion equations have been designed. These schemes are suitable for an extension to stochastic problems (e.g., allowing for spatially variable coefficients and achieving efficient scalability, enabling sampling methods and inverse problems).
Scientific achievements
The first achievement concerns a contribution on time-fractional diffusion equations, which can accurately model anomalous diffusion phenomena. A second-order accurate time-stepping scheme for solving a time-fractional Fokker–Planck equation of order $\alpha \in (0,1)$, with a general driving force, is investigated [12]. A stability bound for the semidiscrete solution is obtained for $\alpha \in (1/2,1)$ via a novel and concise approach. Our stability estimate is $\alpha$-robust in the sense that it remains valid in the limiting case where $\alpha$ approaches 1 (when the model reduces to the classical Fokker–Planck equation), a limit that presents practical importance. Concerning the error analysis, we obtain an optimal second-order accurate estimate for $\alpha \in (1/2,1)$. A time-graded mesh is used to compensate for the singular behavior of the continuous solution near the origin. The time-stepping scheme is combined with a standard spatial Galerkin finite element discretization to numerically support our theoretical contributions. We employ the resulting fully discrete computable numerical scheme to perform some numerical tests. These tests suggest that the imposed time-graded mesh assumption could be further relaxed, and we observe second-order accuracy even for the case $\alpha \le 1/2$, that is, outside the range covered by the theory.
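To make the graded-mesh idea concrete, here is a minimal Python sketch solving the scalar Caputo problem $D^\alpha u = -u$, $u(0) = 1$, whose exact solution is the Mittag-Leffler function $E_\alpha(-t^\alpha)$. It uses the simpler first-order L1 scheme on a graded mesh, not the second-order scheme of [12]; the parameters are illustrative.

```python
import numpy as np
from scipy.special import gamma

alpha, T, N, g = 0.5, 1.0, 400, 2.0         # order, horizon, steps, grading
t = T * (np.arange(N + 1) / N) ** g         # graded mesh, refined near t = 0
u = np.empty(N + 1)
u[0] = 1.0

for n in range(1, N + 1):
    tau = np.diff(t[: n + 1])
    # L1 weights: exact integration of (t_n - s)^(-alpha) against the
    # piecewise-linear interpolant of u.
    c = (t[n] - t[:n]) ** (1 - alpha) - (t[n] - t[1 : n + 1]) ** (1 - alpha)
    c /= gamma(2 - alpha) * tau
    hist = np.dot(c[:-1], np.diff(u[:n]))   # memory term from past increments
    u[n] = (c[-1] * u[n - 1] - hist) / (c[-1] + 1.0)

# Exact solution is the Mittag-Leffler function E_alpha(-t^alpha);
# at t = 1 and alpha = 1/2, E_{1/2}(-1) = exp(1) * erfc(1) ~ 0.42758.
print(f"u(T) ~ {u[-1]:.5f}")
```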
In the context of the supervised learning of a function by a neural network, a second achievement concerns the empirical verification that the neural network yields better results when the distribution of the data set focuses on regions where the function to learn is steep [14]. We first translate this assumption in a mathematically workable way using Taylor expansion and propose a new training distribution based on the derivatives of the function to learn. Then, theoretical derivations allow the construction of a methodology that we call variance based samples weighting (VBSW). VBSW uses the labels' local variance to weight the training points. This methodology is general, scalable, cost-effective, and significantly improves the performance of a large class of neural networks for various classification and regression tasks on image, text, and multivariate data. We highlight its benefits with experiments involving neural networks from linear models to ResNet and BERT.
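A schematic of the weighting idea follows, with invented data and a weighted polynomial fit standing in for the neural network (this is not the paper's implementation): points in steep regions receive larger weights, computed from the local variance of neighbouring labels.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D regression problem with a steep transition around x = 0.5.
x = np.sort(rng.uniform(0.0, 1.0, size=400))
y = np.tanh(40.0 * (x - 0.5)) + 0.05 * rng.normal(size=x.size)

# VBSW-like weights: local label variance among neighbouring points, so
# steep regions (large local variation) receive larger training weights.
k, w = 10, np.empty_like(y)
for i in range(y.size):
    lo, hi = max(0, i - k // 2), min(y.size, i + k // 2 + 1)
    w[i] = np.var(y[lo:hi])
w = 1e-3 + w / w.max()                       # floor so no point is ignored

# Weighted polynomial least squares stands in for the neural network.
V = np.vander(x, 12)
steep = np.abs(x - 0.5) < 0.1
for name, weights in [("uniform", np.ones_like(w)), ("VBSW-like", w)]:
    sw = np.sqrt(weights)[:, None]
    coef, *_ = np.linalg.lstsq(V * sw, y * sw[:, 0], rcond=None)
    rmse = np.sqrt(np.mean((V @ coef - y)[steep] ** 2))
    print(f"{name:9s} weights, RMSE in steep region: {rmse:.3f}")
```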
Obtaining accurate high-resolution representations of model outputs is essential to describe the system dynamics. In general, however, only spatially- and temporally-coarse observations of the system states are available. These observations can also be corrupted by noise. Downscaling is a process in which one uses coarse-scale observations to reconstruct the high-resolution solution of the system states. Continuous Data Assimilation (CDA) is a recently introduced downscaling algorithm that constructs an increasingly accurate representation of the system states by continuously nudging the large scales using the coarse observations. In this context, as a third contribution of the team, we introduce in [10] a Discrete Data Assimilation (DDA) algorithm as a downscaling algorithm based on CDA with discrete-in-time nudging. We then investigate the performance of the CDA and DDA algorithms for downscaling noisy observations of the Rayleigh-Bénard convection system in the chaotic regime. In this computational study, a set of noisy observations was generated by perturbing a reference solution with Gaussian noise before downscaling. The downscaled fields are then assessed using various error- and ensemble-based skill scores. The CDA solution was shown to converge towards the reference solution faster than that of DDA, but at the cost of a higher asymptotic error. The numerical results also suggest a quadratic relationship between the L2 error and the noise level for both CDA and DDA. Cubic and quadratic dependences of the DDA and CDA expected errors on the spatial resolution of the observations were obtained, respectively.
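The nudging mechanism behind CDA/DDA can be demonstrated on a far smaller system than Rayleigh-Bénard. The sketch below (a toy, not the study in [10]) nudges a Lorenz-63 trajectory toward noisy observations of its first component only, refreshed every 25 time steps and held constant in between; gains and step sizes are arbitrary choices.

```python
import numpy as np

# Lorenz-63 stands in for the dynamical system.
def rhs(u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

rng = np.random.default_rng(7)
dt, nsteps, every, mu = 0.002, 20000, 25, 20.0   # mu: nudging gain
truth = np.array([1.0, 1.0, 1.0])
assim = np.array([5.0, -5.0, 20.0])              # deliberately wrong initial state
obs = truth[0]

for step in range(nsteps):
    truth = truth + dt * rhs(truth)              # reference trajectory (Euler)
    if step % every == 0:                        # coarse-in-time noisy obs of x
        obs = truth[0] + 0.1 * rng.normal()
    nudge = np.array([mu * (obs - assim[0]), 0.0, 0.0])
    assim = assim + dt * (rhs(assim) + nudge)    # nudged ("downscaled") run

print(f"final state reconstruction error: {np.linalg.norm(truth - assim):.3e}")
```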
Outside the UQ paradigm, we work on the development of a four-equation model for simulating two-phase mixtures with phase transition [6]. The main assumption consists in homogeneous temperature, pressure and velocity fields shared by the two phases. In particular, we tackle the study of time-dependent problems with strong discontinuities and phase transition. This work presents the extension of a non-conservative residual distribution scheme to solve a four-equation two-phase system with phase transition. This non-conservative formulation allows avoiding the classical oscillations, obtained with many approaches, that might appear in the pressure profile across contact discontinuities. The proposed method relies on a Finite Volume based Residual Distribution scheme designed for explicit second-order time stepping. We test the non-conservative Residual Distribution scheme on several benchmark problems and assess the results via cross-validation against the approximate solution obtained with a conservative approach based on an HLLC solver. Furthermore, we check both methods for mesh convergence and show their effective robustness on very severe test cases involving problems both with and without phase transition.
Collaborations
With KAUST, we worked with Omar Knio on numerical schemes for fractional diffusion equation and their extension to the stochastic case.
With CEA-CESTA, we worked on scientific machine learning techniques within the thesis of Paul Novello.
External support
- Industrial contracts with CEA
- Industrial contract with 3DS
Self assessment
Concerning the fractional diffusion models, we are already engaged in the extension of the hierarchical matrix method to solve the spatial stochastic fractional diffusion equation with a Galerkin method and in developing sparse storage strategies to reduce the complexity of the stochastic time-fractional problem. This is promising and highly original research. We (Platon) depend on the collaboration for access to some of the numerical utilities (H-matrices).
8.3 Research axis 3: Optimization under uncertainty
Personnel
Participants: P.M. Congedo, O. Le Maître, M. Pocheau, L. Rakotondrainibe, G. Gori, S. Mao, Z. Jones.
Project-team positioning
Optimization under uncertainty is an important research axis, due to both the ever-growing computational power available and the need for efficiency, reliability and cost optimality. The presence of uncertainty can make the solution of a deterministic optimization problem suboptimal or even infeasible. Since this behavior can strongly impact the design performance, both academia and industry have focused their efforts on developing optimization under uncertainty methodologies. Optimization under uncertainty is a broad domain including several modeling paradigms, such as stochastic programming, Reliability-Based Design Optimization (RBDO, which deals with probabilistic and worst-case feasibility constraints), and Robust Design Optimization (RDO, where the deterministic objectives are replaced with averaged or worst-case ones, possibly in a multi-objective context such as classical Taguchi optimization).
Note that most of the groups active in optimization under uncertainty also have strong activities in uncertainty quantification; there is thus an overlap with the state of the art presented in Section 8.1. The Optimization & Uncertainty Quantification Group of Sandia-Albuquerque aims at providing advanced methods for the resolution of optimization under uncertainty problems. We also mention optimization under uncertainty activities that have emerged in well-established groups working in specific application domains. We cite the Aerospace Computational Design Laboratory at MIT and the Institute for Computational Engineering and Sciences at the University of Texas. In France, we can mention the OQUAIDO Chair (Optimization and QUAntification of Uncertainties), hosted by the École des Mines de Saint-Étienne from 2016 to 2021, which aimed to bring together academic and industrial partners to solve problems related to uncertainty quantification, inversion and optimization.
In the context of optimization under uncertainty, Platon is devoted to developing novel methods to tackle constrained multi-objective optimization, with specific attention to cost-efficient and mainly derivative-free strategies. Specifically, we look for an optimal trade-off between computational cost and accuracy in the case of problems involving complex and expensive numerical solvers. Platon is also exploring dedicated representations and the design of computer experiments to obtain the best estimation at the lowest cost (or for a prescribed computational budget) for nontrivial goals, specifically optimization and reliability problems where the accuracy needed is not uniform, possibly unknown a priori, and to be estimated as the construction proceeds. More recently, we have also worked on sample average approximation methods using risk-averse stochastic programming formulations.
Several Inria teams have optimization as a core activity, such as BONUS, EDGE, INOCS, POLARIS, and RANDOPT. The main difference is that we are not interested in working on generic optimization algorithms, as mentioned before. In our past and current works, we use standard optimization algorithms, mainly for continuous optimization. We focus our attention on dedicated representations to efficiently estimate uncertainty-based metrics within an optimization problem. The Inria teams POLARIS and INOCS work on innovative methods for stochastic optimization that are quite different from those proposed by Platon.
Scientific achievements
The first achievement concerns a novel methodology devoted to tackling constrained multi-objective optimisation under uncertainty problems [15]. A Surrogate-Assisted Bounding-Box approach (SABBa) is formulated to deal with approximated robustness and reliability measures, which can be adaptively refined. A Bounding-Box is defined as a multi-dimensional product of intervals, centred on the estimated objectives and constraints, that contains the true underlying values. The accuracy of these estimations can be tuned throughout the optimisation so as to reach high levels only on promising designs, which allows quick convergence toward the optimal area. In SABBa, this approach is supplemented with a Surrogate-Assisting (SA) strategy, which further reduces the overall computational cost. The adaptive refinement within the Bounding-Box approach is guided by the computation of the Pareto Optimal Probability (POP) of each box. We first assess the proposed method on several analytical uncertainty-based optimisation test cases with respect to an a priori metamodel approach, in terms of a probabilistic modified Hausdorff distance to the true Pareto optimal set. The method is then applied to three engineering applications: the design of a two-bar truss in structural mechanics, the shape optimisation of an Organic Rankine Cycle turbine blade, and the design of a thermal protection system for atmospheric reentry.
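The conservative reasoning behind the boxes can be conveyed in a few lines: a design can be discarded only when another design's worst corner dominates its best corner. The sketch below applies that filter to invented estimates and half-widths; it is not SABBa's POP computation, only the elementary box-dominance test underlying it.

```python
import numpy as np

rng = np.random.default_rng(8)

# Each design has two objective estimates (minimization) and error half-widths;
# the box [est - hw, est + hw] is assumed to contain the true objectives.
est = rng.uniform(0.0, 1.0, size=(30, 2))
hw = rng.uniform(0.0, 0.08, size=(30, 2))
lower, upper = est - hw, est + hw            # best / worst corners of each box

def possibly_pareto(i):
    # Discard design i only if some design j is better in every objective even
    # in j's worst case and i's best case: then i cannot be Pareto-optimal.
    dom = np.all(upper <= lower[i], axis=1) & np.any(upper < lower[i], axis=1)
    dom[i] = False
    return not dom.any()

keep = [i for i in range(est.shape[0]) if possibly_pareto(i)]
print(f"{len(keep)} of {est.shape[0]} designs remain candidates for refinement")
```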
The second achievement focuses on risk-averse optimization methods to address the self-scheduling and market involvement of a virtual power plant (VPP) [11]. The decision-making problem of the VPP involves uncertainty in the wind speed and electricity price forecasts. We focus on two methods: risk-averse two-stage stochastic programming (SP) and two-stage adaptive robust optimization (ARO). We investigate both methods concerning formulations, uncertainty and risk, decomposition algorithms, and computational performance. To quantify the risk in SP, we use the conditional value at risk (CVaR) because it can resemble a worst-case measure, which naturally links to ARO. We use two efficient implementations of the decomposition algorithms for SP and ARO; we assess (1) the operational results regarding first-stage decision variables, the estimate of the expected profit, and the estimate of the CVaR of the profit, and (2) their performance taking into consideration different sample sizes and risk management parameters. The results show that similar first-stage solutions are obtained depending on the risk parameterizations used in each formulation. Computationally, we identified three cases: (1) SP with a sample of 500 elements is competitive with ARO; (2) SP performance degrades compared to the first case, and ARO fails to converge for four out of five risk parameters; (3) SP fails to converge, whereas ARO converges for three out of five risk parameters. Overall, these performance cases depend on the combined effect of deterministic and uncertain data and risk parameters. Summary of contribution: this work lies at the intersection of operations research and computer science, which are intrinsically related to the scope and mission of IJOC. From the operations research perspective, two methodologies for optimization under uncertainty are studied: risk-averse stochastic programming and adaptive robust optimization. These methodologies are illustrated using an energy scheduling problem. The study includes a comparison from the points of view of uncertainty modeling, formulations, decomposition methods, and analysis of solutions. From the computer science perspective, a careful implementation of the decomposition methods using parallelization techniques and a sample average approximation methodology was carried out. A detailed comparison of the computational performance of both methods is performed. Finally, the conclusions establish links between two alternative methodologies in operations research: stochastic programming and robust optimization.
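For readers unfamiliar with CVaR, the risk measure used above, the sketch below estimates it on synthetic loss samples via the Rockafellar–Uryasev formulation, CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }, and cross-checks it against the tail average; the loss distribution is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
alpha = 0.95
losses = rng.lognormal(mean=0.0, sigma=0.8, size=200_000)  # synthetic scenarios

# Rockafellar-Uryasev: CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }.
def ru_objective(t):
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

res = minimize_scalar(ru_objective, bounds=(0.0, losses.max()), method="bounded")
var = np.quantile(losses, alpha)
tail = losses[losses >= var].mean()          # direct tail average for comparison
print(f"VaR_0.95 ~ {var:.3f}, CVaR_0.95 ~ {res.fun:.3f} (RU) vs {tail:.3f} (tail)")
```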
The third achievement deals with a confidence-based design approach robust to turbulence closure model-form uncertainty in Reynolds-Averaged Navier-Stokes computational models [9]. The Eigenspace Perturbation Method is employed to compute turbulence closure uncertainty estimates of the performance targeted by the optimizer. The magnitude of the uncertainty estimates is exploited to establish an indicator parameter associated with the credibility of the numerical prediction. The proposed approach restricts the optimum search to design space regions for which the credibility indicator suggests trustworthy RANS model predictions. In this way, we improve the efficiency of the design process, potentially avoiding designs for which the computational model is unreliable. The reference test case consists in a two-dimensional single-element airfoil resembling a morphing wing section in a high-lift configuration. Results show that the prediction credibility constraint has a non-negligible impact on the definition of the optimal design.
Collaborations
We worked with DLR (German Aerospace Center) on robust optimization within the EU NEXTAIR project.
The collaboration with ArianeGroup has been ongoing for eight years; it was initiated by P.M. Congedo in the Bacchus team. Within Platon, the work on robust optimization is essentially associated with Mickael Rivier's CIFRE thesis (2017-2020). After his PhD defense in 2020, Mickael Rivier was hired in a permanent position by ArianeGroup and became one of ArianeGroup's contact persons for uncertainty quantification and optimization. A contract with ArianeGroup (which funded a post-doc for Platon) on optimization for non-stationary problems ended in 2022.
The collaboration with KAUST, and in particular with Ricardo Lima, has brought specific stochastic optimization problems whose structures differ considerably from our other research topics (e.g., two-stage optimization, introduction of recourse, discrete optimization). These problems also involve different risk-mitigation approaches. Working on them, we have learned alternative formulations and uncertainty treatments that we plan to apply to engineering applications. In return, we have contributed sampling and uncertainty-modeling strategies that are original for this type of problem.
External support
- MSCA Doctoral Network TRACES Project (2022-2026)
- EU NEXTAIR Project (2022-2026)
- Industrial contract with ArianeGroup
- Industrial contract with Bañulsdesign
Self assessment
Concerning strong points, we have proposed advanced, state-of-the-art methods covering different aspects of optimization under uncertainty, which are topics of great interest in academia. At the same time, we have consolidated industrial collaborations that have allowed us to develop high-impact projects with relevant societal impact.
Concerning potential weaknesses, we find it particularly challenging, given the size of the team, to keep proposing innovative methods while simultaneously contributing to projects at the industrial and European scale. New recruitments seem necessary to sustain this twofold effort.
9 Bilateral contracts and grants with industry
Participants: P.M. Congedo, O. Le Maitre, M. Rivier, M. Pocheau, L. Rakotondrainibe, M. Benmahdi.
9.1 Bilateral contracts with industry
9.1.1 ArianeGroup
The project concerned the optimization of the ignition device, taking into account the effects induced by the ignition transient. The simulation of the ignition device is carried out by ArianeGroup via a numerical chain made up of a CFD fluid code and a structural code that predicts the dynamic forces. We tested several techniques tailored to the problem of interest. The grant amounted to 150 k€, and the contract ended in 2022.
9.1.2 Bañulsdesign
Since 2019, the team has benefited from a "contrat d'accompagnement" for the CIFRE thesis of Malo Pocheau on the modelling of foilers.
9.1.3 3DS
Since 2022, the team has benefited from a "contrat d'accompagnement" for the thesis of Meryem Benmahdi.
9.1.4 CEA
Since 2022, the team has benefited from a "contrat d'accompagnement" for the thesis of Marius Duvillard, and from another for the thesis of Sanae Janati Idrissi.
10 Partnerships and cooperations
Participants: P.M. Congedo, O. Le Maître, Z. Jones, G. Gori.
10.1 International research visitors
10.1.1 Visits of international scientists
Other international visits to the team
Omar Knio
- Status: Professor
- Institution of origin: KAUST
- Country: Saudi Arabia
- Dates: March 2022
- Context of the visit: Research collaboration
- Mobility program/type of mobility: Research stay
10.2 European initiatives
10.2.1 Horizon Europe
NEXTAIR:
NEXTAIR project on cordis.europa.eu
- Title: NEXTAIR - multi-disciplinary digital - enablers for NEXT-generation AIRcraft design and operations
- Duration: From September 1, 2022 to August 31, 2025
- Partners:
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
- THE UNIVERSITY OF SHEFFIELD (USFD), United Kingdom
- IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE (Imperial), United Kingdom
- AIRBUS OPERATIONS SAS (AIRBUS OPERATIONS), France
- ETHNICON METSOVION POLYTECHNION (NATIONAL TECHNICAL UNIVERSITY OF ATHENS - NTUA), Greece
- SAFRAN SA, France
- UNIVERSITA DEGLI STUDI DI CAGLIARI (UNICA), Italy
- OFFICE NATIONAL D'ETUDES ET DE RECHERCHES AEROSPATIALES (ONERA), France
- DEUTSCHES ZENTRUM FUR LUFT - UND RAUMFAHRT EV (DLR), Germany
- FUNDACION CENTRO DE TECNOLOGIAS DE INTERACCION VISUAL Y COMUNICACIONES VICOMTECH (VICOM), Spain
- DASSAULT AVIATION, France
- ASOUTI V & SIA OE, Greece
- OPTIMAD ENGINEERING SRL (Optimad srl), Italy
- IRT ANTOINE DE SAINT EXUPERY, France
- ERDYN CONSULTANTS SARL, France
- ROLLS-ROYCE PLC, United Kingdom
- Inria contact: Pietro Congedo
- Coordinator: ONERA
- Summary:
Radical changes in aircraft configurations and operations are required to meet the target of climate-neutral aviation. To foster this transformation, innovative digital methodologies are of utmost importance to enable the optimisation of aircraft performance.
NEXTAIR will develop and demonstrate innovative design methodologies, data-fusion techniques and smart health-assessment tools enabling the digital transformation of aircraft design, manufacturing and maintenance. NEXTAIR proposes digital enablers covering the whole aircraft life-cycle, intended to ease the maturation of breakthrough technologies, their flawless entry into service, and smart health assessment. They will be demonstrated in 8 industrial test cases, representative of multi-physics industrial design and maintenance problems and of environmental challenges of interest to aircraft and engine manufacturers.
NEXTAIR will increase high-fidelity modelling and simulation capabilities to accelerate and derisk the design of new disruptive configurations and breakthrough technologies. NEXTAIR will also improve the efficiency of uncertainty quantification and robust optimisation techniques to effectively account for manufacturing uncertainty and operational variability in the industrial multi-disciplinary design of aircraft and engine components. Finally, NEXTAIR will extend the usability of machine-learning-driven methodologies to contribute to the digital twinning of aircraft and engine components for smart prototyping and maintenance.
NEXTAIR brings together 16 partners from 6 countries specialised in various disciplines: digital tools, advanced modelling and simulation, artificial intelligence, machine learning, aerospace design, and innovative manufacturing. The consortium includes 9 research organisations, 4 leading aeronautical industries providing digital-physical scaled demonstrator aircraft and engines, and 2 high-tech SMEs providing expertise in industrial scientific computing and data intelligence.
TRACES:
TRACES project on cordis.europa.eu
- Title: TRAining the next generation of iCE researcherS
- Duration: From December 1, 2022 to November 30, 2026
- Partners:
- ECOLE POLYTECHNIQUE (EP), France
- SAFRAN AEROSYSTEMS (SAFRAN AEROSYSTEMS SAS), France
- AIRBUS HELICOPTERS, France
- SAFRAN AIRCRAFT ENGINES, France
- INSTITUT POLYTECHNIQUE DE PARIS, France
- AIRBUS OPERATIONS SAS (AIRBUS OPERATIONS), France
- INSTITUT SUPERIEUR DE L'AERONAUTIQUE ET DE L'ESPACE (ISAE-Supaero), France
- AIRBUS DEFENCE AND SPACE GMBH, Germany
- TECHNISCHE UNIVERSITAET BRAUNSCHWEIG, Germany
- OFFICE NATIONAL D'ETUDES ET DE RECHERCHES AEROSPATIALES (ONERA), France
- ACCADEMIA EUROPEA DI BOLZANO (Eurac Research), Italy
- DASSAULT AVIATION, France
- POLITECNICO DI MILANO (POLIMI), Italy
- TECHNISCHE UNIVERSITAT DARMSTADT, Germany
- LEONARDO - SOCIETA PER AZIONI (LEONARDO), Italy
- GENERAL ELECTRIC DEUTSCHLAND HOLDING GMBH, Germany
- Inria contact: Pietro Congedo
- Coordinator: POLITECNICO DI MILANO
- Summary:
In 2019, the European Aviation Safety Agency (EASA) identified in-flight icing as a priority 1 issue for large aeroplanes, with an aggregated European Risk Classification Scheme score amongst the highest of all safety issues. In-flight icing can occur when an aircraft flies through clouds of supercooled droplets, namely drops of liquid water with a temperature below the freezing point, which freeze upon impact. Aircraft icing can lead to a reduction of visibility, damage due to ice shedding, blockage of probes and static vents, reduced flight performance, engine power loss, etc. In addition to safety concerns, in-service icing events can lead to major disruptions of air operations and aircraft maintenance. The more frequent occurrence of severe thunderstorms due to climate change results in more in-flight accidents, also at cruising altitudes, with more than 100 engine failures in recent years. Icing-related issues have also recently been observed in newer, more efficient aircraft engines, due to their lower operating temperature. The main goal of the TRACES EJD is to provide high-level training in the field of in-flight icing, so as to deliver a new generation of high-achieving Early Stage Researchers (ESRs) in the diverse disciplines necessary for mastering the complexity of ice accretion and its mitigation in aircraft and aero-engines. This goal will be achieved by a unique combination of hands-on research training, non-academic placements at major EU aviation industries, and courses and workshops on scientific and complementary so-called soft skills, facilitated by the academic/non-academic composition of the consortium. Innovative ice detection and ice protection systems based on disruptive technologies will be designed by the ESRs during the Project Working Groups. EASA will provide training on certification procedures and, together with major industries in the field, will assess the ESRs' projects during a team Design & Certify exercise.
MONNALISA:
MONNALISA project on cordis.europa.eu
- Title: Modelling Nonlinear Aerodynamics of Lifting Surfaces
- Duration: From January 1, 2021 to March 31, 2023
- Partners:
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
- POLITECNICO DI MILANO (POLIMI), Italy
- METALTECH SRL, Italy
- Inria contact: Pietro Congedo
- Coordinator: POLITECNICO DI MILANO
- Summary:
The performance-improvement objectives sought in the Clean Sky 2 Joint Undertaking require a departure from the conventional empennage configurations and technologies that constitute the current state of the art in aircraft design. An "Advanced Rear End" component for the forthcoming generation of ultra-efficient aircraft might consist of a very compact rear fuselage and tail surfaces with planforms significantly different from those currently used in terms of aspect ratio, taper ratio and sweep angle. In this project, we aim to develop and validate an innovative, physics-based low-order method to predict the non-linear aerodynamic characteristics of lifting surfaces with controls, whose geometries may differ significantly from the usual ones. The development and validation of the method rely on a high-resolution database scanning the extensive space of design parameters required in the call: sweep angle, aspect ratio, taper ratio, dihedral angle, shape of the leading edge, and presence of ice. A recently validated approach, based on the most advanced techniques of uncertainty quantification, guarantees the reliability of the database of aerodynamic characteristics that will drive the development of the method. The aerodynamic database will efficiently mix highly accurate experimental results with state-of-the-art, high-fidelity numerical simulations and lower-fidelity simulations. The success of the project rests on the scientific quality and reliability of the partners in the consortium, who have extensive experience in aerospace research projects, on state-of-the-art experimental facilities and open-source aerodynamic-simulation codes, and on a judicious use of subcontracting.
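To fix ideas on how such a database can mix fidelities, the toy Python sketch below implements an autoregressive (Kennedy-O'Hagan-type) multi-fidelity surrogate, where a scaled Gaussian-process model of a cheap low-fidelity code is corrected by a discrepancy term learned from a few expensive high-fidelity samples. All functions, data, and hyperparameters here are synthetic placeholders, not MONNALISA's actual models or data.

import numpy as np

def rbf(X1, X2, ls=0.2, var=1.0):
    # squared-exponential kernel on 1-D inputs
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_mean(Xtr, ytr, Xte, noise=1e-8, **kw):
    # posterior mean of a zero-mean Gaussian process
    K = rbf(Xtr, Xtr, **kw) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr, **kw) @ np.linalg.solve(K, ytr)

# synthetic "low-fidelity" (cheap, plentiful) and "high-fidelity" (scarce) data
f_lo = lambda x: 0.8 * np.sin(8.0 * x) + 0.3
f_hi = lambda x: np.sin(8.0 * x) + 0.1 * x

X_lo = np.linspace(0.0, 1.0, 25); y_lo = f_lo(X_lo)
X_hi = np.array([0.1, 0.4, 0.7, 0.95]); y_hi = f_hi(X_hi)

# step 1: Gaussian-process surrogate of the low-fidelity model
m_lo_at_hi = gp_mean(X_lo, y_lo, X_hi)
# step 2: scale factor and discrepancy learned from the few high-fidelity points
rho = (m_lo_at_hi @ y_hi) / (m_lo_at_hi @ m_lo_at_hi)
delta = y_hi - rho * m_lo_at_hi
X_te = np.linspace(0.0, 1.0, 200)
y_mf = rho * gp_mean(X_lo, y_lo, X_te) + gp_mean(X_hi, delta, X_te)

The dense low-fidelity samples capture the overall trend, while the scarce high-fidelity (or experimental) points pin down the scale factor and the discrepancy; this is the basic mechanism behind multi-fidelity aerodynamic databases.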
11 Dissemination
11.1 Promoting scientific activities
11.1.1 Scientific events: selection
Chair of conference program committees
- Olivier Le Maître served on the scientific committee of the Congrès Français de Mécanique (Nantes, 2022).
11.1.2 Journal
Member of the editorial boards
- Olivier Le Maître is a member of the editorial board of the International Journal for Uncertainty Quantification.
- Pietro Marco Congedo is Editor of the journal Mathematics and Computers in Simulation (MATCOM), published by Elsevier.
11.1.3 Invited talks
- Pietro Marco Congedo gave a seminar in the UQ-Saclay seminar series on February 3, 2022.
- Pietro Marco Congedo gave a seminar at the 3rd European Workshop on MDO on September 20, 2022.
- Olivier Le Maître gave an invited talk at the GdR MascotNum annual meeting, Clermont-Ferrand, on June 8, 2022.
- Olivier Le Maître gave an invited talk at the Workshop on Waves and Geosciences, Lyon, on March 19, 2022.
11.1.4 Research administration
Pietro Marco Congedo is the Scientific Director of the Inria International Lab CWI-Inria.
11.2 Teaching - Supervision - Juries
Teaching at University
- P.M. Congedo, 2022: ENSTA ParisTech, Palaiseau, graduate level (20 h/y), Numerical Methods in Fluid Mechanics.
- O. Le Maître, 2022: Université Paris-Saclay, doctoral school SMEMAG (22 h/y), Uncertainty Quantification Methods.
11.2.1 Supervision
- Pietro Marco Congedo is the co-advisor of the thesis of Michele Capriati, in collaboration with the von Karman Institute for Fluid Dynamics (Belgium).
- Olivier Le Maître is the advisor of the thesis of Malo Pocheau, in collaboration with Bañulsdesign.
- Olivier Le Maître is the advisor of the thesis of Marius Duvillard in collaboration with CEA Cadarache.
- Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Meryem Benmahdi, in collaboration with 3DS.
- Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Sanae Janati Idrissi, in collaboration with CEA Saclay.
- Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Zachary Jones.
- Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Christos Papagiannis, in collaboration with LEGI Lab.
- Pietro Marco Congedo was the advisor of the thesis of Paul Novello, who defended his thesis on March 9, 2022, at the Institut Polytechnique de Paris.
- Pietro Marco Congedo and Olivier Le Maître were co-advisors of the thesis of Nicolas Leoni, who defended his thesis on April 4, 2022, at the Institut Polytechnique de Paris.
11.2.2 Juries
- Pietro Marco Congedo served as a reviewer for the PhD of Joffrey Coheur (February 16, 2022), Université catholique de Louvain.
- Pietro Marco Congedo served as a committee member for the PhD of Julien Nespoulous (November 23, 2022), UGE.
- Olivier Le Maître served as committee head for the PhD of Luc Bonnet (December 13, 2021), UP-Saclay.
- Olivier Le Maître served as a committee member for the PhD of Clément Gauchy (November 9, 2022), IP-Paris.
- Olivier Le Maître served as a committee member for the PhD of Henry Memoz Kouyé (December 2, 2022), IP-Saclay.
11.3 Popularization
11.3.1 Internal or external Inria responsibilities
- Pietro Marco Congedo has been the Deputy Coordinator of the "Maths/Engineering" Program of the Labex Mathématiques Hadamard (IPP and Paris-Saclay University) since 2018.
- Pietro Marco Congedo is a member of the Conseil du Laboratoire of CMAP (Ecole Polytechnique, IPP).
- Pietro Marco Congedo is the coordinator of the "Pôle Analyse" of the CMAP Lab (Ecole Polytechnique, IPP).
- Olivier Le Maître is the corresponding member of the Inria SIF center with the French Agency for Math and Industry (AMIES), since 2019.
11.3.2 Interventions
Caféine: a discussion with Pietro Congedo (PLATON) about the European project MONNALISA, December 1, 2022.
12 Scientific production
12.1 Major publications
- 1. Modeling in-flight ice accretion under uncertain conditions. Journal of Aircraft 59(3), May 2022.
- 2. A Confidence-based Aerospace Design Approach Robust to Structural Turbulence Closure Uncertainty. Computers and Fluids 246, June 2022, 105614.
- 3. A second-order accurate numerical scheme for a time-fractional Fokker–Planck equation. IMA Journal of Numerical Analysis, July 2022.
- 4. Surrogate-Assisted Bounding-Box approach applied to constrained multi-objective optimisation under uncertainty. Reliability Engineering and System Safety 217, January 2022, 108039.
- 5. Stochastic calibration of a carbon nitridation model from plasma wind tunnel experiments using a Bayesian formulation. Carbon 200, November 2022.
12.2 Publications of the year
International journals
Doctoral dissertations and habilitation theses
Reports & preprints