Research Program
Our group, originally involved only in electronic structure computations, continues to focus on many numerical issues in quantum chemistry, but has expanded its expertise to cover several related problems at larger scales, such as molecular dynamics problems and multiscale problems. The mathematical derivation of continuum energies from quantum chemistry models is one instance of this long-term theoretical endeavour.
Electronic structure of large systems
Quantum Chemistry aims at understanding the properties of matter through
the modelling of its behavior at a subatomic scale, where matter is
described as an assembly of nuclei and electrons.
At this scale, the equation that rules the interactions between these
constitutive elements is the Schrödinger equation. It can be
considered (except in a few special cases, notably those involving
relativistic phenomena or nuclear reactions)
as a universal model for at least three reasons. First, it contains all
the physical information about the system under consideration, so that
any of the properties of this system can in theory be deduced from the
Schrödinger equation associated to it.
equation associated to it. Second, the Schrödinger equation does not
involve any
empirical parameters, except some fundamental constants of physics (the
Planck constant, the mass and charge of the electron, ...); it
can thus be written for any kind of molecular system, provided its
chemical composition, in terms of the nature of its nuclei and the
number of its electrons, is known. Third, this model enjoys remarkable
predictive capabilities, as confirmed by comparisons with a large amount
of experimental data of various types.
capabilities, as confirmed by comparisons with a large amount of
experimental data of various types.
On the other hand, using this high quality model requires working with
space and time scales which are both very tiny: the typical size of the
electronic cloud of an isolated atom is the Angström (10^-10 meters),
and the typical vibration period of a molecular bond is the femtosecond
(10^-15 seconds). The Schrödinger equation comes in two forms, a
time-dependent one, i hbar dpsi/dt = H psi, and a time-independent one,
H psi = E psi. Two ingredients are common to both forms:
- both equations involve the quantum Hamiltonian H of the molecular system under consideration; from a mathematical viewpoint, it is a self-adjoint operator on some Hilbert space; both the Hilbert space and the Hamiltonian operator depend on the nature of the system;
- also present in these equations is the wavefunction psi of the system; it completely describes its state; its L^2 norm is set to one.
The time-dependent equation is a first-order linear evolution
equation, whereas the time-independent equation is a linear eigenvalue
equation.
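To make the time-independent problem concrete, one can discretize a toy one-dimensional Hamiltonian H = -1/2 d^2/dx^2 + V(x) by finite differences, which turns H psi = E psi into an ordinary matrix eigenvalue problem. The sketch below uses a harmonic potential in atomic units; the model and all parameters are illustrative choices, not taken from the text.

```python
import numpy as np

# Finite-difference discretization of the 1D time-independent Schrödinger
# equation H psi = E psi, with H = -1/2 d^2/dx^2 + V(x) in atomic units
# (hbar = m = 1).  A toy illustration, not a production electronic
# structure code.
n, L = 500, 8.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# kinetic term: second-order central differences with Dirichlet boundaries
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * h**2)
V = np.diag(0.5 * x**2)   # harmonic potential; exact spectrum E_n = n + 1/2

E, psi = np.linalg.eigh(T + V)   # a plain linear eigenvalue problem
print(E[:3])  # close to [0.5, 1.5, 2.5]
```

Even this toy example hints at the size issue: with N particles in three dimensions, the same grid-based approach would require n^(3N) points.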
For the reader more familiar with numerical analysis
than with quantum mechanics, the linear nature of the problems stated
above may look auspicious. What makes the
numerical simulation of these equations
extremely difficult is essentially the huge size of the Hilbert
space: indeed, this space is roughly some symmetry-constrained subspace
of L^2(R^{3N}), where N is the number of particles, so that its
dimension grows exponentially with the size of the system.
As the size of the systems one wants to study increases, more efficient
numerical techniques need to be resorted to. In computational chemistry,
the typical scaling law for the complexity of computations with respect
to the size N of the system under study is N^3. Reducing this
complexity raises several questions:
- how can one improve the nonlinear iterations that are the basis of any ab initio model for computational chemistry?
- how can one solve more efficiently the inner loop, which most often consists in the solution of a linear problem (with frozen nonlinearity)?
- how can one design a sufficiently small variational space, whose dimension is kept limited while the size of the system increases?
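The first two questions can be visualized on a toy self-consistent field (SCF) iteration, sketched below. The outer loop updates a density-dependent term, and each inner step solves a linear eigenvalue problem with the nonlinearity frozen; this mimics the structure of Hartree-Fock or Kohn-Sham solvers, but the model itself (a random symmetric matrix plus a density-dependent diagonal, with simple damping) is purely illustrative.

```python
import numpy as np

# Toy self-consistent field (SCF) iteration.  The outer loop updates a
# density-dependent term; the inner step solves a *linear* eigenvalue
# problem with the nonlinearity frozen.  The model is purely illustrative.
rng = np.random.default_rng(0)
n = 50
noise = rng.standard_normal((n, n))
A = np.diag(np.arange(n, dtype=float)) + 0.05 * (noise + noise.T)

rho = np.full(n, 1.0 / n)              # initial density, sums to one
for it in range(100):
    H = A + 0.2 * np.diag(rho)         # Hamiltonian with frozen nonlinearity
    E, U = np.linalg.eigh(H)           # inner step: linear eigenproblem
    psi = U[:, 0]                      # ground-state orbital
    rho_new = psi**2                   # updated density
    rho_next = 0.5 * rho + 0.5 * rho_new   # damping (mixing) for stability
    if np.linalg.norm(rho_next - rho) < 1e-10:
        break
    rho = rho_next
print(it)  # number of outer iterations until self-consistency
```

The damping step is the simplest example of the convergence accelerators that real SCF codes rely on; without it, the plain fixed-point map may oscillate or diverge.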
An alternative strategy to reduce the complexity of ab initio computations is to try to couple different models at different scales. Such a mixed strategy can be either a sequential one or a parallel one, in the sense that
- in the former, the results of the model at the lower scale are simply used to evaluate some parameters that are inserted in the model at the larger scale: one example is parameterized classical molecular dynamics, which makes use of force fields fitted to calculations at the quantum level;
- in the latter, the model at the lower scale is concurrently coupled to the model at the larger scale: an instance of such a strategy is the so-called QM/MM coupling (standing for Quantum Mechanics/Molecular Mechanics coupling), where some part of the system (typically the reactive site of a protein) is modeled with a quantum model, which therefore accounts for changes in the electronic structure and for the modification of chemical bonds, while the rest of the system (typically the inert part of the protein) is coarse-grained and more crudely modeled by classical mechanics.
The coupling of different scales can even go up to the macroscopic scale, with methods that couple a microscopic representation of matter, or at least a mesoscopic one, with the equations of continuum mechanics at the macroscopic level.
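The sequential strategy can be sketched concretely: evaluate an expensive fine-scale energy once, fit the parameters of a cheap classical force field to it, and use only the fitted model afterwards. In the hypothetical sketch below, a Morse potential stands in for the quantum model, and a harmonic bond potential V(r) = 0.5*k*(r - r_eq)^2 plays the role of the force field; all numerical values are illustrative.

```python
import numpy as np

# Sequential multiscale coupling, in miniature: sample a stand-in
# "quantum" energy curve (a Morse potential here), then fit a classical
# harmonic force field to it by least squares.  The fitted parameters
# (k, r_eq) would then feed a classical molecular dynamics code.
D, a, r0 = 4.7, 2.0, 0.74   # Morse parameters, loosely H2-like (illustrative)

def quantum_energy(r):
    """Stand-in for an ab initio energy evaluation."""
    return D * (1.0 - np.exp(-a * (r - r0)))**2

# sample the fine-scale model near the equilibrium bond length
r = np.linspace(r0 - 0.05, r0 + 0.05, 21)
E = quantum_energy(r)

# least-squares quadratic fit: E ~ 0.5*k*(r - r_eq)^2 + E0
c2, c1, c0 = np.polyfit(r, E, 2)
k_fit = 2.0 * c2
r_eq = -c1 / (2.0 * c2)
print(k_fit, r_eq)  # near the exact harmonic constant 2*D*a^2 and r0
```

Once k_fit and r_eq are stored, every subsequent force evaluation is a one-line formula instead of a quantum calculation; this is the whole point of the sequential approach.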
Computational Statistical Mechanics
The orders of magnitude used in the microscopic representation of
matter are far from the orders of magnitude of the macroscopic
quantities we are used to: the number of particles under
consideration in a macroscopic sample of material is of the order of
the Avogadro number, N_A ~ 6 x 10^23.
To give some insight into such a large number of particles contained in
a macroscopic sample, it is helpful to compute the number of moles of
water on earth. Recall that one mole of water corresponds to 18 mL, so
that a standard glass of water contains roughly 10 moles, and a typical
bathtub contains about 10^4 moles. By comparison, all the water on earth
(roughly 1.4 x 10^9 cubic kilometers) amounts to only about 10^23 moles.
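These orders of magnitude can be checked with a few lines of arithmetic; the volumes used below are rough, commonly quoted figures, not data from the text.

```python
# Back-of-the-envelope check of the orders of magnitude discussed above.
# All volumes are rough, commonly quoted figures (assumptions).
mole_volume_L = 0.018              # one mole of water occupies 18 mL
glass_L, bathtub_L = 0.18, 180.0   # typical glass and bathtub volumes
earth_water_km3 = 1.4e9            # approximate total water on earth

moles_per_glass = glass_L / mole_volume_L        # ~10
moles_per_bathtub = bathtub_L / mole_volume_L    # ~1e4
earth_water_L = earth_water_km3 * 1e12           # 1 km^3 = 1e12 L
moles_on_earth = earth_water_L / mole_volume_L   # ~8e22
print(moles_per_glass, moles_per_bathtub, moles_on_earth)
```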
For practical numerical computations of matter at the microscopic level,
following the dynamics of every atom in a macroscopic sample would
require simulating on the order of 10^23 particles, over a number of
time steps on the order of 10^15 (the typical time step, set by atomic
vibrations, being the femtosecond, while macroscopic times are of the
order of the second).
Describing the macroscopic behavior of matter knowing its microscopic
description
therefore seems out of reach. Statistical physics allows us to bridge the gap
between microscopic and macroscopic descriptions of matter, at least on a
conceptual
level. The question is whether the estimated quantities for a system of
N particles, with N large but far smaller than the Avogadro number,
provide a good approximation of the corresponding macroscopic
quantities.
Despite its intrinsic limitations in spatial and time scales, molecular simulation has been used and developed over the past 50 years, and its number of users keeps increasing. As we understand it, it has two major aims nowadays.
First, it can be used as a numerical microscope, which allows us to perform “computer” experiments. This was the initial motivation for simulations at the microscopic level: physical theories were tested on computers. This use of molecular simulation is particularly clear in its historic development, which was triggered and sustained by the physics of simple liquids. Indeed, there was no good analytical theory for these systems, and the observation of computer trajectories was very helpful to guide the physicists' intuition about what was happening in the system, for instance the mechanisms leading to molecular diffusion. In particular, the pioneering works on Monte-Carlo methods by Metropolis et al., and the first molecular dynamics simulation of Alder and Wainwright were performed because of such motivations. Today, understanding the behavior of matter at the microscopic level can still be difficult from an experimental viewpoint (because of the high resolution required, both in time and in space), or because we simply do not know what to look for! Numerical simulations are then a valuable tool to test some ideas or obtain some data to process and analyze in order to help assess experimental setups. This is particularly true for current nanoscale systems.
Another major aim of molecular simulation, maybe even more important than the previous one, is to compute macroscopic quantities or thermodynamic properties, typically through averages of some functionals of the system. In this case, molecular simulation is a way to obtain quantitative information on a system, instead of resorting to approximate theories, constructed for simplified models, and giving only qualitative answers. Sometimes, these properties are accessible through experiments, but in some cases only numerical computations are possible since experiments may be unfeasible or too costly (for instance, when high pressure or high temperature regimes are considered, or when studying materials not yet synthesized). More generally, molecular simulation is a tool to explore the links between the microscopic and macroscopic properties of a material, allowing one to address modelling questions such as “Which microscopic ingredients are necessary (and which are not) to observe a given macroscopic behavior?”
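As a minimal illustration of this second use, the Metropolis algorithm mentioned above can estimate a thermodynamic average, here the mean of x^2 for a particle in a hypothetical one-dimensional double-well potential, and be checked against direct numerical integration; the potential and all parameters are illustrative.

```python
import numpy as np

# Metropolis Monte Carlo sketch: estimate the thermodynamic average <x^2>
# under the Boltzmann distribution exp(-beta*V(x)) for a single particle
# in a 1D double-well potential.  Potential and parameters are illustrative.
def V(x):
    return (x**2 - 1.0)**2

beta, n_steps, step = 1.0, 200_000, 1.0
rng = np.random.default_rng(42)
x, samples = 0.0, []
for _ in range(n_steps):
    x_new = x + step * rng.uniform(-1.0, 1.0)
    # Metropolis rule: always accept downhill moves, accept uphill moves
    # with probability exp(-beta * (V_new - V_old))
    if rng.random() < np.exp(-beta * (V(x_new) - V(x))):
        x = x_new
    samples.append(x * x)
mc_avg = np.mean(samples[n_steps // 10:])   # discard 10% burn-in

# reference value for the same average by direct numerical integration
xs = np.linspace(-4.0, 4.0, 4001)
w = np.exp(-beta * V(xs))
ref = np.sum(xs**2 * w) / np.sum(w)
print(mc_avg, ref)  # the two estimates should nearly coincide
```

The quadrature check is only possible because the system is one-dimensional; in high dimension, sampling methods of this Metropolis type are essentially the only option, which is the point made above.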
Homogenization and related problems
Over the years, the project-team has developed an increasing expertise on how to couple models written at the atomistic scale with more macroscopic models, and, more generally, an expertise in multiscale modelling for materials science.
The following observation motivates the idea of coupling atomistic and
continuum representations of materials. In many situations of interest
(crack propagation, presence of defects in the atomistic lattice, ...),
using a model based on continuum mechanics is difficult. Indeed, such a
model is based on a macroscopic constitutive law, the derivation of
which requires a deep qualitative and quantitative understanding of the
physical and mechanical properties of the solid under consideration.
For many solids, reaching such an understanding is a challenge, as the
loads they are subjected to become larger and more diverse, and as
experimental observations that would help design such models are not
always possible (think of materials used in the nuclear industry).
Using an atomistic model in the whole domain is not possible either, due
to its prohibitive computational cost. Recall indeed that a macroscopic
sample of matter contains a number of atoms on the order of 10^23. The
natural idea is therefore to use the atomistic model only where it is
needed (say, near a crack tip or a defect), and the continuum model
everywhere else.
From a mathematical viewpoint, the question is to couple a discrete model with a model described by PDEs. This raises many questions, both from the theoretical and numerical viewpoints:
- first, one needs to derive, from an atomistic model, continuum mechanics models, under some regularity assumptions that encode the fact that the situation is smooth enough for such a macroscopic model to provide a good description of the material;
- second, one needs to couple these two models, e.g. in a domain decomposition spirit, with the specificity that the models in the two domains are written in different languages, that there is no natural way to write boundary conditions coupling these two models, and that one would like the decomposition to be self-adaptive.
More generally, the presence of numerous length scales in materials science problems represents a challenge for numerical simulation, especially when some randomness is assumed in the materials. It can take various forms, and includes defects in crystals, thermal fluctuations, and impurities or heterogeneities in continuous media. Standard methods available in the literature to handle such problems often lead to very costly computations. Our goal is to develop numerical methods that are more affordable. Because we cannot embrace all difficulties at once, we focus on a simple case, where the fine-scale and the coarse-scale models can be written similarly, in the form of a simple elliptic partial differential equation in divergence form. The fine-scale model includes heterogeneities at a small scale, a situation which is formalized by the fact that the coefficients in the fine-scale model vary on a small length scale. After homogenization, this model yields an effective, macroscopic model, which includes no small scale. In many cases, a sound theoretical groundwork exists for such homogenization results. The difficulty stems from the fact that the models generally lead to prohibitively costly computations. For such a case, simple from the theoretical viewpoint, our aim is to focus on different practical computational approaches to speed up the computations. One possibility, among others, is to look for specific random materials, relevant from the practical viewpoint, for which a dedicated approach can be proposed that is less expensive than the general approach.
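In one dimension this program can be carried out end to end: for the equation -(a(x/eps) u')' = f on (0,1), the homogenized coefficient is simply the harmonic mean of a. The sketch below uses the illustrative coefficient a(y) = 2 + cos(2*pi*y), whose harmonic mean is sqrt(3), and compares the oscillatory solution with the homogenized one; discretization choices are ours, not the text's.

```python
import numpy as np

# 1D elliptic homogenization sketch: -(a(x/eps) u')' = 1 on (0,1) with
# u(0) = u(1) = 0.  In 1D the homogenized coefficient is the harmonic
# mean of a; for a(y) = 2 + cos(2*pi*y) this harmonic mean is sqrt(3).
def solve(a_half, f, h):
    """Finite-volume solve with coefficient values a_half at cell interfaces."""
    n = len(f)
    main = (a_half[:-1] + a_half[1:]) / h**2
    off = -a_half[1:-1] / h**2
    A = np.diag(main) + np.diag(off, -1) + np.diag(off, 1)
    return np.linalg.solve(A, f)

n = 2000
h = 1.0 / (n + 1)
x_half = np.linspace(h / 2, 1 - h / 2, n + 1)  # cell interfaces
f = np.ones(n)

eps = 0.02
u_eps = solve(2.0 + np.cos(2 * np.pi * x_half / eps), f, h)  # oscillatory
u_hom = solve(np.full(n + 1, np.sqrt(3)), f, h)              # homogenized
print(np.max(np.abs(u_eps - u_hom)))  # small difference, shrinking with eps
```

The homogenized solve costs the same as any constant-coefficient problem, while resolving the eps-scale oscillations directly forces a mesh far finer than eps; this cost gap is exactly what the approaches described above try to exploit.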