Section: Research Program
Interacting Monte Carlo methods and particle approximation of Feynman–Kac distributions
Monte Carlo methods are numerical methods that are widely used
in situations where
(i) a stochastic (usually Markovian) model is given for some underlying
process, and (ii) some quantity of interest should be evaluated, that
can be expressed in terms of the expected value of a functional of the
process trajectory, which includes as an important special case the
probability that a given event has occurred.
Numerous examples can be found, e.g. in financial engineering (pricing of options and derivative
securities) [36],
in performance evaluation of communication networks (probability of buffer
overflow), in statistics of hidden Markov models (state estimation,
evaluation of contrast and score functions), etc.
Very often in practice, no analytical expression is available for
the quantity of interest, but it is possible to simulate trajectories
of the underlying process. The idea behind Monte Carlo methods is
to generate independent trajectories of this process
or of an alternate instrumental process,
and to build an approximation (estimator) of the quantity of interest
in terms of the weighted empirical probability distribution
associated with the resulting independent sample.
By the law of large numbers, the above estimator converges
to the quantity of interest as the size of the sample,
i.e. the number of simulated trajectories, goes to infinity.
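As an illustration, the plain Monte Carlo scheme described above can be sketched as follows (a minimal sketch; the function names and the toy random-walk model are our own illustrative choices, not taken from the text):

```python
import random

def mc_estimate(simulate, functional, n):
    """Plain Monte Carlo: average a functional over n independent trajectories."""
    total = 0.0
    for _ in range(n):
        trajectory = simulate()          # one independent trajectory of the process
        total += functional(trajectory)  # evaluate the functional of interest
    return total / n  # converges by the law of large numbers as n grows

# Toy example: probability that a 10-step Gaussian random walk ever exceeds 3
random.seed(0)

def simulate():
    x, path = 0.0, []
    for _ in range(10):
        x += random.gauss(0.0, 1.0)
        path.append(x)
    return path

# Indicator functional: the quantity of interest is an event probability
estimate = mc_estimate(simulate, lambda path: float(max(path) > 3.0), 10_000)
```

The same skeleton covers the instrumental-process variant mentioned above: one simulates the instrumental process instead and multiplies the functional by an importance weight.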
A recent and major breakthrough has been the introduction of interacting Monte Carlo methods, also known as sequential Monte Carlo (SMC) methods, in which a whole (possibly weighted) sample, called a system of particles, is propagated in time, where the particles
- explore the state space under the effect of a mutation mechanism which mimics the evolution of the underlying process,
- and are replicated or terminated, under the effect of a selection mechanism which automatically concentrates the particles, i.e. the available computing power, into regions of interest of the state space.
In full generality, the underlying process is a discrete-time Markov chain, whose state space can be
finite, continuous, hybrid (continuous / discrete), graphical, constrained, time-varying, pathwise, etc.,
the only condition being that it can easily be simulated.
In the special case of particle filtering, originally developed within the tracking community, the algorithms yield a numerical approximation of the optimal Bayesian filter, i.e. of the conditional probability distribution of the hidden state given the past observations, as a (possibly weighted) empirical probability distribution of the system of particles.

In its simplest version, introduced in several different scientific communities under the name of bootstrap filter [38], Monte Carlo filter [43] or condensation (conditional density propagation) algorithm [40], and which historically was the first algorithm to include a redistribution step, the selection mechanism is governed by the likelihood function: at each time step, a particle is more likely to survive and to replicate at the next generation if it is consistent with the current observation. The algorithms also provide as a by-product a numerical approximation of the likelihood function, and of many other contrast functions for parameter estimation in hidden Markov models, such as the prediction error or the conditional least-squares criterion.
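The mutation / selection loop of the bootstrap filter described above can be sketched as follows (a minimal illustration on a toy linear-Gaussian model; all function names and the specific model are our assumptions, not taken from the text):

```python
import math
import random

def bootstrap_filter(x0_sampler, transition, likelihood, observations, n):
    """Minimal bootstrap particle filter.

    mutation : each particle moves according to the Markov transition kernel;
    selection: particles are resampled in proportion to their likelihood
               under the current observation (multinomial resampling).
    """
    particles = [x0_sampler() for _ in range(n)]
    for y in observations:
        # mutation: propagate each particle through the state equation
        particles = [transition(x) for x in particles]
        # weight each particle by the likelihood of the current observation
        weights = [likelihood(y, x) for x in particles]
        # selection: high-weight particles replicate, low-weight ones die out
        particles = random.choices(particles, weights=weights, k=n)
    return particles  # empirical approximation of the filtering distribution

# Toy model: X_k = 0.9 X_{k-1} + N(0, 0.5^2), Y_k = X_k + N(0, 1)
random.seed(1)
obs = [1.0, 1.2, 0.8, 1.1]
post = bootstrap_filter(
    lambda: random.gauss(0.0, 1.0),              # initial distribution
    lambda x: 0.9 * x + random.gauss(0.0, 0.5),  # transition kernel
    lambda y, x: math.exp(-0.5 * (y - x) ** 2),  # Gaussian likelihood
    obs,
    500,
)
mean = sum(post) / len(post)  # point estimate of the hidden state
```

Averaging the unnormalized weights at each step would also yield the likelihood approximation mentioned above.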
Particle methods are currently being used in many scientific and engineering areas:
positioning, navigation, and tracking [39], [33], visual tracking [40], mobile robotics [34], [55], ubiquitous computing and ambient intelligence, sensor networks, risk evaluation and simulation of rare events [37], genetics, molecular simulation [35], etc.
Other examples of the many applications of particle filtering can be found in the contributed volume [22] and in the special issue of IEEE Transactions on Signal Processing devoted to Monte Carlo Methods for Statistical Signal Processing in February 2002, which includes the tutorial paper [23], as well as in the textbook [52] devoted to applications in target tracking. Applications of sequential Monte Carlo methods to other areas, beyond signal and image processing, e.g. to genetics, can be found in [51]. A recent overview can also be found in [25].
Particle methods are very easy to implement, since in principle it is sufficient to simulate independent trajectories of the underlying process. The field is inherently multidisciplinary, not only because of the already mentioned diversity of the scientific and engineering areas in which particle methods are used, but also because of the diversity of the scientific communities which have contributed to establish its foundations:
target tracking, interacting particle systems, empirical processes, genetic algorithms (GA), hidden Markov models and nonlinear filtering, Bayesian statistics, Markov chain Monte Carlo (MCMC) methods.
These algorithms can be interpreted as numerical approximation schemes
for Feynman–Kac distributions, a pathwise generalization of Gibbs–Boltzmann
distributions,
in terms of the weighted empirical probability distribution
associated with a system of particles.
This abstract point of view [31], [29]
has proved to be extremely fruitful in providing a very general
framework for the design and analysis of numerical approximation schemes,
based on systems of branching and / or interacting particles,
for nonlinear dynamical systems with values in the space of probability
distributions, associated with Feynman–Kac distributions.
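For concreteness, the Feynman–Kac distributions mentioned above can be written in the notation standard in this literature (a sketch; the symbols below, a Markov chain X_k and potential functions G_k, follow the usual convention rather than anything defined in the text):

```latex
% Unnormalized and normalized Feynman--Kac distributions,
% for a bounded test function f on path space:
\gamma_n(f) = \mathbb{E}\!\left[ f(X_0,\dots,X_n)\,\prod_{k=0}^{n-1} G_k(X_k) \right],
\qquad
\eta_n(f) = \frac{\gamma_n(f)}{\gamma_n(1)}.
% In the filtering case, G_k(x) is the likelihood of the k-th observation
% given the state x, and \eta_n is the conditional distribution of the
% hidden state trajectory given the past observations.
```

With constant potentials and a single time step this reduces to a Gibbs–Boltzmann distribution, which is the sense in which Feynman–Kac distributions are a pathwise generalization.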
Many asymptotic results have been proved as the number of particles goes to infinity:
convergence in L^p norm, convergence as empirical processes indexed by classes of functions, uniform convergence in time, see also [48], [49], central limit theorem, see also [45], propagation of chaos, large deviations principle, etc.
The objective here is to systematically study the impact of the many algorithmic variants on the convergence results.