Section: Research Program
Historical aspects
The roots of deterministic optimal control lie in the "classical" theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (whose famous multipliers were introduced in [45]), with later improvements due to the "Chicago school" of Bliss [32] during the first part of the 20th century, and to the notions of relaxed problem and generalized solution (Young [51]).
Trajectory optimization really started with the spectacular achievement of Pontryagin's group [50] during the fifties, who stated, for general optimal control problems, nonlocal optimality conditions generalizing those of Weierstrass. This motivated applications to many industrial problems (see the classical books by Bryson and Ho [38], Leitmann [47], Lee and Markus [46], and Ioffe and Tihomirov [43]).
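For concreteness, a standard statement of these conditions (the Pontryagin maximum principle) for a dynamics $\dot x = f(x,u)$ with running cost $\ell(x,u)$ can be sketched as follows; the notation here is generic and not taken from the cited works:

```latex
% Hamiltonian (maximization convention, as in Pontryagin's original formulation)
H(x,p,u) = p \cdot f(x,u) - \ell(x,u).
% Along an optimal pair (x^*, u^*) there exists a costate p such that
\dot x^*(t) = \partial_p H(x^*(t), p(t), u^*(t)), \qquad
\dot p(t)  = -\partial_x H(x^*(t), p(t), u^*(t)),
% and the control maximizes the Hamiltonian pointwise in time:
u^*(t) \in \arg\max_{u \in U} H(x^*(t), p(t), u).
```

The pointwise maximization condition is the "nonlocal" feature: it holds over the whole control set $U$, not just against infinitesimal variations, which is what generalizes the Weierstrass condition of the calculus of variations.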
Dynamic programming was introduced and systematically studied by R. Bellman during the fifties. The HJB equation, whose solution is the value function of the (parameterized) optimal control problem, is a variant of the classical Hamilton-Jacobi equation of mechanics for the case of a dynamics parameterized by a control variable. It may be viewed as a differential form of the dynamic programming principle. This nonlinear first-order PDE turns out to be well-posed in the framework of viscosity solutions introduced by Crandall and Lions [39]. Theoretical contributions in this direction have continued to grow; see the books by Barles [30] and Bardi and Capuzzo-Dolcetta [29].
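As an illustration, in a standard finite-horizon setting (generic notation, not from the source) with dynamics $\dot x = f(x,u)$, running cost $\ell(x,u)$, and terminal cost $\varphi$, the value function $V(t,x)$ formally satisfies:

```latex
% HJB equation: differential form of the dynamic programming principle
-\partial_t V(t,x) = \min_{u \in U}
  \Bigl\{ \ell(x,u) + \nabla_x V(t,x) \cdot f(x,u) \Bigr\},
\qquad V(T,x) = \varphi(x).
```

Since $V$ is in general not differentiable, this equation only holds in a generalized sense; viscosity solutions provide the framework in which it admits a unique solution equal to the value function.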