Section: Research Program
Control Problems
McTAO's major field of expertise is control theory in the large. Let us give an overview of this field.
Modelling
Our effort is directed toward efficient methods for the control of real (physical) systems, based on a model of the system to be controlled. Choosing models that are accurate yet simple enough to allow control design is in itself a key issue. The typical continuous-time model is of the form $\dot{x}(t) = f(x(t), u(t))$ where $x$ is the state, ideally finite dimensional, and $u$ the control; the control is left free to be a function of time, or a function of the state, or obtained as the solution of another dynamical system that takes $x$ as an input. Deciding the nature and dimension of $x$, as well as the dynamics (roughly speaking the function $f$), is an important part of the modelling process. Connected to modelling is the identification of parameters, when a finite number of parameters are left free in “$f$”.
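As an illustration only (the dynamics below are arbitrary and hypothetical, not taken from the team's applications), such a model can be simulated with the control taken either as an explicit function of time or as a function of the state:

```python
# Illustrative sketch: simulating dx/dt = f(x, u) with the control u given
# either as an explicit function of time (open loop) or of the state (closed loop).
import numpy as np
from scipy.integrate import solve_ivp

def f(x, u):
    """Arbitrary example dynamics: a controlled pendulum-like system."""
    return np.array([x[1], -np.sin(x[0]) + u])

def u_open_loop(t):
    return 0.5 * np.cos(t)            # control as an explicit function of time

def u_feedback(x):
    return -2.0 * x[0] - 1.0 * x[1]   # control as a function of the (measured) state

x0 = np.array([1.0, 0.0])
sol_ol = solve_ivp(lambda t, x: f(x, u_open_loop(t)), (0.0, 10.0), x0)
sol_cl = solve_ivp(lambda t, x: f(x, u_feedback(x)), (0.0, 10.0), x0)
print("open-loop final state:", sol_ol.y[:, -1])
print("closed-loop final state:", sol_cl.y[:, -1])
```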
Controllability, path planning
Controllability is the property of a control system (in fact, of a model) that any two states in the state space can be connected by a trajectory generated by some control, here taken as an explicit function of time. In most cases, controllability can be decided by linear approximation, or non-controllability by “physical” first integrals that the control does not affect. For some critically actuated systems, however, deciding local or global controllability remains difficult, and the general problem is still open.
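For the linearized model $\dot{x} = A x + B u$, deciding controllability by linear approximation amounts to the classical Kalman rank condition; a minimal sketch, on an arbitrary example system, could look as follows:

```python
# Kalman rank condition for the linear(ized) model dx/dt = A x + B u:
# the system is controllable iff [B, AB, ..., A^{n-1}B] has full rank n.
import numpy as np

def is_controllable(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Arbitrary example: a double integrator, controllable with a single input.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))  # True
```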
Path planning is the problem of constructing the control that actually steers one state to another.
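In the linear case, path planning admits an explicit construction based on the controllability Gramian; the following sketch (same example system as above, and assuming the Gramian is invertible) steers $x_0$ to $x_T$ in time $T$:

```python
# Path planning for dx/dt = A x + B u via the controllability Gramian
# W_T = \int_0^T e^{As} B B^T e^{A^T s} ds; the control
# u(t) = B^T e^{A^T (T-t)} W_T^{-1} (x_T - e^{AT} x_0) steers x_0 to x_T.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator again
B = np.array([[0.0], [1.0]])
x0, xT, T = np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0

W, _ = quad_vec(lambda s: expm(A * s) @ B @ B.T @ expm(A.T * s), 0.0, T)
c = np.linalg.solve(W, xT - expm(A * T) @ x0)

def u(t):
    return B.T @ expm(A.T * (T - t)) @ c

sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (0.0, T), x0, rtol=1e-9)
print(sol.y[:, -1])   # close to xT = [1, 0]
```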
Optimal control
In optimal control, one wants to find, among the controls that satisfy some constraints at initial and final time (for instance given initial and final states, as in path planning), the ones that minimize some criterion.
This is important in many control engineering problems, because minimizing a cost (such as time or energy) is often a natural requirement. Mathematically speaking, optimal control is the modern branch of the calculus of variations, rather well established and mature [70], [41], [29], but with many hard open questions. In the end, in order to actually compute these controls, ad hoc numerical schemes have to be derived for effective computation of the optimal solutions.
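As a toy illustration of such a numerical scheme (a direct transcription, given here only as an example and not as one of the team's methods), consider steering a double integrator from rest at $0$ to rest at $1$ in unit time while minimizing $\int_0^1 u(t)^2\,\mathrm{d}t$:

```python
# Direct transcription of a toy optimal control problem:
# minimize \int_0^1 u^2 dt for the double integrator q'' = u,
# from (q, q')(0) = (0, 0) to (q, q')(1) = (1, 0).
# The discretized problem is solved as a finite-dimensional NLP.
import numpy as np
from scipy.optimize import minimize

N = 50
dt = 1.0 / N

def final_state(u):
    q, v = 0.0, 0.0
    for uk in u:                   # explicit Euler discretization of the dynamics
        q, v = q + dt * v, v + dt * uk
    return np.array([q - 1.0, v])  # must vanish at the final time

res = minimize(lambda u: dt * np.sum(u**2), x0=np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": final_state})
print(res.x[:5])   # close to the exact optimal control u(t) = 6 - 12 t near t = 0
```

The exact optimal control here is $u(t) = 6 - 12t$, which the discretized solution approaches as $N$ grows.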
See more about our research program in optimal control in section 3.2.
Feedback control
In the above two paragraphs, the control is an explicit function of time. To address stability issues in particular (for example, sensitivity to errors in the model or in the initial conditions), the control has to be taken as a function of the (measured) state, or of part of it. This is known as closed-loop control; it must be combined with optimal control in many real problems.
On the problem of stabilization, there is a longstanding research record from members of the team, in particular on the construction of “Control Lyapunov Functions”, see [59], [72].
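As a toy illustration of a Control Lyapunov Function (a textbook example, not one of the constructions in the cited works), consider the scalar system $\dot x = x^3 + u$ with candidate $V(x) = x^2/2$; Sontag's universal formula then yields an explicit stabilizing feedback:

```python
# A Control Lyapunov Function on a toy system: dx/dt = x^3 + u with V(x) = x^2 / 2.
# Writing dV/dt = a(x) + b(x) u with a = x^4, b = x, Sontag's universal formula
#   u = -(a + sqrt(a^2 + b^4)) / b   (u = 0 when b = 0)
# gives dV/dt = -sqrt(a^2 + b^4) < 0 away from the origin, hence stabilizes it.
import numpy as np
from scipy.integrate import solve_ivp

def u_sontag(x):
    a, b = x**4, x
    return 0.0 if b == 0.0 else -(a + np.sqrt(a**2 + b**4)) / b

sol = solve_ivp(lambda t, x: x**3 + u_sontag(x[0]), (0.0, 5.0), [1.5])
print(sol.y[0, -1])   # close to 0: the closed loop is asymptotically stable
```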
Classification of control systems
One may perform various classes of transformations acting on systems, or rather on models. The simpler ones come from point-to-point transformations (changes of variables) on the state and control; more intricate ones consist in embedding an extraneous dynamical system into the model: these are dynamic feedback transformations, and they change the dimension of the state.
In most problems, choosing the proper coordinates, or the right quantities that describe a phenomenon, sheds light on a path to the solution; these proper choices may sometimes be found from an understanding of the modelled phenomena, or they can come from the study of the geometry of the equations and of the transformations acting on them. This justifies investigating these transformations on models in their own right.
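As an elementary illustration (a textbook example, not specific to the team's work), a model of the form $\dot{x}_1 = x_2$, $\dot{x}_2 = f(x) + g(x)\,u$ with $g$ nonvanishing is mapped, by the static feedback transformation $v = f(x) + g(x)\,u$ (a change of variables on the control), to the linear model
\[
\dot{x}_1 = x_2, \qquad \dot{x}_2 = v,
\]
so that the two models fall in the same class under such transformations; dynamic feedback transformations allow even more models to be identified with one another, at the price of enlarging the state.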
These topics are central in control theory; they are present in the team's work, see for instance the classification aspects in [48] or [18], or, although this research has not been very active recently, the study [69] of dynamic feedback and the so-called “flatness” property [62].