Optimal Control Theory

Description

The optimal control problem; basic mathematical notions from the calculus of variations; minimization of functionals; the Euler-Lagrange equation; minimization of functionals under constraints; optimal control of continuous- and discrete-time systems with or without state/input constraints; Pontryagin's minimum principle; the linear quadratic (LQ) regulation and tracking problems; the Riccati equation; bang-bang control; Hamilton-Jacobi-Bellman theory; dynamic programming; the linear quadratic Gaussian (LQG) problem; applications in Matlab.
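As a small illustration of the LQ regulation and Riccati-equation topics above, the sketch below solves a continuous-time LQR problem for a double-integrator plant. It uses Python with SciPy rather than Matlab, and the plant and cost weights are illustrative choices, not taken from the course material.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: x1 = position, x2 = velocity, u = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost weights J = ∫ (x'Qx + u'Ru) dt (illustrative choices).
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation
#   A'P + PA - PB R^{-1} B'P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain K = R^{-1} B'P, giving u = -Kx.
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK is Hurwitz: all eigenvalues lie
# in the open left half-plane, so the regulator is stabilizing.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K)                      # for this plant, K = [1, sqrt(3)]
print(closed_loop_eigs.real)  # both real parts negative
```

For this particular plant and weights the Riccati equation can also be solved by hand, giving K = [1, √3], which is a useful check on the numerical result.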

Course Coordinators

Suggested References

  1. Burl J.B. (1998). Linear Optimal Control: H2 and H∞ Methods. Addison-Wesley.
  2. Lewis F.L. (1995). Optimal Control, 2nd edition. John Wiley and Sons, New York.
  3. Kirk D.E. (1970). Optimal Control Theory: An Introduction. Prentice Hall.
  4. Naidu D.S. (2003). Optimal Control Systems. CRC Press.
  5. Sinha A. (2007). Linear Systems: Optimal and Robust Control. CRC Press.
  6. Tikhomirov V.M. (1999). Stories about Maxima and Minima (Greek translation). Katoptro Publications.
  7. Karampetakis N. (2009). Optimal Control of Systems (in Greek). Ziti Publications.
  8. Kyventidis Th. (1994). Calculus of Variations (in Greek). Ziti Publications.
Semester: 
Credit Units (ECTS): 10.0
ID: 0844