Syllabus of the Course
Introduction to the Calculus of Variations and to the Optimal Control Theory
Prof. Stefano Patrì
1) Overview of ordinary, second order, linear differential equations
1.1) Homogeneous and non-homogeneous equations with constant coefficients
1.2) General method for finding a particular solution
1.3) Euler’s equation with constant coefficients \(c_0,c_1,c_2\) \[c_2(\alpha x+\beta )^2y^{\prime \prime}(x)+c_1(\alpha x+\beta )y^{\prime}(x)+c_0y(x)=f(x)\]
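For orientation (a standard sketch, not part of the syllabus items themselves), the Euler equation above is reduced to a constant-coefficient equation, as covered in 1.1, by the substitution \(\alpha x+\beta =e^{t}\): since \((\alpha x+\beta )\,y^{\prime}(x)=\alpha\,\dot y(t)\) and \((\alpha x+\beta )^{2}y^{\prime \prime}(x)=\alpha^{2}\bigl(\ddot y(t)-\dot y(t)\bigr)\), the equation becomes \[c_2\,\alpha^{2}\bigl(\ddot y(t)-\dot y(t)\bigr)+c_1\,\alpha\,\dot y(t)+c_0\,y(t)=f\!\left(\frac{e^{t}-\beta}{\alpha}\right),\] which can then be solved with the methods for constant coefficients.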
2) Dynamic Optimization and Calculus of Variations
2.1) Concept of functionals defined on function spaces
2.2) Optimization of Functionals in the Calculus of Variations
2.3) Examples
a) from Classical Mechanics (Brachistochrone and Tautochrone problems)
b) from Geometry (minimal distance between two given points on a plane)
c) from Economics
2.4) Necessary condition for optimality: the Euler-Lagrange equation
2.5) Sufficient conditions for optimality
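As a brief illustration of item 2.4 in its standard form (added here for reference): for a functional \(J[y]=\int_a^b F\bigl(x,y(x),y^{\prime}(x)\bigr)\,dx\) with fixed endpoints, a necessary condition for an extremal is the Euler-Lagrange equation \[\frac{\partial F}{\partial y}-\frac{d}{dx}\,\frac{\partial F}{\partial y^{\prime}}=0.\] In example 2.3b, the arc-length integrand \(F=\sqrt{1+(y^{\prime})^{2}}\) does not depend on \(y\), so the equation reduces to \(\frac{d}{dx}\,\frac{y^{\prime}}{\sqrt{1+(y^{\prime})^{2}}}=0\), i.e. \(y^{\prime}\) constant: the extremals are straight lines, as expected for the minimal distance between two points on a plane.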
3) Deterministic Optimal Control Theory in continuous time
3.1) Overview of dynamical systems
3.2) Pontryagin’s equations for optimization
3.3) Bellman’s principle of optimality and dynamic programming
3.4) Hamilton-Jacobi-Bellman equation
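A sketch of the equations in items 3.2 and 3.4 in one standard formulation (notation assumed here, for orientation only). For a control system \(\dot x(t)=g\bigl(x(t),u(t)\bigr)\) with running cost \(f(x,u)\) to be minimized over \([0,T]\), Pontryagin’s conditions involve the Hamiltonian \(H(x,u,p)=f(x,u)+p\,g(x,u)\), with costate equation \(\dot p=-\partial H/\partial x\) and the optimal control minimizing \(H\) along the trajectory, while dynamic programming leads to the Hamilton-Jacobi-Bellman equation for the value function \(V(t,x)\): \[-\frac{\partial V}{\partial t}(t,x)=\min_{u}\left\{f(x,u)+\frac{\partial V}{\partial x}(t,x)\,g(x,u)\right\},\] with a terminal condition at \(t=T\) given by the final cost.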
References
1) Kamien M.I., Schwartz N.L., Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, Second edition, Elsevier Science, 1991
2) Liberzon D., Calculus of Variations and Optimal Control Theory: A Concise Introduction, Princeton University Press, 2012
3) Calogero A., Notes on Optimal Control Theory with Economic Models and Exercises, Lecture Notes, 2020
4) Gozzi F., Optimal Control Problems: the Dynamic Programming Approach, Lecture Notes