Expand machine learning control notebook and add final open-ended exercise notebook
AnesBenmerzoug committed May 14, 2024
1 parent 514244b commit 1205694
Showing 21 changed files with 1,122 additions and 236 deletions.
3 changes: 3 additions & 0 deletions notebooks/_static/images/40_mpc_block_diagram_2.png
3 changes: 3 additions & 0 deletions notebooks/_static/images/70_dmdc_overview.svg
3 changes: 3 additions & 0 deletions notebooks/_static/images/70_sindy_diagram.svg
3 changes: 3 additions & 0 deletions notebooks/_static/images/70_sindy_with_control_diagram.svg
43 changes: 43 additions & 0 deletions notebooks/bibliography.bib
@@ -58,6 +58,26 @@ @article{brunke_safe_2022
langid = {english}
}

@article{brunton_discovering_2016,
title = {Discovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems},
author = {Brunton, Steven L. and Proctor, Joshua L. and Kutz, J. Nathan},
date = {2016-04-12},
journaltitle = {Proceedings of the National Academy of Sciences},
shortjournal = {PNAS},
volume = {113},
number = {15},
eprint = {27035946},
eprinttype = {pmid},
pages = {3932--3937},
publisher = {National Academy of Sciences},
issn = {0027-8424, 1091-6490},
doi = {10.1073/pnas.1517384113},
url = {https://www.pnas.org/content/113/15/3932},
urldate = {2021-03-28},
abstract = {Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.},
langid = {english}
}
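
A minimal, hedged sketch of the sequentially thresholded least-squares idea summarized in the abstract above: fit the derivatives on a library of candidate terms, then repeatedly zero out and refit small coefficients. The function name, the polynomial library, and the threshold value are illustrative assumptions, not the paper's reference implementation (the PySINDy package is the place to look for that).

```python
import numpy as np

def sindy_stlsq(X, dXdt, threshold=0.1, n_iter=10):
    """Sketch of SINDy via sequentially thresholded least squares:
    fit dX/dt ~ Theta(X) @ Xi, then repeatedly prune small coefficients."""
    m, n = X.shape
    # Candidate library Theta(X): constant, linear and quadratic terms.
    cols = [np.ones((m, 1)), X]
    cols += [(X[:, i] * X[:, j])[:, None] for i in range(n) for j in range(i, n)]
    Theta = np.hstack(cols)

    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):          # refit each state equation on the kept terms
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k], *_ = np.linalg.lstsq(Theta[:, keep], dXdt[:, k], rcond=None)
    return Xi

# Toy check on a linear system dx/dt = A x (derivatives computed exactly):
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
X = np.random.default_rng(0).normal(size=(500, 2))
print(np.round(sindy_stlsq(X, X @ A.T, threshold=0.05), 3))
```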

@article{brunton_modern_2022,
title = {Modern {{Koopman Theory}} for {{Dynamical Systems}}},
author = {Brunton, Steven L. and Budišić, Marko and Kaiser, Eurika and Kutz, J. Nathan},
@@ -75,6 +95,17 @@ @article{brunton_modern_2022
abstract = {We establish the convergence of a class of numerical algorithms, known as dynamic mode decomposition (DMD), for computation of the eigenvalues and eigenfunctions of the infinite-dimensional Koopman operator. The algorithms act on data coming from observables on a state space, arranged in Hankel-type matrices. The proofs utilize the assumption that the underlying dynamical system is ergodic. This includes the classical measure-preserving systems, as well as systems whose attractors support a physical measure. Our approach relies on the observation that vector projections in DMD can be used to approximate the function projections by the virtue of Birkhoff's ergodic theorem. Using this fact, we show that applying DMD to Hankel data matrices in the limit of infinite-time observations yields the true Koopman eigenfunctions and eigenvalues. We also show that the singular value decomposition, which is the central part of most DMD algorithms, converges to the proper orthogonal decomposition of observables. We use this result to obtain a representation of the dynamics of systems with continuous spectrum based on the lifting of the coordinates to the space of observables. The numerical application of these methods is demonstrated using well-known dynamical systems and examples from computational fluid dynamics.}
}
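
The abstract above concerns DMD approximations of the Koopman operator. For orientation only, a bare-bones exact-DMD sketch under assumed conventions (snapshots stored as columns, optional rank-r truncation):

```python
import numpy as np

def exact_dmd(X, X_next, r=None):
    """Bare-bones exact DMD sketch: approximate X_next ~ A @ X from snapshot
    pairs and return eigenvalues and modes of the rank-r reduced operator."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X_next @ Vh.conj().T @ np.diag(1.0 / s)  # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X_next @ Vh.conj().T @ np.diag(1.0 / s) @ W             # exact DMD modes
    return eigvals, modes

# With inputs (DMDc), one instead fits X_next ~ A X + B U, e.g. by applying
# np.linalg.pinv(np.vstack([X, U])) from the right to X_next.
```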

@online{brunton_sparse_2016,
title = {Sparse {{Identification}} of {{Nonlinear Dynamics}} with {{Control}} ({{SINDYc}})},
author = {Brunton, Steven L. and Proctor, Joshua L. and Kutz, J. Nathan},
date = {2016-05-21},
url = {https://arxiv.org/abs/1605.06682v1},
urldate = {2024-05-14},
abstract = {Identifying governing equations from data is a critical step in the modeling and control of complex dynamical systems. Here, we investigate the data-driven identification of nonlinear dynamical systems with inputs and forcing using regression methods, including sparse regression. Specifically, we generalize the sparse identification of nonlinear dynamics (SINDY) algorithm to include external inputs and feedback control. This method is demonstrated on examples including the Lotka-Volterra predator--prey model and the Lorenz system with forcing and control. We also connect the present algorithm with the dynamic mode decomposition (DMD) and Koopman operator theory to provide a broader context.},
langid = {english},
organization = {arXiv.org}
}
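
The SINDYc generalization described above mainly changes the regression's feature matrix: the candidate library is built on the augmented variables [x, u]. A hedged sketch reusing the `sindy_stlsq` sketch above (the library choice is again an assumption):

```python
import numpy as np

def sindyc_library(X, U):
    """SINDy-with-control sketch: build the candidate library on the
    stacked states and inputs so that input terms can enter the dynamics."""
    Z = np.hstack([X, U])                       # augmented variables [x, u]
    m, n = Z.shape
    cols = [np.ones((m, 1)), Z]
    cols += [(Z[:, i] * Z[:, j])[:, None] for i in range(n) for j in range(i, n)]
    return np.hstack(cols)

# The sparse regression step is unchanged: solve dX/dt ~ sindyc_library(X, U) @ Xi
# with the same thresholded least squares used for plain SINDy.
```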

@article{cerf_combining_2023,
title = {Combining Neural Networks and Control: Potentialities, Patterns and Perspectives},
shorttitle = {Combining Neural Networks and Control},
@@ -119,6 +150,18 @@ @article{dogra_optimizing_2020
langid = {english}
}

@online{fasel_sindy_2021,
title = {{{SINDy}} with {{Control}}: {{A Tutorial}}},
shorttitle = {{{SINDy}} with {{Control}}},
author = {Fasel, Urban and Kaiser, Eurika and Kutz, J. Nathan and Brunton, Bingni W. and Brunton, Steven L.},
date = {2021-08-30},
url = {https://arxiv.org/abs/2108.13404v1},
urldate = {2024-05-14},
abstract = {Many dynamical systems of interest are nonlinear, with examples in turbulence, epidemiology, neuroscience, and finance, making them difficult to control using linear approaches. Model predictive control (MPC) is a powerful model-based optimization technique that enables the control of such nonlinear systems with constraints. However, modern systems often lack computationally tractable models, motivating the use of system identification techniques to learn accurate and efficient models for real-time control. In this tutorial article, we review emerging data-driven methods for model discovery and how they are used for nonlinear MPC. In particular, we focus on the sparse identification of nonlinear dynamics (SINDy) algorithm and show how it may be used with MPC on an infectious disease control example. We compare the performance against MPC based on a linear dynamic mode decomposition (DMD) model. Code is provided to run the tutorial examples and may be modified to extend this data-driven control framework to arbitrary nonlinear systems.},
langid = {english},
organization = {arXiv.org}
}
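
The tutorial above pairs a learned (SINDy or DMD) model with model predictive control. The following toy receding-horizon step sketches that pattern under assumed names and costs (`model`, `u_max`, the quadratic stage cost); it is not the tutorial's own code:

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(model, x0, horizon=10, dt=0.1, u_max=1.0):
    """One receding-horizon step with a learned model x_dot = model(x, u):
    optimize a piecewise-constant input sequence, return only its first element."""
    def rollout_cost(u_seq):
        x, cost = np.asarray(x0, dtype=float), 0.0
        for u in u_seq:
            x = x + dt * model(x, u)              # Euler rollout of the surrogate model
            cost += float(x @ x) + 0.01 * u ** 2  # drive the state to the origin cheaply
        return cost

    res = minimize(rollout_cost, np.zeros(horizon),
                   bounds=[(-u_max, u_max)] * horizon, method="L-BFGS-B")
    return res.x[0]                               # apply it, then re-plan at the next sample
```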

@book{goodwin_control_2000,
title = {Control {{System Design}}},
author = {Goodwin, Graham C.},
33 changes: 24 additions & 9 deletions notebooks/nb_10_introduction.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"metadata": {
"editable": true,
"init_cell": true,
@@ -23,7 +23,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"metadata": {
"init_cell": true,
"scene__Default Scene": true,
@@ -201,7 +201,7 @@
"<IPython.core.display.HTML object>"
]
},
"execution_count": 9,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -575,12 +575,16 @@
"- In **control theory**:\n",
"\n",
" - we explicitly model the system using knowledge about the equations governing its behaviour, by estimating the parameters of such equations or by fitting a model on measurements from the system.\n",
" - we synthesize a controller by **minimizing** a cost function, in the case of optimal control. \n",
" - we generally deal with dynamical systems governed by differential equations.\n",
" - we synthesize a controller by **minimizing** a cost function, in the case of optimal control.\n",
" - we may have to reconstruct the state from measurements using an observer.\n",
" \n",
"- Whereas in **reinforcement learning**:\n",
"\n",
" - we do have to model the system and instead can directly learn the agent that maximizes the expected reward while interacting with the environment.\n",
" - we train an agent by **maximing** a reward function."
" - We generally deal with systems modelled by Markov Decision Processes (MDPs).\n",
" - we train an agent by **maximing** a reward function.\n",
" - we directly use the measurements from the environment."
]
},
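
To make the minimize-versus-maximize contrast in the cell above concrete, a standard side-by-side sketch of the two problem statements (generic notation, not taken from the notebook):

```latex
% Optimal control: minimize a cost functional subject to the system dynamics
\min_{u(\cdot)} \; J = \int_0^T \ell\bigl(x(t), u(t)\bigr)\, dt
\quad \text{s.t.} \quad \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \quad x(0) = x_0

% Reinforcement learning: maximize expected discounted reward over policies in an MDP
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],
\qquad s_{t+1} \sim p(\,\cdot \mid s_t, a_t)

% Up to discretization and discounting, setting r = -\ell maps one problem onto the other.
```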
{
@@ -919,7 +923,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"```{exercise} RC-Circuit Exercise\n",
"::::{exercise} RC-Circuit Exercise\n",
":label: rc-circuit-exercise\n",
":width: 80%\n",
"\n",
@@ -936,15 +940,26 @@
"\n",
"$$R \\frac{d y(t)}{dt} + \\frac{1}{C} y(t) = u(t)$$\n",
"\n",
"with $q(0) = 0$ i.e. the capacitor is uncharged at $t=0$\n",
"with $y(0) = 0$ i.e. the capacitor is uncharged at $t=0$\n",
"\n",
"The current in the circuit is given by: $i(t) = \\frac{d y(t)}{dt}$\n",
"\n",
"### Questions:\n",
"**Questions**:\n",
"\n",
"- Suppose our only goal is to charge the capacitor as quickly as possible without worrying about anything else. What would be your choice for a cost function?\n",
"- Suppose that we additionally want to limit the current running throught the circuit. What would then be your choice for a cost function?\n",
"```"
"::::"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::{solution} rc-circuit-exercise\n",
":class: dropdown\n",
"- $J = - y(t)$\n",
"- $J = - y(t) + \\lambda i(t), \\quad \\lambda > 0$\n",
":::"
]
}
],
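
To close the loop on the RC-circuit exercise above, a small simulation sketch. R, C, the constant source u(t) = 1, the horizon, and the weight lambda = 0.5 are assumed values chosen only to compare the two candidate running costs from the solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.0                       # assumed resistance (ohm) and capacitance (farad)
u = lambda t: 1.0                     # assumed constant source voltage

def rc_rhs(t, y):
    # R dy/dt + y/C = u(t)  =>  dy/dt = (u(t) - y/C) / R
    return [(u(t) - y[0] / C) / R]

sol = solve_ivp(rc_rhs, (0.0, 5.0), [0.0], dense_output=True, max_step=0.01)
t = np.linspace(0.0, 5.0, 500)
y = sol.sol(t)[0]                     # charge on the capacitor
i = (u(t) - y / C) / R                # current i(t) = dy/dt

lam, dt = 0.5, t[1] - t[0]            # assumed weight on the current penalty
J_fast = -y                           # "charge as fast as possible" running cost
J_limited = -y + lam * i              # additionally penalize large currents
print(f"integrated cost, fast:    {J_fast.sum() * dt:.3f}")
print(f"integrated cost, limited: {J_limited.sum() * dt:.3f}")
```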
