As time passes, uncertainty (usually) increases in the absence of new evidence.
$$
\begin{aligned}
P(\mathbf{X}_\infty = \text{sun}) &= P(\mathbf{X}_{\infty+1} = \text{sun}) \\
&= P(\mathbf{X}_{\infty+1}=\text{sun} | \mathbf{X}_{\infty}=\text{sun}) P(\mathbf{X}_{\infty}=\text{sun})\\
&\quad + P(\mathbf{X}_{\infty+1}=\text{sun} | \mathbf{X}_{\infty}=\text{rain}) P(\mathbf{X}_{\infty}=\text{rain})\\
&= 0.9\, P(\mathbf{X}_{\infty}=\text{sun}) + 0.3\, P(\mathbf{X}_{\infty}=\text{rain})
\end{aligned}
$$
.center[With new evidence, uncertainty decreases. Beliefs get reweighted. But how?]
Formally, the Bayes filter is defined as
$$
\begin{aligned}
{\bf P}(\mathbf{X}_{t+1}| \mathbf{e}_{1:t+1}) &= {\bf P}(\mathbf{X}_{t+1}| \mathbf{e}_{1:t}, \mathbf{e}_{t+1}) \\
&\propto {\bf P}(\mathbf{e}_{t+1}| \mathbf{X}_{t+1}, \mathbf{e}_{1:t}) {\bf P}(\mathbf{X}_{t+1}| \mathbf{e}_{1:t}) \\
&\propto {\bf P}(\mathbf{e}_{t+1}| \mathbf{X}_{t+1}) {\bf P}(\mathbf{X}_{t+1}| \mathbf{e}_{1:t}) \\
&\propto {\bf P}(\mathbf{e}_{t+1}| \mathbf{X}_{t+1}) \sum_{\mathbf{x}_t} {\bf P}(\mathbf{X}_{t+1}|\mathbf{x}_t, \mathbf{e}_{1:t}) P(\mathbf{x}_t | \mathbf{e}_{1:t}) \\
&\propto {\bf P}(\mathbf{e}_{t+1}| \mathbf{X}_{t+1}) \sum_{\mathbf{x}_t} {\bf P}(\mathbf{X}_{t+1}|\mathbf{x}_t) P(\mathbf{x}_t | \mathbf{e}_{1:t})
\end{aligned}
$$
where the proportionality constants are absorbed into a final normalization over $\mathbf{X}_{t+1}$.
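As an illustration, the recursion above translates directly into a few lines of Python. This is a minimal sketch, using the transition values of the weather example above; the sensor values (0.9 and 0.2 for observing an umbrella in the sun and rain states) are assumptions borrowed from the umbrella example used later in this lecture.

```python
import numpy as np

# Transition matrix of the weather example: row i gives P(X_{t+1} | X_t = i),
# with states ordered (sun, rain).
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def bayes_filter_update(belief, likelihood):
    """One Bayes filter step: predict through the transition model,
    then reweight by the evidence likelihood and normalize."""
    prediction = T.T @ belief             # sum_x P(X_{t+1} | x_t) P(x_t | e_{1:t})
    posterior = likelihood * prediction   # multiply by P(e_{t+1} | X_{t+1})
    return posterior / posterior.sum()    # normalization fixes the constant

belief = np.array([0.5, 0.5])                                # P(X_0)
belief = bayes_filter_update(belief, np.array([0.9, 0.2]))   # umbrella observed
```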
Solve on blackboard.
Ghostbusters with a Bayes filter
]
.footnote[Credits: CS188, UC Berkeley.]
???
python3 run.py --nghosts 2 --layout maze_small --agentfile sherlockpacman.py --bsagentfile bayesfilter.py --show True
python3 run.py --nghosts 3 --layout maze_medium --agentfile sherlockpacman.py --bsagentfile bayesfilter.py --show True
python3 run.py --nghosts 4 --layout maze_huge --agentfile sherlockpacman.py --bsagentfile bayesfilter.py --show True
We want to compute ${\bf P}(\mathbf{X}_{k}| \mathbf{e}_{1:t})$ for $0 \leq k < t$ .
Dividing the evidence $\mathbf{e}_{1:t}$ into $\mathbf{e}_{1:k}$ and $\mathbf{e}_{k+1:t}$ , we have
$$
\begin{aligned}
{\bf P}(\mathbf{X}_k | \mathbf{e}_{1:t}) &= {\bf P}(\mathbf{X}_k | \mathbf{e}_{1:k}, \mathbf{e}_{k+1:t}) \\
&\propto {\bf P}(\mathbf{X}_k | \mathbf{e}_{1:k}) {\bf P}(\mathbf{e}_{k+1:t} | \mathbf{X}_k, \mathbf{e}_{1:k}) \\
&\propto {\bf P}(\mathbf{X}_k | \mathbf{e}_{1:k}) {\bf P}(\mathbf{e}_{k+1:t} | \mathbf{X}_k).
\end{aligned}
$$
class: middle
Let the backward message $\mathbf{b}_{k+1:t}$ correspond to ${\bf P}(\mathbf{e}_{k+1:t} | \mathbf{X}_k)$ . Then,
$${\bf P}(\mathbf{X}_k | \mathbf{e}_{1:t}) = \alpha\, \mathbf{f}_{1:k} \times \mathbf{b}_{k+1:t},$$
where $\times$ is a pointwise multiplication of vectors.
This backward message can be computed using a backward recursion:
$$
\begin{aligned}
{\bf P}(\mathbf{e}_{k+1:t} | \mathbf{X}_k) &= \sum_{\mathbf{x}_{k+1}} {\bf P}(\mathbf{e}_{k+1:t} | \mathbf{X}_k, \mathbf{x}_{k+1}) {\bf P}(\mathbf{x}_{k+1} | \mathbf{X}_k) \\
&= \sum_{\mathbf{x}_{k+1}} P(\mathbf{e}_{k+1:t} | \mathbf{x}_{k+1}) {\bf P}(\mathbf{x}_{k+1} | \mathbf{X}_k) \\
&= \sum_{\mathbf{x}_{k+1}} P(\mathbf{e}_{k+1} | \mathbf{x}_{k+1}) P(\mathbf{e}_{k+2:t} | \mathbf{x}_{k+1}) {\bf P}(\mathbf{x}_{k+1} | \mathbf{X}_k).
\end{aligned}
$$
The first and last factors are given by the model. The second factor is obtained recursively. Therefore,
$$\mathbf{b}_{k+1:t} = \text{backward}(\mathbf{b}_{k+2:t}, \mathbf{e}_{k+1} ).$$
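In code, one step of this recursion is a single matrix-vector product. A minimal sketch, assuming $\mathbf{T}$ is the transition matrix with $\mathbf{T}_{ij} = P(\mathbf{x}_{k+1}=j | \mathbf{X}_k=i)$ (the matrix notation introduced later in this lecture):

```python
import numpy as np

def backward(b_next, likelihood, T):
    """b_{k+1:t} from b_{k+2:t}, where likelihood[j] = P(e_{k+1} | x_{k+1} = j)."""
    # For each value i of X_k: sum_j P(e_{k+1}|j) P(e_{k+2:t}|j) P(x_{k+1}=j|i)
    return T @ (likelihood * b_next)
```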
class: middle
Forward-backward algorithm
.center.width-100[ ]
Complexity:
- Smoothing at a particular time step $k$ takes $O(t)$ time.
- Smoothing a whole sequence also takes only $O(t)$ time, thanks to caching of the forward and backward messages.
class: middle
.center.width-80[ ]
???
Solve on blackboard.
.center.width-100[![](figures/lec6/sherlock-mostlikely.png)]
.footnote[Credits: CS188, UC Berkeley.]
class: middle
.center.width-80[ ]
Suppose that $[\text{true}, \text{true}, \text{false}, \text{true}, \text{true}]$ is the umbrella sequence.
What is the weather sequence that is the most likely to explain this?
Among all $2^5$ sequences, is there an (efficient) way to find the most likely one?
.footnote[Credits: CS188, UC Berkeley.]
class: middle
The most likely sequence is not the sequence of the most likely states!
The most likely path to each $\mathbf{x}_{t+1}$ is the most likely path to some $\mathbf{x}_t$ plus one more step. Therefore,
$$
\begin{aligned}
&\max_{\mathbf{x}_{1:t}} {\bf P}(\mathbf{x}_{1:t}, \mathbf{X}_{t+1} | \mathbf{e}_{1:t+1}) \\
&\propto {\bf P}(\mathbf{e}_{t+1}|\mathbf{X}_{t+1}) \max_{\mathbf{x}_t}( {\bf P}(\mathbf{X}_{t+1} | \mathbf{x}_t) \max_{\mathbf{x}_{1:t-1}} {\bf P}(\mathbf{x}_{1:t-1}, \mathbf{x}_{t} | \mathbf{e}_{1:t}) ).
\end{aligned}
$$
???
Let's focus in particular on paths that reach the state $\text{Rain}_5 = \text{true}$. Because of the Markov property, it follows that the most likely path to the state $\text{Rain}_5 = \text{true}$ consists of the most likely path to some state at time 4, followed by a transition to $\text{Rain}_5 = \text{true}$; and the state at time 4 that will become part of the path to $\text{Rain}_5 = \text{true}$ is whichever maximizes the likelihood of that path.
class: middle
This is identical to filtering, except that
the forward message $\mathbf{f}_{1:t} = {\bf P}(\mathbf{X}_t | \mathbf{e}_{1:t})$ is replaced with
$$\mathbf{m}_{1:t} = \max_{\mathbf{x}_{1:t-1}} {\bf P}(\mathbf{x}_{1:t-1}, \mathbf{X}_{t} | \mathbf{e}_{1:t}),$$
where $\mathbf{m}_{1:t}(i)$ gives the probability of the most likely path to state $i$ .
The update has its sum replaced by max.
The resulting algorithm is called the Viterbi algorithm, which computes the most likely explanation as
$$\mathbf{m}_{1:t+1} \propto {\bf P}(\mathbf{e}_{t+1} | \mathbf{X}_{t+1}) \max_{\mathbf{x}_{t}} {\bf P}(\mathbf{X}_{t+1} | \mathbf{x}_{t}) \mathbf{m}_{1:t}.$$ Its complexity is linear in $t$ , the length of the sequence.
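A minimal sketch of the Viterbi algorithm in Python, under the matrix conventions introduced later in this lecture ($\mathbf{T}$ the transition matrix, $\mathbf{B}$ the sensor matrix; the function name is illustrative). Keeping backpointers also answers how to retrieve the path itself, not just its likelihood:

```python
import numpy as np

def viterbi(f0, T, B, evidence):
    """Return the most likely state sequence given evidence indices e_1..e_t."""
    m = B[:, evidence[0]] * (T.T @ f0)       # m_{1:1} ∝ P(e_1|X_1) P(X_1)
    backpointers = []
    for e in evidence[1:]:
        scores = T * m[:, None]              # scores[i, j] = m(i) P(X_{t+1}=j | x_t=i)
        backpointers.append(scores.argmax(axis=0))
        m = B[:, e] * scores.max(axis=0)     # max over x_t, reweight by e_{t+1}
        m /= m.sum()                         # normalize for numerical stability
    path = [int(m.argmax())]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))       # follow backpointers to recover the path
    return path[::-1]
```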
???
Naive procedure: use smoothing to compute $P(X_k|e_{1:t})$ for each $k$, then output the sequence of individually most likely values.
class: middle
.center.width-90[ ]
???
[Q] How do you retrieve the path, in addition to its likelihood?
So far, we described Markov processes over arbitrary sets of state variables $\mathbf{X}_t$ and evidence variables $\mathbf{E}_t$ .
A hidden Markov model (HMM) is a Markov process in which the state $\mathbf{X}_t$ and the evidence $\mathbf{E}_t$ are both single discrete random variables.
$\mathbf{X}_t = X_t$ , with domain $D_{X_t} = \{1, ..., S\}$
$\mathbf{E}_t = E_t$ , with domain $D_{E_t} = \{1, ..., R\}$
This restricted structure allows for a reformulation of the forward-backward algorithm in terms of matrix-vector operations.
class: middle
Some authors instead divide Markov models into two classes, depending on the observability of the system state:
Observable system state: Markov chains
Partially-observable system state: Hidden Markov models.
We instead follow the terminology of the textbook, as defined in the previous slide.
class: middle
Simplified matrix algorithms
The prior ${\bf P}(X_0)$ becomes a (normalized) column vector $\mathbf{f}_0 \in \mathbb{R}_+^S$ .
The transition model ${\bf P}(X_t | X_{t-1})$ becomes an $S \times S$ transition matrix $\mathbf{T}$ , such that $$\mathbf{T}_{ij} = P(X_t=j | X_{t-1}=i).$$
The sensor model ${\bf P}(E_t | X_t)$ is defined as an $S \times R$ sensor matrix $\mathbf{B}$ , such that
$$\mathbf{B}_{ij} = P(E_t=j | X_t=i).$$
class: middle
Let the observation matrix $\mathbf{O}_t$ be a diagonal matrix whose diagonal elements are given by the column $e_t$ of the sensor matrix $\mathbf{B}$.
If we use column vectors to represent forward and backward messages, then we have
$$\mathbf{f}_{1:t+1} = \alpha \mathbf{O}_{t+1} \mathbf{T}^T \mathbf{f}_{1:t}$$
$$\mathbf{b}_{k+1:t} = \mathbf{T} \mathbf{O}_{k+1} \mathbf{b}_{k+2:t},$$
where $\mathbf{b}_{t+1:t}$ is an all-ones vector of size $S$.
Therefore the forward-backward algorithm needs time $O(S^2t)$ and space $O(St)$ .
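These two message updates fit in a dozen lines of numpy. A minimal sketch of the resulting forward-backward algorithm (the function name is illustrative):

```python
import numpy as np

def forward_backward(f0, T, B, evidence):
    """Smoothed posteriors P(X_k | e_{1:t}) for k = 1..t, with evidence
    given as column indices e_1..e_t into the sensor matrix B."""
    t = len(evidence)
    O = [np.diag(B[:, e]) for e in evidence]     # observation matrices O_1..O_t
    f = [f0]                                     # forward messages, f[k] = f_{1:k}
    for k in range(t):
        m = O[k] @ T.T @ f[-1]                   # f_{1:k+1} = alpha O_{k+1} T^T f_{1:k}
        f.append(m / m.sum())
    b = np.ones(len(f0))                         # b_{t+1:t} is an all-ones vector
    smoothed = []
    for k in range(t, 0, -1):
        s = f[k] * b                             # P(X_k | e_{1:t}) ∝ f_{1:k} × b_{k+1:t}
        smoothed.append(s / s.sum())
        b = T @ O[k - 1] @ b                     # b_{k:t} = T O_k b_{k+1:t}
    return smoothed[::-1]
```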
class: middle
Suppose that $[\text{true}, \text{true}, \text{false}, \text{true}, \text{true}]$ is the umbrella sequence.
$$
\begin{aligned}
\mathbf{f}_0 &= \left(\begin{matrix}
0.5 \\
0.5
\end{matrix}\right)\\
\mathbf{T} &= \left(\begin{matrix}
0.7 & 0.3 \\
0.3 & 0.7
\end{matrix}\right)\\
\mathbf{B} &= \left(\begin{matrix}
0.9 & 0.1 \\
0.2 & 0.8
\end{matrix}\right)\\
\mathbf{O}_1 = \mathbf{O}_2 = \mathbf{O}_4 = \mathbf{O}_5 &= \left(\begin{matrix}
0.9 & 0.0 \\
0.0 & 0.2
\end{matrix}\right) \\
\mathbf{O}_3 &= \left(\begin{matrix}
0.1 & 0.0 \\
0.0 & 0.8
\end{matrix}\right)
\end{aligned}
$$
See code/lecture6-forward-backward.ipynb for the execution.
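For reference, these matrices plug directly into the forward-backward sketch given a few slides back (the notebook remains the full execution); column 0 of $\mathbf{B}$ corresponds to umbrella = true:

```python
import numpy as np

f0 = np.array([0.5, 0.5])
T = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
evidence = [0, 0, 1, 0, 0]     # umbrella: true, true, false, true, true
for k, s in enumerate(forward_backward(f0, T, B, evidence), start=1):
    print(k, s)                # smoothed posterior P(X_k | e_{1:5}) for each k
```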
class: middle
The stationary distribution $\mathbf{f}$ of an HMM is a distribution such that
$$\mathbf{f} = \mathbf{T}^T \mathbf{f}.$$
Therefore, the stationary distribution corresponds to a (normalized) eigenvector of the transposed transition matrix with an eigenvalue of $1$ .
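As a quick numerical check, a sketch using the umbrella transition matrix as an example:

```python
import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])
vals, vecs = np.linalg.eig(T.T)
f = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])  # eigenvector for eigenvalue 1
f /= f.sum()                   # normalize the eigenvector into a distribution
print(f)                       # [0.5, 0.5]: this symmetric chain forgets its start
```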
class: middle
class: middle
.center.width-50[ ]
Suppose we want to track the position and velocity of a robot from noisy observations collected over time.
Formally, we want to estimate continuous state variables such as
the position $\mathbf{X}_t$ of the robot at time $t$ ,
the velocity $\mathbf{\dot{X}}_t$ of the robot at time $t$ .
We assume discrete time steps.
.footnote[Credits: CS188, UC Berkeley.]
Let $X: \Omega \to D_X$ be a random variable.
When $D_X$ is uncountably infinite (e.g., $D_X = \mathbb{R}$), $X$ is called a continuous random variable.
If $X$ is absolutely continuous, its probability distribution is described by a density function $p$ that assigns a probability to any interval $[a,b] \subseteq D_X$ such that
$$P(a < X \leq b) = \int_a^b p(x) dx,$$
where $p$ is non-negative piecewise continuous and such that $$\int_{D_X} p(x)dx=1.$$
class: middle
.center.width-60[ ]
The uniform distribution $\mathcal{U}(a,b)$ is described by the density function
$$
p(x) = \begin{cases}
\frac{1}{b-a} & \text{if } x \in [a,b]\\
0 & \text{otherwise}
\end{cases}$$
where $a \in \mathbb{R}$ and $b \in \mathbb{R}$, with $a < b$, are the bounds of its support.
class: middle
.center.width-60[ ]
The normal (or Gaussian) distribution $\mathcal{N}(\mu,\sigma)$ is described by the density function
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
where $\mu \in \mathbb{R}$ and $\sigma \in \mathbb{R}^+$ are its mean and standard deviation parameters.
???
Comment that
$\mu$ is the location
$\sigma$ is the width of the normal
class: middle
.center.width-60[ ]
The multivariate normal distribution generalizes the normal distribution to $n$ random variables. Its (joint) density function is defined as
$$p(\mathbf{x}=x_1, ..., x_n) = \frac{1}{\sqrt{(2\pi)^n|\mathbf{\Sigma}|}} \exp\left(-\frac{1}{2} (\mathbf{x}-\mathbf{m})^T \mathbf{\Sigma}^{-1} (\mathbf{x}-\mathbf{m}) \right) $$
where $\mathbf{m} \in \mathbb{R}^n$ is the mean vector and $\mathbf{\Sigma} \in \mathbb{R}^{n\times n}$ is a positive definite covariance matrix.
class: middle
Cheat sheet for Gaussian models (Särkkä, 2013)
If $\mathbf{x}$ and $\mathbf{y}$ have the joint Gaussian distribution
$$
\begin{aligned}
p\left(\begin{matrix}
\mathbf{x} \\
\mathbf{y}
\end{matrix}\right) = \mathcal{N}\left( \left(\begin{matrix}
\mathbf{x} \\
\mathbf{y}
\end{matrix}\right) \bigg\vert \left(\begin{matrix}
\mathbf{a} \\
\mathbf{b}
\end{matrix}\right), \left(\begin{matrix}
\mathbf{A} & \mathbf{C} \\
\mathbf{C}^T & \mathbf{B}
\end{matrix}\right) \right),
\end{aligned}
$$
then the marginal and conditional distributions of $\mathbf{x}$ and $\mathbf{y}$ are given by
$$
\begin{aligned}
p(\mathbf{x}) &= \mathcal{N}(\mathbf{x}|\mathbf{a}, \mathbf{A}) \\
p(\mathbf{y}) &= \mathcal{N}(\mathbf{y}|\mathbf{b}, \mathbf{B}) \\
p(\mathbf{x}|\mathbf{y}) &= \mathcal{N}(\mathbf{x}|\mathbf{a}+\mathbf{C}\mathbf{B}^{-1}(\mathbf{y}-\mathbf{b}), \mathbf{A}-\mathbf{C}\mathbf{B}^{-1}\mathbf{C}^T) \\
p(\mathbf{y}|\mathbf{x}) &= \mathcal{N}(\mathbf{y}|\mathbf{b}+\mathbf{C}^T\mathbf{A}^{-1}(\mathbf{x} - \mathbf{a}) , \mathbf{B}-\mathbf{C}^T\mathbf{A}^{-1}\mathbf{C}).
\end{aligned}
$$
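These identities are easy to check numerically. A small Monte Carlo sanity check of the conditional-mean formula (the numbers are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = np.array([0.0]), np.array([1.0])
A, B, C = np.array([[1.0]]), np.array([[2.0]]), np.array([[0.8]])
mean, cov = np.concatenate([a, b]), np.block([[A, C], [C.T, B]])

xy = rng.multivariate_normal(mean, cov, size=500_000)
y0 = 2.0
mask = np.abs(xy[:, 1] - y0) < 0.05        # keep samples with y close to y0
empirical = xy[mask, 0].mean()             # empirical conditional mean of x
closed_form = (a + C @ np.linalg.inv(B) @ (np.array([y0]) - b))[0]
print(empirical, closed_form)              # both close to 0.4
```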
class: middle
If the random variables $\mathbf{x}$ and $\mathbf{y}$ have Gaussian probability distributions
$$
\begin{aligned}
p(\mathbf{x}) &= \mathcal{N}(\mathbf{x}|\mathbf{m}, \mathbf{P}) \\
p(\mathbf{y}|\mathbf{x}) &= \mathcal{N}(\mathbf{y}|\mathbf{H}\mathbf{x}+\mathbf{u}, \mathbf{R}),
\end{aligned}
$$
then the joint distribution of $\mathbf{x}$ and $\mathbf{y}$ is Gaussian with
$$
\begin{aligned}
p\left(\begin{matrix}
\mathbf{x} \\
\mathbf{y}
\end{matrix}\right) = \mathcal{N}\left( \left(\begin{matrix}
\mathbf{x} \\
\mathbf{y}
\end{matrix}\right) \bigg\vert \left(\begin{matrix}
\mathbf{m} \\
\mathbf{H}\mathbf{m}+\mathbf{u}
\end{matrix}\right), \left(\begin{matrix}
\mathbf{P} & \mathbf{P}\mathbf{H}^T \\
\mathbf{H}\mathbf{P} & \mathbf{H}\mathbf{P}\mathbf{H}^T + \mathbf{R}
\end{matrix}\right) \right).
\end{aligned}
$$
The Bayes filter extends to continuous state and evidence variables $\mathbf{X}_{t}$ and $\mathbf{E}_{t}$ .
The summations are replaced with integrals and the probability mass functions with probability densities, giving the recursive Bayesian relation
$$
\begin{aligned}
p(\mathbf{x}_{t+1}| \mathbf{e}_{1:t+1}) &\propto p(\mathbf{e}_{t+1}| \mathbf{x}_{t+1}) \int p(\mathbf{x}_{t+1}|\mathbf{x}_t) p(\mathbf{x}_t | \mathbf{e}_{1:t}) d{\mathbf{x}_t},
\end{aligned}
$$
where the normalization constant is
$$Z = \int p(\mathbf{e}_{t+1} | \mathbf{x}_{t+1}) p(\mathbf{x}_{t+1} | \mathbf{e}_{1:t}) d\mathbf{x}_{t+1}.$$
The Kalman filter is a special case of the Bayes filter, which assumes:
Gaussian prior
Linear Gaussian transition model
Linear Gaussian sensor model
.grid[
.kol-1-2.center[
$p(\mathbf{x}_{t+1} | \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t+1} | \mathbf{A} \mathbf{x}_t + \mathbf{b}, \mathbf{\Sigma}_{\mathbf{x}})$
Transition model
]
.kol-1-2.center[
$p(\mathbf{e}_{t} | \mathbf{x}_t) = \mathcal{N}(\mathbf{e}_t | \mathbf{C} \mathbf{x}_t + \mathbf{d}, \mathbf{\Sigma}_{\mathbf{e}})$
Sensor model
]
]
class: middle
Filtering Gaussian distributions
.italic[Prediction step:]
If the distribution $p(\mathbf{x}_t | \mathbf{e}_{1:t})$ is Gaussian and the transition model $p(\mathbf{x}_{t+1} | \mathbf{x}_{t})$ is linear Gaussian, then the one-step predicted distribution given by
$$p(\mathbf{x}_{t+1} | \mathbf{e}_{1:t}) = \int p(\mathbf{x}_{t+1} | \mathbf{x}_{t}) p(\mathbf{x}_{t} | \mathbf{e}_{1:t}) d\mathbf{x}_t $$
is also a Gaussian distribution.
.italic[Update step:]
If the prediction $p(\mathbf{x}_{t+1} | \mathbf{e}_{1:t})$ is Gaussian and the sensor model $p(\mathbf{e}_{t+1} | \mathbf{x}_{t+1})$ is linear Gaussian, then after conditioning on new evidence, the updated distribution
$$p(\mathbf{x}_{t+1} | \mathbf{e}_{1:t+1}) \propto p(\mathbf{e}_{t+1} | \mathbf{x}_{t+1}) p(\mathbf{x}_{t+1} | \mathbf{e}_{1:t})$$
is also a Gaussian distribution.
class: middle
Therefore, for the Kalman filter, $p(\mathbf{x}_t | \mathbf{e}_{1:t})$ is a multivariate Gaussian distribution $\mathcal{N}(\mathbf{x}_t | \mathbf{\mu}_t, \mathbf{\Sigma}_t)$ for all $t$ .
Filtering reduces to the computation of the parameters $\mathbf{\mu}_t$ and $\mathbf{\Sigma}_t$.
By contrast, for general (non-linear, non-Gaussian) processes, the description of the posterior grows unboundedly as $t \to \infty$ .
class: middle
Gaussian random walk:
Gaussian prior: $$p(x_0) = \mathcal{N}(x_0 | \mu_0, \sigma_0^2) $$
The transition model adds random perturbations of constant variance:
$$p(x_{t+1}|x_t) = \mathcal{N}(x_{t+1}|x_t, \sigma_x^2)$$
The sensor model yields measurements with Gaussian noise of constant variance:
$$p(e_{t}|x_t) = \mathcal{N}(e_t | x_t, \sigma_e^2)$$
class: middle
The one-step predicted distribution is given by
$$
\begin{aligned}
p(x_1) &= \int p(x_1 | x_0) p(x_0) dx_0 \\
&\propto \int \exp\left(-\frac{1}{2} \frac{(x_{1} - x_0)^2}{\sigma_x^2}\right) \exp\left(-\frac{1}{2} \frac{(x_0 - \mu_0)^2}{\sigma_0^2}\right) dx_0 \\
&\propto \int \exp\left( -\frac{1}{2} \frac{\sigma_0^2 (x_1 - x_0)^2 + \sigma_x^2(x_0 - \mu_0)^2}{\sigma_0^2 \sigma_x^2} \right) dx_0 \\
&\quad \vdots \quad \text{(simplify by completing the square)} \\
&\propto \exp\left( -\frac{1}{2} \frac{(x_1 - \mu_0)^2}{\sigma_0^2 + \sigma_x^2} \right) \\
&= \mathcal{N}(x_1 | \mu_0, \sigma_0^2 + \sigma_x^2)
\end{aligned}
$$
Note that the same result can be obtained by using the Gaussian model identities instead.
class: middle
For the update step, we need to condition on the observation at the first time step:
$$
\begin{aligned}
p(x_1 | e_1) &\propto p(e_1 | x_1) p(x_1) \\
&\propto \exp\left(-\frac{1}{2} \frac{(e_{1} - x_1)^2}{\sigma_e^2}\right) \exp\left( -\frac{1}{2} \frac{(x_1 - \mu_0)^2}{\sigma_0^2 + \sigma_x^2} \right) \\
&\propto \exp\left( -\frac{1}{2} \frac{\left(x_1 - \frac{(\sigma_0^2 + \sigma_x^2) e_1 + \sigma_e^2 \mu_0}{\sigma_0^2 + \sigma_x^2 + \sigma_e^2}\right)^2}{\frac{(\sigma_0^2 + \sigma_x^2)\sigma_e^2}{\sigma_0^2 + \sigma_x^2 + \sigma_e^2}} \right) \\
&= \mathcal{N}\left(x_1 \bigg\vert \frac{(\sigma_0^2 + \sigma_x^2) e_1 + \sigma_e^2 \mu_0}{\sigma_0^2 + \sigma_x^2 + \sigma_e^2}, \frac{(\sigma_0^2 + \sigma_x^2)\sigma_e^2}{\sigma_0^2 + \sigma_x^2 + \sigma_e^2}\right)
\end{aligned}
$$
class: middle
.center.width-70[ ]
In summary, the update equations given new evidence $e_{t+1}$ are:
$$
\begin{aligned}
\mu_{t+1} &= \frac{(\sigma_t^2 + \sigma_x^2) e_{t+1} + \sigma_e^2 \mu_t }{\sigma_t^2 + \sigma_x^2 + \sigma_e^2} \\
\sigma_{t+1}^2 &= \frac{(\sigma_t^2 + \sigma_x^2) \sigma_e^2}{\sigma_t^2 + \sigma_x^2 + \sigma_e^2}
\end{aligned}
$$
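These two equations translate directly into code. A minimal sketch for the Gaussian random walk (the prior and measurement values are made up for illustration):

```python
def kalman_1d_update(mu, sigma2, e, sigma_x2, sigma_e2):
    """One filtering step for the Gaussian random walk."""
    pred = sigma2 + sigma_x2                              # predicted variance
    mu_new = (pred * e + sigma_e2 * mu) / (pred + sigma_e2)
    sigma2_new = (pred * sigma_e2) / (pred + sigma_e2)
    return mu_new, sigma2_new

mu, sigma2 = 0.0, 1.0                                     # hypothetical prior
for e in [0.8, 1.1, 0.9]:                                 # hypothetical measurements
    mu, sigma2 = kalman_1d_update(mu, sigma2, e, sigma_x2=0.1, sigma_e2=0.5)
```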
???
We can interpret the calculation of the new mean $\mu_{t+1}$ as simply a weighted mean of the new observation $e_{t+1}$ and the old mean $\mu_t$:
- If the observation is unreliable ($\sigma_e^2$ is large), then we pay more attention to the old mean.
- If the observation is reliable, then we pay more attention to the evidence and less to the old mean.
- If the old mean is unreliable ($\sigma_t^2$ is large) or the process is highly unpredictable ($\sigma_x^2$ is large), then we pay more attention to the observation.
class: middle
The same derivations generalize to multivariate normal distributions.
Assuming the transition and sensor models
$$
\begin{aligned}
p(\mathbf{x}_{t+1} | \mathbf{x}_t) &= \mathcal{N}(\mathbf{x}_{t+1} | \mathbf{F} \mathbf{x}_t, \mathbf{\Sigma}_{\mathbf{x}}) \\
p(\mathbf{e}_{t} | \mathbf{x}_t) &= \mathcal{N}(\mathbf{e}_{t} | \mathbf{H} \mathbf{x}_t, \mathbf{\Sigma}_{\mathbf{e}}),
\end{aligned}
$$
we arrive at the following general update equations:
$$
\begin{aligned}
\mathbf{\mu}_{t+1} &= \mathbf{F}\mathbf{\mu}_t + \mathbf{K}_{t+1} (\mathbf{e}_{t+1} - \mathbf{H} \mathbf{F} \mathbf{\mu}_t) \\
\mathbf{\Sigma}_{t+1} &= (\mathbf{I} - \mathbf{K}_{t+1} \mathbf{H}) (\mathbf{F}\mathbf{\Sigma}_t \mathbf{F}^T + \mathbf{\Sigma}_x) \\
\mathbf{K}_{t+1} &= (\mathbf{F}\mathbf{\Sigma}_t \mathbf{F}^T + \mathbf{\Sigma}_x) \mathbf{H}^T (\mathbf{H}(\mathbf{F}\mathbf{\Sigma}_t \mathbf{F}^T + \mathbf{\Sigma}_x)\mathbf{H}^T + \mathbf{\Sigma}_e)^{-1}
\end{aligned}$$
where $\mathbf{K}_{t+1}$ is the Kalman gain matrix.
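A direct transcription of these equations into numpy (a sketch; the matrices are assumed given in the notation above):

```python
import numpy as np

def kalman_update(mu, Sigma, e, F, H, Sigma_x, Sigma_e):
    """One Kalman filter step: predict, then correct with the new evidence e."""
    mu_pred = F @ mu                                     # predicted state F mu_t
    P = F @ Sigma @ F.T + Sigma_x                        # predicted covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Sigma_e)   # Kalman gain K_{t+1}
    mu_new = mu_pred + K @ (e - H @ mu_pred)             # correct by observation error
    Sigma_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, Sigma_new
```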
???
Note that $\mathbf{\Sigma}_{t+1}$ and $\mathbf{K}_{t+1}$ are independent of the evidence. Therefore, they can be computed offline.

These equations intuitively make sense. Consider the update for the mean state estimate $\mathbf{\mu}_{t+1}$. The term $\mathbf{F}\mathbf{\mu}_t$ is the predicted state at $t+1$, so $\mathbf{H}\mathbf{F}\mathbf{\mu}_t$ is the predicted observation. Therefore, the term $\mathbf{e}_{t+1} - \mathbf{H}\mathbf{F}\mathbf{\mu}_t$ represents the error in the predicted observation. This is multiplied by $\mathbf{K}_{t+1}$ to correct the predicted state; hence, $\mathbf{K}_{t+1}$ is a measure of how seriously to take the new observation relative to the prediction.
class: middle
The Apollo Guidance Computer used a Kalman filter to estimate the position of the spacecraft, merging new measurements with past position estimates to produce an optimal estimate of its current position.
.grid[
.kol-1-3[.width-100[ ]]
.kol-2-3[.width-100[ ]]
]
.footnote[Credits: Apollo-11 source code.]
class: middle
.center.width-75[ ]
.center[Demo: tracking an object in space using the Kalman filter.]
class: middle
Data assimilation for weather forecasts
In weather forecasting, filtering is used to combine observations of the atmosphere with numerical models to estimate its current state.
This is called data assimilation.
Then, the model is used to predict the future states of the atmosphere.
class: middle, black-slide
.center[
<iframe width="640" height="400" src="https://www.youtube.com/embed/9c4kXW7btBE?cc_load_policy=1&hl=en&version=3" frameborder="0" allowfullscreen></iframe>
]
Dynamic Bayesian networks
.grid[
.kol-2-3[.center.width-100[ ]]
.kol-1-3[.center.width-80[ ]]
]
Dynamic Bayesian networks (DBNs) can be used for tracking multiple variables over time, using multiple sources of evidence. Idea:
Repeat a fixed Bayes net structure at each time $t$ .
Variables from time $t$ condition on those from $t-1$ .
DBNs are a generalization of HMMs and of the Kalman filter.
.footnote[Credits: CS188, UC Berkeley.]
class: middle
.pull-right[ ]
Application: ICU monitoring
.center.width-100[ ]
class: middle
.center.width-100[ ]
Unroll the network through time and run any exact inference algorithm (e.g., variable elimination).
Problem: the inference cost for each update grows with $t$.
Rollup filtering: add slice $t+1$, sum out slice $t$ using variable elimination.
The largest factor is $O(d^{n+k})$ and the total update cost per step is $O(nd^{n+k})$.
This is better than the HMM formulation, which costs $O(d^{2n})$, but still infeasible for large numbers of variables.
Basic idea:
- Maintain a finite population of samples, called particles.
- The representation of our beliefs is a list of $N$ particles.
- Ensure the particles track the high-likelihood regions of the state space.
- Throw away samples that have very low weight, according to the evidence.
- Replicate those that have high weight.

This scales to high dimensions! (A sketch of one step is given below.)
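A minimal sketch of one particle filter step, assuming user-supplied transition_sample and likelihood functions (hypothetical names, not from the lecture code):

```python
import numpy as np

def particle_filter_step(particles, e, transition_sample, likelihood, rng):
    """One step: propagate, weight by the evidence, then resample."""
    # 1. Propagate each particle through the transition model.
    particles = np.array([transition_sample(x, rng) for x in particles])
    # 2. Weight each particle by the likelihood of the evidence.
    weights = np.array([likelihood(e, x) for x in particles])
    weights /= weights.sum()
    # 3. Resample: low-weight particles die out, high-weight ones replicate.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```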
.center.width-50[ ]
.footnote[Credits: CS188, UC Berkeley.]
class: middle
.center.width-100[ ]
.footnote[Credits: CS188, UC Berkeley.]
class: middle
.center.width-100[ ]
class: middle
.center.width-70[ ]
.center[(See demo)]
Temporal models use state and sensor variables replicated over time.
Their purpose is to maintain a belief state as time passes and as more evidence is collected.
The Markov and stationarity assumptions imply that we only need to specify
a transition model ${\bf P}(\mathbf{X}_{t+1} | \mathbf{X}_t)$ ,
a sensor model ${\bf P}(\mathbf{E}_t | \mathbf{X}_t)$ .
Inference tasks include filtering, prediction, smoothing and finding the most likely sequence.
Filtering algorithms are all based on the core idea of
projecting the current belief state through the transition model,
updating the prediction according to the new evidence.
class: end-slide, center
count: false
The end.