For example, one may choose to learn a linear model of the form

$$s_{t+1} = A s_t + B a_t,$$

using an algorithm similar to linear regression. Here, the parameters of the model are the matrices $A$ and $B$ , and we can estimate them using the data collected from our $m$ trials, by picking

$$\arg\min_{A,B} \sum_{i=1}^{m} \sum_{t=0}^{T-1} \left\| s_{t+1}^{(i)} - \left( A s_t^{(i)} + B a_t^{(i)} \right) \right\|^2 .$$
(This corresponds to the maximum likelihood estimate of the parameters.)
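As an illustrative sketch, this least-squares fit can be carried out directly with NumPy. The data layout (a list of `(states, actions)` array pairs, one per trial) and the function name are our own assumptions, not part of the text:

```python
import numpy as np

def fit_linear_model(trajectories):
    """Estimate A, B minimizing sum ||s_{t+1} - (A s_t + B a_t)||^2.

    trajectories: list of (states, actions) pairs for each trial, where
    states has shape (T+1, n) and actions has shape (T, d).
    (Hypothetical data layout, for illustration.)
    """
    X, Y = [], []
    for states, actions in trajectories:
        for t in range(len(actions)):
            X.append(np.concatenate([states[t], actions[t]]))
            Y.append(states[t + 1])
    X, Y = np.array(X), np.array(Y)
    # Solve the least-squares problem Y ~ X W, where W^T = [A B].
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    n = trajectories[0][0].shape[1]  # state dimension
    return W[:n].T, W[n:].T          # A, B
```

Stacking each $(s_t, a_t)$ into one regressor row reduces the matrix estimation problem to an ordinary least-squares solve.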
Having learned $A$ and $B$ , one option is to build a deterministic model, in which given an input ${s}_{t}$ and ${a}_{t}$ , the output ${s}_{t+1}$ is exactly determined. Specifically, we always compute ${s}_{t+1}$ according to Equation [link] . Alternatively, we may also build a stochastic model, in which ${s}_{t+1}$ is a random function of the inputs, by modeling it as

$$s_{t+1} = A s_t + B a_t + \epsilon_t ,$$
where here ${\epsilon}_{t}$ is a noise term, usually modeled as ${\epsilon}_{t}\sim \mathcal{N}(0,\Sigma )$ . (The covariance matrix $\Sigma $ can also be estimated from data in a straightforward way.)
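Concretely, one straightforward estimate of $\Sigma $ is the empirical covariance of the one-step residuals under the fitted $A$ and $B$ . The sketch below (function names and the zero-mean noise assumption are ours, not the text's) shows both the estimate and a single stochastic simulation step:

```python
import numpy as np

def fit_noise_covariance(trajectories, A, B):
    """Empirical covariance of residuals eps_t = s_{t+1} - (A s_t + B a_t).
    This is the ML estimate of Sigma assuming zero-mean Gaussian noise."""
    residuals = []
    for states, actions in trajectories:
        for t in range(len(actions)):
            residuals.append(states[t + 1] - (A @ states[t] + B @ actions[t]))
    R = np.array(residuals)
    return (R.T @ R) / len(R)

def stochastic_step(s, a, A, B, Sigma, rng):
    """One step of the stochastic model: s' = A s + B a + eps, eps ~ N(0, Sigma)."""
    return A @ s + B @ a + rng.multivariate_normal(np.zeros(len(s)), Sigma)
```

Setting `Sigma` to the zero matrix recovers the deterministic simulator as a special case.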
Here, we've written the next-state ${s}_{t+1}$ as a linear function of the current state and action; but of course, non-linear functions are also possible. Specifically, one can learn a model ${s}_{t+1}=A{\phi}_{s}\left({s}_{t}\right)+B{\phi}_{a}\left({a}_{t}\right)$ , where ${\phi}_{s}$ and ${\phi}_{a}$ are some non-linear feature mappings of the states and actions. Alternatively, one can also use non-linear learning algorithms, such as locally weighted linear regression, to learn to estimate ${s}_{t+1}$ as a function of ${s}_{t}$ and ${a}_{t}$ . These approaches can also be used to build either deterministic or stochastic simulators of an MDP.
We now describe the fitted value iteration algorithm for approximating the value function of a continuous state MDP. In the sequel, we will assume that the problem has a continuous state space $S={\mathbb{R}}^{n}$ , but that the action space $A$ is small and discrete. In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car has a 6d state space, and a 2d action space (steering and velocity controls); the inverted pendulum has a 4d state space, and a 1d action space; a helicopter has a 12d state space, and a 4d action space. So, discretizing the set of actions is usually less of a problem than discretizing the state space would have been.
Recall that in value iteration, we would like to perform the update

$$V(s) := R(s) + \gamma \max_{a} \int_{s'} P_{sa}(s') V(s') \, ds' .$$
(In "Value iteration and policy iteration" , we had written the value iteration update with a summation $V(s) := R(s) + \gamma \max_{a} \sum_{s'} P_{sa}(s') V(s')$ rather than an integral over states; the new notation reflects that we are now working in continuous states rather than discrete states.)
The main idea of fitted value iteration is that we are going to approximately carry out this step, over a finite sample of states ${s}^{\left(1\right)},...,{s}^{\left(m\right)}$ . Specifically, we will use a supervised learning algorithm—linear regression in our description below—to approximate the value function as a linear or non-linear function of the states:

$$V(s) = \theta^{T} \phi(s) .$$
Here, $\phi $ is some appropriate feature mapping of the states.
For each state ${s}^{\left(i\right)}$ in our finite sample of $m$ states, fitted value iteration will first compute a quantity ${y}^{\left(i\right)}$ , which will be our approximation to $R\left({s}^{\left(i\right)}\right)+\gamma {max}_{a}{\mathrm{E}}_{{s}^{\text{'}}\sim {P}_{{s}^{\left(i\right)}a}}\left[V\left({s}^{\text{'}}\right)\right]$ (the right hand side of Equation [link] ). Then, it will apply a supervised learning algorithm to try to get $V\left({s}^{\left(i\right)}\right)$ close to $R\left({s}^{\left(i\right)}\right)+\gamma {max}_{a}{\mathrm{E}}_{{s}^{\text{'}}\sim {P}_{{s}^{\left(i\right)}a}}\left[V\left({s}^{\text{'}}\right)\right]$ (or, in other words, to try to get $V\left({s}^{\left(i\right)}\right)$ close to ${y}^{\left(i\right)}$ ).
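A minimal sketch of this scheme in Python might look as follows. Here `sample_next(s, a)` stands in for a learned simulator of ${P}_{sa}$ , the expectation over ${s}^{\text{'}}$ is approximated by a $k$-sample Monte Carlo average, and all function and parameter names are our own illustrative choices rather than the text's:

```python
import numpy as np

def fitted_value_iteration(samples, actions, R, sample_next, phi,
                           gamma=0.99, k=10, iters=50):
    """Sketch of fitted value iteration with linear function approximation.

    samples:      the m sampled states s^(1), ..., s^(m)
    actions:      the small, discrete action set
    R(s):         reward function
    sample_next:  draws s' ~ P_{sa} from a (learned) model -- assumed given
    phi(s):       feature vector; V(s) is approximated as theta^T phi(s)
    """
    m = len(samples)
    Phi = np.array([phi(s) for s in samples])
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        y = np.empty(m)
        for i, s in enumerate(samples):
            # Approximate E_{s'~P_{sa}}[V(s')] for each action by Monte Carlo.
            q = [np.mean([phi(sample_next(s, a)) @ theta for _ in range(k)])
                 for a in actions]
            # y^(i) approximates R(s^(i)) + gamma * max_a E[V(s')].
            y[i] = R(s) + gamma * max(q)
        # Supervised learning step: linear regression of y^(i) on phi(s^(i)).
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```

The inner loop builds the targets ${y}^{\left(i\right)}$ , and the regression step plays the role of the supervised learning algorithm that pulls $V\left({s}^{\left(i\right)}\right)$ toward ${y}^{\left(i\right)}$ .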