Choosing features for approximating the value function is often somewhat harder, because the value of a state measures how good it is to start off in that state: what is my expected sum of discounted rewards if I start in a certain state? And so the features of the state have to measure how good it is to start in that state. For the inverted pendulum, states where the pole is vertical and the cart is centered on the track are better, and so you can come up with features that measure the orientation of the pole and how close you are to the center of the track and so on, and those will be reasonable features to use to approximate V*. Although in general it is true that choosing features for value function approximation is often slightly trickier than choosing good features for linear regression.
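As a concrete sketch of the feature idea above, here is one hypothetical feature map for the inverted pendulum, with a linear value approximation on top of it. The state layout, the feature choices, and the names `phi`, `V_hat`, and `w` are all assumptions for illustration, not the lecture's actual features:

```python
import numpy as np

def phi(state):
    """Hypothetical feature map for an inverted-pendulum value function.

    state = (x, x_dot, theta, theta_dot): cart position, cart velocity,
    pole angle from vertical, pole angular velocity.  The features try to
    measure how far we are from the 'good' configuration (pole upright,
    cart centered), as suggested in the text.
    """
    x, x_dot, theta, theta_dot = state
    return np.array([
        1.0,            # intercept term
        x**2,           # squared distance from the center of the track
        theta**2,       # squared deviation of the pole from vertical
        x_dot**2,       # cart speed
        theta_dot**2,   # pole angular speed
    ])

def V_hat(state, w):
    """Linear value-function approximation: V(s) is approximated by w . phi(s)."""
    return w @ phi(state)
```

With features like these, the weights `w` would typically come out negative on the deviation terms, so that states far from upright-and-centered get low value.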
Okay, and then Justin’s question of, given V*, how do you go back to actually find a policy? In the discrete case, we have that π*(s) is equal to the argmax over actions a of the expected value of V*(s'). That’s something I used to write as a sum over states s' of P_sa(s') V*(s'); I’ll just write it as an expectation. And so once you find the optimal value function V*, you can then find the optimal policy π* by computing that argmax. But if you’re in a continuous state MDP, then you can’t actually do this in advance for every single state, because there’s an infinite number of states, and so you can’t perform this computation in advance for every single state.
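In the discrete case, the computation π*(s) = argmax over a of the sum over s' of P_sa(s') V*(s') is a direct calculation. A minimal sketch, assuming `P[s][a]` is a probability vector over next states and `V_star` a vector of optimal values (both hypothetical names for this sketch):

```python
import numpy as np

def greedy_policy(s, P, V_star):
    """Discrete-case greedy policy:
    pi*(s) = argmax_a sum_{s'} P[s][a][s'] * V*(s').

    P[s][a] is a vector of transition probabilities over next states,
    and V_star is a vector of optimal state values.
    """
    n_actions = len(P[s])
    # Q-value of each action: expected next-state value under P_sa
    q_values = [P[s][a] @ V_star for a in range(n_actions)]
    return int(np.argmax(q_values))
```

In a small finite MDP you could run this once per state and store the resulting policy as a table, which is exactly the precomputation that becomes impossible in the continuous case.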
What you do instead is wait until your system is in some specific state s, like your car being at some position and orientation, or your inverted pendulum being at some specific angle θ. It’s only when your system, be it a factory or a board game or a robot, is in that specific state s that you go ahead and compute this argmax, and then you execute that action a. As a result of your action, your robot transitions to some new state, and it’s for that specific new state that you then compute the argmax again.
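The runtime loop described above can be sketched as follows. Here `step` (the real system's transition) and `choose_action` (the per-state argmax computation) are hypothetical callables introduced only for this sketch:

```python
def run_policy(initial_state, step, choose_action, horizon=100):
    """Sketch of the runtime loop: the argmax is computed only for the
    state the system is actually in, one step at a time, rather than
    precomputed for every state in advance.

    step(s, a) -> next state (the real system's transition);
    choose_action(s) -> greedy action for the current state s.
    """
    s = initial_state
    trajectory = [s]
    for _ in range(horizon):
        a = choose_action(s)   # compute the argmax only for the current state s
        s = step(s, a)         # execute a; the system moves to a new state
        trajectory.append(s)
    return trajectory
```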
There are a few ways to do it. One way is actually the same as in the inner loop of the fitted value iteration algorithm: because this is an expectation over a large number of states, you’d sample some set of states from the simulator and then approximate the expectation using an average over your samples, just as in the inner loop of the value iteration algorithm. So you could do that; that’s sometimes done. But sometimes it can also be a pain to have to sample a set of states to approximate those expectations every time you want to take an action in your MDP.
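A minimal sketch of this sampling approach, assuming a hypothetical stochastic simulator `simulate(s, a, rng)` that returns one sampled next state, and an approximate value function `V_hat`:

```python
import numpy as np

def greedy_action_mc(s, actions, simulate, V_hat, k=50, rng=None):
    """Approximate argmax_a E[V(s')] by averaging V over k sampled next
    states per action, as in the inner loop of fitted value iteration.

    simulate(s, a, rng) -> one sampled next state s' from the simulator;
    V_hat(s') -> approximate value of a state.
    """
    rng = rng or np.random.default_rng(0)
    best_a, best_val = None, -np.inf
    for a in actions:
        # Monte Carlo estimate of E_{s' ~ P_sa}[V(s')]
        est = np.mean([V_hat(simulate(s, a, rng)) for _ in range(k)])
        if est > best_val:
            best_a, best_val = a, est
    return best_a
```

The cost is k simulator calls per action per decision, which is exactly why doing this at every time step can be a pain.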
There are a couple of special cases where this can be done more cheaply. One special case is if you have a deterministic simulator, in other words, if your simulator is just some function, which could be linear or nonlinear. If it’s a deterministic simulator, then the next state s_t+1 is just some function of your previous state and action. If that’s the case, then this expectation simplifies to the argmax over a of V*(f(s, a)), because this is really saying s' = f(s, a). I switch back and forth between notations; I hope that’s okay: s to denote the current state and s' the next state, versus s_t and s_t+1 for the current and next state. Both of these are sort of standard notation, so don’t mind my switching back and forth between them. But if it’s a deterministic simulator, you can just compute what the next state s' would be for each action you might take from the current state, and then take the argmax over actions a: basically, choose the action that gets you to the highest-value state.
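In the deterministic case the sampling disappears entirely: one call to `f` per action. A sketch, where `f` is the (hypothetical) deterministic transition function and `V_hat` the approximate value function:

```python
def greedy_action_det(s, actions, f, V_hat):
    """Deterministic-simulator case: s' = f(s, a), so the expectation
    drops out and pi*(s) = argmax_a V(f(s, a)).

    For each candidate action, compute the next state it would lead to
    and pick the action whose next state has the highest value.
    """
    return max(actions, key=lambda a: V_hat(f(s, a)))
```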