
Reinforcement learning and control

We now begin our study of reinforcement learning and adaptive control.

In supervised learning, we saw algorithms that tried to make their outputs mimic the labels $y$ given in the training set. In that setting, the labels gave an unambiguous “right answer” for each of the inputs $x$. In contrast, for many sequential decision making and control problems, it is very difficult to provide this type of explicit supervision to a learning algorithm. For example, if we have just built a four-legged robot and are trying to program it to walk, then initially we have no idea what the “correct” actions to take are to make it walk, and so do not know how to provide explicit supervision for a learning algorithm to try to mimic.

In the reinforcement learning framework, we will instead provide our algorithms only a reward function, which indicates to the learning agent when it is doing well, and when it is doing poorly. In the four-legged walking example, the reward function might give the robot positive rewards for moving forwards, and negative rewards for either moving backwards or falling over. It will then be the learning algorithm's job to figure out how to choose actions over time so as to obtain large rewards.

Reinforcement learning has been successful in applications as diverse as autonomous helicopter flight, robot legged locomotion, cell-phone network routing, marketing strategy selection, factory control, and efficient web-page indexing. Our study of reinforcement learning will begin with a definition of Markov decision processes (MDPs), which provide the formalism in which RL problems are usually posed.

Markov decision processes

A Markov decision process is a tuple $(S, A, \{P_{sa}\}, \gamma, R)$, where (a small code sketch of these components follows the list below):

  • $S$ is a set of states. (For example, in autonomous helicopter flight, $S$ might be the set of all possible positions and orientations of the helicopter.)
  • $A$ is a set of actions. (For example, the set of all possible directions in which you can push the helicopter's control sticks.)
  • $P_{sa}$ are the state transition probabilities. For each state $s \in S$ and action $a \in A$, $P_{sa}$ is a distribution over the state space. We'll say more about this later, but briefly, $P_{sa}$ gives the distribution over what states we will transition to if we take action $a$ in state $s$.
  • $\gamma \in [0, 1)$ is called the discount factor.
  • $R : S \times A \mapsto \mathbb{R}$ is the reward function. (Rewards are sometimes also written as a function of a state $S$ only, in which case we would have $R : S \mapsto \mathbb{R}$.)
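To make these components concrete, here is a minimal sketch in Python of one way to represent a small MDP as plain data structures. Everything in it (the state and action names, the particular transition probabilities, the reward table, and the value of the discount factor) is an illustrative assumption, not something specified in the text above.

```python
# A toy MDP written out as explicit tables. All names and numbers are
# illustrative assumptions, not values taken from the text.
states = ["s0", "s1", "s2"]
actions = ["left", "right"]

# P_sa: for each (state, action) pair, a probability distribution over next states.
transition_probs = {
    ("s0", "left"):  {"s0": 0.9, "s1": 0.1, "s2": 0.0},
    ("s0", "right"): {"s0": 0.1, "s1": 0.8, "s2": 0.1},
    ("s1", "left"):  {"s0": 0.7, "s1": 0.2, "s2": 0.1},
    ("s1", "right"): {"s0": 0.0, "s1": 0.2, "s2": 0.8},
    ("s2", "left"):  {"s0": 0.0, "s1": 0.3, "s2": 0.7},
    ("s2", "right"): {"s0": 0.0, "s1": 0.0, "s2": 1.0},
}

# R : S x A -> real numbers, here just a lookup table.
reward = {(s, a): 0.0 for s in states for a in actions}
reward[("s2", "right")] = 1.0   # staying in s2 by pushing "right" pays off

gamma = 0.95  # discount factor, gamma in [0, 1)
```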

The dynamics of an MDP proceed as follows: We start in some state $s_0$, and get to choose some action $a_0 \in A$ to take in the MDP. As a result of our choice, the state of the MDP randomly transitions to some successor state $s_1$, drawn according to $s_1 \sim P_{s_0 a_0}$. Then, we get to pick another action $a_1$. As a result of this action, the state transitions again, now to some $s_2 \sim P_{s_1 a_1}$. We then pick $a_2$, and so on. Pictorially, we can represent this process as follows:

$$s_0 \xrightarrow{a_0} s_1 \xrightarrow{a_1} s_2 \xrightarrow{a_2} s_3 \xrightarrow{a_3} \cdots$$

Upon visiting the sequence of states $s_0, s_1, \ldots$ with actions $a_0, a_1, \ldots$, our total payoff is given by

$$R(s_0, a_0) + \gamma R(s_1, a_1) + \gamma^2 R(s_2, a_2) + \cdots$$
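As an illustration of how this payoff would be computed from sampled trajectories, the following sketch reuses the toy `states`, `transition_probs`, `reward`, and `gamma` defined in the earlier example; the always-"right" rule passed in at the end is likewise an assumption made only for demonstration. It rolls out one run of the MDP and accumulates the discounted sum of rewards:

```python
import numpy as np

def simulate(policy, s0="s0", horizon=100, seed=0):
    """Sample one trajectory and return the (truncated) discounted total payoff
    R(s0, a0) + gamma * R(s1, a1) + gamma^2 * R(s2, a2) + ..."""
    rng = np.random.default_rng(seed)
    s, total, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)                        # pick an action a_t in the current state s_t
        total += discount * reward[(s, a)]   # add gamma^t * R(s_t, a_t)
        discount *= gamma
        dist = transition_probs[(s, a)]      # P_{s_t a_t}: distribution over successor states
        s = rng.choice(list(dist.keys()), p=list(dist.values()))  # draw s_{t+1}
    return total

# Example: a rule that always pushes "right".
print(simulate(lambda s: "right"))
```

Because $\gamma < 1$, rewards received at later time steps are weighted less and less, which is why truncating the infinite sum at a finite horizon, as the sketch does, gives a reasonable approximation.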

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4