Jim Ratliff's Game Theory Course:
4.3: Solution Concepts in Extensive-Form Games

This is a chapter from Jim Ratliff's Graduate-Level Game-Theory Course. See outline for the entire course. I no longer maintain, update, or correct these notes. However, I would appreciate hearing from people who download these notes and find them useful. Also I may eventually post problem sets and their solutions. Let me know if you'd like to be notified of such changes. (Please email me.)

Abstract

Download this chapter (525KB; 15 pages)

In real-world games we look ahead and plan what we would do in various contingencies. However, our plan is not a commitment. As the situation progresses, we constantly reassess our play in response to new information. Even if there is no new information, and the situation is exactly what we forecast it to be, we can still reassess the wisdom of the plan we made earlier. If at some point what we had planned to do no longer seems optimal, we must deviate from our original plan. Because there are situations in which we cannot commit to a particular decision in the future, a rational player should also be sequentially rational: Her planned action at any situation and point in time must actually be optimal at that time and in that situation given her beliefs. We will see that this requirement of dynamic consistency is a stronger form of rationality assumption than we have made thus far.

In order to impose dynamic consistency upon our solution concepts, we learn how to decompose an extensive-form game into a subgame and its complement, viz., its difference game. Then we learn how to restrict extensive-form game strategies to the subgame and to the difference game, as well as the reverse: how to compose a new extensive-form game strategy from a subgame strategy and a difference-game strategy.
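As a concrete (and purely illustrative) sketch of these operations, suppose a pure strategy is stored as a Python dictionary mapping information-set labels to actions; the labels h0, h1, h2 below are hypothetical, and a behavior strategy would map each information set to a probability distribution instead, with the same restriction and composition operations. Restricting to the subgame or to the difference game then just splits the dictionary over the subgame's information sets, and composition merges the two pieces back together:

    def restrict_to_subgame(strategy, subgame_infosets):
        """Restriction to the subgame: keep only its information sets."""
        return {h: a for h, a in strategy.items() if h in subgame_infosets}

    def restrict_to_difference(strategy, subgame_infosets):
        """Restriction to the difference game: the complementary information sets."""
        return {h: a for h, a in strategy.items() if h not in subgame_infosets}

    def compose(subgame_strategy, difference_strategy):
        """Recompose a full strategy; the two domains are disjoint,
        so the union is well defined."""
        return {**subgame_strategy, **difference_strategy}

    # Hypothetical example: the subgame contains information sets h1 and h2.
    full = {"h0": "Left", "h1": "Up", "h2": "Down"}
    subgame = {"h1", "h2"}
    assert compose(restrict_to_subgame(full, subgame),
                   restrict_to_difference(full, subgame)) == full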

We define the solution concept of subgame-perfect equilibrium as a refinement of Nash equilibrium that imposes the desired dynamic consistency. A subgame-perfect equilibrium of an extensive-form game is a behavior-strategy profile whose restriction to each subgame is a Nash equilibrium of that subgame.
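As a purely illustrative check of this definition, the Python sketch below verifies whether a pure-strategy profile of a small perfect-information game is subgame perfect. The tree encoding is an assumption, not the chapter's notation: a terminal node is a tuple of payoffs (player 0's first), and a decision node is a dict recording a hypothetical name, the player who moves, and the child reached by each action. With perfect information every decision node roots a subgame, so the check visits each such subgame and asks whether any player could gain by deviating within it.

    from itertools import product

    def decision_nodes(node, player=None):
        """All decision nodes in the subtree rooted at node, optionally only
        those at which the given player moves."""
        if isinstance(node, tuple):            # terminal node: a payoff tuple
            return []
        own = [node] if player is None or node["player"] == player else []
        return own + [d for child in node["moves"].values()
                      for d in decision_nodes(child, player)]

    def play(node, profile):
        """Follow the profile from node down the tree; return the terminal payoffs."""
        while not isinstance(node, tuple):
            node = node["moves"][profile[node["name"]]]
        return node

    def is_subgame_perfect(root, profile, n_players):
        """Check the definition directly: the restriction of the profile to every
        subgame must be a Nash equilibrium of that subgame."""
        for subgame in decision_nodes(root):   # with perfect information, every
            for player in range(n_players):    # decision node roots a subgame
                own_nodes = decision_nodes(subgame, player)
                base = play(subgame, profile)[player]
                # Enumerate every alternative strategy of this player in the subgame.
                for actions in product(*(node["moves"] for node in own_nodes)):
                    deviation = dict(profile)
                    deviation.update({n["name"]: a for n, a in zip(own_nodes, actions)})
                    if play(subgame, deviation)[player] > base:
                        return False
        return True

    # Entry game: player 0 stays Out (payoffs (0, 2)) or goes In, after which
    # player 1 chooses to Accommodate (1, 1) or Fight (-1, -1).
    game = {"name": "x0", "player": 0, "moves": {
        "Out": (0, 2),
        "In": {"name": "x1", "player": 1,
               "moves": {"Accommodate": (1, 1), "Fight": (-1, -1)}},
    }}

    print(is_subgame_perfect(game, {"x0": "In", "x1": "Accommodate"}, 2))  # True
    print(is_subgame_perfect(game, {"x0": "Out", "x1": "Fight"}, 2))       # False

The second profile, (Out, Fight), is a Nash equilibrium of the whole game, but its restriction to the subgame at x1 is not a Nash equilibrium of that subgame, since Fight is not a best reply there; this is exactly the kind of dynamically inconsistent plan the refinement rules out.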

An extensive-form game need not have a pure-strategy Nash equilibrium. However, we then consider the special case of extensive-form games of perfect information, i.e., games in which every information set contains exactly one decision node. We use Zermelo's backward-induction algorithm to prove that every finite game of perfect information has a pure-strategy subgame-perfect equilibrium. This algorithm also provides a useful technique for finding the equilibria of actual games.
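To make the algorithm concrete, here is a minimal Python sketch (an illustration under the same hypothetical tree encoding as in the sketch above, not the chapter's notation). The procedure works backward from the terminal nodes: at each decision node it computes the continuation payoff of every action and has the mover choose a best one, breaking ties in favor of the first maximizer. The returned plan prescribes an action at every decision node, on and off the induced path, which is what makes the resulting profile subgame perfect rather than merely optimal along the play that actually occurs.

    def backward_induction(node):
        """Return (plan, payoffs): an action for every decision node in the
        subtree, and the payoff vector reached when the plan is followed."""
        if isinstance(node, tuple):              # terminal node: a payoff tuple
            return {}, node
        plan, best_action, best_payoffs = {}, None, None
        for action, child in node["moves"].items():
            sub_plan, payoffs = backward_induction(child)
            plan.update(sub_plan)                # keep choices off the path too
            if best_payoffs is None or payoffs[node["player"]] > best_payoffs[node["player"]]:
                best_action, best_payoffs = action, payoffs
        plan[node["name"]] = best_action
        return plan, best_payoffs

    # Player 0 moves first; player 1 observes that move and responds.
    game = {"name": "x0", "player": 0, "moves": {
        "L": {"name": "x1", "player": 1, "moves": {"l": (2, 1), "r": (0, 0)}},
        "R": {"name": "x2", "player": 1, "moves": {"l": (1, 2), "r": (3, 0)}},
    }}

    plan, payoffs = backward_induction(game)
    print(plan)      # {'x1': 'l', 'x2': 'l', 'x0': 'L'}
    print(payoffs)   # (2, 1)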


