Perfect Bayesian equilibrium

Perfect Bayesian Equilibrium
A solution concept in game theory
Relationships
  Subset of: subgame perfect equilibrium, Bayesian Nash equilibrium
Significance
  Proposed by: Cho and Kreps
  Used for: dynamic Bayesian games
  Example: signaling game

In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept for dynamic games with incomplete information (sequential Bayesian games). A PBE is a refinement of both Bayesian Nash equilibrium (BNE) and subgame perfect equilibrium (SPE). A PBE has two components, strategies and beliefs:

  1. The strategy of a player at a given information set specifies which action he takes there (possibly as a function of his type), and it may be mixed.
  2. The belief of a player at a given information set is a probability distribution over the nodes of that information set; in a signaling game, it is the receiver's probability distribution over the possible types of the sender.

The strategies and beliefs should satisfy the following conditions:

  1. Sequential rationality: at every information set, the player who moves there chooses a strategy that maximizes his expected payoff, given his beliefs at that information set and the other players' strategies.
  2. Consistency: at every information set that is reached with positive probability under the strategies, the beliefs are derived from the strategies by Bayes' rule; at information sets that are reached with probability zero, the beliefs may be arbitrary.

Every PBE is both a SPE and a BNE, but the converse is not necessarily true.
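
These two requirements can be written compactly as follows (a formal sketch; the symbols sigma for the strategy profile, mu for the belief system, h for an information set, x for a node and u_i for player i's payoff are notation introduced here for illustration, not taken from the original):

```latex
% (sigma, mu): a strategy profile together with a belief system.
\begin{align*}
&\textbf{Sequential rationality: } \text{at every information set } h \text{ of player } i,\\
&\qquad \sigma_i \in \arg\max_{\sigma_i'} \; \mathbb{E}_{\mu(\cdot \mid h),\, \sigma_{-i}}\big[\, u_i \mid h,\ \sigma_i' \,\big];\\[4pt]
&\textbf{Consistency: } \text{at every information set } h \text{ reached with positive probability under } \sigma,\\
&\qquad \mu(x \mid h) = \frac{\Pr_{\sigma}(x)}{\Pr_{\sigma}(h)} \quad \text{for every node } x \in h,\\
&\qquad \text{while beliefs at information sets off the equilibrium path are unrestricted.}
\end{align*}
```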

PBE in signaling games

A signaling game is the simplest kind of dynamic Bayesian game. There are two players: one of them (the "receiver") has only one possible type, and the other (the "sender") has several possible types. The sender plays first, then the receiver.

To calculate a PBE in a signaling game, we consider two kinds of equilibria: a separating equilibrium and a pooling equilibrium. In a separating equilibrium each sender-type plays a different action, so the sender's action gives information to the receiver; in a pooling equilibrium, all sender-types play the same action, so the sender's action gives no information to the receiver.

Gift game 1

Consider the following gift game:[1]

  1. The sender has two possible types: "friend" (with prior probability p) and "enemy" (with prior probability 1-p). Each type can either give a gift or not give.
  2. If a gift is given, the receiver can either accept or reject it; if no gift is given, the receiver has no move.
  3. The sender's payoff is 0 if he does not give, 1 if his gift is accepted, and -1 if his gift is rejected.
  4. The receiver's payoff is 0 if she rejects the gift (or if no gift is given); if she accepts, her payoff is 1 when the sender is a friend and -1 when the sender is an enemy.

To analyze PBE in this game, let's look first at the following potential separating equilibria:

  1. The sender's strategy is: a friend gives and an enemy does not give. The receiver's beliefs are updated accordingly: if she receives a gift she knows that the sender is a friend, otherwise she knows that the sender is an enemy. The receiver's strategy is: accept. This is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase his payoff from 0 to 1 by sending a gift.
  2. The sender's strategy is: a friend does not give and an enemy gives. The receiver's beliefs are updated accordingly: if she receives a gift she knows that the sender is an enemy, otherwise she knows that the sender is a friend. The receiver's strategy is: reject. Again, this is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase his payoff from -1 to 0 by not sending a gift.

We conclude that in this game, there is no separating equilibrium.

Now, let's look at the following potential pooling equilibria:

  1. The sender's strategy is: always give. The receiver's beliefs are not updated: she still holds the a-priori belief that the sender is a friend with probability p and an enemy with probability 1-p. Her expected payoff from accepting is p*1 + (1-p)*(-1) = 2p-1, so she accepts if-and-only-if p ≥ 1/2. So this is a PBE (a best response for both sender and receiver) if-and-only-if the a-priori probability p of being a friend satisfies p ≥ 1/2.
  2. The sender's strategy is: never give. Here, the receiver's belief upon receiving a gift can be arbitrary, since receiving a gift is an event with probability 0, so Bayes' rule does not apply. For example, suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability 0.2 (or any other number less than 0.5). The receiver's strategy is then: reject. This is a PBE regardless of the a-priori probability p. Both the sender and the receiver get an expected payoff of 0, and neither of them can improve it by deviating.

To summarize: in gift game 1 there is no separating equilibrium. If p ≥ 1/2, there is a pooling PBE in which both sender types give and the receiver accepts; for every p, there is a pooling PBE in which both sender types do not give and the receiver rejects, supported by an off-path belief that a gift-giver is a friend with probability less than 1/2.
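
The case analysis above can also be verified mechanically. The sketch below (the function names and the sample off-path belief 0.2 are illustrative, not part of the original) encodes the gift-game-1 payoffs and re-checks the separating and pooling candidates:

```python
# Gift game 1: the sender gets 0 if no gift, +1 if the gift is accepted, -1 if rejected;
# the receiver gets +1 for accepting a friend's gift, -1 for accepting an enemy's gift,
# and 0 otherwise.

def receiver_best_response(belief_friend):
    """Accept iff the expected payoff of accepting, 2q - 1, is at least 0."""
    return "accept" if 2 * belief_friend - 1 >= 0 else "reject"

def sender_payoff(gives, receiver_action):
    if not gives:
        return 0
    return 1 if receiver_action == "accept" else -1

def check_separating(friend_gives):
    """Is there a PBE in which exactly one type gives? A gift then reveals the giver's type."""
    belief_on_gift = 1.0 if friend_gives else 0.0
    pay_if_give = sender_payoff(True, receiver_best_response(belief_on_gift))
    # The giving type must prefer giving (pay_if_give >= 0) and the silent type must
    # prefer silence (0 >= pay_if_give); since pay_if_give is +1 or -1, both never hold.
    return pay_if_give >= 0 and 0 >= pay_if_give

def check_pooling_give(p):
    """Is there a PBE in which both types give? The on-path belief equals the prior p."""
    action = receiver_best_response(p)
    # Each type must prefer giving to the payoff 0 of not giving.
    return sender_payoff(True, action) >= 0

def check_pooling_not_give(off_path_belief):
    """Is there a PBE in which both types stay silent, given this off-path belief?
    (Bayes' rule puts no restriction on the off-path belief.)"""
    action = receiver_best_response(off_path_belief)
    # Each type must prefer 0 (not giving) to the payoff of a deviating gift.
    return sender_payoff(True, action) <= 0

if __name__ == "__main__":
    print([check_separating(fg) for fg in (True, False)])    # [False, False]
    print([check_pooling_give(p) for p in (0.3, 0.5, 0.7)])  # [False, True, True]
    print(check_pooling_not_give(0.2))                       # True
```

Running the script reproduces the conclusions above: no separating equilibrium, "always give" is an equilibrium only when p ≥ 1/2, and "never give" is an equilibrium for any prior once the off-path belief is pessimistic enough.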

Gift game 2

In the following example, the set of PBEs is strictly smaller than the set of SPEs and BNEs. It is a variant of the above gift game, with the following change to the receiver's utility: when the sender is an enemy, the receiver now strictly prefers accepting the gift to rejecting it (for example, accepting an enemy's gift now gives her 0 while rejecting it gives her -1); her payoffs when the sender is a friend are unchanged.

Note that in this variant, accepting is a dominant strategy for the receiver.

As in gift game 1, there is no separating equilibrium. Let's look at the following potential pooling equilibria:

  1. The sender's strategy is: always give. The receiver's beliefs are not updated: she still holds the a-priori belief that the sender is a friend with probability p and an enemy with probability 1-p. Her payoff from accepting is always higher than from rejecting, so she accepts (regardless of the value of p). This is a PBE: it is a best response for both sender and receiver.
  2. The sender's strategy is: never give. Suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability q, where q is any number in [0,1]. Regardless of q, the receiver's optimal strategy is: accept. This is NOT a PBE, since the sender can improve his payoff from 0 to 1 by giving a gift.
  3. The sender's strategy is: never give, and the receiver's strategy is: reject. This is NOT a PBE, since for any belief of the receiver, rejecting is not a best response for her.

Note that option 3 is a Nash equilibrium! If we ignore beliefs, then rejecting can be considered a best response for the receiver, since it does not affect her payoff (no gift is given anyway). Moreover, option 3 is even a SPE, since the only subgame here is the entire game! Such implausible equilibria can also arise in games with complete information, where they are eliminated by requiring subgame perfection. However, Bayesian games often contain non-singleton information sets, and since subgames must contain complete information sets, sometimes the only subgame is the entire game, so every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can leave implausible equilibria in place.

To summarize: in this variant of the gift game, there are two SPEs: either the sender always gives and the receiver always accepts, or the sender never gives and the receiver always rejects. Of these, only the first one is a PBE; the other is not a PBE, since it cannot be supported by any belief system.
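
The belief-based argument can also be checked numerically. The sketch below uses the receiver payoffs described above for this variant (accepting gives 1 against a friend and 0 against an enemy; rejecting gives 0 and -1 respectively); the helper names are illustrative:

```python
# Gift game 2: for every belief q that a gift-giver is a friend, accepting strictly
# beats rejecting, so no belief can make the "never give + reject" SPE sequentially rational.
def accept_payoff(q):
    return q * 1 + (1 - q) * 0       # friend: +1, enemy: 0

def reject_payoff(q):
    return q * 0 + (1 - q) * (-1)    # friend: 0, enemy: -1

assert all(accept_payoff(i / 100) > reject_payoff(i / 100) for i in range(101))
print("Rejecting is never a best response, so 'never give + reject' is not a PBE.")
```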

More examples

For further examples, see signaling game#Examples and [2].

PBE in multi-stage games

A multi-stage game is a sequence of simultaneous games played one after the other. These games may be identical (as in repeated games) or different.

Repeated public-good game

            Build            Don't
Build       1-C1, 1-C2       1-C1, 1
Don't       1, 1-C2          0, 0
Public good game (payoffs listed as: row player, column player)

The following game[3]:section 6.2 is a simple representation of the free-rider problem. There are two players, each of whom can either build a public good or not build it. Each player gains 1 if the public good is built and 0 if not; in addition, if player i builds the public good, he has to pay a cost of Ci. The costs are private information: each player knows his own cost but not the other's cost. It is only known that each cost is drawn independently at random from some probability distribution. This makes the game a Bayesian game.
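
The stage-game payoff just described can be encoded directly (a minimal sketch; the function and argument names are illustrative):

```python
def stage_payoff(i_builds: bool, j_builds: bool, cost_i: float) -> float:
    """Player i's payoff in the one-stage public good game:
    1 if the good is built by anyone, minus i's own cost Ci if i built it."""
    benefit = 1.0 if (i_builds or j_builds) else 0.0
    return benefit - (cost_i if i_builds else 0.0)
```

The four entries of the payoff table above correspond to the four combinations of i_builds and j_builds.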

In the one-stage game, each player builds if-and-only-if his cost is smaller than his expected gain from building. The expected gain from building is exactly 1 times the probability that the other player does NOT build. In equilibrium, for every player i there is a threshold cost Ci*, such that the player builds if-and-only-if his cost is less than Ci*. This threshold cost can be calculated from the probability distribution of the players' costs. For example, if the costs are distributed uniformly on [0,2], then there is a symmetric equilibrium in which the common threshold cost of both players, denoted c*, is 2/3. This means that a player whose cost is between 2/3 and 1 will not build, even though his cost is below the benefit, because of the possibility that the other player will build (in which case his own contribution would be wasted).
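
For the uniform-[0,2] example, the value 2/3 can be verified directly from the indifference condition of the threshold type (a short derivation; the symbols c* for the common threshold and Cj for the opponent's cost are the notation used above):

```latex
% Indifference of the threshold type: his cost c* equals the expected gain from building,
% which is 1 times the probability that the other player does not build (cost uniform on [0,2]).
\begin{align*}
c^* = 1 \cdot \Pr(C_j > c^*) = 1 - \frac{c^*}{2}
\quad\Longrightarrow\quad \tfrac{3}{2}\, c^* = 1
\quad\Longrightarrow\quad c^* = \tfrac{2}{3}.
\end{align*}
```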

Now, suppose that this game is repeated twice.[3]:section 8.2.3 The two plays are independent, i.e., each day the players decide simultaneously whether to build a public good on that day, get a payoff of 1 if the good is built on that day, and pay their cost if they built on that day. The only connection between the two days is that, by playing on the first day, the players may reveal some information about their costs, and this information may affect the play on the second day.

We are looking for a symmetric PBE. Denote by c1* the threshold cost of both players in day 1 (so in day 1, each player builds if-and-only-if his cost is at most c1*). To calculate c1*, we work backwards and analyze the players' actions in day 2. Their actions depend on the history (= the two actions in day 1), and there are three options:

  1. In day 1, no player built. So now both players know that their opponent's cost is above c1*. They update their beliefs accordingly and conclude that there is a smaller chance that their opponent will build in day 2. Therefore, they increase their threshold cost, and the threshold cost in day 2 is some c2* > c1* (see the derivation after this list for the uniform case).
  2. In day 1, both players built. So now both players know that their opponent's cost is below c1*. They update their beliefs accordingly and conclude that there is a larger chance that their opponent will build in day 2. Therefore, they decrease their threshold cost, and the threshold cost in day 2 is below c1*.
  3. In day 1, exactly one player built; suppose it is player 1. So now it is known that the cost of player 1 is below c1* and the cost of player 2 is above c1*. There is an equilibrium in which the actions in day 2 are identical to the actions in day 1: player 1 builds and player 2 does not build.
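
For the uniform-[0,2] example, the day-2 threshold after a day in which nobody built (case 1 above) can be computed explicitly. The closed form below is derived here under that distributional assumption, writing c1* for the day-1 threshold and c2* for the day-2 threshold; after no building, each player believes the opponent's cost is uniform on (c1*, 2]:

```latex
% Day 2 is the last day, so day-2 play is a static equilibrium under the updated beliefs.
% The day-2 threshold type is indifferent: his cost equals the probability that the
% opponent (whose cost is now known to exceed c_1^*) does not build.
\begin{align*}
c_2^* = \Pr\big(C_j > c_2^* \,\big|\, C_j > c_1^*\big) = \frac{2 - c_2^*}{2 - c_1^*}
\quad\Longrightarrow\quad c_2^* = \frac{2}{3 - c_1^*} \; > \; c_1^*.
\end{align*}
```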

It is possible to calculate the expected payoff of the "threshold player" (a player with cost exactly c1*) in each of these situations. Since the threshold player should be indifferent between building and not building, it is possible to calculate the day-1 threshold cost c1*. It turns out that this threshold is lower than c* (= 2/3 in the uniform example), the threshold of the one-stage game. This means that, in the two-stage game, the players are less willing to build than in the one-stage game. Intuitively, the reason is that, when a player does not build on the first day, he makes the other player believe his cost is high, and this makes the other player more willing to build on the second day.

Jump-bidding

In an open-outcry English auction, the bidders can raise the current price in small steps (e.g. $1 each time). However, often there is jump bidding: some bidders raise the current price by much more than the minimal increment. One explanation for this is that it serves as a signal to the other bidders. There is a PBE in which each bidder jumps if-and-only-if his value is above a certain threshold. See Jump bidding#signaling.

References

  1. James Peck. "Perfect Bayesian Equilibrium" (PDF). Ohio State University. Retrieved 2 September 2016.
  2. Zack Grossman. "Perfect Bayesian Equilibrium" (PDF). University of California. Retrieved 2 September 2016.
  3. Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge, Massachusetts: MIT Press. ISBN 9780262061414.