game theory

Finitely Repeated Game
Given a finite game \(G=(N,A,u)\), define the finitely repeated game \(G^T\), \(T\geq 1\), as follows. In every time period \(t=1,2,...,T\), player \(i\) chooses an action \(a_i^t\in A_i\); the vector \(a^t\) is defined as having \(i\)-th coordinate \(a_i^t\). The history of the game up to round \(t\) is \(h^t=(a^1,a^2,...,a^{t-1})\), so the set of all possible history vectors \(h^t\) at round \(t\) is \(A^{t-1}\), the Cartesian product of \(A\) with itself \(t-1\) times. Player \(i\)'s payoff is the average of his payoffs in the individual rounds, giving a total payoff of \(\frac{1}{T}\sum_{t=1}^T u_i(a^t)\). A pure strategy for player \(i\) is defined as a function \(s_i:H^T\rightarrow A_i\), where \(H^T=\bigcup_{t=0}^{T-1}A^t\). Similarly, a mixed strategy for player \(i\) is defined as a function \(\alpha_i:H^T\rightarrow \Delta_i\). The idea of this definition is that, given any sequence of past moves, the strategy says what to do next. Player \(i\) thus chooses his action \(a_i^t\) by following a strategy \(s_i\) applied to the history of the game: \(a_i^t=s_i(h^t)\).
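As a concrete illustration, here is a minimal Python sketch (not from the source; the helper names are my own) of this definition: each strategy is a function of the history \(h^t\), and the total payoff is the average of the stage payoffs.

```python
# Sketch of a finitely repeated game: strategies map a history (a tuple
# of past action profiles) to an action, and each player's total payoff
# is the average of the stage payoffs.

def play_repeated_game(u, strategies, T):
    """u: maps an action profile a^t to a payoff tuple.
    strategies: one function per player, each taking the history h^t
    and returning an action a_i^t.  Returns the average-payoff vector."""
    history = ()                       # h^1 is the empty history
    totals = [0.0] * len(strategies)
    for _ in range(T):
        profile = tuple(s(history) for s in strategies)   # a^t
        for i, p in enumerate(u(profile)):
            totals[i] += p
        history += (profile,)          # h^{t+1} = (a^1, ..., a^t)
    return [x / T for x in totals]     # (1/T) * sum_t u_i(a^t)

# Example: a Prisoners' Dilemma stage game, both players always defecting.
pd = {('C','C'): (1,1), ('C','D'): (-1,2), ('D','C'): (2,-1), ('D','D'): (0,0)}
always_defect = lambda h: 'D'
payoffs = play_repeated_game(lambda a: pd[a], [always_defect, always_defect], 5)
```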

Infinitely Repeated Game
Given the game \(G\) as above, one could attempt to define an infinitely repeated game \(G^{\infty}\) as follows. A pure strategy for player \(i\) is defined as a function \(s_i:H^{*}\rightarrow A_i\), where \(H^{*}=\bigcup_{t=0}^{\infty}A^t\) . Similarly, a mixed strategy is defined as a function \(\alpha_i:H^{*}\rightarrow \Delta_i\).

It might seem natural to define the payoff function in \(G^{\infty}\) as \(\lim_{T\rightarrow \infty}\frac{1}{T}\sum_{t=1}^T u_i(a^t)\). However, this limit might not exist, so the game \(G^{\infty}\) is not defined in this generality. Instead, introducing a discount parameter \(\lambda\in (0,1)\), an infinitely repeated game \(G^{\infty}(\lambda)\) is defined as having payoff function for player \(i\) given by \((1-\lambda)\sum_{t=1}^{\infty}u_i(a^t)\lambda^{t-1}\). Since \(\sum_{t=1}^{\infty}\lambda^{t-1}=\frac{1}{1-\lambda}\), the total payoff is a weighted average of the payoffs received in the individual rounds, with weights summing to 1.
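A short sketch (mine, not from the source) of this discounted payoff, truncated at a horizon long enough that the geometric tail is negligible; the weight-normalization property means a constant payoff stream of \(c\) evaluates to exactly \(c\):

```python
# Discounted payoff (1-λ) Σ_t u_i(a^t) λ^(t-1), computed on a finite
# prefix of the stage-payoff stream.

def discounted_payoff(stage_payoffs, lam):
    """stage_payoffs: sequence u_i(a^1), u_i(a^2), ... (finite prefix)."""
    # enumerate starts t at 0, so lam**t matches λ^(t-1) with t = 1, 2, ...
    return (1 - lam) * sum(p * lam**t for t, p in enumerate(stage_payoffs))

# With a constant stage payoff, the weights sum to 1, so the total payoff
# equals that constant (up to truncation error in the geometric tail).
lam = 0.9
total = discounted_payoff([3.0] * 2000, lam)
```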

One way to understand the definition is through the time value of money. An alternative is to interpret \(\lambda\) as the probability that there will be a next round, so that the total payoff is the expected amount of money received. If \(\lambda\) is close to 1, we say that the player is patient; if \(\lambda\) is close to 0, we say that the player is myopic.
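The continuation-probability interpretation can be checked numerically. In this sketch (an illustration of mine, not from the source), the game ends after each round with probability \(1-\lambda\); the simulated expected total of a constant stream matches \(\sum_{t\geq 1}\lambda^{t-1}\), and multiplying by \((1-\lambda)\) recovers the normalized payoff.

```python
# Monte Carlo check: if λ is the probability of playing another round,
# the expected (unnormalized) total of a constant payoff stream equals
# Σ_t λ^(t-1) = 1/(1-λ).

import random

def expected_total_payoff(stage_payoff, lam, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        while True:
            total += stage_payoff       # the current round is played
            if rng.random() >= lam:     # game ends with probability 1-λ
                break
    return total / trials

lam = 0.5
mc = expected_total_payoff(1.0, lam)                # ≈ 1/(1-λ) = 2
analytic = sum(lam**(t - 1) for t in range(1, 200))  # Σ λ^(t-1)
```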

The following theorem shows that no matter what the value of \(\lambda\), there always exists a Nash Equilibrium for \(G^{\infty}(\lambda)\). In fact, equilibria can be naturally inherited from the original game \(G\). Given a mixed strategy profile \(\sigma\) for \(G\), define a mixed strategy profile \(\alpha^{\sigma}\) for \(G^{\infty}(\lambda)\) by \(\alpha_i^{\sigma}(h)=\sigma_i\) for all players \(i\) and all history vectors \(h\).

Theorem: If \(\sigma\) is a Nash Equilibrium for \(G\), then \(\alpha^\sigma\) is a Nash Equilibrium for \(G^{\infty}(\lambda)\).

Proof: Let \(U\) denote the payoff function for the game \(G^{\infty}(\lambda)\). Given any player \(i\), it must be shown that \(U_i(\alpha^{\sigma})\geq U_i(\alpha_i^{\prime},\alpha^{\sigma}_{-i})\) for every mixed strategy \(\alpha_i^{\prime}\) for player \(i\). It follows from the definition of \(\alpha^{\sigma}\) that when the game \(G^{\infty}(\lambda)\) is played following this strategy profile, the mixed action profile in every round \(t\) is \(\sigma\). Therefore, \(U_i(\alpha^{\sigma})=(1-\lambda)\sum_{t=1}^\infty u_i(\sigma)\lambda^{t-1}=u_i(\sigma)\). Similarly, when the game is played following the strategy \((\alpha_i^{\prime},\alpha^{\sigma}_{-i})\), the mixed action profile in round \(t\) is \((\alpha_i^{\prime}(h^{t}),\sigma_{-i})\). Hence \(U_i(\alpha_i^{\prime},\alpha^{\sigma}_{-i})=(1-\lambda)\sum_{t=1}^\infty u_i(\alpha_i^{\prime}(h^{t}),\sigma_{-i})\lambda^{t-1}\). Since \(\sigma\) is a Nash equilibrium for \(G\), \(u_i(\sigma)\geq u_i(\alpha_i^{\prime}(h^{t}),\sigma_{-i})\) for every \(t\). Multiplying both sides of this inequality by \(\lambda^{t-1}\), summing over all \(t\), and comparing with the previous formulas gives \(U_i(\alpha^{\sigma})\geq U_i(\alpha_i^{\prime},\alpha^{\sigma}_{-i})\), thus proving that \(\alpha^{\sigma}\) is a Nash equilibrium for \(G^{\infty}(\lambda)\). Q.E.D.
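The key step in the proof is that against a stationary opponent, the rounds decouple, so no deviation path can beat a myopic best response in every round. This can be sanity-checked numerically for a specific game; the sketch below (mine, not from the source) uses the Prisoners' Dilemma payoffs of Fig. 1, whose one-shot equilibrium is (D, D), and enumerates all pure deviation paths for player 1 up to a short horizon.

```python
# Numerical check: against an opponent who plays the one-shot equilibrium
# action D in every round, no pure deviation path for player 1 beats
# repeating D, because each round's payoff depends only on that round.

import itertools

pd = {('C','C'): (1,1), ('C','D'): (-1,2), ('D','C'): (2,-1), ('D','D'): (0,0)}
lam = 0.9

def discounted(payoffs, lam):
    return (1 - lam) * sum(p * lam**t for t, p in enumerate(payoffs))

horizon = 8   # all pure action paths of length 8 for player 1
equilibrium = discounted([pd[('D', 'D')][0]] * horizon, lam)
best_deviation = max(
    discounted([pd[(a, 'D')][0] for a in path], lam)
    for path in itertools.product('CD', repeat=horizon)
)
```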

Repeated Prisoners' Dilemma
{{Payoff matrix | Name = Fig. 1: A [[payoff matrix]] for Prisoners' Dilemma
                | 2L = Cooperate     | 2R = Defect   |
1U = Cooperate     | UL = 1,1  | UR = -1,2|
1D = Defect   | DL = 2,-1 | DR = 0,0}}
Consider two strategies.

Tit for Tat: On the first round, both players play C. On round \(t+1\), player \(i\) plays \(a^t_{-i}\), copying the opponent's previous action.

Trigger: On the first round, both players play C. On round \(t+1\), player \(i\) plays C if and only if \(a_{-i}^1=a_{-i}^2=...=a_{-i}^t=C\).
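Both strategies can be written directly as functions of the opponent's past actions. The following Python sketch is mine (the helper names are not from the source); as expected, when the two strategies play each other, both cooperate in every round.

```python
# The two strategies as functions of the opponent's action history.

def tit_for_tat(opp_history):
    # Play C first, then copy the opponent's last action.
    return 'C' if not opp_history else opp_history[-1]

def trigger(opp_history):
    # Play C as long as the opponent has always played C; otherwise D forever.
    return 'C' if all(a == 'C' for a in opp_history) else 'D'

def play(s1, s2, T):
    h1, h2 = [], []                  # each player's own action history
    for _ in range(T):
        a1, a2 = s1(h2), s2(h1)      # each player sees the opponent's history
        h1.append(a1)
        h2.append(a2)
    return h1, h2

h1, h2 = play(tit_for_tat, trigger, 5)   # mutual cooperation throughout
```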

Both of the above strategy profiles (both players playing Tit for Tat, or both playing Trigger) are \(\epsilon\)-Nash equilibria of the finitely repeated game \(G^T\) whenever \(T\geq \frac{1}{\epsilon}\). The reason is that a player can only earn extra profit by defecting on the final round, which increases his average payoff by exactly \(\frac{1}{T}\).
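The \(\frac{1}{T}\) gain can be verified directly with the Fig. 1 payoffs: a Trigger opponent still cooperates on round \(T\) (the punishment would only start afterwards), so defecting there swaps a stage payoff of 1 for 2. A sketch (mine, not from the source):

```python
# Last-round deviation against Trigger: defecting only on round T swaps
# a stage payoff of 1 (C,C) for 2 (D,C), raising the average by 1/T.

def average_payoff(actions1, actions2, pd):
    T = len(actions1)
    return sum(pd[(a, b)][0] for a, b in zip(actions1, actions2)) / T

pd = {('C','C'): (1,1), ('C','D'): (-1,2), ('D','C'): (2,-1), ('D','D'): (0,0)}
T = 10
comply  = average_payoff(['C'] * T, ['C'] * T, pd)               # all-cooperate
deviate = average_payoff(['C'] * (T - 1) + ['D'], ['C'] * T, pd) # defect last
gain = deviate - comply                                          # exactly 1/T
```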

In the case of an infinitely repeated game \(G^{\infty}(\lambda)\), it can be proved that <Trigger, Trigger> is a Nash Equilibrium as long as the players are sufficiently patient.

Theorem: If \(\lambda\geq \frac{1}{2}\), the strategy profile <Trigger, Trigger> is a Nash Equilibrium.

Proof: If both players play according to Trigger, they cooperate in every round; since the discount weights sum to 1, each player's payoff is 1.

Let \(k\) be the first round on which player \(i\) does not cooperate. Because the other player is playing Trigger, that player will defect from round \(k+1\) onward, so the best player \(i\) can then do is also defect for the rest of the game. Player \(i\)'s payoff is therefore at most

\[(1-\lambda)\left[\sum_{t=1}^{k-1}\lambda^{t-1}+2\lambda^{k-1}\right]=\left(1-\lambda^{k-1}\right)+2(1-\lambda)\lambda^{k-1}=1+\lambda^{k-1}(1-2\lambda).\]

When \(\lambda\geq \frac{1}{2}\), this is at most 1, so neither player can earn a positive profit by deviating from Trigger. Q.E.D.
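This deviation payoff can be checked numerically. The deviator's stage payoffs are 1 per round before the deviation round \(k\), 2 at round \(k\), and 0 afterwards; their discounted sum works out to \(1+\lambda^{k-1}(1-2\lambda)\), which is at most 1 exactly when \(\lambda\geq\frac{1}{2}\). A sketch (mine, not from the source):

```python
# Compare the direct discounted sum of the deviator's stage payoffs
# (1, ..., 1, 2, 0, 0, ...) against the closed form 1 + λ^(k-1)(1-2λ).

def deviation_payoff(lam, k, horizon=2000):
    # 1 per round before round k, 2 at round k, 0 afterwards.
    stream = [1.0] * (k - 1) + [2.0] + [0.0] * (horizon - k)
    return (1 - lam) * sum(p * lam**t for t, p in enumerate(stream))

lam, k = 0.6, 4
direct = deviation_payoff(lam, k)
closed = 1 + lam**(k - 1) * (1 - 2 * lam)
```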

<Trigger, Trigger> is also a subgame perfect Nash Equilibrium.