To achieve this goal, the researcher has to be able to compute the stationary Markov-perfect equilibrium using the estimated primitives: a pair of equations that express linear decision rules for each agent as functions of that agent's continuation value function as well as parameters of preferences and state transition matrices.

3.2 Computing Equilibrium

We formulate a linear robust Markov perfect equilibrium as follows. This means that worst-case forecasts of industry output $ q_{1t} + q_{2t} $ and price $ p_t $ also differ between the two firms.

This lecture describes the concept of Markov perfect equilibrium. The law of motion for the state $ x_t $ is $ x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} $.

© Copyright 2020, Thomas J. Sargent and John Stachurski.

Here we set the robustness and volatility matrix parameters as follows. Because we have set $ \theta_1 < \theta_2 < + \infty $, we know that firm 1 fears misspecification more than firm 2.

Evidently, firm 1's output path is substantially lower when firms are robust, while firm 2's output is almost the same in the two equilibria. The value function matrix $ P_{2t} $ solves the Riccati recursion

$$
P_{2t} = \Pi_{2t} -
(\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t})'
(Q_2 + \beta B_2' {\mathcal D}_2 ( P_{2t+1}) B_2)^{-1}
(\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) +
\beta \Lambda_{2t}' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} \tag{9}
$$

As in Markov perfect equilibrium, a key insight here is that equations (6) and (8) are linear in $ F_{1t} $ and $ F_{2t} $.
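Once linear decision rules $ u_{it} = - F_i x_t $ are in hand, the law of motion above can be simulated directly. A minimal NumPy sketch (the helper name `simulate_closed_loop` and the example matrices are illustrative, not the lecture's code):

```python
import numpy as np

def simulate_closed_loop(A, B1, B2, F1, F2, x0, T):
    """Simulate x_{t+1} = A x_t + B1 u_{1t} + B2 u_{2t} under the
    linear rules u_{it} = -F_i x_t, i.e. iterate the closed-loop
    system x_{t+1} = (A - B1 F1 - B2 F2) x_t for T periods."""
    A_cl = A - B1 @ F1 - B2 @ F2
    path = [np.asarray(x0, dtype=float)]
    for _ in range(T):
        path.append(A_cl @ path[-1])
    return np.stack(path)  # shape (T + 1, n)
```

With stabilizing rules, the simulated state path contracts toward the origin period by period.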
The agents express the possibility that their baseline specification is incorrect by adding a contribution $ C v_{it} $ to the time $ t $ transition law for the state. Player $ i $ also has concerns about model misspecification.

The solver's docstring and comments, cleaned up and assembled:

    The solution computed in this routine is the f_i and P_i of the
    associated double optimal linear regulator problem.

    Parameters
    ----------
    Matrices corresponding to the MPE equations should be of size (n, n);
    C is as above but of size (n, c), where c is the size of w.
    beta : scalar(float), optional(default=1.0)
    tol : scalar(float), optional(default=1e-8)
        This is the tolerance level for convergence
    max_iter : scalar(int), optional(default=1000)
        This is the maximum number of iterations allowed

    Returns
    -------
    F1 : array_like, dtype=float, shape=(k_1, n)
    F2 : array_like, dtype=float, shape=(k_2, n)
    P1 : array_like, dtype=float, shape=(n, n)
        The steady-state solution to the associated discrete matrix
        Riccati equation
    P2 : array_like, dtype=float, shape=(n, n)

    # Unload parameters and make sure everything is a matrix
    # Multiply A, B1, B2 by sqrt(β) to enforce discounting
    # Note: INV1 may not be solved if the matrix is singular
    # Note: INV2 may not be solved if the matrix is singular
    # RMPE heterogeneous beliefs output and price
    # Total output, RMPE from player 1's belief
    # Total output, RMPE from player 2's belief

Markov perfect equilibrium is used to study settings where multiple decision makers interact non-cooperatively over time, each seeking to pursue its own objective.

Applications. The second step estimator is a simple simulated minimum distance estimator. Both industry output and price are computed under the transition dynamics associated with the baseline model; only the decision rules $ F_i $ differ across the two equilibria. Heterogeneity evolves endogenously in response to random occurrences, for example, in the investment process.
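The `tol` and `max_iter` parameters above follow a standard fixed-point convention: iterate the coupled updates until successive iterates are within `tol`, and fail after `max_iter` passes. A generic sketch of that loop (hypothetical helper, not the lecture's implementation):

```python
import numpy as np

def iterate_to_convergence(update, X0, tol=1e-8, max_iter=1000):
    """Iterate X <- update(X) until the sup-norm change is below tol,
    mirroring the tol/max_iter convention in the solver's docstring."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        X_new = update(X)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise ValueError("no convergence after max_iter iterations")
```

In the equilibrium computation the role of `update` is played by one backward pass through the stacked Riccati equations.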
To map a robust version of the duopoly model into a pair of coupled robust linear-quadratic dynamic programming problems, we work with the same state and controls as in the Markov perfect equilibrium lecture. We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $ F_{it} $ settle down to be time-invariant as $ t_1 \rightarrow +\infty $.

For example, Bhaskar and Vega-Redondo (2002) show that any subgame perfect equilibrium of the alternating move game in which players' memory is bounded and their payoffs reflect the costs of strategic complexity must coincide with a MPE.

Player $ i $ takes a sequence $ \{u_{-it}\} $ as given and chooses a sequence $ \{u_{it}\} $ to minimize and $ \{v_{it}\} $ to maximize

$$
\sum_{t=t_0}^{t_1 - 1}
\beta^{t - t_0}
\left\{
x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} +
2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} - \theta_i v_{it}' v_{it}
\right\} \tag{1}
$$

while thinking that the state evolves according to the distorted transition law (2). Player $ i $'s malevolent alter ego employs decision rules $ v_{it} = K_{it} x_t $, where $ K_{it} $ is an $ h \times n $ matrix.

This yields a Markov-perfect equilibrium that can be calculated from the fixed points of a finite sequence of low-dimensional contraction mappings; these fixed points are the equilibrium conditions of a certain reduced one-shot game. We want to compare outcomes under the baseline MPE model with those under the baseline model with the robust decision rules from the robust MPE.

Strategies that depend only on the current state are called Markovian, and a subgame perfect equilibrium in Markov strategies is called a Markov perfect equilibrium (MPE). Firm $ i $ chooses a decision rule that sets next period quantity $ \hat q_i $ as a function $ f_i $ of the current state $ (q_i, q_{-i}) $. Each firm recognizes that its output affects total output and therefore the market price. Substituting the inverse demand curve into the firms' profit functions, we recover the one-period payoffs (11) for the two firms in the duopoly model.
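Player $ i $'s criterion combines quadratic forms in $ x_t $, $ u_{it} $, $ u_{-it} $ and a penalty $ -\theta_i v_{it}' v_{it} $ on the distortions. One period's contribution can be evaluated directly; a sketch (hypothetical helper, with all matrices passed explicitly):

```python
import numpy as np

def period_loss(x, u, u_other, v, R, Q, S, W, M, theta):
    """One term of player i's min-max criterion:
    x'R x + u'Q u + u_-'S u_- + 2 x'W u + 2 u_-'M u - theta v'v."""
    return float(
        x @ R @ x + u @ Q @ u + u_other @ S @ u_other
        + 2 * x @ W @ u + 2 * u_other @ M @ u - theta * v @ v
    )
```

The minimizing player chooses $ u $ to shrink this sum while the malevolent agent chooses $ v $ to inflate it, subject to the $ \theta $ penalty.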
Weakly Undominated Equilibrium (SWUE) and Markov Trembling Hand Perfect Equilibrium (MTHPE) are alternative refinements, and these equilibrium concepts eliminate non-intuitive equilibria that arise naturally in dynamic voting games and in games in which random or deterministic sequences of moves arise.

After simulating $ x_t $ under the baseline transition dynamics and the robust decision rules $ F_i, i = 1, 2 $, we are in effect assuming that the firms' concerns about misspecification of the baseline model do not materialize. The equilibrium can be computed by backward recursion on two sets of equations. This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics.

Two firms are the only producers of a good, the demand for which is governed by a linear inverse demand function

$$
p = a_0 - a_1 (q_1 + q_2)
$$

where $ q_{-i} $ denotes the output of the firm other than $ i $. Markov perfect equilibrium has been used in analyses of industrial organization, macroeconomics, and political economy.

This lecture is based on ideas described in chapter 15 of [HS08a] and in the Markov perfect equilibrium lecture. In a robust Markov perfect equilibrium, $ \{F_{2t}, K_{2t}\} $ solves player 2's robust decision problem, taking $ \{F_{1t}\} $ as given.

The following code prepares graphs that compare market-wide output $ q_{1t} + q_{2t} $ and the price of the good. However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice. This lecture shows how a similar equilibrium concept and similar computational procedures apply when we impute concerns about robustness to both decision-makers.

*The authors are grateful to Rabah Amir, Darrell Duffie, Matthew Jackson, Jiangtao Li, Xiang …

Thus it is something of a coincidence that its output is almost the same in the two equilibria. We use the function nnash_robust to compute the robust Markov perfect equilibrium. Given the "relevant" state variables, our equilibrium is Markov-perfect Nash in investment strategies in the sense of Maskin and Tirole (1987, 1988a, 1988b).
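With a linear inverse demand curve $ p = a_0 - a_1 (q_1 + q_2) $ and a quadratic cost of adjusting output, firm $ i $'s one-period payoff can be sketched as follows (the parameter values are illustrative defaults, not necessarily the lecture's calibration):

```python
def duopoly_profit(qi, q_other, qi_next, a0=10.0, a1=2.0, gamma=12.0):
    """One-period profit for firm i in the linear-demand duopoly:
    revenue p * q_i at price p = a0 - a1 (q_i + q_{-i}), minus a
    quadratic cost gamma * (q_i' - q_i)**2 of adjusting output."""
    p = a0 - a1 * (qi + q_other)
    return p * qi - gamma * (qi_next - qi) ** 2
```

Holding output fixed (`qi_next == qi`) the adjustment cost vanishes and profit is just price times quantity.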
Agent 1's robust policy rule $ u_{1t} = - F_{1t} x_t $ has feedback matrix

$$
F_{1t}
= (Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) \tag{6}
$$

where $ P_{1t} $ solves the Riccati recursion

$$
P_{1t} = \Pi_{1t} -
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t})'
(Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) +
\beta \Lambda_{1t}' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} \tag{7}
$$

Analogous equations (8) and (9) determine $ F_{2t} $ and $ P_{2t} $ for agent 2. As described in the Markov perfect equilibrium lecture, where decision-makers have no concerns about the robustness of their decision rules, we need to solve these $ k_1 + k_2 $ equations simultaneously.

Markov perfect equilibrium is a refinement of the concept of Nash equilibrium. Alternatively, using the earlier terminology of the differential (or difference) game literature, the equilibrium is a closed-loop equilibrium. Our analysis is applied to a stylized description of the browser war between Netscape and Microsoft.

Markov perfect equilibrium is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. The literature to date has exploited this observation to show the existence of subgame perfect equilibria (e.g., Mertens and Parthasarathy 1987, 1991).

Linear Markov Perfect Equilibria with Robust Agents

The code prints the baseline robust transition matrix $ A^o $. We will focus on linear transition rules for the state vector. Every n-player, general-sum, discounted-reward stochastic game has a MPE; the role of Markov-perfect equilibria is similar to the role of subgame-perfect equilibria. The volatility matrix is

$$
C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix}
$$

so that the agents entertain transition laws that are distorted relative to the baseline model. In this lecture, we teach Markov perfect equilibrium by example.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
If at most two heterogeneous firms serve the industry, it is the unique "natural" equilibrium.

The decision rule for firm 2 is

$$
F_{2t} = (Q_2 + \beta B_2' {\mathcal D}_2( P_{2t+1} ) B_2)^{-1}
(\beta B_2' {\mathcal D}_2( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) \tag{8}
$$

4.2 Markov Chains at Equilibrium

Equilibrium means a level position: there is no more change in the distribution of $ X_t $ as we wander through the Markov chain. Assume a Markov chain in which the transition probabilities are not a function of time $ t $ or $ n $, for the continuous-time or discrete-time cases, respectively.

A robust decision rule of firm $ i $ will take the form $ u_{it} = - F_i x_t $, inducing the following closed-loop system for the evolution of $ x $ in the Markov perfect equilibrium:

$$
x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t
$$

A concrete example of a stochastic game satisfying all the conditions as stated in Section 2 was presented in Levy and McLennan (2015), which has no stationary Markov perfect equilibrium.

We want to compare the dynamics of price and output under the baseline MPE with those under the robust MPE. In this lecture, we teach Markov perfect equilibrium by example. Here

$$
\mathcal D_1(P) := P + PC (\theta_1 I - C' P C)^{-1} C' P \tag{5}
$$

We develop an algorithm for computing a symmetric Markov-perfect equilibrium quickly by finding the fixed points of a finite sequence of low-dimensional contraction mappings. Markov perfect equilibrium is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a payoff-relevant state space can be identified.

The matrix $ F_{1t} $ in the policy rule $ u_{1t} = - F_{1t} x_t $ that solves agent 1's problem satisfies equation (6). Here in all cases $ t = t_0, \ldots, t_1 - 1 $, and the terminal conditions are $ P_{it_1} = 0 $.
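The operator $ \mathcal D_1 $ in (5) translates directly into code. A sketch (hypothetical helper name; `np.linalg.solve` replaces the explicit inverse):

```python
import numpy as np

def D_op(P, C, theta):
    """Robustness-adjusted operator from equation (5):
    D(P) = P + P C (theta I - C' P C)^{-1} C' P."""
    h = C.shape[1]
    CPC = C.T @ P @ C
    return P + P @ C @ np.linalg.solve(theta * np.eye(h) - CPC, C.T @ P)
```

As $ \theta \rightarrow +\infty $ the correction term vanishes and $ \mathcal D(P) \rightarrow P $, recovering the non-robust Riccati updates.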
The solution procedure is to use equations (6), (7), (8), and (9), and "work backwards" from time $ t_1 - 1 $. Since we're working backwards, $ P_{1t+1} $ and $ P_{2t+1} $ are taken as given at each stage, and $ P_{2t} $ solves the analogous Riccati recursion (9) for player 2.

Bajari et al. (2007) apply the Hotz and Miller (1993) inversion to estimate the policy functions and the law of motion for the observable state variables, under the assumption that behavior is consistent with Markov perfect equilibrium. This, in turn, requires that an equilibrium exists.

Agent $ i $ thinks the state evolves according to

$$
x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \tag{2}
$$

where $ v_{it} $ is a possibly history-dependent vector of distortions to the dynamics of the state that agent $ i $ uses to represent misspecification of the original model. For agent $ i $ the maximizing or worst-case shock is $ v_{it} = K_{it} x_t $. Both firms fear that the baseline specification of the state transition dynamics is incorrect; we convert each firm's problem into a robustness version by adding the maximization operator over $ v_{it} $.

Created using Jupinx, hosted with AWS.

This procedure was developed by the Russian mathematician Andrei A. Markov early in the twentieth century. A Markov perfect equilibrium is an equilibrium concept in game theory. The equilibrium concept used is Markov perfect equilibrium (MPE), where the set of states is all possible coalition structures.

We can see that the results are consistent across the two functions. Under the dynamics associated with the baseline model, the price path is higher with the robust decision rules than with the ordinary Markov perfect equilibrium decision rules. We will focus on settings with two players, quadratic payoff functions, and linear transition rules for the state vector. From $ \{x_t\} $ paths generated by each of these transition laws, we pull off the associated price and total output sequences.

Consider the duopoly model with the parameter values given above. From these, we computed the infinite-horizon MPE without robustness using the code. Their example will be described in the following.
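The worst-case shock is linear in the state, $ v_{it} = K_{it} x_t $. Assuming the standard robustness formula $ K = (\theta I - C' P C)^{-1} C' P A_{cl} $, where $ A_{cl} $ is the closed-loop matrix $ A - B_1 F_1 - B_2 F_2 $ (a reconstruction consistent with (5); the lecture's exact expression may carry extra discounting factors), a sketch:

```python
import numpy as np

def worst_case_K(P, C, theta, A_cl):
    """Worst-case shock rule v_t = K x_t, with
    K = (theta I - C' P C)^{-1} C' P A_cl, taking the closed-loop
    transition matrix A_cl = A - B1 F1 - B2 F2 as given."""
    h = C.shape[1]
    return np.linalg.solve(theta * np.eye(h) - C.T @ P @ C, C.T @ P @ A_cl)
```

A larger $ \theta $ (more trust in the baseline model) shrinks $ K $ toward zero, so the worst-case distortions disappear in the limit.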
By simulating under the baseline model transition dynamics and the robust MPE rules, we are in effect assuming that at the end of the day the firms' concerns about misspecification of the baseline model do not materialize; a short way of saying this is that misspecification fears are all "just in the minds" of the firms. By ex-post we mean after extremization of each firm's intertemporal objective. Worst-case forecasts of $ x_t $ starting from $ t = 0 $ differ between the two firms because firm 1 fears misspecification more than firm 2. To find these worst-case beliefs, we compute the associated sequences of worst-case shocks. To dig a little beneath the forces driving these outcomes, we want to plot $ q_{1t} $ and $ q_{2t} $ under both equilibria.

Markov perfect equilibrium is the equilibrium concept for dynamic games in which players' strategies depend only on the current state. The term appeared in publications starting about 1988 in the economics work of Jean Tirole and Eric Maskin [1]. It has been used in the economic analysis of industrial organization, for example of big companies dividing a market oligopolistically. The overwhelming focus in stochastic games is on Markov perfect equilibrium; unfortunately, existence cannot be guaranteed under the conditions in Ericson and Pakes (1995). Every n-player, general-sum, discounted-reward stochastic game has a MPE.

We consider a general linear quadratic regulator game with two players, each of whom fears model misspecifications. These specifications simplify calculations and allow us to give a simple example that illustrates basic forces. In this game, the decisions of two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. The objective of the firm is to maximize $ \sum_{t=0}^\infty \beta^t \pi_{it} $. Firm $ i $ employs decision rules $ u_{it} = - F_{it} x_t $, where $ F_{it} $ is a $ k_i \times n $ matrix. When one or more agents doubt that the baseline model is correctly specified, the shared baseline model plays a role that is a counterpart of a "rational expectations" assumption of shared beliefs. With $ \theta_i = + \infty $, player $ i $ completely trusts the baseline model.

The decision problems of the two agents are characterized by a pair of Bellman equations, one for each agent. For linear quadratic dynamic games, these "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure that can be solved by working backward. After these equations have been solved, we can compute the following three "closed-loop" transition matrices: the baseline transition under the robust rules and the worst-case transition under each firm's beliefs. We construct a robust-firms version of the duopoly model with adjustment costs analyzed in the Markov perfect equilibrium lecture; to begin, we briefly review the structure of that model.

In the first step, the policy functions and the law of motion for the state variables are estimated; the nested fixed point procedure extends Rust's (1987) to account for the observable state. In the second step, the remaining structural parameters are estimated using the optimality conditions for equilibrium. The MPE solutions determine, jointly, both the expected equilibrium value of coalitions and the Markov state transition probability that describes the path of coalition formation. This approach covers many examples, including stochastic games with (decomposable) coarser transition kernels, endogenous shocks, and dynamic oligopoly.

If the Markov chain has reached a distribution $ \pi^T $ such that $ \pi^T P = \pi^T $, it will stay there; we say that $ \pi^T $ is an equilibrium distribution, and there is no more change in the distribution of $ X_t $ as we wander through the Markov chain. There are also results on Markov perfect Nash equilibria being Pareto efficient in non-linear differential games.

Keywords and Phrases: Oligopoly Theory, Network Externalities, Markov Perfect Equilibrium.
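The equilibrium distribution $ \pi $ satisfying $ \pi P = \pi $ can be computed numerically as a left eigenvector of the transition matrix. A minimal sketch (for a finite-state chain with a unique stationary distribution):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 by taking the left
    eigenvector of P associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)       # left eigenvectors of P
    i = np.argmin(np.abs(vals - 1.0))     # eigenvalue closest to 1
    pi = np.real(vecs[:, i])
    return pi / pi.sum()                  # normalize to a distribution
```

Starting the chain at this distribution, the distribution of $ X_t $ never changes again, which is exactly the "level position" described above.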
We call such equilibria common information based Markov perfect equilibria of the game, which can be viewed as a refinement of Nash equilibrium in games with asymmetric information.

Nonexistence of stationary Markov perfect equilibrium
The second step estimator is a simple simulated minimum distance estimator. Both industry output and price are computed under the transition dynamics associated with the baseline model; only the decision rules $F_i$ differ across the two equilibria. Heterogeneity evolves endogenously in response to random occurrences, for example, in the investment process.

We want to map a robust version of the duopoly model into coupled robust linear-quadratic dynamic programming problems. We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $F_{it}$ settle down to be time-invariant as $t_1 \rightarrow +\infty$.

For example, Bhaskar and Vega-Redondo (2002) show that any subgame perfect equilibrium of the alternating move game in which players' memory is bounded and their payoffs reflect the costs of strategic complexity must coincide with a MPE.

Player $i$ takes a sequence $\{u_{-it}\}$ as given and chooses a sequence $\{u_{it}\}$ to minimize and $\{v_{it}\}$ to maximize

$$
\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0}
\left\{
x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} +
2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} - \theta_i v_{it}' v_{it}
\right\} \tag{3}
$$

while thinking that the state evolves according to the distorted transition law (2). Such a Markov perfect equilibrium can be calculated from the fixed points of a finite sequence of low-dimensional contraction mappings, or characterized by the equilibrium conditions of a certain reduced one-shot game. Player $i$'s malevolent alter ego employs decision rules $v_{it} = K_{it} x_t$, where $K_{it}$ is an $h \times n$ matrix.

Strategies that depend only on the payoff-relevant state are called Markovian, and a subgame perfect equilibrium in Markov strategies is called a Markov perfect equilibrium (MPE). We compare outcomes under the robust MPE with those under the baseline model evaluated at the robust decision rules within the robust MPE.

Firm $i$ chooses a decision rule that sets next period quantity $\hat q_i$ as a function $f_i$ of the current state $(q_i, q_{-i})$.
Weakly Undominated Equilibrium (SWUE) and Markov Trembling Hand Perfect Equilibrium (MTHPE) have also been proposed, and it can be shown how these equilibrium concepts eliminate non-intuitive equilibria that arise naturally in dynamic voting games.

After simulating $x_t$ under the baseline transition dynamics and robust decision rules $F_i, i = 1, 2$, we find that the firms' concerns about misspecification of the baseline model do not materialize. This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics. The equilibrium is computed by backward recursion on two sets of equations.

Two firms are the only producers of a good, the demand for which is governed by the linear inverse demand function

$$
p = a_0 - a_1 (q_1 + q_2) \tag{10}
$$

where $q_{-i}$ denotes the output of the firm other than $i$. Markov perfect equilibrium has been used in analyses of industrial organization, macroeconomics, and political economy. This lecture is based on ideas described in chapter 15 of [HS08a] and in Markov perfect equilibrium. In a robust Markov perfect equilibrium, $\{F_{2t}, K_{2t}\}$ solves player 2's robust decision problem, taking $\{F_{1t}\}$ as given.

The following code prepares graphs that compare market-wide output $q_{1t} + q_{2t}$ and the price of the good. However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice. This lecture shows how a similar equilibrium concept and similar computational procedures apply when we impute concerns about robustness to both decision-makers. Firm 2 fears misspecification less, thus it is something of a coincidence that its output is almost the same in the two equilibria.

We use the function `nnash_robust` to compute a robust Markov perfect equilibrium. Because strategies depend only on the payoff-"relevant" state variables, our equilibrium is Markov-perfect Nash in investment strategies in the sense of Maskin and Tirole (1987, 1988a, 1988b).
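One common way to map this duopoly into the linear quadratic framework (a sketch; the state layout $x_t = (1, q_{1t}, q_{2t})'$ with controls $u_{it} = q_{i,t+1} - q_{it}$ is an assumption here, not quoted from the text) is:

```python
import numpy as np

a0, a1, gamma = 10.0, 2.0, 12.0   # illustrative demand and adjustment-cost values

# With x = (1, q1, q2)', the quadratic form x' R_i x equals -p * q_i,
# so minimizing x'R_i x + u_i' Q_i u_i maximizes p q_i - gamma u_i^2.
A  = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
R1 = np.array([[0.0,   -a0/2, 0.0],
               [-a0/2,  a1,   a1/2],
               [0.0,    a1/2, 0.0]])
R2 = np.array([[0.0,   0.0,  -a0/2],
               [0.0,   0.0,   a1/2],
               [-a0/2, a1/2,  a1]])
Q1 = Q2 = np.array([[gamma]])

# sanity check at an arbitrary state: x'R1x should equal -p*q1
q1, q2 = 1.0, 2.0
x = np.array([1.0, q1, q2])
p = a0 - a1 * (q1 + q2)           # inverse demand (10)
print(np.isclose(x @ R1 @ x, -p * q1))   # True
```

The check makes concrete why the inverse demand function (10) can be folded into the quadratic payoff matrices.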
Markov perfect equilibrium is a refinement of the concept of Nash equilibrium. It is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. Alternatively, using the earlier terminology of the differential (or difference) game literature, the equilibrium is a closed-loop equilibrium. Our analysis is applied to a stylized description of the browser war between Netscape and Microsoft.

Every n-player, general-sum, discounted-reward stochastic game has a MPE, and the role of Markov-perfect equilibria is similar to the role of subgame-perfect equilibria. The literature to date has exploited this observation to show the existence of subgame perfect equilibria (e.g., Mertens and Parthasarathy 1987, 1991).

As described in Markov perfect equilibrium, when decision-makers have no concerns about the robustness of their decision rules, we need to solve these $k_1 + k_2$ equations simultaneously.

In this lecture on linear Markov perfect equilibria with robust agents, the agents use linear transition rules for the state vector, and the baseline robust transition matrix is $A^o$. We set the volatility matrix

$$
C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix}
$$

so that the worst-case transition laws are distorted relative to the baseline model. In this lecture, we teach Markov perfect equilibrium by example.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
If at most two heterogeneous firms serve the industry, it is the unique "natural" equilibrium. A concrete example of a stochastic game satisfying all the conditions as stated in Section 2 was presented in Levy and McLennan (2015), which has no stationary Markov perfect equilibrium. One can instead develop an algorithm for computing a symmetric Markov-perfect equilibrium quickly by finding the fixed points of a finite sequence of low-dimensional contraction mappings. Markov perfect equilibrium is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a payoff-relevant state space can be identified.

Equilibrium means a level position: there is no more change in the distribution of $X_t$ as we wander through the Markov chain. For Markov chains at equilibrium, assume a Markov chain in which the transition probabilities are not a function of time $t$ or $n$, for the continuous-time or discrete-time cases, respectively.

A robust decision rule of firm $i$ will take the form $u_{it} = - F_i x_t$, inducing the following closed-loop system for the evolution of $x$ in the Markov perfect equilibrium:

$$
x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t
$$

Define the operator

$$
\mathcal D_1(P) := P + PC (\theta_1 I - C' P C)^{-1} C' P \tag{5}
$$

The matrix $F_{1t}$ in the policy rule $u_{1t} = - F_{1t} x_t$ that solves agent 1's problem satisfies

$$
F_{1t} = (Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) \tag{6}
$$

where $P_{1t}$ solves

$$
P_{1t} = \Pi_{1t} -
(\beta B_1' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} + \Gamma_{1t})'
(Q_1 + \beta B_1' {\mathcal D}_1(P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) +
\beta \Lambda_{1t}' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} \tag{7}
$$

Similarly, the rule employed by firm 2 satisfies

$$
F_{2t} = (Q_2 + \beta B_2' {\mathcal D}_2( P_{2t+1} ) B_2)^{-1}
(\beta B_2' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) \tag{8}
$$

We want to compare the dynamics of price and output under the baseline model with those under the robust MPE.
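Equation (5) translates directly into code; a minimal sketch (the matrices below are illustrative):

```python
import numpy as np

def D(P, C, theta):
    """Robustness operator from equation (5):
    D(P) = P + P C (theta I - C' P C)^{-1} C' P."""
    Ic = np.eye(C.shape[1])
    return P + P @ C @ np.linalg.solve(theta * Ic - C.T @ P @ C, C.T @ P)

P = np.diag([2.0, 1.0, 0.5])            # a stand-in value-function matrix
C = np.array([[0.0], [0.01], [0.01]])   # the volatility matrix used in the text

# As theta -> +infinity (no fear of misspecification), D(P) collapses to P.
print(np.allclose(D(P, C, 1e12), P))    # True
```

For finite $\theta$ the adjustment inflates $P$ in the directions the shock $C v_{it}$ can move the state, which is exactly how the fear of misspecification enters the Riccati recursions (6)-(9).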
The solution procedure is to use equations (6), (7), (8), and (9), and "work backwards" from time $t_1 - 1$. Since we're working backwards, $P_{1t+1}$ and $P_{2t+1}$ are taken as given at each stage. This, in turn, requires that an equilibrium exists. For example, Bajari et al. (2007) apply the Hotz and Miller (1993) inversion to estimate such models.

For agent $i$, the maximizing or worst-case shock $v_{it}$ is a possibly history-dependent vector of distortions to the dynamics of the state that agent $i$ uses to represent misspecification of the original model. Agent $i$ believes that the state evolves according to

$$
x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \tag{2}
$$

This presumes that behavior is consistent with Markov perfect equilibrium. The underlying chain methods were developed by the Russian mathematician Andrei A. Markov early in the twentieth century. Both firms fear that the baseline specification of the state transition dynamics is incorrect.

A Markov perfect equilibrium is an equilibrium concept in game theory. We can see that the results are consistent across the two functions. Under the dynamics associated with the baseline model, the price path is higher with the robust Markov perfect equilibrium decision rules. From $\{x_t\}$ paths generated by each of these transition laws, we pull off the associated price and total output sequences. In some applications, the equilibrium concept used is Markov perfect equilibrium (MPE) where the set of states consists of all possible coalition structures.

Each firm's Bellman equation is converted into a robustness version by adding the maximization operator over $v_{it}$. The matrix $P_{2t}$ satisfies

$$
P_{2t} = \Pi_{2t} -
(\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t})'
(Q_2 + \beta B_2' {\mathcal D}_2 ( P_{2t+1}) B_2)^{-1}
(\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) +
\beta \Lambda_{2t}' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} \tag{9}
$$

Consider the duopoly model with the given parameter values. From these, we computed the infinite horizon MPE without robustness using the code. Their example will be described in the following.

Created using Jupinx, hosted with AWS.
By simulating under the baseline model transition dynamics and the robust MPE rules, we are in effect assuming that at the end of the day the firms' concerns about misspecification of the baseline model do not materialize; a short way of saying this is that misspecification fears are all "just in the minds" of the firms. The agents' shared baseline model is a counterpart of a "rational expectations" assumption of shared beliefs.

In linear quadratic dynamic games, the "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure. Decisions of two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. The decision rules of agents with robustness concerns will be characterized by a pair of Bellman equations, one for each agent. Player $i$ employs decision rules $u_{it} = -F_{it} x_t$, where $F_{it}$ is a $k \times n$ matrix, while player $i$'s malevolent alter ego employs decision rules $v_{it} = K_{it} x_t$, where $K_{it}$ is an $h \times n$ matrix. The result is the unique such equilibrium: the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics.

When $\theta_i = +\infty$, player $i$ completely trusts the baseline model for the transition dynamics; here firm 1 fears misspecification more than firm 2. These parametric specifications simplify calculations and allow us to give a simple example that illustrates basic forces. After these equations have been solved, we can also deduce associated sequences of worst-case shocks; by ex-post we mean after extremization of each firm's intertemporal objective. Worst-case forecasts of $x_t$ starting from $t = 0$ differ between the two firms, and to find these worst-case beliefs we compute three "closed-loop" transition matrices: the baseline transition under the robust rules and the worst-case transition as perceived by each firm.

We construct a robust version of the duopoly model with adjustment costs analyzed in the Markov perfect equilibrium lecture, converting each firm's problem into an LQ robust dynamic programming problem of the type studied in the Robustness lecture. The overwhelming focus in stochastic games is on Markov perfect equilibrium: an equilibrium of the dynamic game where players' strategies depend only on the current state. Applications include stochastic games with endogenous shocks and dynamic oligopoly, and there are also examples of Markov perfect Nash equilibria being Pareto efficient in non-linear differential games. Unfortunately, existence cannot be guaranteed under the conditions in Ericson and Pakes (1995). A nested fixed point procedure extends Rust (1987) to account for the observable state variables: in the first step, the policy functions and the law of motion for the state variables are estimated; in the second step, the remaining structural parameters are estimated using the optimality conditions for equilibrium. This two-step approach is a common practice in the literature.

Once the Markov chain has reached a distribution $\pi^T$ such that $\pi^T P = \pi^T$, it will stay there; in that case we say that $\pi^T$ is an equilibrium distribution.
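The stationarity condition $\pi P = \pi$ can be checked numerically for a small chain (the transition matrix below is illustrative):

```python
import numpy as np

# A two-state chain; P here is a Markov transition matrix, not a Riccati matrix.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# The equilibrium distribution is the left eigenvector of P for eigenvalue 1,
# normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

print(pi)                        # the equilibrium distribution
print(np.allclose(pi @ P, pi))   # True: once reached, the chain stays there
```

For this chain the balance condition $0.1\,\pi_1 = 0.4\,\pi_2$ pins down $\pi = (0.8, 0.2)$.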

# Markov Perfect Equilibrium Example

"Computed policies for firm 1 and firm 2: Compute the limit of a Nash linear quadratic dynamic game with, u_{it} +u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}', x_{it+1} = A x_t + b_1 u_{1t} + b_2 u_{2t} + C w_{it+1}, and a perceived control law :math:u_j(t) = - f_j x_t for the other. (link) (SPE doesnât suer from this problem in the context of a bargaining game, but many other games -especially repeated games- contain a large number of SPE.) The objective of the firm is to maximize $\sum_{t=0}^\infty \beta^t \pi_{it}$. preferences and state transition matrices. Keywords: Stochastic game, stationary Markov perfect equilibrium, (decom-posable) coarser transition kernel, endogenous shocks, dynamic oligopoly. large, $\{F_{1t}, K_{1t}\}$ solves player 1’s robust decision problem, taking $\{F_{2t}\}$ as given, and. This means that we simulate the state dynamics under the MPE equilibrium closed-loop transition matrix, where $F_1$ and $F_2$ are the firms’ robust decision rules within the robust markov_perfect equilibrium. x_t' \Pi_{1t} x_t + u_{1t}' Q_1 u_{1t} + p = a_0 - a_1 (q_1 + q_2) \tag{10} Notice how $j$’s control law $F_{jt}$ is a function of $\{F_{is}, s \geq t, i \neq j \}$. Now we activate robustness concerns of both firms. As before, let $A^o = A - B\_1 F\_1^r - B\_2 F\_2^r$, where in a robust MPE, $F_i^r$ is a robust decision rule for firm $i$. \right\} \tag{3} even though they share the same baseline model and information. In practice, we usually fix $t_1$ and compute the equilibrium of an infinite horizon game by driving $t_0 \rightarrow - \infty$. Advanced Quantitative Economics with Python. This completes our review of the duopoly model without concerns for robustness. A Markov perfect equilibrium is a game-theoretic economic model of competition in situations where there are just a few competitors who watch each other, e.g. The agents share a common baseline model for the transition dynamics of the state vector. 
The term $\theta_i v_{it}' v_{it}$ is a time $t$ contribution to an entropy penalty that an (imaginary) loss-maximizing agent inside agent $i$'s mind charges for distorting the law of motion. Since the stochastic games are too complex to be solved analytically, (PM1) and (PM2) provide algorithms to compute a Markov perfect equilibrium (MPE) of this stochastic game.

The one-period payoff function of firm $i$ is price times quantity minus adjustment costs:

$$
\pi_{it} = p_t q_{it} - \gamma (\hat q_{it} - q_{it})^2, \quad \gamma > 0 \tag{11}
$$
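A sketch of this one-period payoff, using illustrative parameter values:

```python
import numpy as np

a0, a1, gamma = 10.0, 2.0, 12.0   # illustrative demand and adjustment-cost values

def profit(q_i, q_other, q_i_next):
    """One-period payoff (11): price times quantity minus adjustment costs."""
    p = a0 - a1 * (q_i + q_other)          # inverse demand (10)
    return p * q_i - gamma * (q_i_next - q_i) ** 2

print(profit(1.0, 1.0, 1.0))   # 6.0: p = 10 - 2*2 = 6, no adjustment cost
```

Because the adjustment cost penalizes changing output, each firm trades off current profit against the cost of moving toward a more profitable quantity, which is what makes the problem dynamic.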
from player 1's belief, # Total output, RMPE from player 2's belief. preferences and state transition matrices. It is used to study settings where multiple decision makers interact non-cooperatively over time, each seeking to â¦ Applications. The second step estimator is a simple simulated minimum Both industry output and price are under the transition dynamics associated with the baseline model; only the decision rules  F_i  differ across the two heterogeneity evolves endogenously in response to random occurrences, for example, in the investment process. To map a robust version of the duopoly model into coupled robust linear-quadratic dynamic programming We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules  F_{it}  settle down to be time-invariant as  t_1 \rightarrow +\infty . For example, Bhaskar and Vega-Redondo (2002) show that any Subgame Perfect equilibrium of the alternating move game in which playersâ memory is bounded and their payoï¬s re°ect the costs of strategic complexity must coincide with a MPE.$$. Player $i$ takes a sequence $\{u_{-it}\}$ as given and chooses a sequence $\{u_{it}\}$ to minimize and $\{v_{it}\}$ to maximize, $$Markov-perfect equilibrium that can be calculated from the xed points of a nite sequence of low-dimensional contraction mappings.$$, while thinking that the state evolves according to, $$Player âs malevolent alter ego employs decision rules ð = ð¾ð ð¥ where ð¾ð is an â × ðma- trix. equilibrium conditions of a certain reduced one-shot game. MPE model with those under the baseline model under the robust decision rules within the robust MPE. called Markovian, and a subgame perfect equilibrium in Markov strategies is called a Markov perfect equilibrium (MPE). Firm  i  chooses a decision rule that sets next period quantity  \hat q_i  as a function  f_i  of the current state  (q_i, q_{-i}) . 
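The backward recursion just described can be sketched as follows. This is not the lecture's actual nnash_robust implementation: it assumes the cross-product matrices $S_i$, $W_i$, $M_i$ are zero (so $\Lambda_i = A - B_{-i} F_{-i}$, $\Pi_i = R_i$, $\Gamma_i = 0$) and updates each firm's rule against the other firm's rule from the previous sweep, iterating equations (5)-(9) to a fixed point.

```python
import numpy as np

def D(P, C, θ):
    """Robustness operator 𝒟(P) = P + P C (θ I - C'PC)^{-1} C'P (cf. eq. (5))."""
    h = C.shape[1]
    return P + P @ C @ np.linalg.solve(θ * np.eye(h) - C.T @ P @ C, C.T @ P)

def robust_mpe(A, B1, B2, R1, R2, Q1, Q2, C, θ1, θ2,
               β=1.0, tol=1e-8, max_iter=1000):
    """Iterate the coupled robust Riccati maps (6)-(9) to a fixed point.

    Simplifying assumption: S_i = W_i = M_i = 0, so that
    Λ_i = A - B_{-i} F_{-i}, Π_i = R_i and Γ_i = 0.
    """
    n = A.shape[0]
    k1, k2 = B1.shape[1], B2.shape[1]
    F1, F2 = np.zeros((k1, n)), np.zeros((k2, n))
    P1, P2 = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(max_iter):
        F1_old, F2_old = F1, F2
        Λ1, Λ2 = A - B2 @ F2, A - B1 @ F1      # use previous-sweep rules
        D1, D2 = D(P1, C, θ1), D(P2, C, θ2)
        G1 = Q1 + β * B1.T @ D1 @ B1
        H1 = β * B1.T @ D1 @ Λ1
        F1 = np.linalg.solve(G1, H1)                                   # eq. (6)
        P1 = R1 - H1.T @ np.linalg.solve(G1, H1) + β * Λ1.T @ D1 @ Λ1  # eq. (7)
        G2 = Q2 + β * B2.T @ D2 @ B2
        H2 = β * B2.T @ D2 @ Λ2
        F2 = np.linalg.solve(G2, H2)                                   # eq. (8)
        P2 = R2 - H2.T @ np.linalg.solve(G2, H2) + β * Λ2.T @ D2 @ Λ2  # eq. (9)
        if max(np.abs(F1 - F1_old).max(), np.abs(F2 - F2_old).max()) < tol:
            break
    return F1, F2, P1, P2
```

With symmetric primitives the two firms' rules should be mirror images of each other, which gives a quick sanity check on the recursion.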
Each firm recognizes that its output affects total output and therefore the market price. Substituting the inverse demand curve into the profit functions, we recover the one-period payoffs (11) for the two firms in the duopoly model.

Related work introduces Sequentially Weakly Undominated Equilibrium (SWUE) and Markov Trembling Hand Perfect Equilibrium (MTHPE), and shows how these equilibrium concepts eliminate non-intuitive equilibria that arise naturally in dynamic voting games and games in which random or deterministic sequences of … occur.

After simulating $x_t$ under the baseline transition dynamics and robust decision rules $F_i, i = 1, 2$, we find that the firms' concerns about misspecification of the baseline model do not materialize.

The equilibrium is computed by backward recursion on two sets of equations. This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics.

Two firms are the only producers of a good the demand for which is governed by a linear inverse demand function

$$
p = a_0 - a_1 (q_1 + q_2)
$$

where $p$ is the price of the good and $q_{-i}$ denotes the output of the firm other than $i$.

Markov perfect equilibrium has been used in analyses of industrial organization, macroeconomics, and political economy. This lecture is based on ideas described in chapter 15 of [HS08a] and in the Markov perfect equilibrium lecture. The pair $\{F_{2t}, K_{2t}\}$ solves player 2's robust decision problem, taking $\{F_{1t}\}$ as given.

The following code prepares graphs that compare market-wide output $q_{1t} + q_{2t}$ and the price of the good. However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice. This lecture shows how a similar equilibrium concept and similar computational procedures apply when we impute concerns about robustness to both decision-makers.

Thus it is something of a coincidence that firm 2's output is almost the same in the two equilibria.
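To illustrate how the duopoly payoff maps into LQ form, here is one reconstruction; the matrix layout and the parameter values are our assumptions, with state $x = (1, q_1, q_2)'$. Choosing $R_1$ so that $x' R_1 x = -p\, q_1$ lets a *minimizing* regulator maximize revenue.

```python
import numpy as np

a0, a1 = 10.0, 2.0   # illustrative demand parameters (assumed values)

# State is x = (1, q1, q2)'.  We want x' R1 x = -(p q1) with
# p = a0 - a1 (q1 + q2), i.e. -a0 q1 + a1 q1^2 + a1 q1 q2.
R1 = np.array([[0.,    -a0/2, 0.  ],
               [-a0/2,  a1,   a1/2],
               [0.,     a1/2, 0.  ]])

# Spot-check the quadratic-form identity at an arbitrary point
q1, q2 = 3.0, 5.0
x = np.array([1.0, q1, q2])
p = a0 - a1 * (q1 + q2)
assert np.isclose(x @ R1 @ x, -p * q1)
```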
We use the function nnash_robust to compute a Markov perfect equilibrium of the linear-quadratic dynamic game with robust agents.

(Conditioning on the "relevant" state variables, our equilibrium is Markov-perfect Nash in investment strategies in the sense of Maskin and Tirole (1987, 1988a, 1988b).)

The decision rule that solves firm 1's problem satisfies

$$
F_{1t} = (Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) \tag{6}
$$

where $P_{1t}$ solves the matrix Riccati equation

$$
P_{1t} = \Pi_{1t} -
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t})'
(Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t})
+ \beta \Lambda_{1t}' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} \tag{7}
$$

Similarly, the decision rule that solves firm 2's problem satisfies

$$
F_{2t} = (Q_2 + \beta B_2' {\mathcal D}_2 (P_{2t+1}) B_2)^{-1}
(\beta B_2' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) \tag{8}
$$

As described in Markov perfect equilibrium, when decision-makers have no concerns about the robustness of their decision rules, we likewise need to solve these $k_1 + k_2$ equations simultaneously. Markov perfect equilibrium is a refinement of the concept of Nash equilibrium. Alternatively, using the earlier terminology of the differential (or difference) game literature, the equilibrium is a closed-loop equilibrium.

Our analysis is applied to a stylized description of the browser war between Netscape and Microsoft. Markov perfect equilibrium is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. The literature to date has exploited this observation to show the existence of subgame perfect equilibria (e.g., Mertens and Parthasarathy 1987, 1991). Every n-player, general-sum, discounted-reward stochastic game has a MPE, and the role of Markov-perfect equilibria is similar to the role of subgame-perfect equilibria.

In linear Markov perfect equilibria with robust agents, the firms follow linear transition rules for the state vector, but under laws of motion that are distorted relative to the baseline model; below, the baseline and robust transition matrices $A^o$ are printed for comparison, with

$$
C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix}
$$

In this lecture, we teach Markov perfect equilibrium by example. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
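Because equations (6) and (8) are linear in $F_{1t}$ and $F_{2t}$ once ${\mathcal D}_i(P_{i,t+1})$ is known, the $k_1 + k_2$ equations can be solved simultaneously as one stacked linear system. A sketch, under the simplifying assumption $\Gamma_i = 0$ (so $\Lambda_1 = A - B_2 F_2$ and $\Lambda_2 = A - B_1 F_1$):

```python
import numpy as np

def solve_F_jointly(A, B1, B2, D1, D2, Q1, Q2, β):
    """Given D1 = 𝒟_1(P_{1,t+1}) and D2 = 𝒟_2(P_{2,t+1}), solve the
    linear system implied by eqs. (6) and (8) with Γ_i = 0:
        G1 F1 + β B1'D1 B2 F2 = β B1'D1 A
        β B2'D2 B1 F1 + G2 F2 = β B2'D2 A
    """
    k1 = B1.shape[1]
    G1 = Q1 + β * B1.T @ D1 @ B1
    G2 = Q2 + β * B2.T @ D2 @ B2
    M = np.block([[G1,                  β * B1.T @ D1 @ B2],
                  [β * B2.T @ D2 @ B1,  G2                ]])
    rhs = np.vstack([β * B1.T @ D1 @ A, β * B2.T @ D2 @ A])
    F = np.linalg.solve(M, rhs)
    return F[:k1, :], F[k1:, :]
```

After solving, each $F_i$ satisfies its own best-response formula against the other firm's rule, which is easy to verify by substitution.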
If at most two heterogeneous firms serve the industry, it is the unique "natural" equilibrium in which a …

"Equilibrium" means a level position: there is no more change in the distribution of $X_t$ as we wander through the Markov chain. Assume a Markov chain in which the transition probabilities are not a function of time $t$ or $n$, for the continuous-time or discrete-time cases, respectively.

A robust decision rule of firm $i$ will take the form $u_{it} = -F_i x_t$, inducing the following closed-loop system for the evolution of $x$ in the Markov perfect equilibrium:

$$
x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t
$$

A concrete example of a stochastic game satisfying all the conditions as stated in Section 2 was presented in Levy and McLennan (2015), which has no stationary Markov perfect equilibrium.

We want to compare the dynamics of price and output under the baseline MPE model with those under the robust decision rules within the robust MPE. The operator ${\mathcal D}_1$ is defined by

$$
\mathcal D_1(P) := P + PC (\theta_1 I - C' P C)^{-1} C' P \tag{5}
$$

We develop an algorithm for computing a symmetric Markov-perfect equilibrium quickly by finding the fixed points of a finite sequence of low-dimensional contraction mappings. Markov perfect equilibrium is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a payoff-relevant state space can be identified.

The matrix $F_{1t}$ in the policy rule $u_{1t} = -F_{1t} x_t$ that solves agent 1's problem satisfies equation (6). Here in all cases $t = t_0, \ldots, t_1 - 1$ and the terminal conditions are $P_{it_1} = 0$.
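A hedged sketch of the closed-loop and worst-case transition matrices follows. The formula for the distortion rule $K$ is an assumption patterned on the robustness literature's $v_t = K x_t$ rules, not code from the lecture; for very large $\theta$ the distortion vanishes and the worst-case law collapses to the baseline closed loop.

```python
import numpy as np

def worst_case_transition(A, B1, B2, F1, F2, C, P, θ):
    """Baseline closed loop A_cl = A - B1 F1 - B2 F2, plus the distorted
    closed loop A_cl + C K implied by a value matrix P, with the assumed
    worst-case rule K = (θ I - C'PC)^{-1} C' P A_cl."""
    Acl = A - B1 @ F1 - B2 @ F2
    h = C.shape[1]
    K = np.linalg.solve(θ * np.eye(h) - C.T @ P @ C, C.T @ P @ Acl)
    return Acl, Acl + C @ K
```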
The solution procedure is to use equations (6), (7), (8), and (9), and "work backwards" from time $t_1 - 1$. Since we're working backwards, $P_{1t+1}$ and $P_{2t+1}$ are taken as given at each stage. This, in turn, requires that an equilibrium exists.

Bajari et al. (2007) apply the Hotz and Miller (1993) inversion to estimate such models under the assumption that behavior is consistent with Markov perfect equilibrium.

Both firms fear that the baseline specification of the state transition dynamics is incorrect. $v_{it}$ is a possibly history-dependent vector of distortions to the dynamics of the state that agent $i$ uses to represent misspecification of the original model; for agent $i$, the maximizing or worst-case shock takes the form $v_{it} = K_{it} x_t$. Agent $i$ thus believes the state evolves according to

$$
x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \tag{2}
$$

Each firm's Bellman equation is turned into a robustness version by adding the maximization operator over the distortions $v_{it}$. We will focus on settings with two players, each of whom fears model misspecifications.

This procedure was developed by the Russian mathematician Andrei A. Markov early in the twentieth century.

A Markov perfect equilibrium is an equilibrium concept in game theory. Under the dynamics associated with the baseline model, the price path is higher with the robust decision rules than with the decision rules for the ordinary Markov perfect equilibrium. From $\{x_t\}$ paths generated by each of these transition laws, we pull off the associated price and total output sequences. We can see that the results are consistent across the two functions.

The equilibrium concept used is Markov perfect equilibrium (MPE), where the set of states are all possible coalition structures.

Consider the duopoly model with parameter values of …. From these, we computed the infinite horizon MPE without robustness using the code. Their example will be described in the following.

Created using Jupinx, hosted with AWS.
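Pulling off price and total output sequences from simulated $\{x_t\}$ paths can be sketched as follows; the decision rules passed in are placeholders, not equilibrium values, and the demand parameters are assumptions.

```python
import numpy as np

def simulate_paths(A, B1, B2, F1, F2, x0, a0, a1, T):
    """Simulate x_{t+1} = (A - B1 F1 - B2 F2) x_t from x0 = (1, q1, q2)'
    and pull off the industry output and price sequences."""
    Acl = A - B1 @ F1 - B2 @ F2
    x = np.empty((T, 3))
    x[0] = x0
    for t in range(T - 1):
        x[t + 1] = Acl @ x[t]
    qtot = x[:, 1] + x[:, 2]        # industry output q1 + q2
    price = a0 - a1 * qtot          # linear inverse demand
    return qtot, price
```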
By simulating under the baseline model transition dynamics and the robust MPE rules, we are in effect assuming that at the end of the day the firms' fears of misspecification do not materialize; a short way of saying this is that misspecification fears are all "just in the minds" of the firms.

The MPE solutions determine, jointly, both the expected equilibrium value of coalitions and the Markov state transition probability that describes the path of coalition formation. To dig a little beneath the forces driving these outcomes, we want to plot $q_{1t}$ and $q_{2t}$ separately for the two firms.

Keywords and Phrases: Oligopoly Theory, Network Externalities, Markov Perfect Equilibrium.

One strand of the literature estimates a Markov perfect equilibrium model from observations on partial trajectories, and discusses estimation of the impacts of firm conduct on consumers and rival firms. It is a common practice there to proceed in two steps: in the first step, the policy functions and the law of motion for the state variables are estimated; in the second step, the remaining structural parameters are estimated using the optimality conditions for equilibrium.

The term "Markov perfect equilibrium" appeared in publications starting about 1988 in the economics work of Jean Tirole and Eric Maskin [1]. It has been used in the economic analysis of industrial organization, for example of big companies dividing a market oligopolistically.

To begin, we briefly review the structure of that model. We consider a general linear quadratic regulator game with two players, each of whom fears model misspecifications. For such games, the "stacked Bellman equations," one for each agent, become "stacked Riccati equations" with a tractable mathematical structure. Decisions of the two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. That the agents share a common baseline model is a counterpart of a "rational expectations" assumption of shared beliefs.

If $\theta_i = +\infty$, player $i$ completely trusts the baseline model. Because the two firms' worst-case models differ, worst-case forecasts of $x_t$ starting from $t = 0$ differ between them. To find these worst-case beliefs, we compute three "closed-loop" transition matrices: the baseline matrix $A - B_1 F_1 - B_2 F_2$ and its worst-case counterparts under each firm's beliefs.

If the Markov chain has reached a distribution $\pi_T$ such that $\pi_T P = \pi_T$, it will stay there; in that case we say that $\pi_T$ is an equilibrium distribution.

These specifications simplify calculations and allow us to give a simple example that illustrates basic forces. The robust agents will be characterized by a pair of Bellman equations, one for each agent.

A method for the characterization of Markov perfect Nash equilibria being Pareto efficient in non-linear differential games has also been proposed in the literature.
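The equilibrium-distribution condition $\pi_T P = \pi_T$ can be checked numerically. A minimal sketch for a finite-state chain (the example transition matrix is an arbitrary illustration):

```python
import numpy as np

def stationary_distribution(P):
    """Solve π P = π with Σ π = 1 for a row-stochastic matrix P by
    stacking (P' - I) π' = 0 with the normalization row and using
    least squares."""
    n = P.shape[0]
    M = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    π, *_ = np.linalg.lstsq(M, b, rcond=None)
    return π
```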
Unfortunately, existence cannot be guaranteed under the conditions in Ericson and Pakes (1995).

The objective of the firm is to maximize $\sum_{t=0}^{\infty} \beta^t \pi_{it}$, where $\pi_{it}$ is its one-period payoff. By ex-post we mean after extremization of each firm's intertemporal objective.

Recent examples in this literature include stochastic games with (decomposable) coarser transition kernels, endogenous shocks, and stochastic dynamic oligopoly.

The first transition law, namely $A^o$, is the closed-loop transition law under the robust decision rules. After the equilibrium equations have been solved, we can also deduce the associated sequences of worst-case shocks.

We construct a robust-firms version of the duopoly model with adjustment costs analyzed in Markov perfect equilibrium; each firm's problem becomes an LQ robust dynamic programming problem of the type studied in the Robustness lecture. The robust decision rule of firm $i$ takes the form $u_{it} = -F_i x_t$, where $F_i$ is a $k_i \times n$ matrix, and we will focus on settings in which both the decision rules and the transition laws are linear.