Recursive Macroeconomic Theory, Thomas Sargent, 2nd Ed., Chapter 7

Part III
Competitive equilibria and applications
Chapter 7
Recursive (Partial) Equilibrium
7.1. An equilibrium concept
This chapter formulates competitive and oligopolistic equilibria in some dynamic
settings. Up to now, we have studied single-agent problems where components
of the state vector not under the control of the agent were taken as given. In this
chapter, we describe multiple-agent settings in which some of the components
of the state vector that one agent takes as exogenous are determined by the
decisions of other agents. We study partial equilibrium models of a kind applied
in microeconomics.[1] We describe two closely related equilibrium concepts for
such models: a rational expectations or recursive competitive equilibrium, and
a Markov perfect equilibrium. The first equilibrium concept jointly restricts a
Bellman equation and a transition law that is taken as given in that Bellman
equation. The second equilibrium concept leads to pairs (in the duopoly case)
or sets (in the oligopoly case) of Bellman equations and transition equations
that are to be solved jointly by simultaneous backward induction.
Though the equilibrium concepts introduced in this chapter obviously tran-
scend linear-quadratic setups, we choose to present them in the context of linear
quadratic examples in which the Bellman equations remain tractable.
[1] For example, see Rosen and Topel (1988) and Rosen, Murphy, and Scheinkman (1994).
7.2. Example: adjustment costs

This section describes a model of a competitive market with producers who face adjustment costs.[2] The model consists of n identical firms whose profit function makes them want to forecast the aggregate output decisions of other firms just like them in order to determine their own output. We assume that n is a large number so that the output of any single firm has a negligible effect on aggregate output and, hence, firms are justified in treating their forecast of aggregate output as unaffected by their own output decisions. Thus, one of n competitive firms sells output y_t and chooses a production plan to maximize

    ∑_{t=0}^∞ β^t R_t        (7.2.1)
where

    R_t = p_t y_t − .5d (y_{t+1} − y_t)²        (7.2.2)

subject to y_0 being a given initial condition. Here β ∈ (0, 1) is a discount factor, and d > 0 measures a cost of adjusting the rate of output. The firm is a price taker. The price p_t lies on the demand curve
    p_t = A_0 − A_1 Y_t        (7.2.3)

where A_0 > 0, A_1 > 0, and Y_t is the marketwide level of output, being the sum of the outputs of n identical firms. The firm believes that marketwide output follows the law of motion

    Y_{t+1} = H_0 + H_1 Y_t ≡ H(Y_t),        (7.2.4)

where Y_0 is a known initial condition. The belief parameters H_0, H_1 are among the equilibrium objects of the analysis, but for now we proceed on faith and take them as given. The firm observes Y_t and y_t at time t when it chooses y_{t+1}. The adjustment costs d(y_{t+1} − y_t)² give the firm the incentive to forecast the market price.
Substituting equation (7.2.3) into equation (7.2.2) gives

    R_t = (A_0 − A_1 Y_t) y_t − .5d (y_{t+1} − y_t)².

[2] The model is a version of one analyzed by Lucas and Prescott (1971) and Sargent (1987a). The recursive competitive equilibrium concept was used by Lucas and Prescott (1971) and described further by Prescott and Mehra (1980).
The firm's incentive to forecast the market price translates into an incentive to forecast the level of market output Y. We can write the Bellman equation for the firm as

    v(y, Y) = max_{y′} { A_0 y − A_1 yY − .5d (y′ − y)² + βv(y′, Y′) }        (7.2.5)

where the maximization is subject to Y′ = H(Y). Here ′ denotes next period's value of a variable. The Euler equation for the firm's problem is

    −d (y′ − y) + βv_y(y′, Y′) = 0.        (7.2.6)
Noting that for this problem the control is y′ and applying the Benveniste-Scheinkman formula from chapter 5 gives

    v_y(y, Y) = A_0 − A_1 Y + d (y′ − y).

Substituting this equation into equation (7.2.6) gives

    −d (y_{t+1} − y_t) + β [A_0 − A_1 Y_{t+1} + d (y_{t+2} − y_{t+1})] = 0.        (7.2.7)
In the process of solving its Bellman equation, the firm sets an output path that satisfies equation (7.2.7), taking equation (7.2.4) as given, subject to the initial conditions (y_0, Y_0) as well as an extra terminal condition. The terminal condition is

    lim_{t→∞} β^t y_t v_y(y_t, Y_t) = 0.        (7.2.8)

This is called the transversality condition and acts as a first-order necessary condition "at infinity." The firm's decision rule solves the difference equation (7.2.7) subject to the given initial condition y_0 and the terminal condition (7.2.8). Solving the Bellman equation by backward induction automatically incorporates both equations (7.2.7) and (7.2.8).
The firm's optimal policy function is

    y_{t+1} = h(y_t, Y_t).        (7.2.9)

Then with n identical firms, setting Y_t = n y_t makes the actual law of motion for output for the market

    Y_{t+1} = n h(Y_t/n, Y_t).        (7.2.10)
Thus, when firms believe that the law of motion for marketwide output is equation (7.2.4), their optimizing behavior makes the actual law of motion equation (7.2.10).

A recursive competitive equilibrium equates the actual and perceived laws of motion (7.2.4) and (7.2.10). For this model, we adopt the following definition:

Definition: A recursive competitive equilibrium[3] of the model with adjustment costs is a value function v(y, Y), an optimal policy function h(y, Y), and a law of motion H(Y) such that

a. Given H, v(y, Y) satisfies the firm's Bellman equation and h(y, Y) is the optimal policy function.

b. The law of motion H satisfies H(Y) = nh(Y/n, Y).

The firm's optimum problem induces a mapping M from a perceived law of motion H for market output to an actual law of motion M(H). The mapping is summarized in equation (7.2.10). The H component of a rational expectations equilibrium is a fixed point of the operator M.

The equilibrium just defined is a special case of a recursive competitive equilibrium, to be defined more generally in the next section. How might we find an equilibrium? The next subsection shows a method that works in the present case and often works more generally. The method involves noting that the equilibrium solves an associated planning problem. For convenience, we'll assume from now on that the number of firms is one, while retaining the assumption of price-taking behavior.

[3] This is also often called a rational expectations equilibrium.
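The operator M can be made concrete numerically: for given beliefs (H_0, H_1), the firm's problem is a discounted linear regulator that can be solved by iterating on a Riccati equation. The following sketch is in Python rather than the book's Matlab; the parameter values (A_0 = 100, A_1 = .05, β = .95, d = 10) and the candidate fixed point are taken from exercises 7.1 and 7.2, and the choice of state vector [y, Y, 1]′ is our own formulation device.

```python
import numpy as np

def firm_policy(H0, H1, A0=100.0, A1=0.05, beta=0.95, d=10.0):
    """Solve the firm's Bellman equation (7.2.5) given beliefs Y' = H0 + H1*Y.

    State x = [y, Y, 1]', control u = y' - y.  We minimize the negative of
    the payoff A0*y - A1*y*Y - .5*d*u^2 by a discounted Riccati iteration,
    yielding the linear policy u = -F x.
    """
    A = np.array([[1.0, 0.0, 0.0],    # y' = y + u
                  [0.0, H1,  H0 ],    # perceived law of motion (7.2.4)
                  [0.0, 0.0, 1.0]])   # constant term
    B = np.array([[1.0], [0.0], [0.0]])
    R = np.array([[0.0,   A1/2, -A0/2],
                  [A1/2,  0.0,   0.0 ],
                  [-A0/2, 0.0,   0.0 ]])   # x'Rx = -(A0*y - A1*y*Y)
    Q = np.array([[0.5 * d]])              # adjustment cost .5*d*u^2
    P = np.zeros((3, 3))
    for _ in range(2000):                  # iterate the Riccati equation
        BPA = B.T @ P @ A
        P = R + beta * A.T @ P @ A \
            - beta**2 * A.T @ P @ B @ np.linalg.solve(Q + beta * B.T @ P @ B, BPA)
    F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    return F.ravel()                       # y' = y - (F0*y + F1*Y + F2)

def M(H0, H1):
    """The mapping from perceived to actual law of motion, eq. (7.2.10), n = 1."""
    F0, F1, F2 = firm_policy(H0, H1)
    # Impose y = Y in the policy:  Y' = -F2 + (1 - F0 - F1) * Y.
    return -F2, 1.0 - F0 - F1

# The pair from exercise 7.2 should be (approximately) a fixed point of M:
H0_star, H1_star = 95.08187459215024, 0.95245906270392
print(M(H0_star, H1_star))
```

Applying M to beliefs that are not an equilibrium returns a different law of motion, which suggests the iterative schemes discussed in exercise 7.2.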
7.2.1. A planning problem
Our solution strategy is to match the Euler equations of the market problem
with those for a planning problem that can be solved as a single-agent dynamic
programming problem. The optimal quantities from the planning problem are
then the recursive competitive equilibrium quantities, and the equilibrium price
can be coaxed from shadow prices for the planning problem.
To determine the planning problem, we first compute the sum of consumer and producer surplus at time t, defined as

    S_t = S(Y_t, Y_{t+1}) = ∫_0^{Y_t} (A_0 − A_1 x) dx − .5d (Y_{t+1} − Y_t)².        (7.2.11)
The first term is the area under the demand curve. The planning problem is to choose a production plan to maximize

    ∑_{t=0}^∞ β^t S(Y_t, Y_{t+1})        (7.2.12)

subject to an initial condition Y_0. The Bellman equation for the planning problem is
    V(Y) = max_{Y′} { A_0 Y − (A_1/2) Y² − .5d (Y′ − Y)² + βV(Y′) }.        (7.2.13)
The Euler equation is

    −d (Y′ − Y) + βV′(Y′) = 0.        (7.2.14)

Applying the Benveniste-Scheinkman formula gives

    V′(Y) = A_0 − A_1 Y + d (Y′ − Y).        (7.2.15)

Substituting this into equation (7.2.14) and rearranging gives

    βA_0 + dY_t − [βA_1 + d(1 + β)] Y_{t+1} + dβY_{t+2} = 0.        (7.2.16)
Return to equation (7.2.7) and set y_t = Y_t for all t. (Remember that we have set n = 1. When n ≠ 1 we have to adjust pieces of the argument for n.) Notice that with y_t = Y_t, equations (7.2.16) and (7.2.7) are identical. Thus, a solution of the planning problem also is an equilibrium. Setting y_t = Y_t in equation (7.2.7) amounts to dropping equation (7.2.4) and instead solving for the coefficients H_0, H_1 that make y_t = Y_t true and that jointly solve equations (7.2.4) and (7.2.7).
It follows that for this example we can compute an equilibrium by forming the optimal linear regulator problem corresponding to the Bellman equation (7.2.13). The optimal policy function for this problem can be used to form the rational expectations H(Y).[4]
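This recipe can be sketched numerically. The fragment below is a Python stand-in for the book's Matlab program olrp.m: it solves the planner's Bellman equation (7.2.13) as a discounted linear regulator for the parameter values of exercise 7.1 and reads off H(Y) from the planner's policy. The state vector [Y, 1]′ and control Y′ − Y are our own formulation choices.

```python
import numpy as np

# Planner's problem (7.2.13) as a discounted linear regulator.
# State x = [Y, 1]', control u = Y' - Y; we minimize the negative of the
# one-period surplus A0*Y - (A1/2)*Y^2 - .5*d*(Y' - Y)^2.
# Parameter values are those of exercise 7.1.
A0, A1, beta, d = 100.0, 0.05, 0.95, 10.0

A = np.array([[1.0, 0.0],      # Y' = Y + u
              [0.0, 1.0]])     # constant term
B = np.array([[1.0], [0.0]])
R = np.array([[A1/2, -A0/2],   # x'Rx = -(A0*Y - (A1/2)*Y^2)
              [-A0/2, 0.0 ]])
Q = np.array([[0.5 * d]])      # adjustment cost

P = np.zeros((2, 2))
for _ in range(2000):          # value iteration on the Riccati equation
    BPA = B.T @ P @ A
    P = R + beta * A.T @ P @ A \
        - beta**2 * A.T @ P @ B @ np.linalg.solve(Q + beta * B.T @ P @ B, BPA)

F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
# Policy u = -F x, i.e.  Y' = (1 - F[0,0]) * Y - F[0,1]  =  H0 + H1 * Y.
H1, H0 = 1.0 - F[0, 0], -F[0, 1]
print(H0, H1)    # the rational expectations law of motion H(Y)
```

With these parameter values the computed pair should be close to (95.0819, .9525), the candidate that exercise 7.2 asks you to verify.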
7.3. Recursive competitive equilibrium
The equilibrium concept of the previous section is widely used. Following Prescott and Mehra (1980), it is useful to define the equilibrium concept more generally as a recursive competitive equilibrium. Let x be a vector of state variables under the control of a representative agent and let X be the vector of those same variables chosen by "the market." Let Z be a vector of other state variables chosen by "nature," that is, determined outside the model. The representative agent's problem is characterized by the Bellman equation

    v(x, X, Z) = max_u { R(x, X, Z, u) + βv(x′, X′, Z′) }        (7.3.1)

where ′ denotes next period's value, and where the maximization is subject to the restrictions

    x′ = g(x, X, Z, u)        (7.3.2)
    X′ = G(X, Z)        (7.3.3)
    Z′ = ζ(Z).        (7.3.4)

Here g describes the impact of the representative agent's controls u on his state x′; G and ζ describe his beliefs about the evolution of the aggregate state. The solution of the representative agent's problem is a decision rule

    u = h(x, X, Z).        (7.3.5)

[4] The method of this section was used by Lucas and Prescott (1971). It uses the connection between equilibrium and Pareto optimality expressed in the fundamental theorems of welfare economics. See Mas-Colell, Whinston, and Green (1995).
To make the representative agent representative, we impose X = x, but only "after" we have solved the agent's decision problem. Substituting equation (7.3.5) and X = x into equation (7.3.2) gives the actual law of motion

    X′ = G_A(X, Z),        (7.3.6)

where G_A(X, Z) ≡ g[X, X, Z, h(X, X, Z)]. We are now ready to propose a definition:

Definition: A recursive competitive equilibrium is a policy function h, an actual aggregate law of motion G_A, and a perceived aggregate law G such that (a) Given G, h solves the representative agent's optimization problem; and (b) h implies that G_A = G.

This equilibrium concept is also sometimes called a rational expectations equilibrium. The equilibrium concept makes G an outcome of the analysis. The functions giving the representative agent's expectations about the aggregate state variables contribute no free parameters and are outcomes of the analysis. There are no free parameters that characterize expectations.[5] In exercise 7.1, you are asked to implement this equilibrium concept.
7.4. Markov perfect equilibrium

It is instructive to consider a dynamic model of duopoly. A market has two firms. Each firm recognizes that its output decision will affect the aggregate output and therefore influence the market price. Thus, we drop the assumption of price-taking behavior.[6] The one-period return function of firm i is

    R_{it} = p_t y_{it} − .5d (y_{i,t+1} − y_{it})².        (7.4.1)

There is a demand curve

    p_t = A_0 − A_1 (y_{1t} + y_{2t}).        (7.4.2)

[5] This is the sense in which rational expectations models make expectations disappear from a model.

[6] One consequence of departing from the price-taking framework is that the market outcome will no longer maximize welfare, measured as the sum of consumer and producer surplus. See exercise 7.4 for the case of a monopoly.
Substituting the demand curve into equation (7.4.1) lets us express the return as

    R_{it} = A_0 y_{it} − A_1 y_{it}² − A_1 y_{it} y_{−i,t} − .5d (y_{i,t+1} − y_{it})²,        (7.4.3)

where y_{−i,t} denotes the output of the firm other than i. Firm i chooses a decision rule that sets y_{i,t+1} as a function of (y_{it}, y_{−i,t}) and that maximizes

    ∑_{t=0}^∞ β^t R_{it}.

Temporarily assume that the maximizing decision rule is y_{i,t+1} = f_i(y_{it}, y_{−i,t}).
Given the function f_{−i}, the Bellman equation of firm i is

    v_i(y_{it}, y_{−i,t}) = max_{y_{i,t+1}} { R_{it} + βv_i(y_{i,t+1}, y_{−i,t+1}) },        (7.4.4)

where the maximization is subject to the perceived decision rule of the other firm

    y_{−i,t+1} = f_{−i}(y_{−i,t}, y_{it}).        (7.4.5)

Note the cross-reference between the two problems for i = 1, 2.
We now advance the following definition:

Definition: A Markov perfect equilibrium is a pair of value functions v_i and a pair of policy functions f_i for i = 1, 2 such that

a. Given f_{−i}, v_i satisfies the Bellman equation (7.4.4).

b. The policy function f_i attains the right side of the Bellman equation (7.4.4).

The adjective Markov denotes that the equilibrium decision rules depend only on the current values of the state variables y_{it}, not their histories. Perfect means that the equilibrium is constructed by backward induction and therefore builds in optimizing behavior for each firm for all conceivable future states, including many that are not realized by iterating forward on the pair of equilibrium strategies f_i.
7.4.1. Computation

If it exists, a Markov perfect equilibrium can be computed by iterating to convergence on the pair of Bellman equations (7.4.4). In particular, let v_i^j, f_i^j be the value function and policy function for firm i at the jth iteration. Then imagine constructing the iterates

    v_i^{j+1}(y_{it}, y_{−i,t}) = max_{y_{i,t+1}} { R_{it} + βv_i^j(y_{i,t+1}, y_{−i,t+1}) },        (7.4.6)

where the maximization is subject to

    y_{−i,t+1} = f_{−i}^j(y_{−i,t}, y_{it}).        (7.4.7)

In general, these iterations are difficult.[7] In the next section, we describe how the calculations simplify for the case in which the return function is quadratic and the transition laws are linear.
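To illustrate what iterating on (7.4.6)-(7.4.7) involves, here is a minimal Python sketch for the duopoly return function (7.4.3), with outputs restricted to a discrete grid and the symmetry between the two firms exploited so that a single value function and policy serve for both. The parameter values are borrowed from exercise 7.1; the grid bounds, grid size, and iteration count are arbitrary choices for the sketch, and nothing here guarantees convergence, which is exactly why the linear-quadratic case of the next section is attractive.

```python
import numpy as np

# Iterate on the pair of Bellman equations (7.4.6)-(7.4.7) for the duopoly
# return function (7.4.3), outputs restricted to a grid.
A0, A1, beta, d = 100.0, 0.05, 0.95, 10.0
grid = np.linspace(0.0, 1500.0, 31)    # candidate output levels (arbitrary)
m = len(grid)

# By symmetry both firms share one value function v[own, other] and one
# policy f[own, other] (an index into grid for next period's own output).
v = np.zeros((m, m))
f = np.zeros((m, m), dtype=int)

for _ in range(150):
    v_new = np.empty_like(v)
    f_new = np.empty_like(f)
    for a in range(m):                 # own current output y_i
        for b in range(m):             # rival's current output y_{-i}
            y, y_rival = grid[a], grid[b]
            b_next = f[b, a]           # rival follows f^j_{-i}(y_{-i}, y_i)
            values = (A0 * y - A1 * y**2 - A1 * y * y_rival
                      - 0.5 * d * (grid - y)**2     # one entry per choice
                      + beta * v[:, b_next])
            f_new[a, b] = int(values.argmax())
            v_new[a, b] = values[f_new[a, b]]
    v, f = v_new, f_new
```

If the policy array f stops changing across iterations, it approximates Markov perfect decision rules on the grid.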
7.5. Linear Markov perfect equilibria
In this section, we show how the optimal linear regulator can be used to solve a
model like that in the previous section. That model should be considered to be
an example of a dynamic game. A dynamic game consists of these objects: (a)
a list of players; (b) a list of dates and actions available to each player at each
date; and (c) payoffs for each player expressed as functions of the actions taken
by all players.
The optimal linear regulator is a good tool for formulating and solving dy-
namic games. The standard equilibrium concept—subgame perfection—in these
games requires that each player’s strategy be computed by backward induction.
This leads to an interrelated pair of Bellman equations. In linear-quadratic
dynamic games, these “stacked Bellman equations” become “stacked Riccati
equations” with a tractable mathematical structure.
We now consider the following two-player, linear quadratic dynamic game. An (n × 1) state vector x_t evolves according to a transition equation

    x_{t+1} = A_t x_t + B_{1t} u_{1t} + B_{2t} u_{2t}        (7.5.1)
[7] See Levhari and Mirman (1980) for how a Markov perfect equilibrium can be computed conveniently with logarithmic returns and Cobb-Douglas transition laws. Levhari and Mirman construct a model of fish and fishers.
where u_{jt} is a (k_j × 1) vector of controls of player j. We start with a finite-horizon formulation, where t_0 is the initial date and t_1 is the terminal date for the common horizon of the two players. Player 1 maximizes

    −∑_{t=t_0}^{t_1−1} ( x_t^T R_1 x_t + u_{1t}^T Q_1 u_{1t} + u_{2t}^T S_1 u_{2t} )        (7.5.2)

where R_1 and S_1 are positive semidefinite and Q_1 is positive definite. Player 2 maximizes

    −∑_{t=t_0}^{t_1−1} ( x_t^T R_2 x_t + u_{2t}^T Q_2 u_{2t} + u_{1t}^T S_2 u_{1t} )        (7.5.3)

where R_2 and S_2 are positive semidefinite and Q_2 is positive definite.
We formulate a Markov perfect equilibrium as follows. Player j employs linear decision rules

    u_{jt} = −F_{jt} x_t,   t = t_0, …, t_1 − 1,

where F_{jt} is a (k_j × n) matrix. Assume that player i knows {F_{−i,t}; t = t_0, …, t_1 − 1}. Then player 1's problem is to maximize expression (7.5.2) subject to the known law of motion (7.5.1) and the known control law u_{2t} = −F_{2t} x_t of player 2. Symmetrically, player 2's problem is to maximize expression (7.5.3) subject to equation (7.5.1) and u_{1t} = −F_{1t} x_t. A Markov perfect equilibrium is a pair of sequences {F_{1t}, F_{2t}; t = t_0, t_0 + 1, …, t_1 − 1} such that {F_{1t}} solves player 1's problem, given {F_{2t}}, and {F_{2t}} solves player 2's problem, given {F_{1t}}. We have restricted each player's strategy to depend only on x_t, and not on the history h_t = {(x_s, u_{1s}, u_{2s}), s = t_0, …, t}. This restriction on strategy spaces accounts for the adjective "Markov" in the phrase "Markov perfect equilibrium."
Player 1's problem is to maximize

    −∑_{t=t_0}^{t_1−1} { x_t^T (R_1 + F_{2t}^T S_1 F_{2t}) x_t + u_{1t}^T Q_1 u_{1t} }

subject to

    x_{t+1} = (A_t − B_{2t} F_{2t}) x_t + B_{1t} u_{1t}.

This is an optimal linear regulator problem, and it can be solved by working backward. Evidently, player 2's problem is also an optimal linear regulator problem.
The solution of player 1's problem is given by

    F_{1t} = (B_{1t}^T P_{1t+1} B_{1t} + Q_1)^{−1} B_{1t}^T P_{1t+1} (A_t − B_{2t} F_{2t}),        (7.5.4)
    t = t_0, t_0 + 1, …, t_1 − 1,

where P_{1t} is the solution of the following matrix Riccati difference equation, with terminal condition P_{1t_1} = 0:

    P_{1t} = (A_t − B_{2t} F_{2t})^T P_{1t+1} (A_t − B_{2t} F_{2t}) + (R_1 + F_{2t}^T S_1 F_{2t})
             − (A_t − B_{2t} F_{2t})^T P_{1t+1} B_{1t} (B_{1t}^T P_{1t+1} B_{1t} + Q_1)^{−1} B_{1t}^T P_{1t+1} (A_t − B_{2t} F_{2t}).        (7.5.5)
The solution of player 2's problem is

    F_{2t} = (B_{2t}^T P_{2t+1} B_{2t} + Q_2)^{−1} B_{2t}^T P_{2t+1} (A_t − B_{1t} F_{1t}),        (7.5.6)

where P_{2t} solves the following matrix Riccati difference equation, with terminal condition P_{2t_1} = 0:

    P_{2t} = (A_t − B_{1t} F_{1t})^T P_{2t+1} (A_t − B_{1t} F_{1t}) + (R_2 + F_{1t}^T S_2 F_{1t})
             − (A_t − B_{1t} F_{1t})^T P_{2t+1} B_{2t} (B_{2t}^T P_{2t+1} B_{2t} + Q_2)^{−1} B_{2t}^T P_{2t+1} (A_t − B_{1t} F_{1t}).        (7.5.7)
The equilibrium sequences {F_{1t}, F_{2t}; t = t_0, t_0 + 1, …, t_1 − 1} can be calculated from the pair of coupled Riccati difference equations (7.5.5) and (7.5.7). In particular, we use equations (7.5.4), (7.5.5), (7.5.6), and (7.5.7) to "work backward" from time t_1 − 1. Notice that given P_{1t+1} and P_{2t+1}, equations (7.5.4) and (7.5.6) are a system of (k_2 × n) + (k_1 × n) linear equations in the (k_2 × n) + (k_1 × n) unknowns in the matrices F_{1t} and F_{2t}.

Notice how j's control law F_{jt} is a function of {F_{is}, s ≥ t, i ≠ j}. Thus, agent i's choice of {F_{it}; t = t_0, …, t_1 − 1} influences agent j's choice of control laws. However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice.[8]
[8] In an equilibrium of a Stackelberg or dominant player game, the timing of moves is so altered relative to the present game that one of the agents, called the leader, takes into account the influence that his choices exert on the other agent's choices. See chapter 18.
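The backward recursion is easy to implement when the matrices are time invariant. The Python sketch below (the book's web site supplies a Matlab implementation) solves the linear system for (F_{1t}, F_{2t}) by substitution at each date and then applies the Riccati updates (7.5.5) and (7.5.7). The illustrative matrices are our own: they encode the duopoly of section 7.4 with x_t = [1, y_{1t}, y_{2t}]′ and the parameter values of exercise 7.1, with discounting omitted to match the undiscounted formulas above.

```python
import numpy as np

def mpe_feedback(A, B1, B2, R1, R2, Q1, Q2, S1, S2, T):
    """Backward recursion on (7.5.4)-(7.5.7) with time-invariant matrices.

    Player i minimizes the sum of x'R_i x + u_i'Q_i u_i + u_j'S_i u_j.
    Returns the initial-date feedback matrices (F1, F2), with u_i = -F_i x."""
    n, k1 = A.shape[0], B1.shape[1]
    P1 = np.zeros((n, n))
    P2 = np.zeros((n, n))
    for _ in range(T):
        # Given P1, P2 at t+1, (7.5.4) and (7.5.6) are linear in (F1, F2):
        #   F1 = G1 (A - B2 F2),  F2 = G2 (A - B1 F1);  substitute to get F1.
        G1 = np.linalg.solve(B1.T @ P1 @ B1 + Q1, B1.T @ P1)
        G2 = np.linalg.solve(B2.T @ P2 @ B2 + Q2, B2.T @ P2)
        F1 = np.linalg.solve(np.eye(k1) - G1 @ B2 @ G2 @ B1,
                             G1 @ (A - B2 @ G2 @ A))
        F2 = G2 @ (A - B1 @ F1)
        # Riccati updates (7.5.5) and (7.5.7):
        L1 = A - B2 @ F2
        P1 = L1.T @ P1 @ L1 + R1 + F2.T @ S1 @ F2 - L1.T @ P1 @ B1 @ F1
        L2 = A - B1 @ F1
        P2 = L2.T @ P2 @ L2 + R2 + F1.T @ S2 @ F1 - L2.T @ P2 @ B2 @ F2
    return F1, F2

# Duopoly (7.4.3): x = [1, y1, y2]', u_i = y_{i,t+1} - y_{it}.
A0, A1c, d = 100.0, 0.05, 10.0
A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
R1 = np.array([[0.0,  -A0/2,  0.0  ],     # cost form: minus firm 1's profit
               [-A0/2, A1c,   A1c/2],
               [0.0,   A1c/2, 0.0  ]])
R2 = R1[[0, 2, 1]][:, [0, 2, 1]]          # firm 2: roles of y1, y2 swapped
Q1 = Q2 = np.array([[0.5 * d]])           # adjustment costs
S1 = S2 = np.zeros((1, 1))
F1, F2 = mpe_feedback(A, B1, B2, R1, R2, Q1, Q2, S1, S2, T=50)
```

By the symmetry of this example, F2 should equal F1 with its y_1 and y_2 columns interchanged.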
We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules F_{it} settle down to be time invariant as t_1 → +∞. In practice, we usually fix t_1 and compute the equilibrium of an infinite horizon game by driving t_0 → −∞. Judd followed that procedure in the following example.
7.5.1. An example

This section describes the Markov perfect equilibrium of an infinite horizon linear quadratic game proposed by Kenneth Judd (1990). The equilibrium is computed by iterating to convergence on the pair of Riccati equations defined by the choice problems of two firms. Each firm solves a linear quadratic optimization problem, taking as given and known the sequence of linear decision rules used by the other player. The firms set prices and quantities of two goods interrelated through their demand curves. There is no uncertainty. Relevant variables are defined as follows:

    I_{it} = inventories of firm i at beginning of t.
    q_{it} = production of firm i during period t.
    p_{it} = price charged by firm i during period t.
    S_{it} = sales made by firm i during period t.
    E_{it} = costs of production of firm i during period t.
    C_{it} = costs of carrying inventories for firm i during t.
The firms' cost functions are

    C_{it} = c_{i1} + c_{i2} I_{it} + .5 c_{i3} I_{it}²
    E_{it} = e_{i1} + e_{i2} q_{it} + .5 e_{i3} q_{it}²

where e_{ij}, c_{ij} are positive scalars.

Inventories obey the laws of motion

    I_{i,t+1} = (1 − δ) I_{it} + q_{it} − S_{it}.

Demand is governed by the linear schedule

    S_t = d p_t + B
where S_t = [S_{1t} S_{2t}]′ and p_t = [p_{1t} p_{2t}]′, d is a (2 × 2) negative definite matrix, and B is a vector of constants. Firm i maximizes the undiscounted sum

    lim_{T→∞} (1/T) ∑_{t=0}^T ( p_{it} S_{it} − E_{it} − C_{it} )

by choosing a decision rule for price and quantity of the form

    u_{it} = −F_i x_t

where u_{it} = [p_{it} q_{it}]′, and the state is x_t = [I_{1t} I_{2t}]′.
In the web site for the book, we supply a Matlab program nnash.m that computes a Markov perfect equilibrium of the linear quadratic dynamic game in which player i maximizes

    ∑_{t=0}^∞ { x_t^T r_i x_t + 2 x_t^T w_i u_{it} + u_{it}^T q_i u_{it} + u_{jt}^T s_i u_{jt} + 2 u_{jt}^T m_i u_{it} }
subject to the law of motion

    x_{t+1} = a x_t + b_1 u_{1t} + b_2 u_{2t}

and a control law u_{jt} = −f_j x_t for the other player; here a is n × n; b_1 is n × k_1; b_2 is n × k_2; r_1 is n × n; r_2 is n × n; q_1 is k_1 × k_1; q_2 is k_2 × k_2; s_1 is k_2 × k_2; s_2 is k_1 × k_1; w_1 is n × k_1; w_2 is n × k_2; m_1 is k_2 × k_1; and m_2 is k_1 × k_2. The equilibrium of Judd's model can be computed by filling in the matrices appropriately. A Matlab tutorial judd.m uses nnash.m to compute the equilibrium.
7.6. Concluding remarks
This chapter has introduced two equilibrium concepts and illustrated how dy-
namic programming algorithms are embedded in each. For the linear models we
have used as illustrations, the dynamic programs become optimal linear regula-
tors, making it tractable to compute equilibria even for large state spaces. We
chose to define these equilibrium concepts in partial equilibrium settings that are
more natural for microeconomic applications than for macroeconomic ones. In
the next chapter, we use the recursive equilibrium concept to analyze a general
equilibrium in an endowment economy. That setting serves as a natural starting
point for addressing various macroeconomic issues.
Exercises
These problems aim to teach about (1) mapping problems into recursive forms, (2) different equilibrium concepts, and (3) using Matlab. Computer programs are available from the web site for the book.[9]
Exercise 7.1   A competitive firm

A competitive firm seeks to maximize

    (1)   ∑_{t=0}^∞ β^t R_t

where β ∈ (0, 1), and time-t revenue R_t is

    (2)   R_t = p_t y_t − .5d (y_{t+1} − y_t)²,   d > 0,

where p_t is the price of output, and y_t is the time-t output of the firm. Here .5d(y_{t+1} − y_t)² measures the firm's cost of adjusting its rate of output. The firm starts with a given initial level of output y_0. The price lies on the market demand curve

    (3)   p_t = A_0 − A_1 Y_t,   A_0, A_1 > 0
[9] See the web site for the book.
where Y_t is the market level of output, which the firm takes as exogenous, and which the firm believes follows the law of motion

    (4)   Y_{t+1} = H_0 + H_1 Y_t,

with Y_0 as a fixed initial condition.

a. Formulate the Bellman equation for the firm's problem.

b. Formulate the firm's problem as a discounted optimal linear regulator problem, being careful to describe all of the objects needed. What is the state for the firm's problem?

c. Use the Matlab program olrp.m to solve the firm's problem for the following parameter values: A_0 = 100, A_1 = .05, β = .95, d = 10, H_0 = 95.5, and H_1 = .95. Express the solution of the firm's problem in the form

    (5)   y_{t+1} = h_0 + h_1 y_t + h_2 Y_t,

giving values for the h_j's.

d. If there were n identical competitive firms all behaving according to equation (5), what would equation (5) imply for the actual law of motion (4) for the market supply Y?

e. Formulate the Euler equation for the firm's problem.
Exercise 7.2   Rational expectations

Now assume that the firm in problem 1 is "representative." We implement this idea by setting n = 1. In equilibrium, we will require that y_t = Y_t, but we don't want to impose this condition at the stage that the firm is optimizing (because we want to retain competitive behavior). Define a rational expectations equilibrium to be a pair of numbers H_0, H_1 such that if the representative firm solves the problem ascribed to it in problem 1, then the firm's optimal behavior given by equation (5) implies that y_t = Y_t ∀ t ≥ 0.

a. Use the program that you wrote for exercise 7.1 to determine which if any of the following pairs (H_0, H_1) is a rational expectations equilibrium: (i) (94.0888, .9211); (ii) (93.22, .9433); and (iii) (95.08187459215024, .95245906270392)?

b. Describe an iterative algorithm by which the program that you wrote for exercise 7.1 might be used to compute a rational expectations equilibrium. (You are not being asked actually to use the algorithm you are suggesting.)
Exercise 7.3   Maximizing welfare

A planner seeks to maximize the welfare criterion

    (6)   ∑_{t=0}^∞ β^t S_t,

where S_t is "consumer surplus plus producer surplus," defined to be

    S_t = S(Y_t, Y_{t+1}) = ∫_0^{Y_t} (A_0 − A_1 x) dx − .5d (Y_{t+1} − Y_t)².

a. Formulate the planner's Bellman equation.

b. Formulate the planner's problem as an optimal linear regulator, and solve it using the Matlab program olrp.m. Represent the solution in the form Y_{t+1} = s_0 + s_1 Y_t.
c. Compare your answer in part b with your answer to part a of exercise 7.2.
Exercise 7.4   Monopoly

A monopolist faces the industry demand curve (3) and chooses Y_t to maximize ∑_{t=0}^∞ β^t R_t where R_t = p_t Y_t − .5d (Y_{t+1} − Y_t)² and where Y_0 is given.

a. Formulate the firm's Bellman equation.

b. For the parameter values listed in exercise 7.1, formulate and solve the firm's problem using olrp.m.

c. Compare your answer in part b with the answer you obtained to part b of exercise 7.3.
Exercise 7.5   Duopoly

An industry consists of two firms that jointly face the industry-wide demand curve (3), where now Y_t = y_{1t} + y_{2t}. Firm i = 1, 2 maximizes

    (7)   ∑_{t=0}^∞ β^t R_{it}

where R_{it} = p_t y_{it} − .5d (y_{i,t+1} − y_{it})².

a. Define a Markov perfect equilibrium for this industry.
b. Formulate the Bellman equation for each firm.
c. Use the Matlab program nash.m to compute an equilibrium, assuming the parameter values listed in exercise 7.1.
Exercise 7.6   Self-control

This is a model of a human who has time-inconsistent preferences, of a type proposed by Phelps and Pollak (1968) and used by Laibson (1994).[10] The human lives from t = 0, …, T. Think of the human as actually consisting of T + 1 personalities, one for each period. Each personality is a distinct agent (i.e., a distinct utility function and constraint set). Personality T has preferences ordered by u(c_T) and personality t < T has preferences that are ordered by

    u(c_t) + δ ∑_{j=1}^{T−t} β^j u(c_{t+j}),        (7.1)

where u(·) is a twice continuously differentiable, increasing, and strictly concave function of consumption of a single good; β ∈ (0, 1), and δ ∈ (0, 1]. When δ < 1, preferences of the sequence of personalities are time-inconsistent (that is, not recursive). At each t, let there be a savings technology described by

    k_{t+1} + c_t ≤ f(k_t),        (7.2)

where f is a production function with f′ > 0, f″ ≤ 0.

a. Define a Markov perfect equilibrium for the T + 1 personalities.

b. Argue that the Markov perfect equilibrium can be computed by iterating on the following functional equations:

    V_{j+1}(k) = max_c { u(c) + βδ W_j(k′) }        (7.3a)
    W_{j+1}(k) = u[c_{j+1}(k)] + β W_j[f(k) − c_{j+1}(k)],        (7.4)

where c_{j+1}(k) is the maximizer of the right side of (7.3a) for j + 1, starting from W_0(k) = u[f(k)]. Here W_j(k) is the value of u(c_{T−j}) + βu(c_{T−j+1}) + ⋯ + β^j u(c_T), taking the decision rules c_h(k) as given for h = 0, 1, …, j.

c. State the optimization problem of the time-0 person who is given the power to dictate the choices of all subsequent persons. Write the Bellman equations for this problem. The time zero person is said to have a commitment technology for "self-control" in this problem.

[10] See Gul and Pesendorfer (2000) for a single-agent recursive representation of preferences exhibiting temptation and self-control.
