
Competition and Quality Choice in the CPU Market

Chris Nosko
Harvard University
November 2010
Abstract
This paper uses the CPU market to study how multiproduct firms generate returns
from innovation. Using a new dataset, I estimate a discrete-choice model of CPU
demand and then recover estimates of the sunk cost of product introductions. I combine
these estimates with a model of firm product choice to examine how product line
decisions change with asymmetric technological capabilities and with the competitive
environment. I use the model to show how technological leaders can use product lines as
strategic weapons, isolating competition to less desirable areas of the product spectrum.
I apply this insight to a large shift in technological leadership – Intel’s introduction of
the Core 2 Duo – and quantify the portion of returns that came from Intel’s ability to
push its principal competitor, AMD, into lower-margin product segments. I find that
competition plays a key role in determining firms’ product line decisions and that these
decisions are important in generating returns from innovation. Ignoring endogenous
product choices leads to underestimates of the social welfare losses from monopoly.

I am grateful to Ulrich Doraszelski, Lisa Kahn, Greg Lewis, Julie Mortimer, Ariel Pakes, and Alan
Sorensen for invaluable advice. Also, I thank Brett Gordon for his help in putting together the pricing
data, Che-Lin Su for helpful discussions about computational methods, and especially Jeremy Davies of
Contextworld for generously supplying downstream data. All errors are solely mine.
1 Introduction
When firms innovate, they often don’t just introduce products at the top-end of the market.
Instead, they tend to reset their whole product line in an effort to extract the most possible
profit from their innovation. Their incentives to reshape lower market segments depend
on the industry structure, especially whether it is a monopoly or an oligopoly, and the
technological capabilities of rival firms. In oligopoly, one way that firms generate profit is
by strategically using product choices to change the nature of competition in the industry.
Our ability to understand this phenomenon requires knowledge of how firms make product
choices and how these product choices change with market structure. Despite the evident
importance of competition for driving decisions about product lines, and the effect that these
decisions have on consumer welfare, most antitrust analyses downplay them, and models of
innovation almost completely ignore them.
I use the CPU market to study how imperfectly competitive firms make product decisions
and how these decisions affect their ability to generate returns from innovation. In this
market, firms offer a menu of quality-differentiated products that is often reset to integrate
new technology and to respond to actions of competitors. In 2006, Intel introduced a new
product line, called the Core 2 Duo. Hailed as "the most impressive piece of silicon the
world has ever seen,"[1] the Core 2 Duo shifted the market from relative equality between Intel
and its rival, AMD, to one firmly dominated by Intel. Interestingly, it wasn't just Intel's ability
to produce faster chips that led to dominance. Instead, much of Intel’s increased profit
came from pushing AMD out of mid-range market segments, areas where AMD still had the
technological capability to compete. I use variation from the introduction of the Core 2 Duo,
and the relative frequency of product line changes in general, to ask the following questions:
In this industry, how do firms choose the number and quality of their products? How would
these choices be different if the industry were a monopoly? And how do firms use strategic
product choices to increase returns from innovation?
[1] See the CNET article, "Intel's Core 2 Duo lives up to hype," from July 16, 2006.
My first finding relates to the role that competition plays in driving product line decisions.
When marginal costs rise relatively slowly with quality level – a stylized fact in the CPU
industry – a monopolist has little incentive to introduce a broad spectrum of products.
Instead, a monopolist can extract almost all feasible profit with a limited number of products
at the high-end of the market. In oligopoly, this strategy is no longer optimal because a
competitor can steal marketshare by introducing products at lower price points. This process
leads to a competitive equilibrium with more products spread throughout the product line.
Thus, in markets like CPUs, quality-based product separation can be driven largely by
competitive interaction rather than by a desire to discriminate between consumer types.[2] This
contrasts with the standard literature on price discrimination, which sees the introduction
of quality-differentiated products as a mechanism for extracting more revenue from high-valued
consumers.[3] This finding implies that, because product line decisions matter little for a
monopolist's profitability, resetting the product line when the monopolist innovates will play
a much smaller role in extracting profit from that innovation than it does for firms in an oligopoly.
I next find that, in oligopoly, returns from innovation come not only from the ability
to produce a better product at the top end, but also from an innovator’s ability to steal
business from rivals throughout the product line. Using a simple model, I construct an
example showing how a technological leader can isolate competition to lower margin portions
of the market, thereby increasing market power over a larger product space. I combine my
model with data from the introduction of the Core 2 Duo to quantify the role that these
business-stealing effects played in generating profit for Intel. I break apart the portion of
returns that came from Intel’s introduction of new top products from the returns that came
from strategic quality choices throughout the product spectrum. This comparison gives
an estimate of how much we would underestimate the effect of competition on innovation
incentives if we held product lines fixed. Next, I compare the profits that Intel generated in
the duopoly structure with profits that a counterfactual monopolist would have made with
the same innovation. I show that, while these returns are substantially lower than those of
an oligopolist in percentage terms, in absolute dollars the returns are very similar.

[2] In a series of theory papers, Johnson and Myatt (2003, 2006) discuss the role of competition in driving
quality choices in a Cournot setting with differentiated products.

[3] There is a long literature on using quality as a price discrimination mechanism, going back at least to Jules
Dupuit in the 19th century. The foundational modern work is Mussa and Rosen (1978), with generalizations
to multi-dimensional consumer types and/or multiple firms by Rochet and Stole (2002), Armstrong (1996),
Rochet and Chone (1998), and Armstrong and Vickers (2001).
My empirical strategy relies on a combination of institutional details, a rich dataset
containing some “exogenous” shifts, and a structural model. This industry has two firms,
and these firms compete over products that differ in relatively straightforward ways, allowing
me to write down a model that is simple enough to take to the data, but that still captures key
aspects of the industry. By using this model to estimate primitives that we don’t generally
observe in datasets – consumer preferences, marginal and sunk costs – I attempt to untangle
some of the key drivers of product line decisions. I then use these estimates to shed light on
Intel’s introduction of the Core 2 Duo, an event that I treat as the result of an exogenous
innovative process.
4
Using the structural model, I compute how a counterfactual monopolist
would choose products and compare these and the competitive outcomes to a social planner.
My dataset contains CPU list prices (when purchased in 1,000 lot units) combined with
European country and time specific data on desktop sales. Because the sales data contain
information on the CPU that shipped with the computer, I am able to construct a monthly
dataset of CPU prices and quantities sold across 8 European countries from 2003-2008. I
exploit the cross-country variation to generate estimates of a horizontal taste for Intel's vs.
AMD's products, and the near-constant flow of new chips provides variation in quality levels.
As mentioned, I use the introduction of the Core 2 Duo to consider how innovative activity
interacts with quality choice to generate returns.
The basis of the structural model is an underlying utility framework that allows for
consumer heterogeneity in willingness to pay for quality (vertical differentiation) and het-
erogeneity in brand preference across CPU companies (horizontal heterogeneity). Demand
estimation proceeds following the Pure Characteristics Model of Berry and Pakes (2007), with
some modifications tailoring the problem to my setting. I recover sunk costs of product
introductions from observing decisions on whether and when firms introduced new products
into the market. Moment inequalities (Pakes, Porter, Ho, and Ishii (2006)) allow me to
recover bounds on the sunk cost parameter.
I find that a counterfactual monopolist has little incentive to introduce a whole product
line: With a single product he can capture 98% of the profit that he earns with a full,
optimally-placed product line. Given the sunk cost estimates, a monopolist would introduce
between 1 and 3 products compared to the 8 to 10 products that exist in the competitive
market. Consumer surplus under a monopolist is found to go down by 65% compared to the
competitive outcome. Much of that comes from increased monopoly prices, but a non-trivial
13% comes from the reduction in the number of products and their inefficiently high quality levels.
I further find that the returns to innovation are higher in percentage terms in an oligopoly
than in a monopoly. My estimates indicate that Intel’s profits increased by 96% with the
introduction of the Core 2 Duo (from 95 to 180 million dollars monthly). 49% of that came
from the introduction of new products (holding old products fixed), and the rest came from
the realignment of products throughout the spectrum. Finally, a monopolist with the same
innovation would have increased profits by 17% (from 488 to 573 million). Even though a
monopolist has lower percentage returns, in dollar values, the amount is very similar to the
oligopoly outcome.
These results speak to recent antitrust enforcement in this industry. The market leader,
Intel, has been widely accused of actively working to exclude AMD from the market. A

number of regulatory agencies including the European Union Competition Commission, the
U.S. Federal Trade Commission, and the Fair Trade Commission of Japan have either fined
or investigated Intel’s behavior. Naturally, in analyzing the possible effects of a market
dominated more strongly by Intel, we would like to know how the product landscape would
change. My results indicate that the current market is quite competitive. An Intel monopoly
would result in substantial lost consumer welfare, mostly because of higher monopoly prices,
but also because of a decrease in the number of products on the market.
There are a number of recent empirical IO papers that examine topics related to mine.
Eizenberg (2008) estimates a game where downstream OEMs choose a discrete portfolio
of CPU options to offer with their PC products in a first stage, and then set prices in
a second stage.[5] In contrast, I focus on competition between the upstream firms, Intel
and AMD, and on how product line decisions affect their ability to generate returns from
innovation. Fan (2009), Crawford and Shum (2006), Mazzeo (2002), and Draganska, Mazzeo,
and Seim (2009) consider endogenous product choice in newspapers, cable television, hotels,
and ice cream, respectively. Berry and Waldfogel (1999, 2001) explore firm choice of radio
station formats using merger activity brought on by the Telecommunications Act of 1996,
and Sweeting (2007) models single-product firms and estimates sunk cost of format switching.
Two papers use the CPU market to study different topics. Song (2007) estimates a demand
system closely related to the one I use below in order to compare consumer welfare measures
to more widely used models. Goettler and Gordon (2009) model firm innovation incentives
in the face of dynamic consumers. Because of the complexity of the dynamic model, they are
forced to limit firm heterogeneity, modeling Intel and AMD as single-product firms, which
doesn’t allow for the segmentation incentives that I focus on.
This paper is organized as follows. Section 2 describes the industry, introduces the
data that will be used, and discusses changes in the CPU market that make it a suitable
environment to study nonlinear pricing and competition. Section 3 details utility primitives
and goes through their estimation. Section 4 lays out the quality choice model, which includes
the second-stage pricing game and estimation of marginal and sunk costs. Section 5 lays
out and solves counterfactuals using the estimates from earlier sections. The counterfactuals
simulate what the market would look like if it were a monopoly, run by a social planner, or
had different innovation outcomes.
[5] Eizenberg shows how to account for issues of self-selection and partial identification in these sorts of
games, an estimation problem closely related to the one in this paper.
2 The CPU Market
The market for desktop, laptop, and server CPUs is dominated by two companies: Intel and
AMD, with (respectively) approximately 80% and 18% market share as of January 2009.
This paper concentrates on the market for desktop CPUs. These are CPUs that go into
home and business machines that are used for everyday tasks. I concentrate on this market
rather than the market for laptop chips because it is more competitive (Intel dominates the
market for laptop chips) and more stable (laptop growth has been explosive over the last few
years). More data are also available for this market because enthusiasts tend to buy desktop
chips and chart their performance extensively.
Within the desktop market, each firm typically offers between 10 and 15 chip varieties
at any given time. By far the largest difference between these chips is performance. Higher
performing chips tend to have higher clockspeed (operate at a higher frequency), more high-
speed cache memory available, include multiple cores, and use more advanced process tech-
nology. Firms can and do use all of these levers to manipulate performance, but at the end
of the day a consumer need only look at how the chip performs on some benchmark
to determine its product quality.[6]
The CPU market has long been known for offering quality-differentiated product lines.
The Intel 80486 was a popular example in both the economics literature (Deneckere and
McAfee (1996)) and the popular press. Intel introduced the chip in 1989 and created
low-quality and high-quality versions of it. Strikingly, in order to create the low-quality
chip, Intel went to some cost to destroy a perfectly good high-quality chip.[7] Recent examples
include Intel's selling of a code that "unlocks" features of its G6951 processor. Consumers
purchase the chip at a stock performance level and, should they wish to access additional
performance, they can buy the code (which costs Intel nothing to provide) that makes the
chip perform better.[8]

[6] This is, of course, a simplification. E.g., chips with differing numbers of cores appeal to consumers
who do different sorts of tasks with their computers (so quality is not perfectly collapsible to a one-dimensional
metric). Nevertheless, this is a product that comes about as close as possible to differing on a single vertical
dimension.

[7] More specifically, Intel released a "DX" version that included a math co-processing chip, and an "SX"
version that did not. The co-processing chip gave a performance boost to power users but was ignored by
the mainstream software of most users. The DX version sold at a substantial premium. The story goes that
in order to create an SX chip, Intel manufactured a DX version and then incurred some cost to destroy the
connection between the CPU and the co-processor. This manufacturing process is somewhat apocryphal: at
first Intel used DX chips with properly functioning CPUs but with manufacturing defects in the co-processing
unit; later, they created a separate mask to exclude the co-processing unit completely, which decreased the
die size and hence the cost.

[8] I thank Kelly Shue for pointing this out to me.
While these anecdotes illustrate isolated incidents, the industry has eluded systematic
study on the quality-choice dimension. I believe this is because up until the end of 2003, Intel
and AMD used a rather crude segmentation mechanism: chip manufacturers concentrated
on their top chips and left their older chips to serve more price sensitive segments of the
market (at reduced prices). While segmentation was occurring, the quality choice itself was
not being made every period – instead, firms were constrained by their top performing chips
from the last period, a strategy known as waterfalling. This changed toward the end of 2003
when firms began adjusting price and quality on a regular basis to hit different segments of
the market. Appendix 1 documents this shift in the industry.

The competitive nature of the industry has fluctuated markedly over the 2000s. The
mid-2000s were the height of AMD's competitiveness, peaking at 30% marketshare at the
beginning of 2006, compared with around 10% at the beginning of 2003 and 20% at the
beginning of 2009. Figure 1 plots Intel and AMD marketshare from the end of 2003 through
the beginning of 2009.
[Figure 1 about here.]
Changing technical leadership is partially responsible for the marketshare fluctuations. From
2002-2006, AMD consistently released products whose price/performance characteristics
were similar to or beat Intel’s products. However, with the release of the Core 2 prod-
uct line at the end of 2006, Intel regained technical leadership, a position it has held
ever since.
Figures 2 and 3 tell the story of a rapidly and significantly changing market. In June
2006, Intel and AMD both offered products throughout the quality spectrum. In many
parts of the spectrum, AMD offered products that were better and cheaper than comparable
Intel products (the top panel of figure 2). In July 2006, Intel released a number of Core 2 Duo
products. The bottom panel of figure 2 shows that these products completely dominated
AMD’s offerings, substantially altering the price/quality landscape. AMD responded by
slashing prices and removing products that were no longer competitive (figure 3). By January
of 2008, the competitive nature of the market had changed so significantly that AMD was
relegated to the bottom portion of the quality spectrum, offering almost no chips at the
medium to high end. Interestingly, while AMD’s overall marketshare did not drop all the
much. Its share of more expensive chips dropped almost to 0 (figure 4).
[Figure 2 about here.]
[Figure 3 about here.]
[Figure 4 about here.]
I exploit this shift in technological leadership in two ways: First, it provides natural
variation in product characteristics that helps identify the demand system. Second, I explore
firms' reactions to this technological change, focusing on ways that it affected quality choices
and the strategic interactions between firms. The section on counterfactuals discusses this
in more detail.
In addition to variation in marketshare across time, there is also substantial variation
across countries. Figure 5 shows that AMD's marketshare in France is consistently higher
than in other countries, especially when contrasted with its southern neighbor, Spain.
This is a useful source of variation that will play a key role in identifying the horizontal
aspect of consumer heterogeneity.
[Figure 5 about here.]
2.1 Data
Data for this paper come from a variety of sources. Prices were gathered from websites
devoted to tracking the wholesale prices of Intel and AMD chips. These are prices paid by
distributors and system builders when purchased in lots of 1,000.[9]
Quantities come from Contextworld, a European firm that tracks computer sales. Con-
textworld contracts with major retailers and distributors across the region to receive point
of sales data. They then extrapolate to include retailers that they do not have contracts
with. To check the accuracy of these extrapolations, I compared aggregate Contextworld
data to other consulting firms that report their versions of these numbers, such as IDC,
and the numbers were quite similar. The Contextworld data contain characteristics such as
hard drive size, RAM, screen, and, important for my purposes, the exact CPU that went into
the computer. These data can be broken down across country, distribution channel, and
computer type (laptop or desktop). One downside to using downstream sales data is that
there is a lag between when a CPU is purchased from the upstream supplier and when it is
sold to the end user (making it into the quantities data).
Performance data were collected from various enthusiast websites. These sites and forums
take CPUs and run them through a series of benchmarks and performance tests. The end
result is a number of metrics that allow chips to be compared to each other. It is important
for this paper to use actual benchmark numbers instead of CPU speed (or some combination
of CPU speed and cache) because I am explicitly comparing performance across CPU
manufacturers with substantially different architectures. Simple clockspeed doesn’t reflect
actual performance in this case because the chip architecture interacts with numerous char-
acteristics of the chip in complex ways to actually move information through the pipeline.
[9] There is plenty of evidence that the large OEMs do not pay these list prices. Instead, at a minimum,
they get percentage discounts off of them depending on their size. As long as these are the same across
the OEMs in my data, then my substantive empirical conclusions will not be affected. It would be more
problematic if specific OEMs got specific discounts off of certain chips and not on others. I have not heard
of instances of this occurring, but given the bilateral nature of these deals, I certainly cannot rule it out.
3 Demand
Demand follows the Pure Characteristics Model of Berry and Pakes (2007) (see also Song
(2007)).[10] Consumers choose a single product from the set of available options. Conditional
on purchasing, consumers get utility as specified in equation 1:

u_{ij} = \beta_i x_j - \frac{1}{\alpha_i} p_j + \xi_j \qquad (1)

where i indexes individuals and j products.
x_j are observable product characteristics and ξ_j is a product-level unobservable. The vertical
random coefficient, α_i, measures a consumer's willingness to pay for quality (alternatively,
one could interpret it as price sensitivity).
Because I am primarily concerned with the level of competition between Intel and AMD,
I break out a dummy, d_j, from the product observables, where d_j = (d_Intel − d_AMD). d_Intel
and d_AMD are dummies for whether the product is made by AMD or by Intel.

u_{ij} = \beta x_j + \nu_i d_j - \frac{1}{\alpha_i} p_j + \xi_j \qquad (2)
I allow for consumers to have varying tastes for purchasing an Intel or an AMD product by
including the random coefficient, ν_i, on d_j. For an Intel product, ν_i enters the utility
function positively; for an AMD product, negatively. ν_i determines how substitutable
products are across firms. At one extreme, if ν_i is a constant with value 0, then two products
(from different firms) with the same characteristics would be identical from a consumer's
perspective, which gives Bertrand pricing implying zero markup. On the other hand, the
higher the mean value of ν_i for a given firm, the more market power that firm enjoys, as
consumers will prefer that firm's products over a competitor's even with inferior product
characteristics. As the variance of ν_i increases, price competition softens because the tails
of the distribution more heavily favor one firm or the other.

[10] The key difference between this model and the more commonly used discrete choice model of Berry,
Levinsohn, and Pakes (BLP) is the absence of a product- and individual-specific error term (generally denoted
ε_ij). Although appending an ε_ij would ease the computational burden in estimation (a point to which I return
below), it is problematic in this context. Here, a product is defined at the chip level. Adding a BLP-style
error term would oddly imply that consumers received independent utility draws for closely related chips
(say, products that only slightly differed in speed). The larger problem comes from considering the supply
side: consumers with iid product-level error terms provide strong incentives for firms to introduce new
products, even if they are very closely related to existing products. Firms can "generate utility" simply by
introducing a product because some consumers will get a high draw from the idiosyncratic shock even if they
got a low draw from a product with identical observables. If these shocks don't accurately represent reality,
then a model which considers firms' incentives to introduce new products will overstate profit opportunities.
These reasons for using the PCM are stressed in the original paper by Berry and Pakes. They are especially
important here because the supply side and counterfactuals explicitly consider product introduction choices.
In the CPU market, there are a number of reasons that consumers might prefer one firm's
product over another's (holding observables constant). First, both Intel and AMD advertise
heavily. The Intel Inside campaign was widely credited with generating consumer awareness
of the CPU, moving it from a commodity semiconductor part to a major part of the consumer
buying decision. Second, due to differences in architectures, AMD and Intel chips perform
different types of tasks at different speeds. For instance, those who play computer games
tend to prefer AMD chips, while those with business-oriented tasks prefer Intel. Lastly, there
are complementarities between the CPU and other components inside the computer. CPUs
from a given company can often be upgraded without changing the motherboard, video card,
etc., while changing to a CPU from a different company would require re-purchasing those
components.
Splitting apart the random coefficients into a mean and a variance term, and assuming that
α_i is distributed lognormal and ν_i normal, gives (where n_ν and n_α are standard normals):

u_{ij} = \beta x_j + (\bar{\nu} + \sigma_\nu n_\nu) d_j - \frac{1}{\exp(\bar{\alpha} + \sigma_\alpha n_\alpha)} p_j + \xi_j \qquad (3)
Because utility is only defined up to a monotonic transformation, two normalizations are
needed in order to identify the underlying components of the demand system. I first normal-
ize the utility of the outside option to 0. Without this normalization, an additive shift to all
prices would unrealistically not affect market shares. The second normalization – setting the
mean of the underlying normal on the price coefficient, ¯α to 0 – pins down the multiplicative
scale. Without it, multiplying both sides of the utility function by the same constant would
not change the implications of the model.
Define δ_j as the mean product-level quality:

\delta_j = \beta x_j + \bar{\nu} d_j + \xi_j \qquad (4)

That leads to the underlying utility function that I use:

u_{ij} = \delta_j + \sigma_\nu n_\nu d_j - \frac{1}{\exp(\sigma_\alpha n_\alpha)} p_j \qquad (5)
3.1 Estimation
The parameters to be estimated are the mean utility levels (δ_j) and the variances of the
random coefficients (σ_α, σ_ν). α_i is assumed to be lognormal and one of its parameters, µ,
is normalized to 0. ν_i is assumed to be normal with both its mean and variance estimated.
As in Berry and Pakes, to generate aggregate market shares, first consider a consumer's
product choice conditional on purchasing from firm n. I construct upper and lower cutoff
values for each product:

\underline{\Delta}_j^n = \max_{k<j} \frac{p_j - p_k}{\delta_j - \delta_k}, \qquad \bar{\Delta}_j^n = \min_{j<k} \frac{p_k - p_j}{\delta_k - \delta_j} \qquad (6)

Then all consumers with

\underline{\Delta}_j^n < \alpha_i < \bar{\Delta}_j^n \qquad (7)

will purchase product j.
Here I diverge from the original Berry and Pakes estimation routine. Because of the
structure of my model, one horizontal and one vertical characteristic with two firms, I am
able to calculate an analytical cutoff value for each α_i type that determines which consumers
purchase from each firm. Let u_{j*_n} be the utility of a consumer's preferred product conditional
on purchasing from firm n. Then, for every α_i, there is a cutoff value, ν̃, such that u_{j*_1} = u_{j*_2}.
Using the functional form of utility gives:

\tilde{\nu} = (\delta_{j^*_2} - \delta_{j^*_1}) + \frac{1}{\alpha_i} (p_{j^*_1} - p_{j^*_2}) \qquad (8)
Assuming marginal distributions f(α) and g(ν) for α and ν, market shares for product j
from firm 1 (2) are given by:

S_{j1} = \int_{\underline{\Delta}_{j1}}^{\bar{\Delta}_{j1}} \left[ 1 - G(\tilde{\nu}(p, \delta, \alpha) \mid \alpha) \right] dF(\alpha) \qquad (9)

S_{j2} = \int_{\underline{\Delta}_{j2}}^{\bar{\Delta}_{j2}} G(\tilde{\nu}(p, \delta, \alpha) \mid \alpha) \, dF(\alpha) \qquad (10)
This formulation is helpful for two reasons: First, because I don’t need to simulate consumer
types in order to calculate predicted marketshares (I can use numerical quadrature), I re-
duce noise in the model. Second, the relative ease of computing marketshares allows me
to more easily formulate the problem as a constrained maximization problem and use the
Mathematical Programming with Equilibrium Constraints (MPEC) techniques as discussed
in Su and Judd (2008).
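As an illustration of this quadrature approach, the sketch below (Python, hypothetical names, not the paper's code) computes market shares under the utility specification in equation (5), using the analytic firm cutoff of equation (8). It assumes firm 1 is Intel with d_j = +1 and, for brevity, assigns the ν split to each firm's best product for a given α while ignoring the interaction between ν_i and the outside option, which a full implementation would handle.

```python
import numpy as np
from scipy.stats import norm

def pcm_shares(delta, price, is_intel, sigma_alpha, nu_bar, sigma_nu, n_nodes=200):
    """Sketch of PCM market shares, integrating over alpha by quadrature.

    delta, price : arrays over products; is_intel : boolean array (firm dummy).
    Consumers: alpha_i = exp(sigma_alpha * n_alpha), nu_i ~ N(nu_bar, sigma_nu^2).
    The outside option is normalized to zero utility.
    """
    # Gauss-Hermite quadrature for the standard normal underlying log(alpha)
    nodes, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = w / w.sum()
    alphas = np.exp(sigma_alpha * nodes)

    shares = np.zeros(len(delta))
    for alpha, weight in zip(alphas, w):
        u = delta - price / alpha                   # utility gross of the nu*d term
        best = {}
        for f, mask in ((1, is_intel), (2, ~is_intel)):
            cand = np.where(mask & (u > 0))[0]      # products beating the outside good
            if cand.size:
                best[f] = cand[np.argmax(u[cand])]
        if not best:
            continue                                # this alpha-type buys nothing
        if len(best) == 1:                          # only one firm offers positive utility
            f, j = next(iter(best.items()))
            shares[j] += weight
            continue
        j1, j2 = best[1], best[2]
        # nu cutoff between the two firms' best offers, following eq. (8)
        nu_tilde = (delta[j2] - delta[j1]) + (price[j1] - price[j2]) / alpha
        p_firm1 = 1.0 - norm.cdf(nu_tilde, loc=nu_bar, scale=sigma_nu)
        shares[j1] += weight * p_firm1
        shares[j2] += weight * (1.0 - p_firm1)
    return shares
```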
It is common in the discrete choice estimation literature to assume that observed and
unobserved product characteristics are uncorrelated with each other. This is useful for at
least two reasons: 1) The estimation routine usually involves a regression where observed
characteristics are regressors and the unobserved characteristic is the error term, resulting
in biased coefficients without this assumption (and in the absence of further instruments).
2) It allows for characteristics of products from rival firms to be used as instruments for prices
of products from firm j. This relies on other firms’ characteristics being correlated with
price (which makes sense given that more crowded regions of the product space should
have lower markups) and uncorrelated with unobservables. If firms choose characteristics
knowing the unobservables of products produced by themselves and their competitors, then
these instruments will not be valid and the estimated parameters will be biased.
Assuming exogenous product characteristics is problematic in my model because the
whole point of the supply side is to investigate how quality choices are made. Using product
characteristics as instruments creates a situation where, without further assumptions, the
demand estimation is incompatible with the quality-choice model. My solution is to exploit
the panel structure of the data, which allows me to insert product-level dummy variables in
the estimation.
As in Nevo (2001), product level dummies change the structural error in the estimation.
The unobserved product characteristics are "soaked up" by the dummies, which subsume all
characteristics that do not change from market to market. The structural error term is now
the market-level deviation from the mean utility level. In my case this means time/country
specific deviations for an individual chip.
Estimation proceeds by minimizing a GMM objective function subject to simulated mar-
ket shares equaling actual market shares:
\min_{\sigma_\nu, \sigma_\alpha, \tilde{\delta}, \beta} \; \tilde{\xi}(\beta, \sigma)' Z W Z' \tilde{\xi}(\beta, \sigma) \qquad (11)

subject to

s_j = S_j \quad \forall j \qquad (12)
For each candidate value of the standard deviations of the random coefficients, the routine
calculates the mean utility levels that equate predicted with observed marketshares and finds
the value of the objective function by taking the residuals from an IV regression. The
IV regression has the mean product utilities on the left-hand side, and the right-hand side
includes product dummies (k_j), time dummies (k_t), and market dummies (k_m); see equation 13:

\tilde{\delta}_{jtm} = \beta_j k_j + \beta_t k_t + \beta_m k_m + \tilde{\xi}_{jtm} \qquad (13)

ξ̃_jtm is the market/time-specific error term. The product dummy variables, β_j, give the
δ_j's from the underlying utility model and are used as inputs to the supply side of the
model.[11]

[11] The downside to this estimation strategy is that recovering the underlying utility coefficients requires
an extra step of regressing the dummies on product characteristics – a regression that is only valid if the
unobserved product characteristics are uncorrelated with observed product characteristics. In this context
this means firms don't know ξ when they choose product characteristics, an assumption that is unlikely to
be true in my case. Fortunately, I'm not actually concerned with the coefficients on product characteristics
– recovering the δ_j's is sufficient for the supply side estimation and counterfactuals.
This estimation routine presents a computational challenge: unlike in BLP, where the
idiosyncratic error term ensures positive market shares for all products at all potential
parameter values, this model often predicts that some products will have 0 market share. This
occurs when the mean utility levels become "un-ordered," making some products dominated
by their neighbors. When this happens, the gradient of the constraints for those products
is 0 and standard computational techniques no longer work. However, for parameter values
where the marketshares are all non-0, this is a straightforward constrained maximization
problem. The key, then, is to get good starting values. To do this, I implement a routine that
smoothes out the marketshare constraints by viewing consumers' choices probabilistically.[12]
I then use these as starting values for the analytical market share routine. I solve this as
a mathematical program with equilibrium constraints (Su and Judd 2008) using the
optimization routine KNITRO, with marketshares computed by quadrature. Appendix 2
details this process.
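For concreteness, the following schematic shows what the constrained GMM problem in equations (11)-(13) could look like in code. It is a sketch under stated assumptions, not the paper's implementation: names are hypothetical, SciPy's trust-constr solver stands in for KNITRO, and predicted_shares is assumed to be a quadrature-based share function like the earlier sketch.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def mpec_estimate(s_obs, X, Z, W, predicted_shares, x0):
    """Schematic MPEC formulation of equations (11)-(13).

    x stacks the J mean utilities delta and the random-coefficient standard
    deviations sigma = (sigma_nu, sigma_alpha). predicted_shares(delta, sigma)
    returns model shares (e.g., computed by quadrature).
    """
    J = len(s_obs)

    def objective(x):
        delta, sigma = x[:J], x[J:]
        # concentrate out the dummy coefficients: regress delta on product,
        # time, and market dummies in X; xi is the residual of eq. (13)
        beta = np.linalg.lstsq(X, delta, rcond=None)[0]
        xi = delta - X @ beta
        g = Z.T @ xi                      # moment conditions
        return g @ W @ g                  # GMM objective, eq. (11)

    def share_gap(x):
        delta, sigma = x[:J], x[J:]
        return predicted_shares(delta, sigma) - s_obs   # eq. (12), zero at a solution

    cons = NonlinearConstraint(share_gap, 0.0, 0.0)
    return minimize(objective, x0, method="trust-constr", constraints=[cons])
```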
3.2 Instruments
Instruments other than the X’s themselves are necessary both because I estimate the standard
deviations of the random coefficients (requiring at least two extra instruments) and because
firms choose prices knowing the demand shocks, leading to potential correlation between
prices and the unobservables.
Price correlation is less of a problem than in demand system estimation without product
dummies because the unobservable here is not the mean unobservable quality level (which I
estimate as part of the δ’s), rather its idiosyncratic deviations.
The instruments I use come from the downstream data and consist of products that are
bundled in the computer systems that OEMs sell. Specifically, I use hard drive size, amount
of ram, screen size, presence of discrete video card, and the version of installed operating
system. These are all characteristics that have clear ordinal rankings where consumers prefer
more to less. Firms tend to bundle higher priced chips with better characteristics, creating
the correlation between the instruments and price that is necessary. Furthermore, decisions
about which products to bundle are made by the OEM much less frequently than CPU
prices change, making it highly plausible that the instruments
are uncorrelated with the unobserved demand shocks.

[12] This probabilistic formulation was developed by Che-Lin Su in currently unpublished work. I thank him
for sharing it with me.
3.3 Results
Table 1 presents the results of the demand estimation routine. The left-hand panel lists the
estimated quality for a selection of chips that were available at the end of 2006 and in early
2007.[13] Because the mean of the price coefficient is essentially 1, the coefficients can roughly
be thought of as the average consumer's willingness to pay (relative to the outside option) for
each chip. Using estimated mean quality levels (as opposed to performance comparisons tied
to CPU clockspeed) allows for the easy comparison of chips across generations and between
firms. I've sorted the table within firm in a manner that corresponds with my a priori view of
how these chips should be ranked.[14] For the most part, the estimated coefficients line up with
this informal ranking.[15] The right-hand panel of table 1 shows a selection of month dummies, the
country dummies, and the estimates of the standard deviations of the random coefficients.
This industry is highly seasonal, resulting in month dummies that are higher in December
compared to the rest of the year.

[13] I list these as illustrative examples that come out of the estimation. A table listing the quality levels
of all 91 chips that exist in my data would not add much and would significantly detract from readability.

[14] That is to say, sorted by chip generation, in order of better chip generations, and by product number
within chip generation, because product numbers often give a rough idea of how the CPU companies themselves
view which products are better than others.

[15] It is somewhat interesting to see that the high-end products from lower chip lines are sometimes valued
higher than the low-end chips from higher chip lines (compare, for example, the Celeron 360 and the Pentium
4 531).
[Table 1 about here.]
4 Quality Choices
Supply side competition is assumed to proceed in two stages. In a first stage, firms choose
product qualities, paying a sunk cost for moving products from their previous spot (products
are moved by introducing a new product and taking out an old one that was at the same
price level). In a second stage, firms compete in prices taking quality choices as given.
In reality, product quality choices are dynamic. Because products live for more than
one period, their placement today affects sunk costs that would need to be paid tomorrow
should they be moved. I follow most of the literature in assuming a two-period model for a
number of reasons: First, the main point of this paper concerns ways in which competition
interacts with product placement decisions, and to understand these incentives, dynamics
are not necessary. Second, in order to explore changes in market primitives it is necessary
to solve counterfactuals under different circumstances. Running these counterfactuals as
full dynamic games is computationally impossible because the state space increases with
the number of products. My approach is to solve the full game as a two-period model and
explore how the game shifts under dynamics with a more limited number of products. I note
in footnotes throughout where I have also computed dynamic versions of the problem.
This section starts by laying out the second stage pricing game and discusses marginal
costs (which fall out of the pricing game). Then, the first stage quality choice game is
formalized and used as the basis of the sunk cost estimation routine.
4.1 Second Stage: Pricing equation
Firms are assumed to make pricing decisions simultaneously with full knowledge of the
structural error terms. Because of the horizontal heterogeneity embedded in the demand
system, a product from AMD and a product from Intel with exactly the same observable
and unobservable characteristics will not be perfect substitutes.
Taking qualities and number of products as given, the profit maximization problem for
firm 1 is:
\max_{p_1} \; \Pi_1 = \sum_{m=1}^{M} \sum_{j=1}^{J} p_{j1}\, MS_{mj1}(p, \delta) - C_{j1} \qquad (14)

where m indexes countries and j products.
The maximization problem assumes that prices don’t vary from country to country (arbitrage
would prevent prices from diverging too much) and that marginal costs are the same across
countries.[16]

[16] The p's and C's have product but not country subscripts, and the problem for the firm maximizes over
the sum of profits from the individual countries (m).
This yields a first order condition of:
\frac{\partial \Pi_1}{\partial p_{j1}} = \sum_{m=1}^{M} \Bigg[ (p_{(j-1)1} - C_{(j-1)1}) \frac{\partial S_{m(j-1)1}(p, \delta)}{\partial p_{j1}} + (p_{(j+1)1} - C_{(j+1)1}) \frac{\partial S_{m(j+1)1}(p, \delta)}{\partial p_{j1}}
+ (p_{j1} - C_{j1}) \frac{\partial S_{mj1}(p, \delta)}{\partial p_{j1}} + S_{mj1}(p, \delta) \Bigg] \qquad (15)
The equilibrium is a fixed point in p for the two firms.
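Since the model delivers the share derivatives (analytically, as derived next, or numerically), this fixed point can be computed by iterating on each firm's stacked first-order conditions. The sketch below is not the paper's code; names are hypothetical, the country dimension is collapsed, and derivatives are numerical for brevity.

```python
import numpy as np

def solve_prices(costs, firm, share_fn, p0, tol=1e-8, max_iter=500):
    """Solve the multiproduct Bertrand FOCs (eq. 15) by fixed-point iteration.

    share_fn(p) returns market shares S(p); 'firm' marks which firm owns each
    product, so only own-firm cross effects are internalized in the markup.
    """
    p = p0.copy()
    h = 1e-4
    for _ in range(max_iter):
        J = len(p)
        s = share_fn(p)
        dS = np.zeros((J, J))                      # numerical Jacobian dS_i/dp_j
        for j in range(J):
            pj = p.copy(); pj[j] += h
            dS[:, j] = (share_fn(pj) - s) / h
        p_new = p.copy()
        for f in np.unique(firm):
            own = np.where(firm == f)[0]
            D = dS[np.ix_(own, own)]               # own-firm block of derivatives
            # FOC for firm f: s + D'(p - c) = 0  =>  p = c - inv(D') s
            p_new[own] = costs[own] - np.linalg.solve(D.T, s[own])
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```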
The own price derivative for product j is given by:
\frac{\partial S_{j1}}{\partial p_{j1}} = \left[ 1 - G(\tilde{\nu}(p, \delta, \bar{\Delta}_{j1}) \mid \bar{\Delta}_{j1}) \right] f(\bar{\Delta}_{j1}) \frac{\partial \bar{\Delta}_{j1}}{\partial p_{j1}}
- \left[ 1 - G(\tilde{\nu}(p, \delta, \underline{\Delta}_{j1}) \mid \underline{\Delta}_{j1}) \right] f(\underline{\Delta}_{j1}) \frac{\partial \underline{\Delta}_{j1}}{\partial p_{j1}}
- \int_{\underline{\Delta}_{j1}}^{\bar{\Delta}_{j1}} g(\tilde{\nu}(p, \delta, \alpha) \mid \alpha) \frac{\partial \tilde{\nu}(p, \delta, \alpha)}{\partial p_{j1}} f(\alpha)\, d\alpha \qquad (16)
This is a very natural equation. The first two terms quantify the consumers that are lost
to the products above and below in the product space. However, not all consumers are lost:
only those that were already purchasing products from firm 1. If all consumers purchased
from firm 1, so that G(ν̃(p, δ, ∆_{j1})|∆_{j1}) = 0, then the model collapses back to the vertical
model. The third term quantifies consumers that are lost to firm 2 through a change in ν̃.
Notice that, while only consumers at the boundary of indifference between products are lost
to neighboring products, consumers throughout the spectrum are lost to firm 2.
Because neighboring products are also owned by firm 1, losing consumers above and
below is internalized by the firm. It’s straightforward to compute the cross price derivatives
in these directions:
\frac{\partial S_{(j+1)1}}{\partial p_{j1}} = -\left[ 1 - G(\tilde{\nu}(p, \delta, \underline{\Delta}_{(j+1)1}) \mid \underline{\Delta}_{(j+1)1}) \right] f(\underline{\Delta}_{(j+1)1}) \frac{\partial \underline{\Delta}_{(j+1)1}}{\partial p_{j1}} \qquad (17)
4.1.1 Marginal Costs
Marginal costs are assumed to follow a Markov process. A chip’s cost in any given period is
a function of last period’s cost and an idiosyncratic error term:
c_{j(t+1)} = \beta_j + \rho\, c_{jt} + \epsilon^{c}_{jt} \qquad (18)
Given the demand parameters, the first order conditions (equation 15) define a set of
nonlinear equations that can be used to back out the marginal cost of a chip in any period.[17]
Estimation begins by solving these systems of equations period by period, giving cost estimates
for each chip in each period.[18] These estimated costs are then regressed on last period's costs
to get an estimate of ρ.
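For illustration, inverting the same first-order conditions at observed prices recovers marginal costs directly. This is a hedged sketch with hypothetical names, not the paper's code; share derivatives are numerical and the country dimension is collapsed.

```python
import numpy as np

def implied_marginal_costs(prices, firm, share_fn):
    """Back out marginal costs from the pricing FOCs (eq. 15) at observed prices.

    Rearranging the multiproduct Bertrand FOC, s + D'(p - c) = 0, for each firm
    gives c = p + inv(D') s, where D is the own-firm block of share derivatives.
    """
    J = len(prices)
    s = share_fn(prices)
    dS = np.zeros((J, J))
    h = 1e-4
    for j in range(J):
        pj = prices.copy(); pj[j] += h
        dS[:, j] = (share_fn(pj) - s) / h
    costs = prices.copy()
    for f in np.unique(firm):
        own = np.where(firm == f)[0]
        D = dS[np.ix_(own, own)]
        costs[own] = prices[own] + np.linalg.solve(D.T, s[own])
    return costs

# With per-period cost estimates in hand, rho in equation (18) comes from a
# simple pooled regression of c_{j,t+1} on c_{j,t} and product dummies.
```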
Figure 6 graphs estimated marginal costs for AMD and Intel for two months: May
2006 and May 2007. At the low end of the product spectrum, costs for AMD and Intel
are relatively similar, but AMD’s costs rise faster with quality than Intel’s, resulting in
significantly higher costs for producing high quality chips. This asymmetry in costs is a
large part of Intel’s competitive advantage. Between May 2006 and May 2007, costs for both

companies fell. These reductions came from the introduction of new technology that allowed
for higher quality chips to be produced at lower cost.
[Figure 6 about here.]
Using estimates of marginal costs across all months, I can run equation 18 to get an estimate
of ρ. Doing this yields a reasonable value of .9102, with a standard error of .04. This
indicates that marginal costs are falling over time, consistent with the earlier graph.
[17] While Caplin and Nalebuff (1991) show a unique pricing equilibrium for this class of games with single-
product firms, as far as I know, there is no equivalent proof for multi-product firms. The potential for
multiple equilibria does not affect the estimation because, if there were multiple equilibria, the routine
would simply pick out the one that is played in the data. However, this could be more problematic in
the sunk cost estimation (where I compute counterfactual pricing equilibria). I fall back on the standby of
computing equilibria for a wide variety of starting values and see that they always converge to the same
point, leading me to hypothesize that, at least at the estimated demand parameters, the equilibrium to this
pricing game is unique.

[18] In this industry, lower quality chips are sometimes the byproduct of higher quality chips. Because of
variation in the production process, sometimes chips that are designed to be high performance chips don't
pass quality checks but can be salvaged and used at lower performance levels. To the extent that firms take
this process into account when pricing, my marginal cost estimates will accommodate this behavior. Indeed,
it's one of the advantages of estimating marginal costs.
4.2 First Stage: Quality Choices
I extend the game back to a first stage where firms choose quality levels and number of
products to be offered. When deciding whether to introduce a product of a given quality,
firms have at least three things to consider: 1) Every product introduction incurs a sunk
cost. 2) If they introduce a product with similar characteristics to the competition, then
markups will be lower due to closer substitution patterns. And 3) Firms would like to use
quality to 2nd-degree price discriminate. Factors 1 and 2 push the firm toward having fewer
products in spaces that are uninhabited by their competition. Factor 3 pushes toward having
a broad, closely-spaced product line. The counterbalancing of these forces will determine
the equilibrium.
The formal problem for firm 1 is laid out in equations 19 and 20.
\max_{\delta_1} \; E\left[ \pi(\delta_{1t}, \delta_{2t}, p^{*}_{1t}, p^{*}_{2t}, \tilde{\xi}_t, \epsilon_{ct}) \right] - SC(\delta_{1(t-1,t)}) \qquad (19)

\max(\delta_{1t}) \leq \bar{\delta}_{1t} \qquad (20)
Firm 1 chooses its vector of qualities (δ_1) to maximize the expected value of the profit function.
Profit depends on the chips that each firm offers in that period, optimal prices as specified in
equation 14, and the structural error terms. The maximum quality that the firm can produce
in every period is given by δ̄_1t. I assume it evolves exogenously and that firms are aware
of its evolution. Since firms make quality choices without knowledge of the structural error
terms, the expectation operator is over the realization of demand and cost shocks.[19] Firms
are forced to pay a sunk cost to change chip qualities (SC) that depends on their previous
set of products in the market.

[19] See Eizenberg (2008) for a full discussion of the sample selection problems involved in estimating these
kinds of models. I follow him in assuming that firms do not observe the per-period shocks when they make
their product decisions.
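To make the structure of this first-stage problem explicit, here is a brute-force sketch in Python. It is not the paper's algorithm, the names are hypothetical, and exhaustive enumeration would be far too slow for a realistic grid; it only illustrates the objective in equations (19)-(20): expected second-stage profit net of a per-introduction sunk cost.

```python
import numpy as np
from itertools import combinations

def best_product_line(delta_grid, delta_prev, rival_deltas, expected_profit,
                      theta, max_products=10):
    """Brute-force sketch of the first-stage quality choice in eqs. (19)-(20).

    delta_grid: candidate quality locations, already capped at delta_bar.
    delta_prev: last period's locations (products left at one of these avoid
    the sunk cost, as in the specification below).
    expected_profit(own_deltas, rival_deltas): solves the second-stage pricing
    game and returns expected profit for the firm.
    theta: sunk cost per product introduced or moved.
    """
    best_val, best_line = -np.inf, None
    for k in range(1, max_products + 1):
        for line in combinations(delta_grid, k):
            line = np.array(line)
            n_new = int(np.sum(~np.isin(line, delta_prev)))   # products paying theta
            val = expected_profit(line, rival_deltas) - theta * n_new
            if val > best_val:
                best_val, best_line = val, line
    return best_line, best_val
```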
4.2.1 Sunk Cost Estimation
I use information on products that were introduced and potential products that were not
introduced to estimate a sunk cost of product introduction. Consider a firm’s decision to
introduce a new product. If they introduce it, then it must have been the case that it was
more profitable than not introducing it and not paying the sunk cost. Similarly, if they
decide not to introduce a product, then it must have been the case that the firm would
have been worse off introducing the product and paying a sunk cost. These two optimality
conditions allow me to implement an inequality estimator in the style of Pakes, Porter, Ho,
and Ishii (2006).
Firms are assumed to pay a constant sunk cost for introducing a new product, irrespective
of where in the quality space that product is located. Letting δ_jp denote the location of
products in the previous period, sunk costs are given by:

SC = \sum_{j=1}^{J} I(\delta_j \neq \delta_{jp})\, \theta \qquad (21)
This procedure uses the estimated demand system to calculate a pricing equilibrium and
consequent profit for actions that the firms could have taken but decided not to take. On
average, observed actions are assumed to be more profitable than the unobserved potential
actions. The difference between profit from observed and unobserved actions is used to
form moments. Without further assumptions, sunk costs are not point identified; rather, the
moment conditions allow me to identify bounds.
One side of the bound comes from looking at products that a firm could have moved (by
adding a new product to the line and removing the old product) but decided not to. If they
moved the product, they would in expectation increase profits but also incur the sunk cost
of product introduction, θ. Letting π(δ'_j) represent profits from a product movement to any
other position, inequality 22 must hold:

\theta \geq E\left[ \pi(\delta'_j) - \pi(\delta_j) \right] \qquad (22)
The other side of the bound comes from looking at products that firms did indeed decide to
move. In this case, they could have left the product in its old position and foregone the sunk
cost, θ. That they decided to move the product indicates that the firm expected to make
more profit from its movement than in keeping it in the same place. Inequality 23 formalizes
this concept:

\theta \leq E\left[ \pi(\delta_j) - \pi(\delta_{jp}) \right] \qquad (23)
Firms make quality choices in expectation, without knowing the realization of demand and
cost shocks. It is also assumed that firms make quality choices without knowing the quality
choices that their competition will make. Let ν_sc denote the difference between profit
expectations and realized profit. I assume that ν_sc is unobserved by both the econometrician
and the firm.[20] Denoting r(δ_j) as the estimate of observed profit that comes from the demand
and cost estimates, the relationship between π(δ_j) and r(δ_j) is given by:

E\left[ \pi(\delta_j) \right] = r(\delta_j) + \nu_{sc} \qquad (24)

[20] Pakes, Porter, Ho, and Ishii allow for a second error that firms know but that is unobserved by the
econometrician. With appropriate instruments, this can be included in the model. I don't allow for this
second error, implicitly assuming that sunk costs are the same across firms and across time.
As long as firms have correct expectations on average, ν_sc will go to 0 as the sample size
goes to infinity. This gives:
\operatorname*{plim}_{J \to \infty} \frac{1}{J} \sum_{j:\, \delta_j \neq \delta_{jp}} \left( r(\delta_j) - r(\delta_{jp}) \right) \;\geq\; \theta \;\geq\; \operatorname*{plim}_{J \to \infty} \frac{1}{J} \sum_{j:\, \delta_j = \delta_{jp}} \left( r(\delta'_j) - r(\delta_j) \right) \qquad (25)
For δ'_j, the strategy that a firm could have used but decided not to, I find the best possible
placement for a product by searching across the δ space and recomputing the competitive
pricing equilibrium for each possible spot.
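Given per-product profit estimates from these counterfactual pricing equilibria, the bounds in inequality (25) reduce to two sample averages. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def sunk_cost_bounds(r_best_alt, r_stay, r_new, r_old):
    """Sample-average bounds on the sunk cost theta, following inequality (25).

    r_best_alt, r_stay: for products NOT moved, estimated profit at the best
    alternative placement vs. at the observed (unchanged) placement.
    r_new, r_old: for products that WERE moved, estimated profit at the new
    placement vs. at the previous placement.
    """
    lower = np.mean(np.asarray(r_best_alt) - np.asarray(r_stay))   # from eq. (22)
    upper = np.mean(np.asarray(r_new) - np.asarray(r_old))         # from eq. (23)
    return lower, upper
```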
Using this procedure, I estimate that the sunk costs of introducing a product fall in the
range of $1,236,000 to $3,412,000. This is rather small compared to profits but consistent
with the idea that once a chip generation is introduced, adding an additional product doesn’t
require a whole lot of extra work.
5 Counterfactuals
Putting together the estimates described above with the structure of the model allows me
to construct counterfactuals that speak to the role that competition plays in this market.
Key parts of the counterfactuals examine how consumer welfare changes under different
scenarios. To be specific about this, consumer welfare is given by:
CS_{intel} = \sum_{j} \int_{\underline{\Delta}_{j1}}^{\bar{\Delta}_{j1}} \int_{\tilde{\nu}}^{\bar{\nu}} \alpha_i \left( \delta_j - \frac{1}{\alpha_i} p_j + \nu_i \right) f(\nu_i) f(\alpha_i)\, d\nu_i\, d\alpha_i \qquad (26)

CS_{amd} = \sum_{j} \int_{\underline{\Delta}_{j2}}^{\bar{\Delta}_{j2}} \int_{\underline{\nu}}^{\tilde{\nu}} \alpha_i \left( \delta_j - \frac{1}{\alpha_i} p_j + \nu_i \right) f(\nu_i) f(\alpha_i)\, d\nu_i\, d\alpha_i \qquad (27)
where CS_intel and CS_amd break out the consumer surplus of consumers who purchase each
firm's products. I multiply by the price coefficient, α_i, to translate utility into a dollar value.[21]

[21] Because ν_i is assumed to be distributed normal and therefore has infinite support, the very tail ends of
this distribution massively change the consumer surplus calculation. In order to prevent this, I define ν̲ and
ν̄ as the inverse of the normal distribution at p = .001 and .999. In other words, I chop off the very extreme
tails of the ν_i distribution.
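As an illustration, consumer surplus can also be computed numerically from the utility specification in equation (5) rather than by evaluating the integrals in (26)-(27) analytically. The sketch below is not the paper's code: names are hypothetical, d_j is taken as +1 for Intel and −1 for AMD, and the ν_i distribution is truncated at its 0.1% and 99.9% quantiles as in footnote 21.

```python
import numpy as np
from scipy.stats import norm

def consumer_surplus(delta, price, is_intel, sigma_alpha, nu_bar, sigma_nu,
                     market_size=1.0, n_alpha=200, n_nu=200):
    """Sketch of the consumer-surplus calculation behind equations (26)-(27)."""
    a_nodes, a_w = np.polynomial.hermite_e.hermegauss(n_alpha)
    a_w = a_w / a_w.sum()
    alphas = np.exp(sigma_alpha * a_nodes)

    nu_lo, nu_hi = norm.ppf([0.001, 0.999], loc=nu_bar, scale=sigma_nu)
    nu_grid = np.linspace(nu_lo, nu_hi, n_nu)
    nu_w = norm.pdf(nu_grid, loc=nu_bar, scale=sigma_nu)
    nu_w = nu_w / nu_w.sum()                      # renormalize on the truncated support

    cs = {"intel": 0.0, "amd": 0.0}
    for alpha, wa in zip(alphas, a_w):
        for nu, wn in zip(nu_grid, nu_w):
            u = delta + nu * np.where(is_intel, 1.0, -1.0) - price / alpha
            j = np.argmax(u)
            if u[j] <= 0.0:                       # outside option yields zero surplus
                continue
            key = "intel" if is_intel[j] else "amd"
            cs[key] += wa * wn * alpha * u[j]     # multiply by alpha_i to get dollars
    return {k: market_size * v for k, v in cs.items()}
```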
For many of the counterfactuals, I use the month of May 2007 as a starting point for
the analysis. In May 2007, Intel offered 9 products with differing prices while AMD had
7. Estimated profit for Intel was $186 million and for AMD, $23 million. Total estimated
consumer welfare was $1.63 billion.
Figure 7 shows Intel’s estimated markup for May 2007. The solid line is Intel’s estimated
marginal cost for that month. The dashed line shows prices. Margins are relatively low at
the low-quality end of the spectrum, where AMD competes strongly, but rise as quality
goes up, both because consumers there are less price sensitive and because AMD doesn't
have products that can compete as strongly.
[Figure 7 about here.]
5.1 Monopoly
Moving to a monopoly from an oligopoly has (at least) 3 effects on consumer welfare: 1) If
consumers have a taste for AMD’s products, substituting to Intel’s products or the outside
good will cause a welfare loss for those consumers. 2) Prices will be higher. 3) Monopolists
have fewer incentives to introduce products into the market and for the products that are
introduced, monopolists will choose different quality levels.[22]
The goal of this counterfactual
is to separate out the different components of consumer welfare change and quantify potential
social loss or gain.
The first set of counterfactuals, detailed in table 2, decomposes a shift to monopoly into
the profit gains and welfare losses from each of these mechanisms. Removing AMD but
fixing products and prices doesn’t change profit (up 2.8%) or consumer welfare (down 1.2%)
very much. The relatively small change in consumer welfare indicates that consumer taste
for AMD products is not all that strong. Individuals who were purchasing AMD products
suffer some loss from purchasing an Intel product or substituting to the outside option, but
their lost utility is not large.
[Table 2 about here.]
Next, I solve the monopolist’s profit maximization problem, keeping product qualities fixed
at the competitive level. Figure 8 plots markups as a function of quality (δ). Prices rise
and Intel’s profit goes to $562.2 million. Meanwhile consumer welfare drops 51% to $791.3
million. The increase in prices is telling: It indicates that despite AMD’s relatively small
marketshare, this industry is quite competitive. Removing AMD from the market would
allow Intel to significantly raise prices leading to large consumer welfare losses.
[Figure 8 about here.]
5.1.1 Optimal Product Placement
Consistent with equation 19, a monopolist solves for optimal quality choices knowing the
Markov process that costs follow (and the ρ parameter of that process) but without knowing
[22] In theory not all consumers are necessarily made worse off by a monopolist: if quality levels change such
that consumer types are served that weren't served under oligopoly, or prices go down on some products in
response to monopolist segmentation, then welfare for those consumers could go up. In practice, it is highly
unlikely that these gains will swamp consumer welfare losses for other consumers, and with the structure
that I impose I haven't been able to find parameter values for which this happens. Of course, a monopolist
will also be more profitable than an oligopolist, and that profit may offset the consumer welfare change,
leading to greater social surplus.