Introduction to Financial Econometrics
Hypothesis Testing in the Market Model
Eric Zivot
Department of Economics
University of Washington
February 29, 2000
1 Hypothesis Testing in the Market Model
In this chapter, we illustrate how to carry out some simple hypothesis tests concerning
the parameters of the excess returns market model regression.
1.1 A Review of Hypothesis Testing Concepts
To be completed.
1.2 Testing the Restriction α = 0
Using the market model regression,
$$
R_t = \alpha + \beta R_{Mt} + \varepsilon_t, \quad t = 1, \ldots, T, \qquad (1)
$$
$$
\varepsilon_t \sim \text{iid} \; N(0, \sigma^2_{\varepsilon}), \quad \varepsilon_t \ \text{is independent of} \ R_{Mt},
$$
consider testing the null or maintained hypothesis α = 0 against the alternative that α ≠ 0:
$$
H_0 : \alpha = 0 \quad \text{vs.} \quad H_1 : \alpha \neq 0.
$$
If $H_0$ is true then the market model regression becomes
$$
R_t = \beta R_{Mt} + \varepsilon_t
$$
and $E[R_t \mid R_{Mt} = r_{Mt}] = \beta r_{Mt}$. We will reject the null hypothesis, $H_0 : \alpha = 0$, if
the estimated value of α is either much larger than zero or much smaller than zero.
Assuming $H_0 : \alpha = 0$ is true, $\hat{\alpha} \sim N(0, SE(\hat{\alpha})^2)$, and so it is fairly unlikely that $\hat{\alpha}$ will be more than $2 \cdot SE(\hat{\alpha})$ away from zero. To determine how big the estimated value of $\alpha$ needs to be in order to reject the null hypothesis we use the t-statistic
$$
t_{\alpha=0} = \frac{\hat{\alpha} - 0}{\widehat{SE}(\hat{\alpha})},
$$
where $\hat{\alpha}$ is the least squares estimate of $\alpha$ and $\widehat{SE}(\hat{\alpha})$ is its estimated standard error.
The value of the t-statistic, $t_{\alpha=0}$, gives the number of estimated standard errors that $\hat{\alpha}$ is from zero. If the absolute value of $t_{\alpha=0}$ is much larger than 2 then the data cast considerable doubt on the null hypothesis $\alpha = 0$, whereas if it is less than 2 the data are in support of the null hypothesis.¹ To determine how big $|t_{\alpha=0}|$ needs to be to reject the null, we use the fact that, under the statistical assumptions of the market model and assuming the null hypothesis is true,
$$
t_{\alpha=0} \sim \text{Student-}t \ \text{with} \ T - 2 \ \text{degrees of freedom}.
$$
If we set the significance level (the probability that we reject the null given that the
null is true) of our test at, say, 5% then our decision rule is
Reject $H_0 : \alpha = 0$ at the 5% level if
$$
|t_{\alpha=0}| > t_{T-2}(0.025),
$$
where $t_{T-2}(0.025)$ is the 2.5% critical value from a Student-t distribution with $T - 2$ degrees of freedom.
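To make the testing procedure concrete, the following Python sketch simulates data from the market model, estimates α and β by least squares, and carries out the two-sided test of α = 0. The simulated returns, parameter values, and variable names are hypothetical, chosen only to illustrate the mechanics; they are not part of the chapter's IBM analysis.

```python
# A minimal sketch of the alpha = 0 test on simulated (hypothetical) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical market model parameters and sample size
T = 60                                   # e.g. five years of monthly data
alpha_true, beta_true, sigma_eps = 0.0, 0.5, 0.05

# Simulate excess returns: R_t = alpha + beta * R_Mt + eps_t
r_market = rng.normal(0.01, 0.05, T)
r_asset = alpha_true + beta_true * r_market + rng.normal(0.0, sigma_eps, T)

# Least squares estimation of alpha and beta
X = np.column_stack([np.ones(T), r_market])
coef, *_ = np.linalg.lstsq(X, r_asset, rcond=None)
alpha_hat, beta_hat = coef

# Estimated residual variance and standard errors
resid = r_asset - X @ coef
sigma2_hat = resid @ resid / (T - 2)
cov_hat = sigma2_hat * np.linalg.inv(X.T @ X)
se_alpha = np.sqrt(cov_hat[0, 0])

# t-statistic for H0: alpha = 0 and the two-sided 5% critical value
t_alpha = (alpha_hat - 0) / se_alpha
crit = stats.t.ppf(0.975, df=T - 2)      # t_{T-2}(0.025), approximately 2

print(f"alpha_hat = {alpha_hat:.4f}, SE = {se_alpha:.4f}, t = {t_alpha:.3f}")
print("reject H0: alpha = 0" if abs(t_alpha) > crit else "do not reject H0")
```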
Example 1 Market Model Regression for IBM
Consider the estimated MM regression equation for IBM using monthly data from
January 1978 through December 1982:
$$
\hat{R}_{IBM,t} = \underset{(0.0068)}{-0.0002} + \underset{(0.0888)}{0.3390} \cdot R_{Mt}, \qquad R^2 = 0.20, \quad \hat{\sigma}_{\varepsilon} = 0.0524,
$$
where the estimated standard errors are in parentheses. Here $\hat{\alpha} = -0.0002$, which is very close to zero, and the estimated standard error, $\widehat{SE}(\hat{\alpha}) = 0.0068$, is much larger than $\hat{\alpha}$ in absolute value. The t-statistic for testing $H_0 : \alpha = 0$ vs. $H_1 : \alpha \neq 0$ is
$$
t_{\alpha=0} = \frac{-0.0002 - 0}{0.0068} = -0.0363,
$$
so that $\hat{\alpha}$ is only 0.0363 estimated standard errors from zero. Using a 5% significance level, $t_{58}(0.025) \approx 2$ and
$$
|t_{\alpha=0}| = 0.0363 < 2,
$$
so we do not reject $H_0 : \alpha = 0$ at the 5% level.
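The decision step in this example is easy to check numerically. The snippet below takes the t-statistic and degrees of freedom reported above as given and compares the statistic with the exact 2.5% critical value rather than the rule-of-thumb value of 2; the use of scipy here is a convenience, not part of the original analysis.

```python
# Checking the decision in Example 1 using the reported t-statistic.
from scipy import stats

t_stat = -0.0363                 # t-statistic as reported in the text
df = 58                          # T - 2 with T = 60 monthly observations
crit = stats.t.ppf(0.975, df)    # exact t_58(0.025), roughly 2.00

print(f"|t| = {abs(t_stat):.4f}, critical value = {crit:.4f}")
# |t| < crit, so H0: alpha = 0 is not rejected at the 5% level
```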
¹This interpretation of the t-statistic relies on the fact that, assuming the null hypothesis is true so that $\alpha = 0$, $\hat{\alpha}$ is normally distributed with mean 0 and estimated variance $\widehat{SE}(\hat{\alpha})^2$.
1.3 Testing Hypotheses about β
In the market model regression β measures the contribution of an asset to the variability of the market index portfolio. One hypothesis of interest is to test if the asset
has the same level of risk as the market index against the alternative that the risk is
different from the market:
$$
H_0 : \beta = 1 \quad \text{vs.} \quad H_1 : \beta \neq 1.
$$
The data cast doubt on this hypothesis if the estimated value of β is much different
from one. This hypothesis can be tested using the t-statistic
$$
t_{\beta=1} = \frac{\hat{\beta} - 1}{\widehat{SE}(\hat{\beta})},
$$
which measures how many estimated standard errors the least squares estimate of $\beta$ is from one. The null hypothesis is rejected at the 5% level, say, if $|t_{\beta=1}| > t_{T-2}(0.025)$. Notice that this is a two-sided test.
Alternatively, one might want to test the hypothesis that the asset has the same risk as the market index against the alternative that the risk is less than that of the market:
$$
H_0 : \beta = 1 \quad \text{vs.} \quad H_1 : \beta < 1.
$$
Notice that this is a one-sided test. We will reject the null hypothesis only if the estimated value of $\beta$ is much less than one. The t-statistic for testing this null hypothesis is the same as before, but the decision rule is different. Now we reject the null at the 5% level if
$$
t_{\beta=1} < -t_{T-2}(0.05),
$$
where $t_{T-2}(0.05)$ is the one-sided 5% critical value of the Student-t distribution with $T - 2$ degrees of freedom.
Example 2 MM Regression for IBM cont’d
Continuing with the previous example, consider testing $H_0 : \beta = 1$ vs. $H_1 : \beta \neq 1$. Notice that the estimated value of $\beta$ is 0.3390, with an estimated standard error of 0.0888, and is fairly far from the hypothesized value $\beta = 1$. The t-statistic for testing $\beta = 1$ is
$$
t_{\beta=1} = \frac{0.3390 - 1}{0.0888} = -7.444,
$$
which tells us that $\hat{\beta}$ is more than 7 estimated standard errors below one. Since $t_{58}(0.025) \approx 2$, we easily reject the hypothesis that $\beta = 1$.
Now consider testing $H_0 : \beta = 1$ vs. $H_1 : \beta < 1$. The t-statistic is still $-7.444$, but the critical value used for the test is now $-t_{58}(0.05) \approx -1.671$. Clearly,
$$
t_{\beta=1} = -7.444 < -1.671 = -t_{58}(0.05),
$$
so we reject the null hypothesis at the 5% level.
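Both decisions in this example can be reproduced directly from the reported estimate and standard error. The snippet below is a sketch that assumes scipy for the exact Student-t critical values; it is not part of the original analysis.

```python
# Reproducing the two tests on beta from Example 2.
from scipy import stats

beta_hat, se_beta, df = 0.3390, 0.0888, 58
t_beta = (beta_hat - 1) / se_beta          # about -7.444

# Two-sided test: H0: beta = 1 vs. H1: beta != 1
crit_two = stats.t.ppf(0.975, df)          # about 2.00
print(abs(t_beta) > crit_two)              # True -> reject H0

# One-sided test: H0: beta = 1 vs. H1: beta < 1
crit_one = stats.t.ppf(0.95, df)           # about 1.671
print(t_beta < -crit_one)                  # True -> reject H0
```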
1.4 Testing Joint Hypotheses about α and β
Often it is of interest to formulate hypothesis tests that involve both α and β. For
example, consider the joint hypothesis that α = 0 and β = 1:
$$
H_0 : \alpha = 0 \ \text{and} \ \beta = 1.
$$
The null will be rejected if either $\alpha \neq 0$, $\beta \neq 1$, or both. Thus the alternative is formulated as
$$
H_1 : \alpha \neq 0, \ \text{or} \ \beta \neq 1, \ \text{or} \ \alpha \neq 0 \ \text{and} \ \beta \neq 1.
$$
This type of joint hypothesis is easily tested using a so-called F-test. The idea behind
the F-test is to estimate the model imposing the restrictions specified under the null
hypothesis and compare the fit of the restricted model to the fit of the model with
no restrictions imposed.
The fit of the unrestricted (UR) excess return market model is measured by the
(unrestricted) sum of squared residuals (SSR)
$$
SSR_{UR} = SSR(\hat{\alpha}, \hat{\beta}) = \sum_{t=1}^{T} \hat{\varepsilon}_t^{\,2} = \sum_{t=1}^{T} \left( R_t - \hat{\alpha} - \hat{\beta} R_{Mt} \right)^2.
$$
Recall that this is the quantity minimized by the least squares algorithm. Now, the market model imposing the restrictions under $H_0$ is
$$
R_t = 0 + 1 \cdot R_{Mt} + \varepsilon_t = R_{Mt} + \varepsilon_t.
$$
Notice that there are no parameters to be estimated in this model, which can be seen by subtracting $R_{Mt}$ from both sides of the restricted model to give
$$
R_t - R_{Mt} = \tilde{\varepsilon}_t.
$$
The fit of the restricted (R) model is then measured by the restricted sum of squared
residuals
$$
SSR_R = SSR(\alpha = 0, \beta = 1) = \sum_{t=1}^{T} \tilde{\varepsilon}_t^{\,2} = \sum_{t=1}^{T} \left( R_t - R_{Mt} \right)^2.
$$
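To make the two sums of squares concrete, the sketch below computes SSR_UR from least squares residuals and SSR_R from the restricted residuals R_t − R_Mt on simulated data; the simulated returns and parameter values are hypothetical and serve only to illustrate the calculation.

```python
# Computing the unrestricted and restricted sums of squared residuals.
import numpy as np

rng = np.random.default_rng(1)
T = 60
r_market = rng.normal(0.01, 0.05, T)                     # hypothetical excess market returns
r_asset = 0.0 + 0.5 * r_market + rng.normal(0, 0.05, T)  # hypothetical asset excess returns

# Unrestricted fit: R_t = alpha + beta * R_Mt + eps_t
X = np.column_stack([np.ones(T), r_market])
coef, *_ = np.linalg.lstsq(X, r_asset, rcond=None)
ssr_ur = np.sum((r_asset - X @ coef) ** 2)

# Restricted model (alpha = 0, beta = 1): residuals are R_t - R_Mt
ssr_r = np.sum((r_asset - r_market) ** 2)

print(ssr_ur, ssr_r)   # SSR_R is always at least as large as SSR_UR
```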
Now since the least squares algorithm works to minimize SSR, the restricted error sum of squares, $SSR_R$, must be at least as big as the unrestricted error sum of squares, $SSR_{UR}$. If the restrictions imposed under the null are true then $SSR_R \approx SSR_{UR}$ (with $SSR_R$ always slightly bigger than $SSR_{UR}$), but if the restrictions are not true then $SSR_R$ will be quite a bit bigger than $SSR_{UR}$. The F-statistic measures the (adjusted) percentage difference in fit between the restricted and unrestricted models and is given by
$$
F = \frac{(SSR_R - SSR_{UR})/q}{SSR_{UR}/(T - k)} = \frac{SSR_R - SSR_{UR}}{q \cdot \hat{\sigma}^2_{\varepsilon, UR}},
$$
where q equals the number of restrictions imposed under the null hypothesis, k denotes the number of regression coefficients estimated under the unrestricted model, and $\hat{\sigma}^2_{\varepsilon, UR}$ denotes the estimated variance of $\varepsilon_t$ under the unrestricted model. Under the assumption that the null hypothesis is true, the F-statistic is distributed as an F random variable with $q$ and $T - k$ degrees of freedom:
$$
F \sim F_{q, T-k}.
$$
Notice that an F random variable is always positive since $SSR_R > SSR_{UR}$. The null hypothesis is rejected, say at the 5% significance level, if
$$
F > F_{q, T-k}(0.05),
$$
where $F_{q, T-k}(0.05)$ is the 95% quantile of the distribution of $F_{q, T-k}$.
For the hypothesis $H_0 : \alpha = 0$ and $\beta = 1$ there are $q = 2$ restrictions under the null and $k = 2$ regression coefficients estimated under the unrestricted model. The F-statistic is then
$$
F_{\alpha=0, \beta=1} = \frac{(SSR_R - SSR_{UR})/2}{SSR_{UR}/(T - 2)}.
$$
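A small helper function makes this calculation explicit. The sketch below assumes scipy is available; the function takes the two sums of squared residuals, the number of restrictions q, the sample size T, and the number of unrestricted coefficients k, and returns the F-statistic together with the 5% critical value.

```python
# Generic F-test for linear restrictions, following the formula above.
from scipy import stats

def f_test(ssr_r: float, ssr_ur: float, q: int, T: int, k: int, level: float = 0.05):
    """Return the F-statistic and the critical value F_{q, T-k}(level)."""
    f_stat = ((ssr_r - ssr_ur) / q) / (ssr_ur / (T - k))
    crit = stats.f.ppf(1 - level, q, T - k)
    return f_stat, crit

# Example usage (with q = k = 2 as in the joint test above):
# f_stat, crit = f_test(ssr_r, ssr_ur, q=2, T=60, k=2)
# reject H0 if f_stat > crit
```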
Example 3 MM Regression for IBM cont’d
Consider testing the hypothesis $H_0 : \alpha = 0$ and $\beta = 1$ for the IBM data. The unrestricted error sum of squares, $SSR_{UR}$, is obtained from the unrestricted regression output in figure 2 and is called Sum Square Resid:
$$
SSR_{UR} = 0.159180.
$$
To form the restricted sum of squared residuals, we create the new variable $\tilde{\varepsilon}_t = R_t - R_{Mt}$ and form the sum of squares $SSR_R = \sum_{t=1}^{T} \tilde{\varepsilon}_t^{\,2} = 0.31476$. Notice that $SSR_R > SSR_{UR}$. The F-statistic is then
$$
F_{\alpha=0, \beta=1} = \frac{(0.31476 - 0.159180)/2}{0.159180/58} = 28.34.
$$
The 95% quantile of the F-distribution with 2 and 58 degrees of freedom is about
3.15. Since $F_{\alpha=0,\beta=1} = 28.34 > 3.15 = F_{2,58}(0.05)$, we reject $H_0 : \alpha = 0$ and $\beta = 1$ at the 5% level.
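The arithmetic in this example can be verified directly from the two reported sums of squares; the snippet below also computes the exact 95% quantile of the F(2, 58) distribution, again assuming scipy only as a convenience.

```python
# Verifying the F-test in Example 3.
from scipy import stats

ssr_r, ssr_ur, q, df = 0.31476, 0.159180, 2, 58

f_stat = ((ssr_r - ssr_ur) / q) / (ssr_ur / df)   # about 28.34
crit = stats.f.ppf(0.95, q, df)                   # 95% quantile of F(2, 58)

print(f"F = {f_stat:.2f}, critical value = {crit:.2f}")
# F > crit, so H0: alpha = 0 and beta = 1 is rejected at the 5% level
```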
1.5 Testing the Stability of α and β over time
In many applications of the MM, α and β are estimated using past data and the
estimated values of α and β are used to make decisions about asset allocation and risk
over some future time period. In order for this analysis to be useful, it is assumed that
the unknown values of α and β are constant over time. Since the risk characteristics of