
Solutions Manual
for
Digital Communications, 5th Edition
(Chapter 2)

Prepared by
Kostas Stamatiou
January 11, 2008
PROPRIETARY MATERIAL. © The McGraw-Hill Companies, Inc. All rights reserved. No part of this Manual may be displayed, reproduced or distributed in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill for their individual course preparation. If you are a student using this Manual, you are using it without permission.
Problem 2.1
a. The Hilbert transform is
\[
\hat{x}(t) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(a)}{t-a}\,da
\]
Hence:
\[
-\hat{x}(-t) = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(a)}{-t-a}\,da
= -\frac{1}{\pi}\int_{\infty}^{-\infty}\frac{x(-b)}{-t+b}\,(-db)
= -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(b)}{-t+b}\,db
= \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(b)}{t-b}\,db = \hat{x}(t)
\]
where we have made the change of variables $b = -a$ and used the relationship $x(b) = x(-b)$, which holds because $x(t)$ is even. Hence $\hat{x}(t)$ is odd.
b. In exactly the same way as in part (a) we prove that if $x(t)$ is odd, then:
\[
\hat{x}(t) = \hat{x}(-t)
\]
i.e. $\hat{x}(t)$ is even.
c. $x(t) = \cos\omega_0 t$, so its Fourier transform is:
\[
X(f) = \frac{1}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right], \qquad f_0 = \omega_0/2\pi
\]
Exploiting the phase-shifting property (2-1-4) of the Hilbert transform:
\[
\hat{X}(f) = \frac{1}{2}\left[-j\delta(f - f_0) + j\delta(f + f_0)\right]
= \frac{1}{2j}\left[\delta(f - f_0) - \delta(f + f_0)\right]
= \mathcal{F}\left\{\sin 2\pi f_0 t\right\}
\]
Hence, $\hat{x}(t) = \sin\omega_0 t$.
d. In a similar way to part (c):
\[
x(t) = \sin\omega_0 t \;\Rightarrow\; X(f) = \frac{1}{2j}\left[\delta(f - f_0) - \delta(f + f_0)\right]
\;\Rightarrow\; \hat{X}(f) = \frac{1}{2}\left[-\delta(f - f_0) - \delta(f + f_0)\right]
\]
\[
\hat{X}(f) = -\frac{1}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right] = -\mathcal{F}\left\{\cos 2\pi f_0 t\right\}
\;\Rightarrow\; \hat{x}(t) = -\cos\omega_0 t
\]
e. The positive frequency content of the new signal will be $(-j)(-j)X(f) = -X(f)$, $f > 0$, while the negative frequency content will be $j\cdot jX(f) = -X(f)$, $f < 0$. Hence, since $\hat{\hat{X}}(f) = -X(f)$, we have $\hat{\hat{x}}(t) = -x(t)$.
f. Since the magnitude response of the Hilbert transformer is characterized by $|H(f)| = 1$, we have that $|\hat{X}(f)| = |H(f)|\,|X(f)| = |X(f)|$. Hence:
\[
\int_{-\infty}^{\infty}\left|\hat{X}(f)\right|^2 df = \int_{-\infty}^{\infty}|X(f)|^2\,df
\]
and using Parseval's relationship:
\[
\int_{-\infty}^{\infty}\hat{x}^2(t)\,dt = \int_{-\infty}^{\infty}x^2(t)\,dt
\]
g. From parts (a) and (b) above, we note that if $x(t)$ is even, $\hat{x}(t)$ is odd and vice-versa. Therefore, $x(t)\hat{x}(t)$ is always odd and hence:
\[
\int_{-\infty}^{\infty}x(t)\hat{x}(t)\,dt = 0
\]
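As a quick numerical sanity check of parts (c), (f) and (g) — not part of the original solution — a discrete Hilbert transformer can be built in the frequency domain; the grid size and tone frequency below are arbitrary choices:

```python
import numpy as np

def hilbert_transform(x):
    """Hilbert transform via the frequency domain: multiply X(f) by -j*sgn(f)."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(-1j * np.sign(f) * X).real

N = 4096
t = np.arange(N)
f0 = 64 / N                       # an exact FFT bin, so the tone is periodic
x = np.cos(2 * np.pi * f0 * t)
x_hat = hilbert_transform(x)

# (c): the Hilbert transform of cos is sin
assert np.allclose(x_hat, np.sin(2 * np.pi * f0 * t), atol=1e-9)
# (f): energy is preserved
assert np.isclose(np.sum(x**2), np.sum(x_hat**2))
# (g): x and x_hat are orthogonal
assert abs(np.sum(x * x_hat)) < 1e-9
print("Hilbert transform checks passed")
```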
Problem 2.2
1. Using the relations
\[
X(f) = \frac{1}{2}\left[X_l(f - f_0) + X_l^*(-f - f_0)\right], \qquad
Y(f) = \frac{1}{2}\left[Y_l(f - f_0) + Y_l^*(-f - f_0)\right]
\]
and Parseval's relation, we have
\[
\begin{aligned}
\int_{-\infty}^{\infty} x(t)y(t)\,dt
&= \int_{-\infty}^{\infty} X(f)Y^*(f)\,df \\
&= \int_{-\infty}^{\infty} \frac{1}{2}\left[X_l(f - f_0) + X_l^*(-f - f_0)\right]
\,\frac{1}{2}\left[Y_l^*(f - f_0) + Y_l(-f - f_0)\right] df \\
&= \frac{1}{4}\int_{-\infty}^{\infty} X_l(f - f_0)Y_l^*(f - f_0)\,df
 + \frac{1}{4}\int_{-\infty}^{\infty} X_l^*(-f - f_0)Y_l(-f - f_0)\,df \\
&= \frac{1}{4}\int_{-\infty}^{\infty} X_l(u)Y_l^*(u)\,du
 + \frac{1}{4}\int_{-\infty}^{\infty} X_l^*(v)Y_l(v)\,dv \\
&= \frac{1}{2}\,\mathrm{Re}\left[\int_{-\infty}^{\infty} X_l(f)Y_l^*(f)\,df\right]
 = \frac{1}{2}\,\mathrm{Re}\left[\int_{-\infty}^{\infty} x_l(t)y_l^*(t)\,dt\right]
\end{aligned}
\]
where we have used the fact that since $X_l(f - f_0)$ and $Y_l(-f - f_0)$ do not overlap, $X_l(f - f_0)Y_l(-f - f_0) = 0$ and similarly $X_l^*(-f - f_0)Y_l^*(f - f_0) = 0$.
2. Putting $y(t) = x(t)$ we get the desired result from the result of part 1.
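The identity of part 1 can be checked numerically (a sketch added alongside the manual's derivation, not part of the original; it assumes the standard bandpass convention $x(t) = \mathrm{Re}[x_l(t)e^{j2\pi f_0 t}]$, and the particular envelopes below are arbitrary choices):

```python
import numpy as np

N, dt, f0 = 1 << 14, 1e-3, 50.0          # f0 well above the lowpass bandwidth
t = np.arange(N) * dt

# Smooth complex lowpass signals (Gaussian envelopes with slow phase rotation)
env = np.exp(-((t - t.mean()) / 1.0) ** 2)
xl = env * np.exp(1j * 2 * np.pi * 1.5 * t)
yl = env * np.exp(1j * (2 * np.pi * 0.7 * t + 0.3))

# Bandpass signals x(t) = Re[x_l(t) e^{j 2 pi f0 t}]
carrier = np.exp(1j * 2 * np.pi * f0 * t)
x = (xl * carrier).real
y = (yl * carrier).real

lhs = np.sum(x * y) * dt                               # inner product of bandpass signals
rhs = 0.5 * np.real(np.sum(xl * np.conj(yl)) * dt)     # half the real lowpass inner product
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```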

Problem 2.3
A well-known result in estimation theory based on the minimum mean-squared-error criterion states that the minimum of $\mathcal{E}_e$ is obtained when the error is orthogonal to each of the functions in the series expansion. Hence:
\[
\int_{-\infty}^{\infty}\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right] f_n^*(t)\,dt = 0, \qquad n = 1, 2, \ldots, K \quad (1)
\]
Since the functions $\{f_n(t)\}$ are orthonormal, only the term with $k = n$ will remain in the sum, so:
\[
\int_{-\infty}^{\infty} s(t)f_n^*(t)\,dt - s_n = 0, \qquad n = 1, 2, \ldots, K
\]
or:
\[
s_n = \int_{-\infty}^{\infty} s(t)f_n^*(t)\,dt, \qquad n = 1, 2, \ldots, K
\]
The corresponding residual error $\mathcal{E}_e$ is:
\[
\begin{aligned}
\mathcal{E}_{\min}
&= \int_{-\infty}^{\infty}\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]\left[s(t) - \sum_{n=1}^{K} s_n f_n(t)\right]^* dt \\
&= \int_{-\infty}^{\infty}|s(t)|^2\,dt - \int_{-\infty}^{\infty}\sum_{k=1}^{K} s_k f_k(t)s^*(t)\,dt
 - \sum_{n=1}^{K} s_n^* \int_{-\infty}^{\infty}\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right] f_n^*(t)\,dt \\
&= \int_{-\infty}^{\infty}|s(t)|^2\,dt - \int_{-\infty}^{\infty}\sum_{k=1}^{K} s_k f_k(t)s^*(t)\,dt \\
&= \mathcal{E}_s - \sum_{k=1}^{K}|s_k|^2
\end{aligned}
\]
where we have exploited relationship (1) to go from the second to the third step in the above calculation.
Note: Relationship (1) can also be obtained by simple differentiation of the residual error with respect to the coefficients $\{s_n\}$. Since $s_n$ is, in general, complex-valued, $s_n = a_n + jb_n$, we have to differentiate with respect to both real and imaginary parts:
\[
\frac{d}{da_n}\mathcal{E}_e = \frac{d}{da_n}\int_{-\infty}^{\infty}\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]^* dt = 0
\]
\[
\Rightarrow\; -\int_{-\infty}^{\infty}\left\{ f_n(t)\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]^* + f_n^*(t)\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]\right\} dt = 0
\]
\[
\Rightarrow\; -2\int_{-\infty}^{\infty}\mathrm{Re}\left\{f_n^*(t)\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]\right\} dt = 0
\]
\[
\Rightarrow\; \int_{-\infty}^{\infty}\mathrm{Re}\left\{f_n^*(t)\left[s(t) - \sum_{k=1}^{K} s_k f_k(t)\right]\right\} dt = 0, \qquad n = 1, 2, \ldots, K
\]
where we have exploited the identity $x + x^* = 2\,\mathrm{Re}\{x\}$. Differentiation of $\mathcal{E}_e$ with respect to $b_n$ will give the corresponding relationship for the imaginary part; combining the two, we get (1).
Problem 2.4
The procedure is very similar to the one for the real-valued signals described in the book (pages 33-37). The only difference is that the projections should conform to the complex-valued vector space:
\[
c_{12} = \int_{-\infty}^{\infty} s_2(t)f_1^*(t)\,dt
\]
and, in general, for the $k$-th function:
\[
c_{ik} = \int_{-\infty}^{\infty} s_k(t)f_i^*(t)\,dt, \qquad i = 1, 2, \ldots, k-1
\]
Problem 2.5
The first basis function is:
\[
g_4(t) = \frac{s_4(t)}{\sqrt{\mathcal{E}_4}} = \frac{s_4(t)}{\sqrt{3}} =
\begin{cases} -1/\sqrt{3}, & 0 \le t \le 3 \\ 0, & \text{o.w.} \end{cases}
\]
Then, for the second basis function:
\[
c_{43} = \int_{-\infty}^{\infty} s_3(t)g_4(t)\,dt = -1/\sqrt{3}
\;\Rightarrow\;
g_3'(t) = s_3(t) - c_{43}g_4(t) =
\begin{cases} 2/3, & 0 \le t \le 2 \\ -4/3, & 2 \le t \le 3 \\ 0, & \text{o.w.} \end{cases}
\]
Hence:
\[
g_3(t) = \frac{g_3'(t)}{\sqrt{\mathcal{E}_3}} =
\begin{cases} 1/\sqrt{6}, & 0 \le t \le 2 \\ -2/\sqrt{6}, & 2 \le t \le 3 \\ 0, & \text{o.w.} \end{cases}
\]
where $\mathcal{E}_3$ denotes the energy of $g_3'(t)$: $\mathcal{E}_3 = \int_0^3 \left(g_3'(t)\right)^2 dt = 8/3$.
For the third basis function:
\[
c_{42} = \int_{-\infty}^{\infty} s_2(t)g_4(t)\,dt = 0
\quad \text{and} \quad
c_{32} = \int_{-\infty}^{\infty} s_2(t)g_3(t)\,dt = 0
\]
Hence:
\[
g_2'(t) = s_2(t) - c_{42}g_4(t) - c_{32}g_3(t) = s_2(t)
\]
and
\[
g_2(t) = \frac{g_2'(t)}{\sqrt{\mathcal{E}_2}} =
\begin{cases} 1/\sqrt{2}, & 0 \le t \le 1 \\ -1/\sqrt{2}, & 1 \le t \le 2 \\ 0, & \text{o.w.} \end{cases}
\]
where $\mathcal{E}_2 = \int_0^2 \left(s_2(t)\right)^2 dt = 2$.
Finally, for the fourth basis function:
\[
c_{41} = \int_{-\infty}^{\infty} s_1(t)g_4(t)\,dt = -2/\sqrt{3}, \qquad
c_{31} = \int_{-\infty}^{\infty} s_1(t)g_3(t)\,dt = 2/\sqrt{6}, \qquad
c_{21} = 0
\]
Hence:
\[
g_1'(t) = s_1(t) - c_{41}g_4(t) - c_{31}g_3(t) - c_{21}g_2(t) = 0 \;\Rightarrow\; g_1(t) = 0
\]
The last result is expected, since the dimensionality of the vector space generated by these signals is 3. Based on the basis functions $(g_2(t), g_3(t), g_4(t))$ the basis representation of the signals is:
\[
\begin{aligned}
\mathbf{s}_4 &= \left(0,\, 0,\, \sqrt{3}\right) &\Rightarrow\; \mathcal{E}_4 &= 3 \\
\mathbf{s}_3 &= \left(0,\, \sqrt{8/3},\, -1/\sqrt{3}\right) &\Rightarrow\; \mathcal{E}_3 &= 3 \\
\mathbf{s}_2 &= \left(\sqrt{2},\, 0,\, 0\right) &\Rightarrow\; \mathcal{E}_2 &= 2 \\
\mathbf{s}_1 &= \left(0,\, 2/\sqrt{6},\, -2/\sqrt{3}\right) &\Rightarrow\; \mathcal{E}_1 &= 2
\end{aligned}
\]
(Note that since $c_{21} = 0$, the component of $\mathbf{s}_1$ along $g_2(t)$ is zero.)
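The Gram-Schmidt computation can be replayed numerically. The piecewise-constant waveforms below are reverse-engineered from the solution itself (the textbook figure is not reproduced here), so treat their definitions as assumptions:

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 3, dt) + dt / 2     # midpoint sampling on [0, 3]

# Waveforms reverse-engineered from the solution -- treat as assumptions
s1 = np.where(t < 2, 1.0, 0.0)
s2 = np.where(t < 1, 1.0, np.where(t < 2, -1.0, 0.0))
s3 = np.where(t < 2, 1.0, -1.0)
s4 = -np.ones_like(t)

def inner(a, b):
    return np.sum(a * b) * dt

# Gram-Schmidt in the order s4, s3, s2, s1
g4 = s4 / np.sqrt(inner(s4, s4))
g3p = s3 - inner(s3, g4) * g4
g3 = g3p / np.sqrt(inner(g3p, g3p))
g2p = s2 - inner(s2, g4) * g4 - inner(s2, g3) * g3
g2 = g2p / np.sqrt(inner(g2p, g2p))
g1p = s1 - inner(s1, g4) * g4 - inner(s1, g3) * g3 - inner(s1, g2) * g2

assert np.isclose(inner(s3, g4), -1 / np.sqrt(3))        # c_43
assert np.isclose(inner(g3p, g3p), 8 / 3)                # E_3
assert np.isclose(inner(s1, g4), -2 / np.sqrt(3))        # c_41
assert np.isclose(inner(s1, g3), 2 / np.sqrt(6))         # c_31
assert np.isclose(inner(s1, g2), 0, atol=1e-9)           # c_21
assert np.isclose(inner(g1p, g1p), 0, atol=1e-9)         # g_1'(t) = 0: dimensionality 3
print("Gram-Schmidt checks passed")
```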
Problem 2.6
Consider the set of signals $\tilde{\phi}_{nl}(t) = j\phi_{nl}(t)$, $1 \le n \le N$; then by definition of lowpass equivalent signals and by Equations 2.2-49 and 2.2-54, we see that the $\phi_n(t)$'s are $\sqrt{2}$ times the lowpass equivalents of the $\phi_{nl}(t)$'s and the $\tilde{\phi}_n(t)$'s are $\sqrt{2}$ times the lowpass equivalents of the $\tilde{\phi}_{nl}(t)$'s. We also note that since the $\phi_n(t)$'s have unit energy, $\langle \phi_{nl}(t), \tilde{\phi}_{nl}(t)\rangle = \langle \phi_{nl}(t), j\phi_{nl}(t)\rangle = -j$, and since the inner product is purely imaginary, we conclude that $\phi_n(t)$ and $\tilde{\phi}_n(t)$ are orthogonal. Using the orthonormality of the set $\{\phi_{nl}(t)\}$, we have
\[
\langle \phi_{nl}(t), -j\phi_{ml}(t)\rangle = j\delta_{mn}
\]
and using the result of Problem 2.2 we have
\[
\langle \phi_n(t), \tilde{\phi}_m(t)\rangle = 0 \quad \text{for all } n, m
\]
We also have
\[
\langle \phi_n(t), \phi_m(t)\rangle = 0 \quad \text{for all } n \neq m
\]
and
\[
\langle \tilde{\phi}_n(t), \tilde{\phi}_m(t)\rangle = 0 \quad \text{for all } n \neq m
\]
Using the fact that the energy in a lowpass equivalent signal is twice the energy in the bandpass signal, we conclude that the energy in the $\phi_n(t)$'s and $\tilde{\phi}_n(t)$'s is unity, and hence the set of $2N$ signals $\{\phi_n(t), \tilde{\phi}_n(t)\}$ constitutes an orthonormal set. The fact that this orthonormal set is sufficient for expansion of bandpass signals follows from Equation 2.2-57.
Problem 2.7
Let $x(t) = m(t)\cos 2\pi f_0 t$ where $m(t)$ is real and lowpass with bandwidth less than $f_0$. Then
\[
\mathcal{F}[\hat{x}(t)] = -j\,\mathrm{sgn}(f)\left[\frac{1}{2}M(f - f_0) + \frac{1}{2}M(f + f_0)\right]
\]
and hence
\[
\mathcal{F}[\hat{x}(t)] = -\frac{j}{2}M(f - f_0) + \frac{j}{2}M(f + f_0)
\]
where we have used the fact that $M(f - f_0) = 0$ for $f < 0$ and $M(f + f_0) = 0$ for $f > 0$. This shows that $\hat{x}(t) = m(t)\sin 2\pi f_0 t$. Similarly, we can show that the Hilbert transform of $m(t)\sin 2\pi f_0 t$ is $-m(t)\cos 2\pi f_0 t$. From the above and Equation 2.2-54 we have
\[
\mathcal{H}[\phi_n(t)] = \phi_{ni}(t)\sin 2\pi f_0 t + \phi_{nq}(t)\cos 2\pi f_0 t = -\tilde{\phi}_n(t)
\]
Problem 2.8
For real-valued signals the correlation coefficients are given by
\[
\rho_{km} = \frac{1}{\sqrt{\mathcal{E}_k \mathcal{E}_m}}\int_{-\infty}^{\infty} s_k(t)s_m(t)\,dt
\]
and the Euclidean distances by
\[
d_{km}^{(e)} = \left[\mathcal{E}_k + \mathcal{E}_m - 2\sqrt{\mathcal{E}_k \mathcal{E}_m}\,\rho_{km}\right]^{1/2}
\]
For the signals in this problem:
\[
\mathcal{E}_1 = 2, \quad \mathcal{E}_2 = 2, \quad \mathcal{E}_3 = 3, \quad \mathcal{E}_4 = 3
\]
\[
\rho_{12} = 0, \quad \rho_{13} = \frac{2}{\sqrt{6}}, \quad \rho_{14} = -\frac{2}{\sqrt{6}}, \quad
\rho_{23} = 0, \quad \rho_{24} = 0, \quad \rho_{34} = -\frac{1}{3}
\]
and:
\[
\begin{aligned}
d_{12}^{(e)} &= 2, &
d_{13}^{(e)} &= \left[2 + 3 - 2\sqrt{6}\,\frac{2}{\sqrt{6}}\right]^{1/2} = 1, &
d_{14}^{(e)} &= \left[2 + 3 + 2\sqrt{6}\,\frac{2}{\sqrt{6}}\right]^{1/2} = 3 \\
d_{23}^{(e)} &= \sqrt{2 + 3} = \sqrt{5}, &
d_{24}^{(e)} &= \sqrt{5}, &
d_{34}^{(e)} &= \left[3 + 3 + 2\cdot 3\cdot\frac{1}{3}\right]^{1/2} = 2\sqrt{2}
\end{aligned}
\]
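The arithmetic above is easy to confirm mechanically (a verification sketch, not part of the original solution):

```python
import math

def d(Ek, Em, rho):
    """Euclidean distance from energies and the correlation coefficient."""
    return math.sqrt(Ek + Em - 2 * math.sqrt(Ek * Em) * rho)

E = {1: 2, 2: 2, 3: 3, 4: 3}
rho = {(1, 2): 0, (1, 3): 2 / math.sqrt(6), (1, 4): -2 / math.sqrt(6),
       (2, 3): 0, (2, 4): 0, (3, 4): -1 / 3}

assert math.isclose(d(E[1], E[2], rho[(1, 2)]), 2)
assert math.isclose(d(E[1], E[3], rho[(1, 3)]), 1)
assert math.isclose(d(E[1], E[4], rho[(1, 4)]), 3)
assert math.isclose(d(E[2], E[3], rho[(2, 3)]), math.sqrt(5))
assert math.isclose(d(E[2], E[4], rho[(2, 4)]), math.sqrt(5))
assert math.isclose(d(E[3], E[4], rho[(3, 4)]), 2 * math.sqrt(2))
print("all distances confirmed")
```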
Problem 2.9
We know from Fourier transform properties that if a signal $x(t)$ is real-valued then its Fourier transform satisfies $X(-f) = X^*(f)$ (Hermitian property). Hence the condition under which $s_l(t)$ is real-valued is $S_l(-f) = S_l^*(f)$, or, going back to the bandpass signal $s(t)$ (using 2-1-5):
\[
S_+(f_c - f) = S_+^*(f_c + f)
\]
The last condition shows that in order to have a real-valued lowpass signal $s_l(t)$, the positive frequency content of the corresponding bandpass signal must exhibit Hermitian symmetry around the center frequency $f_c$. In general, bandpass signals do not satisfy this property (they have Hermitian symmetry around $f = 0$); hence, the lowpass equivalent is generally complex-valued.
Problem 2.10
a. To show that the waveforms $f_n(t)$, $n = 1, 2, 3$ are orthogonal we have to prove that:
\[
\int_{-\infty}^{\infty} f_m(t)f_n(t)\,dt = 0, \qquad m \neq n
\]
Clearly:
\[
\begin{aligned}
c_{12} &= \int_{-\infty}^{\infty} f_1(t)f_2(t)\,dt = \int_0^4 f_1(t)f_2(t)\,dt
= \int_0^2 f_1(t)f_2(t)\,dt + \int_2^4 f_1(t)f_2(t)\,dt \\
&= \frac{1}{4}\int_0^2 dt - \frac{1}{4}\int_2^4 dt = \frac{1}{4}\times 2 - \frac{1}{4}\times(4-2) = 0
\end{aligned}
\]
Similarly:
\[
c_{13} = \int_{-\infty}^{\infty} f_1(t)f_3(t)\,dt = \int_0^4 f_1(t)f_3(t)\,dt
= \frac{1}{4}\int_0^1 dt - \frac{1}{4}\int_1^2 dt - \frac{1}{4}\int_2^3 dt + \frac{1}{4}\int_3^4 dt = 0
\]
and:
\[
c_{23} = \int_{-\infty}^{\infty} f_2(t)f_3(t)\,dt = \int_0^4 f_2(t)f_3(t)\,dt
= \frac{1}{4}\int_0^1 dt - \frac{1}{4}\int_1^2 dt + \frac{1}{4}\int_2^3 dt - \frac{1}{4}\int_3^4 dt = 0
\]
Thus, the signals $f_n(t)$ are orthogonal. It is also straightforward to prove that the signals have unit energy:
\[
\int_{-\infty}^{\infty} |f_i(t)|^2\,dt = 1, \qquad i = 1, 2, 3
\]
Hence, they are orthonormal.
b. We first determine the weighting coefficients
\[
x_n = \int_{-\infty}^{\infty} x(t)f_n(t)\,dt, \qquad n = 1, 2, 3
\]
\[
\begin{aligned}
x_1 &= \int_0^4 x(t)f_1(t)\,dt = -\frac{1}{2}\int_0^1 dt + \frac{1}{2}\int_1^2 dt - \frac{1}{2}\int_2^3 dt + \frac{1}{2}\int_3^4 dt = 0 \\
x_2 &= \int_0^4 x(t)f_2(t)\,dt = \frac{1}{2}\int_0^4 x(t)\,dt = 0 \\
x_3 &= \int_0^4 x(t)f_3(t)\,dt = -\frac{1}{2}\int_0^1 dt - \frac{1}{2}\int_1^2 dt + \frac{1}{2}\int_2^3 dt + \frac{1}{2}\int_3^4 dt = 0
\end{aligned}
\]
As is observed, $x(t)$ is orthogonal to the signal waveforms $f_n(t)$, $n = 1, 2, 3$, and thus it cannot be represented as a linear combination of these functions.
Problem 2.11
a. As an orthonormal set of basis functions we consider the set
\[
f_1(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{o.w.} \end{cases} \qquad
f_2(t) = \begin{cases} 1, & 1 \le t < 2 \\ 0, & \text{o.w.} \end{cases} \qquad
f_3(t) = \begin{cases} 1, & 2 \le t < 3 \\ 0, & \text{o.w.} \end{cases} \qquad
f_4(t) = \begin{cases} 1, & 3 \le t < 4 \\ 0, & \text{o.w.} \end{cases}
\]
In matrix notation, the four waveforms can be represented as
\[
\begin{bmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{bmatrix}
=
\begin{bmatrix}
2 & -1 & -1 & -1 \\
-2 & 1 & 1 & 0 \\
1 & -1 & 1 & -1 \\
1 & -2 & -2 & 2
\end{bmatrix}
\begin{bmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \\ f_4(t) \end{bmatrix}
\]
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.
b. The representation vectors are
\[
\mathbf{s}_1 = \begin{bmatrix} 2 & -1 & -1 & -1 \end{bmatrix}, \quad
\mathbf{s}_2 = \begin{bmatrix} -2 & 1 & 1 & 0 \end{bmatrix}, \quad
\mathbf{s}_3 = \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix}, \quad
\mathbf{s}_4 = \begin{bmatrix} 1 & -2 & -2 & 2 \end{bmatrix}
\]
c. The distance between the first and the second vector is:
\[
d_{1,2} = \sqrt{|\mathbf{s}_1 - \mathbf{s}_2|^2}
= \sqrt{\left|\begin{bmatrix} 4 & -2 & -2 & -1 \end{bmatrix}\right|^2} = \sqrt{25}
\]
Similarly we find that:
\[
\begin{aligned}
d_{1,3} &= \sqrt{\left|\begin{bmatrix} 1 & 0 & -2 & 0 \end{bmatrix}\right|^2} = \sqrt{5}, &
d_{1,4} &= \sqrt{\left|\begin{bmatrix} 1 & 1 & 1 & -3 \end{bmatrix}\right|^2} = \sqrt{12} \\
d_{2,3} &= \sqrt{\left|\begin{bmatrix} -3 & 2 & 0 & 1 \end{bmatrix}\right|^2} = \sqrt{14}, &
d_{2,4} &= \sqrt{\left|\begin{bmatrix} -3 & 3 & 3 & -2 \end{bmatrix}\right|^2} = \sqrt{31} \\
d_{3,4} &= \sqrt{\left|\begin{bmatrix} 0 & 1 & 3 & -3 \end{bmatrix}\right|^2} = \sqrt{19}
\end{aligned}
\]
Thus, the minimum distance between any pair of vectors is $d_{\min} = \sqrt{5}$.
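The rank and the pairwise distances can be confirmed directly from the representation vectors (a verification sketch, not part of the original solution):

```python
import numpy as np
from itertools import combinations

S = np.array([[ 2, -1, -1, -1],
              [-2,  1,  1,  0],
              [ 1, -1,  1, -1],
              [ 1, -2, -2,  2]])

assert np.linalg.matrix_rank(S) == 4              # dimensionality of the waveforms

d = {(i + 1, j + 1): np.linalg.norm(S[i] - S[j])
     for i, j in combinations(range(4), 2)}
expected = {(1, 2): 25, (1, 3): 5, (1, 4): 12, (2, 3): 14, (2, 4): 31, (3, 4): 19}
for pair, sq in expected.items():
    assert np.isclose(d[pair], np.sqrt(sq))
assert np.isclose(min(d.values()), np.sqrt(5))    # d_min
print("rank and distances confirmed")
```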
Problem 2.12
As a set of orthonormal functions we consider the waveforms
\[
f_1(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{o.w.} \end{cases} \qquad
f_2(t) = \begin{cases} 1, & 1 \le t < 2 \\ 0, & \text{o.w.} \end{cases} \qquad
f_3(t) = \begin{cases} 1, & 2 \le t < 3 \\ 0, & \text{o.w.} \end{cases}
\]
The vector representation of the signals is
\[
\mathbf{s}_1 = \begin{bmatrix} 2 & 2 & 2 \end{bmatrix}, \quad
\mathbf{s}_2 = \begin{bmatrix} 2 & 0 & 0 \end{bmatrix}, \quad
\mathbf{s}_3 = \begin{bmatrix} 0 & -2 & -2 \end{bmatrix}, \quad
\mathbf{s}_4 = \begin{bmatrix} 2 & 2 & 0 \end{bmatrix}
\]
Note that $s_3(t) = s_2(t) - s_1(t)$ and that the dimensionality of the waveforms is 3.
Problem 2.13
1. $P(E_2) = P(R2, R3, R4) = 3/7$.
2. $P(E_3|E_2) = \dfrac{P(E_3 E_2)}{P(E_2)} = \dfrac{P(R2)}{3/7} = \dfrac{1}{3}$.
3. Here $E_4 = \{R2, R4, B2, R1, B1\}$ and
\[
P(E_2|E_4 E_3) = \frac{P(E_2 E_3 E_4)}{P(E_3 E_4)} = \frac{P(R2)}{P(R2, B2, R1, B1)} = \frac{1}{4}.
\]
4. $E_5 = \{R2, R4, B2\}$. We have $P(E_3 E_5) = P(R2, B2) = \frac{2}{7}$, $P(E_3) = P(R1, R2, B1, B2) = \frac{4}{7}$ and $P(E_5) = \frac{3}{7}$. Obviously $P(E_3 E_5) \neq P(E_3)P(E_5)$ and the events are not independent.
Problem 2.14
1. $P(R) = P(A)P(R|A) + P(B)P(R|B) + P(C)P(R|C) = 0.2\times 0.05 + 0.3\times 0.1 + 0.5\times 0.15 = 0.01 + 0.03 + 0.075 = 0.115$.
2. $P(A|R) = \dfrac{P(A)P(R|A)}{P(R)} = \dfrac{0.01}{0.115} \approx 0.087$.
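The total-probability and Bayes-rule arithmetic can be checked exactly with rational arithmetic (a verification sketch, not part of the original solution):

```python
from fractions import Fraction as F

prior = {"A": F(2, 10), "B": F(3, 10), "C": F(5, 10)}
lik = {"A": F(5, 100), "B": F(10, 100), "C": F(15, 100)}   # P(R | source)

p_r = sum(prior[s] * lik[s] for s in prior)                # total probability
assert p_r == F(115, 1000)                                 # = 0.115

p_a_given_r = prior["A"] * lik["A"] / p_r                  # Bayes rule
assert p_a_given_r == F(2, 23)                             # ≈ 0.087
print(float(p_r), float(p_a_given_r))
```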
Problem 2.15
The relationship holds for $n = 2$ (2-1-34): $p(x_1, x_2) = p(x_2|x_1)p(x_1)$.
Suppose it holds for $n = k$, i.e.:
\[
p(x_1, x_2, \ldots, x_k) = p(x_k|x_{k-1}, \ldots, x_1)\,p(x_{k-1}|x_{k-2}, \ldots, x_1)\cdots p(x_1)
\]
Then for $n = k+1$:
\[
\begin{aligned}
p(x_1, x_2, \ldots, x_k, x_{k+1}) &= p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\,p(x_k, x_{k-1}, \ldots, x_1) \\
&= p(x_{k+1}|x_k, x_{k-1}, \ldots, x_1)\,p(x_k|x_{k-1}, \ldots, x_1)\,p(x_{k-1}|x_{k-2}, \ldots, x_1)\cdots p(x_1)
\end{aligned}
\]
Hence the relationship holds for $n = k+1$, and by induction it holds for any $n$.
Problem 2.16

1. Let $T$ and $R$ denote the channel input and output respectively. Using Bayes rule we have
\[
p(T=0|R=A) = \frac{p(T=0)\,p(R=A|T=0)}{p(T=0)\,p(R=A|T=0) + p(T=1)\,p(R=A|T=1)}
= \frac{0.4\times\frac{1}{6}}{0.4\times\frac{1}{6} + 0.6\times\frac{1}{3}} = \frac{1}{4}
\]
and therefore $p(T=1|R=A) = \frac{3}{4}$. Obviously, if $R=A$ is observed, the best decision would be to declare that a 1 was sent, i.e., $T=1$, because $T=1$ is more probable than $T=0$. Similarly it can be verified that $p(T=0|R=B) = \frac{4}{7}$ and $p(T=0|R=C) = \frac{1}{4}$. Therefore, when the output is $B$, the best decision is 0 and when the output is $C$, the best decision is $T=1$. Therefore the decision function $d$ can be defined as
\[
d(R) = \begin{cases} 1, & R = A \text{ or } C \\ 0, & R = B \end{cases}
\]
This is the optimal decision scheme.
2. Here we know that a 0 is transmitted; therefore we are looking for $p(\text{error}|T=0)$, the probability that the receiver declares a 1 was sent when actually a 0 was transmitted. Since by the decision method described in part 1 the receiver declares that a 1 was sent when $R=A$ or $R=C$, we have $p(\text{error}|T=0) = p(R=A|T=0) + p(R=C|T=0) = \frac{1}{3}$.
3. We have $p(\text{error}|T=0) = \frac{1}{3}$, and $p(\text{error}|T=1) = p(R=B|T=1) = \frac{1}{3}$. Therefore, by the total probability theorem,
\[
p(\text{error}) = p(T=0)\,p(\text{error}|T=0) + p(T=1)\,p(\text{error}|T=1)
= 0.4\times\frac{1}{3} + 0.6\times\frac{1}{3} = \frac{1}{3}
\]
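The posteriors and the error probability can be reproduced with exact rational arithmetic. Only $p(A|T{=}0) = 1/6$ and $p(A|T{=}1) = 1/3$ are quoted explicitly in the solution; the remaining transition probabilities below are reverse-engineered from the quoted posteriors and are therefore assumptions:

```python
from fractions import Fraction as F

prior = {0: F(4, 10), 1: F(6, 10)}
# Transition probabilities p(R | T). Entries other than p(A|0) and p(A|1)
# are reverse-engineered from the posteriors quoted in the solution.
p = {0: {"A": F(1, 6), "B": F(2, 3), "C": F(1, 6)},
     1: {"A": F(1, 3), "B": F(1, 3), "C": F(1, 3)}}

def posterior(t, r):
    num = prior[t] * p[t][r]
    den = sum(prior[s] * p[s][r] for s in prior)
    return num / den

assert posterior(0, "A") == F(1, 4)
assert posterior(0, "B") == F(4, 7)
assert posterior(0, "C") == F(1, 4)

# MAP decision: declare 1 for R in {A, C}, 0 for R = B
p_err = prior[0] * (p[0]["A"] + p[0]["C"]) + prior[1] * p[1]["B"]
assert p_err == F(1, 3)
print("posteriors and error probability confirmed")
```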
Problem 2.17
Following the same procedure as in example 2-1-1, we prove:
\[
p_Y(y) = \frac{1}{|a|}\,p_X\!\left(\frac{y-b}{a}\right)
\]
Problem 2.18
Relationship (2-1-44) gives:
\[
p_Y(y) = \frac{1}{3a\left[(y-b)/a\right]^{2/3}}\,p_X\!\left(\left[\frac{y-b}{a}\right]^{1/3}\right)
\]
$X$ is a Gaussian r.v. with zero mean and unit variance: $p_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$.
Hence:
\[
p_Y(y) = \frac{1}{3a\sqrt{2\pi}\left[(y-b)/a\right]^{2/3}}\,e^{-\frac{1}{2}\left(\frac{y-b}{a}\right)^{2/3}}
\]
[Figure: plot of the pdf of $Y$ for $a = 2$, $b = 3$.]
Problem 2.19
1) The random variable $X$ is Gaussian with zero mean and variance $\sigma^2 = 10^{-8}$. Thus $p(X > x) = Q\!\left(\frac{x}{\sigma}\right)$ and
\[
p(X > 10^{-4}) = Q\!\left(\frac{10^{-4}}{10^{-4}}\right) = Q(1) = 0.159
\]
\[
p(X > 4\times 10^{-4}) = Q\!\left(\frac{4\times 10^{-4}}{10^{-4}}\right) = Q(4) = 3.17\times 10^{-5}
\]
\[
p(-2\times 10^{-4} < X \le 10^{-4}) = 1 - Q(1) - Q(2) = 0.8182
\]
2)
\[
p\left(X > 10^{-4}\,\middle|\,X > 0\right) = \frac{p(X > 10^{-4}, X > 0)}{p(X > 0)}
= \frac{p(X > 10^{-4})}{p(X > 0)} = \frac{0.159}{0.5} = 0.318
\]
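The quoted Q-function values can be reproduced from the complementary error function, $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$ (a verification sketch, not part of the original solution):

```python
from math import erfc, sqrt, isclose

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2))

sigma = 1e-4                       # sigma^2 = 1e-8
assert isclose(Q(1e-4 / sigma), 0.159, abs_tol=5e-4)
assert isclose(Q(4e-4 / sigma), 3.17e-5, rel_tol=5e-3)
assert isclose(1 - Q(1) - Q(2), 0.8182, abs_tol=5e-4)
assert isclose(Q(1) / 0.5, 0.318, abs_tol=1e-3)
print(Q(1), Q(4))
```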
Problem 2.20
1) $y = g(x) = ax^2$. Assume without loss of generality that $a > 0$. Then, if $y < 0$ the equation $y = ax^2$ has no real solutions and $f_Y(y) = 0$. If $y > 0$ there are two solutions to the system, namely $x_{1,2} = \pm\sqrt{y/a}$. Hence,
\[
f_Y(y) = \frac{f_X(x_1)}{|g'(x_1)|} + \frac{f_X(x_2)}{|g'(x_2)|}
= \frac{f_X\left(\sqrt{y/a}\right)}{2a\sqrt{y/a}} + \frac{f_X\left(-\sqrt{y/a}\right)}{2a\sqrt{y/a}}
= \frac{1}{\sqrt{ay}\sqrt{2\pi\sigma^2}}\,e^{-\frac{y}{2a\sigma^2}}
\]
2) The equation $y = g(x)$ has no solutions if $y < -b$. Thus $F_Y(y)$ and $f_Y(y)$ are zero for $y < -b$. If $-b \le y \le b$, then for a fixed $y$, $g(x) < y$ if $x < y$; hence $F_Y(y) = F_X(y)$. If $y > b$ then $g(x) \le b < y$ for every $x$; hence $F_Y(y) = 1$. At the points $y = \pm b$, $F_Y(y)$ is discontinuous and the discontinuities equal
\[
F_Y(-b^+) - F_Y(-b^-) = F_X(-b)
\]
and
\[
F_Y(b^+) - F_Y(b^-) = 1 - F_X(b)
\]
The PDF of $y = g(x)$ is
\[
\begin{aligned}
f_Y(y) &= F_X(-b)\,\delta(y+b) + \left(1 - F_X(b)\right)\delta(y-b) + f_X(y)\left[u_{-1}(y+b) - u_{-1}(y-b)\right] \\
&= Q\!\left(\frac{b}{\sigma}\right)\left(\delta(y+b) + \delta(y-b)\right)
+ \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{y^2}{2\sigma^2}}\left[u_{-1}(y+b) - u_{-1}(y-b)\right]
\end{aligned}
\]
3) In the case of the hard limiter,
\[
p(Y=b) = p(X<0) = F_X(0) = \frac{1}{2}, \qquad
p(Y=a) = p(X>0) = 1 - F_X(0) = \frac{1}{2}
\]
Thus $F_Y(y)$ is a staircase function and
\[
f_Y(y) = F_X(0)\,\delta(y-b) + \left(1 - F_X(0)\right)\delta(y-a)
\]
4) The random variable $y = g(x)$ takes the values $y_n = x_n$ with probability
\[
p(Y = y_n) = p(a_n \le X \le a_{n+1}) = F_X(a_{n+1}) - F_X(a_n)
\]
Thus, $F_Y(y)$ is a staircase function with $F_Y(y) = 0$ if $y < x_1$ and $F_Y(y) = 1$ if $y > x_N$. The PDF is a sequence of impulse functions, that is
\[
f_Y(y) = \sum_{i=1}^{N}\left[F_X(a_{i+1}) - F_X(a_i)\right]\delta(y - x_i)
= \sum_{i=1}^{N}\left[Q\!\left(\frac{a_i}{\sigma}\right) - Q\!\left(\frac{a_{i+1}}{\sigma}\right)\right]\delta(y - x_i)
\]
Problem 2.21
For $n$ odd, $x^n$ is odd and since the zero-mean Gaussian PDF is even their product is odd. Since the integral of an odd function over the interval $[-\infty, \infty]$ is zero, we obtain $E[X^n] = 0$ for $n$ odd.
Let $I_n = \int_{-\infty}^{\infty} x^n \exp(-x^2/2\sigma^2)\,dx$. Since the integrand $x^n e^{-x^2/2\sigma^2}$ vanishes at $\pm\infty$, the integral of its derivative is zero, i.e.,
\[
\int_{-\infty}^{\infty}\left[n x^{n-1}e^{-\frac{x^2}{2\sigma^2}} - \frac{1}{\sigma^2}\,x^{n+1}e^{-\frac{x^2}{2\sigma^2}}\right]dx = 0
\]
which results in the recursion
\[
I_{n+1} = n\sigma^2 I_{n-1}
\]
This is true for all $n$. Now let $n = 2k-1$; we will have $I_{2k} = (2k-1)\sigma^2 I_{2k-2}$, with the initial condition $I_0 = \sqrt{2\pi\sigma^2}$. Substituting we have
\[
\begin{aligned}
I_2 &= \sigma^2\sqrt{2\pi\sigma^2} \\
I_4 &= 3\sigma^2 I_2 = 3\sigma^4\sqrt{2\pi\sigma^2} \\
I_6 &= 5\sigma^2 I_4 = 5\times 3\,\sigma^6\sqrt{2\pi\sigma^2} \\
I_8 &= 7\sigma^2 I_6 = 7\times 5\times 3\,\sigma^8\sqrt{2\pi\sigma^2} \\
&\;\;\vdots
\end{aligned}
\]
and in general, if $I_{2k} = (2k-1)(2k-3)(2k-5)\times\cdots\times 3\times 1\,\sigma^{2k}\sqrt{2\pi\sigma^2}$, then $I_{2k+2} = (2k+1)\sigma^2 I_{2k} = (2k+1)(2k-1)(2k-3)(2k-5)\times\cdots\times 3\times 1\,\sigma^{2k+2}\sqrt{2\pi\sigma^2}$. Using the fact that $E[X^{2k}] = I_{2k}/\sqrt{2\pi\sigma^2}$, we obtain
\[
E[X^n] = 1\times 3\times 5\times\cdots\times(n-1)\,\sigma^n
\]
for $n$ even.
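The closed form $E[X^{2k}] = (2k-1)!!\,\sigma^{2k}$ can be verified by direct numerical integration (a verification sketch, not part of the original solution):

```python
import numpy as np

def gaussian_even_moment(k, sigma):
    """E[X^(2k)] for X ~ N(0, sigma^2), by midpoint-rule integration."""
    dx = 1e-3 * sigma
    x = np.arange(-12 * sigma, 12 * sigma, dx) + dx / 2
    pdf = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return np.sum(x ** (2 * k) * pdf) * dx

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

for sigma in (0.5, 1.0, 2.0):
    for k in (1, 2, 3, 4):
        closed_form = double_factorial(2 * k - 1) * sigma ** (2 * k)
        assert np.isclose(gaussian_even_moment(k, sigma), closed_form, rtol=1e-4)
print("E[X^2k] = (2k-1)!! sigma^2k confirmed")
```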
Problem 2.22
a. Since $(X_r, X_i)$ are statistically independent:
\[
p_X(x_r, x_i) = p_X(x_r)\,p_X(x_i) = \frac{1}{2\pi\sigma^2}\,e^{-\left(x_r^2 + x_i^2\right)/2\sigma^2}
\]
Also:
\[
Y_r + jY_i = (X_r + jX_i)e^{j\phi} \;\Rightarrow\;
X_r + jX_i = (Y_r + jY_i)e^{-j\phi} = Y_r\cos\phi + Y_i\sin\phi + j(-Y_r\sin\phi + Y_i\cos\phi)
\]
\[
\Rightarrow\;
\begin{cases}
X_r = Y_r\cos\phi + Y_i\sin\phi \\
X_i = -Y_r\sin\phi + Y_i\cos\phi
\end{cases}
\]
The Jacobian of the above transformation is:
\[
J = \begin{vmatrix}
\dfrac{\partial X_r}{\partial Y_r} & \dfrac{\partial X_i}{\partial Y_r} \\[2mm]
\dfrac{\partial X_r}{\partial Y_i} & \dfrac{\partial X_i}{\partial Y_i}
\end{vmatrix}
= \begin{vmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{vmatrix} = 1
\]
Hence, by (2-1-55):
\[
p_Y(y_r, y_i) = p_X\left(Y_r\cos\phi + Y_i\sin\phi,\; -Y_r\sin\phi + Y_i\cos\phi\right)
= \frac{1}{2\pi\sigma^2}\,e^{-\left(y_r^2 + y_i^2\right)/2\sigma^2}
\]
b. $Y = AX$ and $X = A^{-1}Y$. Now, $p_X(x) = \frac{1}{(2\pi\sigma^2)^{n/2}}\,e^{-x'x/2\sigma^2}$ (the covariance matrix $M$ of the random variables $x_1, \ldots, x_n$ is $M = \sigma^2 I$, since they are i.i.d.) and $J = 1/|\det(A)|$. Hence:
\[
p_Y(y) = \frac{1}{(2\pi\sigma^2)^{n/2}}\,\frac{1}{|\det(A)|}\,e^{-y'\left(A^{-1}\right)'A^{-1}y/2\sigma^2}
\]
For the pdfs of $X$ and $Y$ to be identical we require that:
\[
|\det(A)| = 1 \quad \text{and} \quad \left(A^{-1}\right)'A^{-1} = I \;\Longrightarrow\; A^{-1} = A'
\]
Hence, $A$ must be a unitary (orthogonal) matrix.
Problem 2.23
Since we are dealing with linear combinations of jointly Gaussian random variables, it is clear that $Y$ is jointly Gaussian. We clearly have $m_Y = E[AX] = Am_X$. This means that $Y - m_Y = A(X - m_X)$. Also note that
\[
C_Y = E\left[(Y - m_Y)(Y - m_Y)'\right] = E\left[A(X - m_X)(X - m_X)'A'\right]
\]
resulting in $C_Y = AC_XA'$.
Problem 2.24
a.
\[
\psi_Y(jv) = E\left[e^{jvY}\right] = E\left[e^{jv\sum_{i=1}^n X_i}\right]
= E\left[\prod_{i=1}^n e^{jvX_i}\right] = \prod_{i=1}^n E\left[e^{jvX_i}\right] = \left[\psi_X(jv)\right]^n
\]
But,
\[
p_X(x) = p\,\delta(x-1) + (1-p)\,\delta(x)
\;\Rightarrow\; \psi_X(jv) = 1 - p + pe^{jv}
\;\Rightarrow\; \psi_Y(jv) = \left[1 - p + pe^{jv}\right]^n
\]
b.
\[
E(Y) = -j\,\frac{d\psi_Y(jv)}{dv}\bigg|_{v=0}
= -jn\left(1 - p + pe^{jv}\right)^{n-1} jpe^{jv}\Big|_{v=0} = np
\]
and
\[
E(Y^2) = -\frac{d^2\psi_Y(jv)}{dv^2}\bigg|_{v=0}
= -\frac{d}{dv}\left[jn\left(1 - p + pe^{jv}\right)^{n-1} pe^{jv}\right]\bigg|_{v=0}
= np + n(n-1)p^2
\]
\[
\Rightarrow\; E(Y^2) = n^2p^2 + np(1-p)
\]
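The two moments can be confirmed by summing over the binomial pmf directly (a verification sketch, not part of the original solution):

```python
from math import comb, isclose

def binomial_moments(n, p):
    """Mean and second moment of Y = sum of n i.i.d. Bernoulli(p) variables."""
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))
    second = sum(k * k * pk for k, pk in enumerate(pmf))
    return mean, second

for n, p in [(5, 0.3), (12, 0.7), (20, 0.05)]:
    mean, second = binomial_moments(n, p)
    assert isclose(mean, n * p)
    assert isclose(second, n**2 * p**2 + n * p * (1 - p))
print("E(Y) = np and E(Y^2) = n^2 p^2 + np(1-p) confirmed")
```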
Problem 2.25
1. [Figure: the quadrant region $u > x$, $v > x$, and the quarter circle of radius $x\sqrt{2}$.]
In the figure, let us consider the region $u > x$, $v > x$ shown as the colored region extending to infinity; call this region $R$, and let us integrate $e^{-\frac{u^2+v^2}{2}}$ over this region. We have
\[
\iint_R e^{-\frac{u^2+v^2}{2}}\,du\,dv = \iint_R e^{-\frac{r^2}{2}}\,r\,dr\,d\theta
\le \int_{x\sqrt{2}}^{\infty} re^{-\frac{r^2}{2}}\,dr \int_0^{\frac{\pi}{2}} d\theta
= \frac{\pi}{2}\left[-e^{-\frac{r^2}{2}}\right]_{x\sqrt{2}}^{\infty} = \frac{\pi}{2}\,e^{-x^2}
\]
where we have used the fact that region $R$ is included in the region outside the quarter circle, as shown in the figure. On the other hand we have
\[
\iint_R e^{-\frac{u^2+v^2}{2}}\,du\,dv
= \int_x^{\infty} e^{-\frac{u^2}{2}}\,du \int_x^{\infty} e^{-\frac{v^2}{2}}\,dv
= \left[\int_x^{\infty} e^{-\frac{u^2}{2}}\,du\right]^2
= \left[\sqrt{2\pi}\,Q(x)\right]^2 = 2\pi\left(Q(x)\right)^2
\]
From the above relations we conclude that
\[
2\pi\left(Q(x)\right)^2 \le \frac{\pi}{2}\,e^{-x^2}
\]
and therefore, $Q(x) \le \frac{1}{2}e^{-\frac{x^2}{2}}$.
2. In $\int_x^{\infty} e^{-\frac{y^2}{2}}\frac{dy}{y^2}$ define $u = e^{-\frac{y^2}{2}}$ and $dv = \frac{dy}{y^2}$ and use the integration by parts relation $\int u\,dv = uv - \int v\,du$. We have $v = -\frac{1}{y}$ and $du = -ye^{-\frac{y^2}{2}}dy$. Therefore
\[
\int_x^{\infty} e^{-\frac{y^2}{2}}\frac{dy}{y^2}
= \left[-\frac{e^{-\frac{y^2}{2}}}{y}\right]_x^{\infty} - \int_x^{\infty} e^{-\frac{y^2}{2}}\,dy
= \frac{e^{-\frac{x^2}{2}}}{x} - \sqrt{2\pi}\,Q(x)
\]
Now note that $\int_x^{\infty} e^{-\frac{y^2}{2}}\frac{dy}{y^2} > 0$, which results in
\[
\frac{e^{-\frac{x^2}{2}}}{x} - \sqrt{2\pi}\,Q(x) > 0 \;\Rightarrow\; Q(x) < \frac{1}{\sqrt{2\pi}\,x}\,e^{-\frac{x^2}{2}}
\]
On the other hand, note that
\[
\int_x^{\infty} e^{-\frac{y^2}{2}}\frac{dy}{y^2} < \frac{1}{x^2}\int_x^{\infty} e^{-\frac{y^2}{2}}\,dy = \frac{\sqrt{2\pi}}{x^2}\,Q(x)
\]
which results in
\[
\frac{e^{-\frac{x^2}{2}}}{x} - \sqrt{2\pi}\,Q(x) < \frac{\sqrt{2\pi}}{x^2}\,Q(x)
\]
or,
\[
\sqrt{2\pi}\,\frac{1+x^2}{x^2}\,Q(x) > \frac{e^{-\frac{x^2}{2}}}{x}
\]
which results in
\[
Q(x) > \frac{x}{\sqrt{2\pi}(1+x^2)}\,e^{-\frac{x^2}{2}}
\]
3. From
\[
\frac{x}{\sqrt{2\pi}(1+x^2)}\,e^{-\frac{x^2}{2}} < Q(x) < \frac{1}{\sqrt{2\pi}\,x}\,e^{-\frac{x^2}{2}}
\]
we have
\[
\frac{1}{\sqrt{2\pi}\left(\frac{1}{x}+x\right)}\,e^{-\frac{x^2}{2}} < Q(x) < \frac{1}{\sqrt{2\pi}\,x}\,e^{-\frac{x^2}{2}}
\]
As $x$ becomes large, the $\frac{1}{x}$ in the denominator of the left hand side becomes small and the two bounds become equal; therefore for large $x$ we have
\[
Q(x) \approx \frac{1}{\sqrt{2\pi}\,x}\,e^{-\frac{x^2}{2}}
\]
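The three bounds, and their pinching behavior for large $x$, can be checked numerically (a verification sketch, not part of the original solution):

```python
from math import erfc, exp, pi, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

for x in [0.5, 1.0, 2.0, 4.0, 6.0]:
    upper_simple = 0.5 * exp(-x * x / 2)
    upper_tight = exp(-x * x / 2) / (sqrt(2 * pi) * x)
    lower = x * exp(-x * x / 2) / (sqrt(2 * pi) * (1 + x * x))
    assert lower < Q(x) < upper_simple
    assert Q(x) < upper_tight
    if x >= 4:                       # the tight bounds pinch Q(x) as x grows
        assert upper_tight / lower < 1.07
print("Q-function bounds verified")
```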
Problem 2.26

1. $F_{Y_n}(y) = P[Y_n \le y] = 1 - P[Y_n > y] = 1 - P[X_1 > y, X_2 > y, \ldots, X_n > y] = 1 - \left(P[X > y]\right)^n$, where we have used the independence of the $X_i$'s in the last step. But $P[X > y] = \int_y^A \frac{1}{A}\,dy = \frac{A-y}{A}$. Therefore, $F_{Y_n}(y) = 1 - \frac{(A-y)^n}{A^n}$, and
\[
f_{Y_n}(y) = \frac{d}{dy}F_{Y_n}(y) = n\,\frac{(A-y)^{n-1}}{A^n}, \qquad 0 < y < A.
\]
2. Writing $\lambda = n/A$ and letting $n \to \infty$ with $\lambda$ fixed,
\[
f(y) = \frac{n}{A}\left(1 - \frac{y}{A}\right)^{n-1}
= \frac{\lambda}{1 - \frac{y}{A}}\left(1 - \frac{\lambda y}{n}\right)^{n}
\longrightarrow \lambda e^{-\lambda y}, \qquad y > 0
\]
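The convergence of $Y_n$ to an exponential distribution can be illustrated by sampling (a verification sketch, not part of the original solution; samples of $Y_n$ are drawn exactly by inverting its CDF):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 2.0, 400
A = n / lam                           # A grows with n so that lambda = n/A stays fixed

# Inverse-transform sampling from F(y) = 1 - (1 - y/A)^n
u = rng.uniform(size=200_000)
samples = A * (1 - (1 - u) ** (1 / n))

# Compare the empirical CDF with the exact finite-n CDF and the exponential limit
for y in [0.1, 0.5, 1.0, 2.0]:
    emp = np.mean(samples <= y)
    exact = 1 - (1 - y / A) ** n
    limit = 1 - np.exp(-lam * y)
    assert abs(emp - exact) < 0.01
    assert abs(exact - limit) < 0.01
print("Y_n converges to Exponential(lambda)")
```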
Problem 2.27
\[
\psi(jv_1, jv_2, jv_3, jv_4) = E\left[e^{j(v_1x_1 + v_2x_2 + v_3x_3 + v_4x_4)}\right]
\]
\[
E(X_1X_2X_3X_4) = (-j)^4\,\frac{\partial^4\psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1\,\partial v_2\,\partial v_3\,\partial v_4}\bigg|_{v_1 = v_2 = v_3 = v_4 = 0}
\]
From (2-1-151) of the text, and the zero-mean property of the given rv's:
\[
\psi(j\mathbf{v}) = e^{-\frac{1}{2}\mathbf{v}'M\mathbf{v}}
\]
where $\mathbf{v} = [v_1, v_2, v_3, v_4]'$, $M = [\mu_{ij}]$.
We obtain the desired result by bringing the exponent to a scalar form and then performing quadruple differentiation. We can simplify the procedure by noting that:
\[
\frac{\partial\psi(j\mathbf{v})}{\partial v_i} = -\boldsymbol{\mu}_i'\mathbf{v}\,e^{-\frac{1}{2}\mathbf{v}'M\mathbf{v}}
\]
where $\boldsymbol{\mu}_i' = [\mu_{i1}, \mu_{i2}, \mu_{i3}, \mu_{i4}]$. Also note that:
\[
\frac{\partial\boldsymbol{\mu}_j'\mathbf{v}}{\partial v_i} = \mu_{ij} = \mu_{ji}
\]
Hence:
\[
\frac{\partial^4\psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1\,\partial v_2\,\partial v_3\,\partial v_4}\bigg|_{\mathbf{v}=0}
= \mu_{12}\mu_{34} + \mu_{23}\mu_{14} + \mu_{24}\mu_{13}
\]
Problem 2.28
1) By the Chernov bound, for $t > 0$,
\[
P[X \ge \alpha] \le e^{-t\alpha}E\left[e^{tX}\right] = e^{-t\alpha}\,\Theta_X(t)
\]
This is true for all $t > 0$, hence
\[
\ln P[X \ge \alpha] \le \min_{t\ge 0}\left[-t\alpha + \ln\Theta_X(t)\right] = -\max_{t\ge 0}\left[t\alpha - \ln\Theta_X(t)\right]
\]
2) Here
\[
\ln P[S_n \ge \alpha] = \ln P[Y \ge n\alpha] \le -\max_{t\ge 0}\left[tn\alpha - \ln\Theta_Y(t)\right]
\]
where $Y = X_1 + X_2 + \cdots + X_n$, and $\Theta_Y(t) = E\left[e^{t(X_1 + X_2 + \cdots + X_n)}\right] = \left[\Theta_X(t)\right]^n$. Hence,
\[
\ln P[S_n \ge \alpha] \le -\max_{t\ge 0} n\left[t\alpha - \ln\Theta_X(t)\right] = -nI(\alpha)
\;\Rightarrow\; P[S_n \ge \alpha] \le e^{-nI(\alpha)}
\]
For the given density, $\Theta_X(t) = \int_0^{\infty} e^{tx}e^{-x}\,dx = \frac{1}{1-t}$ as long as $t < 1$. $I(\alpha) = \max_{t\ge 0}\left(t\alpha + \ln(1-t)\right)$; hence $\frac{d}{dt}\left(t\alpha + \ln(1-t)\right) = 0$ and $t^* = \frac{\alpha-1}{\alpha}$. Since $\alpha \ge 1$, $t^* \ge 0$ and also obviously $t^* < 1$. $I(\alpha) = \alpha - 1 + \ln\left(1 - \frac{\alpha-1}{\alpha}\right) = \alpha - 1 - \ln\alpha$; using the large deviation theorem,
\[
P[S_n \ge \alpha] = e^{-n(\alpha - 1 - \ln\alpha) + o(n)} = \alpha^n e^{-n(\alpha-1) + o(n)}
\]
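For i.i.d. unit-mean exponential variables, $nS_n$ has an Erlang (Gamma) distribution, so the Chernoff bound can be tested against the exact tail probability (a verification sketch, not part of the original solution):

```python
from math import exp, log, factorial

def p_sn_ge(n, alpha):
    """Exact P[S_n >= alpha] for i.i.d. Exp(1): n*S_n ~ Gamma(n, 1),
    whose survival function at x is exp(-x) * sum_{k<n} x^k / k!"""
    x = n * alpha
    return exp(-x) * sum(x**k / factorial(k) for k in range(n))

def chernoff(n, alpha):
    I = alpha - 1 - log(alpha)        # the rate function derived above
    return exp(-n * I)

for n in [5, 20, 50]:
    for alpha in [1.5, 2.0, 3.0]:
        assert p_sn_ge(n, alpha) <= chernoff(n, alpha)
print("Chernoff bound holds")
```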
Problem 2.29
For the central chi-square with $n$ degrees of freedom:
\[
\psi(jv) = \frac{1}{\left(1 - j2v\sigma^2\right)^{n/2}}
\]
Now:
\[
\frac{d\psi(jv)}{dv} = \frac{jn\sigma^2}{\left(1 - j2v\sigma^2\right)^{n/2+1}}
\;\Rightarrow\; E(Y) = -j\,\frac{d\psi(jv)}{dv}\bigg|_{v=0} = n\sigma^2
\]
\[
\frac{d^2\psi(jv)}{dv^2} = \frac{-2n\sigma^4(n/2+1)}{\left(1 - j2v\sigma^2\right)^{n/2+2}}
\;\Rightarrow\; E\left(Y^2\right) = -\frac{d^2\psi(jv)}{dv^2}\bigg|_{v=0} = n(n+2)\sigma^4
\]
The variance is $\sigma_Y^2 = E\left(Y^2\right) - \left[E(Y)\right]^2 = 2n\sigma^4$.
For the non-central chi-square with $n$ degrees of freedom:
\[
\psi(jv) = \frac{1}{\left(1 - j2v\sigma^2\right)^{n/2}}\,e^{jvs^2/\left(1 - j2v\sigma^2\right)}
\]
where by definition : s^2 = Σ_{i=1}^n m_i^2.

dψ(jv)/dv = [ jnσ^2/(1 − j2vσ^2)^{n/2+1} + js^2/(1 − j2vσ^2)^{n/2+2} ] e^{jvs^2/(1 − j2vσ^2)}

Hence, E(Y) = −j dψ(jv)/dv |_{v=0} = nσ^2 + s^2

d^2ψ(jv)/dv^2 = [ −nσ^4(n + 2)/(1 − j2vσ^2)^{n/2+2} + (−s^2(n + 4)σ^2 − ns^2σ^2)/(1 − j2vσ^2)^{n/2+3} + (−s^4)/(1 − j2vσ^2)^{n/2+4} ] e^{jvs^2/(1 − j2vσ^2)}

Hence,

E[Y^2] = −d^2ψ(jv)/dv^2 |_{v=0} = 2nσ^4 + 4s^2σ^2 + (nσ^2 + s^2)^2

and

σ_Y^2 = E[Y^2] − [E(Y)]^2 = 2nσ^4 + 4σ^2 s^2
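The mean and variance formulas for the non-central chi-square are easy to validate by direct simulation. A sketch, with n, σ and the means m_i picked only as an example:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 4, 1.3                       # illustrative parameters
m = np.array([0.5, -1.0, 2.0, 0.0])     # means m_i; s^2 = sum(m_i^2)
s2 = np.sum(m**2)
N = 1_000_000

# Y = sum of squares of n independent Gaussians N(m_i, sigma^2)
X = m + sigma * rng.standard_normal((N, n))
Y = np.sum(X**2, axis=1)

mean_theory = n * sigma**2 + s2                      # E(Y)
var_theory = 2 * n * sigma**4 + 4 * sigma**2 * s2    # sigma_Y^2
print(Y.mean(), mean_theory, Y.var(), var_theory)
```

The sample mean and variance of Y match nσ² + s² and 2nσ⁴ + 4σ²s² to within Monte Carlo error.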
Problem 2.30

The Cauchy r.v. has : p(x) = (a/π)/(x^2 + a^2), −∞ < x < ∞

a.

E(X) = ∫_{−∞}^∞ x p(x) dx = 0

since p(x) is an even function.

E[X^2] = ∫_{−∞}^∞ x^2 p(x) dx = (a/π) ∫_{−∞}^∞ x^2/(x^2 + a^2) dx

Note that for large x, x^2/(x^2 + a^2) → 1 (i.e. a non-zero value). Hence,

E[X^2] = ∞, σ^2 = ∞

b.

ψ(jv) = E[e^{jvX}] = ∫_{−∞}^∞ (a/π)/(x^2 + a^2) e^{jvx} dx = ∫_{−∞}^∞ (a/π)/((x + ja)(x − ja)) e^{jvx} dx

This integral can be evaluated by using the residue theorem in complex variable theory. Then, for v ≥ 0 :

ψ(jv) = 2πj [ (a/π)/(x + ja) e^{jvx} ]_{x=ja} = e^{−av}

For v < 0 :

ψ(jv) = −2πj [ (a/π)/(x − ja) e^{jvx} ]_{x=−ja} = e^{av}
Therefore :

ψ(jv) = e^{−a|v|}

Note: an alternative way to find the characteristic function is to use the Fourier transform relationship between p(x) and ψ(jv) and the Fourier pair :

e^{−b|t|} ↔ (1/π) · c/(c^2 + f^2), c = b/2π, f = 2πv
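The result ψ(jv) = e^{−a|v|} can also be checked empirically by averaging e^{jvX} over Cauchy samples generated with the inverse-CDF method. A sketch, with a and the test values of v chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)
a = 2.0              # illustrative value of the parameter a
N = 1_000_000

# Cauchy(a) samples via the inverse CDF: X = a * tan(pi * (U - 1/2))
X = a * np.tan(np.pi * (rng.random(N) - 0.5))

vs = np.array([0.5, 1.0, -1.5])
emp = np.array([np.mean(np.exp(1j * v * X)) for v in vs])   # empirical E[e^{jvX}]
theory = np.exp(-a * np.abs(vs))
print(np.abs(emp - theory))
```

Even though X has infinite variance, e^{jvX} is bounded, so the empirical characteristic function converges and matches e^{−a|v|} to about 10⁻³.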
Problem 2.31

Since R_0 and R_1 are independent, f_{R_0,R_1}(r_0, r_1) = f_{R_0}(r_0) f_{R_1}(r_1) and

f_{R_0,R_1}(r_0, r_1) = (r_0 r_1/σ^4) I_0(µr_1/σ^2) e^{−µ^2/2σ^2} e^{−(r_1^2 + r_0^2)/2σ^2},  r_0, r_1 ≥ 0

and 0 otherwise. Now

P(R_0 > R_1) = ∫∫_{r_0 > r_1} f(r_0, r_1) dr_1 dr_0
            = ∫_0^∞ dr_1 ∫_{r_1}^∞ f(r_0, r_1) dr_0
            = ∫_0^∞ f_{R_1}(r_1) [ ∫_{r_1}^∞ f_{R_0}(r_0) dr_0 ] dr_1
            = ∫_0^∞ f_{R_1}(r_1) [ ∫_{r_1}^∞ (r_0/σ^2) e^{−r_0^2/2σ^2} dr_0 ] dr_1
            = ∫_0^∞ f_{R_1}(r_1) [ −e^{−r_0^2/2σ^2} ]_{r_1}^∞ dr_1
            = ∫_0^∞ e^{−r_1^2/2σ^2} f_{R_1}(r_1) dr_1
            = ∫_0^∞ (r_1/σ^2) I_0(µr_1/σ^2) e^{−(µ^2 + 2r_1^2)/2σ^2} dr_1

Now using the change of variable y = √2 r_1 and letting s = µ/√2, we obtain

P(R_0 > R_1) = ∫_0^∞ (y/√2σ^2) I_0(sy/σ^2) e^{−(2s^2 + y^2)/2σ^2} dy/√2
            = (1/2) e^{−s^2/2σ^2} ∫_0^∞ (y/σ^2) I_0(sy/σ^2) e^{−(s^2 + y^2)/2σ^2} dy
            = (1/2) e^{−s^2/2σ^2} = (1/2) e^{−µ^2/4σ^2}
where we have used the fact that ∫_0^∞ (y/σ^2) I_0(sy/σ^2) e^{−(s^2 + y^2)/2σ^2} dy = 1, because it is the integral of a Rician pdf.
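The closed form P(R_0 > R_1) = (1/2)e^{−µ²/4σ²} can be confirmed by simulating the two envelopes directly (R_0 Rayleigh, R_1 Rician with specular component µ). A sketch with illustrative values of µ and σ:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 2.0, 0.8        # illustrative parameters
N = 1_000_000

# R0: Rayleigh envelope of a zero-mean complex Gaussian (sigma^2 per component)
R0 = np.abs(sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))
# R1: Rician envelope with specular component mu
R1 = np.abs(mu + sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

p_emp = np.mean(R0 > R1)
p_theory = 0.5 * np.exp(-mu**2 / (4 * sigma**2))
print(p_emp, p_theory)
```

For these parameters both values come out near 0.105, which is the error-probability expression familiar from noncoherent binary FSK.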
Problem 2.32

1. The joint pdf of a, b is :

p_{ab}(a, b) = p_{xy}(a − m_r, b − m_i) = p_x(a − m_r) p_y(b − m_i) = (1/2πσ^2) e^{−[(a − m_r)^2 + (b − m_i)^2]/2σ^2}

2. u = √(a^2 + b^2), φ = tan^{−1}(b/a) ⇒ a = u cos φ, b = u sin φ. The Jacobian of the transformation is :

J(a, b) = det [ ∂a/∂u  ∂a/∂φ ; ∂b/∂u  ∂b/∂φ ] = u

hence :

p_{uφ}(u, φ) = (u/2πσ^2) e^{−[(u cos φ − m_r)^2 + (u sin φ − m_i)^2]/2σ^2} = (u/2πσ^2) e^{−[u^2 + M^2 − 2uM cos(φ−θ)]/2σ^2}

where we have used the transformation :

M = √(m_r^2 + m_i^2), θ = tan^{−1}(m_i/m_r)  ⇔  m_r = M cos θ, m_i = M sin θ
3.

p_u(u) = ∫_0^{2π} p_{uφ}(u, φ) dφ
       = (u/2πσ^2) e^{−(u^2 + M^2)/2σ^2} ∫_0^{2π} e^{uM cos(φ−θ)/σ^2} dφ
       = (u/σ^2) e^{−(u^2 + M^2)/2σ^2} · (1/2π) ∫_0^{2π} e^{uM cos(φ−θ)/σ^2} dφ
       = (u/σ^2) e^{−(u^2 + M^2)/2σ^2} I_0(uM/σ^2)
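The last step uses (1/2π)∫_0^{2π} e^{z cos(φ−θ)} dφ = I_0(z). This marginalization can be verified numerically by integrating the joint pdf of part 2 over φ and comparing with the Rician form of part 3; the parameter values below are arbitrary:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import i0

# Illustrative parameters (any sigma > 0 and means m_r, m_i will do)
sigma, m_r, m_i = 1.2, 1.0, -0.7
M = np.hypot(m_r, m_i)
theta = np.arctan2(m_i, m_r)

u = np.linspace(0.05, 6.0, 40)
phi = np.linspace(0.0, 2.0 * np.pi, 4001)
U, PHI = np.meshgrid(u, phi, indexing="ij")

# Joint pdf p(u, phi) from part 2, marginalized numerically over phi
joint = (U / (2.0 * np.pi * sigma**2)) * np.exp(
    -(U**2 + M**2 - 2.0 * U * M * np.cos(PHI - theta)) / (2.0 * sigma**2))
marg_num = trapezoid(joint, phi, axis=1)

# Closed form from part 3: the Rician pdf
marg_th = (u / sigma**2) * np.exp(-(u**2 + M**2) / (2.0 * sigma**2)) * i0(u * M / sigma**2)
print(np.max(np.abs(marg_num - marg_th)))
```

Because the integrand is smooth and periodic in φ, the trapezoid rule over a full period is extremely accurate, and the two curves agree essentially to machine precision.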
Problem 2.33

a. Y = (1/n) Σ_{i=1}^n X_i, with ψ_{X_i}(jv) = e^{−a|v|}. Then

ψ_Y(jv) = E[ e^{jv(1/n) Σ_{i=1}^n X_i} ] = Π_{i=1}^n E[ e^{j(v/n)X_i} ] = Π_{i=1}^n ψ_{X_i}(jv/n) = ( e^{−a|v|/n} )^n = e^{−a|v|}

b. Since ψ_Y(jv) = ψ_{X_i}(jv) ⇒ p_Y(y) = p_{X_i}(x_i) ⇒ p_Y(y) = (a/π)/(y^2 + a^2).

c. As n → ∞, p_Y(y) = (a/π)/(y^2 + a^2), which is not Gaussian ; hence, the central limit theorem does not hold. The reason is that the Cauchy distribution does not have a finite variance.
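The conclusion that averaging does not narrow the Cauchy distribution is easy to see in simulation: the sample mean of n Cauchy variables has exactly the same quartiles (±a) as a single variable. A sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
a = 1.0          # illustrative scale parameter
N = 50_000       # number of realizations of Y

def cauchy(shape):
    # Cauchy(a) samples via the inverse CDF
    return a * np.tan(np.pi * (rng.random(shape) - 0.5))

Y_1 = cauchy(N)                          # Y for n = 1
Y_100 = cauchy((N, 100)).mean(axis=1)    # Y for n = 100

# Both should have quartiles at -a and +a: the pdf of Y does not depend on n
q_1 = np.percentile(Y_1, [25, 75])
q_100 = np.percentile(Y_100, [25, 75])
print(q_1, q_100)
```

For a Gaussian-like law the interquartile range of the mean would shrink like 1/√n; here it stays fixed at 2a, exactly as part (a) predicts.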
Problem 2.34

Since Z and Ze^{jθ} have the same pdf, we have E[Z] = E[Ze^{jθ}] = e^{jθ} E[Z] for all θ. Putting θ = π gives E[Z] = 0. We also have E[ZZ^t] = E[(Ze^{jθ})(Ze^{jθ})^t], or E[ZZ^t] = e^{2jθ} E[ZZ^t], for all θ. Putting θ = π/2 gives E[ZZ^t] = 0. Since Z is zero-mean and E[ZZ^t] = 0, we conclude that it is proper.
Problem 2.35

Using Equation 2.6-29 we note that for the zero-mean proper case, if W = e^{jθ}Z it is sufficient to show that det(C_W) = det(C_Z) and w^H C_W^{−1} w = z^H C_Z^{−1} z. But C_W = E[WW^H] = E[e^{jθ}Z e^{−jθ}Z^H] = E[ZZ^H] = C_Z, hence det(C_W) = det(C_Z). Similarly, w^H C_W^{−1} w = e^{−jθ} z^H C_Z^{−1} z e^{jθ} = z^H C_Z^{−1} z. Substituting into Equation 2.6-29, we conclude that p(w) = p(z).
Problem 2.36

Since Z is proper, we have E[(Z − E(Z))(Z − E(Z))^t] = 0. Let W = AZ + b; then

E[(W − E(W))(W − E(W))^t] = A E[(Z − E(Z))(Z − E(Z))^t] A^t = 0

hence W is proper as well.
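This invariance of properness under affine transforms can be checked numerically by estimating the pseudo-covariance of W = AZ + b for a circularly symmetric Z. The matrix A and offset b below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500_000

# A proper (circularly symmetric) complex Gaussian vector Z in C^2
Z = (rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))) / np.sqrt(2.0)

# Hypothetical affine transform W = A Z + b
A = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
              [0.0 + 1.0j, 2.0 + 0.0j]])
b = np.array([1.0 - 1.0j, 0.0 + 0.5j])
W = Z @ A.T + b

# The pseudo-covariance E[(W - E W)(W - E W)^t] should vanish
Wc = W - W.mean(axis=0)
pseudo = (Wc[:, :, None] * Wc[:, None, :]).mean(axis=0)
print(np.max(np.abs(pseudo)))
```

All entries of the estimated pseudo-covariance are zero up to Monte Carlo noise, while the ordinary covariance E[W_c W_c^H] is not, as expected for a proper but non-degenerate W.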