
Digital Communications I:
Modulation and Coding Course
Period 3 - 2007
Catharina Logothetis
Lecture 12
Last time, we talked about:
- How is decoding performed for Convolutional codes?
- What is a Maximum likelihood decoder?
- What are soft decisions and hard decisions?
- How does the Viterbi algorithm work?
Trellis of an example ½ Conv. code

Input bits: 1 0 1 0 0 (the last two are tail bits)
Output bits: 11 10 00 10 11
[Trellis diagram for times $t_1$ through $t_6$; each branch is labelled input/output, e.g. 0/00, 1/11, 0/10, 1/01.]
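As a sketch of how these output bits arise, the following Python snippet encodes the input above. The generator polynomials (7, 5) in octal are an assumption inferred from the branch labels in the trellis; the slide does not state them explicitly.

```python
# Minimal rate-1/2 convolutional encoder sketch (constraint length 3).
# Assumes generators g1 = 111, g2 = 101 (7, 5 in octal), inferred from the
# trellis branch labels; not stated explicitly on the slide.

def conv_encode(info_bits, n_tail=2):
    bits = list(info_bits) + [0] * n_tail   # append tail bits to flush the encoder
    s1, s2 = 0, 0                           # shift-register contents (encoder state)
    out = []
    for b in bits:
        u1 = b ^ s1 ^ s2                    # g1 = 1 1 1
        u2 = b ^ s2                         # g2 = 1 0 1
        out.append((u1, u2))                # one branch word per input bit
        s1, s2 = b, s1                      # shift the register
    return out

print(conv_encode([1, 0, 1]))
# [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]  ->  11 10 00 10 11, matching the trellis
```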
Block diagram of the DCS

Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Rate 1/n Conv. decoder → Information sink

Input sequence: $\mathbf{m} = (m_1, m_2, \ldots, m_i, \ldots)$

Codeword sequence: $\mathbf{U} = G(\mathbf{m}) = (U_1, U_2, U_3, \ldots, U_i, \ldots)$, where $U_i = u_{1i}, \ldots, u_{ji}, \ldots, u_{ni}$ is the $i$-th branch word ($n$ coded bits)

Received sequence: $\mathbf{Z} = (Z_1, Z_2, Z_3, \ldots, Z_i, \ldots)$, where $Z_i = z_{1i}, \ldots, z_{ji}, \ldots, z_{ni}$ are the demodulator outputs for the $i$-th branch word

Decoded sequence: $\hat{\mathbf{m}} = (\hat{m}_1, \hat{m}_2, \ldots, \hat{m}_i, \ldots)$
Soft and hard decision decoding

In hard decision:
- The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable that decision is.

In soft decision:
- The demodulator passes some side information to the decoder along with the decision. The side information gives the decoder a measure of confidence in the decision.
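As a small illustration of the difference (my own example, not from the lecture), the sketch below shows a noisy BPSK demodulator output being reduced to hard decisions versus being passed on as soft values.

```python
import numpy as np

# Toy illustration (not from the lecture): BPSK over AWGN, bit 1 -> +1, bit 0 -> -1.
# A hard-decision demodulator keeps only the sign; a soft-decision demodulator
# passes the (possibly quantized) matched-filter output on as a reliability measure.

rng = np.random.default_rng(0)
bits = np.array([1, 0, 1, 1, 0])
tx = 2 * bits - 1                                   # BPSK mapping: 0 -> -1, 1 -> +1
rx = tx + 0.5 * rng.standard_normal(bits.size)      # noisy matched-filter outputs

hard = (rx > 0).astype(int)      # hard decisions: only the sign survives
soft = np.round(rx * 3) / 3      # example soft values, quantized, kept for the decoder

print("received:", np.round(rx, 2))
print("hard    :", hard)
print("soft    :", soft)
```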
Soft and hard decision decoding …

- ML soft-decision decoding rule: choose the path in the trellis with the minimum Euclidean distance from the received sequence.
- ML hard-decision decoding rule: choose the path in the trellis with the minimum Hamming distance from the received sequence.

(A small example of both metrics is sketched after this list.)
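The following sketch (my own illustration, with made-up received values) computes the Hamming distance to a hard-decision sequence and the Euclidean distance to a soft-decision sequence for one candidate path.

```python
import numpy as np

# Illustration of the two ML path metrics (values are made up, not from the slides).
# Candidate path's branch words, as bits and as the corresponding BPSK symbols:
path_bits = np.array([1, 1, 1, 0, 0, 0])        # e.g. branch words 11 10 00
path_sym = 2 * path_bits - 1                    # 0 -> -1, 1 -> +1

z_hard = np.array([0, 1, 1, 0, 1, 0])                    # hard demodulator decisions
z_soft = np.array([-0.9, 0.8, 1.1, -0.2, 0.4, -1.0])     # soft demodulator outputs

hamming = np.sum(path_bits != z_hard)                    # metric for hard decisions
euclidean = np.sqrt(np.sum((path_sym - z_soft) ** 2))    # metric for soft decisions

print("Hamming distance  :", hamming)              # ML hard-decision rule: minimize this
print("Euclidean distance:", round(euclidean, 3))  # ML soft-decision rule: minimize this
```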
The Viterbi algorithm

- The Viterbi algorithm performs Maximum likelihood decoding.
- It finds the path through the trellis with the largest metric (maximum correlation or minimum distance).
- At each step in the trellis, it compares the partial metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric. (A minimal hard-decision implementation is sketched after this list.)
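The sketch below is a minimal hard-decision Viterbi decoder, assuming the rate-1/2, constraint-length-3 code from the trellis example with generators (7, 5) in octal; it is an illustrative sketch, not the lecture's reference implementation.

```python
# Minimal hard-decision Viterbi decoder sketch for the rate-1/2, constraint-length-3
# code used in the trellis example (generators assumed to be 7 and 5 in octal).

def branch_output(state, b):
    """Branch word (u1, u2) when input bit b is fed to encoder state (s1, s2)."""
    s1, s2 = state
    return (b ^ s1 ^ s2, b ^ s2)            # g1 = 111, g2 = 101 (assumed)

def viterbi_decode(received, n_info):
    """received: list of 2-bit branch words; n_info: number of information bits."""
    # path metric and survivor path per state; only state (0, 0) is valid at the start
    metrics = {(0, 0): (0, [])}
    for t, z in enumerate(received):
        new_metrics = {}
        inputs = (0,) if t >= n_info else (0, 1)       # tail bits are forced to 0
        for state, (metric, path) in metrics.items():
            for b in inputs:
                out = branch_output(state, b)
                branch_metric = (out[0] != z[0]) + (out[1] != z[1])   # Hamming distance
                nxt = (b, state[0])                    # shift-register update
                cand = (metric + branch_metric, path + [b])
                # keep only the survivor: the entering path with the smallest metric
                if nxt not in new_metrics or cand[0] < new_metrics[nxt][0]:
                    new_metrics[nxt] = cand
        metrics = new_metrics
    # the tail drives the encoder back to the all-zero state; read the survivor there
    final_metric, final_path = metrics[(0, 0)]
    return final_path[:n_info], final_metric

# Decoding the error-free codeword 11 10 00 10 11 recovers the message 101:
print(viterbi_decode([(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)], n_info=3))
```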
Example of hard-decision Viterbi decoding

Transmitted: $\mathbf{m} = (101)$, $\mathbf{U} = (11\ 10\ 00\ 10\ 11)$
Received: $\mathbf{Z} = (01\ 10\ 11\ 10\ 11)$
Decoded: $\hat{\mathbf{m}} = (100)$, $\hat{\mathbf{U}} = (11\ 00\ 11\ 10\ 11)$
[Trellis diagram at times $t_1, \ldots, t_6$, annotated with the Hamming branch metrics and the partial path metrics $\Gamma(S(t_i), t_i)$ of the survivors.]
Example of soft-decision Viterbi decoding

Transmitted: $\mathbf{m} = (101)$, $\mathbf{U} = (11\ 10\ 00\ 10\ 11)$
Received: $\mathbf{Z}$ consists of the soft demodulator outputs, taking values $\pm 1$ and $\pm 2/3$ (see the trellis figure)
Decoded: $\hat{\mathbf{m}} = (101)$, $\hat{\mathbf{U}} = (11\ 10\ 00\ 10\ 11)$
[Trellis diagram at times $t_1, \ldots, t_6$, annotated with the soft branch metrics and the partial path metrics $\Gamma(S(t_i), t_i)$ of the survivors.]
Today, we are going to talk about:
- The properties of Convolutional codes:
  - Free distance
  - Transfer function
  - Systematic Conv. codes
  - Catastrophic Conv. codes
  - Error performance
- Interleaving
- Concatenated codes
- Error correction scheme in Compact disc
Free distance of Convolutional codes

Distance properties:
- Since a Convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
  - Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
  - This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge to the all-zero path.
  - It is called the minimum free distance, or the free distance of the code, denoted by $d_{\text{free}}$ or $d_f$. (A small search for $d_{\text{free}}$ of the example code is sketched below.)
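As an illustration of this definition, the following sketch (my own, again assuming the rate-1/2 (7, 5) code from the earlier trellis) searches over paths that diverge from and remerge to the all-zero path and reports the smallest Hamming weight found, which is the free distance.

```python
from heapq import heappush, heappop

# Sketch (assuming the (7, 5) rate-1/2 code from the trellis example): the free
# distance is the minimum Hamming weight of a nonzero path that diverges from and
# later remerges to the all-zero state. Found here with a shortest-path search over
# the trellis states, using the output weight of each branch as the edge cost.

def branch_output(state, b):
    s1, s2 = state
    return (b ^ s1 ^ s2, b ^ s2)            # assumed generators g1 = 111, g2 = 101

def free_distance():
    # Start with the diverging branch: input 1 from the all-zero state.
    out = branch_output((0, 0), 1)
    heap = [(sum(out), (1, 0))]             # (accumulated weight, current state)
    best = {}
    while heap:
        w, state = heappop(heap)
        if state == (0, 0):                 # remerged with the all-zero path
            return w
        if best.get(state, float("inf")) <= w:
            continue
        best[state] = w
        for b in (0, 1):
            out = branch_output(state, b)
            heappush(heap, (w + sum(out), (b, state[0])))
    return None

print(free_distance())   # prints 5, the free distance of the (7, 5) code
```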