

ESSENTIALS OF
ERROR-CONTROL
CODING
Jorge Castiñeira Moreira
University of Mar del Plata, Argentina

Patrick Guy Farrell
Lancaster University, UK





Copyright © 2006 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester,
West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777

Email (for orders and customer service enquiries):
Visit our Home Page on www.wiley.com
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or
transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or
otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a
licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK,
without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the
Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19
8SQ, England, or emailed to , or faxed to (+44) 1243 770620.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and
product names used in this book are trade names, service marks, trademarks or registered trademarks of their
respective owners. The Publisher is not associated with any product or vendor mentioned in this book.
This publication is designed to provide accurate and authoritative information in regard to the subject matter
covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If
professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, ONT, L5R 4J3, Canada

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic books.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN-13 978-0-470-02920-6 (HB)
ISBN-10 0-470-02920-X (HB)
Typeset in 10/12pt Times by TechBooks, New Delhi, India.
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, England.
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two
trees are planted for each one used for paper production.


We dedicate this book to
my son Santiago José,
Melisa and Belén,
Maria, Isabel, Alejandra and Daniel,
and the memory of my Father.
J.C.M.
and to all my families and friends.
P.G.F.



Contents

Preface
Acknowledgements
List of Symbols
Abbreviations

1 Information and Coding Theory
  1.1 Information
    1.1.1 A Measure of Information
  1.2 Entropy and Information Rate
  1.3 Extended DMSs
  1.4 Channels and Mutual Information
    1.4.1 Information Transmission over Discrete Channels
    1.4.2 Information Channels
  1.5 Channel Probability Relationships
  1.6 The A Priori and A Posteriori Entropies
  1.7 Mutual Information
    1.7.1 Mutual Information: Definition
    1.7.2 Mutual Information: Properties
  1.8 Capacity of a Discrete Channel
  1.9 The Shannon Theorems
    1.9.1 Source Coding Theorem
    1.9.2 Channel Capacity and Coding
    1.9.3 Channel Coding Theorem
  1.10 Signal Spaces and the Channel Coding Theorem
    1.10.1 Capacity of the Gaussian Channel
  1.11 Error-Control Coding
  1.12 Limits to Communication and their Consequences
  Bibliography and References
  Problems

2 Block Codes
  2.1 Error-Control Coding
  2.2 Error Detection and Correction
    2.2.1 Simple Codes: The Repetition Code
  2.3 Block Codes: Introduction and Parameters
  2.4 The Vector Space over the Binary Field
    2.4.1 Vector Subspaces
    2.4.2 Dual Subspace
    2.4.3 Matrix Form
    2.4.4 Dual Subspace Matrix
  2.5 Linear Block Codes
    2.5.1 Generator Matrix G
    2.5.2 Block Codes in Systematic Form
    2.5.3 Parity Check Matrix H
  2.6 Syndrome Error Detection
  2.7 Minimum Distance of a Block Code
    2.7.1 Minimum Distance and the Structure of the H Matrix
  2.8 Error-Correction Capability of a Block Code
  2.9 Syndrome Detection and the Standard Array
  2.10 Hamming Codes
  2.11 Forward Error Correction and Automatic Repeat ReQuest
    2.11.1 Forward Error Correction
    2.11.2 Automatic Repeat ReQuest
    2.11.3 ARQ Schemes
    2.11.4 ARQ Scheme Efficiencies
    2.11.5 Hybrid-ARQ Schemes
  Bibliography and References
  Problems

3 Cyclic Codes
  3.1 Description
  3.2 Polynomial Representation of Codewords
  3.3 Generator Polynomial of a Cyclic Code
  3.4 Cyclic Codes in Systematic Form
  3.5 Generator Matrix of a Cyclic Code
  3.6 Syndrome Calculation and Error Detection
  3.7 Decoding of Cyclic Codes
  3.8 An Application Example: Cyclic Redundancy Check Code for the Ethernet Standard
  Bibliography and References
  Problems

4 BCH Codes
  4.1 Introduction: The Minimal Polynomial
  4.2 Description of BCH Cyclic Codes
    4.2.1 Bounds on the Error-Correction Capability of a BCH Code: The Vandermonde Determinant
  4.3 Decoding of BCH Codes
  4.4 Error-Location and Error-Evaluation Polynomials
  4.5 The Key Equation
  4.6 Decoding of Binary BCH Codes Using the Euclidean Algorithm
    4.6.1 The Euclidean Algorithm
  Bibliography and References
  Problems

5 Reed–Solomon Codes
  5.1 Introduction
  5.2 Error-Correction Capability of RS Codes: The Vandermonde Determinant
  5.3 RS Codes in Systematic Form
  5.4 Syndrome Decoding of RS Codes
  5.5 The Euclidean Algorithm: Error-Location and Error-Evaluation Polynomials
  5.6 Decoding of RS Codes Using the Euclidean Algorithm
    5.6.1 Steps of the Euclidean Algorithm
  5.7 Decoding of RS and BCH Codes Using the Berlekamp–Massey Algorithm
    5.7.1 B–M Iterative Algorithm for Finding the Error-Location Polynomial
    5.7.2 B–M Decoding of RS Codes
    5.7.3 Relationship Between the Error-Location Polynomials of the Euclidean and B–M Algorithms
  5.8 A Practical Application: Error-Control Coding for the Compact Disk
    5.8.1 Compact Disk Characteristics
    5.8.2 Channel Characteristics
    5.8.3 Coding Procedure
  5.9 Encoding for RS Codes CRS(28, 24), CRS(32, 28) and CRS(255, 251)
  5.10 Decoding of RS Codes CRS(28, 24) and CRS(32, 28)
    5.10.1 B–M Decoding
    5.10.2 Alternative Decoding Methods
    5.10.3 Direct Solution of Syndrome Equations
  5.11 Importance of Interleaving
  Bibliography and References
  Problems

6 Convolutional Codes
  6.1 Linear Sequential Circuits
  6.2 Convolutional Codes and Encoders
  6.3 Description in the D-Transform Domain
  6.4 Convolutional Encoder Representations
    6.4.1 Representation of Connections
    6.4.2 State Diagram Representation
    6.4.3 Trellis Representation
  6.5 Convolutional Codes in Systematic Form
  6.6 General Structure of Finite Impulse Response and Infinite Impulse Response FSSMs
    6.6.1 Finite Impulse Response FSSMs
    6.6.2 Infinite Impulse Response FSSMs
  6.7 State Transfer Function Matrix: Calculation of the Transfer Function
    6.7.1 State Transfer Function for FIR FSSMs
    6.7.2 State Transfer Function for IIR FSSMs
  6.8 Relationship Between the Systematic and the Non-Systematic Forms
  6.9 Distance Properties of Convolutional Codes
  6.10 Minimum Free Distance of a Convolutional Code
  6.11 Maximum Likelihood Detection
  6.12 Decoding of Convolutional Codes: The Viterbi Algorithm
  6.13 Extended and Modified State Diagram
  6.14 Error Probability Analysis for Convolutional Codes
  6.15 Hard and Soft Decisions
    6.15.1 Maximum Likelihood Criterion for the Gaussian Channel
    6.15.2 Bounds for Soft-Decision Detection
    6.15.3 An Example of Soft-Decision Decoding of Convolutional Codes
  6.16 Punctured Convolutional Codes and Rate-Compatible Schemes
  Bibliography and References
  Problems

7 Turbo Codes
  7.1 A Turbo Encoder
  7.2 Decoding of Turbo Codes
    7.2.1 The Turbo Decoder
    7.2.2 Probabilities and Estimates
    7.2.3 Symbol Detection
    7.2.4 The Log Likelihood Ratio
  7.3 Markov Sources and Discrete Channels
  7.4 The BCJR Algorithm: Trellis Coding and Discrete Memoryless Channels
  7.5 Iterative Coefficient Calculation
  7.6 The BCJR MAP Algorithm and the LLR
    7.6.1 The BCJR MAP Algorithm: LLR Calculation
    7.6.2 Calculation of Coefficients γi(u′, u)
  7.7 Turbo Decoding
    7.7.1 Initial Conditions of Coefficients αi−1(u′) and βi(u)
  7.8 Construction Methods for Turbo Codes
    7.8.1 Interleavers
    7.8.2 Block Interleavers
    7.8.3 Convolutional Interleavers
    7.8.4 Random Interleavers
    7.8.5 Linear Interleavers
    7.8.6 Code Concatenation Methods
    7.8.7 Turbo Code Performance as a Function of Size and Type of Interleaver
  7.9 Other Decoding Algorithms for Turbo Codes
  7.10 EXIT Charts for Turbo Codes
    7.10.1 Introduction to EXIT Charts
    7.10.2 Construction of the EXIT Chart
    7.10.3 Extrinsic Transfer Characteristics of the Constituent Decoders
  Bibliography and References
  Problems

8 Low-Density Parity Check Codes
  8.1 Different Systematic Forms of a Block Code
  8.2 Description of LDPC Codes
  8.3 Construction of LDPC Codes
    8.3.1 Regular LDPC Codes
    8.3.2 Irregular LDPC Codes
    8.3.3 Decoding of LDPC Codes: The Tanner Graph
  8.4 The Sum–Product Algorithm
  8.5 Sum–Product Algorithm for LDPC Codes: An Example
  8.6 Simplifications of the Sum–Product Algorithm
  8.7 A Logarithmic LDPC Decoder
    8.7.1 Initialization
    8.7.2 Horizontal Step
    8.7.3 Vertical Step
    8.7.4 Summary of the Logarithmic Decoding Algorithm
    8.7.5 Construction of the Look-up Tables
  8.8 Extrinsic Information Transfer Charts for LDPC Codes
    8.8.1 Introduction
    8.8.2 Iterative Decoding of Block Codes
    8.8.3 EXIT Chart Construction for LDPC Codes
    8.8.4 Mutual Information Function
    8.8.5 EXIT Chart for the SND
    8.8.6 EXIT Chart for the PCND
  8.9 Fountain and LT Codes
    8.9.1 Introduction
    8.9.2 Fountain Codes
    8.9.3 Linear Random Codes
    8.9.4 Luby Transform Codes
  8.10 LDPC and Turbo Codes
  Bibliography and References
  Problems

Appendix A: Error Probability in the Transmission of Digital Signals

Appendix B: Galois Fields GF(q)

Answers to Problems

Index



Preface
The subject of this book is the detection and correction of errors in digital information. Such
errors almost inevitably occur after the transmission, storage or processing of information in
digital (mainly binary) form, because of noise and interference in communication channels,
or imperfections in storage media, for example. Protecting digital information with a suitable
error-control code enables the efficient detection and correction of any errors that may have
occurred.
Error-control codes are now used in almost the entire range of information communication,
storage and processing systems. Rapid advances in electronic and optical devices and systems
have enabled the implementation of very powerful codes with close to optimum error-control
performance. In addition, new types of code, and new decoding methods, have recently been
developed and are starting to be applied. However, error-control coding is complex, novel and
unfamiliar, and not yet widely understood and appreciated. This book sets out to provide a clear
description of the essentials of the topic, with comprehensive and up-to-date coverage of the
most useful codes and their decoding algorithms. The book has a practical engineering and
information technology emphasis, but includes relevant background material and fundamental
theoretical aspects. Several system applications of error-control codes are described, and there
are many worked examples and problems for the reader to solve.
The book is an advanced text aimed at postgraduate and third/final year undergraduate
students of courses on telecommunications engineering, communication networks, electronic
engineering, computer science, information systems and technology, digital signal processing,
and applied mathematics, and for engineers and researchers working in any of these areas. The
book is designed to be virtually self-contained for a reader with any of these backgrounds.
Enough information and signal theory, and coding mathematics, is included to enable a full
understanding of any of the error-control topics described in the book.
Chapter 1 provides an introduction to information theory and how it relates to error-control
coding. The theory defines what we mean by information, determines limits on the capacity of
an information channel and tells us how efficient a code is at detecting and correcting errors.
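As a foretaste of the kind of result Chapter 1 establishes, Section 1.10.1 derives the capacity
of the band-limited Gaussian channel. A small worked instance is sketched below; the bandwidth
and signal-to-noise figures are illustrative assumptions only, not values taken from the book.

```latex
% Capacity of the band-limited AWGN channel (Section 1.10.1):
% B is the channel bandwidth and S/N the signal-to-noise power ratio.
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}
% Illustrative (assumed) figures B = 3000 Hz and S/N = 1000 (30 dB) give
C = 3000 \log_2(1001) \approx 29\,900 \ \text{bits per second}
```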
Chapter 2 describes the basic concepts of error detection and correction, in the context of the
parameters, encoding and decoding of some simple binary block error-control codes. Block
codes were the first type of error-control code to be discovered, in the decade from about 1940
to 1950. The two basic ways in which error coding is applied to an information system are
also described: forward error correction and retransmission error control.
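As a flavour of the simplest such code, here is a minimal sketch (ours, not code from the book)
of the repetition code of Section 2.2.1, with majority-vote decoding; the function names are
our own.

```python
# A minimal sketch of the binary repetition code (Section 2.2.1).
# Each message bit is sent n times; majority voting corrects up to
# floor((n - 1) / 2) channel errors per block.

def repetition_encode(bits, n=3):
    """Encode each message bit by repeating it n times."""
    return [b for bit in bits for b in [bit] * n]

def repetition_decode(received, n=3):
    """Decode by majority vote over each block of n received bits."""
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(block) > n // 2 else 0 for block in blocks]

if __name__ == "__main__":
    message = [1, 0, 1]
    codeword = repetition_encode(message)          # [1,1,1, 0,0,0, 1,1,1]
    codeword[1] ^= 1                               # one channel error
    assert repetition_decode(codeword) == message  # the error is corrected
```
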
A particularly useful kind of block code, the cyclic code, is introduced in Chapter 3, together
with an example of a practical application, the cyclic redundancy check (CRC) code for the
Ethernet standard.
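The sketch below illustrates the CRC idea; it leans on the fact that Python's standard
zlib.crc32 computes the same 32-bit CRC (generator polynomial 0x04C11DB7) that Ethernet uses
as its frame check sequence, although the framing details here are simplified assumptions
rather than the exact 802.3 procedure.

```python
# A minimal sketch of CRC-32 error detection (Section 3.8, simplified).
import zlib

def crc32_append(payload: bytes) -> bytes:
    """Append the CRC-32 of the payload, least significant byte first."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def crc32_check(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the appended value."""
    payload, crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == crc

if __name__ == "__main__":
    sent = crc32_append(b"example payload")
    assert crc32_check(sent)                        # clean frame passes
    corrupted = bytes([sent[0] ^ 1]) + sent[1:]
    assert not crc32_check(corrupted)               # a single bit error is caught
```

In Chapters 4 and 5 two very effective and widely used classes of cyclic codes are described,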
the Bose–Chaudhuri–Hocquenghem (BCH) and Reed–Solomon (RS) codes, named after their
inventors. BCH codes can be binary or non-binary, but the RS codes are non-binary and are
particularly effective in a large number of error-control scenarios. One of the best known of
these, also described in Chapter 5, is the application of RS codes to error correction in the
compact disk (CD).
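A small worked calculation conveys why these codes suit the CD so well. An RS code is maximum
distance separable, so its minimum distance and error-correction capability follow directly
from the code parameters:

```latex
% Minimum distance and symbol-error-correcting capability of an RS code C_RS(n, k):
d_{\min} = n - k + 1, \qquad t = \left\lfloor \frac{n - k}{2} \right\rfloor
% For the two CD codes of Chapter 5:
t_{(28,24)} = \left\lfloor \tfrac{28-24}{2} \right\rfloor = 2, \qquad
t_{(32,28)} = \left\lfloor \tfrac{32-28}{2} \right\rfloor = 2
% i.e. each codeword can have up to two erroneous bytes corrected.
```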
Not long after the discovery of block codes, a second type of error-control codes emerged,
initially called recurrent and later convolutional codes. Encoding and decoding even a quite
powerful convolutional code involves rather simple, repetitive, quasi-continuous processes,
applied on a very convenient trellis representation of the code, instead of the more complex
block processing that seems to be required in the case of a powerful block code. This makes it
relatively easy to use maximum likelihood (soft-decision) decoding with convolutional codes,
in the form of the optimum Viterbi algorithm (VA). Convolutional codes, their trellis and state
diagrams, soft-decision detection, the Viterbi decoding algorithm, and practical punctured
and rate-compatible coding schemes are all presented in Chapter 6.
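To make the trellis idea concrete, here is a minimal hard-decision sketch (ours, not the
book's) of the VA for the classic rate-1/2, constraint-length-3 code with generator
polynomials (7, 5) in octal; a soft-decision decoder would simply replace the Hamming branch
metric with a Euclidean one.

```python
# A minimal hard-decision Viterbi decoder for the (7, 5) convolutional code.
# The state is the pair of most recent input bits; branch metrics are Hamming
# distances between received pairs and the trellis branch labels.

def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]   # generators g1 = 111, g2 = 101
        s1, s2 = u, s1
    return out

def viterbi_decode(received):
    INF = float("inf")
    metric = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {state: [] for state in metric}
    for i in range(len(received) // 2):
        r = received[2 * i: 2 * i + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for (s1, s2), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):
                branch = [u ^ s1 ^ s2, u ^ s2]
                cost = m + (branch[0] != r[0]) + (branch[1] != r[1])
                nxt = (u, s1)
                if cost < new_metric[nxt]:
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[(s1, s2)] + [u]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)    # survivor with the smallest metric
    return paths[best]

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0]          # two trailing zeros flush the encoder
    coded = conv_encode(message)
    coded[3] ^= 1                          # one channel error
    assert viterbi_decode(coded) == message
```

Disappointingly, however,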
even very powerful convolutional codes were found to be incapable of achieving performances
close to the limits first published by Shannon, the father of information theory, in 1948. This
was still true even when very powerful combinations of block and convolutional codes, called
concatenated codes, were devised. The breakthrough, by Berrou, Glavieux and Thitimajshima
in 1993, was to use a special kind of interleaved concatenation, in conjunction with iterative
soft-decision decoding. All aspects of these very effective coding schemes, called turbo codes
because of the supercharging effect of the iterative decoding algorithm, are fully described in
Chapter 7.
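A single equation captures the engine of the turbo decoder. Using the notation of Chapter 7
(with La denoting the a priori term, a label we adopt here for clarity), the a posteriori log
likelihood ratio of each information bit splits into three parts, and it is only the third,
extrinsic, part that each decoder passes to the other through the interleaver:

```latex
% Sketch of the standard LLR decomposition for a systematic constituent code:
L(b_i / \mathbf{Y}) = L_c y_i + L_a(b_i) + L_e(b_i)
% channel term + a priori term + extrinsic term; iterating the exchange of
% L_e(b_i) between the two decoders is what gives the turbo its supercharge.
```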
The final chapter returns to the topic of block codes, in the form of low-density parity check
(LDPC) codes. Block codes had been found to have trellis representations, so that they could
be soft-decision decoded with performances almost as good as those of convolutional codes.
Also, they could be used in effective turbo coding schemes. Complexity remained a problem,
however, until it was quite recently realized that a particularly simple class of codes, the LDPC
codes discovered by Gallager in 1962, was capable of delivering performances as good as or
better than those of turbo codes when decoded by an appropriate iterative algorithm. All
aspects of the construction, encoding, decoding and performance of LDPC codes are fully
described in Chapter 8, together with various forms of LDPC codes which are particularly effective for use
in communication networks.
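As a hint of how simply such codes can be decoded, the sketch below implements hard-decision
bit flipping, a primitive relative of the sum-product algorithm of Chapter 8; the parity check
matrix H is an illustrative toy, not one taken from the book.

```python
# A minimal sketch of hard-decision bit-flipping decoding for an LDPC-style
# code: bits taking part in the largest number of failed parity checks are
# flipped until every check is satisfied (or an iteration limit is reached).

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def syndrome(H, word):
    """Evaluate every parity check over GF(2)."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            return word                  # all parity checks satisfied
        # count, per bit, the unsatisfied checks it participates in
        fails = [sum(s[i] for i in range(len(H)) if H[i][j])
                 for j in range(len(word))]
        worst = max(fails)
        word = [c ^ (f == worst) for c, f in zip(word, fails)]
    return word

if __name__ == "__main__":
    received = [0, 0, 1, 0, 0, 0]        # all-zero codeword with one error
    assert bit_flip_decode(H, received) == [0, 0, 0, 0, 0, 0]
```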
Appendix A shows how to calculate the error probability of digital signals transmitted over
additive white Gaussian noise (AWGN) channels, and Appendix B introduces various topics in
discrete mathematics.
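A representative example: for antipodal (BPSK) signalling with coherent detection, the AWGN
channel gives Pbe = Q(√(2Eb/N0)), which the short sketch below evaluates via the complementary
error function in the Python standard library.

```python
# A minimal sketch of the uncoded BPSK bit error rate over the AWGN channel
# (the kind of result derived in Appendix A): Pbe = (1/2) erfc(sqrt(Eb/N0)).
import math

def bpsk_ber(ebn0_db: float) -> float:
    """Bit error rate of uncoded BPSK at the given Eb/N0 in dB."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

if __name__ == "__main__":
    for db in (0, 4, 8, 10):
        print(f"Eb/N0 = {db:2d} dB  ->  Pbe = {bpsk_ber(db):.2e}")
```

These are followed by a list of the answers to the problems located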
at the end of each chapter. Detailed solutions are available on the website associated with this
book, which can be found at the following address:
The website also contains additional material, which will be regularly updated in response
to comments and questions from readers.


Acknowledgements
We are very grateful for all the help, support and encouragement we have had during the writing
of this book, from our colleagues past and present, from many generations of research assistants
and students, from the reviewers and from our families and friends. We particularly thank
Damian Levin and Leonardo Arnone for their contributions to Chapters 7 and 8, respectively;
Mario Blaum, Rolando Carrasco, Evan Ciner, Bahram Honary, Garik Markarian and Robert
McEliece for stimulating discussions and very welcome support; and Sarah Hinton at John
Wiley & Sons, Ltd, who patiently waited for her initial suggestion to bear fruit.



List of Symbols

Chapter 1
α  probability of occurrence of a source symbol (Chapter 1)
δ, ε  arbitrary small numbers
σ  standard deviation
Ω(α)  entropy of the binary source evaluated using logs to base 2
B  bandwidth of a channel
C  capacity of a channel, bits per second
c  code vector, codeword
Cs  capacity of a channel, bits per symbol
d, i, j, k, l, m, n  integer numbers
Eb  average bit energy
Eb/N0  average bit energy-to-noise power spectral density ratio
H(X)  entropy in bits per second
H(X^n)  entropy of an extended source
H(X/yj)  a posteriori entropy
H(X/Y)  equivocation
H(Y/X)  noise entropy
Hb(X)  entropy of a discrete source calculated in logs to base b
I(xi, yj)  mutual information of xi, yj
I(X, Y)  average mutual information
Ii  information of the symbol xi
M  number of symbols of a discrete source
n  length of a block of information, block code length
N0/2  noise power spectral density
nf  large number of emitted symbols
p  error probability of the BSC or BEC
P  power of a signal
P(xi) = Pi  probability of occurrence of the symbol xi
P(xi/yj)  backward transition probability
P(xi, yj)  joint probability of xi, yj
P(X/Y)  conditional probability of vector X given vector Y
Pij = P(yj/xi)  conditional probability of symbol yj given xi, also transition probability of a channel; forward transition probability
Pke  error probability, in general k identifies a particular index
PN  noise power
Pch  transition probability matrix
Qi  a probability
R  information rate
rb  bit rate
s, r  symbol rate
S/N  signal-to-noise ratio
T  signal time duration
Ts  sampling period
W  bandwidth of a signal
x  variable in general, also a particular value of random variable X
X  random variable (Chapters 1, 7 and 8), and variable of a polynomial expression (Chapters 3, 4 and 5)
x(t), s(t)  signals in the time domain
xi  value of a source symbol, also a symbol input to a channel
xk = x(kTs)  sample of signal x(t)
||X||  norm of vector X
yj  value of a symbol, generally a channel output

Chapter 2
A  amplitude of a signal or symbol
Ai  number of codewords of weight i
D  stopping time (Chapter 2); D-transform domain variable
d(ci, cj)  Hamming distance between two code vectors
Di  set of codewords
dmin  minimum distance of a code
e  error pattern vector
F  a field
f(m)  redundancy obtained, code C0, hybrid ARQ
G  generator matrix
gi  row vector of generator matrix G
gij  element of generator matrix
GF(q)  Galois or finite field
H  parity check matrix
hj  row vector of parity check matrix H
k, n  message and code lengths in a block code
l  number of detectable errors in a codeword
m  random number of transmissions (Chapter 2)
m  message vector
N  integer number
P(i, n)  probability of i erroneous symbols in a block of n symbols
P  parity check submatrix
pij  element of the parity check submatrix
pprime  prime number
Pbe  bit error rate (BER)
Pret  probability of a retransmission in ARQ schemes
PU(E)  probability of undetected errors
Pwe  word or code vector error probability
q  power of a prime number pprime
q(m)  redundancy obtained, code C1, hybrid ARQ
r  received vector
Rc  code rate
S  subspace of a vector space V (Chapter 2)
S  syndrome vector (Chapters 2–5, 8)
si  component of a syndrome vector (Chapters 2–5, 8)
Sd  dual subspace of the subspace S
t  number of correctable errors in a codeword
td  transmission delay
Tw  duration of a word
u = (u1, u2, ..., un−1)  vector of n components
V  a vector space
Vn  vector space of dimension n
w(c)  Hamming weight of code vector c

Chapter 3
αi  primitive element of Galois field GF(q) (Chapters 4 and 5, Appendix B)
βi  root of minimal polynomial (Chapters 4 and 5, Appendix B)
c(X)  code polynomial
c(i)(X)  i-position right-shift rotated version of the polynomial c(X)
e(X)  error polynomial
g(X)  generator polynomial
m(X)  message polynomial
p(X)  remainder polynomial (redundancy polynomial in systematic form) (Chapter 3)
pi(X)  primitive polynomial
r  level of redundancy and degree of the generator polynomial (Chapters 3 and 4 only)
r(X)  received polynomial
S(X)  syndrome polynomial

Chapter 4
βl, α^jl  error-location numbers
Φi(X)  minimal polynomial
μ(X)  auxiliary polynomial in the key equation
σ(X)  error-location polynomial (Euclidean algorithm)
τ  number of errors in a received vector
ejh  value of an error
jl  position of an error in a received vector
qi, ri, si, ti  auxiliary numbers in the Euclidean algorithm (Chapters 4 and 5)
ri(X), si(X), ti(X)  auxiliary polynomials in the Euclidean algorithm (Chapters 4 and 5)
W(X)  error-evaluation polynomial

Chapter 5
ρ  a previous step with respect to μ in the Berlekamp–Massey (B–M) algorithm
σBM^(μ)(X)  error-location polynomial, B–M algorithm, μth iteration
dμ  μth discrepancy, B–M algorithm
lμ  degree of the polynomial σBM^(μ)(X), B–M algorithm
m̂  estimate of a message vector
sRS  number of shortened symbols in a shortened RS code
Z(X)  polynomial for determining error values in the B–M algorithm

Chapter 6
Ai  number of sequences of weight i (Chapter 6)
Ai,j,l  number of paths of weight i, of length j, which result from an input of weight l
bi(T)  sampled value of bi(t), the noise-free signal, at time instant T
C(D)  code polynomial expressions in the D domain
ci  ith branch of code sequence c
ci  n-tuple of coded elements
C^m(D)  multiplexed output of a convolutional encoder in the D domain
cji  jth code symbol of ci
C^(j)(D)  output sequence of the jth branch of a convolutional encoder, in the D domain
c^(j) = (c0^(j), c1^(j), c2^(j), ...)  output sequence of the jth branch of a convolutional encoder
df  minimum free distance of a convolutional code
dH  Hamming distance
G(D)  rational transfer function of polynomial expressions in the D domain
G(D)  rational transfer function matrix in the D domain
Gi^(j)(D)  impulse response of the jth branch of a convolutional encoder, in the D domain
gi^(j) = (gi0^(j), gi1^(j), gi2^(j), ...)  impulse response of the jth branch of a convolutional encoder
[GF(q)]^n  extended vector space
H0  hypothesis of the transmission of symbol ‘0’
H1  hypothesis of the transmission of symbol ‘1’
J  decoding length
K  number of memory units of a convolutional encoder
K + 1  constraint length of a convolutional code
Ki  length of the ith register of a convolutional encoder
L  length of a sequence
M(D)  message polynomial expressions in the D domain
mi  k-tuple of message elements
nA  constraint length of a convolutional code, measured in bits
Pp  puncturing matrix
S(D)  state transfer function
si(k)  state sequences in the time domain
si(t)  a signal in the time domain
Si(D)  state sequences in the D domain
Sj = (s0j, s1j, s2j, ...)  state vectors of a convolutional encoder
sr  received sequence
sri  ith branch of received sequence sr
sr,ji  jth symbol of sri
T(X)  generating function of a convolutional code
T(X, Y, Z)  modified generating function
ti  time instant
Tp  puncturing period

Chapter 7
αi(u)  forward recursion coefficients of the BCJR algorithm
βi(u)  backward recursion coefficients of the BCJR algorithm
λi(u), σi(u′, u), γi(u′, u)  quantities involved in the BCJR algorithm
μ(x)  measure or metric of the event x
μ(x, y)  joint measure for a pair of random variables X and Y
μMAP(x)  maximum a posteriori measure or metric of the event x
μML(x)  maximum likelihood measure or metric of the event x
μY  mean value of random variable Y
π(i)  permutation
σY^2  variance of a random variable Y
A  random variable of a priori estimates
D  random variable of extrinsic estimates of bits
E  random variable of extrinsic estimates
E(i)  extrinsic estimates for bit i
histE(ξ/X = x)  histogram that represents the probability density function pE(ξ/X = x)
I{.}  interleaver permutation
IA, I(X; A)  mutual information between the random variables A and X
IE, I(X; E)  mutual information between the random variables E and X
IE = Tr(IA, Eb/N0)  extrinsic information transfer function
J(σ)  mutual information function
JMTC  number of encoders in a multiple turbo code
J^−1(IA)  inverse of the mutual information function
L(x)  metric of a given event x
L(bi)  log likelihood ratio for bit bi
L(bi/Y), L(bi/Y1^n)  conditioned log likelihood ratio given the received sequence Y, for bit bi
Lc  measure of the channel signal-to-noise ratio
LcY^(j)  channel information for a turbo decoder, jth iteration
Le(bi)  extrinsic log likelihood ratio for bit bi
Le^(j)(bi)  extrinsic log likelihood ratio for bit bi, jth iteration
L^(j)(bi/Y)  conditioned log likelihood ratio given the received sequence Y, for bit bi, jth iteration
MI × NI  size of a block interleaver
nY  random variable with zero mean value and variance σY^2
p(x)  probability distribution of a discrete random variable
p(Xj)  source marginal distribution function
pA(ξ/X = x)  probability density function of a priori estimates A for X = x
pE(ξ/X = x)  probability density function of extrinsic estimates E for X = x
pMTC  the angular coefficient of a linear interleaver
Rj(Yj/Xj)  channel transition probability
sMTC  linear shift of a linear interleaver
Si^j = {Si, Si+1, ..., Sj}  generic vector or sequence of states of a hidden Markov source
u  current state value
u′  previous state value
X = X1^n = {X1, X2, ..., Xn}  vector or sequence of n random variables
Xi^j = {Xi, Xi+1, ..., Xj}  generic vector or sequence of random variables

Chapter 8
δQij  difference of coefficients Qij^x
δRij  difference of coefficients Rij^x
A and B  sparse submatrices of the parity check matrix H (Chapter 8)
Aij^(it)  a posteriori estimate in iteration number it
d  decoded vector
d̂  estimated decoded vector
dj  symbol nodes
dc^(i)  number of symbol nodes or bits related to parity check node hi
dv^(j)  number of parity check equations in which the bit or symbol dj participates