
Principles of Communications
By: Vinh Dang Quang


Course information





Lecturer: MSc. Dang Quang Vinh
Mail:
Mobile: 0983692806
Duration: 30 hrs


Outline







Basic concepts
Information
Entropy
Joint and Conditional Entropy
Channel Representations
Channel Capacity



Basic concepts
What is Information Theory?






Information Theory: how much information
… is contained in a signal?
… can a system generate?
… can a channel transmit?
Used in many fields: Communications, Computer Science, Economics, …
Example: Barcelona 0-3 SLNA


Information


Let xj be an event with probability p(xj)
If xj occurred, we have

I(xj) = log_a (1/p(xj)) = − log_a p(xj)   units of information

The base a of the logarithm determines the unit:






a = 10 → the measure of information is the hartley
a = e → the measure of information is the nat
a = 2 → the measure of information is the bit

Example 10.1 (page 669)
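A minimal Python sketch (not part of the original slides) of the information measure above; the probability 0.125 and the function name information are arbitrary choices for illustration:

import math

def information(p, base=2):
    # I(x) = -log_a p(x): information gained when an event of probability p occurs
    return -math.log(p, base)

p = 0.125                          # example event probability (assumed)
print(information(p, 2))           # ≈ 3.0 bits
print(information(p, math.e))      # ≈ 2.08 nats
print(information(p, 10))          # ≈ 0.90 hartleys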


Entropy






H(X) = − ∑x p(x) log p(x)
Entropy = information = uncertainty
If a signal is completely predictable, it has zero entropy and no information
Entropy = average number of bits required to transmit the signal
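A short Python sketch (an illustration, not from the slides) of the entropy formula above, with the distribution given as a plain list of probabilities:

import math

def entropy(probs, base=2):
    # H(X) = -sum_x p(x) log_a p(x); zero-probability terms contribute nothing
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # a fair coin: 1 bit of uncertainty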



Entropy example 1











Random variable with uniform distribution over 32 outcomes
H(X) = − ∑ (1/32) log2 (1/32) = log2 32 = 5
# bits required = log2 32 = 5 bits!
Therefore H(X) = number of bits required to represent a random event
How many bits are needed for:
Outcome of a coin toss
“Tomorrow is a Wednesday”
“US tops Winter Olympics tally”
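A quick numerical check of this example (a sketch; only the numbers come from the slide):

import math

uniform = [1 / 32] * 32                                  # uniform distribution over 32 outcomes
print(-sum(p * math.log2(p) for p in uniform))           # ≈ 5.0 bits, i.e. log2 32
print(-sum(p * math.log2(p) for p in [0.5, 0.5]))        # a fair coin toss: ≈ 1 bit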


Entropy example 2


Horse race with 8 horses, with winning probabilities
½, ¼, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64

Entropy H(X) = 2 bits
How many bits do we need?
(a) Index each horse → log2 8 = 3 bits
(b) Assign shorter codes to horses with higher probability:
0, 10, 110, 1110, 111100, 111101, 111110, 111111
average description length = 2 bits!
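A short numerical check (sketch only) that the entropy and the average length of the code listed above both come out to 2 bits:

import math

probs = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]    # winning probabilities
codes = ["0", "10", "110", "1110", "111100", "111101", "111110", "111111"]

H = -sum(p * math.log2(p) for p in probs)                # entropy H(X)
avg_len = sum(p * len(c) for p, c in zip(probs, codes))  # expected codeword length
print(H, avg_len)                                        # both ≈ 2.0 bits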



Entropy





Need at least H(X) bits to represent X
H(X) is a lower bound on the required description length
Entropy = uncertainty of a random variable


Joint and conditional entropy
Joint entropy:
H(X,Y) = − ∑x ∑y p(x,y) log p(x,y)
Simple extension of entropy to 2 RVs

Conditional Entropy:
H(Y|X) = ∑x p(x) H(Y|X=x) = − ∑x ∑y p(x,y) log p(y|x)
“What is the uncertainty of Y if X is known?”
Easy to verify:







If X, Y independent, then H(Y|X) = H(Y)
If Y = X, then H(Y|X) = 0

H(Y|X) = the extra information needed to describe Y once X is known
Fact: H(X,Y) = H(X) + H(Y|X)
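A small sketch that checks the chain rule H(X,Y) = H(X) + H(Y|X) on a toy joint distribution (the numbers are made up purely for illustration):

import math
from collections import defaultdict

pxy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}   # assumed joint p(x,y)

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

px = defaultdict(float)                                   # marginal p(x)
for (x, y), p in pxy.items():
    px[x] += p

H_XY = H(pxy.values())                                    # joint entropy H(X,Y)
H_X = H(px.values())                                      # H(X)
# H(Y|X) = -sum_x sum_y p(x,y) log2 p(y|x), with p(y|x) = p(x,y)/p(x)
H_Y_given_X = -sum(p * math.log2(p / px[x]) for (x, y), p in pxy.items() if p > 0)

print(H_XY, H_X + H_Y_given_X)                            # the two values agree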


Mutual Information


I(X;Y) = H(X) − H(X|Y)
= reduction in the uncertainty of X due to knowledge of Y










I(X;Y) = ∑x ∑y p(x,y) log [p(x,y)/(p(x)p(y))]
“How much information about Y is contained in X?”
If X, Y are independent, then I(X;Y) = 0
If X, Y are the same, then I(X;Y) = H(X) = H(Y)

Symmetric and non-negative
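A sketch computing I(X;Y) both from the double sum above and as H(X) + H(Y) − H(X,Y), on the same made-up joint distribution used in the previous sketch:

import math
from collections import defaultdict

pxy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}   # assumed joint p(x,y)

px, py = defaultdict(float), defaultdict(float)           # marginals p(x), p(y)
for (x, y), p in pxy.items():
    px[x] += p
    py[y] += p

# I(X;Y) = sum_x sum_y p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
I_XY = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items() if p > 0)

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Equivalently, I(X;Y) = H(X) - H(X|Y) = H(X) + H(Y) - H(X,Y); always >= 0
print(I_XY, H(px.values()) + H(py.values()) - H(pxy.values()))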


Mutual Information
Relationship between entropy, joint entropy, conditional entropy and mutual information:
I(X;Y) = H(X) + H(Y) − H(X,Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)


Mutual Information







I(X;Y) is a great measure of similarity between X and Y
Widely used in image/signal processing
Medical imaging example: MI-based image registration
Why? MI is insensitive to gain and bias


