Eyal Buks
Introduction to Thermodynamics and Statistical Physics (114016) - Lecture Notes
April 13, 2011
Technion

Preface
to be written

Contents

1. The Principle of Largest Uncertainty
   1.1 Entropy in Information Theory
       1.1.1 Example - Two States System
       1.1.2 Smallest and Largest Entropy
       1.1.3 The composition property
       1.1.4 Alternative Definition of Entropy
   1.2 Largest Uncertainty Estimator
       1.2.1 Useful Relations
       1.2.2 The Free Entropy
   1.3 The Principle of Largest Uncertainty in Statistical Mechanics
       1.3.1 Microcanonical Distribution
       1.3.2 Canonical Distribution
       1.3.3 Grandcanonical Distribution
       1.3.4 Temperature and Chemical Potential
   1.4 Time Evolution of Entropy of an Isolated System
   1.5 Thermal Equilibrium
       1.5.1 Externally Applied Potential Energy
   1.6 Free Entropy and Free Energies
   1.7 Problems Set 1
   1.8 Solutions Set 1

2. Ideal Gas
   2.1 A Particle in a Box
   2.2 Gibbs Paradox
   2.3 Fermions and Bosons
       2.3.1 Fermi-Dirac Distribution
       2.3.2 Bose-Einstein Distribution
       2.3.3 Classical Limit
   2.4 Ideal Gas in the Classical Limit
       2.4.1 Pressure
       2.4.2 Useful Relations
       2.4.3 Heat Capacity
       2.4.4 Internal Degrees of Freedom
   2.5 Processes in Ideal Gas
       2.5.1 Isothermal Process
       2.5.2 Isobaric Process
       2.5.3 Isochoric Process
       2.5.4 Isentropic Process
   2.6 Carnot Heat Engine
   2.7 Limits Imposed Upon the Efficiency
   2.8 Problems Set 2
   2.9 Solutions Set 2

3. Bosonic and Fermionic Systems
   3.1 Electromagnetic Radiation
       3.1.1 Electromagnetic Cavity
       3.1.2 Partition Function
       3.1.3 Cube Cavity
       3.1.4 Average Energy
       3.1.5 Stefan-Boltzmann Radiation Law
   3.2 Phonons in Solids
       3.2.1 One Dimensional Example
       3.2.2 The 3D Case
   3.3 Fermi Gas
       3.3.1 Orbital Partition Function
       3.3.2 Partition Function of the Gas
       3.3.3 Energy and Number of Particles
       3.3.4 Example: Electrons in Metal
   3.4 Semiconductor Statistics
   3.5 Problems Set 3
   3.6 Solutions Set 3

4. Classical Limit of Statistical Mechanics
   4.1 Classical Hamiltonian
       4.1.1 Hamilton-Jacobi Equations
       4.1.2 Example
       4.1.3 Example
   4.2 Density Function
       4.2.1 Equipartition Theorem
       4.2.2 Example
   4.3 Nyquist Noise
   4.4 Problems Set 4
   4.5 Solutions Set 4

5. Exam Winter 2010 A
   5.1 Problems
   5.2 Solutions

6. Exam Winter 2010 B
   6.1 Problems
   6.2 Solutions

References

Index

1. The Principle of Largest Uncertainty
In this chapter we discuss relations between information theory and statistical mechanics. We show that the canonical and grand canonical distributions can be obtained from Shannon's principle of maximum uncertainty [1, 2, 3]. Moreover, the time evolution of the entropy of an isolated system and the H theorem are discussed.
1.1 Entropy in Information Theory
The possible states of a given system are denoted as $e_m$, where $m = 1, 2, 3, \ldots$, and the probability that state $e_m$ is occupied is denoted by $p_m$. The normalization condition reads

$$\sum_m p_m = 1 . \tag{1.1}$$

For a given probability distribution $\{p_m\}$ the entropy is defined as

$$\sigma = -\sum_m p_m \log p_m . \tag{1.2}$$

Below we show that this quantity characterizes the uncertainty in the knowledge of the state of the system.
1.1.1 Example - Two States System
Consider a system which can occupy either state $e_1$ with probability $p$, or state $e_2$ with probability $1 - p$, where $0 \le p \le 1$. The entropy is given by

$$\sigma = -p \log p - (1 - p) \log (1 - p) . \tag{1.3}$$
[Figure: the two-state entropy $\sigma = -p \log p - (1 - p) \log (1 - p)$ as a function of $p$ for $0 \le p \le 1$.]
As expected, the entropy vanishes at $p = 0$ and $p = 1$, since in both cases there is no uncertainty in what is the state which is occupied by the system. The largest uncertainty is obtained at $p = 0.5$, for which $\sigma = \log 2 \simeq 0.69$.
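As a quick numerical illustration (an addition to these notes, not part of the original text), the short Python sketch below evaluates the two-state entropy of Eq. (1.3) on a grid and confirms that the maximum is attained at $p = 0.5$ with value $\log 2$; the helper name `entropy` and the use of NumPy are illustrative choices.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy sigma = -sum_m p_m log p_m, Eq. (1.2), natural log."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # terms with p = 0 contribute nothing, cf. Eq. (1.4)
    return -np.sum(p * np.log(p))

# two-state system, Eq. (1.3)
ps = np.linspace(0.0, 1.0, 101)
sigmas = np.array([entropy([p, 1.0 - p]) for p in ps])
i_max = int(np.argmax(sigmas))
print(ps[i_max], sigmas[i_max], np.log(2))   # 0.5  0.693...  0.693...
```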
1.1.2 Smallest and Largest Entropy

Smallest value. The term $-p \log p$ in the range $0 \le p \le 1$ is plotted in the figure below. Note that the value of $-p \log p$ in the limit $p \to 0$ can be calculated using L'Hospital's rule:

$$\lim_{p \to 0} \left( -p \log p \right) = \lim_{p \to 0} \frac{\frac{\mathrm{d} (-\log p)}{\mathrm{d} p}}{\frac{\mathrm{d}}{\mathrm{d} p} \frac{1}{p}} = \lim_{p \to 0} \frac{-1/p}{-1/p^{2}} = \lim_{p \to 0} p = 0 . \tag{1.4}$$

From this figure, which shows that $-p \log p \ge 0$ in the range $0 \le p \le 1$, it is easy to infer that the smallest possible value of the entropy is zero. Moreover, since $-p \log p = 0$ iff $p = 0$ or $p = 1$, it is evident that $\sigma = 0$ iff the system occupies one of the states with probability one and all the other states with probability zero. In this case there is no uncertainty in what is the state which is occupied by the system.
[Figure: the function $-p \log p$ as a function of $p$ for $0 \le p \le 1$.]
Largest value. We seek a maximum point of the entropy $\sigma$ with respect to all probability distributions $\{p_m\}$ which satisfy the normalization condition. This constraint, which is given by Eq. (1.1), is expressed as

$$0 = g_0 (\bar{p}) = \sum_m p_m - 1 , \tag{1.5}$$

where $\bar{p}$ denotes the vector of probabilities

$$\bar{p} = (p_1, p_2, \ldots) . \tag{1.6}$$

A small change in $\sigma$ (denoted as $\delta\sigma$) due to a small change in $\bar{p}$ (denoted as $\delta\bar{p} = (\delta p_1, \delta p_2, \ldots)$) can be expressed as

$$\delta\sigma = \sum_m \frac{\partial\sigma}{\partial p_m}\, \delta p_m , \tag{1.7}$$

or in terms of the gradient of $\sigma$ (denoted as $\bar{\nabla}\sigma$) as

$$\delta\sigma = \bar{\nabla}\sigma \cdot \delta\bar{p} . \tag{1.8}$$

In addition, the variables $(p_1, p_2, \ldots)$ are subject to the constraint (1.5). Similarly to Eq. (1.8) we have

$$\delta g_0 = \bar{\nabla} g_0 \cdot \delta\bar{p} . \tag{1.9}$$
Both vectors $\bar{\nabla}\sigma$ and $\delta\bar{p}$ can be decomposed as

$$\bar{\nabla}\sigma = \left(\bar{\nabla}\sigma\right)_{\parallel} + \left(\bar{\nabla}\sigma\right)_{\perp} , \tag{1.10}$$

$$\delta\bar{p} = \left(\delta\bar{p}\right)_{\parallel} + \left(\delta\bar{p}\right)_{\perp} , \tag{1.11}$$

where $\left(\bar{\nabla}\sigma\right)_{\parallel}$ and $\left(\delta\bar{p}\right)_{\parallel}$ are parallel to $\bar{\nabla} g_0$, and where $\left(\bar{\nabla}\sigma\right)_{\perp}$ and $\left(\delta\bar{p}\right)_{\perp}$ are orthogonal to $\bar{\nabla} g_0$. Using this notation, Eq. (1.8) can be expressed as

$$\delta\sigma = \left(\bar{\nabla}\sigma\right)_{\parallel} \cdot \left(\delta\bar{p}\right)_{\parallel} + \left(\bar{\nabla}\sigma\right)_{\perp} \cdot \left(\delta\bar{p}\right)_{\perp} . \tag{1.12}$$
Given that the constraint $g_0 (\bar{p}) = 0$ is satisfied at a given point $\bar{p}$, one has $g_0 (\bar{p} + \delta\bar{p}) = 0$ to first order in $\delta\bar{p}$ provided that $\delta\bar{p}$ is orthogonal to $\bar{\nabla} g_0$, namely, provided that $\left(\delta\bar{p}\right)_{\parallel} = 0$. Thus, a stationary point (maximum or minimum or saddle point) of $\sigma$ occurs iff for every small change $\delta\bar{p}$, which is orthogonal to $\bar{\nabla} g_0$ (namely, $\delta\bar{p} \cdot \bar{\nabla} g_0 = 0$), one has $0 = \delta\sigma = \bar{\nabla}\sigma \cdot \delta\bar{p}$. As can be seen from Eq. (1.12), this condition is fulfilled only when $\left(\bar{\nabla}\sigma\right)_{\perp} = 0$, namely only when the vectors $\bar{\nabla}\sigma$ and $\bar{\nabla} g_0$ are parallel to each other. In other words, only when

$$\bar{\nabla}\sigma = \xi_0 \bar{\nabla} g_0 , \tag{1.13}$$

where $\xi_0$ is a constant. This constant is called a Lagrange multiplier. Using Eqs. (1.2) and (1.5), the condition (1.13) is expressed as

$$-\log p_m - 1 = \xi_0 . \tag{1.14}$$

Let $M$ be the number of available states. From Eq. (1.14) we find that all probabilities are equal. Thus, using Eq. (1.5), one finds that

$$p_1 = p_2 = \cdots = \frac{1}{M} . \tag{1.15}$$
After finding this stationary point it is necessary to determine whether it is a maximum or minimum or saddle point. To do this we expand $\sigma$ to second order in $\delta\bar{p}$:

$$\begin{aligned}
\sigma (\bar{p} + \delta\bar{p}) &= \exp\left(\delta\bar{p} \cdot \bar{\nabla}\right) \sigma (\bar{p}) \\
&= \left(1 + \delta\bar{p} \cdot \bar{\nabla} + \frac{\left(\delta\bar{p} \cdot \bar{\nabla}\right)^2}{2!} + \cdots\right) \sigma (\bar{p}) \\
&= \sigma (\bar{p}) + \delta\bar{p} \cdot \bar{\nabla}\sigma + \frac{\left(\delta\bar{p} \cdot \bar{\nabla}\right)^2}{2!}\, \sigma + \cdots \\
&= \sigma (\bar{p}) + \sum_m \frac{\partial\sigma}{\partial p_m}\, \delta p_m + \frac{1}{2} \sum_{m,m'} \delta p_m\, \delta p_{m'}\, \frac{\partial^2\sigma}{\partial p_m\, \partial p_{m'}} + \cdots
\end{aligned} \tag{1.16}$$
Using Eq. (1.2) one finds that

$$\frac{\partial^2\sigma}{\partial p_m\, \partial p_{m'}} = -\frac{1}{p_m}\, \delta_{m,m'} . \tag{1.17}$$

Since the probabilities $p_m$ are non-negative, one concludes that any stationary point of $\sigma$ is a local maximum point. Moreover, since only a single stationary point was found, one concludes that the entropy $\sigma$ obtains its largest value, which is denoted as $\Lambda (M)$, and which is given by

$$\Lambda (M) = \sigma \left( \frac{1}{M}, \frac{1}{M}, \ldots, \frac{1}{M} \right) = \log M , \tag{1.18}$$

for the probability distribution given by Eq. (1.15). For this probability distribution that maximizes $\sigma$, as expected, the state which is occupied by the system is most uncertain.
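The following small numerical check (again an illustrative addition, not from the original notes) samples random normalized distributions over $M$ states and confirms that their entropy never exceeds $\Lambda(M) = \log M$, which is attained by the uniform distribution of Eq. (1.15); the NumPy-based helper is the same sketch used above.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

M = 5
print(entropy(np.full(M, 1.0 / M)), np.log(M))   # both ~1.609, Eq. (1.18)

# random normalized distributions never exceed log M
for _ in range(10_000):
    p = rng.random(M)
    p /= p.sum()
    assert entropy(p) <= np.log(M) + 1e-12
```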

1.1.3 The composition property

The composition property is best illustrated using an example.

Example - A Three States System. A system can occupy one of the states $e_1$, $e_2$ or $e_3$ with probabilities $p_1$, $p_2$ and $p_3$, respectively. The uncertainty associated with this probability distribution can be estimated in two ways, directly and indirectly. Directly, it is simply given by the definition of entropy in Eq. (1.2):

$$\sigma (p_1, p_2, p_3) = -p_1 \log p_1 - p_2 \log p_2 - p_3 \log p_3 . \tag{1.19}$$
Alternatively [see Fig. 1.1], the uncertainty can be decomposed as follows: (a) the system can either occupy state $e_1$ with probability $p_1$, or not occupy state $e_1$ with probability $1 - p_1$; (b) given that the system does not occupy state $e_1$, it can either occupy state $e_2$ with probability $p_2 / (1 - p_1)$ or occupy state $e_3$ with probability $p_3 / (1 - p_1)$. Assuming that uncertainty (entropy) is additive, the total uncertainty (entropy) is given by

$$\sigma_{\mathrm{i}} = \sigma (p_1, 1 - p_1) + (1 - p_1)\, \sigma \left( \frac{p_2}{1 - p_1}, \frac{p_3}{1 - p_1} \right) . \tag{1.20}$$
The factor $(1 - p_1)$ in the second term is included since the uncertainty associated with the distinction between states $e_2$ and $e_3$ contributes only when state $e_1$ is not occupied, an event which occurs with probability $1 - p_1$. Using the definition (1.2) and the normalization condition

$$p_1 + p_2 + p_3 = 1 , \tag{1.21}$$

one finds

$$\begin{aligned}
\sigma_{\mathrm{i}} &= -p_1 \log p_1 - (1 - p_1) \log (1 - p_1) \\
&\quad + (1 - p_1) \left[ -\frac{p_2}{1 - p_1} \log \frac{p_2}{1 - p_1} - \frac{p_3}{1 - p_1} \log \frac{p_3}{1 - p_1} \right] \\
&= -p_1 \log p_1 - p_2 \log p_2 - p_3 \log p_3 - (1 - p_1 - p_2 - p_3) \log (1 - p_1) \\
&= \sigma (p_1, p_2, p_3) ,
\end{aligned} \tag{1.22}$$

that is, for this example the entropy satisfies the decomposition property.
[Figure: a two-step tree diagram; the system first either occupies $e_1$ (probability $p_1$) or does not (probability $1 - p_1$), and in the latter case it occupies $e_2$ or $e_3$ with conditional probabilities $p_2 / (1 - p_1)$ and $p_3 / (1 - p_1)$.]
Fig. 1.1. The composition property - three states system.
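A direct numerical verification of this decomposition (added here as an illustration, not taken from the notes) compares the two sides of Eq. (1.20) for an arbitrary normalized choice of $p_1, p_2, p_3$; `entropy` is the same illustrative helper as above.

```python
import numpy as np

def entropy(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p1, p2, p3 = 0.5, 0.3, 0.2                      # any normalized probabilities
direct = entropy([p1, p2, p3])                  # Eq. (1.19)
indirect = entropy([p1, 1 - p1]) \
    + (1 - p1) * entropy([p2 / (1 - p1), p3 / (1 - p1)])   # Eq. (1.20)
print(np.isclose(direct, indirect))             # True
```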
The general case. The composition property in the general case can be defined as follows. Consider a system which can occupy one of the states $\{e_1, e_2, \ldots, e_{M_0}\}$ with probabilities $q_1, q_2, \ldots, q_{M_0}$, respectively. This set of states is grouped as follows. The first group includes the first $M_1$ states $\{e_1, e_2, \ldots, e_{M_1}\}$; the second group includes the next $M_2$ states $\{e_{M_1+1}, e_{M_1+2}, \ldots, e_{M_1+M_2}\}$, etc., where $M_1 + M_2 + \cdots = M_0$. The probability that one of the states in the first group is occupied is $p_1 = q_1 + q_2 + \cdots + q_{M_1}$, the probability that one of the states in the second group is occupied is $p_2 = q_{M_1+1} + q_{M_1+2} + \cdots + q_{M_1+M_2}$, etc., where

$$p_1 + p_2 + \cdots = 1 . \tag{1.23}$$
The composition property requires that the following holds [see Fig. 1.2]:

$$\begin{aligned}
\sigma (q_1, q_2, \ldots, q_{M_0}) = {}& \sigma (p_1, p_2, \ldots) \\
&+ p_1\, \sigma \left( \frac{q_1}{p_1}, \frac{q_2}{p_1}, \ldots, \frac{q_{M_1}}{p_1} \right) \\
&+ p_2\, \sigma \left( \frac{q_{M_1+1}}{p_2}, \frac{q_{M_1+2}}{p_2}, \ldots, \frac{q_{M_1+M_2}}{p_2} \right) \\
&+ \cdots
\end{aligned} \tag{1.24}$$
Using the definition (1.2) the following holds

$$\sigma (p_1, p_2, \ldots) = -p_1 \log p_1 - p_2 \log p_2 - \cdots , \tag{1.25}$$
[Figure: a tree diagram for the general case; the states $\{e_1, \ldots, e_{M_0}\}$ are grouped into blocks of sizes $M_1, M_2, \ldots$, the groups are occupied with probabilities $p_1 = q_1 + \cdots + q_{M_1}$, $p_2 = q_{M_1+1} + \cdots + q_{M_1+M_2}$, etc., and within each group the states are occupied with conditional probabilities $q_i / p_j$.]
Fig. 1.2. The composition property - the general case.
$$\begin{aligned}
p_1\, \sigma \left( \frac{q_1}{p_1}, \frac{q_2}{p_1}, \ldots, \frac{q_{M_1}}{p_1} \right)
&= p_1 \left( -\frac{q_1}{p_1} \log \frac{q_1}{p_1} - \frac{q_2}{p_1} \log \frac{q_2}{p_1} - \cdots - \frac{q_{M_1}}{p_1} \log \frac{q_{M_1}}{p_1} \right) \\
&= -q_1 \log q_1 - q_2 \log q_2 - \cdots - q_{M_1} \log q_{M_1} + p_1 \log p_1 ,
\end{aligned} \tag{1.26}$$
$$p_2\, \sigma \left( \frac{q_{M_1+1}}{p_2}, \frac{q_{M_1+2}}{p_2}, \ldots, \frac{q_{M_1+M_2}}{p_2} \right)
= -q_{M_1+1} \log q_{M_1+1} - q_{M_1+2} \log q_{M_1+2} - \cdots - q_{M_1+M_2} \log q_{M_1+M_2} + p_2 \log p_2 , \tag{1.27}$$
etc. Thus it is evident that condition (1.24) is indeed satisfied.
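The same check extends to the general grouping of Eq. (1.24); the sketch below (an illustrative addition, with arbitrary group sizes chosen only for the example) verifies the identity numerically for a random distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# random probabilities q_1, ..., q_{M0} grouped into blocks of sizes M1 = 2, M2 = 4, M3 = 3
q = rng.random(9)
q /= q.sum()
groups = [q[0:2], q[2:6], q[6:9]]
p = np.array([g.sum() for g in groups])          # group probabilities p_1, p_2, p_3

lhs = entropy(q)                                                          # sigma(q_1, ..., q_{M0})
rhs = entropy(p) + sum(pi * entropy(g / pi) for pi, g in zip(p, groups))  # Eq. (1.24)
print(np.isclose(lhs, rhs))                                               # True
```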
1.1.4 Alternative Definition of Entropy

Following Shannon [1, 2], the entropy function $\sigma (p_1, p_2, \ldots, p_N)$ can alternatively be defined as follows:

1. $\sigma (p_1, p_2, \ldots, p_N)$ is a continuous function of its arguments $p_1, p_2, \ldots, p_N$.
2. If all probabilities are equal, namely if $p_1 = p_2 = \cdots = p_N = 1/N$, then the quantity $\Lambda (N) = \sigma (1/N, 1/N, \ldots, 1/N)$ is a monotonically increasing function of $N$.
3. The function $\sigma (p_1, p_2, \ldots, p_N)$ satisfies the composition property given by Eq. (1.24).
Exercise 1.1.1. Show that the above definition leads to the entropy given by Eq. (1.2), up to multiplication by a positive constant.
Solution 1.1.1. The 1st property allows approximating the probabilities $p_1, p_2, \ldots, p_N$ using rational numbers, namely $p_1 = M_1 / M_0$, $p_2 = M_2 / M_0$, etc., where $M_1, M_2, \ldots$ are integers and $M_0 = M_1 + M_2 + \cdots + M_N$. Using the composition property (1.24) one finds

$$\Lambda (M_0) = \sigma (p_1, p_2, \ldots, p_N) + p_1 \Lambda (M_1) + p_2 \Lambda (M_2) + \cdots \tag{1.28}$$

In particular, consider the case where $M_1 = M_2 = \cdots = M_N = K$. For this case one finds

$$\Lambda (NK) = \Lambda (N) + \Lambda (K) . \tag{1.29}$$

Taking $K = N = 1$ yields

$$\Lambda (1) = 0 . \tag{1.30}$$

Taking $N = 1 + x$ yields

$$\frac{\Lambda (K + Kx) - \Lambda (K)}{Kx} = \frac{1}{K}\, \frac{\Lambda (1 + x)}{x} . \tag{1.31}$$

Taking the limit $x \to 0$ yields

$$\frac{\mathrm{d}\Lambda}{\mathrm{d}K} = \frac{C}{K} , \tag{1.32}$$

where

$$C = \lim_{x \to 0} \frac{\Lambda (1 + x)}{x} . \tag{1.33}$$

Integrating Eq. (1.32) and using the initial condition (1.30) yields

$$\Lambda (K) = C \log K . \tag{1.34}$$
Moreover, the second property requires that $C > 0$. Choosing $C = 1$ and using Eq. (1.28) yields

$$\begin{aligned}
\sigma (p_1, p_2, \ldots, p_N) &= \Lambda (M_0) - p_1 \Lambda (M_1) - p_2 \Lambda (M_2) - \cdots \\
&= -p_1 \log \frac{M_1}{M_0} - p_2 \log \frac{M_2}{M_0} - \cdots - p_N \log \frac{M_N}{M_0} \\
&= -p_1 \log p_1 - p_2 \log p_2 - \cdots - p_N \log p_N ,
\end{aligned} \tag{1.35}$$
in agreement with the definition (1.2).
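As a small illustrative check of Eq. (1.28) (again an addition, not part of the notes), one can take rational probabilities $p_i = M_i / M_0$ and confirm numerically that $\Lambda(M_0) = \sigma(p_1, \ldots, p_N) + \sum_i p_i \Lambda(M_i)$ with $\Lambda(n) = \log n$; the integer sizes below are arbitrary examples.

```python
import numpy as np

def entropy(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

Lam = np.log                       # Lambda(n) = log n, Eq. (1.34) with C = 1

M = np.array([2, 3, 5])            # M_1, M_2, M_3
M0 = M.sum()
p = M / M0                         # rational probabilities p_i = M_i / M_0

lhs = Lam(M0)                                  # Lambda(M_0)
rhs = entropy(p) + np.sum(p * Lam(M))          # Eq. (1.28)
print(np.isclose(lhs, rhs))                    # True
```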
1.2 Largest Uncertainty Estimator
As before, the possible states of a given system are denoted as $e_m$, where $m = 1, 2, 3, \ldots$, and the probability that state $e_m$ is occupied is denoted by $p_m$. Let $X_l$ ($l = 1, 2, \ldots, L$) be a set of variables characterizing the system (e.g., energy, number of particles, etc.). Let $X_l (m)$ be the value which the variable $X_l$ takes when the system is in state $e_m$. Consider the case where the expectation values of the variables $X_l$ are given:

$$\langle X_l \rangle = \sum_m p_m X_l (m) , \tag{1.36}$$

where $l = 1, 2, \ldots, L$. However, the probability distribution $\{p_m\}$ is not given. Clearly, in the general case the knowledge of $\langle X_1 \rangle, \langle X_2 \rangle, \ldots, \langle X_L \rangle$ is not sufficient to obtain the probability distribution, because there are in general many different possibilities for choosing a probability distribution which is consistent with the constraints (1.36) and the normalization condition (1.1). For each such probability distribution the entropy can be calculated according to the definition (1.2). The probability distribution $\{p_m\}$, which is consistent with these conditions, and has the largest possible entropy is called the largest uncertainty estimator (LUE).

The LUE is found by seeking a stationary point of the entropy $\sigma$ with respect to all probability distributions $\{p_m\}$ which satisfy the normalization constraint (1.5) in addition to the constraints (1.36), which can be expressed as

$$0 = g_l (\bar{p}) = \sum_m p_m X_l (m) - \langle X_l \rangle , \tag{1.37}$$

where $l = 1, 2, \ldots, L$. To first order one has

$$\delta\sigma = \bar{\nabla}\sigma \cdot \delta\bar{p} , \tag{1.38a}$$
$$\delta g_l = \bar{\nabla} g_l \cdot \delta\bar{p} , \tag{1.38b}$$
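For readers who want to experiment, the following sketch (an illustrative addition; it assumes SciPy is available and uses a single constrained observable $X$) finds the LUE numerically by maximizing $\sigma$ subject to Eqs. (1.5) and (1.37). Consistent with the canonical distribution derived from this principle later in the chapter, the numerical maximizer has the exponential form $p_m \propto e^{-\beta X(m)}$.

```python
import numpy as np
from scipy.optimize import minimize

# states e_m carrying values X(m) of a single observable, with a prescribed mean <X>
X = np.array([0.0, 1.0, 2.0, 3.0])
X_mean = 1.2

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)          # avoid log(0) at the boundary
    return float(np.sum(p * np.log(p)))  # minimizing -sigma maximizes the entropy

constraints = (
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # normalization, Eq. (1.5)
    {"type": "eq", "fun": lambda p: p @ X - X_mean},     # constraint, Eq. (1.37)
)
p0 = np.full(X.size, 1.0 / X.size)
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * X.size, constraints=constraints)
p_lue = res.x
print(p_lue)
# the LUE has the form p_m ∝ exp(-beta X(m)); the ratio below is approximately constant (-beta)
print(np.diff(np.log(p_lue)) / np.diff(X))
```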