Eyal Buks
Introduction to Thermodynamics and Statistical Physics (114016) - Lecture Notes
April 13, 2011
Technion
Preface
to be written
Contents

1. The Principle of Largest Uncertainty
   1.1 Entropy in Information Theory
       1.1.1 Example - Two States System
       1.1.2 Smallest and Largest Entropy
       1.1.3 The Composition Property
       1.1.4 Alternative Definition of Entropy
   1.2 Largest Uncertainty Estimator
       1.2.1 Useful Relations
       1.2.2 The Free Entropy
   1.3 The Principle of Largest Uncertainty in Statistical Mechanics
       1.3.1 Microcanonical Distribution
       1.3.2 Canonical Distribution
       1.3.3 Grandcanonical Distribution
       1.3.4 Temperature and Chemical Potential
   1.4 Time Evolution of Entropy of an Isolated System
   1.5 Thermal Equilibrium
       1.5.1 Externally Applied Potential Energy
   1.6 Free Entropy and Free Energies
   1.7 Problems Set 1
   1.8 Solutions Set 1

2. Ideal Gas
   2.1 A Particle in a Box
   2.2 Gibbs Paradox
   2.3 Fermions and Bosons
       2.3.1 Fermi-Dirac Distribution
       2.3.2 Bose-Einstein Distribution
       2.3.3 Classical Limit
   2.4 Ideal Gas in the Classical Limit
       2.4.1 Pressure
       2.4.2 Useful Relations
       2.4.3 Heat Capacity
       2.4.4 Internal Degrees of Freedom
   2.5 Processes in Ideal Gas
       2.5.1 Isothermal Process
       2.5.2 Isobaric Process
       2.5.3 Isochoric Process
       2.5.4 Isentropic Process
   2.6 Carnot Heat Engine
   2.7 Limits Imposed Upon the Efficiency
   2.8 Problems Set 2
   2.9 Solutions Set 2

3. Bosonic and Fermionic Systems
   3.1 Electromagnetic Radiation
       3.1.1 Electromagnetic Cavity
       3.1.2 Partition Function
       3.1.3 Cube Cavity
       3.1.4 Average Energy
       3.1.5 Stefan-Boltzmann Radiation Law
   3.2 Phonons in Solids
       3.2.1 One Dimensional Example
       3.2.2 The 3D Case
   3.3 Fermi Gas
       3.3.1 Orbital Partition Function
       3.3.2 Partition Function of the Gas
       3.3.3 Energy and Number of Particles
       3.3.4 Example: Electrons in Metal
   3.4 Semiconductor Statistics
   3.5 Problems Set 3
   3.6 Solutions Set 3

4. Classical Limit of Statistical Mechanics
   4.1 Classical Hamiltonian
       4.1.1 Hamilton-Jacobi Equations
       4.1.2 Example
       4.1.3 Example
   4.2 Density Function
       4.2.1 Equipartition Theorem
       4.2.2 Example
   4.3 Nyquist Noise
   4.4 Problems Set 4
   4.5 Solutions Set 4

5. Exam Winter 2010 A
   5.1 Problems
   5.2 Solutions

6. Exam Winter 2010 B
   6.1 Problems
   6.2 Solutions

References

Index
1. The Principle of Largest Uncertainty
In this chapter we discuss relations between information theory and statistical mechanics. We show that the canonical and grand canonical distributions can be obtained from Shannon's principle of maximum uncertainty [1, 2, 3]. Moreover, the time evolution of the entropy of an isolated system and the H theorem are discussed.
1.1 Entropy in Information Theory
The possible states of a given system are denoted as $e_m$, where $m = 1, 2, 3, \ldots$, and the probability that state $e_m$ is occupied is denoted by $p_m$. The normalization condition reads

$$\sum_m p_m = 1 \; . \tag{1.1}$$

For a given probability distribution $\{p_m\}$ the entropy is defined as

$$\sigma = -\sum_m p_m \log p_m \; . \tag{1.2}$$

Below we show that this quantity characterizes the uncertainty in the knowledge of the state of the system.
1.1.1 Example - Two States System
Consider a system which can occupy either state $e_1$ with probability $p$, or state $e_2$ with probability $1-p$, where $0 \le p \le 1$. The entropy is given by

$$\sigma = -p \log p - (1-p)\log(1-p) \; . \tag{1.3}$$
[Plot: the entropy $\sigma = -p\log p - (1-p)\log(1-p)$ of a two-state system as a function of $p$.]
As expected, the entropy vanishes at $p=0$ and $p=1$, since in both cases there is no uncertainty about which state the system occupies. The largest uncertainty is obtained at $p=0.5$, for which $\sigma = \log 2 \simeq 0.69$.
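This behavior is easy to verify numerically. The following minimal Python sketch (NumPy assumed; the function name sigma_two_states is ours) evaluates Eq. (1.3) on a grid and confirms that the entropy vanishes at the endpoints and peaks at $p = 0.5$:

```python
import numpy as np

def sigma_two_states(p):
    """Entropy of a two-state system, Eq. (1.3)."""
    # -t*log(t) -> 0 as t -> 0 [see Eq. (1.4)], so zero-probability terms drop out
    return -sum(t * np.log(t) for t in (p, 1.0 - p) if t > 0.0)

p_grid = np.linspace(0.0, 1.0, 101)
sigmas = [sigma_two_states(p) for p in p_grid]
print(sigma_two_states(0.0), sigma_two_states(1.0))  # both vanish
i_max = int(np.argmax(sigmas))
print(p_grid[i_max], sigmas[i_max])                  # 0.5 and log(2) = 0.6931...
```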
1.1.2 Smallest and Largest Entropy
Smallest value. The term $-p\log p$ in the range $0 \le p \le 1$ is plotted in the figure below. Note that the value of $-p\log p$ in the limit $p \to 0$ can be calculated using L'Hospital's rule

$$\lim_{p\to 0}\left(-p\log p\right) = -\lim_{p\to 0}\frac{\frac{\mathrm{d}\log p}{\mathrm{d}p}}{\frac{\mathrm{d}}{\mathrm{d}p}\,\frac{1}{p}} = 0 \; . \tag{1.4}$$

From this figure, which shows that $-p\log p \ge 0$ in the range $0 \le p \le 1$, it is easy to infer that the smallest possible value of the entropy is zero. Moreover, since $-p\log p = 0$ iff $p=0$ or $p=1$, it is evident that $\sigma = 0$ iff the system occupies one of the states with probability one and all the other states with probability zero. In this case there is no uncertainty about which state the system occupies.
[Plot: $-p\log p$ as a function of $p$ in the range $0 \le p \le 1$.]
Largest value. We seek a maximum point of the entropy $\sigma$ with respect to all probability distributions $\{p_m\}$ which satisfy the normalization condition. This constraint, which is given by Eq. (1.1), is expressed as

$$0 = g_0(\bar p) = \sum_m p_m - 1 \; , \tag{1.5}$$

where $\bar p$ denotes the vector of probabilities

$$\bar p = (p_1, p_2, \ldots) \; . \tag{1.6}$$

A small change in $\sigma$ (denoted as $\delta\sigma$) due to a small change in $\bar p$ (denoted as $\delta\bar p = (\delta p_1, \delta p_2, \ldots)$) can be expressed as

$$\delta\sigma = \sum_m \frac{\partial\sigma}{\partial p_m}\,\delta p_m \; , \tag{1.7}$$

or in terms of the gradient of $\sigma$ (denoted as $\bar\nabla\sigma$) as

$$\delta\sigma = \bar\nabla\sigma\cdot\delta\bar p \; . \tag{1.8}$$

In addition the variables $(p_1, p_2, \ldots)$ are subjected to the constraint (1.5). Similarly to Eq. (1.8) we have

$$\delta g_0 = \bar\nabla g_0\cdot\delta\bar p \; . \tag{1.9}$$

Both vectors $\bar\nabla\sigma$ and $\delta\bar p$ can be decomposed as

$$\bar\nabla\sigma = \left(\bar\nabla\sigma\right)_\parallel + \left(\bar\nabla\sigma\right)_\perp \; , \tag{1.10}$$

$$\delta\bar p = \left(\delta\bar p\right)_\parallel + \left(\delta\bar p\right)_\perp \; , \tag{1.11}$$

where $\left(\bar\nabla\sigma\right)_\parallel$ and $\left(\delta\bar p\right)_\parallel$ are parallel to $\bar\nabla g_0$, and where $\left(\bar\nabla\sigma\right)_\perp$ and $\left(\delta\bar p\right)_\perp$ are orthogonal to $\bar\nabla g_0$. Using this notation Eq. (1.8) can be expressed as
$$\delta\sigma = \left(\bar\nabla\sigma\right)_\parallel\cdot\left(\delta\bar p\right)_\parallel + \left(\bar\nabla\sigma\right)_\perp\cdot\left(\delta\bar p\right)_\perp \; . \tag{1.12}$$
Given that the constraint $g_0(\bar p) = 0$ is satisfied at a given point $\bar p$, one has $g_0(\bar p + \delta\bar p) = 0$ to first order in $\delta\bar p$ provided that $\delta\bar p$ is orthogonal to $\bar\nabla g_0$, namely, provided that $\left(\delta\bar p\right)_\parallel = 0$. Thus, a stationary point (maximum, minimum or saddle point) of $\sigma$ occurs iff for every small change $\delta\bar p$ which is orthogonal to $\bar\nabla g_0$ (namely, $\delta\bar p\cdot\bar\nabla g_0 = 0$) one has $0 = \delta\sigma = \bar\nabla\sigma\cdot\delta\bar p$. As can be seen from Eq. (1.12), this condition is fulfilled only when $\left(\bar\nabla\sigma\right)_\perp = 0$, namely only when the vectors $\bar\nabla\sigma$ and $\bar\nabla g_0$ are parallel to each other. In other words, only when

$$\bar\nabla\sigma = \xi_0\,\bar\nabla g_0 \; , \tag{1.13}$$

where $\xi_0$ is a constant. This constant is called a Lagrange multiplier. Using Eqs. (1.2) and (1.5) the condition (1.13) is expressed as

$$-\log p_m - 1 = \xi_0 \; . \tag{1.14}$$
Let $M$ be the number of available states. From Eq. (1.14) we find that all probabilities are equal. Thus using Eq. (1.5), one finds that

$$p_1 = p_2 = \cdots = \frac{1}{M} \; . \tag{1.15}$$
After finding this stationary point it is necessary to determine whether it is a maximum, minimum or saddle point. To do this we expand $\sigma$ to second order in $\delta\bar p$

$$\begin{aligned}
\sigma\left(\bar p + \delta\bar p\right) &= \exp\left(\delta\bar p\cdot\bar\nabla\right)\sigma\left(\bar p\right) \\
&= \left(1 + \delta\bar p\cdot\bar\nabla + \frac{\left(\delta\bar p\cdot\bar\nabla\right)^2}{2!} + \cdots\right)\sigma\left(\bar p\right) \\
&= \sigma\left(\bar p\right) + \delta\bar p\cdot\bar\nabla\sigma + \frac{\left(\delta\bar p\cdot\bar\nabla\right)^2}{2!}\,\sigma + \cdots \\
&= \sigma\left(\bar p\right) + \sum_m\frac{\partial\sigma}{\partial p_m}\,\delta p_m + \frac{1}{2}\sum_{m,m'}\delta p_m\,\delta p_{m'}\,\frac{\partial^2\sigma}{\partial p_m\,\partial p_{m'}} + \cdots
\end{aligned} \tag{1.16}$$
Using Eq. (1.2) one finds that

$$\frac{\partial^2\sigma}{\partial p_m\,\partial p_{m'}} = -\frac{1}{p_m}\,\delta_{m,m'} \; . \tag{1.17}$$
Since the probabilities $p_m$ are non-negative one concludes that any stationary point of $\sigma$ is a local maximum point. Moreover, since only a single stationary point was found, one concludes that the entropy $\sigma$ obtains its largest value, which is denoted as $\Lambda(M)$, and which is given by
$$\Lambda(M) = \sigma\left(\frac{1}{M}, \frac{1}{M}, \ldots, \frac{1}{M}\right) = \log M \; , \tag{1.18}$$

for the probability distribution given by Eq. (1.15). For this probability distribution that maximizes $\sigma$, as expected, the state which is occupied by the system is most uncertain.
1.1.3 The Composition Property

The composition property is best illustrated using an example.
Example - A Three States System. A system can occupy one of the states $e_1$, $e_2$ or $e_3$ with probabilities $p_1$, $p_2$ and $p_3$ respectively. The uncertainty associated with this probability distribution can be estimated in two ways, directly and indirectly. Directly, it is simply given by the definition of entropy in Eq. (1.2)

$$\sigma(p_1, p_2, p_3) = -p_1\log p_1 - p_2\log p_2 - p_3\log p_3 \; . \tag{1.19}$$
Alternatively [see Fig. 1.1], the uncertainty can be decomposed as follows: (a) the system can either occupy state $e_1$ with probability $p_1$, or not occupy state $e_1$ with probability $1-p_1$; (b) given that the system does not occupy state $e_1$, it can either occupy state $e_2$ with probability $p_2/(1-p_1)$ or occupy state $e_3$ with probability $p_3/(1-p_1)$. Assuming that uncertainty (entropy) is additive, the total uncertainty (entropy) is given by

$$\sigma_i = \sigma(p_1, 1-p_1) + (1-p_1)\,\sigma\left(\frac{p_2}{1-p_1}, \frac{p_3}{1-p_1}\right) \; . \tag{1.20}$$

The factor $(1-p_1)$ in the second term is included since the uncertainty associated with the distinction between states $e_2$ and $e_3$ contributes only when state $e_1$ is not occupied, an event which occurs with probability $1-p_1$. Using the definition (1.2) and the normalization condition

$$p_1 + p_2 + p_3 = 1 \; , \tag{1.21}$$
one finds

$$\begin{aligned}
\sigma_i &= -p_1\log p_1 - (1-p_1)\log(1-p_1) \\
&\quad + (1-p_1)\left[-\frac{p_2}{1-p_1}\log\frac{p_2}{1-p_1} - \frac{p_3}{1-p_1}\log\frac{p_3}{1-p_1}\right] \\
&= -p_1\log p_1 - p_2\log p_2 - p_3\log p_3 - (1-p_1-p_2-p_3)\log(1-p_1) \\
&= \sigma(p_1, p_2, p_3) \; ,
\end{aligned} \tag{1.22}$$

that is, for this example the entropy satisfies the decomposition property.
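The equality (1.22) can also be checked numerically for any normalized choice of $p_1$, $p_2$, $p_3$. A short Python sketch (NumPy assumed; the helper entropy is ours):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p1, p2, p3 = 0.5, 0.3, 0.2            # any normalized distribution works here

direct = entropy([p1, p2, p3])        # Eq. (1.19)
indirect = (entropy([p1, 1 - p1])     # Eq. (1.20): first split off state e1,
            + (1 - p1) * entropy([p2 / (1 - p1), p3 / (1 - p1)]))
print(direct, indirect)               # the two estimates agree, Eq. (1.22)
```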
[Fig. 1.1: a two-level decision tree. The system first either occupies $e_1$ (probability $p_1$) or one of $\{e_2, e_3\}$ (probability $1-p_1$); in the latter case it occupies $e_2$ or $e_3$ with conditional probabilities $p_2/(1-p_1)$ and $p_3/(1-p_1)$.]

Fig. 1.1. The composition property - three states system.
The general case. The composition property in the general case can be defined as follows. Consider a system which can occupy one of the states $\{e_1, e_2, \ldots, e_{M_0}\}$ with probabilities $q_1, q_2, \ldots, q_{M_0}$ respectively. This set of states is grouped as follows. The first group includes the first $M_1$ states $\{e_1, e_2, \ldots, e_{M_1}\}$; the second group includes the next $M_2$ states $\{e_{M_1+1}, e_{M_1+2}, \ldots, e_{M_1+M_2}\}$, etc., where $M_1 + M_2 + \cdots = M_0$. The probability that one of the states in the first group is occupied is $p_1 = q_1 + q_2 + \cdots + q_{M_1}$, the probability that one of the states in the second group is occupied is $p_2 = q_{M_1+1} + q_{M_1+2} + \cdots + q_{M_1+M_2}$, etc., where

$$p_1 + p_2 + \cdots = 1 \; . \tag{1.23}$$
The composition property requires that the following holds [see Fig. 1.2]

$$\begin{aligned}
\sigma(q_1, q_2, \ldots, q_{M_0}) = \sigma(p_1, p_2, \ldots) &+ p_1\,\sigma\left(\frac{q_1}{p_1}, \frac{q_2}{p_1}, \ldots, \frac{q_{M_1}}{p_1}\right) \\
&+ p_2\,\sigma\left(\frac{q_{M_1+1}}{p_2}, \frac{q_{M_1+2}}{p_2}, \ldots, \frac{q_{M_1+M_2}}{p_2}\right) + \cdots
\end{aligned} \tag{1.24}$$

Using the definition (1.2) the following holds

$$\sigma(p_1, p_2, \ldots) = -p_1\log p_1 - p_2\log p_2 - \cdots \; , \tag{1.25}$$
[Fig. 1.2: the decision tree for the general case. The system first occupies one of the groups $\{e_1, \ldots, e_{M_1}\}$, $\{e_{M_1+1}, \ldots, e_{M_1+M_2}\}$, ..., with probabilities $p_1 = q_1 + q_2 + \cdots + q_{M_1}$, $p_2 = q_{M_1+1} + \cdots + q_{M_1+M_2}$, ...; within each group the individual states are occupied with conditional probabilities $q_1/p_1, \ldots, q_{M_1}/p_1$, $q_{M_1+1}/p_2, \ldots, q_{M_1+M_2}/p_2$, etc.]

Fig. 1.2. The composition property - the general case.
$$\begin{aligned}
p_1\,\sigma\left(\frac{q_1}{p_1}, \frac{q_2}{p_1}, \ldots, \frac{q_{M_1}}{p_1}\right) &= p_1\left(-\frac{q_1}{p_1}\log\frac{q_1}{p_1} - \frac{q_2}{p_1}\log\frac{q_2}{p_1} - \cdots - \frac{q_{M_1}}{p_1}\log\frac{q_{M_1}}{p_1}\right) \\
&= -q_1\log q_1 - q_2\log q_2 - \cdots - q_{M_1}\log q_{M_1} + p_1\log p_1 \; ,
\end{aligned} \tag{1.26}$$
$$p_2\,\sigma\left(\frac{q_{M_1+1}}{p_2}, \frac{q_{M_1+2}}{p_2}, \ldots, \frac{q_{M_1+M_2}}{p_2}\right) = -q_{M_1+1}\log q_{M_1+1} - q_{M_1+2}\log q_{M_1+2} - \cdots - q_{M_1+M_2}\log q_{M_1+M_2} + p_2\log p_2 \; , \tag{1.27}$$

etc.; thus it is evident that condition (1.24) is indeed satisfied.
1.1.4 Alternative Definition of Entropy

Following Shannon [1, 2], the entropy function $\sigma(p_1, p_2, \ldots, p_N)$ can be alternatively defined as follows:

1. $\sigma(p_1, p_2, \ldots, p_N)$ is a continuous function of its arguments $p_1, p_2, \ldots, p_N$.
2. If all probabilities are equal, namely if $p_1 = p_2 = \cdots = p_N = 1/N$, then the quantity $\Lambda(N) = \sigma(1/N, 1/N, \ldots, 1/N)$ is a monotonic increasing function of $N$.
3. The function $\sigma(p_1, p_2, \ldots, p_N)$ satisfies the composition property given by Eq. (1.24).
Exercise 1.1.1. Show that the above definition leads to the entropy given by Eq. (1.2) up to multiplication by a positive constant.

Solution 1.1.1. The 1st property allows approximating the probabilities $p_1, p_2, \ldots, p_N$ using rational numbers, namely $p_1 = M_1/M_0$, $p_2 = M_2/M_0$, etc., where $M_1, M_2, \ldots$ are integers and $M_0 = M_1 + M_2 + \cdots + M_N$. Using the composition property (1.24) one finds

$$\Lambda(M_0) = \sigma(p_1, p_2, \ldots, p_N) + p_1\Lambda(M_1) + p_2\Lambda(M_2) + \cdots \tag{1.28}$$

In particular, consider the case where $M_1 = M_2 = \cdots = M_N = K$. For this case one finds

$$\Lambda(NK) = \Lambda(N) + \Lambda(K) \; . \tag{1.29}$$

Taking $K = N = 1$ yields

$$\Lambda(1) = 0 \; . \tag{1.30}$$

Taking $N = 1 + x$ yields

$$\frac{\Lambda(K + Kx) - \Lambda(K)}{Kx} = \frac{1}{K}\,\frac{\Lambda(1+x)}{x} \; . \tag{1.31}$$

Taking the limit $x \to 0$ yields

$$\frac{\mathrm{d}\Lambda}{\mathrm{d}K} = \frac{C}{K} \; , \tag{1.32}$$

where

$$C = \lim_{x\to 0}\frac{\Lambda(1+x)}{x} \; . \tag{1.33}$$

Integrating Eq. (1.32) and using the initial condition (1.30) yields

$$\Lambda(K) = C\log K \; . \tag{1.34}$$
Moreover, the second property requires that $C > 0$. Choosing $C = 1$ and using Eq. (1.28) yields

$$\begin{aligned}
\sigma(p_1, p_2, \ldots, p_N) &= \Lambda(M_0) - p_1\Lambda(M_1) - p_2\Lambda(M_2) - \cdots \\
&= -p_1\log\frac{M_1}{M_0} - p_2\log\frac{M_2}{M_0} - \cdots - p_N\log\frac{M_N}{M_0} \\
&= -p_1\log p_1 - p_2\log p_2 - \cdots - p_N\log p_N \; ,
\end{aligned} \tag{1.35}$$

in agreement with the definition (1.2).
1.2 Largest Uncertainty Estimator
As before, the possible states of a given system are denoted as $e_m$, where $m = 1, 2, 3, \ldots$, and the probability that state $e_m$ is occupied is denoted by $p_m$. Let $X_l$ ($l = 1, 2, \ldots, L$) be a set of variables characterizing the system (e.g., energy, number of particles, etc.). Let $X_l(m)$ be the value which the variable $X_l$ takes when the system is in state $e_m$. Consider the case where the expectation values of the variables $X_l$ are given

$$\langle X_l\rangle = \sum_m p_m X_l(m) \; , \tag{1.36}$$

where $l = 1, 2, \ldots, L$. However, the probability distribution $\{p_m\}$ is not given. Clearly, in the general case the knowledge of $\langle X_1\rangle, \langle X_2\rangle, \ldots, \langle X_L\rangle$ is not sufficient to determine the probability distribution, because there are in general many different possibilities for choosing a probability distribution which is consistent with the constraints (1.36) and the normalization condition (1.1). For each such probability distribution the entropy can be calculated according to the definition (1.2). The probability distribution $\{p_m\}$ which is consistent with these conditions, and has the largest possible entropy, is called the largest uncertainty estimator (LUE).

The LUE is found by seeking a stationary point of the entropy $\sigma$ with respect to all probability distributions $\{p_m\}$ which satisfy the normalization constraint (1.5) in addition to the constraints (1.36), which can be expressed as

$$0 = g_l(\bar p) = \sum_m p_m X_l(m) - \langle X_l\rangle \; , \tag{1.37}$$

where $l = 1, 2, \ldots, L$. To first order one has

$$\delta\sigma = \bar\nabla\sigma\cdot\delta\bar p \; , \tag{1.38a}$$

$$\delta g_l = \bar\nabla g_l\cdot\delta\bar p \; , \tag{1.38b}$$
where $l = 0, 1, 2, \ldots, L$. A stationary point of $\sigma$ occurs iff for every small change $\delta\bar p$, which is orthogonal to all vectors $\bar\nabla g_0, \bar\nabla g_1, \bar\nabla g_2, \ldots, \bar\nabla g_L$, one has

$$0 = \delta\sigma = \bar\nabla\sigma\cdot\delta\bar p \; . \tag{1.39}$$

This condition is fulfilled only when the vector $\bar\nabla\sigma$ belongs to the subspace spanned by the vectors $\left\{\bar\nabla g_0, \bar\nabla g_1, \bar\nabla g_2, \ldots, \bar\nabla g_L\right\}$ [see also the discussion below Eq. (1.12) above]. In other words, only when

$$\bar\nabla\sigma = \xi_0\bar\nabla g_0 + \xi_1\bar\nabla g_1 + \xi_2\bar\nabla g_2 + \cdots + \xi_L\bar\nabla g_L \; , \tag{1.40}$$

where the numbers $\xi_0, \xi_1, \ldots, \xi_L$, which are called Lagrange multipliers, are constants. Using Eqs. (1.2), (1.5) and (1.37) the condition (1.40) can be expressed as

$$-\log p_m - 1 = \xi_0 + \sum_{l=1}^{L}\xi_l X_l(m) \; . \tag{1.41}$$

From Eq. (1.41) one obtains

$$p_m = \exp(-1-\xi_0)\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right) \; . \tag{1.42}$$

The Lagrange multipliers $\xi_0, \xi_1, \ldots, \xi_L$ can be determined from Eqs. (1.5) and (1.37)

$$1 = \sum_m p_m = \exp(-1-\xi_0)\sum_m\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right) \; , \tag{1.43}$$

$$\langle X_l\rangle = \sum_m p_m X_l(m) = \exp(-1-\xi_0)\sum_m\exp\left(-\sum_{l'=1}^{L}\xi_{l'} X_{l'}(m)\right)X_l(m) \; . \tag{1.44}$$

Using Eqs. (1.42) and (1.43) one finds

$$p_m = \frac{\exp\left(-\sum\limits_{l=1}^{L}\xi_l X_l(m)\right)}{\sum\limits_{m'}\exp\left(-\sum\limits_{l=1}^{L}\xi_l X_l(m')\right)} \; . \tag{1.45}$$

In terms of the partition function $Z$, which is defined as
$$Z = \sum_m\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right) \; , \tag{1.46}$$

one finds

$$p_m = \frac{1}{Z}\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right) \; . \tag{1.47}$$

Using the same arguments as in section 1.1.2 above [see Eq. (1.16)] it is easy to show that at the stationary point that occurs for the probability distribution given by Eq. (1.47) the entropy obtains its largest value.
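As an illustration of Eqs. (1.45)-(1.47) with a single variable $X$, consider the standard example of a six-state system (a die, $X(m) = m$) whose average is constrained to $\langle X\rangle = 4.5$ instead of the unconstrained value $3.5$. The single Lagrange multiplier $\xi$ is fixed by the constraint (1.37); the sketch below (Python with NumPy and SciPy assumed; all names are ours) solves for it numerically:

```python
import numpy as np
from scipy.optimize import brentq

X = np.arange(1, 7)        # X(m) = m for the six faces of a die
target = 4.5               # the given expectation value <X>

def mean_X(xi):
    w = np.exp(-xi * X)    # unnormalized weights of Eq. (1.42)
    return (w / w.sum()) @ X

# impose the constraint (1.37) by solving <X>(xi) = target for xi
xi = brentq(lambda s: mean_X(s) - target, -5.0, 5.0)
p = np.exp(-xi * X)
p /= p.sum()               # normalization by Z, Eq. (1.47)
print(xi)                  # xi < 0 here: larger faces must be favored
print(p)                   # the LUE probability distribution
print(p @ X)               # reproduces the constraint, 4.5
```

The resulting $p_m$ grows monotonically with $m$, as expected: among all distributions with the given average, the exponential form (1.47) is the least committal one.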
1.2.1 Useful Relations
The expectation value $\langle X_l\rangle$ can be expressed as

$$\langle X_l\rangle = \sum_m p_m X_l(m) = \frac{1}{Z}\sum_m\exp\left(-\sum_{l'=1}^{L}\xi_{l'} X_{l'}(m)\right)X_l(m) = -\frac{1}{Z}\frac{\partial Z}{\partial\xi_l} = -\frac{\partial\log Z}{\partial\xi_l} \; . \tag{1.48}$$

Similarly, $\left\langle X_l^2\right\rangle$ can be expressed as

$$\left\langle X_l^2\right\rangle = \sum_m p_m X_l^2(m) = \frac{1}{Z}\sum_m\exp\left(-\sum_{l'=1}^{L}\xi_{l'} X_{l'}(m)\right)X_l^2(m) = \frac{1}{Z}\frac{\partial^2 Z}{\partial\xi_l^2} \; . \tag{1.49}$$

Using Eqs. (1.48) and (1.49) one finds that the variance of the variable $X_l$ is given by

$$\left\langle\left(\Delta X_l\right)^2\right\rangle = \left\langle\left(X_l - \langle X_l\rangle\right)^2\right\rangle = \frac{1}{Z}\frac{\partial^2 Z}{\partial\xi_l^2} - \left(\frac{1}{Z}\frac{\partial Z}{\partial\xi_l}\right)^2 \; . \tag{1.50}$$

However, using the following identity

2
log Z
∂ξ
2
l
=

∂ξ
l
1
Z
∂Z
∂ξ
l
=
1
Z

2

Z
∂ξ
2
l

µ
1
Z
∂Z
∂ξ
l

2
, (1.51)
one finds
D
(∆X
l
)
2
E
=

2
log Z
∂ξ
2
l
. (1.52)
Note that the above results Eqs. (1.4 8) and (1.52) are valid only when Z is

expressed as a function of the the Lagrange multipliers, namely
Z = Z (ξ
1

2
, , ξ
L
) . (1.53)
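Both identities can be confirmed numerically by comparing direct averages computed from Eq. (1.47) with finite-difference derivatives of $\log Z$. A minimal Python sketch (NumPy assumed; all names are ours):

```python
import numpy as np

X = np.array([0.0, 1.0, 3.0])      # values X(m) taken by a single variable X
xi = 0.7                           # an arbitrary value of the multiplier

def logZ(s):
    return np.log(np.sum(np.exp(-s * X)))   # Eq. (1.46) with L = 1

p = np.exp(-xi * X - logZ(xi))     # Eq. (1.47)
mean = p @ X
var = p @ X**2 - mean**2

h = 1e-4                           # finite-difference step
d1 = (logZ(xi + h) - logZ(xi - h)) / (2 * h)
d2 = (logZ(xi + h) - 2 * logZ(xi) + logZ(xi - h)) / h**2
print(mean, -d1)                   # Eq. (1.48): <X> = -d(log Z)/d(xi)
print(var, d2)                     # Eq. (1.52): variance = d^2(log Z)/d(xi)^2
```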
Using the definition of entropy (1.2) and Eq. (1.47) one finds

$$\begin{aligned}
\sigma &= -\sum_m p_m\log p_m \\
&= -\sum_m p_m\log\left(\frac{1}{Z}\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right)\right) \\
&= \sum_m p_m\left(\log Z + \sum_{l=1}^{L}\xi_l X_l(m)\right) \\
&= \log Z + \sum_{l=1}^{L}\xi_l\sum_m p_m X_l(m) \; ,
\end{aligned} \tag{1.54}$$

thus

$$\sigma = \log Z + \sum_{l=1}^{L}\xi_l\langle X_l\rangle \; . \tag{1.55}$$
Using the above relations one can also evaluate the partial derivative of the entropy $\sigma$ when it is expressed as a function of the expectation values, namely

$$\sigma = \sigma(\langle X_1\rangle, \langle X_2\rangle, \ldots, \langle X_L\rangle) \; . \tag{1.56}$$

Using Eq. (1.55) one has

$$\begin{aligned}
\frac{\partial\sigma}{\partial\langle X_l\rangle} &= \frac{\partial\log Z}{\partial\langle X_l\rangle} + \sum_{l'=1}^{L}\langle X_{l'}\rangle\frac{\partial\xi_{l'}}{\partial\langle X_l\rangle} + \sum_{l'=1}^{L}\xi_{l'}\frac{\partial\langle X_{l'}\rangle}{\partial\langle X_l\rangle} \\
&= \frac{\partial\log Z}{\partial\langle X_l\rangle} + \sum_{l'=1}^{L}\langle X_{l'}\rangle\frac{\partial\xi_{l'}}{\partial\langle X_l\rangle} + \xi_l \\
&= \sum_{l'=1}^{L}\frac{\partial\log Z}{\partial\xi_{l'}}\frac{\partial\xi_{l'}}{\partial\langle X_l\rangle} + \sum_{l'=1}^{L}\langle X_{l'}\rangle\frac{\partial\xi_{l'}}{\partial\langle X_l\rangle} + \xi_l \; ,
\end{aligned} \tag{1.57}$$
thus using Eq. (1.48) one finds

$$\frac{\partial\sigma}{\partial\langle X_l\rangle} = \xi_l \; . \tag{1.58}$$
1.2.2 The Free Entropy

The free entropy $\sigma_F$ is defined as the term $\log Z$ in Eq. (1.54)

$$\begin{aligned}
\sigma_F &= \log Z = \sigma - \sum_{l=1}^{L}\xi_l\sum_m p_m X_l(m) \\
&= -\sum_m p_m\log p_m - \sum_{l=1}^{L}\xi_l\sum_m p_m X_l(m) \; .
\end{aligned} \tag{1.59}$$

The free entropy is commonly expressed as a function of the Lagrange multipliers

$$\sigma_F = \sigma_F(\xi_1, \xi_2, \ldots, \xi_L) \; . \tag{1.60}$$

We have seen above that the LUE maximizes $\sigma$ for given values of the expectation values $\langle X_1\rangle, \langle X_2\rangle, \ldots, \langle X_L\rangle$. We show below that a similar result can be obtained for the free entropy $\sigma_F$ with respect to given values of the Lagrange multipliers.

Claim. The LUE maximizes $\sigma_F$ for given values of the Lagrange multipliers $\xi_1, \xi_2, \ldots, \xi_L$.
Proof. As before, the normalization condition is expressed as

$$0 = g_0(\bar p) = \sum_m p_m - 1 \; . \tag{1.61}$$

At a stationary point of $\sigma_F$, as we have seen previously, the following holds

$$\bar\nabla\sigma_F = \eta\,\bar\nabla g_0 \; , \tag{1.62}$$

where $\eta$ is a Lagrange multiplier. Thus

$$-(\log p_m + 1) - \sum_{l=1}^{L}\xi_l X_l(m) = \eta \; , \tag{1.63}$$

or

$$p_m = \exp(-\eta-1)\exp\left(-\sum_{l=1}^{L}\xi_l X_l(m)\right) \; . \tag{1.64}$$
This result is the same as the one given by Eq. (1.42). Taking into account the normalization condition (1.61) one obtains the same expression for $p_m$ as the one given by Eq. (1.47). Namely, the stationary point of $\sigma_F$ corresponds to the LUE probability distribution. Since

$$\frac{\partial^2\sigma_F}{\partial p_m\,\partial p_{m'}} = -\frac{1}{p_m}\,\delta_{m,m'} < 0 \; , \tag{1.65}$$

one concludes that this stationary point is a maximum point [see Eq. (1.16)].
1.3 The Principle of Largest Uncertainty in Statistical Mechanics

The energy and number of particles of state $e_m$ are denoted by $U(m)$ and $N(m)$ respectively. The probability that state $e_m$ is occupied is denoted as $p_m$. We consider below three cases (see Table 1.1). In the first case (microcanonical distribution) the system is isolated and its total energy $U$ and number of particles $N$ are constrained, that is, for all accessible states $U(m) = U$ and $N(m) = N$. In the second case (canonical distribution) the system is allowed to exchange energy with the environment, and we assume that its average energy $\langle U\rangle$ is given. However, its number of particles is constrained, that is, $N(m) = N$. In the third case (grandcanonical distribution) the system is allowed to exchange both energy and particles with the environment, and we assume that both the average energy $\langle U\rangle$ and the average number of particles $\langle N\rangle$ are given. However, in all cases, the probability distribution $\{p_m\}$ is not given.

Table 1.1. The microcanonical, canonical and grandcanonical distributions.

  distribution                   energy                       number of particles
  microcanonical distribution    constrained: U(m) = U        constrained: N(m) = N
  canonical distribution         average ⟨U⟩ is given         constrained: N(m) = N
  grandcanonical distribution    average ⟨U⟩ is given         average ⟨N⟩ is given

According to the principle of largest uncertainty in statistical mechanics the LUE is employed to estimate the probability distribution $\{p_m\}$; namely, we will seek a probability distribution which is consistent with the normalization condition (1.1) and with the given expectation values (energy, in the second case, and both energy and number of particles, in the third case), and which maximizes the entropy.
1.3.1 Microcanonical Distribution

In this case no expectation values are given. Thus we seek a probability distribution which is consistent with the normalization condition (1.1), and which maximizes the entropy. The desired probability distribution is

$$p_1 = p_2 = \cdots = 1/M \; , \tag{1.66}$$

where $M$ is the number of accessible states of the system [see also Eq. (1.18)]. Using Eq. (1.2) the entropy for this case is given by

$$\sigma = \log M \; . \tag{1.67}$$
1.3.2 Canonical Distribution

Using Eq. (1.47) one finds that the probability distribution is given by

$$p_m = \frac{1}{Z_c}\exp(-\beta U(m)) \; , \tag{1.68}$$

where $\beta$ is the Lagrange multiplier associated with the given expectation value $\langle U\rangle$, and the partition function is given by

$$Z_c = \sum_m\exp(-\beta U(m)) \; . \tag{1.69}$$

The term $\exp(-\beta U(m))$ is called the Boltzmann factor. Moreover, Eq. (1.48) yields

$$\langle U\rangle = -\frac{\partial\log Z_c}{\partial\beta} \; , \tag{1.70}$$

Eq. (1.52) yields

$$\left\langle(\Delta U)^2\right\rangle = \frac{\partial^2\log Z_c}{\partial\beta^2} \; , \tag{1.71}$$

and Eq. (1.55) yields

$$\sigma = \log Z_c + \beta\langle U\rangle \; . \tag{1.72}$$

Using Eq. (1.58) one can express the Lagrange multiplier $\beta$ as

$$\beta = \frac{\partial\sigma}{\partial U} \; . \tag{1.73a}$$

The temperature $\tau = 1/\beta$ is defined by

$$\frac{1}{\tau} = \beta \; . \tag{1.74}$$
Exercise 1.3.1. Consider a system that can be in one of two states having energies $\pm\varepsilon/2$. Calculate the average energy $\langle U\rangle$ and the variance $\left\langle(\Delta U)^2\right\rangle$ in thermal equilibrium at temperature $\tau$.

Solution: The partition function is given by Eq. (1.69)

$$Z_c = \exp\left(\frac{\beta\varepsilon}{2}\right) + \exp\left(-\frac{\beta\varepsilon}{2}\right) = 2\cosh\left(\frac{\beta\varepsilon}{2}\right) \; , \tag{1.75}$$

thus using Eqs. (1.70) and (1.71) one finds

$$\langle U\rangle = -\frac{\varepsilon}{2}\tanh\left(\frac{\beta\varepsilon}{2}\right) \; , \tag{1.76}$$

and

$$\left\langle(\Delta U)^2\right\rangle = \left(\frac{\varepsilon}{2}\right)^2\frac{1}{\cosh^2\frac{\beta\varepsilon}{2}} \; , \tag{1.77}$$

where $\beta = 1/\tau$.

[Plot: $-\tanh(1/x)$ as a function of $x$, showing $\langle U\rangle$ in units of $\varepsilon/2$ versus $x = 2\tau/\varepsilon$.]
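The closed forms (1.76) and (1.77) can be checked against a direct evaluation of the canonical distribution. A minimal Python sketch (NumPy assumed; variable names are ours):

```python
import numpy as np

eps, tau = 1.0, 0.4
beta = 1.0 / tau
U = np.array([+eps / 2, -eps / 2])     # the two energy levels

p = np.exp(-beta * U)
p /= p.sum()                           # canonical distribution, Eq. (1.68)

U_mean = p @ U
U_var = p @ U**2 - U_mean**2
print(U_mean, -(eps / 2) * np.tanh(beta * eps / 2))        # Eq. (1.76)
print(U_var, (eps / 2)**2 / np.cosh(beta * eps / 2)**2)    # Eq. (1.77)
```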
1.3.3 Grandcanonical Distribution

Using Eq. (1.47) one finds that the probability distribution is given by

$$p_m = \frac{1}{Z_{gc}}\exp(-\beta U(m) - \eta N(m)) \; , \tag{1.78}$$

where $\beta$ and $\eta$ are the Lagrange multipliers associated with the given expectation values $\langle U\rangle$ and $\langle N\rangle$ respectively, and the partition function is given by

$$Z_{gc} = \sum_m\exp(-\beta U(m) - \eta N(m)) \; . \tag{1.79}$$

The term $\exp(-\beta U(m) - \eta N(m))$ is called the Gibbs factor. Moreover, Eq. (1.48) yields

$$\langle U\rangle = -\left(\frac{\partial\log Z_{gc}}{\partial\beta}\right)_\eta \; , \tag{1.80}$$

$$\langle N\rangle = -\left(\frac{\partial\log Z_{gc}}{\partial\eta}\right)_\beta \; , \tag{1.81}$$

Eq. (1.52) yields

$$\left\langle(\Delta U)^2\right\rangle = \left(\frac{\partial^2\log Z_{gc}}{\partial\beta^2}\right)_\eta \; , \tag{1.82}$$

$$\left\langle(\Delta N)^2\right\rangle = \left(\frac{\partial^2\log Z_{gc}}{\partial\eta^2}\right)_\beta \; , \tag{1.83}$$

and Eq. (1.55) yields

$$\sigma = \log Z_{gc} + \beta\langle U\rangle + \eta\langle N\rangle \; . \tag{1.84}$$
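As a minimal illustration of Eqs. (1.79) and (1.81), consider a single orbital that is either empty, $(U, N) = (0, 0)$, or occupied by one particle, $(U, N) = (\varepsilon, 1)$. For this system $\langle N\rangle = 1/\left[\exp(\beta\varepsilon + \eta) + 1\right]$, a form that reappears as the Fermi-Dirac distribution in section 2.3.1. The Python sketch below (NumPy assumed; all names are ours) confirms Eq. (1.81) by finite differences:

```python
import numpy as np

eps, beta, eta = 1.0, 2.0, -1.5    # orbital energy, 1/temperature, multiplier

def logZgc(e):
    # two accessible states: (U, N) = (0, 0) and (eps, 1), see Eq. (1.79)
    return np.log(1.0 + np.exp(-beta * eps - e))

# direct average of N over the two states, using the Gibbs factors
N_mean = np.exp(-beta * eps - eta) / (1.0 + np.exp(-beta * eps - eta))

h = 1e-6
d1 = (logZgc(eta + h) - logZgc(eta - h)) / (2 * h)
print(N_mean, -d1)                 # Eq. (1.81): <N> = -(d log Zgc / d eta)
```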
1.3.4 Temperature and Chemical Potential

Probability distributions in statistical mechanics of macroscopic parameters are typically extremely sharp and narrow. Consequently, in many cases no distinction is made between a parameter and its expectation value. That is, the expression for the entropy in Eq. (1.72) can be rewritten as

$$\sigma = \log Z_c + \beta U \; , \tag{1.85}$$

and the one in Eq. (1.84) as

$$\sigma = \log Z_{gc} + \beta U + \eta N \; . \tag{1.86}$$

Using Eq. (1.58) one can express the Lagrange multipliers $\beta$ and $\eta$ as

$$\beta = \left(\frac{\partial\sigma}{\partial U}\right)_N \; , \tag{1.87}$$

$$\eta = \left(\frac{\partial\sigma}{\partial N}\right)_U \; . \tag{1.88}$$

The chemical potential $\mu$ is defined as

$$\mu = -\tau\eta \; . \tag{1.89}$$

In the definition (1.2) the entropy $\sigma$ is dimensionless. Historically, the entropy was defined as

$$S = k_B\sigma \; , \tag{1.90}$$
where $k_B \simeq 1.38\times10^{-23}\,\mathrm{J\,K^{-1}}$ is the Boltzmann constant.