Applications of Random Matrices in Physics
NATO Science Series

A Series presenting the results of scientific meetings supported under the NATO Science Programme.

The Series is published by IOS Press, Amsterdam, and Springer.

Sub-Series
I.   Life and Behavioural Sciences        IOS Press
II.  Mathematics, Physics and Chemistry   Springer
III. Computer and Systems Science         IOS Press
IV.  Earth and Environmental Sciences     Springer

The NATO Science Series continues the series of books published formerly as the NATO ASI Series.

Advanced Study Institutes are high-level tutorial courses offering in-depth study of latest advances in a field.
Advanced Research Workshops are expert meetings aimed at critical assessment of a field, and identification of directions for future action.

As a consequence of the restructuring of the NATO Science Programme in 1999, the NATO Science Series was re-organized to the four sub-series noted above. Please consult the following web sites for information on previous volumes published in the Series.

http://www.nato.int/science
http://www.iospress.nl

The NATO Science Programme offers support for collaboration in civil science between scientists of countries of the Euro-Atlantic Partnership Council. The types of scientific meeting generally supported are "Advanced Study Institutes" and "Advanced Research Workshops", and the NATO Science Series collects together the results of these meetings. The meetings are co-organized by scientists from NATO countries and scientists from NATO's Partner countries (countries of the CIS and Central and Eastern Europe), in conjunction with the NATO Public Diplomacy Division.

Springer

Series II: Mathematics, Physics and Chemistry – Vol. 221
Applications of Random Matrices
in Physics
edited by
Édouard Brézin
Laboratoire de Physique Théorique de l'École Normale Supérieure, Paris, France

Vladimir Kazakov
Laboratoire de Physique Théorique, École Normale Supérieure
and Université Paris-VI, Paris, France

Didina Serban
Service de Physique Théorique, CEA Saclay, Gif-sur-Yvette Cedex, France

Paul Wiegmann
James Franck Institute, University of Chicago, Chicago, IL, U.S.A.

and

Anton Zabrodin
Institute of Biochemical Physics, Moscow, Russia
and ITEP, Moscow, Russia
A C.I.P. Catalogue record for this book is available from the Library of Congress.

ISBN-10 1-4020-4530-1 (PB)
ISBN-13 978-1-4020-4530-1 (PB)
ISBN-10 1-4020-4529-8 (HB)
ISBN-13 978-1-4020-4529-5 (HB)
ISBN-10 1-4020-4531-X (e-book)
ISBN-13 978-1-4020-4531-8 (e-book)

Proceedings of the NATO Advanced Study Institute on Applications of Random Matrices in Physics, Les Houches, France, 6-25 June 2004

Published by Springer,
P.O. Box 17, 3300 AA Dordrecht, The Netherlands.
www.springer.com

Printed on acid-free paper

All Rights Reserved
© 2006 Springer
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Printed in the Netherlands.
Contents
Preface ix

Random Matrices and Number Theory 1
J. P. Keating
1 Introduction 1
2 ζ(1/2 + it) and log ζ(1/2 + it) 9
3 Characteristic polynomials of random unitary matrices 12
4 Other compact groups 17
5 Families of L-functions and symmetry 19
6 Asymptotic expansions 25
References 30

2D Quantum Gravity, Matrix Models and Graph Combinatorics 33
P. Di Francesco
1 Introduction 33
2 Matrix models for 2D quantum gravity 35
3 The one-matrix model I: large N limit and the enumeration of planar graphs 45
4 The trees behind the graphs 54
5 The one-matrix model II: topological expansions and quantum gravity 58
6 The combinatorics beyond matrix models: geodesic distance in planar graphs 69
7 Planar graphs as spatial branching processes 76
8 Conclusion 85
References 86

Eigenvalue Dynamics, Follytons and Large N Limits of Matrices 89
Joakim Arnlind, Jens Hoppe
References 93

Random Matrices and Supersymmetry in Disordered Systems 95
K.B. Efetov
1 Supersymmetry method 104
2 Wave functions fluctuations in a finite volume. Multifractality 118
3 Recent and possible future developments 126
4 Summary 134
Acknowledgements 134
References 134

Hydrodynamics of Correlated Systems 139
Alexander G. Abanov
1 Introduction 139
2 Instanton or rare fluctuation method 142
3 Hydrodynamic approach 143
4 Linearized hydrodynamics or bosonization 145
5 EFP through an asymptotics of the solution 147
6 Free fermions 148
7 Calogero-Sutherland model 150
8 Free fermions on the lattice 152
9 Conclusion 156
Acknowledgements 157
Appendix: Hydrodynamic approach to non-Galilean invariant systems 157
Appendix: Exact results for EFP in some integrable models 158
References 160

QCD, Chiral Random Matrix Theory and Integrability 163
J.J.M. Verbaarschot
1 Summary 163
2 Introduction 163
3 QCD 166
4 The Dirac spectrum in QCD 174
5 Low energy limit of QCD 176
6 Integrability and the QCD partition function 182
7 Chiral RMT and the QCD Dirac spectrum 188
8 QCD at finite baryon density 200
9 Full QCD at nonzero chemical potential 211
10 Conclusions 212
Acknowledgements 213
References 214

Euclidean Random Matrices: Solved and Open Problems 219
Giorgio Parisi
1 Introduction 219
2 Basic definitions 222
3 Physical motivations 224
4 Field theory 226
5 The simplest case 230
6 Phonons 240
References 257

Matrix Models and Growth Processes 261
A. Zabrodin
1 Introduction 261
2 Some ensembles of random matrices with complex eigenvalues 264
3 Exact results at finite N 274
4 Large N limit 282
5 The matrix model as a growth problem 298
References 316

Matrix Models and Topological Strings 319
Marcos Mariño
1 Introduction 319
2 Matrix models 323
3 Type B topological strings and matrix models 345
4 Type A topological strings, Chern-Simons theory and matrix models 366
References 374

Matrix Models of Moduli Space 379
Sunil Mukhi
1 Introduction 379
2 Moduli space of Riemann surfaces and its topology 380
3 Quadratic differentials and fatgraphs 383
4 The Penner model 388
5 Penner model and matrix gamma function 389
6 The Kontsevich Model 390
7 Applications to string theory 394
8 Conclusions 398
References 400

Matrix Models and 2D String Theory 403
Emil J. Martinec
1 Introduction 403
2 An overview of string theory 406
3 Strings in D-dimensional spacetime 408
4 Discretized surfaces and 2D string theory 413
5 An overview of observables 421
6 Sample calculation: the disk one-point function 425
7 Worldsheet description of matrix eigenvalues 434
8 Further results 441
9 Open problems 446
References 452

Matrix Models as Conformal Field Theories 459
Ivan K. Kostov
1 Introduction and historical notes 459
2 Hermitian matrix integral: saddle points and hyperelliptic curves 461
3 The hermitian matrix model as a chiral CFT 470
4 Quasiclassical expansion: CFT on a hyperelliptic Riemann surface 477
5 Generalization to chains of random matrices 483
References 486

Large N Asymptotics of Orthogonal Polynomials from Integrability to Algebraic Geometry 489
B. Eynard
1 Introduction 489
2 Definitions 489
3 Orthogonal polynomials 490
4 Differential equations and integrability 491
5 Riemann-Hilbert problems and isomonodromies 492
6 WKB-like asymptotics and spectral curve 493
7 Orthogonal polynomials as matrix integrals 494
8 Computation of derivatives of F^(0) 495
9 Saddle point method 496
10 Solution of the saddlepoint equation 497
11 Asymptotics of orthogonal polynomials 507
12 Conclusion 511
References 511
Preface
Random matrices have been widely and successfully used in physics for almost 60-70 years, beginning with the works of Wigner and Dyson. Initially proposed to describe the statistics of excited levels in complex nuclei, the Random Matrix Theory has grown far beyond nuclear physics, and also far beyond just level statistics. It is constantly developing into new areas of physics and mathematics, and now constitutes a part of the general culture and curriculum of a theoretical physicist.
Mathematical methods inspired by random matrix theory have become powerful and sophisticated, and enjoy a rapidly growing list of applications in seemingly disconnected disciplines of physics and mathematics.
A few recent, randomly ordered, examples of the emergence of the Random Matrix Theory are:
- universal correlations in mesoscopic systems;
- disordered and quantum chaotic systems;
- asymptotic combinatorics;
- statistical mechanics on random planar graphs;
- problems of non-equilibrium dynamics and hydrodynamics, growth models;
- dynamical phase transitions in glasses;
- low energy limits of QCD;
- two-dimensional quantum gravity and non-critical string theory, whose recent advances are in great part due to applications of the Random Matrix Theory;
- superstring theory and non-abelian supersymmetric gauge theories;
- zeros and value distributions of the Riemann zeta-function, applications in modular forms and elliptic curves;
- quantum and classical integrable systems and soliton theory.
In these fields the Random Matrix Theory sheds a new light on classical prob-
lems.
On the surface, these subjects seem to have little in common. In depth the
subjects are related by an intrinsic logic and unifying methods of theoretical
physics. One important unifying ground, and also a mathematical basis for the
Random Matrix Theory, is the concept of integrability. This is despite the fact
that the theory was invented to describe randomness.
The main goal of the school was to accentuate fascinating links between
different problems of physics and mathematics, where the methods of the Ran-
dom Matrix Theory have been successfully used.
We hope that the current volume serves this goal. Comprehensive lectures and lecture notes of seminars presented by the leading researchers bring the reader to the frontiers of a broad range of subjects, applications, and methods of the Random Matrix Universe.
We are greatly indebted to Eldad Bettelheim for his help in preparing the volume.
EDITORS
RANDOM MATRICES AND NUMBER THEORY
J. P. Keating
School of Mathematics,
University of Bristol,
Bristol BS8 1TW, UK
1. Introduction
My purpose in these lecture notes is to review and explain some recent re-
sults concerning connections between random matrix theory and number the-
ory. Specifically, I will focus on how random matrix theory has been used to
shed new light on some classical problems relating to the value distributions
of the Riemann zeta-function and other L-functions, and on applications to
modular forms and elliptic curves.
This may all seem rather far from Physics, but, as I hope to make clear, the
questions I shall be reviewing are rather natural from the random-matrix point
of view, and attempts to answer them have stimulated significant developments
within that subject. Moreover, analogies between properties of the Riemann
zeta function, random matrix theory, and the semiclassical theory of quantum
chaotic systems have been the subject of considerable interest over the past 20
years. Indeed, the Riemann zeta function might be viewed as one of the best
testing grounds for those theories.
In this introductory chapter I shall attempt to paint the number-theoretical
background needed to follow these notes, give some history, and set some
context from the point of view of Physics. The calculations described in the
later chapters are, as far as possible, self-contained.
1.1 Number-theoretical background
The Riemann zeta function is defined by
    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \prod_{p} \left(1 - \frac{1}{p^{s}}\right)^{-1}    (1)

for Re s > 1, where p labels the primes, and then by analytic continuation to the rest of the complex plane. It has a single simple pole at s = 1, zeros at s = -2, -4, -6, etc., and infinitely many zeros, called the non-trivial zeros,
in the critical strip 0 < Re s < 1. It satisfies the functional equation

    \pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}\right)\zeta(s) = \pi^{-(1-s)/2}\,\Gamma\!\left(\frac{1-s}{2}\right)\zeta(1-s).    (2)
The Riemann Hypothesis states that all of the non-trivial zeros lie on the critical line Re s = 1/2 (i.e. on the symmetry line of the functional equation); that is, ζ(1/2 + it) = 0 has non-trivial solutions only when t = t_n ∈ ℝ [33].
This is known to be true for at least 40% of the non-trivial zeros [6], for the
first 100 billion of them [36], and for batches lying much higher [29].
In these notes I will, for ease of presentation, assume the Riemann Hypoth-
esis to be true. This is not strictly necessary – it simply makes some of the
formulae more transparent.
The mean density of the non-trivial zeros increases logarithmically with height t up the critical line. Specifically, the unfolded zeros

    w_n = \frac{t_n}{2\pi} \log\frac{|t_n|}{2\pi}    (3)

satisfy

    \lim_{W\to\infty} \frac{1}{W}\, \#\{w_n \in [0, W]\} = 1;    (4)

that is, the mean of w_{n+1} - w_n is 1.
The zeta function is central to the theory of the distribution of the prime
numbers. This fact follows directly from the representation of the zeta function
as a product over the primes, known as the Euler product. Essentially the
nontrivial zeros and the primes may be thought of as Fourier-conjugate sets of
numbers. For example, the number of primes less than X can be expressed
as a harmonic sum over the zeros, and the number, N (T), of non-trivial zeros
with heights 0 < t_n ≤ T can be expressed as a harmonic sum over the primes.
Such connections are examples of what are generally called explicit formulae.
Ignoring niceties associated with convergence, the second takes the form

    N(T) = \langle N(T)\rangle - \frac{1}{\pi} \sum_{p} \sum_{r=1}^{\infty} \frac{1}{r\, p^{r/2}} \sin(rT \log p),    (5)

where

    \langle N(T)\rangle = \frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} + \frac{7}{8} + O\!\left(\frac{1}{T}\right)    (6)
as T → ∞. This follows from integrating the logarithmic derivative of ζ(s) around a rectangle, positioned symmetrically with respect to the critical line and passing through the points s = 1/2 and s = 1/2 + iT, using the functional equation. (Formulae like this can be made to converge by integrating both sides against a smooth function with sufficiently fast decay as |T| → ∞.)
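As a quick numerical illustration of the smooth counting function (6) (my own sketch, not part of the original notes; the zero ordinates below are standard approximate values), one finds ⟨N(100)⟩ ≈ 29, matching the 29 non-trivial zeros of height below 100, and ⟨N(t_n)⟩ can be used to unfold the first few zeros to roughly unit mean spacing:

    import math

    def smooth_N(T):
        # Smooth part of the zero counting function, eq. (6), without the O(1/T) term
        return T / (2 * math.pi) * math.log(T / (2 * math.pi)) - T / (2 * math.pi) + 7.0 / 8.0

    print(round(smooth_N(100.0), 2))   # ~29.0: there are 29 non-trivial zeros with 0 < t < 100

    zeros = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]   # first five t_n, approximate
    unfolded = [smooth_N(t) for t in zeros]
    print([round(b - a, 2) for a, b in zip(unfolded, unfolded[1:])])  # spacings fluctuate about 1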
It will be a crucial point for us that the Riemann zeta-function is but one

example of a much wider class of functions known as L-functions. These
L-functions all have an Euler product representation; they all satisfy a func-
tional equation like the one satisfied by the Riemann zeta-function; and in each
case their non-trivial zeros are subject to a generalized Riemann hypothesis
(i.e. they are all conjectured to lie on the symmetry axis of the corresponding
functional equation).
To give an example, let

    \chi_d(p) = \left(\frac{d}{p}\right) =
    \begin{cases}
    +1 & \text{if } p \nmid d \text{ and } x^2 \equiv d \ (\mathrm{mod}\ p) \text{ is solvable} \\
    \phantom{+}0 & \text{if } p \mid d \\
    -1 & \text{if } p \nmid d \text{ and } x^2 \equiv d \ (\mathrm{mod}\ p) \text{ is not solvable}
    \end{cases}    (7)
denote the Legendre symbol. Then define
    L_D(s, \chi_d) = \prod_{p} \left(1 - \frac{\chi_d(p)}{p^{s}}\right)^{-1} = \sum_{n=1}^{\infty} \frac{\chi_d(n)}{n^{s}},    (8)
where the product is over the prime numbers. These functions form a family of
L-functions parameterized by the integer index d. The Riemann zeta-function
is itself a member of this family.
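For an odd prime p not dividing d, Euler's criterion gives χ_d(p) ≡ d^{(p-1)/2} (mod p), which makes the Legendre symbol in (7) easy to compute. The following sketch (an illustration of (7)-(8) under these assumptions, not code from the notes) evaluates χ_d(p) this way and forms a truncated Euler product over a few odd primes:

    def chi(d, p):
        # Legendre symbol (d/p) for an odd prime p, via Euler's criterion; 0 when p | d, cf. eq. (7)
        if d % p == 0:
            return 0
        return 1 if pow(d, (p - 1) // 2, p) == 1 else -1

    odd_primes = [3, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
    print([chi(5, p) for p in odd_primes[:5]])   # a few values of chi_5(p)

    def truncated_euler_product(s, d, primes):
        # Truncation of the Euler product in eq. (8), for real s > 1 (p = 2 omitted in this sketch)
        prod = 1.0
        for p in primes:
            prod *= 1.0 / (1.0 - chi(d, p) / p**s)
        return prod

    print(truncated_euler_product(2.0, 5, odd_primes))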
There are many other ways to construct families of L-functions. It will be
particularly important to us that elliptic curves also provide a route to doing
this. I will give an explicit example in the last chapter of these notes.
1.2 History

The connection between random matrix theory and number theory was first
made in 1973 in the work of Montgomery [28], who conjectured that
    \lim_{W\to\infty} \frac{1}{W}\, \#\{w_n, w_m \in [0, W] : \alpha \le w_n - w_m < \beta\}
        = \int_{\alpha}^{\beta} \left(\delta(x) + 1 - \frac{\sin^{2}(\pi x)}{\pi^{2} x^{2}}\right) dx.    (9)
This conjecture was motivated by a theorem Montgomery proved in the same

paper that may be restated as follows:
    \lim_{N\to\infty} \frac{1}{N} \sum_{n,m \le N} f(w_n - w_m)
        = \int_{-\infty}^{\infty} f(x) \left(\delta(x) + 1 - \frac{\sin^{2}(\pi x)}{\pi^{2} x^{2}}\right) dx    (10)
for all test functions f(x) whose Fourier transforms

    \hat{f}(\tau) = \int_{-\infty}^{\infty} f(x) \exp(2\pi i x \tau)\, dx    (11)

have support in the range (-1, 1) and are such that the sum and integral in (10) converge. The generalized form of the Montgomery conjecture is that (10) holds for all test functions such that the sum and integral converge, without any restriction on the support of \hat{f}(\tau). The form of the conjecture (9) then corresponds to the particular case in which f(x) is taken to be the indicator function on the interval [α, β) (and so does not fall within the class of test functions covered by the theorem).
The link with random matrix theory follows from the observation that the
pair correlation of the nontrivial zeros conjectured by Montgomery coincides
precisely with that which holds for the eigenvalues of random matrices taken
from either the Circular Unitary Ensemble (CUE) or the Gaussian Unitary En-
semble (GUE) of random matrices [27] (i.e. random unitary or hermitian ma-
trices) in the limit of large matrix size. For example, let A be an N × N unitary matrix, so that A(A^*)^T = AA^† = I. The eigenvalues of A lie on the unit circle; that is, they may be expressed in the form e^{iθ_n}, θ_n ∈ ℝ. Scaling the eigenphases θ_n so that they have unit mean spacing,

    \phi_n = \theta_n \frac{N}{2\pi},    (12)
the two-point correlation function for a given matrix A may be defined as

    R_2(A; x) = \frac{1}{N} \sum_{n=1}^{N} \sum_{m=1}^{N} \sum_{k=-\infty}^{\infty} \delta(x + kN - \phi_n + \phi_m),    (13)
so that

    \frac{1}{N} \sum_{n,m} f(\phi_n - \phi_m) = \int_{0}^{N} R_2(A; x)\, f(x)\, dx.    (14)
R_2(A; x) is clearly periodic in x, so can be expressed as a Fourier series:

    R_2(A; x) = \frac{1}{N^{2}} \sum_{k=-\infty}^{\infty} |\mathrm{Tr}\, A^{k}|^{2}\, e^{2\pi i k x / N}.    (15)
The CUE corresponds to taking matrices from U(N) with a probability measure given by the normalized Haar measure on the group (i.e. the unique measure that is invariant under all unitary transformations). It follows from (15) that the CUE average of R_2(A; x) may be evaluated by computing the corresponding average of the Fourier coefficients |Tr A^k|^2. This was done by Dyson
[14]:

    \int_{U(N)} |\mathrm{Tr}\, A^{k}|^{2}\, d\mu_{\mathrm{Haar}}(A) =
    \begin{cases}
    N^{2} & k = 0 \\
    |k| & |k| \le N \\
    N & |k| > N.
    \end{cases}    (16)
There are several methods for proving this. One reasonably elementary
proof involves using Heine's identity

    \int_{U(N)} f_c(\theta_1, \ldots, \theta_N)\, d\mu_{\mathrm{Haar}}(A)
    = \frac{1}{(2\pi)^{N}} \int_{0}^{2\pi} \!\!\cdots\! \int_{0}^{2\pi} f_c(\theta_1, \ldots, \theta_N)\,
      \det\!\left(e^{i\theta_n (n-m)}\right) d\theta_1 \cdots d\theta_N    (17)

for class functions f_c(A) = f_c(\theta_1, \theta_2, \ldots, \theta_N) (i.e. functions f_c that are symmetric in all of their variables) to give

    \int_{U(N)} |\mathrm{Tr}\, A^{k}|^{2}\, d\mu_{\mathrm{Haar}}(A)
    = \frac{1}{(2\pi)^{N}} \int_{0}^{2\pi} \!\!\cdots\! \int_{0}^{2\pi}
      \sum_{j} \sum_{l} e^{ik(\theta_j - \theta_l)}
      \times \det\!
      \begin{pmatrix}
      1 & e^{-i\theta_1} & \cdots & e^{-i(N-1)\theta_1} \\
      e^{i\theta_2} & 1 & \cdots & e^{-i(N-2)\theta_2} \\
      \vdots & \vdots & \ddots & \vdots \\
      e^{i(N-1)\theta_N} & e^{i(N-2)\theta_N} & \cdots & 1
      \end{pmatrix}
      d\theta_1 \cdots d\theta_N.    (18)
The net contribution from the diagonal (j = l) terms in the double sum is N, because the measure is normalized and there are N diagonal terms. Using the fact that

    \frac{1}{2\pi} \int_{0}^{2\pi} e^{in\theta}\, d\theta =
    \begin{cases}
    1 & n = 0 \\
    0 & n \ne 0,
    \end{cases}    (19)

if k ≥ N then the integral of the off-diagonal terms is zero, because, for example, when the determinant is expanded out and multiplied by the prefactor there is no possibility of θ_1 cancelling in the exponent. If k = N - s, s = 1, ..., N - 1, then the off-diagonal terms contribute -s; for example, when s = 1 only one non-zero term survives when the determinant is expanded out, multiplied by the prefactor, and integrated term-by-term – this is the term coming from multiplying the bottom-left entry by the top-right entry and all of the diagonal entries on the other rows. Thus the combined diagonal and off-diagonal terms add up to give the expression in (16), bearing in mind that when k = 0 the total is just N^2, the number of terms in the sum over j and l.
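As a numerical sanity check on (16) (a minimal sketch, not part of the notes; it assumes scipy is available), one can average |Tr A^k|^2 over Haar-distributed unitary matrices and compare with min(|k|, N) for k ≠ 0:

    import numpy as np
    from scipy.stats import unitary_group

    N, samples = 8, 2000
    acc = {k: 0.0 for k in (1, 3, 8, 12)}
    for _ in range(samples):
        A = unitary_group.rvs(N)                    # Haar-distributed element of U(N)
        eig = np.linalg.eigvals(A)                  # eigenvalues e^{i theta_n}
        for k in acc:
            acc[k] += abs(np.sum(eig ** k)) ** 2    # |Tr A^k|^2
    for k in acc:
        print(k, acc[k] / samples, "vs", min(k, N))   # Dyson's result, eq. (16)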
Heine’s identity itself may be proved using the Weyl Integration Formula
[35]

    \int_{U(N)} f_c(A)\, d\mu_{\mathrm{Haar}}(A)
    = \frac{1}{(2\pi)^{N} N!} \int_{0}^{2\pi} \!\!\cdots\! \int_{0}^{2\pi} f_c(\theta_1, \ldots, \theta_N)
      \prod_{1 \le j < k \le N} \left|e^{i\theta_j} - e^{i\theta_k}\right|^{2} d\theta_1 \cdots d\theta_N    (20)

for class functions f_c(A), the Vandermonde identity

    \prod_{1 \le j < k \le N} \left|e^{i\theta_j} - e^{i\theta_k}\right|^{2} = \det\!\left(M M^{\dagger}\right)    (21)

where

    M =
    \begin{pmatrix}
    1 & 1 & \cdots \\
    e^{i\theta_1} & e^{i\theta_2} & \cdots \\
    \vdots & \vdots & \ddots \\
    e^{i(N-1)\theta_1} & e^{i(N-1)\theta_2} & \cdots
    \end{pmatrix},    (22)

the fact that

    \det\!\left(M M^{\dagger}\right) = \det\!\left(\sum_{l=1}^{N} e^{i\theta_l (n-m)}\right),    (23)

and then by performing elementary manipulations of the rows in this determinant.
The Weyl Integration Formula will play a central role in these notes. One way to understand it is to observe that, by definition, dµ_Haar(A) is invariant under A → ŨAŨ^†, where Ũ is any N × N unitary matrix, and that A can always be diagonalized by a unitary transformation; that is, it can be written as

    A = U
    \begin{pmatrix}
    e^{i\theta_1} & \cdots & 0 \\
    \vdots & \ddots & \vdots \\
    0 & \cdots & e^{i\theta_N}
    \end{pmatrix}
    U^{\dagger},    (24)

where U is an N × N unitary matrix. Therefore the integral over A can be written as an integral over the matrix elements of U and the eigenphases θ_n. Because the measure is invariant under unitary transformations, the integral over the matrix elements of U can be evaluated straightforwardly, leaving the integral over the eigenphases (20).
Henceforth, to simplify the notation, I shall drop the subscript on the mea-
sure dµ(A) – in all integrals over compact groups the measure may be taken to
be the Haar measure on the group.
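For numerical experiments with such Haar averages it helps to have a sampler for U(N). A standard construction (a sketch, assuming NumPy; essentially the QR-based recipe described by Mezzadri) QR-factorizes a complex Gaussian matrix and fixes the phases of the diagonal of R:

    import numpy as np

    def haar_unitary(N, rng=np.random.default_rng()):
        # Sample an N x N matrix from U(N) with Haar measure (QR with phase correction)
        Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        Q, R = np.linalg.qr(Z)
        d = np.diagonal(R)
        return Q * (d / np.abs(d))                 # multiply column j of Q by the phase of R_jj

    A = haar_unitary(5)
    print(np.allclose(A @ A.conj().T, np.eye(5)))  # unitarity
    print(np.abs(np.linalg.eigvals(A)))            # eigenvalues lie on the unit circle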
It follows from Dyson's theorem (16) that

    \int_{U(N)} R_2(A; x)\, d\mu(A)
    = \frac{1}{N^{2}} \sum_{k=-\infty}^{\infty} e^{2\pi i k x / N}
      \begin{cases}
      N^{2} & k = 0 \\
      |k| & |k| < N \\
      N & |k| \ge N
      \end{cases}    (25)

    = \sum_{j=-\infty}^{\infty} \delta(x - jN) + 1 - \frac{\sin^{2}(\pi x)}{N^{2} \sin^{2}\!\left(\frac{\pi x}{N}\right)}.    (26)
Hence, for test functions f such that f (x) → 0 as |x|→∞,
    \lim_{N\to\infty} \int_{U(N)} \int_{-\infty}^{\infty} f(x)\, R_2(A; x)\, dx\, d\mu(A)
    = \int_{-\infty}^{\infty} f(x) \left(\delta(x) + 1 - \frac{\sin^{2}(\pi x)}{\pi^{2} x^{2}}\right) dx.    (27)
For example,
    \lim_{N\to\infty} \int_{U(N)} \frac{1}{N}\, \#\{\phi_n, \phi_m : \alpha \le \phi_n - \phi_m \le \beta\}\, d\mu(A)
    = \int_{\alpha}^{\beta} \left(\delta(x) + 1 - \frac{\sin^{2}(\pi x)}{\pi^{2} x^{2}}\right) dx.    (28)
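The finite-N expression (26) can also be probed directly by simulation (a rough sketch of mine, assuming scipy; not from the notes): histogram the unfolded eigenphase differences of sampled CUE matrices and compare the smooth (n ≠ m) part with 1 - sin²(πx)/(N² sin²(πx/N)).

    import numpy as np
    from scipy.stats import unitary_group

    N, samples, bins = 20, 500, 60
    diffs = []
    for _ in range(samples):
        A = unitary_group.rvs(N)
        phi = np.angle(np.linalg.eigvals(A)) * N / (2 * np.pi)   # unfolded eigenphases, eq. (12)
        d = phi[:, None] - phi[None, :]
        diffs.append(d[~np.eye(N, dtype=bool)] % N)              # n != m differences, folded into [0, N)
    diffs = np.concatenate(diffs)

    width = N / bins
    x = (np.arange(bins) + 0.5) * width
    hist, _ = np.histogram(diffs, bins=bins, range=(0.0, float(N)))
    empirical = hist / (samples * N * width)                     # two-point density per unit x, cf. (13)
    theory = 1 - np.sin(np.pi * x) ** 2 / (N**2 * np.sin(np.pi * x / N) ** 2)
    print(np.max(np.abs(empirical - theory)))                    # shrinks as samples grows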
The key point is now that the right-hand sides of (27) and (28) coincide
precisely with those of (10) and (9) respectively. That is, the pair correlation
of the Riemann zeros, in the limit as the height up the critical line tends to
infinity, is, conjecturally, the same as that of the eigenphases of random unitary
(or hermitian) matrices in the limit as the matrix size tends to infinity.
It is important to note that the proof of Montgomery’s theorem does not in-
volve any of the steps in the derivation of the CUE pair correlation function. It

is instead based entirely on the connection between the Riemann zeros and the
primes. In outline, the proof involves computing the pair correlation function
of the derivative of N(T ). Using the explicit formula (5), this pair correlation
function can be expressed as a sum over pairs of primes, p and q. The diago-
nal terms, for which p = q, obviously involve only single primes. Their sum
can then be evaluated using the Prime Number Theorem, which governs the
asymptotic density of primes. (Roughly speaking, the Prime Number Theo-
rem guarantees that prime sums \sum_p F(p) may, for appropriate functions F, be approximated by \int F(x)/\log x\, dx.) The off-diagonal terms (p ≠ q) cannot
be summed rigorously. However, it can be shown that these terms do not con-
tribute to the limiting form of the pair correlation function for test functions
f(x) in (10) whose Fourier transforms have support in (-1, 1). This follows
from the fact that the separation between the primes is bounded from below
(by one!), which in turn means that the off-diagonal terms oscillate sufficiently
quickly that they are killed for test functions satisfying the support condition
by the averaging inherent in the definition of the correlation function.
In order to prove Montgomery's conjecture for all test functions f(x) it
would be necessary to evaluate the off-diagonal terms in the sum over prime
pairs, and this would require significantly more information about the pair cor-
relation of the primes than is currently available rigorously. Nevertheless, there
are conjectures about correlations between the primes due to Hardy and Little-
wood [17] which can be used to provide a heuristic verification [22].
Perhaps the most compelling evidence in support of Montgomery’s conjec-

ture comes, however, from Odlyzko’s numerical computations of large num-
bers of zeros very high up on the critical line [29]. The pair correlation of these
zeros is in striking agreement with (9).
Montgomery’s conjecture and theorem generalize immediately to higher or-
der correlations between the Riemann zeros. The most general theorem, which
holds for all n-point correlations and for test functions whose Fourier trans-
forms are supported on restricted sets, is due to Rudnick and Sarnak [31].
Again, the conjectures are supported by Odlyzko’s numerical computations [29]
and by heuristic calculations for all n-point correlations based on the Hardy-
Littlewood conjectures and which make no assumptions on the test functions [2,
3].
The results and conjectures described above extend straightforwardly to
other L-functions – the zeros of each individual L-function are, assuming the
generalized Riemann Hypothesis, believed to be correlated along the critical
line in the same way as the eigenvalues of random unitary matrices in the
limit of large matrix size [31]. They extend in a much more interesting way,
however, when one considers families. It was suggested by Katz and Sar-
nak [20, 21] that statistical properties of the zeros of L-functions computed by
averaging over a family, rather than along the critical line, should coincide with
those of the eigenvalues of matrices from one of the classical compact groups
(e.g. the unitary, orthogonal or symplectic groups); which group depends on
the particular symmetries of the family in question. For example, the family
defined in (7) is believed to have symplectic symmetry. I will give an example
later in these notes which has orthogonal symmetry. In both these examples,
the zeros of the L-functions come in pairs, symmetrically distributed around
the centre of the critical strip (where the critical line intersects the real axis),
just as the eigenvalues of orthogonal and symplectic matrices come in complex
conjugate pairs. In the case of the L-functions, this pairing is a consequence
of the functional equation. The differences between the various groups show
up, for example, when one looks at the zero/eigenvalue distribution close to

the respective symmetry points [30].
I mention in passing that one can define analogues of the L-functions over
finite fields. These are polynomials. In this case Katz and Sarnak were able
to prove the connection between the distribution of the zeros and that of the
eigenvalues of random matrices associated with the classical compact groups,
in the limit as the field size tends to infinity.
1.3 Physics
Much of the material in these lectures may seem rather far removed from
Physics, but in fact there are a number of remarkable similarities and analogies
that hint at a deep connection. Many of these similarities have been reviewed
elsewhere [22, 1], and so I shall not discuss them in detail here. However, I
shall make a few brief comments that I hope may help orient some readers in
the following sections.
Underlying the connection between the theory of the zeta function and Physics
is a suggestion, due originally to Hilbert and Polya, that one strategy to prove
the Riemann Hypothesis would be to identify the zeros t_n with the eigenvalues
of a self-adjoint operator. The Riemann Hypothesis would then follow imme-
diately from the fact that these eigenvalues are real. One might thus speculate
that the numbers t_n are the energy levels of some quantum mechanical system.
In quantum mechanics there is a semiclassical formula due to Gutzwiller
[15] that relates the counting function of the energy levels to a sum over the
periodic orbits of the corresponding classical system. In the case of strongly
chaotic systems that do not possess time-reversal symmetry, this formula is
very closely analogous to (5), the primes being associated with periodic orbits.
The fact that the analogy is with chaotic systems that are not time-reversal

invariant is consistent with the conjecture that the energy level statistics of such
systems should, generically, in the semiclassical limit, coincide with those of
the eigenvalues of random matrices from one of the ensembles that are invari-
ant under unitary transformations, such as the CUE or the GUE, in the limit of
large matrix size [4].
The appearance of random matrices associated with the orthogonal and sym-
plectic groups in the statistical description of the zeros statistics within families
of L-functions is analogous to the appearance of these groups in Zirnbauer’s
extension of Dyson’s three-fold way to include systems of disordered fermions
(see, for example, [37]).
2. ζ(1/2 + it) and log ζ(1/2 + it)
The background reviewed in the introduction relates to connections between
the statistical distributions of the zeros of the Riemann zeta function and other
L-functions and those of the eigenvalues of random matrices associated with
the classical compact groups, on the scale of the mean zero/eigenvalue spac-
ing. My goal in the remainder of these notes is to focus on more recent developments that concern the value distribution of the functions ζ(1/2 + it) and log ζ(1/2 + it) as t varies. I will then go on to describe the value distribution
of L-functions within families, and applications of these results to some other
important questions in number theory.
The basic ideas I shall be reviewing were introduced in [24], [25], and [7].
The theory was substantially developed in [8, 9]. The applications I shall de-
scribe later were initiated in [12] and [13]. Details of all of the calculations I
shall outline can be found in these references.
I shall start by reviewing what is known about the value distribution of
log ζ(1/2+it). The most important general result, due originally to Selberg,
is that this function satisfies a central limit theorem [33]: for any rectangle B
in the complex plane,
    \lim_{T\to\infty} \frac{1}{T}\, \mathrm{meas.}\left\{ T \le t \le 2T :
    \frac{\log\zeta(\tfrac12 + it)}{\sqrt{\tfrac12 \log\log\tfrac{t}{2\pi}}} \in B \right\}
    = \frac{1}{2\pi} \iint_{B} e^{-\frac12 (x^{2} + y^{2})}\, dx\, dy.    (29)
Odlyzko has investigated the value distribution of log ζ(1/2+it) numerically for values of t around the height of the 10^20th zero. Surprisingly, he found a distribution that differs markedly from the limiting Gaussian. His data are plotted in Figures 1 and 2. The CUE curves will be discussed later.
In order to quantify the discrepancy illustrated in Figures 1 and 2, I list in Table 1 the moments of Re log ζ(1/2+it), normalized so that the second moment is equal to one, calculated numerically by Odlyzko in [29]. The data in the second and third columns relate to two different ranges near the height of the 10^20th zero. The difference between them is therefore a measure of the fluctuations associated with computing over a finite range near this height. The data labelled U(42) will be explained later.
Next let us turn to the value distribution of ζ(1/2+it) itself. Its moments
satisfy the long-standing and important conjecture that
    \lim_{T\to\infty} \frac{1}{\left(\log\frac{T}{2\pi}\right)^{\lambda^{2}}} \frac{1}{T}
    \int_{0}^{T} \left|\zeta(\tfrac12 + it)\right|^{2\lambda} dt
    = f_{\zeta}(\lambda) \prod_{p} \left[ \left(1 - \frac{1}{p}\right)^{\lambda^{2}}
      \sum_{m=0}^{\infty} \left(\frac{\Gamma(\lambda + m)}{m!\, \Gamma(\lambda)}\right)^{2} p^{-m} \right].    (30)

This can be viewed in the following way. It asserts that the moments grow like (\log\frac{T}{2\pi})^{\lambda^2} as T → ∞. Treating the primes as being statistically independent of each other would give the right hand side with f_ζ(λ) = 1. f_ζ(λ) thus
Figure 1. Odlyzko's data for the value distribution of Re log ζ(1/2+it) near the 10^20th zero (taken from [29]), the value distribution of Re log Z with respect to matrices taken from U(42), and the standard Gaussian, all scaled to have unit variance. (Taken from [24].)
Table 1. Moments of Re log ζ(1/2+it), calculated by Odlyzko over two ranges (labelled a and b) near the 10^20th zero (t ≈ 1.520 × 10^19) (taken from [29]), compared with the moments of Re log Z for U(42) and the Gaussian (normal) moments, all scaled to have unit variance.

    Moment    ζ (a)       ζ (b)       U(42)       Normal
    1         0.0         0.0         0.0         0
    2         1.0         1.0         1.0         1
    3         -0.53625    -0.55069    -0.56544    0
    4         3.9233      3.9647      3.89354     3
    5         -7.6238     -7.8839     -7.76965    0
    6         38.434      39.393      38.0233     15
    7         -144.78     -148.77     -145.043    0
    8         758.57      765.54      758.036     105
    9         -4002.5     -3934.7     -4086.92    0
    10        24060.5     22722.9     25347.77    945
quantifies deviations from this simple-minded ansatz. Assuming that the moments do indeed grow like (\log\frac{T}{2\pi})^{\lambda^2}, the problem is then to determine f_ζ(λ).
Figure 2. The logarithm of the inverse of the value distribution plotted in Figure 1. (Taken from [24].)
The conjecture is known to be correct in only two non-trivial cases, when λ = 1 and λ = 2. It was shown by Hardy and Littlewood in 1918 that f_ζ(1) = 1 [16] and by Ingham in 1926 that f_ζ(2) = 1/12 [18]. On number-theoretical grounds, Conrey and Ghosh have conjectured that f_ζ(3) = 42/9! [10] and Conrey and Gonek that f_ζ(4) = 24024/16! [11].
We shall now look to random matrix theory to see what light, if any, it can
shed on these issues.
3. Characteristic polynomials of random unitary matrices
Our goal is to understand the value distribution of ζ(1/2+it). Recalling
that the zeros of this function are believed to be correlated like the eigenvalues
of random unitary matrices, we take as our model the functions whose zeros
are these eigenvalues, namely the characteristic polynomials of the matrices in
question.
Let us define the characteristic polynomial of a matrix A by

    Z(A, \theta) = \det\!\left(I - A e^{-i\theta}\right) = \prod_{n} \left(1 - e^{i(\theta_n - \theta)}\right).    (31)
Consider first the function
    P_N(s, t) = \int_{U(N)} |Z(A, \theta)|^{t}\, e^{i s\, \mathrm{Im}\log Z(A, \theta)}\, d\mu(A).    (32)

This is the moment generating function of log Z: the joint moments of Re log Z and Im log Z are obtained from derivatives of P at s = 0 and t = 0, and

    \int_{U(N)} \delta(x - \mathrm{Re}\log Z)\, \delta(y - \mathrm{Im}\log Z)\, d\mu(A)    (33)

    = \frac{1}{(2\pi)^{2}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-itx - isy}\, P(s, it)\, ds\, dt.    (34)
Written in terms of the eigenvalues,
    P_N(s, t) = \int_{U(N)} \prod_{n=1}^{N} \left|1 - e^{i(\theta_n - \theta)}\right|^{t}
    e^{-is \sum_{m=1}^{\infty} \frac{\sin[(\theta_n - \theta)m]}{m}}\, d\mu(A).    (35)
Since the integrand is a class function, we can use Weyl’s integration formula
(20) to write
    P_N(s, t) = \frac{1}{(2\pi)^{N} N!} \int_{0}^{2\pi} \!\!\cdots\! \int_{0}^{2\pi}
    \prod_{n=1}^{N} \left|1 - e^{i(\theta_n - \theta)}\right|^{t}
    e^{-is \sum_{m=1}^{\infty} \frac{\sin[(\theta_n - \theta)m]}{m}}
    \prod_{1 \le j < k \le N} \left|e^{i\theta_j} - e^{i\theta_k}\right|^{2} d\theta_1 \cdots d\theta_N.    (36)

This integral can then be evaluated using a form of Selberg's integral described in [27], giving [24]

    P_N(s, t) = \prod_{j=1}^{N} \frac{\Gamma(j)\, \Gamma(t + j)}{\Gamma\!\left(j + \frac{t}{2} + \frac{s}{2}\right) \Gamma\!\left(j + \frac{t}{2} - \frac{s}{2}\right)}.    (37)
Note that the result is independent of θ. This is because the average over
U(N ) includes rotations of the spectrum and is itself therefore rotationally
invariant.
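A quick consistency check of (37) (my own numerical sketch, assuming scipy): at s = 0, t = 2 the product telescopes to N + 1, so the Haar average of |Z(A, θ)|² over U(N) is N + 1, independent of θ, and Monte Carlo sampling reproduces this.

    import numpy as np
    from scipy.special import gammaln
    from scipy.stats import unitary_group

    def P_N(s, t, N):
        # The product formula (37), evaluated via log-Gamma functions for stability
        j = np.arange(1, N + 1)
        logp = (gammaln(j) + gammaln(t + j)
                - gammaln(j + t / 2 + s / 2) - gammaln(j + t / 2 - s / 2))
        return float(np.exp(logp.sum()))

    N, theta, samples = 10, 0.3, 4000
    print(P_N(0.0, 2.0, N))                      # equals N + 1 = 11

    acc = 0.0
    for _ in range(samples):
        A = unitary_group.rvs(N)
        Z = np.linalg.det(np.eye(N) - A * np.exp(-1j * theta))   # eq. (31)
        acc += abs(Z) ** 2
    print(acc / samples)                         # Monte Carlo estimate of the same average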
3.1 Value distribution of log Z
Consider first the Taylor expansion
    P_N(s, t) = e^{\alpha_{00} + \alpha_{10} t + \alpha_{01} s + \alpha_{20} t^{2}/2 + \alpha_{11} t s + \alpha_{02} s^{2}/2 + \cdots}.    (38)

The α_{m0} are the cumulants of Re log Z and the α_{0n} are i^n times the cumulants of Im log Z. Expanding (37) gives:

    \alpha_{10} = \alpha_{01} = \alpha_{11} = 0;    (39)

    \alpha_{20} = -\alpha_{02} = \tfrac12 \log N + \tfrac12 (\gamma + 1) + O\!\left(\tfrac{1}{N^{2}}\right);    (40)

    \alpha_{mn} = O(1) \quad \text{for } m + n \ge 3;    (41)

and more specifically,

    \alpha_{m0} = (-1)^{m} \left(1 - \frac{1}{2^{m-1}}\right) \Gamma(m)\, \zeta(m-1) + O\!\left(\frac{1}{N^{m-2}}\right), \quad \text{for } m \ge 3.    (42)
This leads to the following theorem [24]: for any rectangle B in the complex
plane
    \lim_{N\to\infty} \mathrm{meas.}\left\{ A \in U(N) :
    \frac{\log Z(A, \theta)}{\sqrt{\tfrac12 \log N}} \in B \right\}
    = \frac{1}{2\pi} \iint_{B} e^{-\frac12 (x^{2} + y^{2})}\, dx\, dy.    (43)
Comparing this result to (29), one sees that log ζ(1/2+it) and log Z both satisfy a central limit theorem when, respectively, t → ∞ and N → ∞. Note that the scalings in (29) and (43), corresponding to the asymptotic variances, are the same if we make the identification

    N = \log\frac{t}{2\pi}.    (44)

This is the same as identifying the mean eigenvalue density with the mean zero density; cf. the unfolding factors in (12) and (3).
The identification (44) provides a connection between matrix sizes and heights up the critical line. The central limit theorems imply that when both of these quantities tend to infinity log ζ(1/2+it) and log Z have the same limit distribution. This supports the choice of Z as a model for the value distribution of ζ(1/2+it) when t → ∞. It is natural then to ask if it also constitutes a useful model when t is large but finite; that is, whether it can explain the deviations from the limiting Gaussian seen in Odlyzko's data.
The value of t corresponding to the height of the 10^20th zero should be associated, via (44), to a matrix size of about N = 42. The moments and
value distribution of log Z for any size of matrix can be obtained directly from the formula for the moment generating function (37). The value distribution when N = 42 is the CUE curve plotted in Figures 1 and 2. Values of the moments are listed in Table 1. The obvious agreement between the results for random 42 × 42 unitary matrices and Odlyzko's data provides significant further support for the model. It suggests that random matrix theory models not just the limit distribution of log ζ(1/2+it), but the rate of approach to the limit as t → ∞.
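The U(42) entries of Table 1 can in fact be reproduced without sampling any matrices: differentiating log P_N(0, t) from (37) term by term gives the cumulants of Re log Z as κ_m = (1 − 2^{1−m}) Σ_{j=1}^{N} ψ^{(m−1)}(j), with ψ^{(k)} the polygamma function. A minimal sketch (my own illustration, assuming scipy) for the normalized third and fourth moments:

    import numpy as np
    from scipy.special import polygamma

    def cumulants_re_log_Z(N, orders=(2, 3, 4)):
        # kappa_m = (1 - 2**(1-m)) * sum_{j=1}^N psi^{(m-1)}(j), from eq. (37) at s = 0
        j = np.arange(1, N + 1)
        return {m: (1 - 2.0 ** (1 - m)) * polygamma(m - 1, j).sum() for m in orders}

    k = cumulants_re_log_Z(42)
    print(k[3] / k[2] ** 1.5)     # normalized third moment; compare with the U(42) column of Table 1
    print(3 + k[4] / k[2] ** 2)   # normalized fourth moment; compare with Table 1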
3.2 Moments of |Z|
We now turn to the more important problem of the moments of |ζ(1/2+it)|.
It is natural to expect these moments to be related to those of the modulus of
the characteristic polynomial Z, which are defined as

    \int_{U(N)} |Z(A, \theta)|^{2\lambda}\, d\mu(A) = P(0, 2\lambda)
    = \prod_{j=1}^{N} \frac{\Gamma(j)\, \Gamma(j + 2\lambda)}{(\Gamma(j + \lambda))^{2}}    (45)

    = e^{\sum_{m=0}^{\infty} \alpha_{m0} (2\lambda)^{m} / m!}.    (46)

Therefore

    \lim_{N\to\infty} \frac{1}{N^{\lambda^{2}}} \int_{U(N)} |Z(A, \theta)|^{2\lambda}\, d\mu(A)
    = e^{\lambda^{2} (\gamma + 1) + \sum_{m=3}^{\infty} (-2\lambda)^{m} \frac{2^{m-1} - 1}{2^{m-1}} \frac{\zeta(m-1)}{m}},    (47)
for |λ| < 1/2. Note that since we are identifying Z with ζ(1/2+it) and N with log(t/2π), the expression on the left-hand side of (47) corresponds precisely to that in (30).
We now recall some properties of the Barnes’ G-function. This is an entire
function of order 2 defined by
    G(1 + z) = (2\pi)^{z/2}\, e^{-[(1+\gamma) z^{2} + z]/2} \prod_{n=1}^{\infty} \left[\left(1 + \frac{z}{n}\right)^{n} e^{-z + z^{2}/(2n)}\right].    (48)

It satisfies

    G(1) = 1,    (49)

    G(z + 1) = \Gamma(z)\, G(z)    (50)

and

    \log G(1 + z) = (\log 2\pi - 1)\frac{z}{2} - (1 + \gamma)\frac{z^{2}}{2} + \sum_{n=3}^{\infty} (-1)^{n-1} \zeta(n - 1)\frac{z^{n}}{n}.    (51)
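These properties are easy to verify numerically (a small sketch of mine, assuming mpmath, which exposes the Barnes G-function as barnesg):

    from mpmath import mp, barnesg, gamma, log, zeta, pi, nsum, inf

    mp.dps = 30
    z = mp.mpf("0.3")

    # functional equation (50): G(z + 1) = Gamma(z) G(z)
    print(barnesg(z + 1) - gamma(z) * barnesg(z))        # ~ 0

    # series (51) versus log G(1 + z)
    s = (log(2 * pi) - 1) * z / 2 - (1 + mp.euler) * z**2 / 2
    s += nsum(lambda n: (-1) ** (int(n) - 1) * zeta(n - 1) * z**n / n, [3, inf])
    print(s - log(barnesg(1 + z)))                        # ~ 0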