
Mathematical Methods
for Physical and
Analytical Chemistry
David Z. Goodson
Department of Chemistry & Biochemistry
University of Massachusetts Dartmouth
WILEY
A JOHN WILEY & SONS, INC., PUBLICATION
The text was typeset by the author using LaTeX (copyright 1999, 2002-2008, LaTeX3 Project) and the
figures were created by the author using gnuplot (copyright 1986-1993, 1998, 2004, Thomas Williams
and Colin Kelley).
Copyright © 2011 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to
the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax
(978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should
be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ
07030, (201) 748-6011, fax (201) 748-6008, or online at
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in
preparing this book, they make no representation or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales


representatives or written sales materials. The advice and strategies contained herein may not be
suitable for your situation. You should consult with a professional where appropriate. Neither the
publisher nor author shall be liable for any loss of profit or any other commercial damages, including
but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services please contact our Customer Care
Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or
fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print,
however, may not be available in electronic formats. For more information about Wiley products, visit
our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data is available.
ISBN 978-0-470-47354-2
Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
To Betsy
Contents
Preface xiii
List of Examples xv
Greek Alphabet xix
Part I. Calculus
1 Functions: General Properties 3
1.1 Mappings 3
1.2 Differentials and Derivatives 4
1.3 Partial Derivatives 7
1.4 Integrals 9
1.5 Critical Points 14
2 Functions: Examples 19
2.1 Algebraic Functions 19
2.2 Transcendental Functions 21
2.2.1 Logarithm and Exponential 21

2.2.2 Circular Functions 24
2.2.3 Gamma and Beta Functions 26
2.3 Functionals 31
3 Coordinate Systems 33
3.1 Points in Space 33
3.2 Coordinate Systems for Molecules 35
3.3 Abstract Coordinates 37
3.4 Constraints 39
3.4.1 Degrees of Freedom 39
3.4.2 Constrained Extrema* 40
3.5 Differential Operators in Polar Coordinates 43
4 Integration 47
4.1 Change of Variables in Integrands 47
4.1.1 Change of Variable: Examples 47
4.1.2 Jacobian Determinant 49
4.2 Gaussian Integrals 51
4.3 Improper Integrals 53
4.4 Dirac Delta Function 56
4.5 Line Integrals 57
5 Numerical Methods 61
5.1 Interpolation 61
5.2 Numerical Differentiation 63
5.3 Numerical Integration 65
5.4 Random Numbers 70
5.5 Root Finding 71
5.6 Minimization* 74
*This section treats an advanced topic. It can be skipped without loss of continuity.
6 Complex Numbers 79

6.1 Complex Arithmetic 79
6.2 Fundamental Theorem of Algebra 81
6.3 The Argand Diagram 83
6.4 Functions of a Complex Variable* 87
6.5 Branch Cuts* 89
7 Extrapolation 93
7.1 Taylor Series 93
7.2 Partial Sums 97
7.3 Applications of Taylor Series 99
7.4 Convergence 102
7.5 Summation Approximants* 104
Part II. Statistics
8 Estimation 111
8.1 Error and Estimation 111
8.2 Probability Distributions 113
8.2.1 Probability Distribution Functions 113
8.2.2 The Normal Distribution 115
8.2.3 The Poisson Distribution 119
8.2.4 The Binomial Distribution* 120
8.2.5 The Boltzmann Distribution* 121
8.3 Outliers 124
8.4 Robust Estimation 126
9 Analysis of Significance 131
9.1 Confidence Intervals 131
9.2 Propagation of Error 136
9.3 Monte Carlo Simulation of Error 139
9.4 Significance of Difference 140
9.5 Distribution Testing* 144
10 Fitting 151
10.1 Method of Least Squares 151

10.1.1 Polynomial Fitting 151
10.1.2 Weighted Least Squares 154
10.1.3 Generalizations of the Least-Squares Method* 155
10.2 Fitting with Error in Both Variables 157
10.2.1 Uncontrolled Error in x 157
10.2.2 Controlled Error in x 160
10.3 Nonlinear Fitting 162
11 Quality of Fit 165
11.1 Confidence Intervals for Parameters 165
11.2 Confidence Band for a Calibration Line 168
11.3 Outliers and Leverage Points 171
11.4 Robust Fitting* 173
11.5 Model Testing 176
12 Experiment Design 181
12.1 Risk Assessment 181
12.2 Randomization 185
12.3 Multiple Comparisons 188
12.3.1 ANOVA* 189
12.3.2 Post-Hoc Tests* 191
12.4 Optimization* 195
Part III. Differential Equations
13 Examples of Differential Equations 203
13.1 Chemical Reaction Rates 203
13.2 Classical Mechanics 205
13.2.1 Newtonian Mechanics 205
13.2.2 Lagrangian and Hamiltonian Mechanics 208
13.2.3 Angular Momentum 211
13.3 Differentials in Thermodynamics 212
13.4 Transport Equations 213

14 Solving Differential Equations, I 217
14.1 Basic Concepts 217
14.2 The Superposition Principle 220
14.3 First-Order ODE's 222
14.4 Higher-Order ODE's 225
14.5 Partial Differential Equations 228
15 Solving Differential Equations, II 231
15.1 Numerical Solution 231
15.1.1 Basic Algorithms 231
15.1.2 The Leapfrog Method* 234
15.1.3 Systems of Differential Equations 235
15.2 Chemical Reaction Mechanisms 236
15.3 Approximation Methods 239
15.3.1 Taylor Series* 239
15.3.2 Perturbation Theory* 242
Part IV. Linear Algebra
16 Vector Spaces 247
16.1 Cartesian Coordinate Vectors 247
16.2 Sets 248
16.3 Groups 249
16.4 Vector Spaces 251
16.5 Functions as Vectors 252
16.6 Hilbert Spaces 253
16.7 Basis Sets 256
17 Spaces of Functions 261
17.1 Orthogonal Polynomials 261
17.2 Function Resolution 267
17.3 Fourier Series 270
17.4 Spherical Harmonics 275

18 Matrices 279
18.1 Matrix Representation of Operators 279
18.2 Matrix Algebra 282
18.3 Matrix Operations 284
18.4 Pseudoinverse* 286
18.5 Determinants 288
18.6 Orthogonal and Unitary Matrices 290
18.7 Simultaneous Linear Equations 292
19 Eigenvalue Equations 297
19.1 Matrix Eigenvalue Equations 297
19.2 Matrix Diagonalization 301
19.3 Differential Eigenvalue Equations 305
19.4 Hermitian Operators 306
19.5 The Variational Principle* 309
20 Schrödinger's Equation 313
20.1 Quantum Mechanics 313
20.1.1 Quantum Mechanical Operators 313
20.1.2 The Wavefunction 316
20.1.3 The Basic Postulates* 317
20.2 Atoms and Molecules 319
20.3 The One-Electron Atom 321
20.3.1 Orbitals 321
20.3.2 The Radial Equation* 323
20.4 Hybrid Orbitals 325
20.5 Antisymmetry* 327
20.6 Molecular Orbitals* 329
21 Fourier Analysis 333
21.1 The Fourier Transform 333
21.2 Spectral Line Shapes* 336

21.3 Discrete Fourier Transform* 339
21.4 Signal Processing 342
21.4.1 Noise Filtering* 342
21.4.2 Convolution* 345
A Computer Programs 351
A.1 Robust Estimators 351
A.2 FREML 352
A.3 Nelder-Mead Simplex Optimization 352
B Answers to Selected Exercises 355
C Bibliography 367
Index 373
Preface
This is an intermediate level post-calculus text on mathematical and statisti-
cal methods, directed toward the needs of chemists. It has developed out of a
course that I teach at the University of Massachusetts Dartmouth for third-
year undergraduate chemistry majors and, with additional assignments, for
chemistry graduate students. However, I have designed the book to also serve
as a supplementary text to accompany undergraduate physical and analyti-
cal chemistry courses and as a resource for individual study by students and
professionals in all subfields of chemistry and in related fields such as envi-
ronmental science, geochemistry, chemical engineering, and chemical physics.
I expect the reader to have had one year of physics, at least one year of
chemistry, and at least one year of calculus at the university level. While
many of the examples are taken from topics treated in upper-level physical
and analytical chemistry courses, the presentation is sufficiently self contained
that almost all the material can be understood without training in chemistry
beyond a first-year general chemistry course.
Mathematics courses beyond calculus are no longer a standard part of
the chemistry curriculum in the United States. This is despite the fact that
advanced mathematical and statistical methods are steadily becoming more

and more pervasive in the chemistry literature. Methods of physical chemistry,
such as quantum chemistry and spectroscopy, have become routine tools in
all subfields of chemistry, and developments in statistical theory have raised
the level of mathematical sophistication expected for analytical chemists. This
book is intended to bridge the gap from the point at which calculus courses end
to the level of mathematics needed to understand the physical and analytical
chemistry professional literature.
Even in the old days, when a chemistry degree required more formal math-
ematics training than today, there was a mismatch between the intermediate-
level mathematics taught by mathematicians (in the one or two additional
math courses that could be fit into the crowded undergraduate chemistry
curriculum) and the kinds of mathematical methods relevant to chemists. In-
deed, to cover all the topics included in this book, a student would likely
have needed to take separate courses in linear algebra, differential equations,
numerical methods, statistics, classical mechanics, and quantum mechanics.
Condensing six semesters of courses into just one limits the depth of cov-
erage, but it has the advantage of focusing attention on those ideas and tech-
niques most likely to be encountered by chemists. In a work of such breadth
yet of such relatively short length it is impossible to provide rigorous proofs
of all results, but I have tried to provide enough explanation of the logic
and underlying strategies of the methods to make them at least intuitively
reasonable. An annotated bibliography is provided to assist the reader in-
terested in additional detail. Throughout the book there are sections and
examples marked with an asterisk (*) to indicate an advanced or specialized
topic.
These starred sections can be skipped without loss of continuity.
Part I provides a review of calculus. The first four chapters provide a

brief overview of elementary calculus while the next three chapters treat, in
relatively more detail, topics that tend to be shortchanged in a typical intro-
ductory calculus course: numerical methods, complex numbers, and Taylor
series.
Parts II (Statistics), III (Differential Equations), and IV (Linear Al-
gebra) can for the most part be read in any order. The only exceptions are
some of the starred sections, and most of Chapter 20 (Schrödinger's Equa-
tion),
which draws significantly on Part III as well as Part IV. The treatment
of statistics is somewhat novel for a presentation at this level in that significant
use is made of Monte Carlo simulation of random error. Also, an emphasis
is placed on robust methods of estimation. Most chemists are unaware of
this relatively new development in statistical theory that allows for a more
satisfactory treatment of outliers than does the more familiar Q-test.
Exercises are included with each chapter, and answers to many of them
are provided in an appendix. Many of the exercises require the use of a
computer algebra system. The convenience and power of modern computer
algebra software systems is such that they have become an invaluable tool
for physical scientists. However, considering that there are various different
software systems in use, each with its own distinctive syntax and its own
enthusiastic corps of users, I have been reluctant to make the main body of
the text too dependent on computer algebra examples. Occasionally, when
discussing topics such as statistical estimation, Monte Carlo simulation, or
Fourier transform that particularly require the use of a computer, I have
presented examples in Mathematica. I apologize to users of other systems,
but I trust you will be able to translate to your system of choice without too
much trouble.
I thank my students at UMass Dartmouth who have been subjected to
earlier versions of these chapters over the past several years. Their comments
(and complaints) have significantly shaped the final result. I thank vari-

ous friends and colleagues who have suggested topics to include and/or have
read and commented on parts of the manuscript—in particular, Dr. Steven
Adler-Golden, Professor Bernice Auslander, Professor Gerald Manning, and
Professor Michele Mandrioli. Also, I gratefully acknowledge the efforts of
the anonymous reviewers of the original proposal to Wiley. Their insightful
and thorough critiques were extremely helpful. I have followed almost all of
their suggestions. Finally, I thank my wife Betsy Martin for her patience and
wisdom.
DAVID Z. GOODSON
Newton, Massachusetts
May, 2010
List of Examples
1.1 Contrasting the concepts of
function and operator.
1.2 Numerical approximation of a
derivative.
1.3 The derivative of x².
1.4 The chain rule.
1.5 Differential of Gibbs free energy of
reaction.
1.6 Demonstration of the triple
product rule.
1.7 Integrals of x.
1.8 Integration by parts.
1.9 Dummy variables.
1.10 The critical temperature.
1.11 A saddle point.
2.1 Derivation of a derivative formula.

2.2 The cube root of -125.
2.3 Noninteger powers.
2.4 Solve φ = arctan(−1).
2.5 Integral representation of the
gamma function.
2.6 The kinetic molecular theory of
gases.
3.1 Kinetic energy in spherical polar
coordinates.
3.2 Center of mass of a diatomic
molecule.
3.3 Center of mass of a planar
molecule.
3.4 Coordinates for a bent triatomic
molecule.
3.5 The triple point.
3.6 Number of degrees of freedom for a
mixture of liquids.
3.7 Extrema of a two-coordinate func-
tion on a circle: Using the con-
straint to reduce the number of
degrees of freedom.
3.8 Extrema of a two-coordinate func-
tion on a circle: Using the method
of undetermined multipliers.
4.1 Integrals involving linear
polynomials.
4.2 Integral of reciprocal of a product
of linear polynomials.
4.3 An integral involving a product of

an exponential and an algebraic
function.
4.4 Integration by parts with change of
variable.
4.5 Cauchy principal value.
4.6 A divergent integral.
4.7 Another example of a Cauchy
principal value.
4.8 Quantum mechanical applications
of the Dirac delta function.
5.1 Cubic splines algorithm.
5.2 Derivatives of spectra.
5.3 A simple random number
generator.
5.4 Monte Carlo integration.
5.5 Using Brent's method to determine
the bond distance of the nitrogen
molecule.
6.1 Real and imaginary parts.
6.2 Calculate (2 + 3i)².
6.3 Absolute value of complex numbers.
6.4 Real roots.
6.5 Complex numbers of unit length.
6.6 Calculating a noninteger power.
6.7 Integrals involving circular
functions.
6.8 Calculate the logarithm of 7 + 4i.
6.9 Residues of poles.

6.10 Applying the residue theorem.
7.1 Taylor series related to (1 − x)⁻¹.
7.2 Taylor series of √(1 + x).
7.3 Taylor series related to eˣ.
7.4 Multiplication of Taylor series.
7.5 Expanding the expansion variable.
7.6 Multivariate Taylor series.
7.7 Laurent series.
7.8 Expansion about infinity.
7.9 Stirling's formula as an expansion
about infinity.
7.10 Comparison of extrapolation and
interpolation.
7.11 Simplifying a functional form.
7.12 Harmonic approximation for
diatomic potential energy.
7.13 Buffers.
7.14 Harmonic-oscillator partition
function.
7.15 Exponential of the first-derivative
operator.
7.16 Padé approximant.

8.1 Mean and median.
8.2 Expectation value of a function.
8.3 An illustration of the central limit
theorem.
8.4 Radioactive decay probability.
8.5 Computer simulation of data
samples.
8.6 The Q-test.
8.7 Determining the breakdown point.
8.8 Median absolute deviation.
8.9 Huber estimators.
8.10 Breakdown of Huber estimation.
9.1 Solving for z_α/2.
9.2 Solving for a.
9.3 A 95% confidence interval.
9.4 Using σ to estimate σ.
9.5 Standard error.
9.6 Rules for significant figures.
9.7 Monte Carlo determination of 95%
confidence interval of the mean.
9.8 Bootstrap resampling.
9.9 Testing significance of difference.
9.10 Monte Carlo test of significance
of difference.
9.11 Histogram of a normally distri-
buted data set.
9.12 Probability plots.

9.13 Shapiro-Wilk test.
10.1 Fitting with a straight line.
10.2 Experimental determination of a
reaction rate law.
10.3 Exponential fit.
10.4 Effect of error assumption on
least-squares fit.
10.5 Controlled vs. uncontrolled
variables.
10.6 Enzyme kinetics: The Eadie-
Hofstee plot.
10.7 Linearization.
10.8 Dose-response curve.
11.1 Designing an optimal procedure
for estimating an unknown
concentration.
11.2 Least median of squares as point
estimation
method.
11.3 Algorithm for LMS point
estimation.
11.4 LMS straight-line fitting.
11.5 Choosing between models.
12.1 Type II error for one-way
comparison with a control.
12.2 Multiple comparisons.
12.3 Contour plot of a chemical
synthesis.
12.4 Optimization using the
Nelder-Mead simplex algorithm.

12.5 Polishing the optimization with
local modeling.
13.1 Empirical determination of a
reaction rate.
13.2 Expressing the rate law in terms
of the extent of reaction.
13.3 A free particle.
13.4 Lagrange's equation in one dimension.
13.5 Hamilton's equations in one
dimension.
13.6 Rigid-body rotation.
13.7 Water pollution.
13.8 Groundwater flow.
13.9 Solute transport.
14.1 Solutions to the differential
equation of the exponential.
14.2 Constant of integration for a
reaction rate law.
14.3 Constants of integration for a
trajectory.
14.4 Linear superpositions of p orbitals.
14.5 Integrated rate laws.
14.6 Classical mechanical harmonic
oscillator.
14.7 Separation of variables in Fick's
second law.

15.1 Euler's
method.
15.2 Coupled differential equations for
a reaction mechanism.
15.3 Steady-state analysis of a
reaction mechanism.
15.4 Taylor-series integration of rate
laws.
15.5 Perturbation theory of harmonic
oscillator with friction.
16.1 A molecular symmetry group.
16.2 Some function spaces that qualify
as vector spaces.
16.3 A function space that is not a
vector space.
16.4 The dot product qualifies as an
inner product.
16.5 An inner product for functions.
16.6 Linear dependence.
16.7 Bases for R³.
16.8 A basis for P∞.
16.9 Using inner products to deter-
mine coordinates.
16.10 Vector resolution in R³.
17.1 Resolution of a polynomial.
17.2 Chebyshev approximation of a

discontinuous function.
17.3 Fourier analysis over an arbi-
trary range.
17.4 Solute transport boundary
conditions.
18.1 Rotation in xy-plane.
18.2 Moment of inertia tensor.
18.3 The product of a 4 × 2 matrix and a 2 × 3 matrix is 4 × 3.
18.4 Transpose of a sum.
18.5 Inverse of a square matrix.
18.6 Inverse of a narrow matrix.
18.7 The method of least squares as a
matrix computation.
18.8 Determinants.
18.9 The determinant of the two-
dimensional rotation matrix.
18.10 Linear equations with no unique
solution.
19.1 A 2 x 2 matrix eigenvalue
equation.
19.2 Characteristic polynomial from a
determinant.
19.3 Eigenvalues of similar matrices.
19.4 Rigid-body moments of inertia.
19.5 Principal axes of rotation for
formyl chloride.
19.6 Functions of Hermitian matrices.
19.7 Quantum mechanical particle on
a ring.

19.8 Quantum mechanical harmonic
oscillator.
19.9 Matrix formulation of the
variational principle for a basis
of dimension 2.
20.1 Constants of motion for a free
particle.
20.2 Calculating an expectation value.
20.3 Antisymmetry of electron
exchange.
20.4 Slater determinant for helium.
21.1 Fourier analysis of a wave packet.
21.2 Discrete Fourier transform of a
Lorentzian signal.
21.3 Savitzky-Golay filtering.
21.4 Time-domain filtering.
21.5 Simultaneous noise filtering and
resolution of overlapping peaks.
Greek Alphabet
Letters    Name       Transliteration
Α, α       alpha      a
Β, β       beta       b
Γ, γ       gamma      g
Δ, δ       delta      d
Ε, ε       epsilon    e
Ζ, ζ       zeta       z
Η, η       eta        e
Θ, θ       theta      th
Ι, ι       iota       i
Κ, κ       kappa      k
Λ, λ       lambda     l
Μ, μ       mu         m
Ν, ν       nu         n
Ξ, ξ       xi         x
Ο, ο       omicron    o
Π, π       pi         p
Ρ, ρ       rho        r
Σ, σ, ς    sigma      s
Τ, τ       tau        t
Υ, υ       upsilon    u
Φ, φ       phi        ph
Χ, χ       chi        kh
Ψ, ψ       psi        ps
Ω, ω       omega      o
Part I
Calculus
1. Functions: General Properties
2. Functions: Examples
3. Coordinate Systems

4. Integration
5. Numerical Methods
6. Complex Numbers
7. Extrapolation
Chapter 1
Functions: General Properties
This chapter provides a brief review of some basic ideas and terminology from calculus.

1.1 Mappings

A function is a mapping of some given number into another number. The function f(x) = x², for example, maps the number 3 into the number 9, 3 → 9. The function is a rule that indicates the destination of the mapping. An operator is a mapping of a function into another function.
Example 1.1. Contrasting the concepts of function and operator. The operator d/dx maps f(x) = x² into f′(x) = 2x, x² → 2x. The first-derivative function f′(x) = 2x applied, for example, to the number 3 gives 3 → 6. In contrast, the operator d/dx applied to the number 3 gives 3 → 0, as it treats "3" as a function f(x) = 3 and "0" as a function f(x) = 0.
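The distinction can also be seen computationally. The short sketch below is an editorial illustration in Python (the computer examples in this book itself use Mathematica): a function acts on numbers, while an operator, here a finite-difference stand-in for d/dx, acts on a function and returns a new function. The names f and derivative_operator are arbitrary.

    # Illustrative sketch (not from the text): a function maps numbers to numbers,
    # while an operator maps a function into another function.

    def f(x):
        return x**2              # the function f(x) = x^2: 3 -> 9

    def derivative_operator(func, h=1e-6):
        """Return a new function that approximates func'(x) by a finite difference."""
        def fprime(x):
            return (func(x + h) - func(x)) / h
        return fprime            # the operator d/dx: f -> f'

    fprime = derivative_operator(f)                # maps x^2 into (approximately) 2x
    print(f(3))                                    # 9
    print(fprime(3))                               # approximately 6
    print(derivative_operator(lambda x: 3.0)(3))   # the constant function 3 maps to 0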
In principle, a mapping can have an inverse, which undoes its effect. Suppose g is the inverse of f. Then
    g(f(x)) = x.    (1.1)
For the example f(x) = x² we have the mappings 3 → 9 → 3. The effect of performing a mapping and then performing its inverse mapping is to map the value of x back to itself. For the function x² the inverse is the square root function, g(y) = √y. To prove this, we simply note that if we let y be the result of the mapping f (that is, y = x²), then
    g(f(x)) = √(x²) = x.
Graphs of x² and √y are compared in Fig. 1.1. Note that the graph of √y can be obtained by reflecting¹ the graph of x² through the diagonal line y = x.

¹The reflection of a point through a line is a mapping to the point on the opposite side such that the new point is the same distance from the line as was the original point.
Figure 1.1: Graph of y = x² and its inverse, √y. Reflection about the dashed line (y = x) interchanges the function and its inverse.
Fig. 1.1 illustrates an interesting fact: An inverse mapping can in some cases be multiple valued. x² maps 2 to 4, but it also maps −2 to 4. The mapping f in this case is unique, in the sense that we can say with certainty what value of f(x) corresponds to any value x. The inverse mapping g in this case is not unique; given y = 4, g could map this to +2 or to −2. In Fig. 1.1, values of the variable y for y > 0 each correspond to two different values of √y. This function has two branches. On one branch, g(y) = |√y|. On the other, g(y) = −|√y|.
The inverse of f(u) is designated by the symbol f⁻¹(u). This can be confusing. Often, the indication of the variable, "(u)," is omitted to make the notation less cumbersome. Then f⁻¹ can be the inverse, f⁻¹(u), or the reciprocal, f(u)⁻¹ = 1/f(u). Usually these are not equivalent. If f(u) = u², the inverse is f⁻¹ = f⁻¹(u) = √u while the reciprocal is f⁻¹ = f(u)⁻¹ = u⁻². Which meaning is intended must be determined from the context.
1.2 Differentials and Derivatives
A function f(x) is said to be continuous at a specified point x₀ if the limit x → x₀ of f(x) is finite and has the same value whether it is approached from one direction or the other. Calculus is the study of continuous change. It was developed by Newton² to describe the motions of objects in response to change in time. However, as we will see in this book, its applications are much broader.

²English alchemist, physicist, and mathematician Isaac Newton (1642-1727). Calculus was also developed, independently and almost simultaneously, by the German philosopher, mathematician, poet, lawyer, and alchemist Gottfried Wilhelm von Leibniz (1646-1716).

The basic tool of calculus is the differential, an infinitesimal change in a variable or function, indicated by prefixing a "d" to the symbol for the quantity that is changing. If x is changed to x + dx, where dx is "infinitesimally small," then f(x) changes to f + df in response. The formal definition of the differential of f is
    df = lim_{Δx→0} [f(x + Δx) − f(x)].    (1.2)
This is usually written
    df = f(x + dx) − f(x),    (1.3)

where f(x + dx) − f(x) is an abbreviation for the right-hand side of Eq. (1.2). The basic idea of differential calculus is that the response to an infinitesimal change is linear. In other words, df is proportional to dx; that is,
    df = f′ dx,    (1.4)
where the proportionality factor, f′, is called the derivative of f. Solving Eq. (1.4) for f′, we obtain f′ = df/dx. We now have three different notations for the derivative,
    f′,   df/dx,   (d/dx) f,
all of which mean the same thing. The choice of notation is a matter of convenience. f′ is very concise and allows for convenient indication of the function's variable, for example, f′(3). The fractional notation df/dx is particularly convenient for calculations in which this derivative is expressed in terms of other derivatives. However, to indicate the function's variable requires the awkward notation (df/dx)|ₓ₌₃. The operator notation (d/dx) f is commonly used in advanced mathematics as it can simplify theoretical analyses. The operation of calculating a derivative is called differentiation.³

It is important not to confuse the concepts of derivative and differential. The derivative is a number, describing the rate of change of the function. In contrast, the differential has no numerical value. It is a theoretical construct that describes the smallest imaginable amount of change, smaller in magnitude than any number yet not quite zero. The usefulness of the differential is in mathematical derivations. The key idea is that while the numerical values of dx and df are undefined, their ratio df/dx can have a defined value.⁴
Example 1.2. Numerical approximation of a derivative. Consider the derivative of f(x) = x² at the point x = 3. Let us approximate dx with the numerical value 0.01. Then⁵
    df ≈ (x + 0.01)² − x² = (3.01)² − 3² = 0.0601,
and f′(3) = df/dx ≈ 0.0601/0.01 = 6.01. This is quite close to the exact value f′(3) = 6 that we obtain from the analytical formula f′(x) = 2x.
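The arithmetic of Example 1.2 is easy to reproduce on a computer. The fragment below is an illustrative Python sketch, not part of the original text; it repeats the finite-difference estimate and shows the ratio approaching the exact value 6 as dx is made smaller.

    # Finite-difference estimate of f'(3) for f(x) = x^2, as in Example 1.2.
    f = lambda x: x**2
    x = 3.0
    for dx in (0.01, 0.001, 0.0001):
        df = f(x + dx) - f(x)
        print(dx, df / dx)   # 6.01, 6.001, 6.0001 -> approaching the exact value 6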
Example 1.2 suggests that the derivative can be evaluated as a limit in which a finite change in the variable becomes infinitesimal. Let Δx be some finite but small change in x. Then
    df/dx = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx,    (1.5)
which can be taken as the definition of a derivative. This equation can be used to derive rules for calculating derivatives of analytic expressions.

³Perhaps this is why students new to the subject so often confuse the words "differential" and "derivative"!
⁴There is no guarantee the ratio has a defined value. This is discussed in Section 1.5.
⁵The symbol "≈" means "approximately equal to."
Example 1.3. The derivative of x².
    lim_{Δx→0} [(x + Δx)² − x²] / Δx = lim_{Δx→0} [x² + 2xΔx + (Δx)² − x²] / Δx = lim_{Δx→0} (2x + Δx) = 2x.
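A computer algebra system can evaluate the limit of Example 1.3 symbolically. The following minimal sketch uses the SymPy library in Python; it is an editorial translation, since the computer-algebra examples in this book are written in Mathematica.

    # Symbolic evaluation of the limit that defines the derivative of x^2.
    from sympy import symbols, limit

    x, dx = symbols('x dx')
    difference_quotient = ((x + dx)**2 - x**2) / dx
    print(limit(difference_quotient, dx, 0))   # prints 2*x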
There are useful theorems concerning differentials and derivatives of combinations of two functions. Let f(x) and g(x) be two arbitrary functions.

Theorem 1.2.1. For the sum of two functions:
    d(f + g) = df + dg,   d/dx (f + g) = df/dx + dg/dx.    (1.6)

Theorem 1.2.2. For the product of two functions:
    d(fg) = f dg + g df,   d/dx (fg) = f dg/dx + g df/dx.    (1.7)

Theorem 1.2.3. For a function of a function, f(g(x)):
    df(g) = (df/dg) dg,   df/dx = (dg/dx)(df/dg).    (1.8)

Theorem 1.2.3 is called the chain rule.
Example 1.4. The chain rule. Consider f = g⁻⁵. The derivative df/dg is (−5)g⁻⁶. Suppose that g = 1 + x². Then dg/dx = 2x and, according to the chain rule,
    df/dx = d/dx (1 + x²)⁻⁵ = (dg/dx)(df/dg) = (2x)(−5)g⁻⁶ = −10x/(1 + x²)⁶.
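As a quick numerical check of Example 1.4 (an editorial addition, not from the text), the chain-rule result −10x/(1 + x²)⁶ can be compared with a central finite-difference estimate at an arbitrarily chosen point:

    # Numerical check of df/dx = -10x/(1 + x^2)^6 from Example 1.4.
    f = lambda x: (1 + x**2)**(-5)
    chain_rule = lambda x: -10*x / (1 + x**2)**6

    x, h = 1.5, 1e-6
    finite_difference = (f(x + h) - f(x - h)) / (2*h)   # central difference
    print(chain_rule(x), finite_difference)             # the two values agree closely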

Given that df = f′ dx, it follows that dx = df/f′. This is true as long as f′ is not equal to zero. Dividing each side by df, we obtain the derivative of x as a function of f:

Theorem 1.2.4. For all x such that f′(x) ≠ 0,
    dx/df = 1 / (df/dx).
It is usually the case that a function responds more strongly to a change in its variables in some regions than in others. We expect in general that f′ is also a function, and it can be of interest to consider the rate of change of the derivative of a derivative,
    f″(x) = (d/dx) f′(x),   f‴(x) = (d/dx) f″(x),
which defines the second derivative, the third derivative, etc. The nth derivative is defined as
    f⁽ⁿ⁾(x) = (d/dx) f⁽ⁿ⁻¹⁾(x).    (1.9)
The superscript indicates the number of primes (e.g., f⁽³⁾ = f‴).⁶ We can also use the notations
    f⁽ⁿ⁾(x) = dⁿf/dxⁿ = (dⁿ/dxⁿ) f.    (1.10)
1.3 Partial Derivatives
The extension of the concepts of differentials and derivatives to multivariable functions is straightforward, but we must take into account that the various variables can be varied independently of each other. Consider a function f(x, y) of two variables. The response to the change (x, y) → (x + dx, y + dy) is f → f + df, where
    df = (∂f/∂x) dx + (∂f/∂y) dy.    (1.11)
The proportionality factors ∂f/∂x and ∂f/∂y are called partial derivatives with respect to x and y. ∂f/∂x of f(x, y) is calculated in the same way as df/dx of f(x) except that y is treated as a constant. Eq. (1.5) is modified as follows:
    ∂f/∂x = lim_{Δx→0} [f(x + Δx, y) − f(x, y)] / Δx.    (1.12)
It is a common practice to add subscripts to derivatives and differentials to indicate any variables being held constant. For example, the partial derivative given by Eq. (1.12) can be designated as (∂f/∂x)_y. Eq. (1.11) can be written
    df = (∂f/∂x)_y dx + (∂f/∂y)_x dy.    (1.13)
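Eq. (1.12) translates directly into a numerical recipe: difference in one variable while the others are held fixed. The sketch below is an illustrative Python fragment (not part of the original text) for the arbitrarily chosen test function f(x, y) = x²y.

    # Finite-difference partial derivatives of f(x, y) = x^2 * y, following Eq. (1.12).
    f = lambda x, y: x**2 * y
    x, y, h = 2.0, 3.0, 1e-6

    df_dx = (f(x + h, y) - f(x, y)) / h   # y held constant; exact value is 2xy = 12
    df_dy = (f(x, y + h) - f(x, y)) / h   # x held constant; exact value is x^2 = 4
    print(df_dx, df_dy)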
Example 1.5. Differential of Gibbs free energy of reaction. Consider a chemical reaction A → B. Whether the reaction can occur spontaneously is determined by the sign of the differential dG of the Gibbs free energy of the mixture of A and B, which is a function G(T, p, n_A, n_B). (The reaction is spontaneous if dG < 0.) The variables are temperature, pressure, and numbers of moles of A and B. dG can be written
    dG = (∂G/∂T)_(p,n_A,n_B) dT + (∂G/∂p)_(T,n_A,n_B) dp + (∂G/∂n_A)_(T,p,n_B) dn_A + (∂G/∂n_B)_(T,p,n_A) dn_B.
⁶The parentheses are included in the superscript to distinguish from f raised to a power. If f = x², then f⁽²⁾ = d²f/dx² = 2 while f² = (x²)(x²) = x⁴.
For a process in which y is held constant, we get from Eq. (1.13)
    df_y = (∂f/∂x)_y dx_y + (∂f/∂y)_x dy_y = (∂f/∂x)_y dx_y,
because dy_y is, by definition, zero. It follows that
    (∂f/∂x)_y = df_y/dx_y.
df_y/dx_y is an alternative notation for the partial derivative.
With more than one variable, there is more than one kind of second derivative. The change in ∂f/∂x in response to a change in y or x, respectively, is described by
    ∂²f/∂y∂x = (∂/∂y)(∂f/∂x),   ∂²f/∂x² = (∂/∂x)(∂f/∂x).
The order in which partial derivatives are evaluated has no effect:

Theorem 1.3.1. For a function f of two variables x and y, (∂/∂y)(∂f/∂x) = (∂/∂x)(∂f/∂y).
Consider a process in which x and y are changed in such a way that the value of f remains constant. Then df = 0, which implies that
    0 = (∂f/∂x)_y dx_f + (∂f/∂y)_x dy_f.
Solving for dy_f we obtain dy_f = −[(∂f/∂x)_y / (∂f/∂y)_x] dx_f. Therefore,
    (∂y/∂x)_f = dy_f/dx_f = −(∂f/∂x)_y / (∂f/∂y)_x = −(∂f/∂x)_y (∂y/∂f)_x.    (1.16)
This is usually written in the following more easily remembered form:

Theorem 1.3.2. For a function f of two variables x and y,
    (∂y/∂x)_f (∂x/∂f)_y (∂f/∂y)_x = −1.    (1.17)

This is called the triple product rule. Note the minus sign!
Example 1.6. Demonstration of the triple product rule. The physical state of a substance can be described in terms of state variables pressure, molar volume, and temperature. For an ideal gas, these variables are related to each other by the ideal-gas equation of state,
    pV_m/RT = 1,    (1.18)
where R is a constant. Let us use this equation to demonstrate Theorem 1.3.2:
    ∂V_m/∂T = R/p,   ∂T/∂p = V_m/R,   ∂p/∂V_m = −RT/V_m²,
    (∂V_m/∂T)_p (∂T/∂p)_(V_m) (∂p/∂V_m)_T = (R/p)(V_m/R)(−RT/V_m²) = −RT/(pV_m),
which agrees with Eq. (1.18).
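The same demonstration can be carried out numerically. The following Python sketch is an editorial illustration (the state point and step sizes are arbitrary assumptions): it differences the ideal-gas relations V_m = RT/p, T = pV_m/R, and p = RT/V_m and confirms that the product of the three partial derivatives is close to −1.

    # Numerical check of the triple product rule for the ideal gas, as in Example 1.6.
    R = 8.314                  # gas constant, J/(mol K)
    p, Vm = 1.0e5, 0.024       # an arbitrary state (Pa, m^3/mol)
    T = p * Vm / R             # temperature fixed by the equation of state

    def partial(func, x0, h):
        """Central-difference derivative of func at x0."""
        return (func(x0 + h) - func(x0 - h)) / (2 * h)

    dVm_dT = partial(lambda T_: R * T_ / p,  T,  1e-6 * T)    # (dVm/dT) at constant p
    dT_dp  = partial(lambda p_: p_ * Vm / R, p,  1e-6 * p)    # (dT/dp) at constant Vm
    dp_dVm = partial(lambda V_: R * T / V_,  Vm, 1e-6 * Vm)   # (dp/dVm) at constant T

    print(dVm_dT * dT_dp * dp_dVm)   # prints a value very close to -1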
The subscript on the partial derivatives should be omitted only if it is obvious from context which variables are held constant. With multivariable functions there may be alternative ways to choose the variables. For example, the molar Gibbs free energy G_m of a pure substance depends on p, V_m, and T, but these three variables are related to each other by an equation of state, so that the values of any two of the variables determine the third. Thus, G_m is really a function of just two variables. We can choose whichever pair of variables (p, V_m), (p, T), or (V_m, T) is most useful for a given application. It is important in thermodynamics to indicate the variable held constant. (∂G_m/∂T)_p is not the same as (∂G_m/∂T)_(V_m).

Given three variables (x, y, u) and an equation such that one variable can be expressed in terms of the other two, we can derive a multivariable analog of the chain rule. Consider a process in which x and u are changed with y held constant. Let dy = 0 in Eq. (1.11) and then divide each side of the equation by du_y. Thus we obtain the following:
    (∂f/∂u)_y = (∂f/∂x)_y (∂x/∂u)_y.
This is valid only if the same variable is held constant in all three derivatives.

1.4 Integrals
The integral, indicated by the symbol ∫, is the operator that is the inverse of the derivative:
    ∫ (df/dx) dx = f + c.
The integral mapping is not unique. Because the derivative of a constant is zero, c can be any constant. c is called a constant of integration.
To be precise, the operator ∫ operates on differentials,
    ∫ df = f + c.    (1.20)
However, for a function f(x) we can substitute f′ dx for df, according to Eq. (1.4), which gives
    ∫ f′ dx = f(x) + c.    (1.21)
It is in this sense that the operator ∫ maps f′ to f + c. Eqs. (1.20) and (1.21) are examples of indefinite integrals. In contrast,
    ∫_{x₁}^{x₂} f′ dx = f(x₂) − f(x₁),    (1.22)
where x₁ and x₂ are specified values, is called a definite integral. The definite integral maps a function f′ into a constant while the indefinite integral maps f′ into another function. To solve an indefinite integral one needs additional information, in order to assign a value to c.

Figure 1.2: The value of the definite integral ∫_a^b g(x) dx is the sum of the areas under the curve but above the x-axis, minus the area below the x-axis but above the curve.
Example 1.7. Integrals of x. Consider the indefinite integral ∫ x dx. We seek a function f such that f′(x) = x. We know that the derivative of x² is 2x. Therefore, the derivative of ½x² is equal to x. Thus, ½x² is one solution for ∫ x dx. However, the derivative of ½x² + 1 is also equal to x, as is the derivative of ½x² + 1.5. The function ½x² + c for any constant c is an acceptable solution for the indefinite integral ∫ x dx.

Now consider the definite integral ∫_4^6 x dx. This is evaluated according to Eq. (1.22) using any acceptable solution for the indefinite integral. For example,
    ∫_4^6 x dx = ½(6²) − ½(4²) = 18 − 8 = 10,
or
    ∫_4^6 x dx = [½(6²) + 1] − [½(4²) + 1] = 19 − 9 = 10.
The solution for the definite integral is unique. The integration constant cancels out.
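The cancellation of the integration constant is easy to confirm on a computer. Here is a brief illustrative Python check (an editorial addition, not the book's code):

    # Eq. (1.22) with two different antiderivatives of x; the constant of integration cancels.
    F1 = lambda x: 0.5 * x**2          # one antiderivative of x
    F2 = lambda x: 0.5 * x**2 + 1.0    # another antiderivative, shifted by a constant
    print(F1(6) - F1(4))   # 10.0
    print(F2(6) - F2(4))   # 10.0 -- the same definite integral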
The definite integral is the kind of integral most commonly seen in science applications. A remarkable theorem, called the fundamental theorem of calculus,⁷ provides an alternative interpretation of what it represents:

Theorem 1.4.1. The definite integral ∫_a^b g(x) dx of a continuous function g is equal to the area under the graph of g from a to b.

This is illustrated in Fig. 1.2. If g is negative anywhere in the interval, then the area above the graph but below zero is counted as "negative area" and subtracted from the total. In Chapter 5 we will use this theorem to develop an important practical technique for evaluating definite integrals.

⁷This theorem is attributed to the Scottish mathematician James Gregory, who proved a special case of it in 1668. Soon after that, Newton proved the general statement.
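Theorem 1.4.1 also anticipates the numerical strategy developed in Chapter 5: approximate the area with many thin rectangles. A minimal illustrative Python sketch (an editorial addition; the midpoint rule used here is only one possible choice), applied to ∫_4^6 x dx:

    # Riemann-sum estimate of the area under g(x) = x between x = 4 and x = 6.
    g = lambda x: x
    a, b, n = 4.0, 6.0, 100000
    width = (b - a) / n
    area = sum(g(a + (i + 0.5) * width) for i in range(n)) * width   # midpoint rule
    print(area)   # very close to the exact value 10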
