
Instructor’s Manual
by Thomas H. Cormen
Clara Lee
Erica Lin
to Accompany
Introduction to Algorithms
Second Edition
by Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein
The MIT Press
Cambridge, Massachusetts London, England
McGraw-Hill Book Company
Boston Burr Ridge, IL Dubuque, IA Madison, WI
New York San Francisco St. Louis Montréal Toronto
Instructor’s Manual
by Thomas H. Cormen, Clara Lee, and Erica Lin
to Accompany
Introduction to Algorithms, Second Edition
by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
Published by The MIT Press and McGraw-Hill Higher Education, an imprint of The McGraw-Hill Companies,
Inc., 1221 Avenue of the Americas, New York, NY 10020. Copyright © 2002 by The Massachusetts Institute of
Technology and The McGraw-Hill Companies, Inc. All rights reserved.
No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database
or retrieval system, without the prior written consent of The MIT Press or The McGraw-Hill Companies, Inc., in-
cluding, but not limited to, network or other electronic storage or transmission, or broadcast for distance learning.
Contents
Revision History R-1


Preface P-1
Chapter 2: Getting Started
Lecture Notes 2-1
Solutions 2-16
Chapter 3: Growth of Functions
Lecture Notes 3-1
Solutions 3-7
Chapter 4: Recurrences
Lecture Notes 4-1
Solutions 4-8
Chapter 5: Probabilistic Analysis and Randomized Algorithms
Lecture Notes 5-1
Solutions 5-8
Chapter 6: Heapsort
Lecture Notes 6-1
Solutions 6-10
Chapter 7: Quicksort
Lecture Notes 7-1
Solutions 7-9
Chapter 8: Sorting in Linear Time
Lecture Notes 8-1
Solutions 8-9
Chapter 9: Medians and Order Statistics
Lecture Notes 9-1
Solutions 9-9
Chapter 11: Hash Tables
Lecture Notes 11-1
Solutions 11-16
Chapter 12: Binary Search Trees
Lecture Notes 12-1

Solutions 12-12
Chapter 13: Red-Black Trees
Lecture Notes 13-1
Solutions 13-13
Chapter 14: Augmenting Data Structures
Lecture Notes 14-1
Solutions 14-9
Chapter 15: Dynamic Programming
Lecture Notes 15-1
Solutions 15-19
Chapter 16: Greedy Algorithms
Lecture Notes 16-1
Solutions 16-9
Chapter 17: Amortized Analysis
Lecture Notes 17-1
Solutions 17-14
Chapter 21: Data Structures for Disjoint Sets
Lecture Notes 21-1
Solutions 21-6
Chapter 22: Elementary Graph Algorithms
Lecture Notes 22-1
Solutions 22-12
Chapter 23: Minimum Spanning Trees
Lecture Notes 23-1
Solutions 23-8
Chapter 24: Single-Source Shortest Paths
Lecture Notes 24-1
Solutions 24-13
Chapter 25: All-Pairs Shortest Paths

Lecture Notes 25-1
Solutions 25-8
Chapter 26: Maximum Flow
Lecture Notes 26-1
Solutions 26-15
Chapter 27: Sorting Networks
Lecture Notes 27-1
Solutions 27-8
Index I-1
Revision History
Revisions are listed by date rather than being numbered. Because this revision
history is part of each revision, the affected chapters always include the front matter
in addition to those listed below.

18 January 2005. Corrected an error in the transpose-symmetry properties.
Affected chapters: Chapter 3.

2 April 2004. Added solutions to Exercises 5.4-6, 11.3-5, 12.4-1, 16.4-2,
16.4-3, 21.3-4, 26.4-2, 26.4-3, and 26.4-6 and to Problems 12-3 and 17-4. Made
minor changes in the solutions to Problems 11-2 and 17-2. Affected chapters:
Chapters 5, 11, 12, 16, 17, 21, and 26; index.

7 January 2004. Corrected two minor typographical errors in the lecture notes
for the expected height of a randomly built binary search tree. Affected chap-
ters: Chapter 12.

23 July 2003. Updated the solution to Exercise 22.3-4(b) to adjust for a correc-
tion in the text. Affected chapters: Chapter 22; index.

23 June 2003. Added the link to the website for the clrscode package to the preface.

2 June 2003. Added the solution to Problem 24-6. Corrected solutions to Ex-
ercise 23.2-7 and Problem 26-4. Affected chapters: Chapters 23, 24, and 26;
index.

20 May 2003. Added solutions to Exercises 24.4-10 and 26.1-7. Affected
chapters: Chapters 24 and 26; index.

2 May 2003. Added solutions to Exercises 21.4-4, 21.4-5, 21.4-6, 22.1-6,
and 22.3-4. Corrected a minor typographical error in the Chapter 22 notes on
page 22-6. Affected chapters: Chapters 21 and 22; index.

28 April 2003. Added the solution to Exercise 16.1-2, corrected an error in
the first adjacency matrix example in the Chapter 22 notes, and made a minor
change to the accounting method analysis for dynamic tables in the Chapter 17
notes. Affected chapters: Chapters 16, 17, and 22; index.

10 April 2003. Corrected an error in the solution to Exercise 11.3-3. Affected
chapters: Chapter 11.

3 April 2003. Reversed the order of Exercises 14.2-3 and 14.3-3. Affected
chapters: Chapter 14; index.

2 April 2003. Corrected an error in the substitution method for recurrences on
page 4-4. Affected chapters: Chapter 4.

31 March 2003. Corrected a minor typographical error in the Chapter 8 notes
on page 8-3. Affected chapters: Chapter 8.


14 January 2003. Changed the exposition of indicator random variables in
the Chapter 5 notes to correct for an error in the text. Affected pages: 5-4
through 5-6. (The only content changes are on page 5-4; in pages 5-5 and 5-6
only pagination changes.) Affected chapters: Chapter 5.

14 January 2003. Corrected an error in the pseudocode for the solution to Ex-
ercise 2.2-2 on page 2-16. Affected chapters: Chapter 2.

7 October 2002. Corrected a typographical error in EUCLIDEAN-TSP on
page 15-23. Affected chapters: Chapter 15.

1 August 2002. Initial release.
Preface
This document is an instructor’s manual to accompany Introduction to Algorithms,
Second Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein. It is intended for use in a course on algorithms. You might
also find some of the material herein to be useful for a CS 2-style course in data
structures.
Unlike the instructor’s manual for the first edition of the text—which was organized
around the undergraduate algorithms course taught by Charles Leiserson at MIT
in Spring 1991—we have chosen to organize the manual for the second edition
according to chapters of the text. That is, for most chapters we have provided a
set of lecture notes and a set of exercise and problem solutions pertaining to the
chapter. This organization allows you to decide how to best use the material in the
manual in your own course.
We have not included lecture notes and solutions for every chapter, nor have we
included solutions for every exercise and problem within the chapters that we have
selected. We felt that Chapter 1 is too nontechnical to include here, and Chapter 10
consists of background material that often falls outside algorithms and data-structures
courses. We have also omitted the chapters that are not covered in the
courses that we teach: Chapters 18–20 and 28–35, as well as Appendices A–C;
future editions of this manual may include some of these chapters. There are two
reasons that we have not included solutions to all exercises and problems in the
selected chapters. First, writing up all these solutions would take a long time, and
we felt it more important to release this manual in as timely a fashion as possible.
Second, if we were to include all solutions, this manual would be longer than the
text itself!
We have numbered the pages in this manual using the format CC-PP, where CC
is a chapter number of the text and PP is the page number within that chapter’s
lecture notes and solutions. The PP numbers restart from 1 at the beginning of each
chapter’s lecture notes. We chose this form of page numbering so that if we add
or change solutions to exercises and problems, the only pages whose numbering is
affected are those for the solutions for that chapter. Moreover, if we add material
for currently uncovered chapters, the numbers of the existing pages will remain
unchanged.
The lecture notes
The lecture notes are based on three sources:

Some are from the first-edition manual, and so they correspond to Charles Leiserson’s
lectures in MIT’s undergraduate algorithms course, 6.046.

Some are from Tom Cormen’s lectures in Dartmouth College’s undergraduate
algorithms course, CS 25.

Some are written just for this manual.
You will find that the lecture notes are more informal than the text, as is appropriate
for a lecture situation. In some places, we have simplified the material for
lecture presentation or even omitted certain considerations. Some sections of the

text—usually starred—are omitted from the lecture notes. (We have included lec-
ture notes for one starred section: 12.4, on randomly built binary search trees,
which we cover in an optional CS 25 lecture.)
In several places in the lecture notes, we have included “asides” to the instruc-
tor. The asides are typeset in a slanted font and are enclosed in square brack-
ets.
[Here is an aside.]
Some of the asides suggest leaving certain material on the
board, since you will be coming back to it later. If you are projecting a presenta-
tion rather than writing on a blackboard or whiteboard, you might want to mark
slides containing this material so that you can easily come back to them later in the
lecture.
We have chosen not to indicate how long it takes to cover material, as the time nec-
essary to cover a topic depends on the instructor, the students, the class schedule,
and other variables.
There are two differences in how we write pseudocode in the lecture notes and the
text:

Lines are not numbered in the lecture notes. We find them inconvenient to
number when writing pseudocode on the board.

We avoid using the length attribute of an array. Instead, we pass the array
length as a parameter to the procedure. This change makes the pseudocode
more concise, as well as matching better with the description of what it does.
We have also minimized the use of shading in figures within lecture notes, since
drawing a figure with shading on a blackboard or whiteboard is difficult.
The solutions
The solutions are based on the same sources as the lecture notes. They are written
a bit more formally than the lecture notes, though a bit less formally than the text.
We do not number lines of pseudocode, but we do use the length attribute (on the

assumption that you will want your students to write pseudocode as it appears in
the text).
The index lists all the exercises and problems for which this manual provides solu-
tions, along with the number of the page on which each solution starts.
Asides appear in a handful of places throughout the solutions. Also, we are less
reluctant to use shading in figures within solutions, since these figures are more
likely to be reproduced than to be drawn on a board.
Source files
For several reasons, we are unable to publish or transmit source files for this manual.
We apologize for this inconvenience.
In June 2003, we made available a clrscode package for LaTeX 2ε. It enables
you to typeset pseudocode in the same way that we do. You can find this package
at [URL]. That site also includes documentation.
Reporting errors and suggestions
Undoubtedly, instructors will find errors in this manual. Please report errors by
sending email to [email address].
If you have a suggestion for an improvement to this manual, please feel free to
submit it via email to [email address].
As usual, if you find an error in the text itself, please verify that it has not already
been posted on the errata web page before you submit it. You can use the MIT
Press web site for the text, [URL], to locate the errata web page and to submit an
error report.
We thank you in advance for your assistance in correcting errors in both this manual

and the text.
Acknowledgments
This manual borrows heavily from the first-edition manual, which was written by
Julie Sussman, P.P.A. Julie did such a superb job on the first-edition manual, finding
numerous errors in the first-edition text in the process, that we were thrilled to
have her serve as technical copyeditor for the second-edition text. Charles Leiserson
also put in large amounts of time working with Julie on the first-edition manual.
The other three Introduction to Algorithms authors—Charles Leiserson, Ron
Rivest, and Cliff Stein—provided helpful comments and suggestions for solutions
to exercises and problems. Some of the solutions are modifications of those written
over the years by teaching assistants for algorithms courses at MIT and Dartmouth.
At this point, we do not know which TAs wrote which solutions, and so we simply
thank them collectively.
We also thank McGraw-Hill and our editors, Betsy Jones and Melinda Dougharty,
for moral and financial support. Thanks also to our MIT Press editor, Bob Prior,
and to David Jones of The MIT Press for help with TeX macros. Wayne Cripps,
John Konkle, and Tim Tregubov provided computer support at Dartmouth, and the
MIT sysadmins were Greg Shomo and Matt McKinnon. Phillip Meek of McGraw-Hill
helped us hook this manual into their web site.
THOMAS H. CORMEN
CLARA LEE
ERICA LIN
Hanover, New Hampshire
July 2002

Lecture Notes for Chapter 2:
Getting Started

Chapter 2 overview
Goals:

Start using frameworks for describing and analyzing algorithms.

Examine two algorithms for sorting: insertion sort and merge sort.

See how to describe algorithms in pseudocode.

Begin using asymptotic notation to express running-time analysis.

Learn the technique of “divide and conquer” in the context of merge sort.
Insertion sort
The sorting problem
Input: A sequence of n numbers a
1
, a
2
, ,a
n
.
Output: A permutation (reordering) a

1
, a

2
, ,a

n

 of the input sequence such
that a

1
≤ a

2
≤···≤a

n
.
The sequences are typically stored in arrays.
We also refer to the numbers as keys. Along with each key may be additional
information, known as satellite data.
[You might want to clarify that “satellite
data” does not necessarily come from a satellite!]
We will see several ways to solve the sorting problem. Each way will be expressed
as an algorithm: a well-defined computational procedure that takes some value, or
set of values, as input and produces some value, or set of values, as output.
Expressing algorithms
We express algorithms in whatever way is the clearest and most concise.
English is sometimes the best way.
When issues of control need to be made perfectly clear, we often use pseudocode.

Pseudocode is similar to C, C++, Pascal, and Java. If you know any of these
languages, you should be able to understand pseudocode.

Pseudocode is designed for expressing algorithms to humans. Software engineering
issues of data abstraction, modularity, and error handling are often ignored.

We sometimes embed English statements into pseudocode. Therefore, unlike
for “real” programming languages, we cannot create a compiler that translates
pseudocode to machine code.
Insertion sort
A good algorithm for sorting a small number of elements.
It works the way you might sort a hand of playing cards:

Start with an empty left hand and the cards face down on the table.

Then remove one card at a time from the table, and insert it into the correct
position in the left hand.

To find the correct position for a card, compare it with each of the cards already
in the hand, from right to left.

At all times, the cards held in the left hand are sorted, and these cards were
originally the top cards of the pile on the table.
Pseudocode: We use a procedure INSERTION-SORT.

• Takes as parameters an array A[1 .. n] and the length n of the array.
• As in Pascal, we use “..” to denote a range within an array.

[We usually use 1-origin indexing, as we do here. There are a few places in
later chapters where we use 0-origin indexing instead. If you are translating
pseudocode to C, C++, or Java, which use 0-origin indexing, you need to be
careful to get the indices right. One option is to adjust all index calculations
in the C, C++, or Java code to compensate. An easier option is, when using an
array A[1 .. n], to allocate the array to be one entry longer—A[0 .. n]—and just
don’t use the entry at index 0.]

[In the lecture notes, we indicate array lengths by parameters rather than by
using the length attribute that is used in the book. That saves us a line of
pseudocode each time. The solutions continue to use the length attribute.]

The array A is sorted in place: the numbers are rearranged within the array,
with at most a constant number outside the array at any time.
INSERTION-SORT(A, n)                                             cost  times
  for j ← 2 to n                                                 c_1   n
      do key ← A[j]                                              c_2   n − 1
         ✄ Insert A[j] into the sorted sequence A[1 .. j−1].     0     n − 1
         i ← j − 1                                               c_4   n − 1
         while i > 0 and A[i] > key                              c_5   Σ_{j=2}^{n} t_j
             do A[i + 1] ← A[i]                                  c_6   Σ_{j=2}^{n} (t_j − 1)
                i ← i − 1                                        c_7   Σ_{j=2}^{n} (t_j − 1)
         A[i + 1] ← key                                          c_8   n − 1

[Leave this on the board, but show only the pseudocode for now. We’ll put in the
“cost” and “times” columns later.]
Example: sorting the array ⟨5, 2, 4, 6, 1, 3⟩. Each row shows the array at the start
of the iteration for the indicated value of j, with the last row showing the result:

  j = 2:   5 2 4 6 1 3
  j = 3:   2 5 4 6 1 3
  j = 4:   2 4 5 6 1 3
  j = 5:   2 4 5 6 1 3
  j = 6:   1 2 4 5 6 3
  sorted:  1 2 3 4 5 6

[Read this figure row by row. Each part shows what happens for a particular iteration
with the value of j indicated. j indexes the “current card” being inserted into
the hand. Elements to the left of A[j] that are greater than A[j] move one position
to the right, and A[j] moves into the evacuated position. The heavy vertical lines
separate the part of the array in which an iteration works—A[1 .. j]—from the part
of the array that is unaffected by this iteration—A[j+1 .. n]. The last part of the
figure shows the final sorted array.]
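A concrete, runnable illustration (our own sketch, not from the book): a direct
translation of INSERTION-SORT into Python, using 0-origin indexing as discussed
in the aside above.

    def insertion_sort(a):
        """Sort the list a in place (0-origin: a[0 .. n-1])."""
        for j in range(1, len(a)):        # pseudocode's j = 2 to n
            key = a[j]
            # Insert a[j] into the sorted sequence a[0 .. j-1].
            i = j - 1
            while i >= 0 and a[i] > key:  # pseudocode's i > 0, shifted by one
                a[i + 1] = a[i]           # shift the larger element right
                i -= 1
            a[i + 1] = key

    a = [5, 2, 4, 6, 1, 3]
    insertion_sort(a)
    print(a)                              # prints [1, 2, 3, 4, 5, 6]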
Correctness
We often use a loop invariant to help us understand why an algorithm gives the
correct answer. Here’s the loop invariant for INSERTION-SORT:
Loop invariant: At the start of each iteration of the “outer” for loop—the
loop indexed by j—the subarray A[1 .. j−1] consists of the elements originally
in A[1 .. j−1] but in sorted order.
To use a loop invariant to prove correctness, we must show three things about it:
Initialization: It is true prior to the first iteration of the loop.
Maintenance: If it is true before an iteration of the loop, it remains true before the
next iteration.
Termination: When the loop terminates, the invariant—usually along with the
reason that the loop terminated—gives us a useful property that helps show that
the algorithm is correct.
Using loop invariants is like mathematical induction:

To prove that a property holds, you prove a base case and an inductive step.

Showing that the invariant holds before the first iteration is like the base case.

Showing that the invariant holds from iteration to iteration is like the inductive
step.


The termination part differs from the usual use of mathematical induction, in
which the inductive step is used infinitely. We stop the “induction” when the
loop terminates.

We can show the three parts in any order.
For insertion sort:
Initialization: Just before the first iteration, j = 2. The subarray A[1 .. j−1]
is the single element A[1], which is the element originally in A[1], and it is
trivially sorted.
Maintenance: To be precise, we would need to state and prove a loop invariant
for the “inner” while loop. Rather than getting bogged down in another loop
invariant, we instead note that the body of the inner while loop works by moving
A[j−1], A[j−2], A[j−3], and so on, by one position to the right until the
proper position for key (which has the value that started out in A[j]) is found.
At that point, the value of key is placed into this position.
Termination: The outer for loop ends when j > n; this occurs when j = n + 1.
Therefore, j − 1 = n. Plugging n in for j − 1 in the loop invariant, the subarray
A[1 .. n] consists of the elements originally in A[1 .. n] but in sorted order. In
other words, the entire array is sorted!
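To demonstrate the invariant concretely, one can instrument the Python translation
above to assert the invariant at the start of every outer iteration (a hypothetical
teaching aid, not part of the manual):

    def insertion_sort_checked(a):
        """Insertion sort that asserts the loop invariant on each outer iteration."""
        original = list(a)
        for j in range(1, len(a)):
            # Invariant: a[0 .. j-1] consists of the elements originally
            # in a[0 .. j-1], but in sorted order.
            assert a[:j] == sorted(original[:j])
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        assert a == sorted(original)      # termination: the entire array is sorted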
Pseudocode conventions
[Covering most, but not all, here. See book pages 19–20 for all conventions.]

Indentation indicates block structure. Saves space and writing time.

Looping constructs are like in C, C++, Pascal, and Java. We assume that the
loop variable in a for loop is still defined when the loop exits (unlike in Pascal).

“✄” indicates that the remainder of the line is a comment.


Variables are local, unless otherwise specified.

We often use objects, which have attributes (equivalently, fields). For an attribute
attr of object x, we write attr[x]. (This would be the equivalent of
x.attr in Java or x->attr in C++.)

Objects are treated as references, like in Java. If x and y denote objects, then
the assignment y ← x makes x and y reference the same object. It does not
cause attributes of one object to be copied to another.

Parameters are passed by value, as in Java and C (and the default mechanism in
Pascal and C++). When an object is passed by value, it is actually a reference
(or pointer) that is passed; changes to the reference itself are not seen by the
caller, but changes to the object’s attributes are.

The boolean operators “and” and “or” are short-circuiting: if after evaluating
the left-hand operand, we know the result of the expression, then we don’t
evaluate the right-hand operand. (If x is FALSE in “x and y” then we don’t
evaluate y. If x is TRUE in “x or y” then we don’t evaluate y.)
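Python’s and and or short-circuit the same way. A tiny demonstration (the helper
loud is our own invention) that makes the evaluation order visible:

    def loud(name, value):
        """Announce that an operand is being evaluated, then return its value."""
        print("evaluating", name)
        return value

    print(loud("x", False) and loud("y", True))  # prints "evaluating x", then False
    print(loud("x", True) or loud("y", True))    # prints "evaluating x", then True
    # "evaluating y" never appears: the right-hand operand is not evaluated.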
Analyzing algorithms
We want to predict the resources that the algorithm requires. Usually, running time.
In order to predict resource requirements, we need a computational model.
Random-access machine (RAM) model

Instructions are executed one after another. No concurrent operations.

It’s too tedious to define each of the instructions and their associated time costs.


Instead, we recognize that we’ll use instructions commonly found in real com-
puters:

Arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling. Also,
shift left/shift right (good for multiplying/dividing by 2^k).

Data movement: load, store, copy.

Control: conditional/unconditional branch, subroutine call and return.
Each of these instructions takes a constant amount of time.
The RAM model uses integer and floating-point types.

We don’t worry about precision, although it is crucial in certain numerical ap-
plications.

There is a limit on the word size: when working with inputs of size n, assume
that integers are represented by c lg n bits for some constant c ≥ 1. (lg n is a
very frequently used shorthand for log_2 n.)

c ≥ 1 ⇒ we can hold the value of n ⇒ we can index the individual elements.

c is a constant ⇒ the word size cannot grow arbitrarily.
How do we analyze an algorithm’s running time?
The time taken by an algorithm depends on the input.

Sorting 1000 numbers takes longer than sorting 3 numbers.


A given sorting algorithm may even take differing amounts of time on two
inputs of the same size.

For example, we’ll see that insertion sort takes less time to sort n elements when
they are already sorted than when they are in reverse sorted order.
Input size: Depends on the problem being studied.

Usually, the number of items in the input. Like the size n of the array being
sorted.

But could be something else. If multiplying two integers, could be the total
number of bits in the two integers.

Could be described by more than one number. For example, graph algorithm
running times are usually expressed in terms of the number of vertices and the
number of edges in the input graph.
Running time: On a particular input, it is the number of primitive operations
(steps) executed.

Want to define steps to be machine-independent.

Figure that each line of pseudocode requires a constant amount of time.

One line may take a different amount of time than another, but each execution
of line i takes the same amount of time c_i.


This is assuming that the line consists only of primitive operations.

If the line is a subroutine call, then the actual call takes constant time, but the
execution of the subroutine being called might not.

If the line specifies operations other than primitive ones, then it might take
more than constant time. Example: “sort the points by x-coordinate.”
Analysis of insertion sort
[Now add statement costs and number of times executed to the INSERTION-SORT
pseudocode.]

• Assume that the ith line takes time c_i, which is a constant. (Since the third
  line is a comment, it takes no time.)
• For j = 2, 3, …, n, let t_j be the number of times that the while loop test is
  executed for that value of j.
• Note that when a for or while loop exits in the usual way—due to the test in the
  loop header—the test is executed one time more than the loop body.
The running time of the algorithm is

  Σ_{all statements} (cost of statement) · (number of times statement is executed) .

Let T(n) = running time of INSERTION-SORT. Then

  T(n) = c_1 n + c_2 (n−1) + c_4 (n−1) + c_5 Σ_{j=2}^{n} t_j
         + c_6 Σ_{j=2}^{n} (t_j − 1) + c_7 Σ_{j=2}^{n} (t_j − 1) + c_8 (n−1) .
The running time depends on the values of t_j. These vary according to the input.
Best case: The array is already sorted.

• Always find that A[i] ≤ key upon the first time the while loop test is run (when
  i = j − 1).
• All t_j are 1.
• Running time is

    T(n) = c_1 n + c_2 (n−1) + c_4 (n−1) + c_5 (n−1) + c_8 (n−1)
         = (c_1 + c_2 + c_4 + c_5 + c_8) n − (c_2 + c_4 + c_5 + c_8) .

• Can express T(n) as an + b for constants a and b (that depend on the statement
  costs c_i) ⇒ T(n) is a linear function of n.
Worst case: The array is in reverse sorted order.

• Always find that A[i] > key in the while loop test.
• Have to compare key with all elements to the left of the jth position ⇒ compare
  with j − 1 elements.
• Since the while loop exits because i reaches 0, there’s one additional test after
  the j − 1 tests ⇒ t_j = j.
• Σ_{j=2}^{n} t_j = Σ_{j=2}^{n} j and Σ_{j=2}^{n} (t_j − 1) = Σ_{j=2}^{n} (j − 1).
• Σ_{j=1}^{n} j is known as an arithmetic series, and equation (A.1) shows that it equals
  n(n+1)/2.
• Since Σ_{j=2}^{n} j = (Σ_{j=1}^{n} j) − 1, it equals n(n+1)/2 − 1.
  [The parentheses around the summation are not strictly necessary. They are
  there for clarity, but it might be a good idea to remind the students that the
  meaning of the expression would be the same even without the parentheses.]
• Letting k = j − 1, we see that Σ_{j=2}^{n} (j − 1) = Σ_{k=1}^{n−1} k = n(n−1)/2.
• Running time is

    T(n) = c_1 n + c_2 (n−1) + c_4 (n−1) + c_5 (n(n+1)/2 − 1)
           + c_6 (n(n−1)/2) + c_7 (n(n−1)/2) + c_8 (n−1)
         = (c_5/2 + c_6/2 + c_7/2) n²
           + (c_1 + c_2 + c_4 + c_5/2 − c_6/2 − c_7/2 + c_8) n
           − (c_2 + c_4 + c_5 + c_8) .

• Can express T(n) as an² + bn + c for constants a, b, c (that again depend on
  statement costs) ⇒ T(n) is a quadratic function of n.
Worst-case and average-case analysis
We usually concentrate on finding the worst-case running time: the longest running
time for any input of size n.
Reasons:

• The worst-case running time gives a guaranteed upper bound on the running
  time for any input.
• For some algorithms, the worst case occurs often. For example, when searching,
  the worst case often occurs when the item being searched for is not present,
  and searches for absent items may be frequent.
• Why not analyze the average case? Because it’s often about as bad as the worst
  case.

Example: Suppose that we randomly choose n numbers as the input to insertion
sort.
On average, the key in A[j] is less than half the elements in A[1 .. j−1] and
it’s greater than the other half.
⇒ On average, the while loop has to look halfway through the sorted subarray
A[1 .. j−1] to decide where to drop key.
⇒ t_j ≈ j/2.
Although the average-case running time is approximately half of the worst-case
running time, it’s still a quadratic function of n.
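One way to make this concrete in lecture (our own sketch, not from the manual) is
to count how many times the while loop test runs, i.e. Σ t_j, on different inputs:

    import random

    def while_tests(a):
        """Insertion-sort a copy of a; return the number of while-test executions."""
        a = list(a)
        tests = 0
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            tests += 1                    # the test runs once even if it fails at once
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
                tests += 1                # one more test per element shifted
            a[i + 1] = key
        return tests

    n = 1000
    print(while_tests(range(n)))                    # already sorted: about n
    print(while_tests(range(n, 0, -1)))             # reverse sorted: about n*n/2
    print(while_tests(random.sample(range(n), n)))  # random: about n*n/4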
Order of growth
Another abstraction to ease analysis and focus on the important features.
Look only at the leading term of the formula for running time.

• Drop lower-order terms.
• Ignore the constant coefficient in the leading term.

Example: For insertion sort, we already abstracted away the actual statement costs
to conclude that the worst-case running time is an² + bn + c.
Drop lower-order terms ⇒ an².
Ignore constant coefficient ⇒ n².
But we cannot say that the worst-case running time T(n) equals n². It grows
like n², but it doesn’t equal n².
We say that the running time is Θ(n²) to capture the notion that the order of growth
is n².
We usually consider one algorithm to be more efficient than another if its worst-case
running time has a smaller order of growth.
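A quick numeric check of why this abstraction is safe (the constants here are
arbitrary, chosen only for illustration):

    a, b, c = 2, 100, 1000                 # assumed statement-cost constants
    for n in [10, 100, 1000, 10000]:
        exact = a*n*n + b*n + c
        leading = a*n*n
        print(n, exact, leading, exact / leading)
    # The ratio tends to 1: for large n, the an^2 term dominates bn + c.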
Designing algorithms
There are many ways to design algorithms.
For example, insertion sort is incremental: having sorted A[1 .. j−1], place A[j]
correctly, so that A[1 .. j] is sorted.
Divide and conquer

Another common approach.
Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively.
Base case: If the subproblems are small enough, just solve them by brute force.
[It would be a good idea to make sure that your students are comfortable with
recursion. If they are not, then they will have a hard time understanding divide
and conquer.]
Combine the subproblem solutions to give a solution to the original problem.
Merge sort
A sorting algorithm based on divide and conquer. Its worst-case running time has
a lower order of growth than insertion sort.
Because we are dealing with subproblems, we state each subproblem as sorting
a subarray A[p .. r]. Initially, p = 1 and r = n, but these values change as we
recurse through subproblems.
To sort A[p .. r]:
Divide by splitting into two subarrays A[p .. q] and A[q+1 .. r], where q is the
halfway point of A[p .. r].
Conquer by recursively sorting the two subarrays A[p .. q] and A[q+1 .. r].
Combine by merging the two sorted subarrays A[p .. q] and A[q+1 .. r] to produce
a single sorted subarray A[p .. r]. To accomplish this step, we’ll define a
procedure MERGE(A, p, q, r).
The recursion bottoms out when the subarray has just 1 element, so that it’s trivially
sorted.
MERGE-SORT(A, p, r)
  if p < r                          ✄ Check for base case
     then q ← ⌊(p + r)/2⌋           ✄ Divide
          MERGE-SORT(A, p, q)       ✄ Conquer
          MERGE-SORT(A, q + 1, r)   ✄ Conquer
          MERGE(A, p, q, r)         ✄ Combine

Initial call: MERGE-SORT(A, 1, n)
[It is astounding how often students forget how easy it is to compute the halfway
point of p and r as their average (p + r)/2. We of course have to take the floor
to ensure that we get an integer index q. But it is common to see students perform
calculations like p + (r − p)/2, or even more elaborate expressions, forgetting the
easy way to compute an average.]
Example: Bottom-up view for n = 8: [Heavy lines demarcate subarrays used in
subproblems.]

  initial array:   5 2 4 7 1 3 2 6
  merge:           2 5 | 4 7 | 1 3 | 2 6
  merge:           2 4 5 7 | 1 2 3 6
  merge (sorted):  1 2 2 3 4 5 6 7
[Examples when n is a power of 2 are most straightforward, but students might
also want an example when n is not a power of 2.]
Bottom-up view for n = 11:

  initial array:   4 7 2 6 1 4 7 3 5 2 6
  merge:           4 7 | 2 | 1 6 | 4 | 3 7 | 5 | 2 6
  merge:           2 4 7 | 1 4 6 | 3 5 7 | 2 6
  merge:           1 2 4 4 6 7 | 2 3 5 6 7
  merge (sorted):  1 2 2 3 4 4 5 6 6 7 7

[Here, at the next-to-last level of recursion, some of the subproblems have only 1
element. The recursion bottoms out on these single-element subproblems.]
Merging
What remains is the MERGE procedure.

Input: Array A and indices p, q, r such that
• p ≤ q < r.
• Subarray A[p .. q] is sorted and subarray A[q+1 .. r] is sorted. By the
  restrictions on p, q, r, neither subarray is empty.

Output: The two subarrays are merged into a single sorted subarray in A[p .. r].

We implement it so that it takes Θ(n) time, where n = r − p + 1 = the number of
elements being merged.
What is n? Until now, n has stood for the size of the original problem. But now
we’re using it as the size of a subproblem. We will use this technique when we
analyze recursive algorithms. Although we may denote the original problem size
by n, in general n will be the size of a given subproblem.
Idea behind linear-time merging: Think of two piles of cards.

Each pile is sorted and placed face-up on a table with the smallest cards on top.

We will merge these into a single sorted pile, face-down on the table.


A basic step:

• Choose the smaller of the two top cards.
• Remove it from its pile, thereby exposing a new top card.
• Place the chosen card face-down onto the output pile.

Repeatedly perform basic steps until one input pile is empty.

Once one input pile empties, just take the remaining input pile and place it
face-down onto the output pile.

Each basic step should take constant time, since we check just the two top cards.

There are ≤ n basic steps, since each basic step removes one card from the
input piles, and we started with n cards in the input piles.

Therefore, this procedure should take Θ(n) time.
We don’t actually need to check whether a pile is empty before each basic step.

Put on the bottom of each input pile a special sentinel card.

It contains a special value that we use to simplify the code.

We use ∞, since that’s guaranteed to “lose” to any other value.

The only way that ∞ cannot lose is when both piles have ∞ exposed as their

top cards.

But when that happens, all the nonsentinel cards have already been placed into
the output pile.

We know in advance that there are exactly r − p + 1 nonsentinel cards ⇒ stop
once we have performed r − p + 1 basic steps. Never a need to check for
sentinels, since they’ll always lose.

Rather than even counting basic steps, just fill up the output array from index p
up through and including index r.
Pseudocode:

MERGE(A, p, q, r)
  n_1 ← q − p + 1
  n_2 ← r − q
  create arrays L[1 .. n_1 + 1] and R[1 .. n_2 + 1]
  for i ← 1 to n_1
      do L[i] ← A[p + i − 1]
  for j ← 1 to n_2
      do R[j] ← A[q + j]
  L[n_1 + 1] ← ∞
  R[n_2 + 1] ← ∞
  i ← 1
  j ← 1
  for k ← p to r
      do if L[i] ≤ R[j]
            then A[k] ← L[i]
                 i ← i + 1
            else A[k] ← R[j]
                 j ← j + 1
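For reference, one possible translation of MERGE and MERGE-SORT into Python,
with 0-origin, inclusive indices (our own sketch; float('inf') plays the role of
the ∞ sentinel):

    INF = float('inf')                     # sentinel: guaranteed to "lose" to any key

    def merge(a, p, q, r):
        """Merge sorted a[p .. q] and a[q+1 .. r] into sorted a[p .. r]."""
        left = a[p:q + 1] + [INF]          # L[1 .. n1 + 1], sentinel at the end
        right = a[q + 1:r + 1] + [INF]     # R[1 .. n2 + 1], sentinel at the end
        i = j = 0
        for k in range(p, r + 1):          # exactly r - p + 1 basic steps
            if left[i] <= right[j]:
                a[k] = left[i]
                i += 1
            else:
                a[k] = right[j]
                j += 1

    def merge_sort(a, p, r):
        """Sort a[p .. r] in place by divide and conquer."""
        if p < r:                          # base case: one element, already sorted
            q = (p + r) // 2               # floor of the average (divide)
            merge_sort(a, p, q)            # conquer
            merge_sort(a, q + 1, r)        # conquer
            merge(a, p, q, r)              # combine

    a = [5, 2, 4, 7, 1, 3, 2, 6]
    merge_sort(a, 0, len(a) - 1)
    print(a)                               # prints [1, 2, 2, 3, 4, 5, 6, 7]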
[The book uses a loop invariant to establish that MERGE works correctly. In a
lecture situation, it is probably better to use an example to show that the procedure
works correctly.]
Example: A call of MERGE(A, 9, 12, 16), where A[9 .. 12] = ⟨2, 4, 5, 7⟩ and
A[13 .. 16] = ⟨1, 2, 3, 6⟩.

[Figure: a sequence of snapshots of A[9 .. 16], L[1 .. 5], and R[1 .. 5], read row
by row. The first part shows the arrays at the start of the “for k ← p to r” loop,
where A[p .. q] is copied into L[1 .. n_1] and A[q+1 .. r] is copied into
R[1 .. n_2], with the sentinel ∞ in L[n_1 + 1] and R[n_2 + 1]. Succeeding parts
show the situation at the start of successive iterations. Entries in A with slashes
have had their values copied to either L or R and have not had a value copied back
in yet. Entries in L and R with slashes have been copied back into A. The last part
shows that the subarrays are merged back into A[9 .. 16], which is now sorted
⟨1, 2, 2, 3, 4, 5, 6, 7⟩, and that only the sentinels (∞) are exposed in the arrays L
and R.]
Running time: The first two for loops take Θ(n_1 + n_2) = Θ(n) time. The last for
loop makes n iterations, each taking constant time, for Θ(n) time.
Total time: Θ(n).
Analyzing divide-and-conquer algorithms
Use a recurrence equation (more commonly, a recurrence) to describe the running
time of a divide-and-conquer algorithm.
Let T(n) = running time on a problem of size n.

• If the problem size is small enough (say, n ≤ c for some constant c), we have a
  base case. The brute-force solution takes constant time: Θ(1).
• Otherwise, suppose that we divide into a subproblems, each 1/b the size of the
  original. (In merge sort, a = b = 2.)
• Let the time to divide a size-n problem be D(n).
• There are a subproblems to solve, each of size n/b ⇒ each subproblem takes
  T(n/b) time to solve ⇒ we spend aT(n/b) time solving subproblems.
• Let the time to combine solutions be C(n).
• We get the recurrence

    T(n) = Θ(1)                     if n ≤ c ,
           aT(n/b) + D(n) + C(n)    otherwise .
Analyzing merge sort
For simplicity, assume that n is a power of 2 ⇒ each divide step yields two sub-
problems, both of size exactly n/2.
The base case occurs when n = 1.
When n ≥ 2, time for merge sort steps:
Divide: Just compute q as the average of p and r ⇒ D(n) = Θ(1).
Conquer: Recursively solve 2 subproblems, each of size n/2 ⇒ 2T(n/2).
Combine: MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n).
Since D(n) = Θ(1) and C(n) = Θ(n), summed together they give a function that
is linear in n: Θ(n) ⇒ recurrence for merge sort running time is

    T(n) = Θ(1)             if n = 1 ,
           2T(n/2) + Θ(n)   if n > 1 .

Solving the merge-sort recurrence: By the master theorem in Chapter 4, we can
show that this recurrence has the solution T(n) = Θ(n lg n).

[Reminder:
lg n
stands for
log
2
n
.]
Compared to insertion sort (Θ(n²) worst-case time), merge sort is faster. Trading
a factor of n for a factor of lg n is a good deal.
On small inputs, insertion sort may be faster. But for large enough inputs, merge
sort will always be faster, because its running time grows more slowly than inser-
tion sort’s.
We can understand how to solve the merge-sort recurrence without the master the-
orem.
• Let c be a constant that describes the running time for the base case and also
  is the time per array element for the divide and conquer steps. [Of course, we
  cannot necessarily use the same constant for both. It’s not worth going into this
  detail at this point.]
• We rewrite the recurrence as

    T(n) = c              if n = 1 ,
           2T(n/2) + cn   if n > 1 .

• Draw a recursion tree, which shows successive expansions of the recurrence.
• For the original problem, we have a cost of cn, plus the two subproblems, each
  costing T(n/2):

              cn
             /  \
        T(n/2)  T(n/2)

• For each of the size-n/2 subproblems, we have a cost of cn/2, plus two
  subproblems, each costing T(n/4):

                cn
              /    \
          cn/2      cn/2
         /    \    /    \
     T(n/4) T(n/4) T(n/4) T(n/4)

• Continue expanding until the problem sizes get down to 1:

    level 0:  cn                              → cost cn
    level 1:  cn/2  cn/2                      → cost cn
    level 2:  cn/4  cn/4  cn/4  cn/4          → cost cn
       ⋮
    leaves:   c  c  c  …  c   (n leaves)      → cost cn

    (height lg n, so lg n + 1 levels)
    Total: cn lg n + cn
• Each level has cost cn.
  • The top level has cost cn.
  • The next level down has 2 subproblems, each contributing cost cn/2.
  • The next level has 4 subproblems, each contributing cost cn/4.
  • Each time we go down one level, the number of subproblems doubles but the
    cost per subproblem halves ⇒ cost per level stays the same.
• There are lg n + 1 levels (height is lg n). Use induction:
  • Base case: n = 1 ⇒ 1 level, and lg 1 + 1 = 0 + 1 = 1.
  • Inductive hypothesis is that a tree for a problem size of 2^i has
    lg 2^i + 1 = i + 1 levels.
  • Because we assume that the problem size is a power of 2, the next problem
    size up after 2^i is 2^{i+1}.
  • A tree for a problem size of 2^{i+1} has one more level than the size-2^i
    tree ⇒ i + 2 levels.
  • Since lg 2^{i+1} + 1 = i + 2, we’re done with the inductive argument.
• Total cost is sum of costs at each level. Have lg n + 1 levels, each costing cn ⇒
  total cost is cn lg n + cn.
• Ignore low-order term of cn and constant coefficient c ⇒ Θ(n lg n).
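A quick numeric sanity check (our own, for n a power of 2): evaluating the
recurrence directly agrees exactly with the closed form cn lg n + cn:

    from math import log2

    c = 1                                  # any positive constant works the same way

    def T(n):
        """T(n) = c if n == 1, else 2*T(n/2) + c*n, for n a power of 2."""
        return c if n == 1 else 2 * T(n // 2) + c * n

    for n in [1, 2, 4, 8, 16, 1024]:
        print(n, T(n), c * n * log2(n) + c * n)   # the two values match exactly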
