such relationships can be challenging to solve precisely, they are often easy to
solve for some particular values of N to get solutions which give reasonable
estimates for all values of N. Our purpose in this discussion is to gain some
intuitive feeling for how divide-and-conquer algorithms achieve efficiency, not
to do detailed analysis of the algorithms. Indeed, the particular recurrences
that we’ve just solved are sufficient to describe the performance of most of
the algorithms that we’ll be studying, and we’ll simply be referring back to
them.
Matrix Multiplication
The most famous application of the divide-and-conquer technique to an arith-
metic problem is Strassen’s method for matrix multiplication. We won’t go
into the details here, but we can sketch the method, since it is very similar to
the polynomial multiplication method that we have just studied.
The straightforward method for multiplying two N-by-N matrices requires
N^3 scalar multiplications, since each of the N^2 elements in the product
matrix is obtained by N multiplications.
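As a point of reference, the following is a minimal Pascal sketch of our own (not from the text) of the straightforward method; the fixed size N = 4 and the test values are arbitrary assumptions. The three nested loops make it plain that each of the N^2 entries of the product costs N scalar multiplications, for N^3 in all.

program naivemult(output);
const
  N = 4;
type
  matrix = array[1..N, 1..N] of real;
var
  a, b, c: matrix;
  i, j, k: integer;
begin
  { fill a and b with arbitrary test values }
  for i := 1 to N do
    for j := 1 to N do
      begin a[i, j] := i + j; b[i, j] := i - j end;
  { each entry of c is an inner product: N multiplications apiece }
  for i := 1 to N do
    for j := 1 to N do
      begin
        c[i, j] := 0.0;
        for k := 1 to N do
          c[i, j] := c[i, j] + a[i, k] * b[k, j]
      end;
  writeln('c[1,1] = ', c[1, 1]:6:1)
end.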
Strassen’s method is to divide the size of the problem in half; this cor-
responds to dividing each of the matrices into quarters, each N/2 by N/2.
The remaining problem is equivalent to multiplying 2-by-2 matrices. Just as
we were able to reduce the number of multiplications required from four to
three by combining terms in the polynomial multiplication problem, Strassen
was able to find a way to combine terms to reduce the number of multiplica-
tions required for the 2-by-2 matrix multiplication problem from 8 to 7. The
rearrangement and the terms required are quite complicated.
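For reference, here is one standard statement of those seven products and the combining terms for the 2-by-2 case, written as a small Pascal sketch of our own; the sample matrix values are arbitrary, and this is not a quotation of the text's or Strassen's original presentation.

program strassen2x2(output);
type
  matrix2 = array[1..2, 1..2] of real;
var
  a, b, c: matrix2;
  m1, m2, m3, m4, m5, m6, m7: real;
begin
  { arbitrary 2-by-2 test matrices }
  a[1,1] := 1; a[1,2] := 2; a[2,1] := 3; a[2,2] := 4;
  b[1,1] := 5; b[1,2] := 6; b[2,1] := 7; b[2,2] := 8;
  { seven multiplications instead of eight }
  m1 := (a[1,1] + a[2,2]) * (b[1,1] + b[2,2]);
  m2 := (a[2,1] + a[2,2]) * b[1,1];
  m3 := a[1,1] * (b[1,2] - b[2,2]);
  m4 := a[2,2] * (b[2,1] - b[1,1]);
  m5 := (a[1,1] + a[1,2]) * b[2,2];
  m6 := (a[2,1] - a[1,1]) * (b[1,1] + b[1,2]);
  m7 := (a[1,2] - a[2,2]) * (b[2,1] + b[2,2]);
  { only additions and subtractions are needed to combine them }
  c[1,1] := m1 + m4 - m5 + m7;
  c[1,2] := m3 + m5;
  c[2,1] := m2 + m4;
  c[2,2] := m1 - m2 + m3 + m6;
  writeln(c[1,1]:6:1, c[1,2]:6:1);
  writeln(c[2,1]:6:1, c[2,2]:6:1)
end.

Applied recursively, each quarter of an N-by-N problem plays the role of a scalar above, so each of the seven products is itself an N/2-by-N/2 matrix multiplication.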
The number of multiplications required for matrix multiplication using
Strassen’s method is therefore defined by the divide-and-conquer recurrence
M(N) = 7M(N/2), N ≥ 2, with M(1) = 1,

which has the solution

M(N) = N^{lg 7} ≈ N^{2.81}.
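To see where this solution comes from (our unrolling, assuming for simplicity that N is a power of two), apply the recurrence repeatedly:

M(N) = 7M(N/2) = 7^2 M(N/4) = . . . = 7^{lg N} M(1) = 7^{lg N} = N^{lg 7},

since 7^{lg N} = (2^{lg 7})^{lg N} = (2^{lg N})^{lg 7} = N^{lg 7}, and lg 7 is about 2.81.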
This result was quite surprising when it first appeared, since it had previously
been thought that N^3 multiplications were absolutely necessary for matrix
multiplication. The problem has been studied very intensively in recent years,
and slightly better methods than Strassen’s have been found. The “best”
algorithm for matrix multiplication has still not been found, and this is one
of the most famous outstanding problems of computer science.
It is important to note that we have been counting multiplications only.
Before choosing an algorithm for a practical application, the costs of the
extra additions and subtractions for combining terms and the costs of the
recursive calls must be considered. These costs may depend heavily on the
particular implementation or computer used. But certainly, this overhead
makes Strassen’s method less efficient than the standard method for small
matrices. Even for large matrices, in terms of the number of data items input,
Strassen’s method really represents an improvement only from about N^{1.5} to
N^{1.41}. This improvement is hard to notice except for very large N. For example,
N would have to be more than a million for Strassen’s method to use four times
as few multiplications as the standard method, even though the overhead per
multiplication is likely to be four times as large. Thus the algorithm is a
theoretical, not practical, contribution.
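As a rough check of the million figure (our own arithmetic, not a number taken from the text), with N again counting data items:

N^{1.5} / N^{1.41} = N^{0.09},   and   N^{0.09} = 4   when   lg N = 2/0.09 ≈ 22,

so N must reach a few million data items before the savings amount to a factor of four.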
This illustrates a general tradeoff which appears in all applications (though
the effect is not always so dramatic): simple algorithms work best for small
problems, but sophisticated algorithms can reap tremendous savings for large
problems.
Exercises
1. Give a method for evaluating a polynomial with known roots r1, r2, . . . , rN,
and compare your method with Horner’s method.

2. Write a program to evaluate polynomials using Horner’s method, where
a linked list representation is used for the polynomials. Be sure that your
program works efficiently for sparse polynomials.
3. Write an N^2 program to do Lagrangian interpolation.
4. Suppose that we know that a polynomial to be interpolated is sparse (has
few non-zero coefficients). Describe how you would modify Lagrangian
interpolation to run in time proportional to N times the number of
non-zero coefficients.
5. Write out all of the polynomial multiplications performed when the divide-
and-conquer polynomial multiplication method described in the text is
used.
6. The polynomial multiplication procedure could be made more efficient
for sparse polynomials by returning 0 if all coefficients of either input are
0. About how many multiplications (to within a constant factor) would
such a program use to square 1 + x^N?
7. Can . . . be computed with less than five multiplications? If so, say which
ones; if not, say why not.
8. Can . . . be computed with less than nine multiplications? If so, say which
ones; if not, say why not.
9. Describe exactly how you would modify the polynomial multiplication
procedure to multiply a polynomial of degree N by another of degree M,
with N > M.
10. Give the representation that you would use for programs to add and
multiply multivariate polynomials such as . . . + w. Give
the single most important reason for choosing this representation.

5. Gaussian Elimination
Certainly one of the most fundamental scientific computations is the
solution of systems of simultaneous equations. The basic algorithm for
solving systems of equations, Gaussian elimination, is relatively simple and
has changed little in the 150 years since it was invented. This algorithm has
come to be well understood, especially in the past twenty years, so that it can
be used with some confidence that it will efficiently produce accurate results.
This is an example of an algorithm that will surely be available in most
computer installations; indeed, it is a primitive in several computer languages,
notably APL and Basic. However, the basic algorithm is easy to understand
and implement, and special situations do arise where it might be desirable
to implement a modified version of the algorithm rather than work with a
standard subroutine. Also, the method deserves to be learned as one of the
most important numeric methods in use today.
As with the other mathematical material that we have studied so far, our
treatment of the method will highlight only the basic principles and will be
self-contained. Familiarity with linear algebra is not required to understand
the basic method. We’ll develop a simple Pascal implementation that might
be easier to use than a library subroutine for simple applications. However,
we’ll also see examples of problems which could arise. Certainly for a large or
important application, the use of an expertly tuned implementation is called
for, as well as some familiarity with the underlying mathematics.
A Simple Example
Suppose that we have three variables x, y, and z and the following three
equations:
x + . . . = 8,