
Journal of Electronic Imaging 8(1), 98 – 103 (January 1999).

Lean domain pools for fractal image compression
Dietmar Saupe
Universität Leipzig
Institut für Informatik
Augustusplatz 10/11
04109 Leipzig, Germany
E-mail:

Abstract. In fractal image compression an image is partitioned into ranges, for each of which a similar subimage, called a domain, is selected from a pool of subimages. However, only a fraction of this large pool is actually used in the fractal code. This subset can be characterized in two related ways: (1) it contains domains with relatively large intensity variation; (2) the collection of used domains is localized in those image regions that show a high degree of structure. Both observations lead to improvements of fractal image compression. First, we accelerate the encoding process by a priori discarding from the pool the low variance domains that are unlikely to be chosen for the fractal code. Second, the localization of the domains may be exploited for an improved encoding of the domain indices, in effect raising the compression ratio. When considering the performance of a variable rate fractal quadtree encoder we found that a speedup by a factor of 2-3 does not degrade the rate-distortion curve, ranging from compression ratio 5 up to 30. © 1999 SPIE and IS&T. [S1017-9909(99)00301-3]

1 Introduction
Fractal image compression1-3 is capable of yielding competitive rate-distortion curves; however, it suffers from long encoding times. Therefore, large efforts have been undertaken to speed up the encoding process. Most of the proposed techniques attempt to accelerate the searching and are based on some kind of feature vector assigned to ranges and domains. The features can be discrete (leading to classification and clustering methods) or continuous (yielding functional or nearest neighbor methods). For a survey see our papers, Refs. 4 and 5. When applied, these methods provide greater speed, which is traded for a loss in image fidelity and compression ratio.


A different route to increased speed can be chosen: less searching as opposed to faster searching. This means that if we can devise a technique to estimate a priori whether a given domain will be used for the fractal code or not, then we can exclude all unlikely domains. In this way greater speed is achieved by restricting the search to a reduced domain pool. Of course, the search in the reduced domain pool may then be supported by a method of the other kind, e.g., by classification. Several possible approaches along these lines have been reported in the literature.

Paper 96018 received April 16, 1996; accepted for publication September 15, 1998.
1017-9909/99/$10.00 © 1999 SPIE and IS&T.
1. One may argue that those domains that are close to a given range in the image (e.g., domains that overlap the range) are especially well suited as a partner for the given range, thereby localizing the domain pool relative to the range (see, e.g., the work of Monro and Dudbridge6,7 and Barthel et al.8). This is an adaptive domain pool reduction; for each range a different domain pool is constructed.

2. Another adaptive variant has been suggested in a functional method by Bedford, Dekking, and Keane.9,4 Depending on the range, the domain pool is shrunk by excluding a number of domains that do not satisfy a condition involving certain inner products, which are independent of the range and can be calculated for all domains in a preprocessing step. These excluded domains are guaranteed not to be optimal for the given range; thus, no image or compression degradation can occur with this method.
3. Complementing these methods, one can work with a fixed domain pool, which is initially scanned once in order to discard domains that are unlikely to be of any use.
In principle, this last mentioned approach has already been implemented in the early work of Jacquin.3 He used a classification scheme coming from a study of Ramamurthi and Gersho10 which classifies domain blocks according to their perceptual geometric features. Three major types of blocks are differentiated: shade blocks, edge blocks, and midrange blocks. In shade blocks the image intensity varies only very little. Since ranges that would be classified as shade blocks can be approximated well by scaled constant fixed blocks, it is not necessary to search for corresponding domains. Thus, in this scheme all domains classified as shade blocks are never used and are effectively excluded
from the domain pool. However, in Jacquin's approach the class of shade blocks cannot be very large. For example, only 11% of all blocks for the 256×256, 6 bit/pixel image Lenna have been classified as shade blocks in Jacquin's work.11 The reason for this is that otherwise too many range blocks would also be classified as shade blocks and thus be coded as constant fixed blocks, yielding poor approximations. Therefore, no variations of shade block definitions have been investigated in these studies.
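The shade block exclusion can be illustrated with a minimal sketch. Note that a bare variance threshold only approximates the perceptual classification of Ramamurthi and Gersho; the threshold value and all function names here are illustrative, not taken from the paper:

```python
import numpy as np

def is_shade_block(block, threshold=4.0):
    """Treat a block as 'shade' if its intensity variance is tiny.
    The threshold is an illustrative choice, not a value from the paper."""
    return float(np.var(block)) < threshold

def non_shade_domains(image, size=8):
    """Build a pool of (y, x) positions of nonoverlapping domain
    blocks, skipping shade blocks as in Jacquin's scheme."""
    h, w = image.shape
    pool = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            if not is_shade_block(image[y:y + size, x:x + size]):
                pool.append((y, x))
    return pool
```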
In this article we consider a parametrized and nonadaptive version of domain pool reduction. Here we allow an adjustable number of domains to be excluded (ranging from 0% to almost 100%) and investigate the effects on computation time, image fidelity, and compression ratio.

We will see that there is no need for keeping domains with low intensity variance in the pool. Thus, we propose to eliminate a fraction 1−α, α ∈ (0,1], of the domain pool consisting of the domains with least variance. In this way we remove the mostly useless domains from the pool, achieving a lean and more productive domain pool. Using the adaptive quadtree method of Fisher (see Ref. 2, Appendix A) we will show the following:

Fig. 1 Histogram of variances in the domain pool of domain blocks of size 8×8 vs that for domains actually used in an adaptive quadtree fractal code of Lenna.

1. The computation time scales linearly with α.
2. Even for low values of α, e.g., α = 0.15, there is no degradation in image quality. On the contrary, the fidelity improves slightly.
3. For medium values of α, e.g., α = 0.50, the compression ratio suffers a little, decreasing by about 2%.
The fractal code for an image essentially consists of the partitioning of the image into ranges and the data for one affine transformation per range. These data are given by an offset o (typically 7 bits), a scaling factor s (5 bits), a domain D_k from the domain pool (log2 N_D bits, where N_D is the size of the domain pool), and the code for an isometry (3 bits). The intensity values in the coded range are then taken from the scaled, transformed copy of the domain plus the added offset. The domains from the pool are indexed and referenced by that index.

Let us discuss the simple example of a gray scale image of resolution 512×512 with a domain pool of nonoverlapping domains of size 8×8. Thus, there are 64^2 = 4096 = 2^12 domains in this pool altogether, and the storage of one domain index costs 12 bits. It turns out that only a certain fraction, e.g., say 1000, of these domains are used. We propose to make use of this observation in the following way. We use a standard "white block skipping" (WBS) quadtree storage scheme to identify the 1000 domains used out of the total of 4096. With the quadtree on hand we can now code indices of domains in the range from 1 to 1000, costing only 10 bits each. Thus, we save 2 bits per transformation. If M is the total number of ranges, we have an overall file size reduction if the code for the quadtree does not exceed 2M bits. This approach will become beneficial in combination with lean domain pools, since the collection of domains used will then have even more structure, yielding a smaller code for the domain quadtree.
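The bit accounting behind this saving is simple enough to sketch. The function names are ours, and the 4096/1000 figures are the illustrative ones from the text:

```python
from math import ceil, log2

def index_bits(pool_size):
    """Bits needed for a fixed-length index into a pool of the given size."""
    return ceil(log2(pool_size))

def net_saving_bits(n_domains, n_used, n_ranges, quadtree_code_bits):
    """File-size change in bits (positive = smaller file) from re-indexing
    only the used domains, minus the cost of storing the usage bitmap."""
    saved_per_range = index_bits(n_domains) - index_bits(n_used)
    return n_ranges * saved_per_range - quadtree_code_bits

# The example from the text: 4096 domains of which 1000 are used gives
# 12 vs 10 index bits, i.e., 2 bits saved per transformation; worthwhile
# whenever the WBS quadtree code stays below 2M bits for M ranges.
```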
Parallel to this work, researchers are currently investigating other methods of domain pool reduction for fractal image compression. Kominek12,13 and Signes14 propose to remove domain blocks from the domain pool if they can be covered well by other blocks still in the pool. Here, too, high variance domain blocks are generally favored over low variance blocks; however, no analysis of the time/performance trade-off is attempted for this approach.
The remainder of this article is organized as follows. In
Sec. 2 we present the details and results of the domain pool
reduction by eliminating domains with low intensity variation. In Sec. 3 the optimized domain storage scheme is
presented with results. Finally, in Sec. 4 we analyze the
performance of the fractal encoder with lean domain pools
and optimized storage scheme over a range of compression
ratios.

2 Acceleration by Lean Domain Pools

In a first experiment we check our hypothesis that there is no need for keeping domains with low intensity variance in the pool. We carry out a fractal encoding of a test image using the adaptive quadtree method and record a histogram of intensity variances of blocks of size 8×8 from the domain pool, and also the corresponding histogram for the variances of those domains actually used in the code (see Fig. 1). The result is very clear: there is a very large subset of domains in the pool with small variances, while there is no such trend in the histogram for the blocks used. Thus, we may indeed expect that discarding a large fraction of low variance blocks will affect only a few ranges. For these ranges a suboptimal domain with a larger variance may be found. If, however, there is no longer a domain available in the pool which admits a collage error within the prescribed tolerance, then the range needs to be subdivided into four smaller ranges.
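Discarding the low variance fraction of a pool reduces to keeping the top fraction α of domains ranked by variance; a minimal sketch with our own helper names:

```python
import numpy as np

def lean_pool(domains, alpha):
    """Keep the fraction alpha of domains with the largest intensity
    variance. `domains` is a list of 2-D blocks, alpha in (0, 1]."""
    variances = [float(np.var(d)) for d in domains]
    keep = max(1, int(round(alpha * len(domains))))
    order = np.argsort(variances)[::-1]   # descending variance
    return [domains[i] for i in order[:keep]]
```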
In the main study of this paper we scan each domain pool (i.e., the pools for block sizes 8×8, 16×16, 32×32, and 64×64) and keep only a fraction α, α ∈ (0,1], of them in the pool, namely those domains that have the largest variances. For differing choices of the parameter α we compute the fractal code and record the computation time used, the peak signal-to-noise ratio (PSNR), and the compression ratio (see the four left columns in Tables 1 and 2, and Fig. 2).

Table 1 Performance of the adaptive quadtree method with lean domain pools. The parameter α indicates the fraction of domains which are kept in the pool. The time is measured in seconds on an Indy R4600SC of Silicon Graphics. The compression ratio is in the fourth column. When applying the optimized coding procedure for the domains, we obtain the ratios of the fifth column, with the difference in file size measured in bytes indicated in the last column. See also Fig. 2.

Results for 512×512 Lenna

  α      CPU time (s)   PSNR (dB)   Comp. ratio   New ratio   Bytes saved
  1.00   15.2           32.73       14.88         14.84        −39
  0.90   14.0           32.71       14.86         14.84        −34
  0.80   12.6           32.75       14.85         14.83        −21
  0.70   11.3           32.76       14.83         14.82         −7
  0.60   10.1           32.80       14.75         14.77         24
  0.50    8.7           32.87       14.57         14.62         60
  0.40    7.4           32.90       14.45         14.56        137
  0.30    6.0           32.93       14.19         14.88        856
  0.20    4.6           32.88       13.49         14.23       1009
  0.15    3.9           32.78       13.10         13.89       1135
  0.10    3.1           32.53       12.64         13.98       1982
  0.08    2.8           32.40       12.36         13.69       2070
  0.06    2.4           32.03       12.21         14.13       2921
  0.04    2.1           31.80       11.86         13.79       3101
  0.02    1.7           31.03       11.39         13.86       4103

The results are as follows:
1. Time. Regarding the computation times there seems to be an overhead of about 1-2 s. The remaining time scales linearly with the parameter α. This is as expected, since the major computational effort in the encoding lies in the linear search through the domain pool.

2. Fidelity. The quality of the encoding in terms of fidelity measured by the PSNR increases by 0.1-0.2 dB when lowering α (except for the Baboon image). This is caused by the fact that some larger ranges can be covered well for α = 1.0 by some domains which are removed from the pool at smaller values of α. As a consequence, some of these larger ranges are subdivided, and their quadrants can be covered better by smaller domains than the large range was previously. This mechanism works for values of α down to about 0.15.

3. Compression. The range splitting mentioned above also increases the number of ranges, thus causing the compression ratio to decrease slightly. For example, this drop is 1%-2% at α = 0.5 and 2%-9% at α = 0.2. It is remarkable that only relatively little loss in overall quality of the encoding is encountered for speedup factors of 10 and higher.
3 Improved Bitrate by Exploiting Spatial Domain Entropy

Table 2 Results as in Table 1 for a few more test images.

Results for 512×512 peppers

  α      CPU time (s)   PSNR (dB)   Comp. ratio   New ratio   Bytes saved
  1.00   16.6           32.43       15.20         15.06       −161
  0.50   10.0           32.49       14.93         14.95         17
  0.20    5.1           32.55       14.00         14.73        921
  0.10    3.3           32.35       13.22         14.56       1828
  0.05    2.4           32.11       12.31         14.22       2859

Results for 512×512 baboon

  α      CPU time (s)   PSNR (dB)   Comp. ratio   New ratio   Bytes saved
  1.00   33.2           25.15        5.68          5.59       −727
  0.50   17.3           25.13        5.61          5.75       1148
  0.20    8.0           24.81        5.55          5.93       3028
  0.10    4.7           24.37        5.52          6.16       4915
  0.05    3.3           23.87        5.50          6.42       6766

Results for 512×512 boats

  α      CPU time (s)   PSNR (dB)   Comp. ratio   New ratio   Bytes saved
  1.00   25.6           32.03       10.11         10.18        185
  0.50   14.6           32.07       10.06         10.19        335
  0.20    7.3           31.89        9.92         10.51       1476
  0.10    4.4           31.53        9.82         10.85       2531
  0.05    2.8           30.95        9.79         11.29       3562

Results for 512×512 F16

  α      CPU time (s)   PSNR (dB)   Comp. ratio   New ratio   Bytes saved
  1.00   21.3           32.86       12.55         12.60         75
  0.50   12.6           32.97       12.42         12.55        211
  0.20    6.2           32.94       12.05         12.75       1186
  0.10    3.8           32.63       11.98         13.23       2079
  0.05    2.4           32.13       11.82         13.65       2967

Figure 3 shows the domains of size 8×8 that are used in the fractal code of Lenna (from the first row in Table 1). As expected, the indicated domains are located mostly along
edges and in regions of high contrast of the image. These black squares can be interpreted as a bitmap (of resolution 64×64 in this case), and the goal of the procedure outlined in the introduction is to store this bitmap efficiently. Then the number of bits required to identify a particular used domain is reduced. If the structure of the bitmap is strong, then these savings are greater than the overhead necessary for coding the bitmap, and an overall reduced file size for the code can be achieved.

Fig. 3 Domains of size 8×8 that are used for a fractal code of 512×512 Lenna are shown in black.

We use the WBS quadtree storage scheme described in Gonzales and Woods (see Ref. 15, p. 354). For a bitmap of size 2^k × 2^k we proceed recursively, starting with the block given by the entire bitmap. A solid white block is coded as 0; all other blocks are coded with a prefix 1 followed by the four codes of their four subquadrants, which are generated in the same way until a subblock of size 1 is reached, which is coded as 0 (white) or 1 (black). For an example, see Fig. 4.

Fig. 4 Example of the white block skipping coding of a bitmap of size 8×8. The code is

1 [1 (0) (1 1001) (1 0010) (1 0101)]
[0]
[1 (1 0011) (0) (1 0011) (0)]
[0]

obtained by recursively processing subblocks counterclockwise, each time starting from the corresponding upper right quadrant. For better readability we have written prefix codes in bold face and bracketed the codes for the main quadrants and subquadrants.

The last two columns in Table 1 report the results of this procedure for the Lenna test image. For the case without domain pool decimation (α = 1.00) there are no savings: the costs for storing the WBS quadtree outweigh the savings from shorter domain codes. However, as we decrease the value of α below 0.7, we obtain some gain in compression. An especially notable result is obtained for α = 0.30 (see the vertical lines in Fig. 2). The new enhanced storage scheme completely makes up for the loss of compression which occurred due to the domain pool decimation. Thus, in effect, when compared with the original method (no domain pool decimation, no enhanced domain storage, line 1 in Table 1), we arrive at a fractal encoding with exactly the same compression ratio of 14.88, an improved PSNR (by 0.2 dB), and a computation time reduced from 15.2 s down to only 6.0 s!

Fig. 2 Performance of the adaptive quadtree method with lean domain pools (from Table 1). The three plots show the computation times, the PSNR, and the compression ratio for varying parameter α. The horizontal lines indicate the performance without enhancement, i.e., for α = 1.0. The vertical line is for α = 0.3, for which the performance is improved in PSNR and CPU time and equal in bit rate.

We have carried out the same experiment with domain pools enlarged by factors of 4 and 16. In these cases the break-even point of α = 0.70 is reduced to 0.25 and 0.15, respectively. Thus, the proposed method for domain address encoding seems to be applicable in situations where either the domain pool is not very large or where extremely fast encodings are desired (e.g., by choosing α ≤ 0.05) and quality can be compromised. In a production implementation one could carry out the WBS quadtree coding, check whether it yields any savings, and then use the better storage scheme.

The WBS storage scheme for bitmaps is simple and effective. However, there are more efficient methods, for example, using adaptive context-based arithmetic coding. Thus, there is room for further improvement.

We also note that with strongly decimated domain pools, domains are used more than once in the fractal code. Thus, standard entropy coding of the domain indices will yield additional savings in storage. Domains used in the pool carry different costs with respect to (w.r.t.) the bitrate of the code. Frequently used domains reduce the entropy and thus are cheaper to encode than domains used only once. Moreover, domains corresponding to outliers in the bitmap require extra bits in the white block skipping code and are therefore more expensive than domains corresponding to bits in clusters. One can design a postprocessing step for a fractal code with the goal of eliminating such outliers: ranges covered by domains that correspond to outliers in the bitmap may also be covered well by other domains that are cheaper to encode. We have not implemented these options.
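The WBS coding rule described above can be sketched as follows (encoding only; bits are returned as a string of '0'/'1' characters for readability). The quadrant order is a fixed convention that encoder and decoder must share; the paper's Fig. 4 processes quadrants counterclockwise from the upper right, while this sketch simply uses row-major order:

```python
import numpy as np

def wbs_code(bitmap):
    """White block skipping code of a 2^k x 2^k binary array:
    an all-white (all-zero) block -> '0'; a single pixel -> its bit;
    otherwise prefix '1' followed by the codes of the four quadrants."""
    n = bitmap.shape[0]
    if not bitmap.any():
        return "0"
    if n == 1:
        return "1"
    h = n // 2
    quads = (bitmap[:h, :h], bitmap[:h, h:], bitmap[h:, :h], bitmap[h:, h:])
    return "1" + "".join(wbs_code(q) for q in quads)
```

Decoding simply mirrors the recursion, consuming bits in the same quadrant order.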
4 Variable Rate Encodings with Lean Domain Pools
While the experiments in the previous sections considered only a particular compression ratio for each image tested, in this section we discuss the performance of the fractal quadtree encoder over a range of compression ratios, with and without lean domain pools and the optimized storage scheme for the domain addresses.
An adaptive partitioning of an image may hold strong advantages over encoding range blocks of fixed size. There may be homogeneous image regions in which a sufficient collage can be attained using large blocks, while in high contrast regions smaller block sizes may be required to arrive at the desired quality. The first approach (already taken by Jacquin) was to consider square blocks of varying sizes, e.g., 4, 8, and 16 pixels wide. This idea leads to the concept of using a quadtree partition, first explored in the context of fractal coding in Refs. 9 and 16. In contrast to fixed block size encodings, the output file must also contain the specification of the quadtree underlying the encoding.

Adaptive partitionings naturally lend themselves to the design of a variable rate encoder. The user may specify targets either for the image quality or for the compression ratio. The encoder recursively breaks up the image into suitable portions until the target quality or rate is reached. In more detail, the algorithm targeting fidelity might proceed as follows.
1. Define a minimal range size and a tolerance for the root-mean-square (rms) error of the collage obtained by the encoder.

2. Initialize a stack of ranges by pushing the entire image as a range onto it.

3. While the stack is nonempty, carry out the following steps:

   a. Pop a range block R from the stack and search the corresponding domain pool, yielding an optimal approximation with a corresponding least rms error for that range block.

   b. If the rms error is less than the tolerance, or if the range size is less than or equal to the minimum range size, then save the code for the range.

   c. Otherwise, partition the range block R into four quadrants and push them onto the stack.

By using different fidelity tolerances for the collage one obtains a series of encodings of varying compression ratios and fidelities. The decoder for the quadtree codes proceeds in the same way as for the case of fixed block size encodings, i.e., by iteration of the collage image operator. Typically, seven or eight iterations are required to get sufficiently close to the attractor.

Our test image is the 512×512 gray scale image Lenna, for which we let the quadtree encoder produce fractal codes with compression ratios ranging from 5 up to 30. The parameter α is decreased from 1.00 to 0.50, 0.20, 0.10, and 0.05, and for each value of α a rate-distortion curve is obtained, shown in Fig. 5. The result clearly shows that with α = 0.5 the rate-distortion curve is practically the same as that for the standard method (i.e., for α = 1). When setting α = 0.2 we also obtain comparable performance for compression ratios from about 15 and up. For smaller compression ratios there is a slight penalty of up to 0.3 dB in PSNR. Still lower values of α degrade the rate-distortion curve more noticeably.

Fig. 5 Rate-distortion curves for 512×512 Lenna using lean domain pools of varying relative size α and optimized domain address coding.

Figures 6 and 7 display the coding results at compression ratios 15 and 30 for α = 0.2. Some artifacts are visible, especially for the high compression image.

Fig. 6 Decoded image at compression ratio 15.75, PSNR = 32.4 dB, α = 0.2.
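The stack-driven loop of steps 1-3 above can be sketched as follows, with the domain pool search abstracted behind a `best_match` callback. This interface is our own hypothetical construction, not the paper's implementation:

```python
import numpy as np

def encode(image, best_match, min_size=4, tol=8.0):
    """Stack-driven quadtree encoding: subdivide a range until the best
    collage error falls below `tol` or the minimal range size is reached.
    `best_match(block)` must return (code, rms_error) for the best
    domain approximation; it stands in for the domain pool search."""
    stack = [(0, 0, image.shape[0])]   # the entire image as first range
    fractal_code = []
    while stack:
        y, x, size = stack.pop()
        block = image[y:y + size, x:x + size]
        code, rms = best_match(block)
        if rms < tol or size <= min_size:
            fractal_code.append((y, x, size, code))
        else:
            h = size // 2              # split into four quadrants
            stack += [(y, x, h), (y, x + h, h),
                      (y + h, x, h), (y + h, x + h, h)]
    return fractal_code
```

Varying `tol` over several runs yields the series of encodings from which the rate-distortion curves are drawn.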

Let us remark that our rate-distortion curves in Fig. 5 are produced only for the study of how the choice of the parameter α affects the performance of our method. No postprocessing is carried out to remove blocking artifacts. These curves should not be used for judging the quality of fractal image compression as such, since the simple quadtree approach does not yield the best possible numbers. In fact, with a state-of-the-art fractal encoder based on irregular adaptive partitionings, one can achieve PSNR values of 35.45 and 32.74 dB for the given test image at compression ratios of 15.3 and 29.9, respectively.17

Fig. 7 Decoded image at compression ratio 30.17, PSNR = 28.2 dB, α = 0.2.

5 Conclusion

We have introduced the concept of lean domain pools, in which a fraction 1−α of low intensity variance domains is discarded from the domain pool. This scales the time complexity of the encoding by a factor of roughly α. With this procedure, implemented in an adaptive quadtree fractal encoder, the trade-off between increased speed and quality in terms of fidelity and compression has been investigated. We have also introduced a new way to specify the domains used in the fractal code, which improves efficiency when the collection of used domains shows a high degree of structure, as is often the case with lean domain pools. Our techniques are simple and can easily be incorporated into existing fractal coding programs, even in combination with other acceleration methods such as classification. In summary, the lean domain pools introduced in this work cause only negligible or no loss with α = 0.5 or even 0.2, thereby reducing the encoding time to about one half or 20% of the normally required time. Moreover, with smaller values of α, the method also has strong potential in applications where extremely fast encodings are desired and some quality can be compromised.

Acknowledgments

The author appreciates the invaluable contribution of Matthias Ruhl, who organized the computer programs and ran the experiments.

References
1. M. Barnsley and L. Hurd, Fractal Image Compression, AK Peters, Wellesley (1993).
2. Y. Fisher, Fractal Image Compression—Theory and Application, Springer-Verlag, New York (1994).
3. A. E. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformations," IEEE Trans. Image Process. 1, 18-30 (1992).
4. D. Saupe and R. Hamzaoui, "Complexity reduction methods for fractal image compression," in I.M.A. Conf. Proc. on Image Processing: Mathematical Methods and Applications, Sept. 1994, J. M. Blackledge, Ed., Oxford University Press, Oxford (1997).
5. D. Saupe, "Fractal image compression via nearest neighbor search," in Fractal Image Encoding and Analysis, Y. Fisher, Ed., Conf. Proc. NATO Advanced Study Institute, Trondheim, July 1995, Springer-Verlag, New York (1998).
6. D. M. Monro, "A hybrid fractal transform," Proc. IEEE Int. Conf. Acoust. Speech Signal Process. 5, 169-172 (1993).
7. D. M. Monro and F. Dudbridge, "Fractal approximation of image blocks," Proc. IEEE Int. Conf. Acoust. Speech Signal Process. 3, 485-488 (1992).
8. K. U. Barthel, J. Schüttemeyer, T. Voyé, and P. Noll, "A new image coding technique unifying fractal and transform coding," IEEE Int. Conf. on Image Processing (ICIP'94), pp. 112-116, Austin, Texas (1994).
9. T. Bedford, F. M. Dekking, and M. S. Keane, "Fractal image coding techniques and contraction operators," Nieuw Arch. Wisk. 10(3), 185-218 (1992).
10. B. Ramamurthi and A. Gersho, "Classified vector quantization of images," IEEE Trans. Commun. COM-34, 1105-1195 (1986).
11. A. E. Jacquin, "Image coding based on a fractal theory of iterated contractive Markov operators, Part II: Construction of fractal codes for digital images," Technical Report Math. 91389-17, Georgia Institute of Technology (1989).
12. J. Kominek, "Advances in fractal compression in multimedia applications," manuscript (1995); available from /pub/Fractals/Papers/Waterloo/
13. J. Kominek, "Codebook reduction in fractal image compression," Proc. SPIE 2669, 33-41 (Jan. 1996).
14. J. Signes, "Geometrical interpretation of IFS based image coding," Fractals Suppl. 5, 133-143 (1997).
15. R. C. Gonzales and R. E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA (1992).
16. E. W. Jacobs, Y. Fisher, and R. D. Boss, "Image compression: A study of the iterated transform method," Signal Process. 29, 251-263 (1992).
17. H. Hartenstein, "Topics in fractal image compression and near-lossless image coding," dissertation, University of Freiburg, 1998.

Dietmar Saupe received his Dr. rer. nat. and Habilitation degrees, both from the University of Bremen, in 1982 and 1993, respectively. He has served as visiting assistant professor of mathematics at the University of California at Santa Cruz (1985-1987), assistant professor at the University of Bremen (1987-1993), professor of computer science at the Albert-Ludwigs-University of Freiburg (1993-1998), and at the University of Leipzig (since 1998). His research has focused on image processing, computer graphics, visualization, experimental mathematics, and dynamical systems. He is the co-author and editor of several books on fractals, e.g., Chaos and Fractals (Springer-Verlag, New York, 1992). He is a member of the IEEE Signal Processing Society, ACM SIGGRAPH, Eurographics, and others.
