protection [12]. The image authentication algorithm generates a watermark according to
the owner’s private key. Subsequently, the watermark is imperceptibly embedded in the
image. In the authentication detection procedure, the watermark is extracted from the image and a measure of tampering is produced for the entire image. The algorithm detects the regions of the image that are altered/unaltered and, thus, are considered nonauthentic/authentic, respectively. Alterations that are produced by relatively mild compression and do not significantly change the quality of the image are also detected. An example of an image authentication procedure using the image “Opera of Lyon” (which has been used as a reference image for watermark benchmarking) is depicted in Fig. 22.8. The method in [12] has been extended to support tampering detection using a
hierarchical structure in the detection phase that ensures accurate tamper localiza-
tion [131].
A novel framework for lossless (invertible) authentication watermarking, which
enables zero-distortion reconstruction of the original image upon verification, has been
proposed in [132]. The framework allows authentication of the watermarked images
before recovery of the original image. This reduces computational requirements in situ-
ations where either the verification step fails or the zero-distortion reconstruction is not
needed. The framework also enables public-key authentication without granting access
to the original and allows for efficient tamper localization. Effectiveness of the framework
is demonstrated by implementing it using hierarchical image authentication along with
lossless generalized-least significant bit data embedding.
A blind image watermarking method based on a multistage vector quantizer struc-
ture, which can be used simultaneously for both image authentication and copyright protection, has been proposed in [133]. In this method, the semifragile watermark and the robust watermark are embedded in different vector quantization stages using different
techniques. Simulation results demonstrated the effectiveness of the proposed algorithm
in terms of robustness and fragility. Another semifragile watermarking method that is
FIGURE 22.8
(a) Original watermarked image; (b) tampered watermarked image; (c) tampered regions.
robust against lossy compression has been proposed in [134]. The proposed method uses
random bias and nonuniform quantization to improve the performance of the methods
proposed in [121].
Differentiating between malicious and incidental manipulations in content authen-
tication remains an open issue. Exploitation of robust watermarks with self-restoration
capabilities for image authentication is another research topic. The authentication of
certain regions instead of the whole image when only some regions are tampered with
has also attracted the attention of the watermarking community.
ACKNOWLEDGMENT
The authoring of this chapter has been supported in part by the European Commission
through the IST Programme under Contract IST-2002-507932 ECRYPT.
REFERENCES
[1] I. Cox, M. Miller, J. Bloom, J. Fridrich, and T. Kalker. Digital Watermarking and Steganography,
2nd ed. Morgan Kaufmann Publishers, Burlington, MA, 2007.
[2] S. Craver and J. Stern. Lessons Learned from SDMI. In IEEE Workshop on Multimedia Signal
Processing, MMSP 01, pp. 213–218, Cannes, France, October 2001.
[3] R. Venkatesan, S. M. Koon, M. H. Jakubowski, and P. Moulin. Robust image hashing. In IEEE
International Conference on Image Processing, Vancouver, Canada, October 2000.
[4] V. Monga and B. Evans. Robust perceptual image hashing using feature points. In IEEE
International Conference on Image Processing, ICIP 04, Singapore, October 2004.
[5] N. Nikolaidis and I. Pitas. Digital rights management of images and videos using robust replica
detection techniques. In L. Drossos, S. Sioutas, D. Tsolis, and T. Papatheodorou, editors, Digital
Rights Management for E-Commerce Systems, Idea Group Publishing, Hershey, PA, 2008.
[6] M. Barni. What is the future for watermarking? (part i). IEEE Signal Process. Mag., 20(5):55–59,
2003.
[7] M. Barni. What is the future for watermarking? (part ii). IEEE Signal Process. Mag., 20(6):53–59,
2003.
[8] M. Barni and F. Bartolini. Watermarking Systems Engineering: Enabling Digital Assets Security and Other Applications. Marcel Dekker, New York, 2004.
[9] S. Katzenbeisser and F. Petitcolas. Information Hiding Techniques for Steganography and Digital
Watermarking. Artech House, Norwood, MA, 2000.
[10] N. Nikolaidis and I. Pitas. Benchmarking of watermarking algorithms. In J.-S. Pan, H.-C. Huang,
and L. Jain, editors, Intelligent Watermarking Techniques, 315–347. World Scientific Publishing,
Hackensack, NJ, 2004.
[11] B. Furht and D. Kirovski, editors. Multimedia Watermarking Techniques and Applications.
Auerbach, Boca Raton, FL, 2006.
[12] G. Voyatzis and I. Pitas. The use of watermarks in the protection of digital multimedia products.
Proc. IEEE, 87(7):1197–1207, July 1999.
[13] N. Nikolaidis and I. Pitas. Digital image watermarking: an overview. In Int. Conf. on Multimedia
Computing and Systems (ICMCS’99), Vol. I, 1–6, Florence, Italy, June 7–11, 1999.
[14] Special issue on identification & protection of multimedia information. Proc. IEEE, 87(7):1999.
[15] Special issue on signal processing for data hiding in digital media and secure content delivery.
IEEE Trans. Signal Process., 51(4):2003.
[16] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn. Information hiding - a survey. Proc. IEEE,
87(7):1062–1078, July 1999.
[17] Special issue on authentication copyright protection and information hiding. IEEE Trans. Circuits
Syst. Video Technol., 13(8):2003.
[18] I. Cox, M. Miller, and J. Bloom. Watermarking applications and their properties. In International
Conference on Information Technology: Coding and Computing 2000, 6–10. Las Vegas, March 2000.
[19] A. Nikolaidis, S. Tsekeridou, A. Tefas, and V. Solachidis. A survey on watermarking application
scenarios and related attacks. In 2001 IEEE International Conference on Image Processing (ICIP
2001), Thessaloniki, Greece, October 7–10, 2001.
[20] M. Barni and F. Bartolini. Data hiding for fighting piracy. IEEE Signal Process. Mag., 21(2):28–39,
2004.
[21] S. Nikolopoulos, S. Zafeiriou, P. Sidiropoulos, N. Nikolaidis, and I. Pitas. Image replica detection
using R-trees and linear discriminant analysis. In IEEE International Conference on Multimedia
and Expo, 1797–1800. Toronto, Canada, 2006.

[22] J. A. Bloom, I. J. Cox, T. Kalker, M. I. Miller, and C. B. S. Traw. Copy protection for DVD video.
Proc. IEEE, 87(7):1267–1276, 1999.
[23] F. Bartolini, A. Tefas, M. Barni, and I. Pitas. Image authentication techniques for surveillance
applications. Proc. IEEE, 89(10):1403–1418, 2001.
[24] A. M. Alattar. Smart images using Digimarc's watermarking technology. In IS&T/SPIE's 12th
International Symposium on Electronic Imaging, Vol. 3971, San Jose, CA, January 25, 2000.
[25] T. Furon, I. Venturini, and P. Duhamel. A unified approach of asymmetric watermarking schemes.
In SPIE Electronic Imaging, Security and Watermarking of Multimedia Contents, Vol. 4314, 269–279.
San Jose, CA, January 2001.
[26] T. Furon and P. Duhamel. An asymmetric watermarking method. IEEE Trans. Signal Process.,
51(4):981–995, 2003.
[27] I. J. Cox, M. I. Miller, and A. L. McKellips. Watermarking as communications with side
information. Proc. IEEE, 87(7):1127–1141, 1999.
[28] B. Chen and G. Wornell. Quantization index modulation: a class of provably good methods
for digital watermarking and information embedding. IEEE Trans. Inf. Theory, 47(4):1423–1443,
2001.
[29] I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon. Secure spread spectrum watermarking for
multimedia. IEEE Trans. Image Process., 6(12):1673–1687, 1997.
[30] J. K. Ruanaidh and T. Pun. Rotation, scale and translation invariant spread spectrum digital
image watermarking. Elsevier Signal Process., Sp. Issue on Copyright Protection and Access Control,
66(3):303–317, 1998.
[31] J. Cannons and P. Moulin. Design and statistical analysis of a hash-aided image watermarking
system. IEEE Trans. Image Process., 13(10):1393–1408, 2004.
[32] A. Nikolaidis and I. Pitas. Region-based image watermarking. IEEE Trans. Image Process.,
10(11):1726–1740, 2001.
[33] M. Barni, F. Bartolini, and A. Piva. Improved wavelet-based watermarking through pixel-wise
masking. IEEE Trans. Image Process., 10(5):783–791, 2001.
[34] V. Solachidis and I. Pitas. Circularly symmetric watermark embedding in 2D DFT domain. IEEE
Trans. Image Process., 10(11):1741–1753, 2001.

[35] J. R. Hernandez, M. Amado, and F. Perez-Gonzalez. DCT-domain watermarking techniques
for still images: detector performance analysis and a new structure. IEEE Trans. Image Process.,
9(1):55–68, 2000.
[36] M. Barni, F. Bartolini, A. DeRosa, and A. Piva. A new decoder for the optimum recovery of
nonadditive watermarks. IEEE Trans. Image Process., 10(5):755–766, 2001.
[37] M. Barni, F. Bartolini, A. DeRosa, and A. Piva. Optimum decoding and detection of multiplicative
watermarks. IEEE Trans. Signal Process., 51(4):1118–1123, 2003.
[38] Q. Cheng and T. S. Huang. Robust optimum detection of transform domain multiplicative
watermarks. IEEE Trans. Signal Process., 51(4):906–924, 2003.
[39] V. Solachidis and I. Pitas. Optimal detector for multiplicative watermarks embedded in the DFT
domain of non-white signals. J. Appl. Signal Process., 2004(1):2522–2532, 2004.
[40] M. Yoshida, T. Fujita, and T. Fujiwara. A new optimum detection scheme for additive watermarks
embedded in spatial domain. In International Conference on Intelligent Information Hiding and
Multimedia Signal Processing, 2006.
[41] J. Wang, G. Liu, Y. Dai, J. Sun, Z. Wang, and S. Lian. Locally optimum detection for Barni’s
multiplicative watermarking in DWT domain. Signal Process., 88(1):117–130, 2008.
[42] A. Tefas and I. Pitas. Robust spatial image watermarking using progressive detection. In Proc. of
2001 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2001), 1973–1976, Salt
Lake City, Utah, USA, 7–11 May, 2001.
[43] D. P. Mukherjee, S. Maitra, and S. T. Acton. Spatial domain digital watermarking of multimedia
objects for buyer authentication. IEEE Trans. Multimed., 6(1):1–15, 2004.
[44] H. S. Kim and H.-K. Lee. Invariant image watermark using Zernike moments. IEEE Trans. Circuits
Syst. Video Technol., 13(8):766–775, 2003.
[45] M. Barni, F. Bartolini, V. Cappelini, and A. Piva. A DCT-domain system for robust image
watermarking. Elsevier Signal Process., 66(3):357–372, 1998.
[46] W. C. Chu. DCT-based image watermarking using subsampling. IEEE Trans. Multimed., 5(1):
34–38, 2003.
[47] A. Briassouli, P. Tsakalides, and A. Stouraitis. Hidden messages in heavy-tails: DCT-domain
watermark detection using alpha-stable models. IEEE Trans. Multimed., 7(4):700–715, 2005.
[48] C.-Y. Lin, M. Wu, J. Bloom, I. Cox, M. Miller, and Y. Lui. Rotation, scale, and translation resilient watermarking for images. IEEE Trans. Image Process., 10(5):767–782, 2001.
[49] Y. Wang, J. Doherty, and R. Van Dyck. A wavelet-based watermarking algorithm for ownership
verification of digital images. IEEE Trans. Image Process., 11(2):77–88, 2002.
[50] P. Bao and M. Xiaohu. Image adaptive watermarking using wavelet domain singular value
decomposition. IEEE Trans. Circuits Syst. Video Technol., 15(1):96–102, 2005.
[51] ITU. Methodology for the subjective assessment of the quality of television pictures. ITU-R
Recommendation BT.500-11, 2002.
[52] A. Netravali and B. Haskell. Digital Pictures, Representation and Compression. Plenum Press,
New York, 1988.
[53] S. Voloshynovskiy, S. Pereira, V. Iquise, and T. Pun. Attack modelling: towards a second generation
benchmark. Signal Process., 81(6):1177–1214, 2001.
[54] T. Pappas, R. Safranek, and J. Chen. Perceptual criteria for image quality evaluation. In A. Bovik,
editor, Handbook of Image and Video Processing, 2nd ed, 939–960. Academic Press, Burlington,
MA, 2005.
[55] Z. Wang, A. Bovik, and E. Simoncelli. Structural approaches to image quality assessment. In
A. Bovik, editor, Handbook of Image and Video Processing, 961–974. Academic Press, Burlington,
MA, 2005.
[56] H. Sheikh and A. Bovik. Information theoretic approaches to image quality assessment. In
A. Bovik, editor, Handbook of Image and Video Processing, 975–992. Academic Press, Burlington,
MA, 2005.
[57] I. Setyawan, D. Delannay, B. Macq, and R. L. Lagendijk. Perceptual quality evaluation of geo-
metrically distorted images using relevant geometric transformation modeling. In SPIE/IST 15th
Electronic Imaging, Santa Clara, 2003.
[58] A. Tefas and I. Pitas. Multi-bit image watermarking robust to geometric distortions. In CD-ROM
Proc. of IEEE Int. Conf. on Image Processing (ICIP’2000), 710–713. Vancouver, Canada, 10–13
September, 2000.
[59] P. Comesana, L. Perez-Freire, and F. Perez-Gonzalez. An information-theoretic framework
for assessing security in practical watermarking and data hiding scenarios. In 6th International Workshop on Image Analysis for Multimedia Interactive Services, Montreux, Switzerland, 2005.
[60] P. Comesana, L. Perez-Freire, and F. Perez-Gonzalez. Fundamentals of data hiding security and
their application to spread-spectrum analysis. In 7th Information Hiding Workshop, 146–160,
Barcelona, Spain, 2005.
[61] F. Cayre, C. Fontaine, and T. Furon. Watermarking security: theory and practice. IEEE Trans.
Signal Process., 53(10):3976–3987, 2005.
[62] C. E. Shannon. Communication theory of secrecy systems. Bell Syst. Tech. J., 28:656–715, 1949.
[63] W. Diffie and M. Hellman. New directions in cryptography. IEEE Trans. Inf. Theory, 22(6):644–
654, 1976.
[64] L. Perez-Freire, P. Comesana, J. R. Troncoso-Pastoriza, and F. Perez-Gonzalez. Watermarking
security: a survey. Trans. Data Hiding Multimed. Secur. I, 4300:41–72, 2006.
[65] V. Licks and R. Jordan. Geometric attacks on image watermarking systems. IEEE Multimed.,
12(3):68–78, 2005.
[66] F. A. P. Petitcolas. Watermarking schemes evaluation. IEEE Signal Process. Mag., 17(5):58–64, 2000.
[67] F. Petitcolas, R. Anderson, and M. Kuhn. Attacks on copyright marking systems. In Second Int.
Workshop on Information Hiding, Vol. LNCS 1525, 219–239, Portland, Oregon, April 1998.
[68] M. Kutter, S. Voloshynovskiy, and A. Herrigel. The watermark copy attack. In Electronic Imaging
2000, Security and Watermarking of Multimedia Content II, Vol. 3971, 2000.
[69] S. Craver, N. Memon, B.-L. Yeo, and M. Yeung. Resolving rightful ownerships with invisible
watermarking techniques: limitations, attacks and implications. IEEE J. Sel. Areas Commun.,
16(4):573–586, 1998.
[70] B. Macq, J. Dittmann, and E. Delp. Benchmarking of image watermarking algorithms for digital
rights management. Proc. IEEE, 92(6):971–984, 2004.
[71] B. Michiels and B. Macq. Benchmarking image watermarking algorithms with OpenWatermark.
In European Signal Process. Conf., Florence, Italy, September 2006.
[72] O. Guitart, H. Kim, and E. Delp. The watermark evaluation testbed (WET): new functionalities.
In SPIE Int. Conf. on Security and Watermarking of Multimedia Contents, January 2006.
[73] F. Petitcolas, M. Steinebach, F. Raynal, J. Dittmann, C. Fontaine, and N. Fates. A public automated web-based evaluation service for watermarking schemes: Stirmark benchmark. In SPIE Electronic Imaging 2001, Security and Watermarking of Multimedia Contents, Vol. 4314, San Jose, CA, USA, January 2001.
[74] S. Pereira, S. Voloshynovskiy, M. Madueno, S. Marchand-Maillet, and T. Pun. Second genera-
tion benchmarking and application oriented evaluation. In Information Hiding Workshop III,
Pittsburgh, PA, USA, April 2001.
[75] V. Solachidis, A. Tefas, N. Nikolaidis, S. Tsekeridou, A. Nikolaidis, and I. Pitas. A benchmarking
protocol for watermarking methods. In Proc. of ICIP’01, 1023–1026, Thessaloniki, Greece, October 7–10, 2001.
[76] M. Simon, J. Omura, R. Scholtz, and B. Levitt. Spread Spectrum Communications Handbook.
McGraw-Hill, Columbus, OH, 1994.
[77] I. Pitas and T. H. Kaskalis. Applying signatures on digital images. In Proc. of 1995 IEEE Workshop
on Nonlinear Signal and Image Process. (NSIP’95), 460–463, N. Marmaras, Greece, June 20–22,
1995.
[78] N. Nikolaidis and I. Pitas. Copyright protection of images using robust digital signatures. In Proc.
of ICASSP’96, Vol. 4, 2168–2171, Atlanta, USA, 1996.
[79] A. Tefas, A. Nikolaidis, N. Nikolaidis, V. Solachidis, S. Tsekeridou, and I. Pitas. Performance
analysis of correlation-based watermarking schemes employing Markov chaotic sequences. IEEE
Trans. Signal Process., 51(7):1979–1994, 2003.
[80] G. Voyatzis and I. Pitas. Chaotic watermarks for embedding in the spatial digital image domain.
In Proc. of ICIP’98, Vol. II, 432–436, Chicago, USA, October 4–7, 1998.
[81] A. Nikolaidis and I. Pitas. Comparison of different chaotic maps with application to image
watermarking. In Proc. of ISCAS’00, Vol. V, 509–512, Geneva, Switzerland, May 28–31, 2000.
[82] S. Tsekeridou, N. Nikolaidis, N. Sidiropoulos, and I. Pitas. Copyright protection of still images using self-similar chaotic watermarks. In Proc. of ICIP’00, 411–414, Vancouver, Canada, September 10–13, 2000.
[83] A. Tefas, A. Nikolaidis, N. Nikolaidis, V. Solachidis, S. Tsekeridou, and I. Pitas. Statistical analysis
of Markov chaotic sequences for watermarking applications. In 2001 IEEE Int. Symposium on
Circuits and Systems (ISCAS 2001), 57–60, Sydney, Australia, May 6–9, 2001.
[84] A. Mooney, J. Keating, and I. Pitas. A comparative study of chaotic and white noise signals in
digital watermarking. Chaos, Solitons and Fractals, 35:913–921, 2008.

[85] N. Nikolaidis, S. Tsekeridou, A. Nikolaidis, A. Tefas, V. Solachidis, and I. Pitas. Applications of
chaotic signal processing techniques to multimedia watermarking. In Proc. of the IEEE Workshop
on Nonlinear Dynamics in Electronic Systems, 1–7, Catania, Italy, May 18–20, 2000.
[86] N. Nikolaidis, A. Tefas, and I. Pitas. Chaotic sequences for digital watermarking. In S. Marshall,
and G. Sicuranza, editors, Advances in Nonlinear Signal and Image Processing, 205–238. Hindawi
Publishing, New York, 2006.
[87] C. Lin, M. Wu, J. A. Bloom, I. J. Cox, M. L. Miller, and Y. M. Lui. Rotation, scale, and translation
resilient public watermarking for images. In Proc. of SPIE: Electronic Imaging 2000, Vol. 3971,
90–98, January 2000.
[88] P. Dong, J. G. Brankov, N. P. Galatsanos, Y. Yang, and F. Davoine. Digital watermarking robust to
geometric distortions. IEEE Trans. Image Process., 14(12):2140–2150, 2005.
[89] I. Djurovic, S. Stankovic, and I. Pitas. Watermarking in the space/spatial domain using two-
dimensional Radon Wigner distribution. IEEE Trans. Image Process., 10(4):650–658, 2001.
[90] T. K. Tsui, X P. Zhang, and D. Androutsos. Color image watermarking using multidimensional
Fourier transforms. IEEE Trans. Inf. Forensics Secur., 3(1):16–28, 2008.
[91] S. Sangwine and T. Ell. The discrete Fourier transform of a colour image. In Second IMA Conf. on
Image Process., 430–441, Leicester, UK, September 1998.
[92] S. Pereira and T. Pun. Robust template matching for affine resistant image watermarks. IEEE
Trans. on Image Process., 9(6):1123–1129, 2000.
[93] S. Pereira, J. Ruanaidh, F. Deguillaume, G. Csurka, and T. Pun. Template based recovery of Fourier-
based watermarks using log-polar and log-log maps. In Proc. of ICMCS’99, Florence, Italy, Vol. I,
870–874, June 7–11, 1999.
[94] S. Voloshynovskiy, F. Deguillaume, and T. Pun. Multibit digital watermarking robust against local
non-linear geometrical transformations. In ICIP 2001, Int. Conf. Image Process., October 2001.
[95] A. Herrigel, S. Voloshynovskiy, and Y. Rytsar. Template removal attack. In Proc. of SPIE: Electronic
Imaging 2001, January 2001.
[96] V. Solachidis and I. Pitas. Circularly symmetric watermark embedding in 2-D DFT domain. In
Proc. of ICASSP’99, 3469–3472, Phoenix, Arizona, USA, March 15–19, 1999.
[97] S. Tsekeridou and I. Pitas. Embedding self-similar watermarks in the wavelet domain. In Proc. of ICASSP’00, 1967–1970, Istanbul, Turkey, June 5–9, 2000.
[98] M. Kutter. Watermarking resisting to translation, rotation and scaling. Proc. SPIE, 3528:423–431,
1998.
[99] A. Nikolaidis and I. Pitas. Robust watermarking of facial images based on salient geometric pattern
matching. IEEE Trans. Multimed., 2(3):172–184, 2000.
[100] C. S. Lu, S. W. Sun, C. Y. Hsu, and P. C. Chang. Media hash-dependent image watermarking
resilient against both geometric attacks and estimation attacks based on false positive-oriented
detection. IEEE Trans. Multimed., 8(4):668–685, 2006.
[101] X. Wang, J. Wu, and P. Niu. A new digital image watermarking algorithm resilient to
desynchronization attacks. IEEE Trans. Inf. Forensics Secur., 2(4):655–663, 2007.
[102] H. C. Papadopoulos and C. E. W. Sundberg. Simultaneous broadcasting of analog FM and
digital audio signals by means of precanceling techniques. In Proc. of the IEEE Int. Conf. on
Communications, 728–732, 1998.
[103] J. Zhong and S. Huang. Double-sided watermark embedding and detection. IEEE Trans. Inf.
Forensics Secur., 2(3):297–310, 2007.
[104] M. Costa. Writing on dirty paper. IEEE Trans. Inf. Theory, 29:439–441, 1983.
[105] P. Moulin and J. A. O’Sullivan. Information-theoretic analysis of information hiding. IEEE Trans.
Inf. Theory, 49(3):563–593, 2003.
[106] Q. Li and I. Cox. Using perceptual models to improve fidelity and provide resistance to valu-
metric scaling for quantization index modulation watermarking. IEEE Trans. Inf. Forensics Secur.,
2(2):127–139, 2007.
[107] B. Chen and G. Wornell. Quantization index modulation methods for digital watermarking and
information embedding in multimedia. J. VLSI Signal Process., 27:7–33, 2001.
[108] J. J. Eggers, R. Bäuml, and B. Girod. Estimation of amplitude modifications before SCS watermark
detection. In Proc. SPIE Security Watermarking Multimedia Contents, Vol. 4675, 387–398, San Jose,
CA, January 2002.
[109] F. Perez-Gonzalez, C. Mosquera, M. Barni, and A. Abrardo. Rational dither modulation: a high-rate
data-hiding method invariant to gain attacks. IEEE Trans. Signal Process., 53(10):3960–3975, 2005.
[110] A. Abrardo and M. Barni. Informed watermarking by means of orthogonal and quasi-orthogonal dirty paper coding. IEEE Trans. Signal Process., 53(2):824–833, 2005.
[111] C. I. Podilchuk and W. Zeng. Image-adaptive watermarking using visual models. IEEE J. Sel. Areas
Commun., 16(4):525–539, 1998.
[112] A. B. Watson. DCT quantization matrices optimized for individual images. Human Vision, Visual
Process. Digital Disp., SPIE-1903:202–216, 1993.
[113] N. Baumgartner, S. Voloshynovskiy, A. Herrigel, and T. Pun. A stochastic approach to content
adaptive digital image watermarking. In Proc. of Int. Workshop on Information Hiding, 211–236,
Dresden, Germany, September 1999.
[114] I. Karybali and K. Berberidis. Efficient spatial image watermarking via new perceptual masking
and blind detection schemes. IEEE Trans. Inf. Forensics Secur., 1(2):256–274, 2006.
[115] P. Comesana and F. Perez-Gonzalez. On a watermarking scheme in the logarithmic domain and its
perceptual advantages. In IEEE Int. Conf. on Image Process., Vol. 2, 145–148, September 2007.
[116] J. Fridrich and A. Goljan. Images with self-correcting capabilities. In Proc. IEEE Conf. on Image
Process., 25–28, October 1999.
[117] G. L. Friedman. The trustworthy digital camera: restoring credibility to the photographic image.
IEEE Trans. Consum. Electron., 39(4):905–910, 1993.
[118] S. Bhattacharjee and M. Kutter. Compression tolerant image authentication. In Proc. of ICIP’98,
Chicago, USA, Vol. I, 425–429, October 4–7, 1998.
[119] B. Zhu, M. D. Swanson, and A. H. Tewfik. Transparent robust authentication and distortion
measurement technique for images. In Proc. of DSP’96, 45–48, Loen, Norway, September 1996.
[120] P. W. Wong. A public key watermark for image verification and authentication. In Proc. of ICIP’98,
Chicago, USA, Vol. I, 425–429, October 4–7, 1998.
[121] C.-Y. Lin and S.-F. Chang. Semi-fragile watermarking for authenticating JPEG visual content. In
SPIE EI’00, 3971, 2000.
[122] M. Wu and B. Liu. Watermarking for image authentication. In Proc. of ICIP’98, Vol. I, 425–429,
Chicago, USA, October 4–7, 1998.
[123] M. M. Yeung and F. Mintzer. An invisible watermarking technique for image verification. In Proc.
of ICIP’97, Vol. II, 680–683, Atlanta, USA, October 1997.
[124] M. Holliman and N. Memon. Counterfeiting attacks on oblivious blockwise independent invisible
watermarking schemes. IEEE Trans. Image Process., 9(3):432–441, 2000.

[125] L. Xie and G. R. Arce. Joint wavelet compression and authentication watermarking. In Proc. of
ICIP’98, Vol. II, 427–431, Chicago, Illinois, USA, October 4–7, 1998.
[126] D. Kundur and D. Hatzinakos. Digital watermarking for telltale tamper proofing and authentica-
tion. Proc. IEEE, 87(7):1167–1180, 1999.
[127] F. Mintzer, G. W. Braudaway, and M. M. Yeung. Effective and ineffective digital watermarks. In
Proc. of ICIP’97, Vol. III, 9–12, Atlanta, USA, October 1997.
[128] P. W. Wong. A watermark for image integrity and ownership verification. In Proc. of IS&T PINC
Conf., Portland, 1997.
[129] A. Tefas and I. Pitas. Image authentication based on chaotic mixing. In CD-ROM Proc. of
IEEE Int. Symposium on Circuits and Systems (ISCAS’2000), 216–219, Geneva, Switzerland, May
28–31, 2000.
[130] A. Tefas and I. Pitas. Image authentication and tamper proofing using mathematical morpho-
logy. In Proc. of European Signal Processing Conf. (EUSIPCO’2000), Tampere, Finland, September
5–8, 2000.
[131] P. L. Lin, C. K. Hsieh, and P. W. Huang. A hierarchical digital watermarking method for image
tamper detection and recovery. Pattern Recognit., 38(12):2519–2529, 2005.
[132] M. U. Celik, G. Sharma, and A. M. Tekalp. Lossless watermarking for image authentication: a new
framework and an implementation. IEEE Trans. Image Process., 15(4):1042–1049, 2006.
[133] Z. M. Lu, D. G. Xu, and S. H. Sun. Multipurpose image watermarking algorithm based on
multistage vector quantization. IEEE Trans. Image Process., 14(6):822–911, 2005.
[134] K. Maeno, Q. Sun, S. F. Chang, and M. Suto. New semi-fragile image authentication watermark-
ing techniques using random bias and nonuniform quantization. IEEE Trans. Multimed., 8(1):
32–45, 2006.
CHAPTER 23
Fingerprint Recognition
Anil Jain (Michigan State University) and Sharath Pankanti (IBM T. J. Watson Research Center, New York)
23.1 INTRODUCTION
The problem of resolving the identity of a person can be categorized into two fundamentally distinct types of problems with different inherent complexities [1]: (i) verification and (ii) recognition. Verification (authentication) refers to the problem of confirming or denying a person's claimed identity (Am I who I claim I am?). Recognition (Who am I?), often also referred to as identification, refers to the problem of establishing a subject's identity. A reliable personal identification is critical in many daily transactions. For example, access control to physical facilities and computer privileges is becoming increasingly important to prevent their abuse. There is an increasing interest in inexpensive and reliable personal identification in many emerging civilian, commercial, and financial applications.
Typically, a person could be identified based on (i) a person’s possession (“something
that you possess”), e.g., permit physical access to a building to all persons whose identity
could be authenticated by possession of a key; (ii) a person’s knowledge of a piece of infor-
mation (“something that you know”), e.g., permit login access to a system to a person
who knows the user id and a password associated with it. Another approach to identifi-
cation is based on identifying physical characteristics of the person. The characteristics
could be either a person’s anatomical traits, e.g., fingerprints and hand geometry, or his
behavioral characteristics, e.g., voice and signature. This method of identification of a
person based on his anatomical/behavioral characteristics is called biometrics. Since these
physical characteristics cannot be forgotten (like passwords) and cannot be easily shared
or misplaced (like keys), they are generally considered to be a more reliable approach to
solving the personal identification problem.

23.2 EMERGING APPLICATIONS
Accurate identification of a person could deter crime and fraud, streamline business
processes, and save critical resources. Here are a few mind-boggling numbers: about one
billion dollars in welfare benefits in the United States are annually claimed by “double
dipping” welfare recipients with fraudulent multiple identities [44]. MasterCard estimates credit card fraud at $450 million per annum which includes charges made on lost and
stolen credit cards: unobtrusive personal identification of the legitimate ownership of a
credit card at the point of sale would greatly reduce credit card fraud. About 1 billion
dollars worth of cellular telephone calls are made by cellular bandwidth thieves—many
of which are made using stolen PINs and/or cellular phones. Again an identification of
the legitimate ownership of a cellular phone would prevent loss of bandwidth. A reliable
method of authenticating the legitimate owner of an ATM card would greatly reduce
ATM-related fraud worth approximately $3 billion annually [9]. A method of identifying
the rightful check payee would also save billions of dollars that are misappropriated
through fraudulent encashment of checks each year. A method of authentication of each
system login would eliminate illegal break-ins into traditionally secure (even federal
government) computers. The United States Immigration and Naturalization Service has
stated that each day it could detect/deter about 3,000 illegal immigrants crossing the
Mexican border without delaying legitimate persons entering the United States if it had
a quick way of establishing personal identification.
High-speed computer networks offer interesting opportunities for electronic com-
merce and electronic purse applications. Accurate authentication of identities over net-
works is expected to become one of the most important applications of biometric-based
authentication.
Miniaturization and mass-scale production of relatively inexpensive biometric sen-
sors (e.g., solid state fingerprint sensors) has facilitated the use of biometric-based authentication in asset protection (laptops, PDAs, and cellular phones).
23.3 FINGERPRINT AS A BIOMETRIC
A smoothly flowing pattern formed by alternating crests (ridges) and troughs (valleys) on
the palmar aspect of a hand is called a palmprint. Formation of a palmprint depends on
the initial conditions of the embryonic mesoderm from which they develop. The pattern
on the pulp of each terminal phalanx (finger) is considered as an individual pattern and
is commonly referred to as a fingerprint (see Fig. 23.1). A fingerprint is believed to be
unique to each person (and each finger). Even the fingerprints of identical twins are
different.
Fingerprints are one of the most mature biometric technologies and are considered
legitimate evidence in courts of law all over the world. Fingerprints are, therefore, rou-
tinely used in forensic divisions worldwide for criminal investigations. More recently, an
increasing number of civilian and commercial applications are either using or actively
considering the use of fingerprint-based identification because of a better understand-
ing of fingerprints as well as demonstrated matching performance better than any other
existing biometric technology.
FIGURE 23.1
Fingerprints and a fingerprint classification schema involving six categories: (a) arch; (b) tented arch; (c) right loop; (d) left loop; (e) whorl; and (f) twin loop. Critical points in a fingerprint, called core and delta, are marked as squares and triangles, respectively. Note that an arch type fingerprint does not have a delta or a core. One of the two deltas in (e) and both the deltas in (f) are not imaged. A sample minutiae ridge ending (◦) and ridge bifurcation (×) is illustrated in (e). Each image is 512 × 512 with 256 gray levels and is scanned at 512 dpi resolution. All feature points were manually extracted by one of the authors.
23.4 HISTORY OF FINGERPRINTS
Humans have used fingerprints for personal identification for a very long time [29]. Modern fingerprint matching techniques were initiated in the late 16th century [10]. Henry Fauld, in 1880, first scientifically suggested the individuality and uniqueness of finger-
prints. At the same time, Herschel asserted that he had practiced fingerprint identification
for about 20 years [29]. This discovery established the foundation of modern fingerprint
identification. In the late 19th century, Sir Francis Galton conducted an extensive study
of fingerprints [29]. He introduced the minutiae features for fingerprint classification in 1888. The discovery of the uniqueness of fingerprints caused an immediate decline
in the prevalent use of anthropometric methods of identification and led to the adop-
tion of fingerprints as a more efficient method of identification [36]. An important
advance in finger print identification was made in 1899 by Edward Henry, who (actu-
ally his two assistants from India) established the famous “Henry system” of fingerprint
classification [10, 29]: an elaborate method of indexing fingerprints very much tuned
to facilitating the human experts performing (manual) fingerprint identification. In the
early 20th century, fingerprint identification was formally accepted as a valid personal
identification method by law enforcement agencies and became a standard procedure in
forensics [29]. Fingerprint identification agencies were set up worldwide and criminal
fingerprint databases were established [29]. With the advent of livescan fingerprinting
and availability of cheap fingerprint sensors, fingerprints are increasingly used in govern-
ment (US-VISIT program [40]) and commercial (Walt Disney World fingerprint
verification system [8]) applications for person identification.
23.5 SYSTEM ARCHITECTURE
The architecture of a fingerprint-based automatic identity authentication system is
shown in Fig. 23.2. It consists of four components: (i) user interface, (ii) system database,
(iii) enrollment module, and (iv) authentication module. The user interface provides
mechanisms for a user to indicate his identity and input his fingerprints into the system.
The system database consists of a collection of records, each of which corresponds to
an authorized person that has access to the system. In general, the records contain
FIGURE 23.2
Architecture of an automatic identity authentication system [22]. © IEEE. [Block diagram: User Interface, System Database, Enrollment Module (Minutiae Extractor, Quality Checker), and Authentication Module (Minutiae Extractor, Minutiae Matcher).]
the following fields which are used for authentication purposes: (i) user name of the
person, (ii) minutiae templates of the person’s fingerprint, and (iii) other information
(e.g., specific user privileges).
The task of the enrollment module is to enroll persons and their fingerprints into
the system database. When the fingerprint images and the user name of a person to
be enrolled are fed to the enrollment module, a minutiae extraction algorithm is first
applied to the fingerprint images and the minutiae patterns are extracted. A quality
checking algorithm is used to ensure that the records in the system database only consist
of fingerprints of good quality, in which a significant number (default value is 25) of
genuine minutiae are detected. If a fingerprint image is of poor quality, it is enhanced to
improve the clarity of ridge/valley structures and mask out all the regions that cannot be
reliably recovered. The enhanced fingerprint image is fed to the minutiae extractor again.
The task of the authentication module is to authenticate the identity of the person
who intends to access the system. The person to be authenticated indicates his identity
and places his finger on the fingerprint scanner; a digital image of the fingerprint is
captured; minutiae pattern is extracted from the captured fingerprint image and fed to a
matching algorithm which matches it against the person’s minutiae templates stored in
the system database to establish the identity.
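The enrollment and authentication flows just described can be summarized in a short sketch. The following fragment is only an illustration of the data flow, not the system of [22]: the record fields mirror the three fields listed above, the threshold of 25 minutiae is the default quality level mentioned in the text, and extract_minutiae, enhance, and match_score are hypothetical placeholders for the algorithms discussed in the rest of this chapter.

```python
# Illustrative sketch of the enrollment/authentication flow described above.
# extract_minutiae, enhance, and match_score are placeholder callables, not a real API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

MIN_GOOD_MINUTIAE = 25   # default quality threshold mentioned in the text

@dataclass
class UserRecord:
    user_name: str                                              # (i) user name
    templates: List[Any] = field(default_factory=list)          # (ii) minutiae templates
    privileges: Dict[str, bool] = field(default_factory=dict)   # (iii) other information

def enroll(db: Dict[str, UserRecord], user_name: str, image,
           extract_minutiae: Callable, enhance: Callable) -> None:
    """Extract minutiae, enhance once if the print is of poor quality, then store."""
    minutiae = extract_minutiae(image)
    if len(minutiae) < MIN_GOOD_MINUTIAE:            # quality checker
        minutiae = extract_minutiae(enhance(image))  # re-extract from the enhanced image
    db.setdefault(user_name, UserRecord(user_name)).templates.append(minutiae)

def authenticate(db: Dict[str, UserRecord], claimed_name: str, image,
                 extract_minutiae: Callable, match_score: Callable,
                 threshold: float = 0.5) -> bool:
    """Match the captured fingerprint against the claimed identity's templates."""
    record = db.get(claimed_name)
    if record is None:
        return False
    probe = extract_minutiae(image)
    return any(match_score(probe, template) >= threshold for template in record.templates)
```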

23.6 FINGERPRINT SENSING
There are two primary methods of capturing a fingerprint image: inked (offline) and
live scan (inkless) (see Fig. 23.3). An inked fingerprint image is typically acquired in the
following way: a trained professional obtains an impression of an inked finger on paper, and the impression is then scanned using a flatbed document scanner (possibly, for reasons of expediency, MasterCard sends fingerprint kits to its credit card customers, who use them to create the inked impression themselves for enrollment). The live
scan fingerprint is a collective term for a fingerprint image directly obtained from the
finger without the intermediate step of getting an impression on a paper. Acquisition of
inked fingerprints is cumbersome; in the context of an identity authentication system,
it is both infeasible and socially unacceptable. The most popular technology to obtain
a live-scan fingerprint image is based on the optical frustrated total internal reflection
(FTIR) concept [28]. When a finger is placed on one side of a glass platen (prism), ridges
of the finger are in contact with the platen, while the valleys of the finger are not in
contact with the platen (see Fig. 23.4). The rest of the imaging system essentially consists
of an assembly of an LED light source and a CCD placed on the other side of the glass
platen. The light source illuminates the glass at a certain angle, and the camera is placed
such that it can capture the light reflected from the glass. The light incident on the platen at the glass surface touched by the ridges is randomly scattered, while the
FIGURE 23.3
Fingerprint sensing: (a) an inked fingerprint image could be captured from the inked impression of a finger; (b) a live-scan fingerprint is directly imaged from a live finger based on the optical total internal reflection principle: the light scatters where the finger (e.g., ridges) touches the glass prism and reflects where the finger (e.g., valleys) does not touch the glass prism; (c) fingerprints captured using solid state sensors show a smaller area of the finger than a typical fingerprint dab captured using optical scanners; (d) rolled fingerprints are images depicting the nail-to-nail area of a finger; (e) a 3D fingerprint is reconstructed from touchless fingerprint sensors (adopted from [33]); (f) a latent fingerprint refers to a partial print typically lifted from the scene of a crime.
FIGURE 23.4
FTIR-based fingerprint sensing [30]. [Schematic: LED illuminating a glass prism at the ridge/valley contact surface, with a lens and CCD capturing the reflected light.]
FIGURE 23.5
Fingerprint sensors can be embedded in many consumer products.
light incident at the glass surface corresponding to the valleys suffers total internal reflection. Consequently, portions of the image formed on the imaging plane of the CCD
corresponding to ridges are dark and those corresponding to valleys are bright. In recent
years, capacitance-based solid state live-scan fingerprint sensors are gaining popularity
since they are very small in size and can be easily embedded into laptop computers,
mobile phones, computer peripherals, and the like (see Fig. 23.5). A capacitance-based
fingerprint sensor essentially consists of an array of electrodes. The fingerprint skin acts
as the other electrode, thereby forming a miniature capacitor. The capacitance due to the ridges is higher than that due to the valleys. This differential capacitance is the basis
of operation of a capacitance-based solid state sensor [45]. More recently, multispectral
sensors [38] and touchless sensors [33] have been invented.
23.7 FINGERPRINT FEATURES
Fingerprint features are generally categorized into three levels. Level 1 features are the
macro details of the fingerprint such as ridge flow, pattern type, and singular points
(e.g., core and delta). Level 2 features refer to minutiae such as ridge bifurcations and
endings. Level 3 features include all dimensional attributes of the ridge such as ridge
path deviation, width, shape, pores, edge contour, incipient ridges, breaks, creases, scars,
and other permanent details (see Fig. 23.6). While Level 1 features are mainly used for
fingerprint classification, Level 2 and Level 3 features can be used to establish the indi-
viduality of fingerprints. Minutiae-based representations are the most commonly used
FIGURE 23.6
Fingerprint features at three levels [25]. © IEEE. [Level 1: arch, tented arch, left loop, right loop, double loop, whorl; Level 2: line-unit, line-fragment, ending, bifurcation, eye, hook; Level 3: pores, line shape, incipient ridges, creases, warts, scars.]
representation, primarily due to the following reasons: (i) minutiae capture much of
the individual information, (ii) minutiae-based representations are storage efficient, and
(iii) minutiae detection is relatively robust to various sources of fingerprint degradation.
Typically, minutiae-based representations rely on locations of the minutiae and the direc-
tions of ridges at the minutiae location. In recent years, with the advances in fingerprint
sensing technology, many sensors are now equipped with dual resolution (500 ppi/1000
ppi) scanning capability. Figure 23.7 shows the images captured at 500 ppi and 1000 ppi
by a CrossMatch L SCAN 1000P optical scanner for the same portion of a fingerprint.
Level 3 features are receiving more and more attention [6, 25] due to their importance in matching latent fingerprints which generally contain much fewer minutiae than rolled
or plain fingerprints.
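As a concrete illustration of the minutiae-based representation described above, a template can be stored as a simple list of (location, direction, type) records. This is only a sketch of the common convention summarized in the text, not a standardized template format; the field names are assumptions.

```python
# A minimal sketch of a minutiae-based fingerprint template: each minutia is
# stored by its location, the local ridge direction, and its Level 2 type.
# Field names are illustrative, not part of any interchange standard.
from dataclasses import dataclass
from typing import List

@dataclass
class Minutia:
    x: float       # column position (pixels)
    y: float       # row position (pixels)
    theta: float   # direction of the ridge at the minutia (degrees)
    kind: str      # "ending" or "bifurcation"

# A template is simply the list of detected minutiae; Level 1 information
# (e.g., the pattern class) may be kept alongside for indexing/binning.
Template = List[Minutia]
```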
FIGURE 23.7
Local fingerprint images captured at (a) 500 ppi; and (b) 1000 ppi. Level 3 features such as
pores are more clearly visible in a higher resolution image.
23.8 FEATURE EXTRACTION
A feature extractor finds the ridge endings and ridge bifurcations from the input fin-
gerprint images. If ridges can be perfectly located in an input fingerprint image, then
minutiae extraction is a relatively simple task of extracting singular points in a thinned
ridge map. However, in practice, it is not always possible to obtain a perfect ridge map.
The performance of currently available minutiae extraction algorithms depends heavily
on the quality of the input fingerprint images. Due to a number of factors (aberrant
formations of epidermal ridges of fingerprints, postnatal marks, occupational marks,
problems with acquisition devices, etc.), fingerprint images may not always have
well-defined ridge structures.
A reliable minutiae extraction algorithm is critical to the performance of an auto-
matic identity authentication system using fingerprints. The overall flowchart of a typical
algorithm [22, 35] is depicted in Fig. 23.8. It mainly consists of three components:
(i) orientation field estimation, (ii) ridge extraction, and (iii) minutiae extraction and
postprocessing.
1. Orientation Estimation: The orientation field of a fingerprint image represents
the directionality of ridges in the fingerprint image. It plays a very important role in
fingerprint image analysis. A number of methods have been proposed to estimate
the orientation field of fingerprint images [28]. A fingerprint image is typically
divided into a number of nonoverlapping blocks (e.g., 32 × 32 pixels), and an
orientation representative of the ridges in the block is assigned to the block based
on an analysis of grayscale gradients in the block. The block orientation could
FIGURE 23.8
Flowchart of the minutiae extraction algorithm [22]. © IEEE. [Stages: input image, orientation estimation (orientation field), fingerprint locator (region of interest), ridge extraction (extracted ridges), thinning (thinned ridges), minutiae extraction (minutiae).]
be determined from the pixel gradient orientations based on, say, averaging [28],
voting [31], or optimization [35]. We have summarized the orientation estimation
algorithm in Fig . 23.9.
2. Segmentation: It is important to localize the portions of the fingerprint image
depicting the finger (foreground). The simplest approach segments the foreground
by global or adaptive thresholding. A novel and reliable approach to segmenta-
tion by Ratha et al. [35] exploits the fact that there is significant difference in the
magnitudes of variance in the gray levels along and across the flow of a finger-
print ridge. Typically, block size for variance computation spans 1-2 inter-ridge
distances.
3. Ridge Detection: Common approaches to ridge detection use either simple or
adaptive thresholding. These approaches may not work for noisy and low-contrast
(a) Divide the input fingerprint image into blocks of size W × W.

(b) Compute the gradients G_x and G_y at each pixel in each block [4].

(c) Estimate the local orientation at each pixel (i, j) using the following equations [35]:

\[ V_x(i,j) = \sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}} \; \sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}} 2\,G_x(u,v)\,G_y(u,v), \tag{23.1} \]

\[ V_y(i,j) = \sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}} \; \sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}} \bigl(G_x^2(u,v) - G_y^2(u,v)\bigr), \tag{23.2} \]

\[ \theta(i,j) = \frac{1}{2}\,\tan^{-1}\!\left(\frac{V_x(i,j)}{V_y(i,j)}\right), \tag{23.3} \]

where W is the size of the local window; G_x and G_y are the gradient magnitudes in the x and y directions, respectively.

(d) Compute the consistency level of the orientation field in the local neighborhood of a block (i, j) with the following formula:

\[ C(i,j) = \frac{1}{N}\sqrt{\sum_{(i',j')\in D} \bigl|\theta(i',j') - \theta(i,j)\bigr|^2}, \tag{23.4} \]

\[ |\theta' - \theta| = \begin{cases} d & \text{if } \bigl(d = (\theta' - \theta + 360) \bmod 360\bigr) < 180,\\ d - 180 & \text{otherwise,} \end{cases} \tag{23.5} \]

where D represents the local neighborhood around the block (i, j) (in our system, the size of D is 5 × 5); N is the number of blocks within D; θ(i', j') and θ(i, j) are the local ridge orientations at blocks (i', j') and (i, j), respectively.

(e) If the consistency level (Eq. (23.4)) is above a certain threshold T_c, then the local orientations around this region are re-estimated at a lower resolution level until C(i, j) is below a certain level.
FIGURE 23.9
Hierarchical orientation field estimation algorithm [22]. © IEEE.
portions of the image. An important property of the ridges in a fingerprint image
is that the gray level values on ridges attain their local maxima along a direction
normal to the local ridge orientation [22, 35]. Pixels can be identified as ridge
pixels based on this property. The extracted ridges may be thinned/cleaned using
standard thinning [32] and connected component algorithms [34].
4. Minutiae Detection: Once the thinned ridge map is available, the ridge pixels with
three ridge pixel neighbors are identified as ridge bifurcations and those with one ridge pixel neighbor are identified as ridge endings (see the sketch after this list). However, not all the minutiae thus detected are genuine, due to image processing artifacts and noise in the fingerprint image.
5. Postprocessing: In this stage, typically, genuine minutiae are gleaned from the
extracted minutiae using a number of heuristics. For instance, too many minutiae
in a small neighborhood may indicate noise and they could be discarded. Very close
ridge endings oriented antiparallel to each other may indicate spurious minutia
generated by a break in the ridge due to either poor contrast or a cut in the finger.
Two very closely located bifurcations sharing a common short ridge often suggest
extraneous minutia generated by bridging of adjacent ridges as a result of dirt or
image processing artifacts.
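Two of the steps above lend themselves to a compact sketch. The fragment below, assuming NumPy (all names are illustrative), computes one orientation per block via Eqs. (23.1)-(23.3), without the consistency-based hierarchical refinement of Fig. 23.9, and labels minutiae on a thinned binary ridge map by counting ridge-pixel neighbors as in step 4.

```python
# A sketch of block orientation estimation (Eqs. (23.1)-(23.3)) and of minutiae
# detection on a thinned (one-pixel-wide) binary ridge map, assuming NumPy.
import numpy as np

def block_orientation(image, w=32):
    """Return one ridge orientation (radians) per w x w block via Eqs. (23.1)-(23.3)."""
    gy, gx = np.gradient(image.astype(float))          # pixel gradients G_y, G_x
    h, wid = image.shape
    theta = np.zeros((h // w, wid // w))
    for bi in range(h // w):
        for bj in range(wid // w):
            gxb = gx[bi * w:(bi + 1) * w, bj * w:(bj + 1) * w]
            gyb = gy[bi * w:(bi + 1) * w, bj * w:(bj + 1) * w]
            vx = np.sum(2.0 * gxb * gyb)                # Eq. (23.1)
            vy = np.sum(gxb ** 2 - gyb ** 2)            # Eq. (23.2)
            theta[bi, bj] = 0.5 * np.arctan2(vx, vy)    # Eq. (23.3), quadrant-aware
    return theta

def detect_minutiae(thinned):
    """Label ridge pixels by neighbor count: 1 neighbor -> ending, 3 -> bifurcation."""
    endings, bifurcations = [], []
    for i in range(1, thinned.shape[0] - 1):
        for j in range(1, thinned.shape[1] - 1):
            if thinned[i, j]:
                n = int(np.sum(thinned[i - 1:i + 2, j - 1:j + 2])) - 1  # 8-neighborhood
                if n == 1:
                    endings.append((i, j))
                elif n == 3:
                    bifurcations.append((i, j))
    return endings, bifurcations
```

The postprocessing heuristics of step 5 (discarding crowded, antiparallel, or bridged minutiae) would operate on the lists returned by detect_minutiae.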
23.9 FINGERPRINT ENHANCEMENT
The performance of a fingerprint image matching algorithm relies critically on the quality
of the input fingerprint images. In practice, a significant percentage of acquired finger-
print images (approximately 10% according to our experience) is of poor quality. The
ridge structures in poor-quality fingerprint images are not always well defined and hence they cannot be correctly detected. This leads to the following problems: (i) a significant
number of spurious minutiae may be created, (ii) a large percentage of genuine minutiae
may be ignored, and (iii) large errors in minutiae localization (position and orientation)
may be introduced. In order to ensure that the performance of the minutiae extraction
algorithm will be robust with respect to the quality of fingerprint images, an enhancement
algorithm which can improve the clarity of the ridge structures is necessary.
Typically, fingerprint enhancement approaches [7, 14, 18, 26] employ frequency
domain techniques [14, 15, 26] and are computationally demanding. In a small local
neighborhood, the ridges and furrows approximately form a two-dimensional sinusoidal
wave along the direction orthogonal to local ridge orientation. Thus, the ridges and
furrows in a small local neighborhood have well-defined local frequency and local ori-
entation properties. The common approaches employ bandpass filters which model the
frequency domain characteristics of a good-quality fingerprint image. The poor-quality
fingerprint image is processed using the filter to block the extraneous noise and pass
the fingerprint signal. Some methods may estimate the orientation and/or frequency of
ridges in each block in the fingerprint image and adaptively tune the filter characteristics
to match the ridge characteristics.
One typical variation of this theme segments the image into nonoverlapping square
blocks of widths larger than the average inter-ridge distance. A bank of directional bandpass filters is then applied, each filter matched to a predetermined model of generic fingerprint ridges flowing in a certain direction; the filter generating a strong response indicates the dominant direction of the ridge flow in the given block. The resulting orientation information is more accurate, leading to more reliable features. A single
block direction can never truly represent the directions of the ridges in the block and
may consequently introduce filter artifacts.
For instance, one common directional filter used for fingerprint enhancement is a
Gabor filter [21]. Gabor filters have both frequency-selective and orientation-selective
properties and have optimal joint resolution in both spatial and frequency domains. The
even-symmetric Gabor filter has the general form [21]:

\[ h(x,y) = \exp\!\left\{-\frac{1}{2}\left[\frac{x^2}{\delta_x^2} + \frac{y^2}{\delta_y^2}\right]\right\}\cos(2\pi u_0 x), \tag{23.6} \]

where u_0 is the frequency of a sinusoidal plane wave along the x-axis, and δ_x and δ_y are the space constants of the Gaussian envelope along the x and y axes, respectively. Gabor filters with arbitrary orientation can be obtained via a rotation of the x-y coordinate system. The modulation transfer function (MTF) of the Gabor filter can be represented as

\[ H(u,v) = 2\pi\delta_x\delta_y\left[\exp\!\left\{-\frac{1}{2}\left[\frac{(u-u_0)^2}{\delta_u^2} + \frac{v^2}{\delta_v^2}\right]\right\} + \exp\!\left\{-\frac{1}{2}\left[\frac{(u+u_0)^2}{\delta_u^2} + \frac{v^2}{\delta_v^2}\right]\right\}\right], \tag{23.7} \]

where δ_u = 1/(2πδ_x) and δ_v = 1/(2πδ_y). Figure 23.10 shows an even-symmetric Gabor filter and its MTF. Typically, in a 500 dpi, 512 × 512 fingerprint image, a Gabor filter with u_0 = 60 cycles per image width (height), a radial bandwidth of 2.5 octaves, and orientation θ models the fingerprint ridges flowing in the direction θ + π/2.
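A minimal sketch of the even-symmetric Gabor filter of Eq. (23.6) follows, assuming NumPy. Here u0 is specified in cycles per pixel rather than cycles per image width, theta is the filter orientation obtained by rotating the x-y coordinates, and the numeric values in the example bank are illustrative only, not the parameters of any particular system.

```python
# Sketch of the even-symmetric Gabor filter of Eq. (23.6), rotated to an
# arbitrary orientation. Parameter values below are purely illustrative.
import numpy as np

def even_gabor(size, u0, delta_x, delta_y, theta=0.0):
    """Return an odd-sized square kernel tuned to frequency u0 (cycles/pixel)
    and orientation theta (radians); size should be odd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinate system so the sinusoid varies along direction theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr ** 2 / delta_x ** 2 + yr ** 2 / delta_y ** 2))
    return envelope * np.cos(2.0 * np.pi * u0 * xr)

# Example: a bank of eight filters at orientations i * 22.5 degrees, as used by
# the enhancement method summarized next (u0 = 0.12 cycles/pixel is roughly
# 60 cycles per 512-pixel image width; space constants are ad hoc).
bank = [even_gabor(size=25, u0=0.12, delta_x=4.0, delta_y=4.0,
                   theta=np.deg2rad(i * 22.5)) for i in range(8)]
```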
We summarize a novel approach to fingerprint enhancement proposed by Hong [16]
(see Fig. 23.11). It decomposes the given fingerprint image into several component images
using a bank of directional Gabor bandpass filters and extracts ridges from each of the
filtered bandpass images using a typical feature extraction algorithm [22]. By integrating
information from the sets of ridges extracted from filtered images, the enhancement
FIGURE 23.10
An even-symmetric Gabor filter: (a) Gabor filter tuned to 60 cycles/width and 0° orientation; (b) corresponding modulation transfer function.
FIGURE 23.11
Fingerprint enhancement algorithm [16]. [Flowchart stages: input image, bank of Gabor filters, filtered images, ridge extraction, ridge maps, voting algorithm, coarse-level ridge map, unrecoverable region mask, estimate local orientation, orientation field, composition, enhanced image.]
algorithm infers the region of a fingerprint where there is sufficient information to be
considered for enhancement (recoverable region) and estimates a coarse-level ridge map
for the recoverable region. The information integration is based on the observation that
genuine ridges in a region evoke a strong response in the feature images extracted from
the filters oriented in the direction parallel to the ridge direction in that region and at
most a weak response in feature images extracted from the filters oriented in the direction
orthogonal to the ridge direction in that region. The coarse ridge map thus generated
consists of the ridges extracted from each filtered image which are mutually consistent,
and portions of the image where the ridge information is consistent across the filtered
images constitutes a recoverable region. The orientation field estimated from the coarse
ridge map (see Section 23.8) is more reliable than the orientation estimation from the
input fingerprint image.
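The block-level integration just described can be sketched as follows, assuming that for every block a scalar ridge-extraction response is already available from each of the eight filtered images (for example, the number of ridge pixels contributed in that block). The array layout and the 0.25 ratio threshold are assumptions made purely for illustration, not values from [16].

```python
# Rough sketch of the block-level information integration described above:
# a block is deemed recoverable if it responds strongly along one direction
# and only weakly along the orthogonal direction. Threshold is illustrative.
import numpy as np

def integrate_blocks(responses):
    """responses: array of shape (8, Hb, Wb). Returns the dominant filter index
    per block and a boolean mask of blocks considered recoverable."""
    dominant = np.argmax(responses, axis=0)          # strongest-responding direction
    orthogonal = (dominant + 4) % 8                  # direction 90 degrees away
    rows, cols = np.indices(dominant.shape)
    strong = responses[dominant, rows, cols]
    weak = responses[orthogonal, rows, cols]
    recoverable = weak < 0.25 * np.maximum(strong, 1e-9)
    return dominant, recoverable
```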
After the orientation field is obtained, the fingerprint image can then be adaptively enhanced by using the local orientation information. Let f_i(x, y) (i = 0, 1, 2, 3, 4, 5, 6, 7) denote the gray level value at pixel (x, y) of the filtered image corresponding to the orientation θ_i, θ_i = i · 22.5°. The gray level value at pixel (x, y) of the enhanced image can be interpolated according to the following formula:

\[ f_{\text{enh}}(x,y) = a(x,y)\,f_{p(x,y)}(x,y) + \bigl(1 - a(x,y)\bigr)\,f_{q(x,y)}(x,y), \tag{23.8} \]

where p(x,y) = ⌊θ(x,y)/22.5⌋, q(x,y) = ⌈θ(x,y)/22.5⌉ mod 8, a(x,y) = (θ(x,y) − 22.5·p(x,y))/22.5, and θ(x,y)

represents the value of the local orientation field at pixel (x,y). The major reason that we
interpolate the enhanced image directly from the limited number of filtered images is that
the filtered images are already available and the above interpolation is computationally
efficient.
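A minimal sketch of this composition step, assuming the eight filtered images are stacked in a NumPy array and the orientation field is given per pixel in degrees (the array layout and degree convention are assumptions):

```python
# Sketch of Eq. (23.8): interpolate the enhanced image from the two filtered
# images whose orientations bracket the local ridge orientation.
# 'filtered' has shape (8, H, W); 'theta' has shape (H, W), in degrees.
import numpy as np

def compose_enhanced(filtered, theta):
    theta = np.mod(theta, 180.0)                    # ridge orientations modulo 180
    p = np.floor(theta / 22.5).astype(int) % 8      # index of the lower bracketing filter
    q = np.ceil(theta / 22.5).astype(int) % 8       # index of the upper bracketing filter
    a = (theta - 22.5 * p) / 22.5                   # weight, as written in Eq. (23.8)
    rows, cols = np.indices(theta.shape)
    f_p = filtered[p, rows, cols]
    f_q = filtered[q, rows, cols]
    return a * f_p + (1.0 - a) * f_q                # a*f_p + (1-a)*f_q, per Eq. (23.8)
```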
An example illustrating the results of a minutiae extraction algorithm on a noisy
input image and its enhanced counterpart is shown in Fig. 23.12. The improvement in
performance due to image enhancement was evaluated using the fingerprint matcher
described in Section 23.11. Figure 23.13 shows improvement in accuracy of the matcher
FIGURE 23.12
Fingerprint enhancement results: (a) a poor-quality fingerprint; (b) minutiae extracted without image enhancement; (c) minutiae extracted after image enhancement [16].
FIGURE 23.13
Performance of the fingerprint enhancement algorithm: authentic acceptance rate (%) versus false acceptance rate (%), with and without enhancement.
with and without image enhancement on the MSU database consisting of 700 fingerprint
images of 70 individuals (10 fingerprints per finger per individual).
23.10 FINGERPRINT CLASSIFICATION
Fingerprints have been traditionally classified into categories based on information in
the global patterns of ridges. In large-scale fingerprint identification systems, elaborate
methods of manual fingerprint classification systems were developed to index individ-
uals into bins based on classification of their fingerprints; these methods of binning
eliminate the need to match an input fingerprint(s) to the entire fingerprint database
in identification applications and significantly reduce the computing requirements
[3, 13, 23, 27, 42].
Efforts in automatic fingerprint classification have been exclusively directed at repli-
cating the manual fingerprint classification system. Figure 23.1 shows one prevalent
manual fingerprint classification scheme that has been the focus of many automatic fin-
gerprint classification efforts. It is important to note that the distribution of fingers into
the six classes (shown in Fig. 23.1) is highly skewed. Three fingerprint types, namely
left loop, right loop, and whorl, account for over 93% of the fingerprints. A fingerprint
classification system should be invariant to rotation, translation, and elastic distortion of
the frictional skin. In addition, often a significant part of the finger may not be imaged
