
A Statistical Investigation of Fingerprint Patterns
Bob Hastings
Department of Computer Science & Software Engineering
The University of Western Australia
35 Stirling Highway, Crawley, W.A. 6009, Australia.
email: [email protected]
This project is concerned with the statistical uniqueness of
fingerprints. We aim to ultimately define criteria that will
allow an individual to be identified to a defined confidence
level using fingerprint information. This has become an important issue following several well publicised instances of
mistakes in identification by law enforcement agencies.
Fingerprint based identification relies on finding matching
sets of ridge features (minutiae) between a sample print and
one from a database. We present some preliminary results
of a statistical analysis of the behaviour of two distributions
– the match scores between different prints taken from the
same finger, and the match score between prints from two
different fingers. These two distributions determine whether
it is possible to select a match threshold that will give
sufficient confidence in deciding whether or not two prints
are from the same source.
The above analysis employs a database of prints which have
the positions of the ridge features located manually. We are
also working on the automated analysis of fingerprint images
to extract these features. This will be a necessary step in
analysing larger fingerprint databases to obtain meaningful
detailed statistics on the spatial distribution of the features.
Keywords: fingerprint classification, fingerprint feature
extraction, statistics, minutiae, identification.
CR Classifications: I.2.1 (Applications and Expert Systems) - Medicine and Science.
1.1 Biometric identification
Various techniques exist for characterising an individual based
on certain biological parameters believed to be unique to the
individual. These methods are known as biometric identification, and as well as fingerprints they include iris imaging,
voice recognition, face recognition and handwriting analysis.
A biometric identification system functions by employing
a matcher, defined as “a system that takes two samples
of biometric data and returns a score that indicates their
similarity” [5]. This score is compared with a prespecified
15th School of Computer Science & Software Engineering Research
Conference, Yanchep, Western Australia, 30–31 October 2006.
Copyright © 2006 is asserted by the author(s).
Figure 1: Typical distribution of encoding differences for samples from the same person (left), and
for different people (right), showing the trade-off between false acceptance and false rejection.
threshold in order to decide whether or not two samples
originated from the same individual. Two kinds of errors
may occur when applying the matcher ([5, p 65]):
1. False Match (FM): Deciding that two biometrics are
from the same identity, while in reality they are from
different identities,
2. False Non-Match (FNM): Deciding that two biometrics are not from the same identity, while in reality
they are from the same identity.
The rates at which these two types of error occur are denoted, respectively, FMR (False Match Rate) and FNMR
(False Non-Match Rate). There is always a tradeoff between
these two error rates – the FMR may be reduced by appropriately changing the value of the threshold, but this will
invariably result in a larger FNMR, and vice versa (Figure 1).
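To make the trade-off concrete, the following sketch computes both error rates from two small sets of match scores. The score values, function name and thresholds are illustrative inventions, not drawn from any real matcher.

```python
# Illustrative sketch: how the choice of threshold trades the False Match
# Rate (FMR) off against the False Non-Match Rate (FNMR).
# The score values below are invented for demonstration.
genuine = [0.81, 0.58, 0.90, 0.68, 0.85]   # same-finger match scores
impostor = [0.22, 0.35, 0.41, 0.30, 0.55]  # different-finger match scores

def error_rates(threshold):
    # False match: an impostor score clears the threshold.
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    # False non-match: a genuine score falls below the threshold.
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

# Raising the threshold lowers the FMR but raises the FNMR.
for t in (0.4, 0.6):
    print(t, error_rates(t))
```

With these toy scores, moving the threshold from 0.4 to 0.6 eliminates false matches but introduces a false non-match, mirroring the trade-off shown in Figure 1.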
Identification of an individual via fingerprint analysis has
traditionally relied on having a fixed number of point matches
based on the ridge pattern. A point match is a single feature, for example a ridge termination, that is identified in
both the input print and in the data print to which it is being matched. The feature set must satisfy, as far as possible,
the criteria of uniqueness and invariance, so as to minimise
the probability of a false match or a false non-match.
The manual examination and classification of fingerprints is
a time-intensive and error-prone activity [16, p. 1] due to
the sheer number of samples in the databases, the attention
to detail required and the inadequacy of existing classification schemes. Serious concerns have been raised about the
reliability of human examiners. Several cases of misidentification based on fingerprints, leading to wrongful arrests
and/or convictions, have been noted [17] [28] [7].
Some things that make fingerprint identification difficult are:
incomplete prints, distortion of the print and finger injuries
such as cuts and scars [11], [16, pp. 131-132].
The result of these factors is that two prints of the same
finger, taken at different times, will never be identical even
if the same sensing technology is employed.
The use of fingerprints in establishing the presence of a suspect at the scene of a crime, or in clearing innocent suspects,
is well known and has a long history (see for example [12,
pp. 275-317]). Newer applications include ([16]):
• Authorisation of a person to enter a restricted area (in
lieu of requiring the person to carry a pass card),
• User identification at an ATM (avoiding the need to
remember a PIN),
• Use of the print as a seal of authenticity in lieu of a signature.
A number of challenges in the courts to the reliability of the
fixed point standard have resulted in the abandonment of this
standard in most countries. The challenges have revolved
around the admissibility rules for scientific evidence set forth
in the Daubert v. Merrell Dow Pharmaceuticals case
in 1993 in the U.S. The Daubert ruling laid down a number
of criteria for the admissibility of scientific evidence, the
most significant in the current context being that the rate
of error should be known and stated [9]. Although this
case had nothing to do with fingerprints, there are clearly
implications regarding the admissibility of fingerprints as
evidence. There is a need to properly assess the reliability
and validity of the identification methods presently used.
2.1 History of fingerprints
Finger ridge patterns were noted and described in detail by
scientists as early as the 17th century [12]. Sir Francis Galton (1822-1911) began in the late 1880s the process of classifying fingerprints according to the presence and position of
“triangles”, or delta regions, in the ridge flow field. Following on from this work, Edward Henry (1859-1931) identified
patterns in the ridge flow which were to become the basis of
the standard fingerprint classification scheme described in
section 2.2.1 [12, p. 278].
Many researchers, eg. [21], have noted that:
• The pattern appears to remain unchanged with age
and to regenerate even after serious injury to the skin.
• Each individual has a wholly distinctive set of ridge patterns.
The first criminal conviction based on the evidence provided
by “latent” fingerprints took place in Argentina in 1892 [12,
p. 284].
2.2 Fingerprint Features and Classification
Before a print can be classified the relevant features must be
extracted, either manually or automatically. These features
are of three kinds:
• Level 1 features: Large scale patterns of the ridge flow.
• Level 2 features: Finer scale features of the placement and behaviour of the ridges themselves, especially instances where ridges meet or terminate. These
features are referred to as minutiae.
• Level 3 features: Details of the individual ridges,
eg. the location of the sweat pores on the ridges, or
irregularities in the ridge edges. These entities are
much more difficult than the level 1 and 2 features
to capture in a fingerprint image.
2.2.1 Level 1 Features (Ridge Flow Patterns)
Penrose [21] describes how a system of lines (such as the
epidermal ridges on the fingertips) that are locally parallel
but can curve gradually has certain topological constraints.
The lines may fan out and form cusps without disturbing
the continuity of the field. However there are two kinds of
discontinuity that can arise:
1. A loop is formed when the parallel field turns through
180◦ and meets itself.
2. A triradius or delta is formed when three fields surround a point, leaving a gap in the centre.
The presence of a loop in a pattern requires that a triradius
also be present in order to maintain broad scale continuity
of the ridge line flow (this can be shown mathematically).
Berry and Stoney [4] classify these patterns into 4 types:
arch, tented arch, loop and whorl.
• An arch fingerprint has ridges that enter from one
side, rise to a hump, and leave from the opposite side.
• A tented arch is similar to the plain arch, but contains one ridge with high curvature, and has one loop and one delta.
• A loop has one or more ridges that enter from one side,
curve back, and leave on the same side they entered.
A loop may be further classified as a right loop or a
left loop according to whether the ridges enter and
leave from the right side or the left side.
• A whorl contains at least one ridge that makes a complete 360◦ path around the centre of the feature. The
whorl class can be further divided into two categories: twin loop or plain whorl; the latter may be
thought of as two loops that coincide. In either case,
a whorl is always associated with exactly two deltas.
These 5 categories – arch, tented arch, right loop, left loop
and whorl, illustrated in figure 2, form the basis of the
Galton-Henry classification scheme [12, p. 279], [16].
2.2.2 Level 2 Features (Ridge Placement and Minutiae)
Several classes of feature may be identified. Berry and Stoney [4], for example,
define seven basic types of fine scale ridge features. However,
these can all be described as combinations of the two basic types:
ridge endings and ridge bifurcations. Figure 3 shows
some typical occurrences of these.
Subtypes include the independent ridge (a short isolated
ridge that can be represented as two ridge endings), and the
crossover (a short ridge running between two longer ridges,
which can be specified as two bifurcations).
2.2.3 Level 3 Features (Fine Ridge Details)
Level 3 details are considered beyond the scope of this project.
Whilst details such as the individual sweat pores are frequently visible in good quality database prints (“ten-prints”),
and improved latent fingerprint extraction techniques can often reveal these features in a latent print, there has been little work done on the individuality of these features, though
human fingerprint examiners frequently make use of them
as an additional aid to verification [2, pp. 149-150].
3 Fingerprint Processing and Feature Extraction
There are several steps involved in the use of fingerprints for
identification purposes:
1. Selection of the appropriate parameters, eg. presence
or absence of certain features in the ridge pattern.
2. Capture of the data, i.e. the image of the fingerprint.
3. Determination of the relevant parameters, i.e. identification of features of the print (minutiae). This may
be manual or automated.
Fingerprint Classification
Once the parameters have been extracted, the print must be
described by some classification scheme in order to narrow
down the search for a matching database print as much as
possible. The Galton-Henry scheme provides an essential
but very rudimentary initial classification, using only Level
1 detail. To narrow down significantly the number of possible matching prints, a much stronger analysis is necessary,
using both level 1 and level 2 detail. Coetzee and Botha
[6], for example, describe a number of different classification approaches:

Figure 2: Examples of fingerprint classes: (a) arch, (b) tented arch, (c) loop, (d) twin loop, (e) whorl.
Figure 3: Closeup of a portion of a fingerprint, indicating some occurrences of ridge bifurcations and ridge endings.
• Using the directional image. Feature vectors are derived based on the dominant ridge direction in each
block of the image.
• Using a Fourier Transform combined with a wedgering detector which partitions the frequency domain
image into frequency components.
• Using a correlation classifier, which compares the
print images directly.
3.1 Previous Work on Fingerprint Processing
and Feature Extraction
Jain and Pankanti [11, Fig. 8.11] describe a typical feature
extraction algorithm at the conceptual level:
1. From the input image, estimate the ridge orientation at each point.

2. Generate a ridge representation:

• Identify the locations of the ridges.

• Use a thinning algorithm to reduce the ridges to one pixel width, creating a “skeleton image”.

• Refine the ridges to eliminate imperfections such as spurious gaps.

3. Analyse the ridges to identify the location and orientation of the minutiae.

Minutiae extraction can be performed by using the Crossing Number or Condition Number ([25]; [1]). The image is first preprocessed, in the course of which it is converted to a binary image. The crossing number is then calculated at each pixel by traversing the eight neighbouring pixels and counting the number of transitions between the two possible binary values:

CN = Σ_{k=1}^{8} | P_{k+1} − P_k |

where P_k has the value 0 or 1, and P_9 = P_1.

A CN of 2 identifies a ridge ending, while a CN of 6 identifies a bifurcation.

The spacing between the ridges, or equivalently the ridge frequency, may be a useful quantity for classification purposes, and can be computed in a number of ways. Most researchers have proceeded by first producing an estimate of the orientation field, then using one or more projections perpendicular to the orientation to infer the ridge frequency. Ratha et al. [22], for example, analyse an image via the following steps:

1. Perform preprocessing and segmentation to convert the image into a binary image with the print impression separated from the background.

2. Derive the orientation field using Principal Component Analysis. The variance also gives a “quality” measurement for the orientation estimate.

3. Locate the ridges using a single projection perpendicular to the orientation. The ridges are smoothed, then the peaks are counted, giving a ridge count.
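The crossing number computation described above can be sketched as follows; the clockwise neighbour ordering and the function name are assumptions made for illustration.

```python
# Sketch: crossing number at a pixel of a binary ridge image, computed
# from its 8 neighbours (assumed listed in clockwise order).
def crossing_number(neighbours):
    """neighbours: the 8 surrounding pixel values (0 or 1)."""
    # P9 = P1: close the cycle so all eight transitions are counted.
    cycle = list(neighbours) + [neighbours[0]]
    return sum(abs(cycle[k + 1] - cycle[k]) for k in range(8))

# A ridge ending has a single ridge neighbour, giving CN = 2.
assert crossing_number([0, 0, 1, 0, 0, 0, 0, 0]) == 2
# A bifurcation has three separated ridge neighbours, giving CN = 6.
assert crossing_number([1, 0, 1, 0, 1, 0, 0, 0]) == 6
```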
Most existing classification approaches make use of the orientation image [16, p. 176]. This representation was first
introduced by [10]. The orientation field may be computed
by the application of appropriate image filters sensitive to
particular texture orientations ([8]; [14]). For example, a 2-D
Gabor filter may be applied to detect periodicity in a particular direction [27]. [19] show how to efficiently compute
a hierarchy of such filters at various resolutions and orientations.
Another way to compute orientation is to employ Principal
Component Analysis. This proceeds by taking the intensity gradients in the x and y directions at each point, and
constructing a correlation matrix. The orientation derived
in this way is effectively the direction in which the intensity
variation is a minimum. Bazen and Gerez [3] describe a method for deriving a high-resolution orientation field via Principal Component Analysis, extracting the singular points, identifying each as a core or delta, and finding their orientations.
4. Locate the minutiae by constructing a skeleton image
and examining the behaviour of the binary values at
the neighbour points (i.e. using the Crossing Number).
5. Perform postprocessing to eliminate spurious minutiae
– ridge breaks, spikes, and minutiae introduced as artifacts of boundary effects.
Mehtre [18] uses a similar approach, though in computing
the ridge directions he uses the magnitudes of intensity differences (whereas Principal Component Analysis is based on
the squares of these differences).
Maio and Maltoni [15] note that automatic minutia extraction is subject to three kinds of error:
1. False minutiae,
2. Misclassification (ridge ending classified as a ridge bifurcation or vice versa),
3. Missed (“dropped”) minutiae.
These errors are caused by gaps in the impression of the
ridge lines, and by noise in the image.
In an attempt to reduce the first two types of error, they
employ a neural network based minutiae filter. The minutia neighbourhood is first normalised with respect to orientation, scale and perturbations such as image intensity. A
neural network classifier is then applied to the resultant pattern. They succeed in effecting significant reductions in the
overall error rate, though with a small increase in the rate
of dropped minutiae.
The elimination of false minutiae is a field in itself. It is
important because a falsely marked minutia point in a data
or sample print can dramatically increase the likelihood of
a false match or a false non-match. Xiao and Raafat [29]
describe a combined statistical and structural approach to
removing false minutiae in skeleton fingerprint images.
3.2 Work on Fingerprint Classification and Individuality
Beginning with the work of Galton in 1892, many researchers
have devised models to describe the individuality of fingerprints [24] [16, p 261]. Traditionally the number of matching
minutiae has been used as the criterion for identification. In
the U.S., for example, a tally of twelve match points was formerly regarded as definitely sufficient, while seven or eight
was considered acceptable if the points satisfied an experienced examiner. Stoney concludes, however, that:
From a statistical viewpoint, the scientific foundation for fingerprint individuality is incredibly
weak. [24]
A recent approach to classification is that taken by [26], who
perform a statistical analysis of the distribution of minutiae
over a 100 by 100 pixel grid centred on a given minutia and
aligned with the orientation of this minutia. The grid is
subdivided into 100 squares, each of size 10 by 10 pixels,
and a binary code is generated recording the presence or
absence of a minutia in each square. A similarity measure
between two codes is then calculated by taking the Hamming
distance between the two bit sequences. The prints in the
database are thereby ranked according to the closeness of
the match with the latent print.
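A minimal sketch of the grid-code comparison described above; the helper names are illustrative, while the geometry follows the description (a 100 by 100 pixel window divided into 10 by 10 pixel squares, giving a 100-bit code).

```python
# Sketch: binary grid code over a minutia neighbourhood, compared by
# Hamming distance (function names are illustrative).
def grid_code(offsets):
    """offsets: minutia (dx, dy) positions relative to the reference
    minutia, in a 100x100 window (-50..49), already rotation-aligned."""
    code = [0] * 100
    for dx, dy in offsets:
        if -50 <= dx < 50 and -50 <= dy < 50:
            code[((dy + 50) // 10) * 10 + (dx + 50) // 10] = 1
    return code

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Two similar neighbourhoods give a smaller Hamming distance than
# two dissimilar ones.
assert hamming(grid_code([(5, 5), (-20, 10)]),
               grid_code([(7, 4), (-20, 10)])) <= \
       hamming(grid_code([(5, 5), (-20, 10)]), grid_code([(40, -40)]))
```

Database prints would then be ranked by this distance against the latent print's code, smallest first.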
Pankanti et al. [20] derive a formula for estimating the probability that two randomly selected prints that respectively
have m and n discernible minutiae will have a given number ρ of minutiae that match, i.e. are present in the same
location to within a certain spatial tolerance. The size of
this tolerance depends on the observed variation between
different impressions of the same finger, and is selected via
analysis of a database of mated fingerprint pairs. Specifying
this tolerance determines the value of M , the total number of possible minutia locations. The probability of such a
match follows a hyper-geometric distribution with parameters M , m, n and ρ, and is shown to be highly sensitive to
both the number of minutiae and the number of matches.
In other words, a misjudgment regarding the existence of a
minutia when examining the prints dramatically increases
the probability of a false match or a false non-match.
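The hyper-geometric model can be sketched directly; the parameter values used below are illustrative only, not taken from [20].

```python
# Sketch of the hyper-geometric match probability: rho shared minutia
# positions when prints with m and n minutiae occupy M possible locations.
from math import comb

def p_match(M, m, n, rho):
    """P(exactly rho of the n locations coincide with the m locations)."""
    return comb(m, rho) * comb(M - m, n - rho) / comb(M, n)

# Sensitivity: a one-minutia change in the counts shifts the probability.
print(p_match(100, 12, 12, 6))
print(p_match(100, 13, 12, 6))
```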
There are two main components to this project:
1. We employ a point pattern matching technique to gather
statistics on the degree of match in the configurations
of features points between prints from different fingers,
and also between different prints from the same finger.
For this work a comparatively small database (several hundred prints) is available that contains prints
from which the features have already been manually
extracted and tabulated.
2. We are investigating ways of improving the automatic
extraction of features from the prints, in the hope that
this will enable us to generate more reliable statistical
information from much larger print databases, such
as those maintained by NIST which contain several
hundred thousand prints.
4.1 Feature Pattern Matching
The type of question one might want to ask is: given a
fingerprint with eleven feature points identified on it, how
many fingerprints are there in the database that contain
the same eleven points in roughly the same relative spatial
positions, and what is the probability that a different print
will be matched more strongly than the correct print? A
related question is: given two prints taken from the same
finger under different circumstances, what is the nature of
the variation in the location of discernible features? The
answers to these questions will determine the reliability of
this match point technique as a means of identification.
Given a pair of prints to be examined for a possible match,
plus the positions of the features points in each, we first
generate a “feature descriptor” for each feature point in each
of the two prints. This feature descriptor is based on the
spatial distribution and orientation of other minutiae in the
neighbourhood of the point. It is generated as follows:
• A square array of cells is set up. The cells represent the
presence or absence of a minutia at the point in the
image whose displacement from the reference minutia
corresponds to the coordinates of the cell. The centre
cell therefore corresponds to the reference point.
• The array is rotated so as to align it with the orientation of the reference minutia.
• A Gaussian smoothing filter is applied so as to provide
some spatial tolerance in the location of points when
testing for a match.
• A similarity score between two feature descriptors is
then calculated by summing the absolute differences
of the values at corresponding cell locations. (A lower
value results in a higher similarity score.)
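The descriptor construction can be sketched as follows; the grid size, the crude box blur standing in for the Gaussian filter, and all names are illustrative choices rather than the parameters actually used.

```python
# Minimal sketch of the feature-descriptor idea (grid size, blur kernel
# and similarity scaling are illustrative choices).
def descriptor(offsets, size=7):
    """offsets: (dx, dy) of neighbouring minutiae relative to the reference
    minutia, already rotated into the reference orientation frame."""
    half = size // 2
    grid = [[0.0] * size for _ in range(size)]
    for dx, dy in offsets:
        if -half <= dx <= half and -half <= dy <= half:
            grid[dy + half][dx + half] = 1.0
    # Crude 3x3 smoothing in place of the Gaussian filter, so nearby
    # (not identical) cell positions still overlap between two prints.
    blurred = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            cells = [grid[j][i]
                     for j in range(max(0, y - 1), min(size, y + 2))
                     for i in range(max(0, x - 1), min(size, x + 2))]
            blurred[y][x] = sum(cells) / 9.0
    return blurred

def dissimilarity(a, b):
    # Sum of absolute cell differences; lower means more similar.
    return sum(abs(ca - cb) for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

d1 = descriptor([(2, 1), (-1, -2)])
d2 = descriptor([(2, 2), (-1, -2)])  # one minutia shifted by one cell
d3 = descriptor([(3, -3), (0, 3)])   # entirely different neighbourhood
assert dissimilarity(d1, d2) < dissimilarity(d1, d3)
```

The final assertion illustrates the role of the smoothing: a one-cell shift in a single minutia still yields a much closer descriptor than a different neighbourhood does.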
Figure 4: Feature descriptor generated from a region containing four minutiae (including the reference minutia at the centre). The bright region in the
centre results from the overlapping of the blurred
images of the two minutiae near the centre.
Figure 4 shows the positions of minutiae in the neighbourhood of a reference minutia, and the corresponding feature descriptor.
The effect of the smoothing filter is that two feature points
need not lie at exactly the same pixel locations in order
to contribute to a high similarity score; this allows for the
effects of print distortion and inaccuracies in the imaging process.
A list of putative first-guess matches is then generated. The
pair of points – one from the first print and the other from
the second print – whose feature descriptors have the closest
match are chosen as the first pair in the list. From the
points that remain, the most closely matching points from
each set are chosen as the second pair, and so on. Although some
of the pairings may be erroneous – i.e. even if the two prints
are from the same finger the correct point correspondences
have not been chosen – it is expected that the percentage of
correct matches will be sufficient for this list of point pairs
to serve as input to the second stage of the procedure.
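The generation of the putative match list can be sketched as a greedy pass over a dissimilarity matrix; the matrix values and the function name are illustrative.

```python
# Sketch: repeatedly take the globally best-matching descriptor pair
# among the points not yet used in either print.
def putative_pairs(dist):
    """dist[i][j]: descriptor dissimilarity between point i of print 1
    and point j of print 2 (lower = more similar)."""
    used1, used2, pairs = set(), set(), []
    candidates = sorted((dist[i][j], i, j)
                        for i in range(len(dist))
                        for j in range(len(dist[0])))
    for d, i, j in candidates:
        if i not in used1 and j not in used2:
            pairs.append((i, j))
            used1.add(i)
            used2.add(j)
    return pairs

# 3x3 example: the closest descriptor pairs are chosen first.
print(putative_pairs([[0.1, 5.0, 4.0],
                      [6.0, 0.2, 3.0],
                      [2.0, 7.0, 0.3]]))
```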
In the second stage, we perform an alignment by using a
RANSAC methodology to find the translation and rotation
mapping that gives the best correspondence between the two
paired sets of feature points. Having found the best relative
alignment for the two images, the number and quality of
the pairwise correspondences are used to calculate an overall
match score for the two images. Here “quality” refers to the
spatial separation between the actual position of the feature
point in the second image and the position derived by taking
the position of its matched partner in the first image and
applying the rotation and translation transformation that
was calculated by the RANSAC algorithm. A pair of putative
matches is retained if this relative separation is less than
10% of the effective diameter of the point sets. This value
is somewhat arbitrary – a relative distortion of 30% is not
uncommon [26].

Figure 5: Frequency of a given number of match
points in prints taken from different fingers. Upper
curve results from permitting an affine point transformation, lower curve results from allowing only
rigid transformations.
The number of point pairs that now remain in the list of
matches is denoted the match score.
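A toy sketch of this second stage follows; the tolerance, iteration count and point data are illustrative, and the minimal two-point rigid estimate stands in for whatever estimator the full system uses.

```python
# Toy RANSAC sketch: given putative point pairs, sample two pairs to
# hypothesise a rotation+translation, then count inliers.
import math, random

def apply_rigid(t, p):
    """t = (angle, tx, ty); applies the rigid transform to point p."""
    c, s = math.cos(t[0]), math.sin(t[0])
    return (c * p[0] - s * p[1] + t[1], s * p[0] + c * p[1] + t[2])

def ransac_score(pairs, tol=2.0, iters=200, seed=0):
    rng = random.Random(seed)
    best = 0
    for _ in range(iters):
        (a1, b1), (a2, b2) = rng.sample(pairs, 2)
        # Rotation from the direction between the two sampled points...
        ang = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
               - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
        # ...then the translation that maps a1 onto b1 under that rotation.
        c, s = math.cos(ang), math.sin(ang)
        t = (ang, b1[0] - (c * a1[0] - s * a1[1]),
                  b1[1] - (s * a1[0] + c * a1[1]))
        inliers = sum(math.dist(apply_rigid(t, a), b) <= tol
                      for a, b in pairs)
        best = max(best, inliers)
    return best  # the match score: number of retained point pairs
```

Applied to pairs that mix correctly corresponding points with a few wrong pairings, the score converges on the number of pairs consistent with a single rigid motion.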
We examined the statistical behaviour of the distribution of
the match scores between prints from two different fingers
(figure 5).
There were 258 prints in the database; the number of pairwise comparisons was therefore about 33,000.
The mean number of match points is approximately seven,
with a variance close to one; note however that a small number of print pairs give at least ten match points. In fact there
were several print pairs in the database that showed twelve
or more match points.
It is true that this comparison considers only the location
and orientation of minutiae – other features such as the ridge
orientation and spacing may well serve to differentiate between two prints from different fingers. These results do
indicate however that using say an eleven-point match standard for identification is at best risky.
(For purposes of comparison, when the matching algorithm
is applied to different prints taken from the same finger, it
typically finds all or nearly all of the matches that were manually identified in the prints. Typically there are between
20 and 80 of these, depending on the quality of the latent print.)
4.2 Automated Feature Extraction
The components of the ridge field we are interested in are:
1. The orientation of the ridges,
2. The locations at which ridges terminate or bifurcate,
and the directionality of these features (i.e. in which
of the two possible flow directions the ridge is ending or bifurcating).
We first preprocess the image by applying a Butterworth
filter, which removes both high and low frequency components. In doing this we take advantage of our knowledge
that the average ridge spacing on the fingertips is about
0.5mm, and seldom differs from this average by more than
a factor of 2, so we do not need to retain features with a
frequency component outside this range. Removal of the
low frequency components effectively normalises the image,
so that the mean intensity is zero over regions significantly
larger than the average ridge spacing.
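The band-pass idea can be sketched via the Butterworth magnitude response; the cutoffs below (1 and 4 cycles/mm, i.e. a factor of 2 either side of the 2 cycles/mm implied by 0.5 mm ridge spacing) and the filter order are illustrative choices, not the parameters actually used.

```python
import math

# Sketch: Butterworth band-pass magnitude response, formed as the product
# of a low-pass at f_high and a high-pass at f_low (cutoffs illustrative).
def bandpass_response(f, f_low=1.0, f_high=4.0, order=2):
    """f: spatial frequency in cycles/mm; returns a gain in [0, 1]."""
    if f == 0:
        return 0.0  # DC (the overall mean intensity) is removed entirely
    lowpass = 1.0 / math.sqrt(1.0 + (f / f_high) ** (2 * order))
    highpass = 1.0 / math.sqrt(1.0 + (f_low / f) ** (2 * order))
    return lowpass * highpass

# The typical ridge frequency (~2 cycles/mm) passes almost unchanged,
# while very low and very high frequencies are suppressed.
assert bandpass_response(2.0) > 0.9
assert bandpass_response(0.0) == 0.0
assert bandpass_response(20.0) < 0.1
```

Suppressing DC is what normalises the mean intensity to zero over regions larger than the ridge spacing.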
We determine the orientation field via principal component
analysis (see section 3.1). It can be shown that the direction
θ for which the mean squared gradient of image intensity is
a maximum is given by:
2θ = atan2(P, D)

where:

• P = 2Gxy
• D = Gxx − Gyy
• Gxx is the mean squared gradient in the x direction,
• Gyy is the mean squared gradient in the y direction,
• Gxy is the mean product of the x and y gradients.
This gives two values of θ (differing by 180◦ ) for which the
mean squared gradient is a maximum; the two directions
at right angles to these therefore give the orientation of the
ridges. Along with the orientation we calculate the coherence, which is a measure of the anisotropy of the image
gradient. We define the coherence C as:

C = √(P² + D²) / (Gxx + Gyy)
Here the numerator represents the total directional gradient
response while the denominator represents the total variance (which includes any non-directional variance). For an
image region containing perfectly parallel lines of uniform
intensity the coherence would be 1, while a value of zero
would correspond to a region where the intensity variation
is independent of direction.
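The computation above can be sketched for a single image block; the toy gradient samples and the function name are illustrative.

```python
import math

# Sketch: ridge orientation and coherence for one image block, from
# lists of x- and y-gradient samples over that block.
def block_orientation(gx, gy):
    n = len(gx)
    Gxx = sum(v * v for v in gx) / n   # mean squared x-gradient
    Gyy = sum(v * v for v in gy) / n   # mean squared y-gradient
    Gxy = sum(a * b for a, b in zip(gx, gy)) / n
    P, D = 2 * Gxy, Gxx - Gyy
    theta = 0.5 * math.atan2(P, D)     # direction of maximum squared gradient
    coherence = math.sqrt(P * P + D * D) / (Gxx + Gyy)
    # Ridges run at right angles to the maximum-gradient direction.
    return theta + math.pi / 2, coherence

# Gradients purely along x: perfectly coherent, ridges run vertically.
orient, coh = block_orientation([1, -1, 1, -1], [0, 0, 0, 0])
# Isotropic gradients: coherence drops to zero.
_, coh_iso = block_orientation([1, 0, -1, 0], [0, 1, 0, -1])
```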
The orientation field now enables us to locate the cores and
deltas in the print. In tracing out a closed path on the image
(a) Core
(b) Delta
Figure 6: Closed loop surrounding a singular point,
showing the orientation vector at various points
around the curve.
and observing the behaviour of the ridge orientation vector,
we note that in most cases the nett change in θ is zero.
However, if the path surrounds a core point or a delta, the
change in θ is π or −π respectively (see figure 6). This total
change in θ is denoted the Poincaré Index (here referred
to as PI).
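The path test can be sketched on a synthetic orientation field; the sampling and the wrapping convention (orientations defined modulo π) are illustrative.

```python
import math

# Sketch: Poincare index from ridge orientations sampled in order around
# a closed path. Orientations are only defined modulo pi, so each step
# is wrapped into (-pi/2, pi/2] before summing.
def wrap(d):
    while d <= -math.pi / 2:
        d += math.pi
    while d > math.pi / 2:
        d -= math.pi
    return d

def poincare_index(angles):
    return sum(wrap(angles[(k + 1) % len(angles)] - angles[k])
               for k in range(len(angles)))

# Synthetic core field: orientation = half the polar angle around the
# singular point, so the field turns by pi over one circuit.
core = [0.5 * (2 * math.pi * k / 36) for k in range(36)]
assert abs(poincare_index(core) - math.pi) < 1e-6
# A uniform field encloses no singular point: index ~ 0.
assert abs(poincare_index([0.3] * 36)) < 1e-12
```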
Making use of Green’s Theorem [23, p. 916], which relates
the integral of a quantity around a closed curve to the integral of the curl of the quantity over the area inside the curve:

∮ (P dx + Q dy) = ∬ (∂Q/∂x − ∂P/∂y) dx dy

and setting P = ∂θ/∂x and Q = ∂θ/∂y, we find that the Poincaré Index is equal to the algebraic
sum of the Poincaré indices for all points inside the curve, as
given by

PI = ∂Q/∂x − ∂P/∂y
In other words, the sum of the Poincaré indices for all points
in a region is equal to π times the difference between the
number of cores and the number of deltas.
Ridge lines in real fingerprint images contain many tiny
breaks and irregularities, and it is necessary to distinguish
these from true ridge endings. The solution adopted is to use a
bank of oriented smoothing filters. At each point in the image, the appropriate filter is selected that corresponds with
the orientation of the ridge. Such smoothing can be applied iteratively, resulting in an image in which the ridges
are smooth and uniform but still well defined.
A ridge map is then generated by simply thresholding the
resulting image at zero.
The minutiae are located by a modified version of the Crossing Number called the Divergence Number (DN), which
is calculated by traversing a path around a unit cell, one
pixel on a side, and counting crossings as positive or negative according to whether they are leaving or entering the
cell (as given by the ridge orientation) (see figure 7).
For a unit cell there are only 3 possible values of DN: ±2 or
zero. A non-zero value indicates a change in the number of
ridges, i.e. a minutia. A direction can then be assigned to
the minutia by multiplying the divergence number by a unit
vector in the direction of the ridge orientation.
The methodology also calculates orientation values and finds
spurious ridges and minutiae in parts of the image outside
the fingerprint. An ongoing challenge is the segmentation of
the image to separate the print from the background. The
directional coherence is useful for this but is not sufficient
– artifacts such as ruled lines and handwriting display a
high degree of directional coherence but are not part of the print.
Preliminary statistical results show that there is a small but
non-negligible probability of two prints from two different
fingers giving a similar pattern of features, indicating that
the number of match points is probably not a sufficiently
reliable determinant of individuality. (This fact is acknowledged by fingerprint examiners, who have for the most part
abandoned the number-of-match-points standard in favour
of a more holistic approach that looks at every discernible
aspect of a print including level 3 detail if available.)
Automated feature extraction is definitely feasible, at least
to a high enough accuracy to permit the output to be used
as the input to a more thorough analysis of large numbers
of good quality prints, to derive some answers on the individuality of given patterns.
Clearly the question of whether fingerprints are really unique
can only be answered with certainty by examining the finger ridge patterns of every human on the planet. However,
by examining the frequency of occurrence of various ridge
patterns, over areas ranging in size from very small regions
to a significant fraction of the whole fingerprint, we hope
to be able to provide a scientifically based estimate of the
probability of occurrence of a given pattern.
Once this is achieved, we will have a means of automatically
locating the singular points and minutiae, and will be in a
position to apply this to the images in the extensive NIST
databases and derive some statistics on the spatial distribution of the features.
The chief task remaining in the feature extraction component of the project is to reliably separate the fingerprint
from the background.
Figure 8 shows a typical input image and the results of various stages of processing. It can be seen that, in the area of
the image corresponding to the actual fingerprint, we obtain
a good orientation field and a clean well defined ridge map.
Figure 7: Calculation of Divergence Number by tracing a path around a unit cell and counting transitions between the 2 possible values in the binary ridge map. (a) Ridge traversing cell: DN = (+1) + 0 + 0 + (−1) = 0. (b) Ridge ending in cell: DN = 0 + (+1) + (+1) + 0 = 2.
[1] J C Amengual, A Juan, J Pérez, F Prat, S Sáez, and
J M Vilar. Real-time minutiae extraction in
fingerprint images. In Proceedings of the 6th
International Conference on Image Processing and its
Applications, pages 871–875. IEEE, Jul 1997.
[2] David R Ashbaugh. Quantitative-qualitative friction
ridge analysis. CRC Press, 1999.
[3] A M Bazen and S H Gerez. Systematic methods for
the computation of the directional fields and singular
points of fingerprints. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 24(7):905–919, Jul 2002.
[4] John Berry and David A Stoney. History and
development of fingerprinting. In Lee and Gaensslen
[13], chapter 1, pages 1–40.
[5] Ruud M Bolle, Jonathan A Connell, Sharath
Pankanti, Nalini K Ratha, and Andrew W Senior.
Guide to Biometrics. Springer, 2004.
[6] Louis Coetzee and Elizabeth C Botha. Fingerprint
recognition in low quality images. Pattern Recognition,
26(10):1441–1460, 1993.
[7] Andy Coghlan. How far should prints be trusted?
New Scientist, (2517):6–7, Sep 2005.
[8] D.J. Fleet and A.D. Jepson. Hierarchical construction
of orientation and velocity selective filters. IEEE
Transactions on Pattern Analysis and Machine
Intelligence, 11(3):315–325, Mar 1989.
[9] Edward German. Daubert hearings. In Lee and
Gaensslen [13], pages 413–417.
[10] A Grasselli. On the automatic classification of
fingerprints. In S Watanabe, editor, Methodologies of
Pattern Recognition, pages 253–273. Academic Press,
New York, 1969.
[11] Anil Jain and Sharath Pankanti. Automated fingerprint
identification and imaging systems. In Lee and
Gaensslen [13], chapter 8, pages 275–326.
[12] Brian Lane. The Encyclopedia of Forensic Science.
Headline Book Publishing PLC, London UK, 1992.
[13] Henry C Lee and R E Gaensslen, editors. Advances in
Fingerprint Technology. CRC Press, Boca Raton, Fla,
USA, 2nd edition, 2001.
[14] A. Low. Introductory Computer Vision and Image
Processing. McGraw-Hill, UK, 1991.
[15] D Maio and D Maltoni. Neural network based
minutiae filtering in fingerprints. In Proceedings of the
14th International Conference on Pattern Recognition,
pages 1654–1658, 1998.
[16] Davide Maltoni, Dario Maio, Anil K Jain, and Salil
Prabhakar. Handbook of Fingerprint Recognition.
Springer-Verlag, New York NY USA, 2003.
Figure 8: Typical fingerprint image and the results of processing: (a) input image; (b) orientation field; (c) ridge-enhanced image; (d) binary ridge map.
[17] Flynn McRoberts, Steve Mills, and Maurice Possley.
Forensics under the microscope, Oct 2004. Retrieved
27 Sep 2005 from http://www.truthinjustice.org/
[18] B M Mehtre. Fingerprint image analysis for automatic
identification. Machine Vision and Applications,
6:124–139, 1993.
[19] K.R. Namuduri, R. Mehrotra, and N. Ranganathan.
Efficient computation of Gabor filter based
multiresolution responses. Pattern Recognition,
27(7):925–938, Jul 1994.
[20] S Pankanti, S Prabhakar, and A K Jain. On the
individuality of fingerprints. IEEE Transactions on
Pattern Analysis and Machine Intelligence,
24(8):1010–1025, 2002.
[21] L S Penrose. Dermatoglyphics. Scientific American,
221(6):72–84, Dec 1969.
[22] Nalini K. Ratha, Shaoyun Chen, and Anil K. Jain.
Adaptive flow orientation-based feature extraction in
fingerprint images. Pattern Recognition,
28(11):1657–1672, Nov 1995.
[23] James Stewart. Calculus. Brooks/Cole, 3rd edition, 1995.
[24] David A Stoney. Measurement of fingerprint
individuality. In Lee and Gaensslen [13].
[25] Raymond Thai. Fingerprint image enhancement and
minutiae extraction. Retrieved 11 Jul 2005, from
studentprojects/raymondthai/, 2003.
[26] Peter Tu and Richard Hartley. Statistical significance
as an aid to system performance evaluation. In 6th
European Conference on Computer Vision, Dublin,
Ireland, Jun 26 – Jul 1, 2000, Proceedings, Part II,
pages 366–378, 2000.
[27] R.M. Turner. Texture discrimination by Gabor
functions. Biological Cybernetics, 55:71–82, 1986.
[28] Jeremy Webb. The myth of fingerprints. New
Scientist, (2517):3, Sep 2005.
[29] Qinghan Xiao and Hazem Raafat. Fingerprint image
postprocessing: a combined statistical and structural
approach. Pattern Recognition, 24(10):985–992, 1991.