Minutiae Extraction from Fingerprint Images - a Review
Fingerprints are the oldest and most widely used form of biometric identification; every person is believed to have unique, immutable fingerprints. Because most automatic fingerprint recognition systems rely on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is essential. However, fingerprint images are often degraded and corrupted by variations in skin and impression conditions, so image enhancement techniques are applied prior to minutiae extraction. Reliably extracting minutiae from the input fingerprint images is therefore a critical step in automatic fingerprint matching. This paper presents a review of a large number of techniques in the literature for extracting fingerprint minutiae, broadly classified into those that operate on binarized images and those that work directly on gray-scale images.
💡 Research Summary
The paper provides a comprehensive review of minutiae extraction techniques, which are pivotal for automatic fingerprint recognition systems. It begins by outlining the hierarchical nature of fingerprint features: Level‑1 (global ridge flow), Level‑2 (minutiae such as ridge endings and bifurcations), and Level‑3 (pores). The authors emphasize that Level‑2 minutiae are the primary discriminative elements used in matching, and that the quality of the input fingerprint image critically influences the reliability of minutiae detection. Consequently, a variety of image enhancement methods—Gabor filtering, directional Fourier filtering, stochastic resonance, fuzzy logic, and neural‑network‑based approaches—are surveyed as preprocessing steps that improve ridge‑valley contrast and reduce spurious structures.
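To make the Gabor-filtering step concrete, here is a minimal sketch (not from the paper) of how an even-symmetric Gabor kernel tuned to a block's estimated ridge orientation and frequency can be built and applied. The parameter values (`sigma`, `size`) are illustrative assumptions, not values prescribed by the surveyed methods.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=4.0, size=15):
    """Even-symmetric Gabor kernel tuned to ridge orientation
    theta (radians) and ridge frequency freq (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the cosine varies across the ridges.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x_t)

def enhance_block(block, theta, freq):
    """Filter one image block with a kernel matched to the block's
    estimated local orientation and frequency (naive convolution)."""
    k = gabor_kernel(theta, freq)
    half = k.shape[0] // 2
    padded = np.pad(block, half, mode='reflect')
    out = np.zeros(block.shape, dtype=float)
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0],
                                      j:j + k.shape[1]] * k)
    return out
```

In practice the image is divided into blocks, orientation and frequency are estimated per block, and each block is filtered with its own matched kernel, which is what boosts ridge-valley contrast along the local ridge direction.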
The core of the review classifies minutiae extraction methods into two broad categories: techniques that operate on binarized images and those that work directly on gray‑scale images. The binarized‑image group is further divided into non‑thinned and thinned approaches. Non‑thinned methods include chain‑code processing, run‑length analysis, and ridge‑flow/local‑pixel analysis. Chain‑code techniques trace object contours and detect minutiae by counting left or right turns; run‑based methods examine transitions in consecutive black pixel runs; ridge‑flow methods follow ridge boundaries and infer minutiae from significant directional changes. Thinned approaches first skeletonize the binary image to a one‑pixel wide ridge representation and then apply crossing‑number calculations or morphological operations. The crossing‑number method counts transitions in an 8‑neighbourhood to classify endings (value 1) and bifurcations (value 3), while morphological schemes use iterative erosion and dilation to eliminate noise‑induced spikes and breaks. The authors note that thinning is highly sensitive to noise, often producing false minutiae that necessitate extensive post‑processing.
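The crossing-number rule described above can be sketched in a few lines: on a one-pixel-wide skeleton, count intensity transitions around each ridge pixel's 8-neighbourhood; half that count is the crossing number, with 1 marking a ridge ending and 3 a bifurcation. This is a generic illustration of the method, not code from the paper.

```python
import numpy as np

def crossing_number(skel, i, j):
    """Crossing number at skeleton pixel (i, j): half the number of
    0/1 transitions around its 8-neighbourhood, traversed circularly."""
    nb = [skel[i-1, j-1], skel[i-1, j], skel[i-1, j+1],
          skel[i,   j+1], skel[i+1, j+1], skel[i+1, j],
          skel[i+1, j-1], skel[i,   j-1], skel[i-1, j-1]]
    return sum(abs(nb[k+1] - nb[k]) for k in range(8)) // 2

def extract_minutiae(skel):
    """Classify interior ridge pixels of a binary skeleton
    (ridge = 1): CN == 1 -> ending, CN == 3 -> bifurcation."""
    endings, bifurcations = [], []
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if skel[i, j] != 1:
                continue
            cn = crossing_number(skel, i, j)
            if cn == 1:
                endings.append((i, j))
            elif cn == 3:
                bifurcations.append((i, j))
    return endings, bifurcations
```

A straight ridge segment yields CN = 2 at interior pixels (no minutia), CN = 1 at both ends, and CN = 3 where a branch joins, which is why noise-induced spikes from thinning show up directly as spurious endings and bifurcations.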
Gray‑scale direct methods bypass binarization altogether. Ridge‑line‑following algorithms trace the ridge centerline and detect minutiae based on abrupt curvature. Fuzzy‑logic approaches model the uncertainty of pixel intensity and orientation through membership functions, offering robustness against noise but requiring careful rule design. The most recent trend highlighted is deep‑learning‑based extraction, where convolutional neural networks (e.g., U‑Net) are trained to predict minutiae locations and orientations directly from raw gray‑scale patches. These data‑driven models demonstrate superior performance on low‑quality prints but demand large annotated datasets and significant computational resources.
After presenting each class of algorithms, the paper discusses common post‑extraction filtering strategies: spatial distance thresholds, angular consistency checks, and structural coherence constraints. These steps are essential to prune spurious minutiae and improve matching accuracy.
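As a minimal illustration of the spatial-distance check (a hypothetical sketch, not the paper's specific rule), the snippet below drops both members of any minutiae pair that lie closer than a threshold, since two opposing endings a few pixels apart are usually artifacts of a ridge break rather than genuine features. The `d_min` value and the `(x, y, angle)` tuple layout are assumptions for illustration.

```python
import math

def prune_close_pairs(minutiae, d_min=8.0):
    """Remove both members of any minutiae pair closer than d_min
    pixels. Each minutia is an (x, y, angle) tuple; angle is kept
    so that angular-consistency checks could be layered on top."""
    keep = [True] * len(minutiae)
    for a in range(len(minutiae)):
        for b in range(a + 1, len(minutiae)):
            xa, ya, _ = minutiae[a]
            xb, yb, _ = minutiae[b]
            if math.hypot(xa - xb, ya - yb) < d_min:
                keep[a] = keep[b] = False
    return [m for m, k in zip(minutiae, keep) if k]
```

Angular consistency checks follow the same pattern: a close pair of endings whose directions are roughly opposite is flagged as a broken ridge, while a minutia whose direction disagrees with the local orientation field is flagged as noise.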
In the concluding sections, the authors critique the current state of the art. While many classical methods are computationally lightweight and suitable for real‑time deployment, they degrade sharply with poor image quality. Conversely, modern gray‑scale and deep‑learning techniques are more resilient but raise issues of scalability, training data availability, and hardware demands. Moreover, most evaluations are confined to a limited set of optical fingerprint sensors, leaving a gap in understanding performance across diverse acquisition modalities (e.g., capacitive, ultrasonic) and environmental conditions (dry, wet, contaminated).
The paper calls for future research to focus on (1) developing lightweight yet robust hybrid frameworks that combine the speed of binary‑based heuristics with the adaptability of learned models, (2) extending validation to multi‑sensor and multi‑environment datasets, and (3) exploring multi‑scale and multi‑modal feature fusion to further enhance minutiae reliability. By systematically cataloguing existing techniques and highlighting their strengths and weaknesses, this review serves as a valuable roadmap for researchers aiming to improve minutiae extraction and, consequently, the overall performance of fingerprint authentication systems.