Improving Iris Recognition Accuracy By Score Based Fusion Method

Iris recognition technology, which identifies individuals by photographing the iris of the eye, has become popular in security applications because of its ease of use, accuracy, and safety in controlling access to high-security areas. Fusion of multiple algorithms to improve biometric verification performance has received considerable attention. The proposed method combines three feature-extraction algorithms: zero-crossing 1-D wavelet analysis, the Euler number, and a genetic-algorithm-based approach. The outputs of these three algorithms are normalized and their scores fused to decide whether a user is genuine or an impostor. This paper discusses the new strategy for computing a multimodal combined score.


💡 Research Summary

Iris recognition has become a cornerstone of biometric security due to its uniqueness, stability, and ease of acquisition. Nevertheless, real‑world deployments often suffer from variations in illumination, occlusion, pupil dilation, and sensor quality, which can degrade the performance of any single‑algorithm system. In response to this challenge, the paper proposes a score‑level fusion framework that combines three fundamentally different feature extraction techniques—zero‑crossing 1‑D wavelet analysis, Euler‑number based topological descriptors, and a genetic‑algorithm‑driven feature selection pipeline. Each extractor produces a matching score that reflects a distinct aspect of the iris texture: the wavelet captures high‑frequency edge transitions, the Euler number encodes invariant topological structure, and the genetic algorithm refines a compact, discriminative subset of features from a large candidate pool.
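To illustrate the zero-crossing idea, the following minimal sketch (not the paper's implementation; all function names and parameters are hypothetical) computes one level of Haar-wavelet detail coefficients for a normalized 1-D iris row and records where the response changes sign, yielding a binary texture signature that can be compared with a Hamming distance:

```python
import numpy as np

def haar_detail(signal):
    """One level of Haar decomposition: high-pass (detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length
        s = np.append(s, s[-1])
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def zero_crossing_signature(signal):
    """Binary signature marking sign changes of the detail coefficients.

    Positions where the wavelet response crosses zero encode the sharp
    intensity transitions (high-frequency edges) of the iris texture.
    """
    d = haar_detail(signal)
    signs = np.sign(d)
    signs[signs == 0] = 1                # treat exact zeros as positive
    return (signs[:-1] != signs[1:]).astype(int)

# Two synthetic iris rows with similar texture -> similar signatures.
rng = np.random.default_rng(0)
row_a = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.05 * rng.standard_normal(64)
row_b = np.sin(np.linspace(0, 8 * np.pi, 64))
sig_a = zero_crossing_signature(row_a)
sig_b = zero_crossing_signature(row_b)
hamming = np.mean(sig_a != sig_b)        # dissimilarity score in [0, 1]
```

A real matcher would build such signatures at several decomposition levels and over the full unwrapped iris band, but the sign-change encoding above is the core of the zero-crossing representation.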

The raw scores are first normalized to a common scale using min‑max scaling, optionally complemented by Z‑score standardization to mitigate distribution skewness. A weighted sum is then computed, where the weights are derived from a preliminary cross‑validation that evaluates each extractor’s individual Equal Error Rate (EER) and Area Under the ROC Curve (AUC). This dynamic weighting ensures that more reliable algorithms contribute proportionally more to the final decision. The fused score is compared against a threshold selected at the operating point that minimizes the trade‑off between False Acceptance Rate (FAR) and False Rejection Rate (FRR).
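The normalization-and-fusion pipeline described above can be sketched as follows. The raw scores, scales, and weights here are invented for illustration only; in the paper the weights would come from cross-validated EER/AUC estimates:

```python
import numpy as np

def min_max_normalize(scores):
    """Map a matcher's raw scores onto a common [0, 1] scale."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse_scores(score_matrix, weights):
    """Weighted-sum fusion of per-matcher normalized scores.

    score_matrix: shape (n_samples, n_matchers), already normalized.
    weights: one weight per matcher, renormalized to sum to 1.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(score_matrix, dtype=float) @ w

# Hypothetical raw scores from the three matchers for four probe images.
wavelet = [0.62, 0.91, 0.40, 0.88]
euler   = [12.0, 30.0,  8.0, 25.0]     # note the different native scale
genetic = [0.55, 0.91, 0.38, 0.83]

norm = np.column_stack([min_max_normalize(s)
                        for s in (wavelet, euler, genetic)])
fused = fuse_scores(norm, weights=[0.35, 0.25, 0.40])  # e.g. from per-matcher EERs
decision = fused >= 0.5                # accept if above the operating threshold
```

Min-max scaling is sensitive to outliers, which is why the text mentions optional Z-score standardization as a complement; the weighted sum itself is unchanged whichever normalization feeds it.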

Experimental validation employed two widely used public databases, CASIA‑V4 and IIT‑D, each containing several hundred subjects with multiple captures under varying conditions. A 5‑fold cross‑validation protocol was adopted to avoid over‑fitting. Stand‑alone performance yielded average EERs of 2.31 % (wavelet), 2.78 % (Euler number), and 2.12 % (genetic‑algorithm features). After applying the proposed score‑fusion, the overall EER dropped to 0.87 %, representing a reduction of more than 60 % relative to the best single method. The fused system also achieved an AUC of 0.9989, indicating near‑perfect separability between genuine and impostor attempts.
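The EER figures above can be estimated by sweeping a threshold over genuine and impostor score distributions and finding the point where FAR and FRR meet. A minimal sketch on synthetic, well-separated distributions (invented here, not the paper's data):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the error rate at the threshold where FAR ~= FRR.

    genuine:  fused scores of true-match attempts (higher = more similar)
    impostor: fused scores of non-match attempts
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

rng = np.random.default_rng(1)
gen = rng.normal(0.8, 0.05, 500)       # toy genuine-score distribution
imp = rng.normal(0.3, 0.05, 500)       # toy impostor-score distribution
eer = equal_error_rate(gen, imp)
```

The reported drop from ~2.1 % to 0.87 % EER after fusion corresponds to the genuine and impostor distributions of the fused score overlapping far less than those of any single matcher.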

Key contributions of the work include: (1) the integration of complementary feature extractors that capture both fine‑grained texture and global topological information; (2) a systematic score‑level fusion pipeline that normalizes disparate similarity measures and assigns data‑driven weights; (3) extensive empirical evidence that fusion markedly improves biometric accuracy, meeting stringent security requirements for low FAR/FRR. The authors acknowledge two primary limitations. First, the weight configuration is dataset‑specific; transferring the system to a new environment would require re‑training or adaptive weight adjustment. Second, the combined computational load of three extractors may be prohibitive for real‑time applications without algorithmic optimization or hardware acceleration. Future research directions suggested include adaptive weight learning, lightweight feature extraction, and integration with eye‑movement compensation to further enhance robustness and deployment feasibility.

