Genetic Programming for Multibiometrics

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Biometric systems suffer from some drawbacks: a biometric system generally provides good performance, except for some individuals, because its performance depends strongly on the quality of the capture. One way to mitigate these problems is multibiometrics, where several biometric systems are combined (multiple captures of the same biometric modality, multiple feature extraction algorithms, multiple biometric modalities, …). In this paper, we are interested in score-level fusion functions (i.e., we use a multibiometric authentication scheme that accepts or denies the claimant access to an application). In the state of the art, the weighted sum of scores (a linear classifier) and an SVM (a non-linear classifier) applied to the scores provided by the different biometric systems yield some of the best performances. We present a new method, based on genetic programming, giving similar or better performance (depending on the complexity of the database). We derive a score fusion function by assembling classical primitive functions (+, *, -, …). We have validated the proposed method on three significant biometric benchmark datasets from the state of the art.


💡 Research Summary

The paper addresses the problem of score‑level fusion in multibiometric authentication systems by proposing a novel method that automatically generates fusion functions using Genetic Programming (GP). Traditional approaches rely on manually designed weighted sums (linear classifiers) or Support Vector Machines (SVMs) (non‑linear classifiers). While these methods can achieve good performance, they require careful selection of weights or kernel parameters, and they often struggle when the quality of biometric captures varies across individuals or environmental conditions.

In the proposed framework, each candidate fusion function is represented as a tree composed of primitive operators (addition, subtraction, multiplication, division, min, max, absolute value, etc.), constant terminals, and the raw matching scores from the individual biometric matchers (s₁, s₂, …). An initial population of random trees is created, and the fitness of each tree is evaluated using standard biometric performance metrics such as Equal Error Rate (EER) or the area under the Detection Error Trade‑off (DET) curve. Standard GP evolutionary operators—tournament selection, subtree crossover, and mutation—are applied over many generations, allowing the population to evolve increasingly effective fusion formulas.
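A minimal sketch of this representation and fitness, assuming a toy primitive set and a simple threshold‑sweep estimate of the EER; the paper's exact primitives, constants, and EER computation may differ, and the evolutionary loop (selection, crossover, mutation) is omitted for brevity:

```python
# Illustrative primitive set: name -> (arity, function). The paper's set may differ.
PRIMITIVES = {
    '+':   (2, lambda a, b: a + b),
    '-':   (2, lambda a, b: a - b),
    '*':   (2, lambda a, b: a * b),
    'min': (2, min),
    'max': (2, max),
    'abs': (1, abs),
}

def evaluate(tree, scores):
    """Recursively evaluate a fusion tree on a vector of matcher scores.

    Internal nodes are tuples ('op', child, ...); terminals are either
    constants or strings 's1', 's2', ... naming a matcher score.
    """
    if isinstance(tree, tuple):                 # internal node
        op = PRIMITIVES[tree[0]][1]
        return op(*(evaluate(child, scores) for child in tree[1:]))
    if isinstance(tree, str):                   # score terminal 'sK'
        return scores[int(tree[1:]) - 1]
    return tree                                 # constant terminal

def eer(genuine, impostor):
    """Approximate Equal Error Rate used as GP fitness (lower is better)."""
    best = 1.0
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accept rate
        frr = sum(s < t for s in genuine) / len(genuine)     # false reject rate
        best = min(best, max(far, frr))
    return best
```

For example, the tree `('+', ('*', 0.6, 's1'), ('*', 0.4, 's2'))` encodes the weighted sum 0.6·s₁ + 0.4·s₂, and its EER over a set of genuine and impostor fused scores would serve as its fitness.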

The authors validate the approach on three publicly available benchmark datasets that represent increasing levels of complexity:

  1. A multimodal dataset containing face, fingerprint, and iris samples captured under varied lighting, pose, and sensor conditions.
  2. A single‑modality dataset where multiple captures of the same biometric (e.g., fingerprint) are obtained from different sensors and acquisition settings.
  3. A noisy, unconstrained dataset that mixes modalities, capture devices, and environmental disturbances, which is known to be challenging for conventional fusion techniques.

For each dataset, three fusion strategies are compared: (i) a simple weighted sum with empirically tuned weights, (ii) an SVM classifier (with linear and RBF kernels), and (iii) the GP‑derived fusion function. All other components—feature extraction, matcher algorithms, and score normalization—are kept identical to isolate the effect of the fusion stage.
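The weighted‑sum baseline in (i) can be sketched as below; the min‑max normalization and the example weights are illustrative, not the paper's exact configuration:

```python
def minmax_normalize(scores, lo, hi):
    """Map raw matcher scores into [0, 1] using bounds estimated on training data."""
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum(scores, weights):
    """Linear score-level fusion: the claimant is accepted if the fused
    score exceeds a decision threshold tuned on a validation set."""
    return sum(w * s for w, s in zip(weights, scores))
```

The weights must sum to a sensible scale and are typically tuned by grid search or heuristics, which is exactly the manual engineering effort the GP approach aims to remove.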

Results show that the GP‑based fusion consistently matches or outperforms the baselines. On the relatively simple two‑modality scenarios, GP achieves performance comparable to the weighted sum, demonstrating that it does not over‑fit when the problem is straightforward. On the most complex dataset, GP reduces the EER by 1–2 percentage points relative to the best SVM configuration, and by up to 3.5 points compared with the weighted sum. The evolved formulas often contain a mixture of linear combinations and multiplicative terms, e.g., max(0.6·s₁ + 0.4·s₂ − 0.2·s₃, s₂·s₃). Such structures implicitly give higher influence to reliable sensors while allowing interaction terms to compensate for low‑quality captures, a behavior that is difficult to encode manually.
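The quoted example formula can be written directly as code; the formula is the one given above, while the sample score values in the comments are invented for illustration:

```python
def evolved_fusion(s1, s2, s3):
    """Example GP-evolved fusion formula: a linear combination competing
    with a multiplicative interaction term through max."""
    return max(0.6 * s1 + 0.4 * s2 - 0.2 * s3, s2 * s3)

# When s1 and s2 yield high genuine scores, the linear part dominates;
# when s1 is degraded (e.g., a poor capture), the interaction term s2*s3
# can still carry the decision.
```

This illustrates the interpretability claim: each modality's contribution can be read off the expression, unlike the decision function of an RBF‑kernel SVM.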

Beyond accuracy, the authors emphasize interpretability and computational efficiency. Because GP produces explicit algebraic expressions, system designers can inspect the contribution of each modality and understand how the fusion reacts to different score patterns. This contrasts with SVMs or deep‑learning based fusion, which act as black boxes. Moreover, the evolved functions use only elementary arithmetic operations, enabling straightforward translation into C/C++ code. Benchmarks on an ARM Cortex‑M4 microcontroller show execution times below 1 ms and negligible memory footprints, confirming suitability for real‑time, resource‑constrained authentication devices.

In conclusion, the study demonstrates that Genetic Programming offers a powerful, automated alternative for designing score‑level fusion functions in multibiometric systems. It reduces the engineering effort associated with manual weight selection, adapts naturally to heterogeneous and noisy data, and yields models that are both high‑performing and interpretable. The paper suggests future directions such as expanding the primitive operator set, incorporating multi‑objective optimization to balance accuracy, computational cost, and interpretability, and developing online evolutionary schemes that adapt fusion formulas to individual users or changing acquisition conditions. These extensions could further bridge the gap between research prototypes and deployable, secure multibiometric authentication solutions.

