Next Level of Data Fusion for Human Face Recognition
📝 Abstract
This paper demonstrates two fusion techniques applied at two different levels of a human face recognition process: data fusion at the lower level, and decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. The fusion is implemented in the wavelet domain after decomposing the images using Daubechies wavelet coefficients (db2); during fusion, the maximum of the approximation coefficients and of the three detail coefficients is taken. Principal Component Analysis (PCA) is then applied to the fused coefficients, and two different artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. Finally, for decision fusion, the decisions from both classifiers are combined using a Bayesian formulation. The IRIS thermal/visible Face Database has been used for the experiments. Experimental results show that the multiple-classifier system with decision fusion outperforms either single-classifier system.
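The wavelet-domain data fusion described above can be sketched as follows. This is a minimal illustration, assuming the PyWavelets library (`pywt`) and a single decomposition level; the function name `fuse_images` and the toy inputs are my own, not from the paper.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fuse_images(visual, thermal, wavelet="db2"):
    """Fuse two equal-sized grayscale images in the wavelet domain by
    taking the element-wise maximum of each subband (max rule)."""
    # Single-level 2-D DWT: approximation + horizontal/vertical/diagonal details
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visual, wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal, wavelet)
    # Max rule on the approximation and all three detail subbands
    fused = (np.maximum(cA_v, cA_t),
             (np.maximum(cH_v, cH_t),
              np.maximum(cV_v, cV_t),
              np.maximum(cD_v, cD_t)))
    # Inverse DWT reconstructs the fused image
    return pywt.idwt2(fused, wavelet)

# Toy example on random 64x64 "images"
rng = np.random.default_rng(0)
vis = rng.random((64, 64))
thm = rng.random((64, 64))
img = fuse_images(vis, thm)
```

In practice the fused coefficients (rather than the reconstructed image) could be passed directly to PCA, matching the pipeline order given in the abstract.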
📄 Content
NEXT LEVEL OF DATA FUSION FOR HUMAN FACE RECOGNITION
M. K. Bhowmik (a), G. Majumder (a), D. Bhattacharjee (b), D. K. Basu (b), M. Nasipuri (b)
(a) Department of Computer Science & Engineering, Tripura University (A Central University), Suryamaninagar. (b) Department of Computer Science & Engineering, Jadavpur University, Kolkata.
Keywords: Thermal Image, Visual Image, Fused Image, Data Fusion, Wavelet Decomposition, Decision Fusion, Classification.
INTRODUCTION
Biometric security systems based on human face recognition are already an established field for authentication and surveillance purposes. To date, no feature extraction technique for human face recognition exists that can ease the work of classifiers to a large extent: if features could be extracted with very high accuracy, the classifier's task would become easy, and a simple distance measure, e.g. the Euclidean distance, might suffice. This does not happen in practice, so complex classifiers must be designed. Even with such complex classifiers, it has been observed that, in most cases, classifying complex patterns like human faces benefits from multiple classifiers whose individual results are combined into a final classification answer. The ultimate goal of this work is therefore to design a multi-classifier system for human face recognition. Several systems have already been developed using fusion of decisions from different classifiers, e.g. for land cover [15-17], sea ice classification [18], cloud classification [19], and also for face recognition. There are mainly three types of fusion strategies [12]: information/data fusion (low-level fusion), feature fusion (intermediate-level fusion), and decision fusion (high-level fusion). Data fusion combines several sources of raw data to produce new raw data that is expected to be more informative and synthetic than the inputs [12]. Decision fusion uses a set of classifiers to provide a better and unbiased result; the classifiers can be of the same or different types and can use the same or different feature sets [13]. A set of classifiers is therefore used, and the outputs of all the classifiers are finally merged by various methods to obtain the final output. In recent years, decision fusion techniques have been investigated [9] [13] [10] and their application in the classification domain has been widely tested.
Decision fusion can be defined as the process of fusing information from individual data sources. The main contribution of this paper is a decision fusion of two different artificial neural networks based on the distance between each individual test image and the corresponding feature images. First, the images are trained separately using the two classifiers, and finally the decisions of these two classifiers are combined. Many techniques have been developed for face recognition, as given in Table 1.
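The paper's exact Bayesian formulation is not reproduced in this excerpt, but a common way to combine the class posteriors of two classifiers is the naive-Bayes product rule under a conditional-independence assumption. The sketch below illustrates that idea; the function name `bayes_fuse` and the numeric posteriors are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bayes_fuse(p1, p2, prior=None):
    """Combine class-posterior vectors from two classifiers under a
    conditional-independence assumption (naive Bayes product rule):
    P(c | d1, d2) is proportional to P(c | d1) * P(c | d2) / P(c)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if prior is None:
        prior = np.full_like(p1, 1.0 / p1.size)  # uniform class prior
    post = p1 * p2 / prior
    return post / post.sum()  # renormalise to a probability vector

# Two classifiers scoring a 3-class problem
mlp = [0.6, 0.3, 0.1]   # e.g. MLP posteriors (illustrative)
rbf = [0.5, 0.4, 0.1]   # e.g. RBF posteriors (illustrative)
fused = bayes_fuse(mlp, rbf)
# argmax -> class 0
```

The fused vector concentrates probability on the class both classifiers agree on, which is the intuition behind using two heterogeneous networks (MLP and RBF) and merging their decisions.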
J. Czyz et al. [20] used decision fusion to demonstrate their fully automatic multi-frame, multi-expert system on a realistic database. They study sequential fusion, that is, the fusion of outputs of a single face authentication algorithm obtained on several video frames. This type of fusion is referred to as intra-modal fusion. The main contribution of their paper is a fusion architecture which takes into account the distinctive features of intra-modal and sequential fusion. They used two different face verification algorithms, namely a Linear
Table 1: Existing techniques of decision fusion for human face recognition

| Authors | Technique | Reference |
| --- | --- | --- |
| J. Czyz et al. | Decision Fusion for Face Authentication | [20] |
| J. Heo et al. | Robust Face Recognition | [21] |
| B. Gokberk et al. | Rank-based Decision Fusion | [ |