This paper presents a novel approach to the face recognition problem. Our method combines 2D Principal Component Analysis (2DPCA), one of the prominent methods for extracting feature vectors, and Support Vector Machine (SVM), a powerful discriminative method for classification. Experiments with the proposed method were conducted on two public data sets, FERET and AT&T; the results show that the proposed method improves classification rates.
Human faces contain a great deal of important biometric information. This information can be used in a variety of civilian and law enforcement applications. For example, identity verification for physical access control in buildings or security areas is one of the most common face recognition applications. At the access point, an image of the claimant's face is captured by a camera and compared with the stored images of that person; access is granted only if the images match. For high-security areas, a combination with card terminals is possible, so that a double check is performed.
Since Matthew Turk and Alex Pentland [1] used Principal Component Analysis (PCA) to deal with the face recognition problem, PCA has become the standard method for extracting feature vectors in face recognition because it is stable and performs well. Nevertheless, PCA cannot capture the local variances of images unless this information is explicitly provided in the training data. To deal with this problem, researchers have proposed other approaches. For example, Wiskott et al. [2] suggested a technique known as elastic bunch graph matching to extract local features of face images. Penev and Atick [3] proposed using local features to represent faces; they used PCA to extract local feature vectors and reported a significant improvement in face recognition. Bartlett et al. [4] proposed using independent component analysis (ICA) for face representation to extract higher-order dependencies of face images that cannot be represented by Gaussian distributions, and reported that it performed better than PCA. Ming-Hsuan Yang [5] suggested Kernel PCA (a nonlinear subspace method) for face feature extraction and recognition and reported that his method outperformed PCA (a linear subspace method). However, the computational costs of these methods are higher than that of PCA.
To solve these problems, Jian Yang [6] proposed a new method called 2D Principal Component Analysis (2DPCA). In conventional PCA, face images are represented as vectors by some technique such as concatenation. In contrast, 2DPCA represents face images as matrices, i.e., 2D images, instead of vectors (Fig. 1). Using 2D images directly is quite simple, and the local information of the original images is sufficiently preserved, which may yield more important features for facial representation. In face identification, some face images are easy to recognize while others are hard to identify; for example, frontal face images are easier to recognize than profile face images. Therefore, we propose a weighted-2DPCA model to deal with this difficulty.
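The practical difference between the two representations can be illustrated by the size of the covariance matrix each method must eigendecompose. The sketch below is illustrative only; the 112x92 image size is that of the AT&T face images, and the batch of random matrices merely stands in for real training faces.

```python
import numpy as np

# Illustrative sizes: AT&T face images are 112x92 pixels.
m, n = 112, 92
images = np.random.rand(5, m, n)  # a toy batch of 5 "face images"

# Conventional PCA: each image is vectorized to length m*n, so the
# covariance matrix is (m*n) x (m*n) -- 10304 x 10304 here.
pca_cov_shape = (m * n, m * n)

# 2DPCA: images stay as matrices; the image covariance matrix is
# built from (A - mean)^T (A - mean) and is only n x n.
mean = images.mean(axis=0)
G = sum((A - mean).T @ (A - mean) for A in images) / len(images)

print(pca_cov_shape)  # (10304, 10304)
print(G.shape)        # (92, 92)
```

The n x n eigenproblem of 2DPCA is far cheaper than the (m*n) x (m*n) one of vectorized PCA, which is the efficiency argument behind keeping images in matrix form.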
In 1995, Cortes and Vapnik [7] presented the foundations of the Support Vector Machine (SVM). Since then, it has become a prominent method for solving problems in pattern classification and regression. The basic idea behind SVM is to find the optimal linear hyperplane such that the expected classification error for future test samples is minimized, i.e., good generalization performance. Clearly, the goal of a classifier is not merely the lowest training error. For example, a k-NN classifier can achieve 100% training accuracy with k=1; in practice, however, it can be a poor classifier because it has high structural risk. Cortes and Vapnik suggested the decomposition testing error = training error + risk of model (Fig. 2). To achieve the lowest testing error, they proposed the structural risk minimization inductive principle: a discriminative function that classifies the training data accurately and belongs to a set of functions with the lowest VC dimension will generalize best, regardless of the dimensionality of the input space. Based on this principle, an optimal linear discriminative function can be found. For linearly non-separable data, SVM maps the input to a higher-dimensional feature space where a linear hyperplane can be found. Although there is no guarantee that a linear solution will always exist in the higher-dimensional space, effective solutions are found in practice. For face gender classification, many researchers [8][9][10][11] have applied SVM in their studies and reported very positive experimental results. In our research, we combine the strengths of the two methods, weighted-2DPCA and SVM, to solve the problem.
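The mapping of linearly non-separable data into a higher-dimensional space can be illustrated with the classic XOR pattern. The sketch below is illustrative and not the paper's implementation: the explicit feature map phi(x1, x2) = (x1, x2, x1*x2) adds the monomial that a polynomial kernel would introduce implicitly, and in the lifted space the XOR points become linearly separable.

```python
import numpy as np

# XOR-style data with +/-1 coordinates; the label is the product of
# the coordinates, so no line in 2D separates the two classes.
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([1, 1, -1, -1])

# Explicit feature map phi(x1, x2) = (x1, x2, x1*x2): the extra
# monomial is what a degree-2 polynomial kernel supplies implicitly.
def phi(x):
    return np.array([x[0], x[1], x[0] * x[1]])

# In the lifted 3D space the hyperplane w = (0, 0, 1), b = 0
# separates the classes perfectly.
w = np.array([0.0, 0.0, 1.0])
preds = np.array([np.sign(w @ phi(x)) for x in X])
print(preds)  # [ 1.  1. -1. -1.] -- all four points classified correctly
```

A kernel SVM never forms phi explicitly; it evaluates inner products in the lifted space through the kernel function, which is what makes very high-dimensional (even infinite-dimensional) feature spaces tractable.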
The remaining sections of this paper discuss the implementation of our face recognition system, the related theory, and the experiments. Section 2 gives details of 2DPCA. Section 3 discusses how to use SVM in face classification. Section 4 describes the implementation and experiments. Finally, Section 5 concludes the paper.
As mentioned above, we propose a weighted-2DPCA to deal with practical situations in which some face images in the database are difficult to identify due to their poses (frontal or profile) or their quality (noise, blur).
The training procedure of 2DPCA consists of the following steps.
Step 1: Collect the training data, a set of M face images A_1, ..., A_M, each represented as an m x n matrix.
Step 2: Compute the image covariance (scatter) matrix G = (1/M) * sum_i (A_i - Abar)^T (A_i - Abar), where Abar is the mean image.
Step 3: Compute the eigenvectors x_1, ..., x_d of G corresponding to its d largest eigenvalues; these eigenvectors serve as the projection axes.
First, a projection point of an image A onto an axis x_k is the feature vector y_k = A x_k; stacking y_1, ..., y_d column-wise gives the feature matrix of the image.
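The steps above can be sketched in NumPy as follows. This is a minimal sketch of the standard 2DPCA formulation; the array sizes, the random toy images, and the choice d=2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def two_d_pca(images, d):
    """2DPCA sketch: images has shape (M, m, n); returns d projection
    axes and the projected feature matrices.
    """
    # Step 1: training data -- M face images kept as m x n matrices.
    M = len(images)
    mean = images.mean(axis=0)

    # Step 2: image covariance (scatter) matrix G, size n x n.
    G = sum((A - mean).T @ (A - mean) for A in images) / M

    # Step 3: eigenvectors of G for the d largest eigenvalues.
    vals, vecs = np.linalg.eigh(G)    # eigh: ascending eigenvalues
    axes = vecs[:, ::-1][:, :d]       # top-d axes, shape (n, d)

    # Feature extraction: Y = A @ axes stacks y_k = A x_k column-wise,
    # giving an m x d feature matrix per image.
    features = np.array([A @ axes for A in images])
    return axes, features

rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 8, 6))   # toy batch of 8x6 "images"
axes, feats = two_d_pca(imgs, d=2)
print(axes.shape, feats.shape)  # (6, 2) (10, 8, 2)
```

In a full system, the resulting feature matrices (flattened or compared row-wise) would then be fed to the classifier; the weighted variant proposed here additionally weights the training images by their difficulty.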