Feature Selection By KDDA For SVM-Based MultiView Face Recognition

Reading time: 6 minutes
...

๐Ÿ“ Original Info

  • Title: Feature Selection By KDDA For SVM-Based MultiView Face Recognition
  • ArXiv ID: 0812.2574
  • Date: 2008-12-16
  • Authors: See the original arXiv record

๐Ÿ“ Abstract

Applications such as face recognition that deal with high-dimensional data need a mapping technique that produces a low-dimensional feature representation with enhanced discriminatory power, together with a proper classifier able to classify those complex features. Most traditional Linear Discriminant Analysis (LDA) methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the "small sample size" problem that is often encountered in FR tasks. In this short paper, we combine a nonlinear kernel-based mapping of data, called KDDA, with a Support Vector Machine (SVM) classifier to deal with both of these shortcomings in an efficient and cost-effective manner. The method proposed here is compared, in terms of classification accuracy, with other commonly used FR methods on the UMIST face database. Results indicate that the performance of the proposed method is overall superior to that of traditional FR approaches, such as the Eigenfaces, Fisherfaces, and D-LDA methods, and of traditional linear classifiers.


📄 Full Content

Selecting appropriate features to represent faces and properly classifying these features are two central issues for face recognition (FR) systems. For feature selection, the most successful solutions appear to be appearance-based approaches (see [3], [2] for a survey), which operate directly on images or appearances of face objects and process the images as two-dimensional (2-D) holistic patterns, thereby avoiding the difficulties associated with three-dimensional (3-D) modelling and shape or landmark detection [2].

For data reduction and feature extraction in appearance-based approaches, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two powerful tools. Eigenfaces [4] and Fisherfaces [5], built on these two techniques respectively, are two state-of-the-art FR methods that have proved very successful. It is generally believed that LDA-based algorithms outperform PCA-based ones in pattern classification problems, since the former optimize the low-dimensional representation of the objects with a focus on the most discriminant feature extraction, while the latter simply achieve object reconstruction. However, many LDA-based algorithms suffer from the so-called small sample size (SSS) problem, which arises in high-dimensional pattern recognition tasks where the number of available samples is smaller than the dimensionality of the samples. The traditional solution to the SSS problem is to use PCA in conjunction with LDA (PCA+LDA), as is done, for example, in Fisherfaces [11]. More recently, more effective solutions, called Direct LDA (D-LDA) methods, have been presented [12], [13].

Although successful in many cases, linear methods fail to deliver good performance when face patterns are subject to large variations in viewpoint, which results in a highly non-convex and complex distribution; this limited success should be attributed to their linear nature [14]. The Kernel Direct Discriminant Analysis (KDDA) algorithm generalizes the strengths of the recently presented D-LDA [1] and of kernel techniques, while at the same time overcoming many of their shortcomings and limitations.
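
As a point of reference for the linear baselines mentioned above, the following is a minimal sketch of Eigenfaces (PCA plus nearest-neighbour matching) and Fisherfaces (PCA followed by LDA). It assumes scikit-learn; the Olivetti face set is used only as a stand-in, since the UMIST database used in the paper is not bundled with the library, and all component counts are illustrative.

```python
# Minimal Eigenfaces / Fisherfaces baselines (illustrative stand-in data set).
import numpy as np
from sklearn.datasets import fetch_olivetti_faces   # stand-in face set, not UMIST
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X, y = faces.data, faces.target                      # images flattened to N-dimensional vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Eigenfaces: PCA projection followed by nearest-neighbour matching.
eigenfaces = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=1))
eigenfaces.fit(X_tr, y_tr)

# Fisherfaces: PCA first (to sidestep the small-sample-size problem), then LDA.
fisherfaces = make_pipeline(PCA(n_components=100),
                            LinearDiscriminantAnalysis(),
                            KNeighborsClassifier(n_neighbors=1))
fisherfaces.fit(X_tr, y_tr)

print("Eigenfaces accuracy: ", eigenfaces.score(X_te, y_te))
print("Fisherfaces accuracy:", fisherfaces.score(X_te, y_te))
```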

In this work, we first nonlinearly map the original input space to an implicit high-dimensional feature space, where the distribution of face patterns is hoped to be linearized and simplified. The KDDA method is then introduced to effectively solve the SSS problem and derive a set of optimal discriminant basis vectors in the feature space. Finally, the SVM approach is used for classification.
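
scikit-learn has no KDDA implementation, so the pipeline below is only a hedged approximation of the three stages described above: KernelPCA followed by LDA stands in for the kernel discriminant step, and an SVM performs the final classification. The kernel choice, gamma, and component counts are assumptions, not values from the paper.

```python
# Sketch of the overall pipeline: nonlinear kernel mapping -> discriminant
# projection -> SVM classification (KernelPCA + LDA used as a KDDA stand-in).
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

kernel_discriminant_svm = make_pipeline(
    KernelPCA(n_components=100, kernel="rbf", gamma=1e-3),  # implicit nonlinear mapping
    LinearDiscriminantAnalysis(),                           # discriminant basis in the mapped space
    SVC(kernel="linear", C=1.0),                            # SVM on the low-dimensional features
)
kernel_discriminant_svm.fit(X_tr, y_tr)                      # X_tr, y_tr as in the previous sketch
print("Kernel-discriminant + SVM accuracy:", kernel_discriminant_svm.score(X_te, y_te))
```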

The rest of the paper is organized as follows. In Section two, we start the analysis by briefly reviewing the KDDA method. Following that, in Section three, SVM is introduced and analyzed as a powerful classifier. In Section four, a set of experiments is presented to demonstrate the effectiveness of the KDDA algorithm together with the SVM classifier on highly nonlinear, highly complex face pattern distributions. The proposed method is compared, in terms of classification error rate, to KPCA (kernel-based PCA), GDA (Generalized Discriminant Analysis), and the KDDA algorithm with a nearest-neighbour classifier on the multi-view UMIST face database. Conclusions are summarized in Section five.

2 Kernel Direct Discriminant Analysis (KDDA)

In statistical pattern recognition tasks, the problem of feature extraction can be stated as follows:

Assume that we have a training set $Z = \{Z_i\}_{i=1}^{C}$ containing $C$ classes, where each class $Z_i = \{z_{ij}\}_{j=1}^{C_i}$ consists of face images $z_{ij} \in \mathbb{R}^{N}$, with $N$ the face image size and $\mathbb{R}^{N}$ denoting an $N$-dimensional real space [1].

It is further assumed that each image belongs to exactly one of the $C$ classes. The objective is to find a transformation $\varphi$, based on the optimization of certain separability criteria, which produces a mapping $y_{ij} = \varphi(z_{ij})$, $y_{ij} \in \mathbb{R}^{M}$, that leads to an enhanced separability of different face objects.

Let $S_{BTW}^{\Phi}$ and $S_{WTH}^{\Phi}$ be the between-class and within-class scatter matrices in the feature space, respectively, expressed as follows:

$$S_{BTW}^{\Phi} = \frac{1}{L}\sum_{i=1}^{C} C_i\,(\bar{\phi}_i - \bar{\phi})(\bar{\phi}_i - \bar{\phi})^{T}$$

$$S_{WTH}^{\Phi} = \frac{1}{L}\sum_{i=1}^{C}\sum_{j=1}^{C_i}\bigl(\phi(z_{ij}) - \bar{\phi}_i\bigr)\bigl(\phi(z_{ij}) - \bar{\phi}_i\bigr)^{T}$$

where $L$ is the total number of training samples, $\bar{\phi}_i$ is the mean of class $Z_i$ in the feature space, and $\bar{\phi}$ is the average of the ensemble.
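
For concreteness, the sketch below computes the two scatter matrices with numpy in the input space; in KDDA these quantities are formed implicitly in the kernel-induced feature space through dot products rather than explicitly as here.

```python
# Between-class and within-class scatter matrices (input-space version).
import numpy as np

def scatter_matrices(X, y):
    """X: (L, N) sample matrix, y: (L,) class labels. Returns (S_btw, S_wth)."""
    L, N = X.shape
    mean_all = X.mean(axis=0)
    S_btw = np.zeros((N, N))
    S_wth = np.zeros((N, N))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        d = (mean_c - mean_all)[:, None]
        S_btw += Xc.shape[0] * (d @ d.T)             # weighted by class size C_i
        centered = Xc - mean_c
        S_wth += centered.T @ centered               # sum over samples of class c
    return S_btw / L, S_wth / L                      # 1/L normalisation as in the text
```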

The maximization can be achieved by solving the following generalized eigenvalue problem:

$$S_{BTW}^{\Phi}\,\psi = \lambda\,S_{WTH}^{\Phi}\,\psi$$

The feature space $F$ could be considered as a "linearization space" [6]; however, its dimensionality could be arbitrarily large, and possibly infinite. Solving this problem leads us to LDA in that feature space [1].
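
Because the feature space can be arbitrarily, even infinitely, dimensional, the mapping is never computed explicitly; a kernel function supplies the feature-space dot products instead. A small sketch, assuming numpy, with an RBF kernel and gamma value as illustrative choices:

```python
# Dot products in the implicit feature space are obtained via a kernel function,
# so the (possibly infinite-dimensional) mapping phi never has to be built.
import numpy as np

def rbf_kernel_matrix(X, Y, gamma=1e-3):
    """K[i, j] = <phi(X[i]), phi(Y[j])> = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

# Example: K = rbf_kernel_matrix(X_train, X_train)  # L x L feature-space dot products
```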

Assuming that $S_{WTH}^{\Phi}$ is nonsingular, the basis vectors correspond to the first $M$ eigenvectors with the largest eigenvalues of the discriminant criterion:

$$\Psi = \arg\max_{\Psi}\frac{\bigl|\Psi^{T} S_{BTW}^{\Phi}\,\Psi\bigr|}{\bigl|\Psi^{T} S_{WTH}^{\Phi}\,\Psi\bigr|} \qquad (3)$$

The $M$-dimensional representation is then obtained by projecting the original face images onto the subspace spanned by these eigenvectors.
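
A sketch of this step under the same input-space simplification: criterion (3) is maximized by solving the generalized eigenproblem for the two scatter matrices, keeping the M eigenvectors with the largest eigenvalues, and projecting the images onto them. scipy is assumed, and the small ridge term is an assumption added here to keep the within-class scatter invertible under the SSS problem.

```python
# Discriminant basis from the generalized eigenproblem S_btw psi = lambda S_wth psi.
import numpy as np
from scipy.linalg import eigh

def discriminant_basis(S_btw, S_wth, M, ridge=1e-6):
    S_wth_reg = S_wth + ridge * np.eye(S_wth.shape[0])   # regularize against singularity
    eigvals, eigvecs = eigh(S_btw, S_wth_reg)            # generalized symmetric eigenproblem
    order = np.argsort(eigvals)[::-1][:M]                # keep the M largest eigenvalues
    return eigvecs[:, order]                             # (N, M) projection matrix

# Usage: Psi = discriminant_basis(*scatter_matrices(X_tr, y_tr), M=40)
#        features = X_tr @ Psi                           # M-dimensional representation
```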

The maximization process in (3) is not directly linked to the classification error, which is the performance criterion used to measure the success of the FR procedure. Modified versions of the method, such as the Direct LDA (D-LDA) approach, use a weighting function in the input space to penalize those classes that are close together and can potentially lead to misclassifications in the output space.
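
As an illustration of that idea, the sketch below builds a weighted between-class scatter in which pairs of nearby class means receive larger weights; the specific weighting w(d) = d^(-2p) is a common choice from the fractional-step D-LDA literature and is an assumption here, not a formula taken from this paper.

```python
# Weighted between-class scatter: close class-mean pairs get a larger penalty.
import numpy as np

def weighted_between_class_scatter(means, counts, p=2):
    """means: (C, N) class means, counts: (C,) class sizes."""
    C, N = means.shape
    L = counts.sum()
    S_btw = np.zeros((N, N))
    for i in range(C):
        for j in range(i + 1, C):
            d = np.linalg.norm(means[i] - means[j])
            w = d ** (-2 * p)                            # close classes -> large weight
            diff = (means[i] - means[j])[:, None]
            S_btw += (counts[i] * counts[j] / L**2) * w * (diff @ diff.T)
    return S_btw
```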

Most LDA based algorithms including Fish

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
