Unifying Geometric Features and Facial Action Units for Improved Performance of Facial Expression Analysis
📝 Abstract
Previous approaches to model and analyze facial expressions use three different techniques: facial action units, geometric features, and graph-based modeling. However, previous approaches have treated these techniques separately, even though there is an interrelationship between them. Facial expression analysis is significantly improved by utilizing the mappings between the major geometric features involved in facial expressions and the subset of facial action units whose presence or absence is unique to a facial expression. This paper combines dimension reduction techniques and image classification with search space pruning achieved by this unique subset of facial action units. The performance results on a publicly available facial expression database show a 70% improvement in execution time while maintaining emotion recognition correctness.
📄 Content
Unifying Geometric Features and Facial Action Units for Improved Performance of Facial Expression Analysis

Mehdi Ghayoumi1, Arvind K Bansal1
1Computer Science Department, Kent State University, {mghayoum,akbansal}@kent.edu

Keywords: Facial Action Unit, Facial Expression, Geometric Features.

1 INTRODUCTION

Emotion represents an internal state of the human mind [28] and affects a person's interaction with the world. Emotion recognition has become an important research area in: 1) the entertainment industry, to assess consumer response; 2) the health care industry, to interact with patients and elderly persons; and 3) social robotics, for effective human-robot interaction. Online facial emotion recognition, the detection of emotional states from video, has applications in video games, medicine, and affective computing [26]. It will also be useful in the future in the automotive industry and in smart homes, to provide the right ambience and interaction with the occupying humans.
Emotions are expressed by: (1) behavior [28]; (2) spoken dialogs [22]; (3) verbal actions such as variations in speech and its intensity, including silence; (4) non-verbal gestures; (5) facial expressions [11] and tears; and (6) combinations of these. In addition to analyzing these signals, one has to be able to analyze and understand preceding events and/or predicted future events, individual expectations, personality, intentions, cultural expectations, and the intensity of an action. There are many studies that classify primary and derived emotions [4, 6, 16, 28].

During conversation, people scan the facial expressions of other persons to get a visual cue to their emotions. In social robotics, it is essential for robots to analyze facial expressions and express a subset of human emotions in order to have meaningful human-robot interaction. There are many schemes for the classification of human emotions. One popular theory for social robotics is due to Ekman [10, 11], which classifies human emotions into six basic categories: surprise, fear, disgust, anger, happiness, and sadness. In addition, there are many composite emotions derived by combining these basic emotions. Transitions between emotions require continuous facial-expression analysis.

Three major techniques have been used to simulate and study human facial expressions: FACS (Facial Action Coding System) [5], GF (Geometric Features), and GBMT (Graph Based Modeling Techniques) [7]. FACS simulates facial muscle movement using combinations of facial action units (FAUs, or AUs). Different combinations of AUs model different muscle movements and specific facial expressions. FACS has found major use in the realistic visualization of facial expressions through animation [1, 13].
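The idea that specific AU combinations characterize specific expressions can be sketched in a few lines. The AU sets below are commonly cited FACS/EMFACS-style mappings for Ekman's six basic emotions, used here as illustrative assumptions; they are not necessarily the exact subsets the paper identifies.

```python
# Illustrative mapping from Ekman's six basic emotions to commonly cited
# FACS action-unit (AU) combinations (an assumption, not the paper's sets).
EMOTION_AUS = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def match_emotions(observed_aus):
    """Return emotions whose characteristic AU set is fully present."""
    return sorted(e for e, aus in EMOTION_AUS.items() if aus <= observed_aus)

print(match_emotions({6, 12, 25}))  # AU6 (cheek raiser) + AU12 (lip corner puller)
```

Because each emotion is keyed to a small AU subset, detecting (or ruling out) those AUs immediately narrows the set of candidate expressions, which is the pruning intuition the paper builds on.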
Facial expression analysis techniques are based upon geometrical feature extraction [8], modeling the extracted features as graphs, and analyzing the variations in the graph for deviation. Previous techniques [9] start afresh every time they analyze an emotion and do not account for expected emotions. They also do not take into account the fact that a subset of features is unique to the presence or the absence of specific facial expressions. Identifying these subsets of unique features during facial expression analysis can prune the search space.

In this paper, we identify subsets of action units (AUs) that uniquely characterize the presence or the absence of a subset of emotions, and map these AUs to geometric feature points to prune the search space. The technique extends previous facial expression identification techniques based upon LSH (Locality Sensitive Hashing) [17], which employ LSH for efficient pruning of the search space. The technique has been demonstrated using a publicly available image database [21, 23] that has been used by previous approaches. Results show a significant improvement in performance (70% improvement in execution time) compared with previous approaches.

(Published in: New Developments in Circuits, Systems, Signal Processing, Communications and Computers, ISBN: 978-1-61804-285-9.)
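To make the LSH-based pruning concrete, here is a minimal random-hyperplane LSH sketch over geometric feature vectors. All dimensions, names, and data here are assumptions for illustration; the paper's actual LSH scheme [17] may differ in family and parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(x, planes):
    """Sign pattern of x against random hyperplanes -> hash bucket key."""
    return tuple((planes @ x > 0).astype(int))

# Index a gallery of geometric feature vectors (dimension d is assumed).
d, n_planes = 16, 8
planes = rng.standard_normal((n_planes, d))
gallery = rng.standard_normal((100, d))

buckets = {}
for i, v in enumerate(gallery):
    buckets.setdefault(lsh_signature(v, planes), []).append(i)

# A query identical to a stored vector hashes to the same bucket, so the
# search is pruned to that bucket's few candidates instead of all 100.
query = gallery[42]
candidates = buckets.get(lsh_signature(query, planes), [])
print(42 in candidates, len(candidates))
```

Nearby feature vectors tend to fall on the same side of most random hyperplanes, so they collide in the same bucket with high probability; combining this with the AU-based filtering above is what yields the reported reduction in search time.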
This content is AI-processed based on ArXiv data.