Sign Language Tutoring Tool


📝 Abstract

In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user’s video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve hand gestures as well as head movements and facial expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movements and facial expressions.

📄 Content

eNTERFACE’06, July 17th – August 11th, Dubrovnik, Croatia ⎯ Final Project Report

Sign Language Tutoring Tool
Oya Aran¹, Ismail Ari¹, Alexandre Benoit², Ana Huerta Carrillo³, François-Xavier Fanard⁴, Pavel Campr⁵, Lale Akarun¹, Alice Caplier², Michele Rombaut² and Bulent Sankur¹
¹Bogazici University, ²LIS_INPG, ³Technical University of Madrid, ⁴Universite Catholique de Louvain, ⁵University of West Bohemia in Pilsen

Abstract—In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user’s video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve hand gestures as well as head movements and facial expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movements and facial expressions.

Index Terms—Gesture recognition, sign language recognition, head movement analysis, human body animation

I. INTRODUCTION

The purpose of this project is to develop a Sign Language Tutoring Demonstrator that lets users practice demonstrated signs and get feedback about their performance. In a learning step, a video of a specific sign is demonstrated to the user, and in the practice step, the user is asked to repeat the sign. An evaluation of the produced gesture is given to the learner, together with a synthesized version of the sign that gives the user visual feedback in a caricatured form.

The specificity of sign language is that the whole message is contained not only in hand gestures and shapes (manual signs) but also in facial expressions and head/shoulder motion (non-manual signs). As a consequence, the language is intrinsically multimodal.

In order to solve the hand trajectory recognition problem, Hidden Markov Models (HMMs) have been used extensively over the last decade. Lee and Kim [1] propose a method for online gesture spotting using HMMs. Starner et al. [2] used HMMs for continuous American Sign Language (ASL) recognition; the vocabulary contained 40 signs, and the sentence structure to be recognized was constrained to personal pronoun, verb, noun, and adjective. In 1997, Vogler and Metaxas [3] proposed a system for both isolated and continuous ASL sentence recognition with a 53-sign vocabulary. In a later study [4], the same authors attacked the scalability problem and proposed a method for parallel modeling of the phonemes within an HMM framework. Most sign language recognition systems concentrate on hand gesture analysis only. In , a survey on automatic sign language analysis is given and the integration of non-manual signs with hand gestures is examined. A preliminary version of the tutor we propose to develop, demonstrated at EUSIPCO, uses only hand-trajectory-based gesture recognition [6]. The signs selected were signs that could be recognized based solely on the trajectory of one hand.

In this project, we aim at developing a tutoring system able to cope with two sources of information: hand gestures and head motion. The database contains complex signs that are performed with two hands and head gestures. Therefore, our sign language recognition system fuses the data coming from two sources of information to recognize a performed sign: the shape and trajectory of the two hands, and the head movements.
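The fusion of the two information sources can be illustrated with a small sketch. The report does not specify the combination rule at this point, so the weighted sum of per-sign log-likelihood scores below, including the weights and sign names, is purely a hypothetical example:

```python
# Hypothetical late-fusion sketch: each analyser (hands, head) returns a
# per-sign log-likelihood; a weighted sum selects the best-scoring sign.
# The weights 0.7/0.3 and the example scores are illustrative only.

def fuse_scores(manual_scores, head_scores, w_manual=0.7, w_head=0.3):
    """Combine per-sign log-likelihoods from the two analysers."""
    fused = {}
    for sign in manual_scores:
        fused[sign] = (w_manual * manual_scores[sign]
                       + w_head * head_scores.get(sign, float("-inf")))
    # Return the best-scoring sign together with all fused scores.
    return max(fused, key=fused.get), fused

best, fused = fuse_scores(
    {"HELLO": -12.0, "YES": -20.5},   # hand-gesture model scores
    {"HELLO": -3.1, "YES": -2.0},     # head-motion model scores
)
# best == "HELLO": its fused score (-9.33) beats YES (-14.95)
```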

Fig. 1. Sign language recognition system block diagram

Fig. 1 illustrates the steps in sign recognition. The first step in hand gesture recognition is to detect and track both hands. This is a complex task because the hands may occlude each other and may also overlap other skin-colored regions, such as the arms and the face. To make the detection problem easier, markers on the hands and fingers are widely used in the literature. In this project, we have used differently colored gloves worn on the two hands. Once the hands are detected, a complete hand gesture recognition system must be able to extract the hand shape and the hand motion. We have extracted simple hand shape features and combined them with hand motion and position information to obtain a combined feature vector. A left-to-right continuous HMM model with no state skips is trained for each sign. These HMM models could be used directly for recognition if we were to recognize only the manual signs. However, some signs involve non-manual components, so further analysis of head movements and facial expressions must be done to recognize non-manual signs. Head movement analysis

This report, as well as the source code for the software developed during the project, is available online from the eNTERFACE’06 web site: www.enterface.net.
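The glove-based hand detection described above can be sketched as a per-pixel color classifier. The glove colors (red/blue) and the hue thresholds below are illustrative assumptions, not values taken from the project:

```python
import colorsys

# Sketch of glove-based hand segmentation: classify each pixel by hue.
# We assume (hypothetically) a red glove on one hand and a blue glove on
# the other; the thresholds are placeholders to be tuned per camera.

def classify_pixel(r, g, b):
    """Label an RGB pixel (0-255 channels) as 'left', 'right', or 'background'."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.4 or v < 0.2:       # too grey or too dark to be a glove
        return "background"
    if h < 0.05 or h > 0.95:     # hue near 0/1: red glove
        return "right"
    if 0.55 < h < 0.72:          # hue near 2/3: blue glove
        return "left"
    return "background"
```

A full tracker would apply this test to every pixel, then take connected components to get one blob per hand; occlusions are what make the real problem hard.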
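Scoring a sequence against one of the per-sign left-to-right HMMs can be illustrated with the standard forward algorithm. For brevity, the sketch below uses discrete emission symbols in place of the continuous feature vectors, and every probability in it is made up for illustration:

```python
import math

# Forward algorithm for one sign's HMM: P(observations | model).
# At recognition time, the sign whose model gives the highest score wins.

def forward_log_prob(obs, start_p, trans_p, emit_p):
    """Log-probability of obs under the model (lists indexed by state)."""
    n = len(start_p)
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans_p[p][s] for p in range(n)) * emit_p[s][o]
            for s in range(n)
        ]
    return math.log(sum(alpha))

# 3-state left-to-right topology with no state skips: each state may
# only stay put or advance to the next state (zeros elsewhere).
trans = [[0.6, 0.4, 0.0],
         [0.0, 0.6, 0.4],
         [0.0, 0.0, 1.0]]
emit = [{"up": 0.8, "down": 0.2},
        {"up": 0.3, "down": 0.7},
        {"up": 0.5, "down": 0.5}]
score = forward_log_prob(["up", "down", "down"], [1.0, 0.0, 0.0], trans, emit)
```

Training such a model per sign (Baum-Welch) fixes the left-to-right structure by initializing the forbidden transitions to zero, which the re-estimation then preserves.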

