Current Mood: studious
Blogs I Commented On:
Summary:
This is one of the older gesture segmentation and classification papers, this time for the domain of Japanese Sign Language. Signed words, which consist solely of hand gestures, are formed by combining gesture primitives such as hand shape, palm direction, linear motion, and circular motion. During recognition, gesture primitives are identified from the input gesture, and the signed word is then recognized from the temporal and spatial relationships between those primitives. To segment, they used two parameters: hand velocity and hand movement. To reconcile differences between the two, each hand-movement segment border is taken to be the hand-velocity border closest to it. Several more parameters determine whether the gesture was one-handed or two-handed, and gestures are classified into four types combining one- or two-handed with left- or right-hand dominance. To distinguish word segments from transition segments, the authors extracted various features and found that the best discriminator was the minimum acceleration divided by the maximum velocity: if this parameter is small, the segment is a word; otherwise it is a transition. Evaluating their system on 100 JSL sentences, they achieved 86.6% accuracy for word recognition and 58% for sentence recognition.
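To make the border reconciliation and the word-versus-transition test concrete, here is a minimal Python sketch of how I understand them. The function names, the threshold value, and the finite-difference acceleration are my own assumptions for illustration, not details taken from the paper.

import numpy as np

def snap_border(movement_border, velocity_borders):
    # Reconcile the two segmentation parameters: take the hand-velocity
    # border closest to the hand-movement border (my reading of the paper).
    return min(velocity_borders, key=lambda v: abs(v - movement_border))

def is_word_segment(speed, threshold=0.1):
    # Word-vs-transition test on one segment's hand-speed samples.
    # The paper's feature is minimum acceleration divided by maximum
    # velocity; I read "minimal" as small in magnitude. The 0.1
    # threshold and the np.diff acceleration are illustrative assumptions.
    speed = np.asarray(speed, dtype=float)
    accel = np.diff(speed)               # frame-to-frame change in speed
    ratio = accel.min() / speed.max()    # the paper's discriminating feature
    return abs(ratio) < threshold        # small -> word, large -> transition

# Hypothetical hand-speed traces (units arbitrary):
print(is_word_segment([0.50, 0.55, 0.60, 0.58, 0.55]))  # steady motion -> True (word)
print(is_word_segment([0.60, 0.30, 0.05]))              # sharp stop -> False (transition)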
Discussion:
There were several things I liked about this paper: it was an easy read, the methods were sound, and the authors proceeded logically in building their system. Unfortunately, the system still feels like a work in progress given the poor final accuracy, especially the 58% at the sentence level. It has been almost a decade since this paper came out, so this was still unfamiliar territory at the time. I wish there were a follow-up paper that improved the accuracy rates.