Current Mood: studious
Blogs I Commented On:
Summary:
This is another gesture segmentation and recognition paper, this one using a technique called forward spotting with accumulative HMMs. The application domain is controlling the curtains and lighting of a smart home with upper-body gestures. The first main idea presented is a sliding window technique, which computes the observation probability of a gesture or non-gesture from a number of consecutive observations within a sliding window of size k. From empirical testing, the authors chose k = 3. In this technique, the partial observation probability of a segment of a particular observation sequence is computed by induction.

The second idea is forward spotting, which uses the sliding window technique to compute the competitive differential observation probability from a continuous stream of gesture frames. The basic idea here is that every possible gesture class, plus a non-gesture class, has its own HMM. After a partial observation, the value of the gesture-class HMM that gives the highest probability is compared with the value of the non-gesture-class HMM, and whichever of the two gives the higher value is chosen. Accumulative HMMs, which accept all possible partial posture segments when determining the gesture type, are additionally used in the paper to improve gesture recognition performance.

During testing, two gesture spotting techniques were applied to the eight gesture classes of their domain: manual and automatic threshold spotting. The latter performed better, producing recognition rates mostly in the 90s, with some classes recognized perfectly.
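To make the forward spotting idea concrete, here is a minimal sketch of how I picture it working. Everything below (the toy HMM parameters, the gesture class names, and the spot_gestures helper) is my own invention for illustration, not the authors' code:

import numpy as np

# Every gesture class gets its own HMM, plus one "non-gesture" (junk) HMM.
def hmm_forward_prob(obs, pi, A, B):
    """Partial observation probability P(obs | model), computed by
    induction with the standard forward algorithm."""
    alpha = pi * B[:, obs[0]]          # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
    return alpha.sum()                 # termination

def competitive_differential(window, gesture_hmms, junk_hmm):
    """Best gesture-HMM probability minus the non-gesture-HMM probability.
    Positive suggests a gesture is under way; negative suggests non-gesture."""
    best = max(hmm_forward_prob(window, *m) for m in gesture_hmms.values())
    junk = hmm_forward_prob(window, *junk_hmm)
    return best - junk

def spot_gestures(observations, gesture_hmms, junk_hmm, k=3):
    """Slide a window of k observations over the stream; a sign change in
    the competitive differential marks a candidate start or end point.
    The stream is assumed to begin in a non-gesture state."""
    boundaries, prev_sign = [], -1
    for t in range(len(observations) - k + 1):
        d = competitive_differential(observations[t:t + k], gesture_hmms, junk_hmm)
        sign = 1 if d > 0 else -1
        if sign != prev_sign:
            boundaries.append(("start" if sign > 0 else "end", t))
            prev_sign = sign
    return boundaries

# Toy demo with random (meaningless) HMM parameters, just to show the shapes.
rng = np.random.default_rng(0)
def random_hmm(n_states=3, n_symbols=4):
    pi = np.full(n_states, 1.0 / n_states)           # initial state probs
    A = rng.dirichlet(np.ones(n_states), n_states)   # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), n_states)  # emission matrix
    return pi, A, B

gesture_hmms = {"raise_arm": random_hmm(), "wave": random_hmm()}  # made-up classes
junk_hmm = random_hmm()
print(spot_gestures(rng.integers(0, 4, 20), gesture_hmms, junk_hmm))

The key design point, as I read the paper, is that the junk HMM acts as an adaptive threshold: instead of comparing the best gesture probability against a fixed cutoff, you compare it against whatever the non-gesture model assigns to the same window.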
Discussion:
It’s hard to gauge the quality of this paper’s gesture segmentation technique. It seems to address problems common to using generic HMMs in haptics, such as handling partial observations. Also, the use of their “junk” class, while nothing special, wouldn’t hurt. On the other hand, they tested their technique on a very simple domain without comparing it to other techniques. It looks like the only way to judge the technique’s merits is to actually implement their gesture segmentation algorithm on a more complex domain. The jury is still out on this one.
1 comment:
I do like the idea of a junk class when using HMMs. It feels like it automatically gives me the ability to classify transitional gestures between the "meat" of my gestures. A neat idea, though possibly very computationally intensive, would be to use sliding windows: classify with an HMM but don't spit out a result right away. Keep sliding and see where you get the highest probability, segment at /that/ point, throw everything else away, and start again. Junk classes for transitions would work here. One problem: if your gestures were of different lengths, a fixed-size window would fail. :(
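Something like this is what I have in mind (plain Python; score_gesture stands in for whatever HMM likelihood you have, and the window size and all names are made up):

def segment_at_peak(observations, score_gesture, window=10):
    """Slide a fixed-size window, score every position with the HMM, and
    segment at the single position with the highest probability."""
    best_t, best_score = 0, float("-inf")
    for t in range(len(observations) - window + 1):
        score = score_gesture(observations[t:t + window])
        if score > best_score:
            best_t, best_score = t, score
    # Keep the winning segment, throw everything before it away, and
    # start again on whatever comes after it.
    segment = observations[best_t:best_t + window]
    rest = observations[best_t + window:]
    return segment, rest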