Abstract
We present an evaluation of a hybrid gesture interface framework that combines on-line adaptive gesture recognition with a command predictor. Machine learning techniques enable the system to adapt on-line to differences in how individual users articulate gestures, and to exploit regularities in command sequences to improve recognition accuracy. A prototype using 2D single-stroke gestures was implemented with a minimally intrusive user interface for on-line re-training. Results of a controlled user experiment show that the hybrid adaptive system significantly improved overall gesture recognition performance and reduced the amount of practice users needed before achieving good results.
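The core idea of the hybrid approach can be illustrated with a minimal sketch: a gesture recognizer outputs per-command likelihoods, and a command predictor supplies prior probabilities derived from recent command history; the two are fused (here via a simple Bayes-rule combination) to pick the command. All function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
def fuse_scores(likelihoods, priors):
    """Combine recognizer likelihoods with predictor priors (Bayes-style fusion).

    likelihoods: dict mapping command -> recognizer score in [0, 1]
    priors:      dict mapping command -> predicted probability from history
    Returns a normalized posterior distribution over commands.
    """
    unnormalized = {c: likelihoods[c] * priors.get(c, 0.0) for c in likelihoods}
    total = sum(unnormalized.values())
    if total == 0.0:
        # No overlap between recognizer and predictor: fall back to recognizer alone.
        return dict(likelihoods)
    return {c: v / total for c, v in unnormalized.items()}

# Example: the recognizer slightly favors "paste", but recent command history
# makes "copy" far more likely, so the fused decision flips to "copy".
likelihoods = {"copy": 0.45, "paste": 0.55}
priors = {"copy": 0.8, "paste": 0.2}
posterior = fuse_scores(likelihoods, priors)
best = max(posterior, key=posterior.get)
```

In this sketch the command predictor acts as a context-dependent prior, which is one plausible way a predictor can "exploit regularities in command sequences" to correct ambiguous strokes.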





















































