Proceedings: GI 2015

Penny pincher: a blazing fast, highly accurate $-family recognizer

Eugene Taranta, Joseph LaViola

Proceedings of Graphics Interface 2015: Halifax, Nova Scotia, Canada, 3-5 June 2015, 195-202

DOI 10.20380/GI2015.25

  • BibTeX

    @inproceedings{taranta2015pennypincher,
    author = {Taranta, Eugene and LaViola, Joseph},
    title = {Penny pincher: a blazing fast, highly accurate $-family recognizer},
    booktitle = {Proceedings of Graphics Interface 2015},
    series = {GI 2015},
    year = {2015},
    issn = {0713-5424},
    isbn = {978-1-4822-6003-8},
    location = {Halifax, Nova Scotia, Canada},
    pages = {195--202},
    numpages = {8},
    doi = {10.20380/GI2015.25},
    publisher = {Canadian Human-Computer Communications Society},
    address = {Toronto, Ontario, Canada},
    }


The $-family of recognizers ($1, Protractor, $N, $P, 1¢, and variants) is an easy-to-understand, easy-to-implement, and accurate set of gesture recognizers designed for non-experts and rapid prototyping. They use template matching to classify candidate gestures, and as the number of available templates increases, so does their accuracy. This, of course, comes at the cost of higher latency, which can be prohibitive in certain cases. Our recognizer, Penny Pincher, achieves high accuracy by being able to process a large number of templates in a short amount of time. If, for example, a recognition task is given a 50μs budget to complete its work, a fast recognizer that can process more templates within this constraint can potentially outperform its rivals. Penny Pincher achieves this goal by reducing the template matching process to merely addition and multiplication: it avoids translation, scaling, and rotation, as well as calls to expensive geometric functions. Despite its deceptive simplicity, our recognizer still performs remarkably well even with a limited number of templates. In an evaluation against four other $-family recognizers, Penny Pincher achieves the highest accuracy of all recognizers on three of our six datasets, reaching 97.5%, 99.8%, and 99.9% user-independent recognition accuracy, while remaining competitive on the remaining three. Further, when a time constraint is imposed, our recognizer always exhibits the highest accuracy, realizing a reduction in recognition error of between 83% and 99% in most cases.
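The core idea in the abstract, template matching reduced to sums of products over between-point vectors of a resampled gesture, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's reference implementation: the resample count (n = 16), the helper names, and the per-vector unit normalization are our own assumptions.

```python
import math

def resample(points, n=16):
    """Resample a stroke into n roughly equidistant points (standard $-family step)."""
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    interval = total / (n - 1)
    resampled = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            # Interpolate a new point at the exact interval boundary.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(points[-1])
    return resampled[:n]

def vectorize(points):
    """Between-point direction vectors, normalized to unit length."""
    vecs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        vecs.append((dx / norm, dy / norm))
    return vecs

def similarity(tvecs, cvecs):
    """Sum of dot products: only additions and multiplications at match time."""
    return sum(tx * cx + ty * cy for (tx, ty), (cx, cy) in zip(tvecs, cvecs))

def classify(candidate, templates):
    """Return the name of the template whose vectors best match the candidate."""
    cvecs = vectorize(resample(candidate))
    return max(templates, key=lambda name: similarity(templates[name], cvecs))
```

Note that no translation, scaling, or rotation step appears anywhere in the match path; templates can be vectorized once up front, so each comparison is a single pass of multiply-adds. For clarity this sketch still normalizes each candidate direction vector (one `hypot` per segment); since resampling yields near-equal segment lengths, even that cost can plausibly be trimmed from the per-template loop.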