Proceedings: GI 2014

Task efficient contact configurations for arbitrary virtual creatures

Steve Tonneau, Julien Pettré, Franck Multon

Proceedings of Graphics Interface 2014: Montréal, Québec, Canada, 7–9 May 2014, pp. 9–16

DOI 10.20380/GI2014.02

  • Bibtex

    @inproceedings{Tonneau:2014:10.20380/GI2014.02,
    author = {Tonneau, Steve and Pettr{\'e}, Julien and Multon, Franck},
    title = {Task efficient contact configurations for arbitrary virtual creatures},
    booktitle = {Proceedings of Graphics Interface 2014},
    series = {GI 2014},
    year = {2014},
    issn = {0713-5424},
    isbn = {978-1-4822-6003-8},
    location = {Montr{\'e}al, Qu{\'e}bec, Canada},
    pages = {9--16},
    numpages = {8},
    doi = {10.20380/GI2014.02},
    publisher = {Canadian Human-Computer Communications Society},
    address = {Toronto, Ontario, Canada},
    }

Abstract

A common issue in three-dimensional animation is the creation of contacts between a virtual creature and its environment. Contacts allow force exertion, which produces motion. This paper addresses the problem of computing contact configurations that allow a creature to perform motion tasks such as getting up from a sofa, pushing an object, or climbing. We propose a two-step method to generate contact configurations suitable for such tasks. The first step is an offline sampling of the reachable workspace of a virtual creature. The second step is a run-time request that confronts the samples with the current environment. The best contact configurations are then selected according to a heuristic for task efficiency. The heuristic is inspired by the force transmission ratio: given a contact configuration, it measures the potential force that can be exerted in a given direction. Our method is automatic and requires no examples or motion capture data. It is suitable for real-time applications and applies to arbitrary creatures in arbitrary environments. Various scenarios (such as climbing, crawling, getting up, and pushing or pulling objects) are used to demonstrate that our method enhances motion autonomy and interactivity in constrained environments.
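The abstract's heuristic builds on the force transmission ratio, which in the robotics literature is derived from a limb's Jacobian: the force manipulability ellipsoid has radius 1/sqrt(uᵀ J Jᵀ u) along a unit direction u, so a larger value means more force can be exerted that way for the same joint effort. The sketch below illustrates the idea on a hypothetical 2-link planar limb; the function names, link lengths, and test configurations are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a hypothetical 2-link planar limb
    (illustrative stand-in for one limb of a virtual creature)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def force_transmission_ratio(J, u):
    """Radius of the force manipulability ellipsoid along direction u:
    1 / sqrt(u^T (J J^T) u). Larger values indicate more potential
    force can be exerted along u for the same joint torques."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)          # direction only; scale is irrelevant
    return 1.0 / np.sqrt(u @ (J @ J.T) @ u)

# Compare two candidate limb configurations for a task direction +x,
# e.g. pushing an object. The one with the larger ratio is preferred.
u = np.array([1.0, 0.0])
J_bent = planar_2link_jacobian(np.pi / 4, np.pi / 2)
J_straighter = planar_2link_jacobian(np.pi / 4, np.pi / 8)
r_bent = force_transmission_ratio(J_bent, u)
r_straighter = force_transmission_ratio(J_straighter, u)
```

In a sampling-based pipeline like the one described above, such a score could be evaluated per candidate contact configuration at run time, ranking the samples that fit the current environment by their potential force output along the task direction.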