Proceedings: GI 2019

SkelSeg: Segmentation and Rigging of Raw-Scanned 3D Volume with User-Specified Skeleton

Seung-Tak Noh (The University of Tokyo), Kenichi Takahashi (Kabuku Inc.), Masahiko Adachi (Kabuku Inc.), Takeo Igarashi (The University of Tokyo)

Proceedings of Graphics Interface 2019: Kingston, Ontario, 28 - 31 May 2019

DOI 10.20380/GI2019.17

  • BibTeX

    @inproceedings{Noh:2019:10.20380/GI2019.17,
    author = {Noh, Seung-Tak and Takahashi, Kenichi and Adachi, Masahiko and Igarashi, Takeo},
    title = {SkelSeg: Segmentation and Rigging of Raw-Scanned 3D Volume with User-Specified Skeleton},
    booktitle = {Proceedings of Graphics Interface 2019},
    series = {GI 2019},
    year = {2019},
    issn = {0713-5424},
    isbn = {978-0-9947868-4-5},
    location = {Kingston, Ontario},
    numpages = {8},
    doi = {10.20380/GI2019.17},
    publisher = {Canadian Information Processing Society},
    keywords = {User-specified skeleton, segmentation, rigging},
    }
  • Supplementary Media

Abstract

Although RGB-D camera-based scanning has become popular, a raw-scanned 3D model often contains artifacts that hinder animation, such as fused arms and legs. We propose a system that allows a user to generate a rigged 3D mesh from a raw-scanned 3D volume with simple annotations. The user annotates the skeleton structure on the calibrated images captured at the scanning step, and our system automatically segments the raw-scanned volume into parts and generates a skinned 3D mesh based on the user-specified 3D skeleton. We tested our method on several raw-scanned 3D plush toy models and successfully generated plausible animations.
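The abstract does not specify the skinning model, but rigged meshes of this kind are commonly deformed with linear blend skinning: each vertex is moved by a weighted combination of its bones' transforms. The sketch below is only an illustration of that general technique, not the paper's implementation; the bone transforms and weights are invented for the example.

```python
import math

def apply_transform(R, t, v):
    # Apply a 3x3 rotation R (row-major nested lists) and translation t to vertex v.
    return [sum(R[i][k] * v[k] for k in range(3)) + t[i] for i in range(3)]

def linear_blend_skinning(v, bones, weights):
    # Blend the bone-transformed copies of v using per-vertex skinning weights.
    out = [0.0, 0.0, 0.0]
    for (R, t), w in zip(bones, weights):
        p = apply_transform(R, t, v)
        for i in range(3):
            out[i] += w * p[i]
    return out

# Hypothetical two-bone example: an identity bone and a bone rotated
# 90 degrees about the z-axis, each with weight 0.5 on the vertex.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
bones = [(I, [0, 0, 0]), (Rz, [0, 0, 0])]
result = linear_blend_skinning([1.0, 0.0, 0.0], bones, [0.5, 0.5])
# The vertex lands roughly at (0.5, 0.5, 0): halfway between its
# rest position and its fully rotated position.
```

In the paper's pipeline, the segmentation of the raw volume into parts would determine which bones influence which vertices, and the user-specified skeleton would supply the bone transforms.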