BibTeX
@inproceedings{Liu:2015:10.20380/GI2015.04,
author = {Liu, Zicheng and Zhang, Yan and Wu, Wentao and Liu, Kai and Sun, Zhengxing},
title = {Model-driven indoor scenes modeling from a single image},
booktitle = {Proceedings of Graphics Interface 2015},
series = {GI 2015},
year = {2015},
issn = {0713-5424},
isbn = {978-1-4822-6003-8},
location = {Halifax, Nova Scotia, Canada},
pages = {25--32},
numpages = {8},
doi = {10.20380/GI2015.04},
publisher = {Canadian Human-Computer Communications Society},
address = {Toronto, Ontario, Canada},
}
Abstract
In this paper, we present a new approach to 3D indoor scene modeling from a single image. Given a single input indoor image (containing objects such as a sofa, tea table, etc.), a 3D scene can be reconstructed using an existing model library in two stages: image analysis and model retrieval. In the image analysis stage, we obtain object information from the input image using geometric reasoning combined with an image segmentation method. In the model retrieval stage, line drawings are extracted from the 2D objects and the 3D models using different line rendering methods. We exploit various tokens to represent local features and then organize them into a star-graph that gives a global description. Finally, by comparing the similarity of the encoded line drawings, models are retrieved from the model library and the scene is reconstructed. Experimental results show that, driven by the given model library, indoor scene modeling from a single image can be achieved automatically and efficiently.
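
The two-stage pipeline summarized in the abstract (image analysis, then line-drawing-based model retrieval) could be illustrated with a minimal sketch. The data structures, token names, and similarity score below are assumptions made purely for illustration; they are not the paper's actual representation or code.

# Hypothetical sketch of the two-stage pipeline described in the abstract.
# All names, tokens, and the similarity measure are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class StarGraph:
    """Global description of a line drawing: a center token plus local feature tokens."""
    center: str
    tokens: set = field(default_factory=set)

def similarity(a: StarGraph, b: StarGraph) -> float:
    """Toy score: center-token match plus Jaccard overlap of local tokens."""
    center_match = 1.0 if a.center == b.center else 0.0
    union = a.tokens | b.tokens
    overlap = len(a.tokens & b.tokens) / len(union) if union else 0.0
    return 0.5 * center_match + 0.5 * overlap

def retrieve(query: StarGraph, library: dict) -> str:
    """Model retrieval stage: pick the library model whose encoded
    line drawing is most similar to the query object's line drawing."""
    return max(library, key=lambda name: similarity(query, library[name]))

# Stage 1 (image analysis) would yield one encoded line drawing per segmented
# object; here its output is faked for a sofa-like region.
query_object = StarGraph(center="L-junction",
                         tokens={"long-edge", "parallel-pair", "cusp"})

# Line drawings rendered from the 3D model library, already encoded as star-graphs.
model_library = {
    "sofa_01":      StarGraph("L-junction", {"long-edge", "parallel-pair"}),
    "tea_table_02": StarGraph("T-junction", {"short-edge", "ellipse"}),
}

print(retrieve(query_object, model_library))  # -> "sofa_01"

In this toy version, retrieval is simply a nearest-neighbor search over encoded line drawings; the paper's actual token vocabulary, star-graph construction, and matching procedure are described in the full text.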