Proceedings: GI 2019

Image Acquisition for High Quality Architectural Reconstruction

Ojaswa Sharma (Indraprastha Institute of Information Technology), Nishima Arora (Indraprastha Institute of Information Technology), Himanshu Sagar (Indraprastha Institute of Information Technology)

Proceedings of Graphics Interface 2019: Kingston, Ontario, 28 - 31 May 2019

DOI 10.20380/GI2019.18

  • BibTeX

    @inproceedings{Sharma:2019:10.20380/GI2019.18,
    author = {Sharma, Ojaswa and Arora, Nishima and Sagar, Himanshu},
    title = {Image Acquisition for High Quality Architectural Reconstruction},
    booktitle = {Proceedings of Graphics Interface 2019},
    series = {GI 2019},
    year = {2019},
    issn = {0713-5424},
    isbn = {978-0-9947868-4-5},
    location = {Kingston, Ontario},
    numpages = {9},
    doi = {10.20380/GI2019.18},
    publisher = {Canadian Information Processing Society},
    keywords = {Architectural reconstruction, UAV path planning, 3D multi-view reconstruction},
    }

Abstract

In this work we propose a simple optimization-based technique to compute camera poses for drone-assisted automated image acquisition. We use this technique to create highly detailed 3D models of buildings using multi-view reconstruction. Our reconstructed models are well suited for virtual reality (VR) environments since they exhibit a good amount of detail, which is useful for creating realistic virtual walkthroughs. Creating a good 3D reconstruction from a set of nadir images is difficult since the vertical surfaces of buildings are not captured well and are therefore not reconstructed accurately. Acquisition of non-nadir images requires avoiding obstacles around the structure. Our technique is based on mathematical optimization and is capable of calculating camera positions and orientations to maximally cover horizontal as well as vertical surface patches while avoiding obstacles around the building. We present a complete pipeline for a mostly automated and robust approach via a camera-mounted quadcopter drone. We also validate our approach via a graphics-based simulation.
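The core idea of planning camera poses that face surface patches while keeping clear of obstacles can be sketched as follows. This is a minimal illustration, not the authors' optimization formulation: it assumes patches are given as (center, unit normal) pairs, models obstacles as hypothetical bounding spheres, and simply places each camera at a fixed standoff distance along the patch normal, discarding poses that violate a clearance constraint.

```python
import math

# Hypothetical sketch (not the paper's exact method): propose one camera
# pose per surface patch, then reject poses too close to obstacles.
# A patch is ((cx, cy, cz), (nx, ny, nz)) with a unit normal;
# an obstacle is a bounding sphere ((ox, oy, oz), radius).

def propose_pose(patch, standoff=10.0):
    """Place a camera `standoff` units along the patch normal,
    looking back at the patch center."""
    (cx, cy, cz), (nx, ny, nz) = patch
    pos = (cx + standoff * nx, cy + standoff * ny, cz + standoff * nz)
    view = (-nx, -ny, -nz)  # view direction points back toward the patch
    return pos, view

def collides(pos, obstacles, clearance=2.0):
    """True if `pos` lies within `clearance` of any obstacle sphere."""
    for center, radius in obstacles:
        if math.dist(pos, center) < radius + clearance:
            return True
    return False

def plan_poses(patches, obstacles):
    """Return the obstacle-free (position, view direction) poses."""
    poses = []
    for patch in patches:
        pos, view = propose_pose(patch)
        if not collides(pos, obstacles):
            poses.append((pos, view))
    return poses
```

A full planner would instead optimize pose placement jointly, trading off coverage of both horizontal and vertical patches against the obstacle constraints, as the abstract describes; this sketch only shows the per-patch geometry and the feasibility test.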