Proceedings: GI 2003

CInDeR: Collision and Interference Detection in Real-time Using graphics hardware

Dave Knott, Dinesh Pai

Proceedings of Graphics Interface 2003: Halifax, Nova Scotia, Canada, 11 - 13 June 2003, 73-80

DOI 10.20380/GI2003.09

  • BibTeX

    @inproceedings{Knott:gi2003:CInDeR,
     title = {{CInDeR}: Collision and Interference Detection in Real-time Using graphics hardware},
     author = {Dave Knott and Dinesh K. Pai},
     booktitle = {Proceedings of the Graphics Interface 2003 Conference, June 11-13, 2003, Halifax, Nova Scotia, Canada},
     organization = {CIPS, Canadian Human-Computer Communication Society},
     publisher = {Canadian Human-Computer Communications Society and A K Peters Ltd.},
     issn = {0713-5424},
     isbn = {1-56881-207-8},
     location = {Halifax, Nova Scotia},
     url = {http://graphicsinterface.org/wp-content/uploads/gi2003-9.pdf},
     year = {2003},
     month = {June},
     pages = {73--80}
    }
    

Abstract

To apply empty space skipping in texture-based volume rendering, we partition the texture space with a box-growing algorithm. Each sub-texture comprises neighboring voxels with similar densities and gradient magnitudes. Sub-textures with similar ranges of density and gradient magnitude are then packed into larger ones to reduce the number of textures. The partitioning and packing are independent of the transfer function. During rendering, the visibility of each box is determined by whether any of its enclosed voxels is assigned a non-zero opacity by the current transfer function. Only the sub-textures from the visible boxes are blended, and only the packed textures containing visible sub-textures reside in texture memory. We arrange the densities and the gradients into separate textures to avoid storing the empty regions in the gradient texture, which is transfer function independent. The partitioning and packing can be considered a lossless texture compression with an average compression rate of 3.1:1 for the gradient textures. Running on the same hardware and generating exactly the same images, the proposed method renders 3 to 6 times faster on average than traditional approaches for various datasets in different rendering modes.
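The per-box visibility test described in the abstract can be sketched roughly as follows. This is an illustrative sketch, not code from the paper: the function names, the per-box density range, and the 256-entry opacity lookup table are assumptions about a typical 8-bit-density implementation.

```python
def box_visible(density_range, opacity_lut):
    """Hypothetical visibility test for one box.

    density_range -- (lo, hi) range of voxel densities stored for the box
                     (assumed layout, not necessarily the paper's).
    opacity_lut   -- 256-entry table mapping density -> opacity, i.e. the
                     current transfer function.

    A box is visible if any density it contains maps to non-zero opacity.
    """
    lo, hi = density_range
    return any(opacity_lut[d] > 0 for d in range(lo, hi + 1))


# Example: a transfer function that is opaque only for densities >= 128.
tf = [0.0] * 128 + [1.0] * 128
print(box_visible((10, 50), tf))    # box entirely in the empty range
print(box_visible((100, 200), tf))  # box overlapping the opaque range
```

Because the test depends only on the density range of a box and the current transfer function, it can be re-evaluated cheaply whenever the transfer function changes, without repartitioning the texture space.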