
Future Frame Synthesis for Fast Monte Carlo Rendering

Zhan Li (Portland State University), Carl S. Marshall (Intel), Deepak S. Vembar (Intel), Feng Liu (Portland State University)


Proceedings of Graphics Interface 2022: Montréal, Quebec, 16–19 May 2022, pp. 74–83

Abstract

Monte Carlo rendering algorithms can generate high-quality images; however, they need to sample many rays per pixel and are thus computationally expensive. In this paper, we present a method to speed up Monte Carlo rendering by significantly reducing the number of pixels for which we need to sample rays. Specifically, we develop a neural future frame synthesis method that quickly predicts future frames from frames that have already been rendered. In each future frame, some pixels cannot be predicted correctly from previous frames in challenging scenarios, such as quick camera motion, object motion, and large occlusions. Therefore, our method estimates a mask together with each future frame that indicates the subset of pixels that need ray samples to correct the prediction results. To train and evaluate our neural future frame synthesis method, we develop a large ray-tracing animation dataset. Our experiments show that our method can significantly reduce the number of pixels that we need to render while maintaining high rendering quality.
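The pipeline the abstract describes can be sketched at a high level: predict the next frame and a per-pixel reliability mask from previously rendered frames, ray-trace only the flagged pixels, and composite the two. This is a minimal illustrative sketch, not the paper's implementation; `predict` and `render_pixels` are hypothetical stand-ins for the trained synthesis network and the path tracer.

```python
import numpy as np

def synthesize_next_frame(prev_frames, predict, render_pixels):
    # predict() returns a predicted RGB frame plus a boolean mask marking
    # pixels the network cannot synthesize reliably (e.g. disocclusions).
    pred_frame, needs_rays = predict(prev_frames)
    # Ray-trace only the flagged pixels; keep the prediction elsewhere.
    corrected = render_pixels(needs_rays)
    frame = np.where(needs_rays[..., None], corrected, pred_frame)
    # Also report the fraction of pixels that actually required ray samples.
    return frame, needs_rays.mean()

# Toy stand-ins on a 4x4 image to show the data flow (hypothetical values).
H, W = 4, 4

def predict(frames):
    mask = np.zeros((H, W), dtype=bool)
    mask[0, 0] = True  # pretend one pixel is a newly disoccluded region
    return np.full((H, W, 3), 0.5), mask

def render_pixels(mask):
    # A real renderer would trace rays only where mask is True.
    return np.ones((H, W, 3))

frame, fraction_rendered = synthesize_next_frame([], predict, render_pixels)
```

Only the masked pixel receives the renderer's value; the remaining 15 pixels keep the predicted value, so the renderer's workload drops to 1/16 of the frame in this toy case.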
