Modeling Emitters in Indoor Scenes for Inverse Rendering
Abstract
Recent neural inverse rendering methods represent object geometry and materials with neural networks and learn the network parameters from multi-view images through physically based rendering. However, these methods typically assume that the light source is located at an infinite distance, which seldom holds in indoor scenarios with complex illumination. To address this issue, we propose a point cloud-based lighting representation that models the spatially-varying and high-frequency lighting effects in indoor scenes. Our method first detects 2D light source masks in the input multi-view images, and then obtains a set of emitters through a 3D reconstruction algorithm. We explicitly incorporate the emitters in Monte Carlo sampling, which improves the ability to model specular effects and thus effectively alleviates the ambiguity in the inverse rendering process. Experiments on real and synthetic datasets demonstrate that the proposed method achieves the best performance in inverse rendering and produces realistic relighting results.
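The explicit emitter sampling mentioned above can be illustrated with a minimal next-event-estimation sketch: instead of relying solely on random hemisphere directions, direct lighting is estimated by sampling points from the reconstructed emitter set. The function names, the uniform emitter-selection strategy, the unit-intensity isotropic emitters, and the omitted visibility (shadow-ray) test are all simplifying assumptions of this sketch, not the paper's actual implementation.

```python
import math
import random

def sample_emitter_point(emitters):
    """Uniformly pick one point from the reconstructed emitter point cloud.
    Returns the point and the probability of choosing it (hypothetical
    uniform strategy; the paper's sampling scheme may differ)."""
    idx = random.randrange(len(emitters))
    return emitters[idx], 1.0 / len(emitters)

def direct_lighting(shading_point, normal, emitters, n_samples=64):
    """Monte Carlo estimate of direct illumination from explicit emitters,
    assuming isotropic point emitters with unit radiant intensity and a
    Lambertian surface (albedo factored out). Visibility is ignored here."""
    total = 0.0
    for _ in range(n_samples):
        (ex, ey, ez), pdf = sample_emitter_point(emitters)
        dx, dy, dz = ex - shading_point[0], ey - shading_point[1], ez - shading_point[2]
        dist2 = dx * dx + dy * dy + dz * dz
        dist = math.sqrt(dist2)
        # cosine between the surface normal and the direction to the emitter
        cos_theta = (normal[0] * dx + normal[1] * dy + normal[2] * dz) / dist
        if cos_theta <= 0.0:
            continue  # emitter lies behind the surface
        # inverse-square falloff, importance-weighted by the selection pdf
        total += cos_theta / (dist2 * pdf)
    return total / n_samples

# Two emitter points above a horizontal surface at the origin
random.seed(0)
emitters = [(0.0, 2.0, 0.0), (1.0, 2.0, 1.0)]
estimate = direct_lighting((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), emitters)
```

Sampling emitters directly in this way concentrates samples on the small solid angles subtended by light sources, which is what makes high-frequency specular effects tractable compared with uniform hemisphere sampling.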