Abstract:
Novel view synthesis from 2D images has long been a key problem in computer vision and computer graphics, aiming to synthesize images of a target scene from novel viewpoints given a set of its 2D images. The neural radiance field, a novel implicit scene representation, has attracted considerable attention from researchers due to its excellent visual quality. This survey traces the development of neural radiance fields, reviewing relevant research in terms of theoretical foundations, optimization and extension, and applications. Regarding optimization and extension, some works focus on accelerating training and rendering by optimizing network structures and compressing models, while others reduce the number of input images required or improve rendering quality. Neural radiance fields demonstrate great potential for modeling people, objects, and scenes, and some works extend them to representing dynamic scenes. Additionally, by combining neural radiance fields with generative models, the generation of 3D models can be guided by text or images. Finally, the shortcomings of existing research are summarized, pointing out that accelerating the training and rendering of neural radiance fields, improving rendering results, and further expanding application scenarios remain directions for future work in this field.