Novel view synthesis, a long-standing problem in computer vision and graphics, aims to generate an image of a scene from an unseen viewpoint given a set of 2D images of that scene. The proliferation of machine learning across diverse fields, together with its notable success on previously intractable problems, has motivated researchers to apply deep learning to novel view synthesis. Recently, neural radiance fields (NeRF), an implicit scene representation, has distinguished itself from traditional explicit representations through its modeling and rendering processes, and its superior visual quality has attracted significant interest from the research community. Numerous studies have since sought to optimize the NeRF architecture, augment it with other models to extend its capabilities, and apply it to specific scenarios. This paper provides a comprehensive overview of the development of neural radiance fields, highlighting key contributions to their optimization, extension, and practical application. It also discusses possible future research directions, with the aim of serving as a valuable reference for researchers in this field.