Abstract:
Dynamic point cloud data, characterized by temporal continuity and high-dimensional features, can accurately capture the dynamic changes of objects and are thus widely applied in fields such as autonomous driving, augmented reality, virtual reality, robotic navigation, and 3D video. Dynamic point cloud compression plays a crucial role in efficiently managing the storage, transmission, and perception of the ever-increasing volume of point cloud data. Despite this demand, comprehensive surveys specifically focused on dynamic point cloud compression methods remain scarce. To address this gap, this paper presents a systematic review of existing approaches, aiming to summarize recent advances in the field and provide insights for future research. The paper first introduces the significance and theoretical foundations of dynamic point cloud compression. It then elaborates on the fundamental principles and advantages of mainstream methods from three perspectives: geometry compression, attribute compression, and joint compression. Subsequently, commonly used datasets and evaluation metrics are reviewed, and the performance of representative methods across these datasets is summarized. Finally, the paper analyzes the limitations of current methods in terms of algorithmic complexity and scene adaptability, and identifies lightweight network design, joint spatiotemporal modeling, and cross-modal information fusion as major directions for future research. This survey is expected to facilitate a deeper understanding of dynamic point cloud compression and to promote further development of related systems.