Chen Chang'an, Wu Xiaofeng, Wang Bin, Zhang Liming. Video Saliency Detection Using Dynamic Fusion of Spatial-Temporal Features in Complex Background with Disturbance[J]. Journal of Computer-Aided Design & Computer Graphics, 2016, 28(5): 802-812.

Video Saliency Detection Using Dynamic Fusion of Spatial-Temporal Features in Complex Background with Disturbance

  • In recent years, most existing video saliency detection methods have failed to find salient regions in complex backgrounds with disturbance (e.g., waving leaves, rippling water). This paper proposes a framework for video saliency detection based on dynamic fusion of spatial-temporal features. First, the spatial saliency of the current frame is computed using the simple linear iterative clustering (SLIC) algorithm. Second, the motion entropy over multiple consecutive frames is calculated to represent the motion coherence of the salient object over time; at the same time, a center-surround difference is used to separate the target's motion from that of its neighbors. Since the human visual system is more sensitive to motion information, a dynamic fusion strategy is adopted to combine the spatial saliency map and the temporal saliency map, so that the final saliency map accounts for both static and moving objects. Experimental results on four video databases demonstrate that the proposed method outperforms traditional methods in video saliency detection.
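As a rough illustration of the pipeline described in the abstract, the Python sketch below computes a SLIC-based spatial saliency map, a temporal map from optical-flow magnitude with a center-surround term, and a motion-dependent fusion weight. It is a simplified sketch, not the authors' implementation: the superpixel color-contrast measure, the optical-flow step (standing in for the multi-frame motion entropy), the box-filter surround, and the fusion weight `alpha` are illustrative assumptions, and it presumes a recent scikit-image and OpenCV.

```python
# Illustrative sketch (not the paper's code) of spatial-temporal saliency
# with dynamic fusion, as outlined in the abstract.
import numpy as np
import cv2
from skimage.segmentation import slic


def spatial_saliency(frame_bgr, n_segments=200):
    """Per-pixel spatial saliency: each SLIC superpixel is scored by the
    contrast of its mean Lab color against all other superpixels."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    labels = slic(frame_bgr, n_segments=n_segments, compactness=10,
                  start_label=0, channel_axis=-1)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    # Global color contrast of each superpixel against all the others.
    contrast = np.linalg.norm(means[:, None, :] - means[None, :, :],
                              axis=-1).mean(axis=1)
    sal = contrast[labels]
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)


def temporal_saliency(prev_gray, curr_gray, ksize=31):
    """Per-pixel temporal saliency: optical-flow magnitude emphasized by a
    center-surround difference, which suppresses spatially coherent background
    motion such as rippling water or waving leaves."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=-1).astype(np.float32)
    surround = cv2.blur(mag, (ksize, ksize))   # local average (surround) motion
    cs = np.abs(mag - surround)                # center-surround difference
    return (cs - cs.min()) / (np.ptp(cs) + 1e-8)


def dynamic_fusion(spatial, temporal):
    """Dynamic fusion: weight the temporal map more heavily when motion is
    strong, reflecting the visual system's sensitivity to motion. The weighting
    rule here is an illustrative heuristic, not the paper's formulation."""
    alpha = np.clip(temporal.mean() * 5.0, 0.2, 0.8)
    return alpha * temporal + (1.0 - alpha) * spatial


# Example usage (assumes two consecutive frames, e.g. from cv2.VideoCapture):
#   S = spatial_saliency(curr_bgr)
#   T = temporal_saliency(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
#                         cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY))
#   final_map = dynamic_fusion(S, T)
```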