Enhanced Fusion Network through Redundancy Elimination for Deep Multi-view Clustering
Abstract
Multi-view clustering has attracted considerable attention for its ability to exploit the semantic information of multiple views in an unsupervised manner. However, most existing methods focus only on the common semantics shared across views while neglecting to promote the diversity of representations. They therefore fail to exploit complementary information across views, which can limit the capability of multi-view representation learning. Moreover, they ignore the impact of redundant information among views. To address these issues, a novel enhanced fusion network through redundancy elimination for deep multi-view clustering, called REMVC, is proposed. Specifically, a feature aggregation module is introduced to learn the global structural relationships between samples and to generate consistent representations from global features. To address redundancy across views, a redundancy elimination strategy reduces the information correlation between latent-space features, enhancing the discriminative capability of the embedded representation. Furthermore, to exploit the complementary information of instance features across views, a regularization constraint module for the common representation is proposed, which imposes variance and covariance constraints on its dimensions: the module maximizes the variance of each dimension of the common representation while minimizing the covariance between different dimensions. Finally, the proposed network jointly optimizes the representation learning, redundancy elimination, and regularization constraint modules to enhance clustering performance. Experimental results on four widely used datasets demonstrate that REMVC outperforms several state-of-the-art multi-view clustering methods.
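The variance-covariance regularization described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the hinge threshold `gamma`, and the exact weighting are assumptions made for illustration, following the common formulation of such regularizers (a hinge loss on per-dimension standard deviation plus a penalty on squared off-diagonal covariance entries):

```python
import numpy as np

def variance_covariance_penalty(z, gamma=1.0, eps=1e-4):
    """Illustrative variance/covariance regularizer for a batch of
    common representations z of shape (n_samples, n_dims).

    Variance term: a hinge loss pushing each dimension's standard
    deviation above the (assumed) threshold gamma, so that no
    dimension of the common representation collapses.
    Covariance term: penalizes squared off-diagonal entries of the
    covariance matrix, decorrelating different dimensions.
    """
    n, d = z.shape
    z_centered = z - z.mean(axis=0)
    # Per-dimension standard deviation (eps avoids a zero gradient at collapse).
    std = np.sqrt(z_centered.var(axis=0) + eps)
    var_loss = np.mean(np.maximum(0.0, gamma - std))
    # Sample covariance matrix of the batch.
    cov = (z_centered.T @ z_centered) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = np.sum(off_diag ** 2) / d
    return var_loss + cov_loss
```

On a collapsed batch (every sample identical) the variance term dominates and the penalty is large, while a diverse, decorrelated batch incurs a much smaller penalty, which is exactly the behavior the abstract attributes to the regularization constraint module.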