Supervised Multi-modal Dictionary Learning for Image Representation
-
Graphical Abstract
-
Abstract
Leveraging multi-modal visual features and category labels, a supervised multi-modal dictionary learning method is proposed for image representation. First, a multi-modal dictionary learning algorithm is used to learn a "shared + private" sparse feature from four visual modalities: texture, color, shape, and structure. Then, a Laplacian-style regularization term is introduced so that the learned feature reflects the semantic relationships between samples, which enhances its discriminative power. Experiments on the image classification task show that the proposed method outperforms the baseline methods.
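The abstract does not state the objective function. As a rough, hypothetical sketch only, one common way to formalize a "shared + private" multi-modal dictionary learning objective with a label-driven Laplacian regularizer is the following (all symbols are illustrative assumptions, not the authors' notation):

$$
\min_{\{D^{(m)},\,A^{(m)}\},\,A^{s}}\;
\sum_{m=1}^{4}\Big(\big\|X^{(m)} - D^{(m)}_{s}A^{s} - D^{(m)}_{p}A^{(m)}\big\|_F^{2}
+ \lambda\big(\|A^{s}\|_{1} + \|A^{(m)}\|_{1}\big)\Big)
+ \beta\,\operatorname{tr}\!\big(A\,L\,A^{\top}\big)
$$

Here $X^{(m)}$ denotes the feature matrix of the $m$-th visual modality (texture, color, shape, structure), $D^{(m)}_{s}$ and $D^{(m)}_{p}$ are the shared and private sub-dictionaries of that modality, $A^{s}$ and $A^{(m)}$ are the shared and private sparse codes, $A$ stacks all codes per sample, $L$ is the graph Laplacian of a sample-similarity matrix constructed from the category labels, and $\lambda$, $\beta$ are trade-off parameters.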
-