
Feature Concatenation Multi-view Subspace Clustering (1901.10657v6)

Published 30 Jan 2019 in cs.LG and stat.ML

Abstract: Multi-view clustering is a learning paradigm based on multi-view data. Since the statistical properties of different views are diverse, and even incompatible, few approaches perform multi-view clustering on concatenated features directly, even though feature concatenation is a natural way to combine multi-view data. To this end, this paper proposes a novel multi-view subspace clustering approach, dubbed Feature Concatenation Multi-view Subspace Clustering (FCMSC), which boosts clustering performance by exploring the consensus information of multi-view data. Specifically, multi-view data are first concatenated into a joint representation; then the $l_{2,1}$-norm is integrated into the objective function to handle the sample-specific and cluster-specific corruptions of multiple views. Moreover, a graph-regularized FCMSC is also proposed to exploit both the consensus and complementary information of multi-view data for clustering. Notably, the resulting coefficient matrix is not obtained by simply applying Low-Rank Representation (LRR) to the concatenated features directly. Finally, an effective algorithm based on the Augmented Lagrangian Multiplier (ALM) method is designed to optimize the objective functions. Comprehensive experiments on six real-world datasets demonstrate the superiority of the proposed methods over several state-of-the-art multi-view clustering approaches.
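To make the overall pipeline concrete, below is a minimal sketch of the feature-concatenation-then-subspace-clustering idea described in the abstract. It is not the paper's FCMSC solver: the actual method optimizes a low-rank plus $l_{2,1}$-norm objective via ALM (and, notably, does not reduce to applying LRR to the concatenated features), whereas this sketch substitutes a simple ridge-regularized least-squares self-representation purely for illustration. All data, dimensions, and the regularization parameter `lam` are hypothetical.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical example: two views of the same 200 samples.
rng = np.random.default_rng(0)
n = 200
X1 = rng.standard_normal((n, 40))   # view 1: 40-dim features
X2 = rng.standard_normal((n, 25))   # view 2: 25-dim features

# Step 1: concatenate the views into a joint representation
# (features stacked per sample; samples as columns).
X = np.hstack([X1, X2]).T           # shape (65, 200)

# Step 2: learn a self-representation matrix Z with X ≈ X Z.
# FCMSC solves a low-rank + l_{2,1}-norm objective via ALM; here we
# use a ridge-regularized least-squares surrogate as a stand-in,
# which has the closed form Z = (X^T X + lam I)^{-1} X^T X.
lam = 0.1                            # assumed regularization strength
G = X.T @ X
Z = np.linalg.solve(G + lam * np.eye(n), G)

# Step 3: symmetrize |Z| into an affinity matrix and run spectral
# clustering on it, the standard final step in subspace clustering.
W = 0.5 * (np.abs(Z) + np.abs(Z).T)
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels[:10])
```

Swapping the surrogate in Step 2 for the paper's ALM-optimized objective (with the $l_{2,1}$-norm term absorbing sample- and cluster-specific corruptions) yields the FCMSC coefficient matrix; the surrounding concatenation and spectral-clustering steps stay the same.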

Authors (6)
  1. Qinghai Zheng (16 papers)
  2. Jihua Zhu (61 papers)
  3. Zhongyu Li (72 papers)
  4. Shanmin Pang (19 papers)
  5. Jun Wang (992 papers)
  6. Yaochen Li (16 papers)
Citations (105)