
Riemannian Manifold Optimization for Discriminant Subspace Learning (2101.08032v3)

Published 20 Jan 2021 in cs.LG, eess.IV, and eess.SP

Abstract: Linear discriminant analysis (LDA) is a widely used machine learning algorithm for extracting a low-dimensional representation of high-dimensional data; it finds an orthogonal discriminant projection subspace using the Fisher discriminant criterion. However, traditional Euclidean-based methods for solving LDA easily converge to spurious local minima and rarely obtain an optimal solution. To address this problem, we propose a novel algorithm, Riemannian-based discriminant analysis (RDA), for subspace learning. To obtain an explicit solution, we transfer the traditional Euclidean-based problem to a Riemannian manifold and use the trust-region method to learn the discriminant projection subspace. We compare the proposed algorithm to existing variants of LDA, as well as to unsupervised tensor decomposition methods, on image classification tasks. The numerical results suggest that RDA achieves state-of-the-art classification accuracy.
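The paper's RDA solves for the discriminant subspace with a trust-region method on a Riemannian manifold. As a rough, self-contained illustration of the idea (not the authors' exact algorithm), the sketch below maximizes the trace-ratio Fisher criterion f(W) = tr(WᵀS_b W) / tr(WᵀS_w W) over the Stiefel manifold of orthonormal projections, using plain Riemannian gradient ascent with a QR retraction in place of a trust-region solver; the function names, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def rda_sketch(X, y, k, iters=300, lr=1e-2, seed=0):
    """Maximize f(W) = tr(W^T Sb W) / tr(W^T Sw W) over d x k matrices
    with orthonormal columns (the Stiefel manifold), by Riemannian
    gradient ascent: project the Euclidean gradient onto the tangent
    space, step, then retract back to the manifold via QR."""
    Sb, Sw = scatter_matrices(X, y)
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))  # random orthonormal start
    for _ in range(iters):
        a = np.trace(W.T @ Sb @ W)                    # between-class spread
        b = np.trace(W.T @ Sw @ W)                    # within-class spread
        G = 2.0 * (Sb @ W * b - Sw @ W * a) / b**2    # Euclidean gradient of a/b
        R = G - W @ (W.T @ G + G.T @ W) / 2           # tangent-space projection
        W, _ = np.linalg.qr(W + lr * R)               # QR retraction to the manifold
    return W
```

In practice, the trust-region step the paper relies on is available off the shelf (e.g. a Manopt/Pymanopt-style `TrustRegions` solver over a Stiefel manifold); the first-order loop above is only meant to make the manifold machinery concrete.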

Authors (3)
  1. Wanguang Yin (5 papers)
  2. Zhengming Ma (3 papers)
  3. Quanying Liu (40 papers)
