A Multi-Metric Latent Factor Model for Analyzing High-Dimensional and Sparse data (2204.07819v1)

Published 16 Apr 2022 in cs.LG

Abstract: High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big data-related applications. Latent factor analysis (LFA) is a typical representation learning method that extracts useful yet latent knowledge from HiDS matrices via low-rank approximation. Current LFA-based models mainly focus on a single-metric representation, where the representation strategy designed for the approximation loss function is fixed and exclusive. However, real-world HiDS matrices are commonly heterogeneous and inclusive, with diverse underlying patterns, such that a single-metric representation is likely to yield inferior performance. Motivated by this, in this paper we propose a multi-metric latent factor (MMLF) model. Its main idea is two-fold: 1) two vector spaces and three Lp-norms are simultaneously employed to develop six variants of the LFA model, each of which resides in a unique metric representation space, and 2) all the variants are ensembled with a tailored, self-adaptive weighting strategy. As such, the proposed MMLF enjoys the merits originating from a set of disparate metric spaces all at once, achieving a comprehensive and unbiased representation of HiDS matrices. A theoretical study guarantees that MMLF attains a performance gain. Extensive experiments on eight real-world HiDS datasets, spanning a wide range of industrial and scientific domains, verify that MMLF significantly outperforms ten state-of-the-art shallow and deep counterparts.
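To make the two main ingredients more concrete, the following is a minimal sketch in Python/NumPy of training several Lp-norm-based LFA variants on the observed entries of a sparse matrix and combining their predictions with an error-driven weighting. The rank, learning rate, choice of p values, and the softmax weighting are illustrative assumptions, not the authors' exact MMLF design (which additionally involves two vector spaces and a tailored self-adaptive weighting strategy).

```python
# Illustrative sketch only: Lp-norm LFA variants trained by SGD on observed
# entries, ensembled with a softmax over each variant's validation error.
# Hyperparameters and the weighting rule are assumptions, not the paper's.
import numpy as np

def train_lfa(obs, n_rows, n_cols, p=2.0, rank=10, lr=0.01, reg=0.05, epochs=30, seed=0):
    """Train one LFA variant on (i, j, value) triples with an |error|^p loss."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_rows, rank))
    Q = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, r in obs:
            e = r - P[i] @ Q[j]
            # d|e|^p / d(prediction) = -p * |e|^(p-1) * sign(e)
            g = p * np.abs(e) ** (p - 1) * np.sign(e)
            Pi = P[i].copy()
            P[i] += lr * (g * Q[j] - reg * P[i])
            Q[j] += lr * (g * Pi - reg * Q[j])
    return P, Q

def ensemble_predict(models, errors, i, j, tau=1.0):
    """Weight each variant by a softmax over its negative mean error."""
    w = np.exp(-np.asarray(errors) / tau)
    w /= w.sum()
    preds = np.array([P[i] @ Q[j] for P, Q in models])
    return float(w @ preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_rows, n_cols, rank = 50, 40, 5
    true_P = rng.standard_normal((n_rows, rank))
    true_Q = rng.standard_normal((n_cols, rank))
    # Keep ~10% of entries to mimic a high-dimensional, sparse (HiDS) matrix.
    obs = [(i, j, true_P[i] @ true_Q[j])
           for i in range(n_rows) for j in range(n_cols) if rng.random() < 0.1]
    models = [train_lfa(obs, n_rows, n_cols, p=p) for p in (1.0, 1.5, 2.0)]
    errors = [np.mean([abs(r - P[i] @ Q[j]) for i, j, r in obs]) for P, Q in models]
    print("per-variant MAE:", [round(e, 3) for e in errors])
    print("ensemble prediction for entry (0, 0):", ensemble_predict(models, errors, 0, 0))
```

The point of the sketch is the ensemble step: each Lp loss emphasizes a different error profile (L1 is robust to outliers, L2 favors smooth fits), and a data-driven weighting lets the combined predictor adapt to whichever metric suits the matrix at hand.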
