
Distributed Methods for High-dimensional and Large-scale Tensor Factorization (1410.5209v3)

Published 20 Oct 2014 in cs.NA, cs.DB, and cs.IR

Abstract: Given a high-dimensional large-scale tensor, how can we decompose it into latent factors? Can we process it on commodity computers with limited memory? These questions are closely related to recommender systems, which have modeled rating data not as a matrix but as a tensor to utilize contextual information such as time and location. This increase in the dimension requires tensor factorization methods scalable with both the dimension and size of a tensor. In this paper, we propose two distributed tensor factorization methods, SALS and CDTF. Both methods are scalable with all aspects of data, and they show an interesting trade-off between convergence speed and memory requirements. SALS updates a subset of the columns of a factor matrix at a time, and CDTF, a special case of SALS, updates one column at a time. In our experiments, only our methods factorize a 5-dimensional tensor with 1 billion observable entries, 10M mode length, and 1K rank, while all other state-of-the-art methods fail. Moreover, our methods require several orders of magnitude less memory than our competitors. We implement our methods on MapReduce with two widely-applicable optimization techniques: local disk caching and greedy row assignment. These techniques speed up our methods by up to 98.2X, and the competitors by up to 5.9X.
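For intuition, the sketch below illustrates the kind of column-wise closed-form update that CDTF performs for a sparse CP factorization (SALS generalizes this to blocks of columns). This is a minimal single-machine NumPy sketch under assumptions: the function name, the 3-way COO data layout, and the L2 regularizer `lam` are illustrative choices, not the paper's distributed MapReduce implementation.

```python
import numpy as np

def cdtf_update_column(idx, vals, factors, n, k, lam=0.1):
    """Update column k of factor matrix n in closed form, holding all
    other columns fixed (hypothetical CDTF-style coordinate update).

    idx     : (nnz, 3) int array of observed entry coordinates (COO)
    vals    : (nnz,) observed tensor values
    factors : list of factor matrices A, B, C with shapes (I,R), (J,R), (K,R)
    """
    nnz, rank = len(vals), factors[0].shape[1]

    # Prediction of each observed entry from all rank-one components.
    full = np.zeros(nnz)
    for r in range(rank):
        term = np.ones(nnz)
        for m in range(3):
            term *= factors[m][idx[:, m], r]
        full += term

    # Contribution of component k alone, so we can exclude it.
    term_k = np.ones(nnz)
    for m in range(3):
        term_k *= factors[m][idx[:, m], k]
    resid = vals - (full - term_k)  # residual w.r.t. all components but k

    # Product of the other modes' k-th columns at each observed entry.
    other = np.ones(nnz)
    for m in range(3):
        if m != n:
            other *= factors[m][idx[:, m], k]

    # Ridge-regression closed form per row i of mode n:
    # a[i,k] = sum(resid*other over entries with i) / (lam + sum(other^2))
    num = np.zeros(factors[n].shape[0])
    den = np.full(factors[n].shape[0], lam)
    np.add.at(num, idx[:, n], resid * other)
    np.add.at(den, idx[:, n], other * other)
    factors[n][:, k] = num / den

if __name__ == "__main__":
    # Tiny synthetic example (sizes are arbitrary for illustration).
    rng = np.random.default_rng(0)
    I, J, K, R, nnz = 50, 40, 30, 5, 2000
    idx = np.column_stack([rng.integers(0, s, nnz) for s in (I, J, K)])
    vals = rng.standard_normal(nnz)
    factors = [rng.standard_normal((s, R)) * 0.1 for s in (I, J, K)]
    for sweep in range(10):          # outer iterations
        for n in range(3):           # each mode
            for k in range(R):       # each column: one CDTF coordinate step
                cdtf_update_column(idx, vals, factors, n, k)
```

Sweeping k over all ranks and n over all modes, repeated until convergence, gives the CDTF-style iteration; a SALS-style variant would update several columns jointly per step, which matches the convergence-speed vs. memory trade-off described in the abstract.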

Citations (65)
