
Low-Rank Matrix Completion: A Contemporary Survey (1907.11705v1)

Published 27 Jul 2019 in cs.DS, cs.IT, math.IT, and math.OC

Abstract: As a paradigm for recovering unknown entries of a matrix from partial observations, low-rank matrix completion (LRMC) has generated a great deal of interest. Over the years there has been a large body of work on this topic, but it is not easy to grasp the essential knowledge from these studies, mainly because many of them are highly theoretical or propose a new LRMC technique. In this paper, we give a contemporary survey on LRMC. To provide a better view, insight, and understanding of the potential and limitations of LRMC, we present early scattered results in a structured and accessible way. Specifically, we classify the state-of-the-art LRMC techniques into two main categories and then explain each category in detail. We next discuss issues to consider when using LRMC techniques, including the intrinsic properties required for matrix recovery and how to exploit special structure in LRMC design. We also discuss convolutional neural network (CNN) based LRMC algorithms that exploit the graph structure of a low-rank matrix. Further, we present the recovery performance and computational complexity of the state-of-the-art LRMC techniques. Our hope is that this survey will serve as a useful guide for practitioners and non-experts to catch the gist of LRMC.
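To make the problem setting in the abstract concrete, the following is a minimal, self-contained sketch of low-rank matrix completion via alternating least squares on a factorized model, which is one of the technique families such surveys cover. It is not the paper's algorithm; the matrix sizes, rank, sampling rate, and regularization constant below are illustrative assumptions for the demo.

```python
# Illustrative LRMC sketch (not from the paper): recover a rank-r matrix from
# partially observed entries by alternating least squares on M ~= U @ V.T.
# All dimensions and hyperparameters here are hypothetical demo choices.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth low-rank matrix and a random observation mask (~30% observed).
n, m, r = 100, 80, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
mask = rng.random((n, m)) < 0.3

# Alternating least squares: fix V and solve a small ridge-regularized
# least-squares problem for each row of U, then swap roles.
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))
lam = 1e-3  # small ridge term for numerical stability
for _ in range(50):
    for i in range(n):
        idx = mask[i]                                   # observed columns in row i
        A = V[idx].T @ V[idx] + lam * np.eye(r)
        U[i] = np.linalg.solve(A, V[idx].T @ M[i, idx])
    for j in range(m):
        idx = mask[:, j]                                # observed rows in column j
        A = U[idx].T @ U[idx] + lam * np.eye(r)
        V[j] = np.linalg.solve(A, U[idx].T @ M[idx, j])

M_hat = U @ V.T
err = np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print(f"relative error on unobserved entries: {err:.3e}")
```

Under the usual incoherence and sampling assumptions discussed in the survey, the recovered entries on the unobserved positions should match the ground truth closely; with the settings above the relative error is typically small.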

Authors (3)
  1. Luong Trung Nguyen (5 papers)
  2. Junhan Kim (42 papers)
  3. Byonghyo Shim (56 papers)
Citations (148)
