Sparse power methods for large-scale higher-order PageRank problems (2105.03874v1)

Published 9 May 2021 in math.NA and cs.NA

Abstract: A commonly used technique for the higher-order PageRank problem is the power method, which is computationally intractable for large-scale problems. The recently proposed truncated power method offers an alternative approach; however, its accuracy and efficiency can be poor in practical computations. In this work, we revisit the higher-order PageRank problem and consider how to solve it efficiently. The contributions of this work are as follows. First, we accelerate the truncated power method for higher-order PageRank: in the improved version, there is no need to form and store the vectors arising from the dangling states, nor to store an auxiliary matrix. Second, we propose a truncated power method with partial updating to further reduce the overhead, in which only some important columns of the approximation need to be updated in each iteration. On the other hand, for convenience the truncated power method solves a modified higher-order PageRank model that is not mathematically equivalent to the original one. Thus, the third contribution of this work is a sparse power method with partial updating for the original higher-order PageRank problem. The convergence of all the proposed methods is discussed. Numerical experiments on large, sparse real-world and synthetic data sets show the superiority of our new algorithms over some state-of-the-art ones for large-scale higher-order PageRank problems.
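
For intuition about the methods the abstract describes, below is a minimal sketch of the power iteration for the standard order-3 multilinear PageRank model x = αR(x ⊗ x) + (1 − α)v, where R is an n × n² column-stochastic flattening of the transition tensor, together with a naive truncation step that keeps only the largest entries of each iterate. The toy tensor, the truncation rule, and the name `truncated_power_method` are illustrative assumptions for this sketch, not the paper's algorithms, which refine truncation with partial updating and dangling-state handling.

```python
# Minimal sketch (assumed model, not the paper's code): power iteration for
#   x = alpha * R (x kron x) + (1 - alpha) * v,
# with an optional naive truncation of each iterate to its largest entries.
import numpy as np

def truncated_power_method(R, v, alpha=0.85, keep=None, tol=1e-10, max_iter=1000):
    """Power iteration for the order-3 multilinear PageRank model.

    R     : (n, n*n) column-stochastic flattening of the transition tensor
    v     : (n,) teleportation (personalization) vector
    keep  : if given, retain only the `keep` largest entries per iterate
            (a crude stand-in for the truncation the paper refines)
    """
    n = v.size
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * (R @ np.kron(x, x)) + (1.0 - alpha) * v
        if keep is not None and keep < n:
            # Zero out all but the `keep` largest entries, then renormalize
            # so the iterate stays a probability vector.
            small = np.argsort(x_new)[: n - keep]
            x_new[small] = 0.0
            x_new /= x_new.sum()
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# Toy example: a random column-stochastic flattened tensor on n = 5 nodes.
rng = np.random.default_rng(0)
n = 5
R = rng.random((n, n * n))
R /= R.sum(axis=0, keepdims=True)   # make each column sum to 1
v = np.full(n, 1.0 / n)             # uniform teleportation vector
x = truncated_power_method(R, v, alpha=0.85, keep=3)
print(x, x.sum())
```

Note that the dense `np.kron(x, x)` product costs O(n²) memory per iteration, which is exactly the bottleneck that motivates sparse and partially updated iterates at scale: updating only a few important entries (columns of the approximation, in the paper's terminology) avoids forming the full Kronecker product each step.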
