
A Revisit of Block Power Methods for Finite State Markov Chain Applications (1610.08881v1)

Published 27 Oct 2016 in cs.NA

Abstract: In this paper, we revisit the generalized block power methods for approximating the eigenvector associated with $\lambda_1 = 1$ of a Markov chain transition matrix $P$. Our analysis of the block power method shows that when $s$ linearly independent probability vectors are used as the initial block, the convergence of the block power method to the stationary distribution depends on the magnitude of the $(s+1)$th dominant eigenvalue $\lambda_{s+1}$ of $P$ rather than on that of $\lambda_2$, as in the standard power method. Therefore, the block power method with block size $s$ is particularly effective for transition matrices where $|\lambda_{s+1}|$ is well separated from $\lambda_1 = 1$ but $|\lambda_2|$ is not. This approach is especially useful when visiting the elements of a large transition matrix, rather than the matrix--vector multiplications themselves, is the main computational bottleneck: the block power method can effectively reduce the total number of passes over the matrix. To further reduce the overall computational cost, we combine the block power method with a sliding window scheme that assembles the block from the successive vectors of the latest $s$ iterations. The sliding window scheme correlates the vectors in the window to quickly remove the influence of the eigenvalues whose magnitudes are smaller than $|\lambda_{s}|$, reducing the overall number of matrix--vector multiplications needed to reach convergence. Finally, we compare the effectiveness of these methods in a Markov chain model representing a stochastic luminal calcium release site.
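The two ideas in the abstract can be summarized in a short NumPy sketch. The code below is illustrative only: the random initialization, the Rayleigh-Ritz-style extraction step, and all names (`_extract_stationary`, `block_power`, `sliding_window_power`) are assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def _extract_stationary(V, P):
    """Rayleigh-Ritz-style extraction (an illustrative choice, not
    necessarily the paper's procedure): find a small matrix H with
    V @ P ~= H @ V, take the left eigenvector of H for the eigenvalue
    nearest 1, and map it back to a probability vector."""
    W = V @ P
    H = W @ np.linalg.pinv(V)                 # small s-by-s projected problem
    w, U = np.linalg.eig(H.T)                 # columns: left eigenvectors of H
    c = np.real(U[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(c @ V)
    return pi / pi.sum()

def block_power(P, s, iters=50, seed=0):
    """Block power iteration: start from s (almost surely linearly
    independent) random probability vectors; convergence is governed by
    |lambda_{s+1}| rather than |lambda_2|."""
    rng = np.random.default_rng(seed)
    V = rng.random((s, P.shape[0]))
    V /= V.sum(axis=1, keepdims=True)         # each row is a probability vector
    for _ in range(iters):
        V = V @ P                             # one pass over P advances all s vectors
    return _extract_stationary(V, P)

def sliding_window_power(P, s, iters=50):
    """Sliding-window variant: the block is assembled from the s most
    recent power iterates x, xP, ..., so each step costs a single
    matrix-vector product instead of s of them."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    window = [x]
    for _ in range(iters):
        x = x @ P
        window.append(x)
    return _extract_stationary(np.vstack(window[-s:]), P)

# Example: a small 3-state row-stochastic chain (made up for illustration)
P = np.array([[0.50, 0.25, 0.25],
              [0.20, 0.60, 0.20],
              [0.25, 0.25, 0.50]])
print(block_power(P, s=2))          # ~ stationary distribution
print(sliding_window_power(P, s=2))
```

Under this reading, the block method pays for $s$ matrix--vector products per pass but converges at a rate set by $|\lambda_{s+1}|$, while the sliding window keeps the per-step cost at one product and reuses the window of recent iterates to filter out the eigencomponents below $|\lambda_s|$.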
