Numerical Performance of the Implicitly Restarted Arnoldi Method in OFP8, Bfloat16, Posit, and Takum Arithmetics
Abstract: The computation of select eigenvalues and eigenvectors of large, sparse matrices is fundamental to a wide range of applications. Accordingly, evaluating the numerical performance of emerging alternatives to the IEEE 754 floating-point standard -- such as OFP8 (E4M3 and E5M2), bfloat16, and the tapered-precision posit and takum formats -- is of significant interest. Among the most widely used methods for this task is the implicitly restarted Arnoldi method, as implemented in ARPACK. This paper presents a comprehensive and untailored evaluation based on two real-world datasets: the SuiteSparse Matrix Collection, which includes matrices of varying sizes and condition numbers, and the Network Repository, a large collection of graphs from practical applications. The results demonstrate that the tapered-precision posit and takum formats provide improved numerical performance, with takum arithmetic avoiding several weaknesses observed in posits. While bfloat16 performs consistently better than float16, the OFP8 types are generally unsuitable for general-purpose computations.
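As a rough illustration of the kind of experiment the abstract describes (not the authors' actual harness), the sketch below rounds a sparse test matrix to bfloat16 storage precision using the ml_dtypes package and then runs ARPACK's implicitly restarted Arnoldi method through scipy.sparse.linalg.eigs, comparing a few largest-magnitude eigenvalues against a float64 baseline. The matrix, parameters, and error metric here are all illustrative assumptions; note that this emulates only rounding of the input data, whereas the paper also evaluates the solver's internal arithmetic in each format, and the posit and takum formats have no off-the-shelf NumPy dtype.

```python
# Minimal sketch: bfloat16-rounded input vs. float64 baseline under ARPACK's
# implicitly restarted Arnoldi method (scipy.sparse.linalg.eigs wraps ARPACK).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from ml_dtypes import bfloat16  # NumPy-compatible bfloat16 dtype

rng = np.random.default_rng(0)
n = 500
A = sp.random(n, n, density=0.01, random_state=rng, format="csr")
A = A + A.T  # symmetrize so the spectrum is real

# Baseline: six largest-magnitude eigenvalues in float64.
ref = spla.eigs(A.astype(np.float64), k=6, which="LM",
                return_eigenvectors=False)

# Emulated bfloat16 storage: round each entry to bfloat16, cast back to
# float64 so ARPACK can operate on it.
A_bf16 = A.astype(np.float64)
A_bf16.data = A_bf16.data.astype(bfloat16).astype(np.float64)
low = spla.eigs(A_bf16, k=6, which="LM", return_eigenvectors=False)

ref_s, low_s = np.sort_complex(ref), np.sort_complex(low)
print("relative error per eigenvalue:",
      np.abs(low_s - ref_s) / np.abs(ref_s))
```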