Iterative Refinement with Low-Precision Posits (2408.13400v2)

Published 23 Aug 2024 in math.NA and cs.NA

Abstract: This research investigates a mixed-precision iterative refinement method that uses posit numbers in place of the standard IEEE floating-point formats. The method is applied to solve a general linear system $Ax = b$, where $A$ is a large sparse matrix. Scaling techniques such as row and column equilibration map the matrix entries into higher-density regions of the machine-number range before the $O(n^3)$ factorization is performed. A low-precision LU factorization followed by forward/backward substitution provides an initial estimate, which is then refined iteratively. The results demonstrate that a 16-bit posit configuration combined with equilibration achieves accuracy comparable to IEEE half precision (fp16), indicating a potential balance between efficiency and accuracy.
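The pipeline the abstract describes (equilibrate, factorize in low precision, then refine in high precision) can be sketched as follows. This is a minimal illustration, not the paper's implementation: NumPy has no posit type, so IEEE fp16 stands in for the 16-bit posit format, a dense `np.linalg.solve` stands in for the sparse LU solve, and the function names `equilibrate` and `refine` are invented for the example.

```python
import numpy as np

def equilibrate(A):
    """Two-sided diagonal scaling (row, then column, equilibration).

    Returns vectors r, c such that every entry of diag(r) @ A @ diag(c)
    has magnitude at most 1, mapping entries toward denser regions of a
    low-precision number format. Assumes A has no zero rows or columns.
    """
    r = 1.0 / np.max(np.abs(A), axis=1)            # row scaling factors
    c = 1.0 / np.max(np.abs(A * r[:, None]), axis=0)  # column scaling factors
    return r, c

def refine(A, b, iters=5):
    """Mixed-precision iterative refinement sketch for A x = b.

    fp16 emulates the low-precision format (a stand-in for 16-bit posits);
    residuals and corrections are accumulated in fp64.
    """
    r, c = equilibrate(A)
    As = r[:, None] * A * c[None, :]   # scaled system: As y = diag(r) b, with x = diag(c) y
    # Round the scaled matrix to fp16; reusing it in every solve plays the
    # role of reusing a single low-precision LU factorization.
    A16 = As.astype(np.float16).astype(np.float64)
    y = np.linalg.solve(A16, r * b)    # initial estimate from the low-precision solve
    for _ in range(iters):
        res = r * b - As @ y           # residual computed in high precision
        y = y + np.linalg.solve(A16, res)  # correction via the low-precision solve
    return c * y                       # undo the column scaling
```

Each sweep computes the residual in high precision and solves for a correction with the cached low-precision operator, so the error contracts geometrically as long as the low-precision system is not too ill-conditioned.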
