PIM-LLM: A High-Throughput Hybrid PIM Architecture for 1-bit LLMs (2504.01994v1)

Published 31 Mar 2025 in cs.AR and cs.AI

Abstract: In this paper, we propose PIM-LLM, a hybrid architecture developed to accelerate 1-bit LLMs. PIM-LLM leverages analog processing-in-memory (PIM) architectures and digital systolic arrays to accelerate low-precision matrix multiplication (MatMul) operations in projection layers and high-precision MatMul operations in attention heads of 1-bit LLMs, respectively. Our design achieves up to roughly 80x improvement in tokens per second and a 70% increase in tokens per joule compared to conventional hardware accelerators. Additionally, PIM-LLM outperforms previous PIM-based LLM accelerators, setting a new benchmark with at least 2x and 5x improvement in GOPS and GOPS/W, respectively.
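The hybrid split described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it simply contrasts the two MatMul classes the architecture partitions, using a BitNet-style ternary weight quantizer as a stand-in for the 1-bit projection weights. The projection MatMul uses quantized weights (the kind of operation PIM-LLM maps to analog PIM crossbars), while the attention-score MatMul is an activation-activation product kept at full precision (the kind mapped to the digital systolic array). The function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_quantize(w):
    # Quantize weights to {-1, 0, +1} with a per-tensor scale,
    # in the style of 1-bit (1.58-bit) LLM weight quantization.
    scale = np.mean(np.abs(w)) + 1e-8
    return np.clip(np.round(w / scale), -1, 1), scale

# Projection layer: low-precision weight MatMul.
# With ternary weights, the hardware needs only adds/subtracts,
# which is what makes analog PIM crossbars attractive here.
x = rng.standard_normal((4, 8)).astype(np.float32)   # activations
w = rng.standard_normal((8, 8)).astype(np.float32)   # weights
wq, s = ternary_quantize(w)
y_proj = (x @ wq) * s

# Attention head: high-precision activation-activation MatMul.
# Both operands are runtime activations, so this stays full
# precision and would run on the digital systolic array.
q = rng.standard_normal((4, 8)).astype(np.float32)
k = rng.standard_normal((4, 8)).astype(np.float32)
scores = (q @ k.T) / np.sqrt(q.shape[-1])
```

The key distinction the sketch highlights: projection weights are static and quantizable offline, whereas attention scores multiply two dynamic activation tensors, which is why the two operations suit different hardware substrates.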
