PIM-LLM: A High-Throughput Hybrid PIM Architecture for 1-bit LLMs
Source: arXiv

Abstract

In this paper, we propose PIM-LLM, a hybrid architecture developed to accelerate 1-bit large language models (LLMs). PIM-LLM leverages analog processing-in-memory (PIM) architectures and digital systolic arrays to accelerate low-precision matrix multiplication (MatMul) operations in projection layers and high-precision MatMul operations in attention heads of 1-bit LLMs, respectively. Our design achieves up to roughly 80x improvement in tokens per second and a 70% increase in tokens per joule compared to conventional hardware accelerators. Additionally, PIM-LLM outperforms previous PIM-based LLM accelerators, setting a new benchmark with at least 2x and 5x improvement in GOPS and GOPS/W, respectively.
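The split the abstract describes can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: projection-layer weights are binarized to {-1, +1} so their MatMul reduces to additions and subtractions (the low-precision work PIM-LLM maps to analog PIM crossbars), while attention-head MatMuls multiply two activation tensors and stay in high precision (the work assigned to the digital systolic array).

```python
import numpy as np

def binarize(w):
    """Quantize full-precision weights to {-1, +1} (sign binarization)."""
    return np.where(w >= 0, 1.0, -1.0)

def projection_matmul_1bit(x, w_bin):
    """Projection-layer MatMul with 1-bit weights: every partial product
    is +x or -x, so the operation is pure accumulation."""
    return x @ w_bin

def attention_scores_fp(q, k):
    """Attention-head MatMul (activation x activation) kept in full
    precision, scaled by sqrt(d) as in standard attention."""
    return (q @ k.T) / np.sqrt(q.shape[-1])

# Hypothetical shapes for illustration only.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # 4 tokens, hidden size 8
w = rng.standard_normal((8, 8))   # projection weights
y = projection_matmul_1bit(x, binarize(w))
s = attention_scores_fp(x, x)
```

The key point of the hybrid design is that the two MatMul classes have different precision requirements, so each can be routed to the hardware best suited for it.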

Jinendra Malekar, Peyton Chandarana, Md Hasibul Amin, Mohammed E. Elbtity, Ramtin Zand

Subject: Computing Technology, Computer Technology

Jinendra Malekar, Peyton Chandarana, Md Hasibul Amin, Mohammed E. Elbtity, Ramtin Zand. PIM-LLM: A High-Throughput Hybrid PIM Architecture for 1-bit LLMs [EB/OL]. (2025-03-31) [2025-06-06]. https://arxiv.org/abs/2504.01994
