
LatentLLM: Attention-Aware Joint Tensor Compression


Source: arXiv
English Abstract

Modern foundation models such as large language models (LLMs) and large multi-modal models (LMMs) require a massive amount of computational and memory resources. We propose a new framework to convert such LLMs/LMMs into a reduced-dimension latent structure. Our method extends a local activation-aware tensor decomposition to a global attention-aware joint tensor decomposition. Our framework can significantly improve the model accuracy over existing model compression methods when reducing the latent dimension to realize computationally and memory-efficient LLMs/LMMs. We show the benefit on several benchmarks, including multi-modal reasoning tasks.
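The "local activation-aware tensor decomposition" that the abstract takes as its starting point can be illustrated with a minimal sketch: a rank-r factorization of a single weight matrix whose reconstruction error is weighted by the layer's input activation statistics. The NumPy sketch below is an assumption-laden illustration of that baseline idea only (the function name and the Cholesky whitening choice are mine); it does not reproduce the paper's global attention-aware joint decomposition, which couples multiple tensors through the attention computation.

```python
import numpy as np

def activation_aware_lowrank(W, X, rank):
    """Rank-r factorization of W (d_out x d_in), weighted by the second
    moment of the layer inputs X (n_samples x d_in).

    Illustrative sketch of a *local* activation-aware decomposition; not
    the paper's attention-aware *joint* method.
    """
    # Activation Gram matrix and its Cholesky factor (whitening transform).
    G = X.T @ X / X.shape[0]
    S = np.linalg.cholesky(G + 1e-6 * np.eye(G.shape[0]))  # G ~= S @ S.T
    # Truncated SVD of the activation-weighted weight, then undo whitening.
    U, s, Vt = np.linalg.svd(W @ S, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]                  # (d_out, r)
    V_r = np.linalg.solve(S.T, Vt[:rank].T).T     # (r, d_in), i.e. Vt_r @ S^{-1}
    return U_r, V_r                               # W ~= U_r @ V_r

# Usage: replace y = W @ x with y ~= U_r @ (V_r @ x), cutting parameters
# and FLOPs from d_out*d_in down to r*(d_out + d_in).
W = np.random.randn(256, 512)
X = np.random.randn(1024, 512)
U_r, V_r = activation_aware_lowrank(W, X, rank=64)
print(np.linalg.norm((W - U_r @ V_r) @ X.T) / np.linalg.norm(W @ X.T))
```

Because the SVD is taken of W @ S rather than W itself, the truncation minimizes the error on the actual layer outputs, ||(W - U_r V_r) X^T||_F, rather than the plain weight error ||W - U_r V_r||_F.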

Toshiaki Koike-Akino, Xiangyu Chen, Jing Liu, Ye Wang, Pu (Perry) Wang, Matthew Brand


Computing Technology, Computer Technology

Toshiaki Koike-Akino, Xiangyu Chen, Jing Liu, Ye Wang, Pu (Perry) Wang, Matthew Brand. LatentLLM: Attention-Aware Joint Tensor Compression [EB/OL]. (2025-05-23) [2025-06-14]. https://arxiv.org/abs/2505.18413.
