
Share Your Attention: Transformer Weight Sharing via Matrix-based Dictionary Learning

Source: arXiv
Abstract

Large language models (LLMs) have revolutionized AI applications, yet their high computational and memory demands hinder widespread deployment. Existing compression techniques focus on intra-block optimizations (e.g., low-rank approximation, attention head pruning), while the repetitive layered structure of transformers implies significant inter-block redundancy, a dimension largely unexplored beyond key-value (KV) caching. Inspired by dictionary learning in CNNs, we propose a framework for structured weight sharing across transformer layers. Our approach decomposes attention projection matrices into shared dictionary atoms, reducing the attention module's parameters by 66.7% while achieving on-par performance. Unlike complex methods requiring distillation or architectural changes, MASA (Matrix Atom Sharing in Attention) operates as a drop-in replacement, trained with standard optimizers, and represents each layer's weights as linear combinations of shared matrix atoms. Experiments across scales (100M-700M parameters) show that MASA achieves better benchmark accuracy and perplexity than grouped-query attention (GQA), low-rank baselines, and the recently proposed Repeat-all-over/Sequential sharing schemes at comparable parameter budgets. Ablation studies confirm robustness to the dictionary size and the efficacy of shared representations in capturing cross-layer statistical regularities. Extending to Vision Transformers (ViTs), MASA matches performance on image classification and detection tasks with 66.7% fewer attention parameters. By combining dictionary learning strategies with transformer efficiency, MASA offers a scalable blueprint for parameter-efficient models without sacrificing performance. Finally, we investigate applying MASA to pretrained LLMs to reduce their parameter count without any significant drop in performance.
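To make the weight-sharing scheme concrete, the sketch below illustrates matrix-atom dictionary sharing as described in the abstract: a bank of full-size matrix atoms is shared across all layers, and each layer reconstructs its attention projections as a learned linear combination of those atoms. This is an illustrative reconstruction under assumed details, not the authors' implementation; the class names, atom shapes, initialization, and the parameter-count arithmetic in the comments are assumptions.

```python
# Minimal sketch of matrix-atom weight sharing across attention layers (assumed details).
import torch
import torch.nn as nn


class MatrixAtomDictionary(nn.Module):
    """A bank of full-size matrix atoms shared by all transformer layers."""

    def __init__(self, num_atoms: int, d_model: int):
        super().__init__()
        # Each atom has the same shape as a Q/K/V/O projection matrix.
        self.atoms = nn.Parameter(torch.randn(num_atoms, d_model, d_model) * 0.02)


class SharedAttentionProjection(nn.Module):
    """One projection (e.g., a layer's W_Q) expressed as a linear combination
    of the shared atoms instead of a dedicated weight matrix."""

    def __init__(self, dictionary: MatrixAtomDictionary):
        super().__init__()
        self.dictionary = dictionary  # shared module, not copied per layer
        num_atoms = dictionary.atoms.shape[0]
        # Per-layer, per-projection mixing coefficients: the only new parameters.
        self.coeffs = nn.Parameter(torch.randn(num_atoms) * 0.02)

    def weight(self) -> torch.Tensor:
        # W = sum_k c_k * A_k, reconstructed on the fly.
        return torch.einsum("k,kij->ij", self.coeffs, self.dictionary.atoms)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().T


# Rough parameter accounting behind a ~66.7% reduction (illustrative numbers):
# L layers x 4 projections of size d^2 cost 4*L*d^2 parameters normally.
# A shared dictionary of M atoms costs M*d^2 + 4*L*M coefficients, so choosing
# M at about one third of the 4*L projections gives roughly a two-thirds
# reduction, since the coefficient term is negligible compared to d^2.
if __name__ == "__main__":
    d_model, num_layers, num_atoms = 512, 12, 16
    dictionary = MatrixAtomDictionary(num_atoms, d_model)
    q_projs = [SharedAttentionProjection(dictionary) for _ in range(num_layers)]
    x = torch.randn(2, 10, d_model)
    print(q_projs[0](x).shape)  # torch.Size([2, 10, 512])
```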

Magauiya Zhussip, Dmitriy Shopkhoev, Ammar Ali, Stamatios Lefkimmiatis

Computing Technology; Computer Technology

Magauiya Zhussip, Dmitriy Shopkhoev, Ammar Ali, Stamatios Lefkimmiatis. Share Your Attention: Transformer Weight Sharing via Matrix-based Dictionary Learning [EB/OL]. (2025-08-06) [2025-08-16]. https://arxiv.org/abs/2508.04581.
