Towards Efficient LLM Storage Reduction via Tensor Deduplication and Delta Compression
Modern model hubs, such as Hugging Face, store tens of petabytes of LLMs, with fine-tuned variants vastly outnumbering base models and dominating storage consumption. Existing storage reduction techniques -- such as deduplication and compression -- are either LLM-oblivious or incompatible with each other, limiting their data reduction effectiveness. Our large-scale characterization study across all publicly available Hugging Face LLM repositories reveals several key insights: (1) fine-tuned models within the same family exhibit highly structured, sparse parameter differences suitable for delta compression; (2) bitwise similarity enables LLM family clustering; and (3) tensor-level deduplication offers strong synergy with model-aware compressors. Building on these insights, we present BitX, an effective, fast, lossless delta compression algorithm that compresses the XORed redundancy between fine-tuned and base LLMs. We build zLLM, a model storage reduction pipeline that unifies tensor-level deduplication and lossless BitX compression. By synergizing deduplication and compression around LLM family clustering, zLLM reduces model storage consumption by 49.5 percent, over 20 percent more than state-of-the-art deduplication and compression designs.
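The core idea behind XOR-based delta compression can be illustrated with a short sketch. The snippet below is not the paper's implementation; it is a minimal illustration, assuming float32 tensors and using zlib as a stand-in for the compressor. Because a fine-tuned model's weights differ from the base model's only in the low-order mantissa bits, XORing the raw bit patterns yields mostly-zero, low-entropy bytes that compress far better than the raw weights, while decompression remains exactly lossless.

```python
import zlib
import numpy as np

def xor_delta_compress(base: np.ndarray, finetuned: np.ndarray) -> bytes:
    """XOR the bit patterns of two same-shape float32 tensors and compress.

    The XOR zeroes out the sign, exponent, and high mantissa bits that the
    two models share, leaving a sparse, highly compressible byte stream.
    (Illustrative sketch of the XOR-delta idea; not the BitX code.)
    """
    xored = base.view(np.uint32) ^ finetuned.view(np.uint32)
    return zlib.compress(xored.tobytes())

def xor_delta_restore(base: np.ndarray, delta: bytes) -> np.ndarray:
    """Losslessly reconstruct the fine-tuned tensor from base + delta."""
    xored = np.frombuffer(zlib.decompress(delta), dtype=np.uint32)
    return (base.view(np.uint32) ^ xored).view(np.float32)

# Simulate a base model and a lightly fine-tuned variant (hypothetical data).
rng = np.random.default_rng(0)
base = rng.standard_normal(1 << 16).astype(np.float32)
finetuned = base + 1e-4 * rng.standard_normal(base.shape).astype(np.float32)

delta = xor_delta_compress(base, finetuned)
raw = zlib.compress(finetuned.tobytes())  # compressing the weights directly

restored = xor_delta_restore(base, delta)
assert np.array_equal(restored, finetuned)  # bit-exact, i.e. lossless
assert len(delta) < len(raw)  # XOR delta compresses better than raw weights
```

The same storage trick extends naturally to bf16/fp16 tensors by viewing them as uint16; the key property is that XOR exposes the structured, sparse bitwise difference that a generic compressor can then exploit.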
Zirui Wang, Tingfeng Lan, Zhaoyuan Su, Juncheng Yang, Yue Cheng
Subject: Computing Technology; Computer Technology
Zirui Wang, Tingfeng Lan, Zhaoyuan Su, Juncheng Yang, Yue Cheng. Towards Efficient LLM Storage Reduction via Tensor Deduplication and Delta Compression [EB/OL]. (2025-04-30) [2025-06-29]. https://arxiv.org/abs/2505.06252