
LoRMA: Low-Rank Multiplicative Adaptation for LLMs


Source: arXiv
Abstract

Large Language Models have shown remarkable capabilities in the NLP domain. Their effectiveness can largely be attributed to their ability to adapt to an array of downstream tasks. However, full fine-tuning is computationally expensive. To mitigate this, many techniques that prioritize efficiency have been developed, a prominent one being Low-Rank Adaptation (LoRA). However, LoRA and its variants employ re-parametrized additive updates. In this paper, we propose Low-Rank Multiplicative Adaptation (LoRMA), which shifts the paradigm from additive updates to a richer space of matrix multiplicative transformations. We tackle challenges such as the computational complexity and rank bottleneck of matrix multiplication by effectively re-ordering operations and introducing rank inflation strategies. We conduct extensive experiments to demonstrate the effectiveness of our approach across various evaluation metrics.
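The additive-to-multiplicative shift can be made concrete. LoRA updates a frozen weight W as W + BA; a multiplicative adapter instead transforms W itself, for instance as W' = (I + BA)W, where adding the identity is one simple way around the rank bottleneck (a pure product BAW has rank at most r). Below is a minimal PyTorch sketch under those assumptions; the class name LoRMALinear, the initialization, and the identity-based rank inflation are illustrative choices based on the abstract, not necessarily the paper's exact formulation. The forward pass also illustrates the re-ordering idea: (I + BA)Wx is computed as Wx + B(A(Wx)), so no d x d intermediate matrix is ever materialized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRMALinear(nn.Module):
    """Sketch of a multiplicative low-rank adapter: W' = (I + B A) W.

    A (r x d_out) and B (d_out x r) are the trainable low-rank factors;
    the pretrained weight W stays frozen. B is zero-initialized so the
    adapted layer starts as an exact copy of the frozen one, mirroring
    LoRA's initialization.
    """
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained W
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_out = base.out_features
        self.A = nn.Parameter(torch.randn(rank, d_out) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.linear(x, self.base.weight)  # W x (frozen path)
        # Re-ordered update: (B A) W x computed as B (A (W x)),
        # so only rank-r intermediates are formed.
        h = h + (h @ self.A.T) @ self.B.T
        if self.base.bias is not None:
            h = h + self.base.bias
        return h

# Usage: wrap an existing linear layer; at initialization the output
# matches the frozen layer exactly because B is zero.
layer = LoRMALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(4, 16, 768))
```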

Harsh Bihany, Shubham Patel, Ashutosh Modi

Subjects: Computing Technology; Computer Technology

Harsh Bihany, Shubham Patel, Ashutosh Modi. LoRMA: Low-Rank Multiplicative Adaptation for LLMs [EB/OL]. (2025-06-09) [2025-06-28]. https://arxiv.org/abs/2506.07621
