
HOFT: Householder Orthogonal Fine-tuning

Source: arXiv
Abstract

Adaptation of foundation models using low-rank methods is a widespread approach. Another way to adapt these models is orthogonal fine-tuning, which generalizes well but is less time- and memory-efficient. In this work, we propose Householder Orthogonal Fine-tuning (HOFT), a novel orthogonal fine-tuning method that reduces this time and space complexity. We also explore theoretical properties of the orthogonal fine-tuning paradigm, and from this exploration we derive Scaled Householder Orthogonal Fine-tuning (SHOFT). Both HOFT and SHOFT are evaluated on downstream tasks, namely commonsense reasoning, machine translation, subject-driven generation and mathematical reasoning. Compared with state-of-the-art adaptation methods, HOFT and SHOFT achieve comparable or better results.

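The abstract gives no implementation details, but the method's name suggests parameterizing the orthogonal transform as a product of Householder reflections applied around a frozen pretrained layer. Below is a minimal PyTorch sketch under that assumption; the class name `HouseholderOrthogonalLinear`, the number of reflections, and the pairwise-cancelling initialization are illustrative guesses, not the paper's actual design.

```python
# Minimal sketch of Householder-based orthogonal fine-tuning, inferred from
# the abstract only; the paper's actual parameterization may differ.
# Each reflection H_i = I - 2 v_i v_i^T / ||v_i||^2 is orthogonal, and so is
# their product Q = H_k ... H_1, which rotates the frozen layer's output.
import torch
import torch.nn as nn


class HouseholderOrthogonalLinear(nn.Module):  # hypothetical name
    def __init__(self, frozen_linear: nn.Linear, num_reflections: int = 8):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad_(False)  # pretrained weights stay fixed
        d = frozen_linear.out_features
        # Initialize the trainable vectors in identical adjacent pairs so the
        # paired reflections cancel (H H = I) and training starts from the
        # identity transform. (An assumption; the paper may init differently.)
        assert num_reflections % 2 == 0
        base = torch.randn(num_reflections // 2, d)
        self.v = nn.Parameter(base.repeat_interleave(2, dim=0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.frozen(x)  # shape (..., d)
        # Apply H_i y = y - 2 (v_i . y) v_i / ||v_i||^2 for each reflection,
        # never materializing the d x d matrix: O(k * d) extra cost per token.
        for v in self.v:
            coeff = 2.0 / (v @ v)
            y = y - coeff * (y @ v).unsqueeze(-1) * v
        return y


# Usage: only the k Householder vectors receive gradients.
layer = HouseholderOrthogonalLinear(nn.Linear(64, 64), num_reflections=4)
out = layer(torch.randn(2, 64))
```

Applying each reflection as a rank-one update to the activations, rather than building and multiplying a full d x d orthogonal matrix, keeps the extra cost linear in the layer width, which is consistent with the abstract's claim of improved time and space complexity over prior orthogonal fine-tuning methods.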
Alejandro Moreno Arcas, Albert Sanchis, Jorge Civera, Alfons Juan

Subject: Computing Technology; Computer Technology

Alejandro Moreno Arcas, Albert Sanchis, Jorge Civera, Alfons Juan. HOFT: Householder Orthogonal Fine-tuning [EB/OL]. (2025-05-22) [2025-06-08]. https://arxiv.org/abs/2505.16531.
