C-LoRA: Contextual Low-Rank Adaptation for Uncertainty Estimation in Large Language Models
Low-Rank Adaptation (LoRA) offers a cost-effective solution for fine-tuning large language models (LLMs), but it often produces overconfident predictions in data-scarce few-shot settings. To address this issue, several classical statistical learning approaches have been repurposed for scalable uncertainty-aware LoRA fine-tuning. However, these approaches neglect how input characteristics affect the predictive uncertainty estimates. To address this limitation, we propose Contextual Low-Rank Adaptation (\textbf{C-LoRA}), a novel uncertainty-aware and parameter-efficient fine-tuning approach that introduces lightweight LoRA modules contextualized to each input data sample, dynamically adapting the uncertainty estimates. By incorporating data-driven contexts into the parameter posteriors, C-LoRA mitigates overfitting, achieves well-calibrated uncertainties, and yields robust predictions. Extensive experiments demonstrate that C-LoRA consistently outperforms state-of-the-art uncertainty-aware LoRA methods in both uncertainty quantification and model generalization. Ablation studies further confirm the critical role of our contextual modules in capturing sample-specific uncertainties. C-LoRA sets a new standard for robust, uncertainty-aware LLM fine-tuning in few-shot regimes.
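To make the core mechanism concrete, below is a minimal PyTorch sketch of an input-contextualized LoRA layer: a frozen base linear layer plus a low-rank update whose per-sample gate is sampled from a mean and variance produced by a small context network, so repeated stochastic passes give input-dependent uncertainty. This is an illustration of the idea described in the abstract, not the authors' exact parameterization; all names here (ContextualLoRALinear, ctx, gate) and design choices (mean-pooling over tokens, a reparameterized Gaussian gate on the rank dimensions) are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class ContextualLoRALinear(nn.Module):
    """Hypothetical sketch of a contextual LoRA layer (not the paper's exact design).

    A frozen pretrained linear layer is augmented with a low-rank update
    B @ A whose per-sample magnitude is modulated by a lightweight context
    network. The context network outputs a mean and log-variance, and the
    gate is drawn via the reparameterization trick, so multiple stochastic
    forward passes yield sample-specific predictive uncertainty.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, ctx_dim: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen

        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # update starts at zero

        # Context module (assumed form): maps a pooled summary of the input
        # to a per-sample mean and log-variance over the rank dimensions.
        self.ctx = nn.Sequential(
            nn.Linear(d_in, ctx_dim), nn.Tanh(),
            nn.Linear(ctx_dim, 2 * rank),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); pool over tokens for a sample-level context.
        pooled = x.mean(dim=1)                       # (batch, d_in)
        mu, log_var = self.ctx(pooled).chunk(2, -1)  # (batch, rank) each

        # Reparameterized sample of the contextual gate.
        gate = mu + torch.randn_like(mu) * (0.5 * log_var).exp()

        low_rank = (x @ self.A.T) * gate.unsqueeze(1)   # (batch, seq, rank)
        return self.base(x) + low_rank @ self.B.T       # (batch, seq, d_out)
```

Under these assumptions, calibration would come from averaging predictions over several stochastic forward passes per input, with the spread across passes serving as the sample-specific uncertainty estimate.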
Amir Hossein Rahmati, Sanket Jantre, Weifeng Zhang, Yucheng Wang, Byung-Jun Yoon, Nathan M. Urban, Xiaoning Qian
Computing Technology; Computer Technology
Amir Hossein Rahmati, Sanket Jantre, Weifeng Zhang, Yucheng Wang, Byung-Jun Yoon, Nathan M. Urban, Xiaoning Qian. C-LoRA: Contextual Low-Rank Adaptation for Uncertainty Estimation in Large Language Models [EB/OL]. (2025-05-23) [2025-06-25]. https://arxiv.org/abs/2505.17773