
Quantum Large Language Model Fine-Tuning

Source: arXiv
English Abstract

We introduce a hybrid quantum-classical deep learning architecture for large language model fine-tuning. The classical portion of the architecture is a sentence transformer that is powerful enough to achieve significant accuracy on complex tasks such as sentiment prediction. The quantum portion of the architecture consists of parameterized quantum circuits that utilize long-range connections between qubits. We analyze the performance of the hybrid models across various hyperparameter settings, including the number of qubits, the depth of the quantum circuits, the learning rate, and the number of re-uploading steps. Based on a screening study of main effects, we show an overall improvement in prediction accuracy over a comparable classical baseline, with a trend of increasing accuracy with the number of qubits. Within the set of hyperparameters probed in this study, we observe improvements of up to $3.14\%$ in accuracy over classical architectures of comparable model size. We demonstrate the contribution of each module in our architecture through ablation studies. Our studies use finite shot counts and include simulations with noisy quantum gates.
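To make the described architecture concrete, here is a minimal sketch of a hybrid fine-tuning head of this kind, written with PennyLane and PyTorch. It is not the authors' implementation: the qubit count, circuit depth, number of re-uploading steps, the entangling template, and the sentence-embedding dimension are all illustrative assumptions standing in for the hyperparameters screened in the paper.

```python
# Illustrative sketch only (not the paper's code): a classical sentence
# embedding is compressed to a few features, re-uploaded into a
# parameterized quantum circuit several times, and the measured
# expectation values feed a small classical classifier head.
import torch
import pennylane as qml

n_qubits = 4       # hypothetical; the paper sweeps the number of qubits
n_layers = 2       # variational depth per re-uploading step (assumed)
n_reuploads = 3    # number of data re-uploading steps (assumed)

# Finite shots, as in the paper's simulations; gate noise would require a
# noisy/mixed-state device instead of the ideal simulator used here.
dev = qml.device("default.qubit", wires=n_qubits, shots=1024)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    for r in range(n_reuploads):
        # Re-encode the classical features before every variational block.
        qml.AngleEmbedding(inputs, wires=range(n_qubits))
        # StronglyEntanglingLayers applies ranged entangling gates, used
        # here as a stand-in for the long-range qubit connections.
        qml.StronglyEntanglingLayers(weights[r], wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_reuploads, n_layers, n_qubits, 3)}
quantum_layer = qml.qnn.TorchLayer(circuit, weight_shapes)

class HybridHead(torch.nn.Module):
    """Classical projection -> quantum circuit -> classical logits."""
    def __init__(self, embed_dim=384, n_classes=2):
        super().__init__()
        self.pre = torch.nn.Linear(embed_dim, n_qubits)   # compress embedding
        self.q = quantum_layer
        self.post = torch.nn.Linear(n_qubits, n_classes)  # e.g. sentiment

    def forward(self, sentence_embedding):
        x = torch.tanh(self.pre(sentence_embedding))      # bound rotation angles
        x = self.q(x)
        return self.post(x)

# Usage: embeddings from a sentence transformer (random tensors here).
head = HybridHead()
logits = head(torch.randn(8, 384))   # batch of 8 sentence embeddings
print(logits.shape)                  # torch.Size([8, 2])
```

In a setup like this, the whole module trains with a standard PyTorch optimizer and cross-entropy loss; gradients flow through the quantum layer via the parameter-shift rule, which remains applicable under finite shot counts.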

Sang Hyub Kim, Jonathan Mei, Claudio Girotto, Masako Yamada, Martin Roetteler

Subject: Computing Technology; Computer Technology

Sang Hyub Kim, Jonathan Mei, Claudio Girotto, Masako Yamada, Martin Roetteler. Quantum Large Language Model Fine-Tuning [EB/OL]. (2025-04-11) [2025-05-07]. https://arxiv.org/abs/2504.08732.
