
Continued domain-specific pre-training of protein language models for pMHC-I binding prediction

Source: arXiv

Abstract

Predicting peptide--major histocompatibility complex I (pMHC-I) binding affinity remains challenging due to extreme allelic diversity ($\sim$30,000 HLA alleles), severe data scarcity for most alleles, and noisy experimental measurements. Current methods particularly struggle with underrepresented alleles and quantitative binding prediction. We test whether domain-specific continued pre-training of protein language models is beneficial for their application to pMHC-I binding affinity prediction. Starting from ESM Cambrian (300M parameters), we perform masked-language modeling (MLM)-based continued pre-training on HLA-associated peptides (epitopes), testing two input formats: epitope sequences alone versus epitopes concatenated with HLA heavy chain sequences. We then fine-tune for functional IC$_{50}$ binding affinity prediction using only high-quality quantitative data, avoiding mass spectrometry biases that are inherited by existing methods.
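The two pre-training input formats described above can be sketched as a small data-preparation step: tokenize each epitope per residue, optionally concatenate the HLA heavy chain sequence, and randomly mask residues for the MLM objective. This is a minimal illustration, not the paper's implementation — the separator token, the 15% mask rate, and the toy HLA sequence are all assumptions.

```python
import random
from typing import List, Optional

# Assumed special tokens; the actual ESM Cambrian vocabulary may differ.
MASK, SEP = "<mask>", "<sep>"

def build_input(epitope: str, hla_heavy_chain: Optional[str] = None) -> List[str]:
    """Tokenize per residue; optionally append the HLA heavy chain (format 2)."""
    tokens = list(epitope)
    if hla_heavy_chain is not None:
        tokens += [SEP] + list(hla_heavy_chain)
    return tokens

def mask_tokens(tokens: List[str], rate: float = 0.15, seed: int = 0):
    """Randomly replace a fraction of residues with the mask token (MLM).

    Returns the masked sequence plus a dict mapping masked positions to
    their original residues (the prediction targets).
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if tok != SEP and rng.random() < rate:
            targets[i] = tok
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

# Format 1: epitope sequence alone.
x1 = build_input("SIINFEKL")
# Format 2: epitope concatenated with a (truncated, illustrative) HLA heavy chain.
x2 = build_input("SIINFEKL", "GSHSMRYFFTSVSRPGRGEPRFIAVGYVDDTQFVRFDSDAASQR")
masked, targets = mask_tokens(x2)
```

In a real run these token lists would be mapped to vocabulary IDs and fed to the language model, with the cross-entropy loss computed only at the masked positions in `targets`.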

Sergio E. Mares, Ariel Espinoza Weinberger, Nilah M. Ioannidis

Subjects: biological research methods, biological research techniques, molecular biology

Sergio E. Mares, Ariel Espinoza Weinberger, Nilah M. Ioannidis. Continued domain-specific pre-training of protein language models for pMHC-I binding prediction [EB/OL]. (2025-07-16) [2025-08-10]. https://arxiv.org/abs/2507.13077.