Leveraging Large Language Models for enzymatic reaction prediction and characterization
Predicting enzymatic reactions is crucial for applications in biocatalysis, metabolic engineering, and drug discovery, yet it remains a complex and resource-intensive task. Large Language Models (LLMs) have recently demonstrated remarkable success in various scientific domains, e.g., through their ability to generalize knowledge, reason over complex structures, and leverage in-context learning strategies. In this study, we systematically evaluate the capability of LLMs, particularly the Llama-3.1 family (8B and 70B), across three core biochemical tasks: Enzyme Commission number prediction, forward synthesis, and retrosynthesis. We compare single-task and multitask learning strategies, employing parameter-efficient fine-tuning via LoRA adapters. Additionally, we assess performance across different data regimes to explore their adaptability in low-data settings. Our results demonstrate that fine-tuned LLMs capture biochemical knowledge, with multitask learning enhancing forward- and retrosynthesis predictions by leveraging shared enzymatic information. We also identify key limitations, for example challenges in hierarchical EC classification schemes, highlighting areas for further improvement in LLM-driven biochemical modeling.
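The hierarchical structure of Enzyme Commission numbers (class.subclass.sub-subclass.serial) is what makes EC prediction harder than flat classification: a prediction can be partially correct at the top levels yet wrong at the finest one. As a minimal illustration of how such partial correctness can be scored (the function name and scoring scheme here are illustrative, not taken from the paper):

```python
def ec_level_match(pred: str, true: str) -> int:
    """Count how many leading EC levels (0-4) the predicted and true
    EC numbers agree on, e.g. "1.1.1.1" vs "1.1.3.4" agree on 2 levels."""
    matched = 0
    for p, t in zip(pred.split("."), true.split(".")):
        if p != t:
            break
        matched += 1
    return matched

# Prediction correct down to the sub-subclass, wrong only in the serial number:
print(ec_level_match("1.1.1.1", "1.1.1.2"))  # → 3
```

Averaging such per-level matches over a test set gives a hierarchy-aware accuracy, which is more informative than exact-match accuracy when a model captures the coarse enzyme class but misses the specific serial number.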
Lorenzo Di Fruscia, Jana Marie Weber
Biochemistry; bioscience research methods; bioscience research techniques
Lorenzo Di Fruscia, Jana Marie Weber. Leveraging Large Language Models for enzymatic reaction prediction and characterization [EB/OL]. (2025-05-08) [2025-06-06]. https://arxiv.org/abs/2505.05616.