ModuLM: Enabling Modular and Multimodal Molecular Relational Learning with Large Language Models
Molecular Relational Learning (MRL) aims to understand interactions between molecular pairs and plays a critical role in advancing biochemical research. With the recent development of large language models (LLMs), a growing number of studies have explored the integration of MRL with LLMs and achieved promising results. However, the increasing availability of diverse LLMs and molecular structure encoders has significantly expanded the model space, presenting major challenges for benchmarking. Currently, no LLM framework supports both flexible molecular input formats and dynamic architectural switching. To address these challenges, reduce redundant coding, and ensure fair model comparison, we propose ModuLM, a framework designed to support flexible LLM-based model construction and diverse molecular representations. ModuLM provides a rich suite of modular components, including 8 types of 2D molecular graph encoders, 11 types of 3D molecular conformation encoders, 7 types of interaction layers, and 7 mainstream LLM backbones. Owing to its highly flexible model assembly mechanism, ModuLM enables the dynamic construction of over 50,000 distinct model configurations. In addition, we provide comprehensive experimental results demonstrating the effectiveness of ModuLM in supporting LLM-based MRL tasks.
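The combinatorial nature of this assembly mechanism can be illustrated with a minimal sketch. The component names and the build_config helper below are hypothetical placeholders, not ModuLM's actual API; the sketch only shows how composing one choice per module class spans a large configuration space (the base product here is 4,312, which optional components and multimodal combinations in the real framework expand to the reported 50,000+).

```python
# Hypothetical sketch of modular model assembly in the spirit of ModuLM.
# All component names and build_config() are illustrative assumptions,
# not the framework's real interface.
from itertools import product

ENCODERS_2D = [f"gnn2d_{i}" for i in range(8)]      # 8 2D molecular graph encoders
ENCODERS_3D = [f"conf3d_{i}" for i in range(11)]    # 11 3D conformation encoders
INTERACTIONS = [f"interact_{i}" for i in range(7)]  # 7 interaction layers
LLM_BACKBONES = [f"llm_{i}" for i in range(7)]      # 7 LLM backbones

def build_config(enc2d, enc3d, interaction, llm):
    """Assemble one model configuration from one choice per module class."""
    return {"2d_encoder": enc2d, "3d_encoder": enc3d,
            "interaction": interaction, "llm": llm}

# Enumerate the base combinatorial space: 8 * 11 * 7 * 7 = 4,312 configurations.
configs = [build_config(*choice) for choice in
           product(ENCODERS_2D, ENCODERS_3D, INTERACTIONS, LLM_BACKBONES)]
print(len(configs))  # 4312
```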
Zhuo Chen, Yizhen Zheng, Huan Yee Koh, Hongxin Xiang, Linjiang Chen, Wenjie Du, Yang Wang
Biological science research methods and techniques; computing and computer technology
Zhuo Chen, Yizhen Zheng, Huan Yee Koh, Hongxin Xiang, Linjiang Chen, Wenjie Du, Yang Wang. ModuLM: Enabling Modular and Multimodal Molecular Relational Learning with Large Language Models [EB/OL]. (2025-06-01) [2025-07-16]. https://arxiv.org/abs/2506.00880.