
On the synchronization between Hugging Face pre-trained language models and their upstream GitHub repository

Source: arXiv
English Abstract

Pre-trained language models (PTLMs) have advanced natural language processing (NLP), enabling progress in tasks such as text generation and translation. As in software package management, PTLMs are trained using code and environment scripts in upstream repositories (e.g., GitHub, GH) and distributed as variants via downstream platforms such as Hugging Face (HF). Coordinating development between GH and HF poses challenges such as misaligned release timelines, inconsistent versioning, and limited reuse of PTLM variants. We conducted a mixed-method study of 325 PTLM families (904 HF variants) to examine how commit activities are coordinated. Our analysis reveals that GH contributors typically make changes related to specifying the model version, improving code quality, optimizing performance, and managing dependencies within the training scripts, whereas HF contributors make changes related to improving model descriptions, handling datasets, and setting up model inference. Furthermore, to understand how commit activities are synchronized between GH and HF, we examined three dimensions of these activities -- lag (delay), type of synchronization, and intensity -- which together yielded eight distinct synchronization patterns. The prevalence of partially synchronized patterns, such as Disperse synchronization and Sparse synchronization, reveals structural disconnects in current cross-platform release practices. These patterns often result in isolated changes, where improvements or fixes made on one platform are never replicated on the other, and in some cases indicate abandonment of one repository in favor of the other. Such fragmentation risks exposing end users to incomplete, outdated, or behaviorally inconsistent models. Recognizing these synchronization patterns is therefore critical for improving oversight and traceability in PTLM release workflows.
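The abstract names lag (delay) as one of the three synchronization dimensions but does not define how it is measured. The following is a minimal illustrative sketch, not the paper's actual metric: it assumes one plausible operationalization that pairs each GitHub commit with the earliest Hugging Face commit that follows it and reports the median delay in days. The function name, the pairing rule, and the toy timestamps are all assumptions introduced here for illustration.

```python
from datetime import datetime
from statistics import median

def commit_lag_days(gh_commits, hf_commits):
    """Hypothetical lag measure: median delay (in days) from each GitHub
    commit to the earliest Hugging Face commit at or after it.

    gh_commits, hf_commits: lists of datetime timestamps, sorted ascending.
    This pairing rule is an assumption, not the paper's defined metric.
    """
    lags = []
    for gh_ts in gh_commits:
        # First HF commit that occurs at or after this GH commit, if any.
        later = [hf_ts for hf_ts in hf_commits if hf_ts >= gh_ts]
        if later:
            lags.append((later[0] - gh_ts).days)
    return median(lags) if lags else None

# Toy example: GH commits land first; HF mirrors them days later.
gh = [datetime(2024, 1, 1), datetime(2024, 2, 1)]
hf = [datetime(2024, 1, 10), datetime(2024, 2, 3)]
print(commit_lag_days(gh, hf))  # -> 5.5 (median of 9-day and 2-day lags)
```

Under this sketch, a large median lag would suggest Sparse- or Disperse-style patterns in which one platform trails the other; a result of None would indicate isolated changes that were never mirrored downstream.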

Ajibode Adekunle, Abdul Ali Bangash, Bram Adams, Ahmed E. Hassan

Subject: Computing Technology, Computer Technology

Ajibode Adekunle, Abdul Ali Bangash, Bram Adams, Ahmed E. Hassan. On the synchronization between Hugging Face pre-trained language models and their upstream GitHub repository [EB/OL]. (2025-08-13) [2025-08-24]. https://arxiv.org/abs/2508.10157.