国家预印本平台 (National Preprint Platform)

Incentivizing Permissionless Distributed Learning of LLMs

Source: arXiv
English Abstract

We describe an incentive system for distributed deep learning of foundation models in which peers are rewarded for their contributions. The incentive system, Gauntlet, has been deployed on the Bittensor blockchain and used to train a 1.2B-parameter LLM with completely permissionless contributions of pseudo-gradients: there is no control over which users can register or what hardware they use. Gauntlet can be applied to any synchronous distributed training scheme that relies on aggregating updates or pseudo-gradients. We rely on a two-stage mechanism: fast filtering of peers by uptime, reliability, and synchronization, combined with a core component that estimates the loss before and after each individual pseudo-gradient contribution. We use an OpenSkill rating system to track the competitiveness of pseudo-gradient scores over time. Finally, we introduce a novel mechanism to ensure that peers on the network perform unique computations. Our live 1.2B run, which has paid out real-valued tokens to participants based on the value of their contributions, yielded a competitive (on a per-iteration basis) 1.2B model that demonstrates the utility of our incentive system.
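The core scoring component rewards a peer by the loss its pseudo-gradient removes: evaluate the model's loss before and after applying the contribution, and credit the difference. A minimal sketch of that before/after comparison, with a toy quadratic loss standing in for an LLM evaluation batch (all function and variable names here are illustrative, not the paper's API):

```python
def loss(params, data):
    # Toy quadratic loss standing in for evaluating the model on a batch.
    return sum((p - d) ** 2 for p, d in zip(params, data))

def score_contribution(params, pseudo_grad, data, lr=0.1):
    """Credit a peer by the loss reduction its pseudo-gradient achieves."""
    before = loss(params, data)
    # Apply the peer's pseudo-gradient as a tentative update.
    updated = [p - lr * g for p, g in zip(params, pseudo_grad)]
    after = loss(updated, data)
    return before - after  # positive = helpful contribution

params = [0.0, 0.0]
data = [1.0, 1.0]
helpful = [-2.0, -2.0]  # descent direction: moves params toward the data
harmful = [2.0, 2.0]    # ascent direction: moves params away

print(score_contribution(params, helpful, data))  # positive score
print(score_contribution(params, harmful, data))  # negative score
```

In the actual system these per-contribution scores are then fed into the OpenSkill rating system to track each peer's competitiveness over time, which smooths out the noise of any single loss estimate.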

Joel Lidin, Amir Sarfi, Evangelos Pappas, Samuel Dare, Eugene Belilovsky, Jacob Steeves

Subject: Computing Technology; Computer Technology

Joel Lidin, Amir Sarfi, Evangelos Pappas, Samuel Dare, Eugene Belilovsky, Jacob Steeves. Incentivizing Permissionless Distributed Learning of LLMs [EB/OL]. (2025-05-27) [2025-07-21]. https://arxiv.org/abs/2505.21684