Incentivised Orchestrated Training Architecture (IOTA): A Technical Primer for Release
In August 2024, Bittensor's Subnet 9 (SN9) demonstrated that a distributed network of incentivized, permissionless actors could each pretrain large language models (LLMs) ranging from 700 million to 14 billion parameters while surpassing established baselines. Although that work validated blockchain-based decentralized pretraining as viable, it revealed two core issues: (i) every miner had to fit an entire model locally, and (ii) "winner-takes-all" rewards encouraged model hoarding. Here we introduce IOTA (Incentivised Orchestrated Training Architecture), an architecture that addresses these limitations by transforming SN9's previously isolated competitors into a single cooperating unit that can scale arbitrarily while still rewarding each contributor fairly. Key preliminary results:
(1) Data- and pipeline-parallel SWARM architecture - An orchestrator distributes model layers across heterogeneous miners and streams activations between them, so model size scales with the number of participants rather than being constrained by the VRAM of a single machine.
(2) Granular, continuous incentives - Validators measure each miner's contribution and allocate token emissions proportionally.
(3) Activation compression - Model bottlenecks cut the communication bandwidth of activations by up to 128x, substantially improving training speed (see the illustrative sketch below).
(4) Butterfly All-Reduce - Miners average disjoint parameter slices in O(1) bandwidth, offering linear scalability, redundancy, and built-in collusion detection (also sketched below).
(5) CLASP (Contribution Loss Assessment via Sampling of Pathways) - A fair attribution scheme that assigns credit to miners in proportion to their marginal utility and detects exploits, even when contributions are interdependent across the pipeline.
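To make the activation-bottleneck idea in (3) concrete, here is a minimal, hypothetical sketch rather than the paper's implementation: a learned low-rank projection shrinks the hidden dimension by 128x before activations leave one miner, and the next miner in the pipeline expands them back. The module name, the hidden size of 4096, and the use of plain PyTorch linear layers are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed names and shapes; not the paper's code):
# a bottleneck projection reduces what crosses the network between miners.
import torch
import torch.nn as nn

HIDDEN_DIM = 4096                     # assumed transformer hidden size
BOTTLENECK_DIM = HIDDEN_DIM // 128    # 128x narrower channel (32 dims)

class ActivationBottleneck(nn.Module):
    """Hypothetical compressor trained jointly with the model layers."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim, bias=False)  # sender side
        self.up = nn.Linear(bottleneck_dim, hidden_dim, bias=False)    # receiver side

    def compress(self, activations: torch.Tensor) -> torch.Tensor:
        # Only this low-dimensional tensor needs to be streamed to the next miner.
        return self.down(activations)

    def decompress(self, compressed: torch.Tensor) -> torch.Tensor:
        # The receiving miner restores the full hidden dimension before its layers run.
        return self.up(compressed)

if __name__ == "__main__":
    bottleneck = ActivationBottleneck(HIDDEN_DIM, BOTTLENECK_DIM)
    acts = torch.randn(8, 512, HIDDEN_DIM)        # (batch, sequence, hidden) on miner k
    wire = bottleneck.compress(acts)              # payload that actually crosses the network
    restored = bottleneck.decompress(wire)        # reconstructed on miner k+1
    print(f"payload shrunk {acts.numel() / wire.numel():.0f}x: "
          f"{tuple(acts.shape)} -> {tuple(wire.shape)}")
```

With these assumed sizes the wire payload is exactly 128x smaller than the raw activations, which is the kind of bandwidth reduction the abstract cites as the upper end.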
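Likewise, a toy simulation of the Butterfly All-Reduce step in (4) is sketched below under the assumption that each miner is assigned one disjoint parameter slice to average; the function and variable names are hypothetical, and the real system's networking, signatures, and validator checks are omitted.

```python
# Toy simulation of a butterfly-style all-reduce (illustrative only): miner i
# is responsible for averaging slice i gathered from every peer, after which
# the averaged slices are reassembled into the synchronized parameters.
import numpy as np

def butterfly_all_reduce(params_per_miner: list[np.ndarray]) -> list[np.ndarray]:
    """Average parameters across miners via disjoint slices.

    Each miner sends n-1 slices of size P/n and receives the same amount, so
    per-miner traffic stays on the order of one model copy regardless of how
    many miners participate.
    """
    n = len(params_per_miner)
    # Every miner cuts its local parameters into n disjoint slices.
    slices = [np.array_split(p, n) for p in params_per_miner]

    # Reduce step: miner i averages slice i collected from all peers.
    # A validator could recompute a randomly chosen slice and compare it with
    # miner i's result to flag collusion or corrupted reductions.
    reduced = [np.mean([slices[m][i] for m in range(n)], axis=0) for i in range(n)]

    # All-gather step: every miner reassembles the model from the averaged slices.
    merged = np.concatenate(reduced)
    return [merged.copy() for _ in range(n)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local = [rng.normal(size=16) for _ in range(4)]   # 4 miners, toy parameter vectors
    synced = butterfly_all_reduce(local)
    assert np.allclose(synced[0], np.mean(local, axis=0))
```

Because each miner only exchanges slices of size P/n with its peers, total per-miner traffic is roughly constant in the number of miners, which is what the abstract describes as O(1) bandwidth; a mismatch between a miner's reduced slice and an independent recomputation is the hook for collusion detection.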
Felix Quinque, Alan Aboudib, Szymon Fonau, Rodrigo Lopez Portillo Alcocer, Brian McCrindle, Steffen Cruz
Computing technology; computer technology
Felix Quinque, Alan Aboudib, Szymon Fonau, Rodrigo Lopez Portillo Alcocer, Brian McCrindle, Steffen Cruz. Incentivised Orchestrated Training Architecture (IOTA): A Technical Primer for Release [EB/OL]. (2025-07-16) [2025-08-10]. https://arxiv.org/abs/2507.17766.