
PaPaformer: Language Model from Pre-trained Parallel Paths

Source: arXiv
Abstract

The training of modern large language models requires an increasing amount of computation power and time. Even smaller variants, such as small language models (SLMs), take several days to train in the best-case scenarios, often requiring multiple GPUs. This paper explores methods to train and evaluate decoder-only transformer-based language models in hours instead of days or weeks. We introduce PaPaformer, a decoder-only transformer architecture variant whose lower-dimensional parallel paths are combined into a larger model. The paper shows that these lower-dimensional paths can be trained individually with different types of training data and then combined into one larger model. This method gives the option to reduce the total number of model parameters and the training time while increasing performance. Moreover, the use of the parallel path structure opens interesting possibilities to customize paths to accommodate specific task requirements.
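To make the parallel-path idea concrete, the following is a minimal, hypothetical sketch in PyTorch: several lower-dimensional decoder-only "paths" process the same token stream independently, and their outputs are combined (here by concatenation plus a linear projection) into one wider model. The class names, dimensions, and combination rule are illustrative assumptions, not the paper's exact PaPaformer architecture.

import torch
import torch.nn as nn


class DecoderPath(nn.Module):
    """One lower-dimensional decoder-only path (could be pre-trained on its own data)."""

    def __init__(self, vocab_size: int, d_path: int, n_layers: int, n_heads: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_path)
        layer = nn.TransformerEncoderLayer(
            d_model=d_path, nhead=n_heads, dim_feedforward=4 * d_path,
            batch_first=True, norm_first=True,
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                                   # (B, T, d_path)
        seq_len = tokens.size(1)
        # Causal mask turns the encoder stack into a decoder-only (autoregressive) path.
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        return self.blocks(x, mask=causal)


class PaPaformerSketch(nn.Module):
    """Combine several independently trained paths into one larger language model."""

    def __init__(self, paths: list, vocab_size: int, d_path: int):
        super().__init__()
        self.paths = nn.ModuleList(paths)
        d_model = d_path * len(paths)                            # widened by concatenation
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Each path sees the same tokens; features are concatenated along the channel axis.
        features = torch.cat([path(tokens) for path in self.paths], dim=-1)
        return self.lm_head(features)                            # (B, T, vocab_size)


# Usage: two low-dimensional paths, each of which could have been pre-trained
# separately (e.g. on different data mixes) before being combined here.
paths = [DecoderPath(vocab_size=32000, d_path=128, n_layers=4, n_heads=4) for _ in range(2)]
model = PaPaformerSketch(paths, vocab_size=32000, d_path=128)
logits = model(torch.randint(0, 32000, (1, 16)))                 # (1, 16, 32000)

The combination rule is the main design choice this sketch leaves open: the abstract only states that lower-dimensional paths are merged into one larger model, so concatenation is used here purely as one plausible illustration.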

Joonas Tapaninaho, Mourad Oussala

Subject: Computing Technology, Computer Technology

Joonas Tapaninaho, Mourad Oussala. PaPaformer: Language Model from Pre-trained Parallel Paths [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2508.00544.
