
Learning without training: The implicit dynamics of in-context learning

Source: arXiv
English Abstract

One of the most striking features of Large Language Models (LLMs) is their ability to learn in context. Namely, at inference time, an LLM can learn new patterns without any additional weight updates when these patterns are presented as examples in the prompt, even if they were not seen during training. The mechanisms through which this happens are still largely unknown. In this work, we show that stacking a self-attention layer with an MLP allows the transformer block to implicitly modify the weights of the MLP layer according to the context. We argue through theory and experiments that this simple mechanism may be the reason why LLMs can learn in context and not only during training. Specifically, we show, under mild simplifying assumptions, how a transformer block implicitly transforms a context into a low-rank weight update of the MLP layer.
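
Below is a minimal numerical sketch (not the authors' code) of the abstract's central claim: the effect of the context on the attention output can be absorbed into a low-rank update of the MLP weights. All names (W, A_ctx, A_noctx, dW) and the particular rank-1 construction are illustrative assumptions consistent with the abstract; the paper derives the exact form of the update under its own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                            # hidden width (illustrative)

W = rng.normal(size=(d, d))      # first linear layer of the MLP
A_noctx = rng.normal(size=d)     # attention output for the query alone
A_ctx = rng.normal(size=d)       # attention output with context tokens in the prompt

# The context shifts the attention output by delta_A.
delta_A = A_ctx - A_noctx

# Rank-1 weight update that absorbs the context shift:
# dW @ A_noctx = W @ delta_A, because A_noctx^T A_noctx / ||A_noctx||^2 = 1.
dW = np.outer(W @ delta_A, A_noctx) / np.dot(A_noctx, A_noctx)

# Processing the query *with* context through the original weights matches
# processing the query *without* context through the updated weights.
print(np.allclose(W @ A_ctx, (W + dW) @ A_noctx))   # True
print(np.linalg.matrix_rank(dW))                     # 1 (low-rank update)
```

The identity holds because dW @ A_noctx collapses to W @ delta_A, so (W + dW) @ A_noctx = W @ A_ctx; this is one simple way in which "context as a weight update" can be realized without any training step.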

Benoit Dherin, Michael Munn, Hanna Mazzawi, Michael Wunder, Javier Gonzalvo

Subject: Computing technology, computer technology

Benoit Dherin, Michael Munn, Hanna Mazzawi, Michael Wunder, Javier Gonzalvo. Learning without training: The implicit dynamics of in-context learning [EB/OL]. (2025-07-21) [2025-08-10]. https://arxiv.org/abs/2507.16003