GLU Attention Improves Transformer
Gated Linear Units (GLU) have shown great potential for enhancing neural network performance. In this paper, I introduce a novel attention mechanism, GLU Attention, which injects nonlinearity into the values of attention. My experiments demonstrate that GLU Attention improves both model performance and convergence speed across text and vision modalities, with zero additional parameters and negligible computational cost. GLU Attention is lightweight and integrates seamlessly with other techniques such as Flash Attention, Rotary Position Embedding (RoPE), and Multi-Head Attention (MHA) variants such as Grouped-Query Attention (GQA). This project is open-sourced on GitHub.
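The abstract does not spell out how the GLU is applied to the attention values, so the following is only a minimal, hypothetical PyTorch sketch of one parameter-free way to do it: the value vectors are split channel-wise and one half gates the other via a sigmoid (torch.nn.functional.glu). The function name glu_attention, the tensor shapes, and the splitting choice are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of GLU-gated attention values (not the author's exact code).
import torch
import torch.nn.functional as F

def glu_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention with a parameter-free GLU on the values.

    Shapes (assumed): q, k are (batch, heads, seq, d_head); v is
    (batch, heads, seq, d_v) with d_v even. F.glu splits v into halves
    (a, b) along the last dimension and returns a * sigmoid(b), which
    adds nonlinearity without introducing any learnable parameters.
    """
    v = F.glu(v, dim=-1)  # (batch, heads, seq, d_v // 2)
    scale = q.size(-1) ** -0.5
    scores = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return scores @ v

# Usage with random tensors: here the values carry twice the head width,
# so the GLU halves them back to d_head before they are mixed.
q = torch.randn(2, 4, 16, 32)
k = torch.randn(2, 4, 16, 32)
v = torch.randn(2, 4, 16, 64)
print(glu_attention(q, k, v).shape)  # torch.Size([2, 4, 16, 32])
```

In a full model, the surrounding value and output projections would be sized so that the overall parameter count stays unchanged; that sizing is an assumption here, since the paper's exact configuration is not given in the abstract.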
Zehao Wang
Computing Technology, Computer Technology
Zehao Wang. GLU Attention Improve Transformer [EB/OL]. (2025-07-06) [2025-07-21]. https://arxiv.org/abs/2507.00022.