Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
We introduce softpick, a rectified, not-sum-to-one, drop-in replacement for softmax in transformer attention mechanisms that eliminates attention sink and massive activations. Our experiments with 340M-parameter models demonstrate that softpick maintains performance parity with softmax on standard benchmarks while achieving a 0% sink rate. The softpick transformer produces hidden states with significantly lower kurtosis (340 vs. 33,510) and creates sparse attention maps (46.97% sparsity). Models using softpick consistently outperform their softmax counterparts when quantized, with particularly pronounced advantages at lower bit precisions. Our analysis and discussion show how softpick may open new possibilities for quantization, low-precision training, sparsity optimization, pruning, and interpretability. Our code is available at https://github.com/zaydzuhri/softpick-attention.
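The exact formulation is given in the paper and the reference implementation linked above; as a rough illustration of what a rectified, not-sum-to-one replacement for softmax can look like, the PyTorch sketch below rectifies shifted exponentials with ReLU and normalizes by the sum of their absolute values. The function signature, epsilon handling, and numerical-stability shift here are assumptions made for this sketch, not taken verbatim from the repository.

```python
import torch

def softpick(x: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative sketch of a rectified, not-sum-to-one softmax variant."""
    # Subtract the row max so exp() stays in a numerically safe range,
    # mirroring the standard softmax stability trick.
    x_max = x.amax(dim=dim, keepdim=True)
    shifted = torch.exp(x - x_max) - torch.exp(-x_max)  # proportional to (e^x - 1)
    # ReLU zeroes out scores whose exponential is at or below 1, so rows
    # need not sum to one and the resulting attention map can be sparse.
    num = torch.relu(shifted)
    den = shifted.abs().sum(dim=dim, keepdim=True)
    return num / (den + eps)

# Example: low scores receive exactly zero weight, unlike softmax.
scores = torch.tensor([[2.0, 0.5, -1.0, -3.0]])
print(softpick(scores))           # sparse, does not sum to one
print(torch.softmax(scores, -1))  # dense, sums to one
```

Because zero-weight entries are exact zeros rather than small positive values, no token is forced to absorb leftover probability mass, which is how this family of functions avoids an attention sink.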
Zayd M. K. Zuhri, Erland Hilman Fuadi, Alham Fikri Aji
Computing Technology, Computer Technology
Zayd M. K. Zuhri, Erland Hilman Fuadi, Alham Fikri Aji. Softpick: No Attention Sink, No Massive Activations with Rectified Softmax [EB/OL]. (2025-04-29) [2025-05-25]. https://arxiv.org/abs/2504.20966.