
SG-Blend: Learning an Interpolation Between Improved Swish and GELU for Robust Neural Representations

Source: arXiv
English Abstract

The design of activation functions remains a pivotal component in optimizing deep neural networks. While prevailing choices like Swish and GELU demonstrate considerable efficacy, they often exhibit domain-specific optima. This work introduces SG-Blend, a novel activation function that blends our proposed SSwish, a first-order symmetric variant of Swish, and the established GELU through dynamic interpolation. By adaptively blending these constituent functions via learnable parameters, SG-Blend aims to harness their complementary strengths: SSwish's controlled non-monotonicity and symmetry, and GELU's smooth, probabilistic profile, to achieve a more universally robust balance between model expressivity and gradient stability. We conduct comprehensive empirical evaluations across diverse modalities and architectures, showing performance improvements across all considered natural language and computer vision tasks and models. These results, achieved with negligible computational overhead, underscore SG-Blend's potential as a versatile, drop-in replacement that consistently outperforms strong contemporary baselines. The code is available at https://anonymous.4open.science/r/SGBlend-6CBC.
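The blending scheme described in the abstract can be sketched as a convex combination of the two constituent activations with a learnable mixing coefficient. Note the caveats: the abstract does not give the exact definition of SSwish (only that it is a "first-order symmetric variant of Swish"), so plain Swish is used as a stand-in below, and the scalar `alpha` stands in for what would be a trainable parameter in the actual network. This is a minimal illustrative sketch, not the authors' implementation (which is linked above).

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    # Stand-in for SSwish: the abstract only describes SSwish as a
    # "first-order symmetric variant of Swish" without giving its formula,
    # so the standard Swish x * sigmoid(beta * x) is used here.
    return x / (1.0 + math.exp(-beta * x))

def gelu(x: float) -> float:
    # Exact GELU via the Gaussian CDF: x * Phi(x).
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def sg_blend(x: float, alpha: float = 0.5, beta: float = 1.0) -> float:
    # Dynamic interpolation between the two activations. In a network,
    # alpha (and beta) would be learnable parameters updated by backprop;
    # here they are fixed scalars for illustration.
    return alpha * swish(x, beta) + (1.0 - alpha) * gelu(x)
```

At `alpha = 1` the blend reduces to the Swish-like branch and at `alpha = 0` to GELU, so the learnable coefficient lets the network interpolate between the two profiles per layer or per model as training dictates.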

Gaurav Sarkar, Jay Gala, Subarna Tripathi

Subject areas: Computing Technology, Computer Technology

Gaurav Sarkar, Jay Gala, Subarna Tripathi. SG-Blend: Learning an Interpolation Between Improved Swish and GELU for Robust Neural Representations [EB/OL]. (2025-05-29) [2025-07-02]. https://arxiv.org/abs/2505.23942
