National Preprint Platform

SOSAE: Self-Organizing Sparse AutoEncoder

Source: arXiv
English Abstract

Tuning the size of an autoencoder's hidden layers yields optimally compressed representations of the input data. However, such hyper-parameter tuning requires substantial computation and time, with grid search as the default option. In this paper, we introduce the Self-Organization Regularization for Autoencoders, which dynamically adapts the dimensionality of the feature space to the optimal size. Inspired by concepts from physics, the Self-Organizing Sparse AutoEncoder (SOSAE) induces sparsity in the feature space in a structured way that permits truncation of the non-active part of the feature vector without any loss of information. This is achieved by penalizing the autoencoder based on both the magnitude and the positional index of each feature vector dimension, which constricts the feature space in both respects during training. Extensive experiments on various datasets show that SOSAE can tune the feature space dimensionality with up to 130 times fewer floating-point operations (FLOPs) than other baselines while maintaining the same quality of tuning and performance.
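The abstract describes a regularizer that penalizes each latent dimension by both its magnitude and its positional index, so that activity concentrates in the leading dimensions and the trailing, near-zero tail can be truncated. A minimal sketch of such a position-weighted sparsity penalty is shown below; the exact functional form and the name `sosae_penalty` are our illustrative assumptions, not the paper's specification.

```python
import torch

def sosae_penalty(z: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """Hypothetical position-weighted L1 penalty on latent codes z.

    Each dimension's absolute activation is weighted by its 1-based
    positional index, so later dimensions pay a higher price for being
    active. This pushes information toward the front of the feature
    vector, letting the inactive tail be truncated after training.
    """
    # Positional weights: 1, 2, ..., D for a D-dimensional latent code.
    idx = torch.arange(1, z.shape[-1] + 1, dtype=z.dtype, device=z.device)
    # Sum the weighted magnitudes per sample, then average over the batch.
    return lam * (z.abs() * idx).sum(dim=-1).mean()

# Usage sketch: add the penalty to the reconstruction loss.
# loss = reconstruction_loss + sosae_penalty(z)
```

Under this sketch, a dimension at index 100 incurs 100 times the cost of the first dimension for the same magnitude, so the optimizer zeroes out trailing dimensions first; the retained feature size emerges from training rather than from grid search.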

Sarthak Ketanbhai Modi, Zi Pong Lim, Yushi Cao, Yupeng Cheng, Yon Shin Teo, Shang-Wei Lin

Subject: Computing Technology; Computer Technology

Sarthak Ketanbhai Modi, Zi Pong Lim, Yushi Cao, Yupeng Cheng, Yon Shin Teo, Shang-Wei Lin. SOSAE: Self-Organizing Sparse AutoEncoder [EB/OL]. (2025-07-07) [2025-08-02]. https://arxiv.org/abs/2507.04644.
