National Preprint Platform (国家预印本平台)

TSRM: A Lightweight Temporal Feature Encoding Architecture for Time Series Forecasting and Imputation


Source: arXiv
English Abstract

We introduce a temporal feature encoding architecture called Time Series Representation Model (TSRM) for multivariate time series forecasting and imputation. The architecture is structured around CNN-based representation layers, each dedicated to an independent representation learning task and designed to capture diverse temporal patterns, followed by an attention-based feature extraction layer and a merge layer that aggregates the extracted features. The overall configuration is inspired by the Transformer encoder, with self-attention mechanisms at its core. The TSRM architecture outperforms state-of-the-art approaches on most of the seven established benchmark datasets considered in our empirical evaluation, for both forecasting and imputation tasks, while significantly reducing model complexity in terms of learnable parameters. The source code is available at https://github.com/RobertLeppich/TSRM.
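Based only on the abstract's description (the authoritative implementation is in the linked GitHub repository), one encoding block of this kind might be sketched as follows. Everything here is an illustrative assumption: the function names, the use of parallel averaging convolutions with different kernel sizes as "representation layers", the projection-free single-head attention, and the random merge projection standing in for a learned linear layer.

```python
import numpy as np

def conv1d(x, kernel):
    # 'same'-padded 1-D convolution along the time axis, applied per variable
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = np.tensordot(kernel, xp[t:t + len(kernel)], axes=(0, 0))
    return out

def self_attention(x):
    # single-head scaled dot-product self-attention (no learned projections)
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def tsrm_block(x, kernel_sizes=(3, 5, 7)):
    # representation layers: parallel convolutions with different receptive
    # fields, a stand-in for "capturing diverse temporal patterns"
    reps = [conv1d(x, np.ones(k) / k) for k in kernel_sizes]
    # attention-based feature extraction on each representation, then concat
    feats = np.concatenate([self_attention(r) for r in reps], axis=1)
    # merge layer: random projection back to model width (stand-in for a
    # learned linear layer), plus a Transformer-style residual connection
    rng = np.random.default_rng(0)
    w = rng.standard_normal((feats.shape[1], x.shape[1])) / np.sqrt(feats.shape[1])
    return x + feats @ w

x = np.random.default_rng(1).standard_normal((16, 4))  # 16 time steps, 4 variables
y = tsrm_block(x)
print(y.shape)  # (16, 4)
```

The sketch keeps the shape contract the abstract implies (input and output both of shape `time × variables`), so blocks can be stacked like Transformer encoder layers; the actual learned components and layer counts would need to be taken from the repository.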

Robert Leppich, Michael Stenger, Daniel Grillmeyer, Vanessa Borst, Samuel Kounev

Subject: Computing Technology, Computer Technology

Robert Leppich, Michael Stenger, Daniel Grillmeyer, Vanessa Borst, Samuel Kounev. TSRM: A Lightweight Temporal Feature Encoding Architecture for Time Series Forecasting and Imputation [EB/OL]. (2025-04-26) [2025-05-17]. https://arxiv.org/abs/2504.18878.
