
Haphazard Inputs as Images in Online Learning


Source: arXiv

Abstract

Varying feature spaces in online learning settings, also known as haphazard inputs, have become prominent due to their applicability across many fields. However, current solutions to haphazard inputs are model-dependent and cannot benefit from existing advanced deep-learning methods, which require inputs of fixed dimensions. We therefore propose to transform the varying feature space in an online learning setting into a fixed-dimension image representation on the fly. This simple yet novel approach is model-agnostic, allowing any vision-based model to be applied to haphazard inputs, as demonstrated using ResNet and ViT. The image representation handles inconsistent input data seamlessly, making our proposed approach scalable and robust. We show the efficacy of our method on four publicly available datasets. The code is available at https://github.com/Rohit102497/HaphazardInputsAsImages.
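The abstract does not specify the exact encoding, but the core idea — mapping a variable, changing set of features to a fixed-size image that a vision model can consume — can be sketched as follows. This is an illustrative assumption, not the authors' implementation: each feature is assigned a fixed patch of a 32×32 canvas the first time it appears, and its value is painted as that patch's intensity, so inputs with missing or newly arriving features still yield the same output shape.

```python
import numpy as np

class HaphazardToImage:
    """Illustrative sketch (not the paper's exact method): encode a
    varying set of named features as a fixed-size 2D array by giving
    each feature a persistent patch position on first appearance."""

    def __init__(self, size=32, patch=4):
        self.size = size
        self.patch = patch
        self.slots = {}  # feature name -> patch index, assigned on first sight

    def encode(self, features):
        """Map a dict of currently available features to a size x size array."""
        img = np.zeros((self.size, self.size), dtype=np.float32)
        per_row = self.size // self.patch
        for name, value in features.items():
            if name not in self.slots:  # new feature: assign the next free slot
                self.slots[name] = len(self.slots)
            idx = self.slots[name]
            r = (idx // per_row) * self.patch
            c = (idx % per_row) * self.patch
            img[r:r + self.patch, c:c + self.patch] = value  # paint its patch
        return img
```

Because every call returns the same shape regardless of which features arrive, the output can be fed directly to a fixed-input model such as ResNet or ViT, which is the model-agnostic property the abstract emphasizes.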

Rohit Agarwal, Aryan Dessai, Arif Ahmed Sekh, Krishna Agarwal, Alexander Horsch, Dilip K. Prasad

Subjects: Computing Technology; Computer Technology

Rohit Agarwal, Aryan Dessai, Arif Ahmed Sekh, Krishna Agarwal, Alexander Horsch, Dilip K. Prasad. Haphazard Inputs as Images in Online Learning [EB/OL]. (2025-04-03) [2025-05-09]. https://arxiv.org/abs/2504.02912.
