
A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge

Source: arXiv
Abstract

We introduce the Layered Self-Supervised Knowledge Distillation (LSSKD) framework for training compact deep learning models. Unlike traditional methods that rely on pre-trained teacher networks, our approach appends auxiliary classifiers to intermediate feature maps, generating diverse self-supervised knowledge and enabling one-to-one transfer across different network stages. Our method achieves an average improvement of 4.54% over the state-of-the-art PS-KD method and a 1.14% gain over SSKD on CIFAR-100, with a 0.32% improvement on ImageNet compared to HASSKD. Experiments on Tiny ImageNet and CIFAR-100 under few-shot learning scenarios also achieve state-of-the-art results. These findings demonstrate the effectiveness of our approach in enhancing model generalization and performance without the need for large, over-parameterized teacher networks. Importantly, all auxiliary classifiers can be removed at inference time, incurring no extra computational cost. This makes our model suitable for deploying small language models on affordable, low-compute devices. Owing to its lightweight design and adaptability, our framework is particularly suitable for multimodal sensing and cyber-physical environments that require efficient and responsive inference. LSSKD facilitates the development of intelligent agents capable of learning from limited sensory data under weak supervision.
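The abstract describes the mechanism only at a high level. The following is a minimal PyTorch sketch of the idea as stated there: auxiliary classifier heads are attached to intermediate feature maps of a compact student, trained with cross-entropy plus a distillation term, and discarded at inference. The toy backbone, head design, loss weights (alpha, T), and the direction of the one-to-one transfer are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxClassifier(nn.Module):
    # Auxiliary head attached to one intermediate feature map (assumed design).
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, num_classes),
        )

    def forward(self, x):
        return self.head(x)

class TinyBackbone(nn.Module):
    # Toy three-stage CNN standing in for the compact student network.
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.fc = nn.Linear(64, num_classes)
        # One auxiliary classifier per intermediate stage; these provide the
        # self-supervised soft targets during training and are dropped at inference.
        self.aux = nn.ModuleList([
            AuxClassifier(16, num_classes),
            AuxClassifier(32, num_classes),
        ])

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        logits = self.fc(F.adaptive_avg_pool2d(f3, 1).flatten(1))
        aux_logits = [self.aux[0](f1), self.aux[1](f2)]
        return logits, aux_logits

def lsskd_loss(logits, aux_logits, labels, T: float = 4.0, alpha: float = 0.5):
    # Cross-entropy on every head, plus KD from the deepest head's softened
    # outputs to each shallower auxiliary head (assumed transfer direction).
    loss = F.cross_entropy(logits, labels)
    soft_teacher = F.softmax(logits.detach() / T, dim=1)
    for a in aux_logits:
        loss = loss + F.cross_entropy(a, labels)
        loss = loss + alpha * T * T * F.kl_div(
            F.log_softmax(a / T, dim=1), soft_teacher, reduction="batchmean")
    return loss

if __name__ == "__main__":
    model = TinyBackbone(num_classes=100)
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 100, (8,))
    logits, aux_logits = model(x)
    print(lsskd_loss(logits, aux_logits, y).item())

Because the auxiliary heads live in a separate ModuleList off the main forward path, dropping them at inference leaves the backbone's compute unchanged, consistent with the abstract's claim of zero extra inference cost.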

Tarique Dahri, Zulfiqar Ali Memon, Zhenyu Yu, Mohd. Yamani Idna Idris, Sheheryar Khan, Sadiq Ahmad, Maged Shoman, Saddam Aziz, Rizwan Qureshi

Subjects: Computing Technology; Computer Technology

Tarique Dahri, Zulfiqar Ali Memon, Zhenyu Yu, Mohd. Yamani Idna Idris, Sheheryar Khan, Sadiq Ahmad, Maged Shoman, Saddam Aziz, Rizwan Qureshi. A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge [EB/OL]. (2025-06-08) [2025-06-29]. https://arxiv.org/abs/2506.07055.
