Tin-Tin: Towards Tiny Learning on Tiny Devices with Integer-based Neural Network Training

Source: arXiv
Abstract

Recent advancements in machine learning (ML) have enabled its deployment on resource-constrained edge devices, fostering innovative applications such as intelligent environmental sensing. However, these devices, particularly microcontrollers (MCUs), face substantial challenges due to limited memory, limited computing capability, and the absence of dedicated floating-point units (FPUs). These constraints hinder the deployment of complex ML models, especially those requiring lifelong learning capabilities. To address these challenges, we propose Tin-Tin, an integer-based on-device training framework designed specifically for low-power MCUs. Tin-Tin introduces novel integer rescaling techniques to manage dynamic ranges and enable efficient weight updates using integer data types. Unlike existing methods optimized for devices with FPUs, GPUs, or FPGAs, Tin-Tin addresses the unique demands of tiny MCUs, prioritizing energy efficiency and optimized memory utilization. We validate the effectiveness of Tin-Tin through end-to-end application examples on real-world tiny devices, demonstrating its potential to support energy-efficient and sustainable ML applications on edge platforms.
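
The abstract describes the approach only at a high level and does not spell out the rescaling rule. Purely as an illustration, the C sketch below shows one way an integer-only SGD weight update with a shared power-of-two exponent per tensor could look on an FPU-less MCU. The `qtensor_t` layout, the `sgd_step_int` routine, the power-of-two learning rate, and the rescaling rule are all assumptions made for this example, not Tin-Tin's actual algorithm.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Tensor stored as int16 values with one shared power-of-two exponent:
 * real value = q[i] * 2^exp. (Hypothetical layout, not from the paper.) */
typedef struct {
    int16_t *q;
    int32_t  exp;
    size_t   n;
} qtensor_t;

/* Re-express a 32-bit value from one exponent to another.
 * Assumes arithmetic right shift for negative values, which holds
 * on common MCU toolchains (e.g. GCC for ARM Cortex-M). */
static int32_t shift_exp(int32_t v, int32_t from_exp, int32_t to_exp) {
    int32_t d = to_exp - from_exp;
    return d >= 0 ? (v >> d) : v * (int32_t)(1u << (uint32_t)-d);
}

/* One SGD step w -= lr * g in pure integer arithmetic, with
 * lr = 2^lr_exp (e.g. lr_exp = -3 gives lr = 0.125). After the update,
 * the shared exponent grows until every value fits int16 again --
 * a stand-in for the dynamic-range rescaling the abstract alludes to. */
static void sgd_step_int(qtensor_t *w, const qtensor_t *g, int32_t lr_exp) {
    int32_t *tmp = malloc(w->n * sizeof *tmp);
    int32_t max_abs = 0;
    if (!tmp) return;

    /* 1. Apply the update at w's current exponent, accumulating in 32-bit. */
    for (size_t i = 0; i < w->n; i++) {
        int32_t upd = shift_exp((int32_t)g->q[i], g->exp + lr_exp, w->exp);
        tmp[i] = (int32_t)w->q[i] - upd;
        int32_t a = tmp[i] < 0 ? -tmp[i] : tmp[i];
        if (a > max_abs) max_abs = a;
    }

    /* 2. Rescale: widen the shared exponent until every value fits int16. */
    int32_t shift = 0;
    while ((max_abs >> shift) > INT16_MAX) shift++;
    for (size_t i = 0; i < w->n; i++)
        w->q[i] = (int16_t)(tmp[i] >> shift);
    w->exp += shift;

    free(tmp);
}

int main(void) {
    int16_t wq[3] = { 1000, -2000, 3000 };
    int16_t gq[3] = { 512, -256, 128 };
    qtensor_t w = { wq, -10, 3 };   /* w ~= {0.977, -1.953, 2.930} */
    qtensor_t g = { gq, -10, 3 };
    sgd_step_int(&w, &g, -3);       /* lr = 2^-3 = 0.125 */
    for (size_t i = 0; i < 3; i++)
        printf("w[%zu] = %d * 2^%d\n", i, (int)w.q[i], (int)w.exp);
    return 0;
}
```

Restricting scales and learning rates to powers of two keeps every rescaling a shift rather than a division, which matters on MCUs without hardware divide; whether Tin-Tin makes the same design choice is not stated in the abstract.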

Yi Hu, Jinhang Zuo, Eddie Zhang, Bob Iannucci, Carlee Joe-Wong

Subjects: Microelectronics and Integrated Circuits; Applications of Electronic Technology

Yi Hu, Jinhang Zuo, Eddie Zhang, Bob Iannucci, Carlee Joe-Wong. Tin-Tin: Towards Tiny Learning on Tiny Devices with Integer-based Neural Network Training [EB/OL]. (2025-04-12) [2025-05-02]. https://arxiv.org/abs/2504.09405.
