
Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective

Source: arXiv
English Abstract

Deploying large-scale language models on edge devices faces inherent challenges such as high computational demands, energy consumption, and potential data privacy risks. This paper introduces the Shakti Small Language Models (SLMs) — Shakti-100M, Shakti-250M, and Shakti-500M — which target these constraints head-on. By combining efficient architectures, quantization techniques, and responsible AI principles, the Shakti series enables on-device intelligence for smartphones, smart appliances, IoT systems, and beyond. We provide comprehensive insights into their design philosophy, training pipelines, and benchmark performance on both general tasks (e.g., MMLU, HellaSwag) and specialized domains (healthcare, finance, and legal). Our findings illustrate that compact models, when carefully engineered and fine-tuned, can meet and often exceed expectations in real-world edge-AI scenarios.
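The abstract names quantization as one of the techniques enabling on-device deployment but does not describe the recipe. As a generic illustration only (the `quantize_int8` helper and the symmetric per-tensor int8 scheme below are assumptions for this sketch, not the Shakti authors' method), post-training weight quantization maps float weights to 8-bit integers plus a single scale factor, cutting memory roughly 4x versus float32:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127].
    (Illustrative sketch; not the scheme used in the paper.)"""
    scale = np.max(np.abs(w)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 codes."""
    return q.astype(np.float32) * scale

# Quantize a toy weight matrix and check the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by scale / 2
```

Each stored weight shrinks from 4 bytes to 1, at the cost of a rounding error bounded by half the scale; per-channel scales or quantization-aware training (both beyond this sketch) tighten that error further.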

Syed Abdul Gaffar Shakhadri, Kruthika KR, Kartik Basavaraj Angadi, Rakshit Aralimatti

Subject: Computing Technology; Computer Technology

Syed Abdul Gaffar Shakhadri, Kruthika KR, Kartik Basavaraj Angadi, Rakshit Aralimatti. Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective [EB/OL]. (2025-03-02) [2025-08-02]. https://arxiv.org/abs/2503.01933