Improving QA Efficiency with DistilBERT: Fine-Tuning and Inference on Mobile Intel CPUs
This study presents an efficient transformer-based question-answering (QA) model optimized for deployment on a 13th Gen Intel Core i7-1355U CPU, using the Stanford Question Answering Dataset (SQuAD) v1.1. Leveraging exploratory data analysis, data augmentation, and fine-tuning of a DistilBERT architecture, the model achieves a validation F1 score of 0.6536 with an average inference time of 0.1208 seconds per question. Compared to a rule-based baseline (F1: 0.3124) and full BERT-based models, our approach offers a favorable trade-off between accuracy and computational efficiency, making it well suited for real-time applications on resource-constrained systems. The study includes a systematic evaluation of data augmentation strategies and hyperparameter configurations, providing practical insights into optimizing transformer models for CPU-based inference.
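
A minimal sketch of the kind of CPU-bound QA inference setup the abstract describes, assuming the Hugging Face transformers pipeline API and the publicly released distilbert-base-uncased-distilled-squad checkpoint as a stand-in for the paper's own fine-tuned weights; the example question and context are illustrative, not taken from the paper:

    import time
    from transformers import pipeline

    # Load a DistilBERT QA model; this public checkpoint stands in for the
    # paper's fine-tuned weights, which are not distributed with the abstract.
    qa = pipeline(
        "question-answering",
        model="distilbert-base-uncased-distilled-squad",
        device=-1,  # -1 pins the pipeline to CPU, matching the paper's setting
    )

    context = (
        "SQuAD v1.1 is a reading-comprehension dataset of more than 100,000 "
        "question-answer pairs posed on Wikipedia articles."
    )
    question = "What kind of dataset is SQuAD v1.1?"

    # Warm-up call so the timed run excludes one-time initialization cost.
    qa(question=question, context=context)

    # Time a single question to approximate per-question CPU latency.
    start = time.perf_counter()
    result = qa(question=question, context=context)
    elapsed = time.perf_counter() - start

    print(f"Answer: {result['answer']!r} (score={result['score']:.3f})")
    print(f"Inference time: {elapsed:.4f} s")

Averaging the timed call over the full validation set, rather than a single question, would be the natural way to reproduce a per-question latency figure like the 0.1208 s reported above.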
Ngeyen Yinkfu
Computing Technology, Computer Technology
Ngeyen Yinkfu. Improving QA Efficiency with DistilBERT: Fine-Tuning and Inference on Mobile Intel CPUs [EB/OL]. (2025-05-28) [2025-07-16]. https://arxiv.org/abs/2505.22937.