Evaluating the Performance and Efficiency of Sentence-BERT for Code Comment Classification
This work evaluates Sentence-BERT for a multi-label code comment classification task, seeking to maximize classification performance while controlling efficiency constraints during inference. Using a dataset of 13,216 labeled comment sentences, Sentence-BERT models are fine-tuned and combined with different classification heads to recognize comment types. While larger models outperform smaller ones in terms of F1, the latter offer outstanding efficiency in both runtime and GFLOPS. As a result, a balance is reached between a reasonable F1 improvement (+0.0346) and a minimal efficiency degradation (+1.4x in runtime and +2.1x in GFLOPS).
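The pipeline described in the abstract (Sentence-BERT embeddings fed into a separate multi-label classification head) can be sketched as follows. This is a minimal illustration, not the authors' code: the checkpoint name, the example sentences and labels, and the one-vs-rest logistic-regression head are all assumptions chosen for brevity.

```python
# Minimal sketch (not the authors' implementation): embed comment sentences with a
# Sentence-BERT model, then train a lightweight multi-label classification head.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Hypothetical data: each comment sentence may carry several comment-type labels.
sentences = [
    "Returns the number of active sessions.",
    "TODO: refactor this method to avoid duplication.",
]
labels = [
    [1, 0, 0],  # e.g., summary
    [0, 1, 1],  # e.g., todo + maintenance note
]

# Any Sentence-BERT checkpoint could be plugged in here; smaller models trade
# some F1 for lower runtime and GFLOPS, as the abstract notes.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(sentences)

# One-vs-rest logistic regression as a simple multi-label classification head.
head = OneVsRestClassifier(LogisticRegression(max_iter=1000))
head.fit(embeddings, labels)

predictions = head.predict(encoder.encode(["Deprecated: use parse_v2 instead."]))
print(predictions)
```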
Fabian C. Peña, Steffen Herbold
Subject: Computing technology, computer technology
Fabian C. Peña, Steffen Herbold. Evaluating the Performance and Efficiency of Sentence-BERT for Code Comment Classification [EB/OL]. (2025-06-10) [2025-06-29]. https://arxiv.org/abs/2506.08581