
Bench4KE: Benchmarking Automated Competency Question Generation

Source: arXiv
Abstract

The availability of Large Language Models (LLMs) presents a unique opportunity to reinvigorate research on Knowledge Engineering (KE) automation, a trend already evident in recent efforts to develop LLM-based methods and tools for the automatic generation of Competency Questions (CQs). However, the evaluation of these tools lacks standardisation, which undermines methodological rigour and hinders the replication and comparison of results. To address this gap, we introduce Bench4KE, an extensible API-based benchmarking system for KE automation. Its first release focuses on evaluating tools that generate CQs automatically. CQs are natural language questions used by ontology engineers to define the functional requirements of an ontology. Bench4KE provides a curated gold standard consisting of CQ datasets from four real-world ontology projects and uses a suite of similarity metrics to assess the quality of the generated CQs. We present a comparative analysis of four recent LLM-based CQ generation systems, establishing a baseline for future research. Bench4KE is also designed to accommodate additional KE automation tasks, such as SPARQL query generation, ontology testing, and ontology drafting. Code and datasets are publicly available under the Apache 2.0 license.
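The abstract does not specify which similarity metrics Bench4KE uses, but the core evaluation step it describes (scoring automatically generated CQs against a curated gold standard) can be illustrated with a minimal sketch. The Python example below uses a simple token-level Jaccard overlap as a stand-in metric; all function and variable names are hypothetical and not taken from Bench4KE.

```python
from typing import List


def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two questions (stand-in metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def score_generated_cqs(generated: List[str], gold: List[str]) -> float:
    """Average best-match similarity of each generated CQ against the gold set."""
    if not generated or not gold:
        return 0.0
    best = [max(jaccard_similarity(g, ref) for ref in gold) for g in generated]
    return sum(best) / len(best)


# Example: compare a tool's output with a gold-standard CQ dataset.
gold_cqs = [
    "What are the components of a system?",
    "Which agents participate in an event?",
]
generated_cqs = [
    "What components does a system have?",
    "Which agents take part in an event?",
]
print(f"Mean best-match similarity: {score_generated_cqs(generated_cqs, gold_cqs):.2f}")
```

A production benchmark would likely replace this lexical overlap with embedding-based semantic similarity, since paraphrased CQs can share few surface tokens while asking the same question.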

Anna Sofia Lippolis, Minh Davide Ragagni, Paolo Ciancarini, Andrea Giovanni Nuzzolese, Valentina Presutti

Subject areas: Computing and Computer Technology; Automation Technology and Equipment

Anna Sofia Lippolis, Minh Davide Ragagni, Paolo Ciancarini, Andrea Giovanni Nuzzolese, Valentina Presutti. Bench4KE: Benchmarking Automated Competency Question Generation [EB/OL]. (2025-05-30) [2025-07-01]. https://arxiv.org/abs/2505.24554.
