As ontologies enable advanced intelligent applications, ensuring their correctness is crucial. While many quality aspects can be verified automatically, some evaluation tasks can only be solved with human intervention. Nevertheless, there is currently no generic methodology or tool support available for human-centric evaluation of ontologies. This leads to high effort in organizing such evaluation campaigns, as ontology engineers are neither guided in terms of the activities to follow nor do they benefit from tool support. To address this gap, we propose HERO - a Human-Centric Ontology Evaluation PROcess - capturing all preparation, execution, and follow-up activities involved in such verifications. We further propose a reference architecture for a support platform based on HERO. We perform a case-study-centric evaluation of HERO and its reference architecture and observe a decrease in manual effort of up to 88% when ontology engineers are supported by the proposed artifacts, compared to a manual preparation of the evaluation.
|Title of collection||Knowledge Engineering and Knowledge Management|
|Subtitle of collection||23rd International Conference, EKAW 2022, Bolzano, Italy, September 26–29, 2022, Proceedings|
|Editors||Oscar Corcho, Laura Hollink, Oliver Kutz, Nicolas Troquard, Fajar J. Ekaputra|
|Publication status||Published - 20 Sep 2022|
|Series||Lecture Notes in Computer Science|
|Series||Lecture Notes in Artificial Intelligence|