Human-Centric Ontology Evaluation: Process and Tool Support

Publication: Chapter in book / conference proceedings (contribution to conference proceedings)

Abstract

As ontologies enable advanced intelligent applications, ensuring their correctness is crucial. While many quality aspects can be verified automatically, some evaluation tasks can only be solved with human intervention. Nevertheless, no generic methodology or tool support is currently available for human-centric evaluation of ontologies. Organizing such evaluation campaigns therefore requires high effort, as ontology engineers are neither guided in terms of the activities to follow nor supported by tools. To address this gap, we propose HERO - a Human-Centric Ontology Evaluation PROcess - capturing all preparation, execution, and follow-up activities involved in such verifications. We further propose a reference architecture for a support platform based on HERO. We perform a case-study-centric evaluation of HERO and its reference architecture and observe a decrease in manual effort of up to 88% when ontology engineers are supported by the proposed artifacts, compared with manual preparation of the evaluation.
Original language: English
Title of host publication: Knowledge Engineering and Knowledge Management
Subtitle of host publication: 23rd International Conference, EKAW 2022, Bolzano, Italy, September 26–29, 2022, Proceedings
Editors: Oscar Corcho, Laura Hollink, Oliver Kutz, Nicolas Troquard, Fajar J. Ekaputra
Place of publication: Cham
Publisher: Springer
Pages: 182–197
Number of pages: 16
Edition: 1
ISBN (electronic): 978-3-031-17105-5
ISBN (print): 978-3-031-17104-8
Publication status: Published - 20 Sept 2022

Publication series

Series: Lecture Notes in Computer Science, Volume 13514, ISSN 0302-9743
Series: Lecture Notes in Artificial Intelligence, Number 13514

Keywords

  • ontology evaluation
  • process model
  • human-in-the-loop
