LLM-driven Ontology Evaluation: Verifying Ontology Restrictions with ChatGPT

Stefani Tsaneva*, Stefan Vasic, Marta Sabou

*Corresponding author for this work

Publication: Chapter in book/Conference proceeding › Contribution to conference proceedings


Abstract

Recent advancements in artificial intelligence, particularly in large language models (LLMs), have sparked interest in their application to knowledge engineering (KE) tasks. While existing research has primarily explored the utilisation of LLMs for constructing and completing semantic resources such as ontologies and knowledge graphs, the evaluation of these resources, which addresses quality issues, has not yet been thoroughly investigated. To address this gap, we propose an LLM-driven approach for the verification of ontology restrictions. We replicate our previously conducted human-in-the-loop experiment using ChatGPT-4 instead of human contributors to assess whether comparable ontology verification results can be obtained. We find that (1) ChatGPT-4 achieves intermediate-to-expert scores on an ontology modelling qualification test; (2) the model performs ontology restriction verification with an accuracy of 92.22%; (3) combining the model's answers on the same ontology axiom represented in different formalisms improves the accuracy to 96.67%; and (4) higher accuracy is observed in identifying defects related to the incompleteness of ontology axioms than in identifying errors caused by the misuse of restrictions. Our results highlight the potential of LLMs to support knowledge engineering tasks and outline future research directions in the area.
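
To illustrate the general idea of querying an LLM about the same restriction in different formalisms and combining the verdicts, the following is a minimal sketch. It assumes the OpenAI Python client (>= 1.0) and the "gpt-4" model; the prompts, the example axiom, and the combination rule are illustrative assumptions, not the authors' actual experimental setup.

```python
# Hypothetical sketch: ask an LLM to verify the same ontology restriction
# expressed in two formalisms, then combine the verdicts.
# Assumes the OpenAI Python client (>= 1.0); prompts and the example axiom
# are illustrative and do not reproduce the paper's experimental setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def verify_axiom(axiom: str, formalism: str) -> bool:
    """Ask the model whether the restriction is modelled correctly."""
    prompt = (
        f"The following ontology restriction is written in {formalism}:\n\n"
        f"{axiom}\n\n"
        "Is this restriction modelled correctly with respect to its intended "
        "meaning? Answer with 'correct' or 'defective' only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return "correct" in response.choices[0].message.content.lower()


# The same axiom in two representations (illustrative example).
manchester = "Class: Vegetarian SubClassOf eats only (not Meat)"
natural_language = "A vegetarian is someone who eats nothing that is meat."

verdicts = [
    verify_axiom(manchester, "Manchester syntax"),
    verify_axiom(natural_language, "natural language"),
]

# One simple combination rule: accept the axiom only if both
# representations are judged correct; otherwise flag it for review.
print("correct" if all(verdicts) else "potentially defective")
```

Other combination strategies (e.g. majority voting over more than two representations) are equally possible; the sketch only shows the simplest conjunctive rule.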
Original language: English
Title of host publication: Data Quality meets Machine Learning and Knowledge Graphs
Subtitle of host publication: DQMLKG Workshop at ESWC 2024
Publication status: Accepted/In press - 2024
