Explaining Reasoning Results for OWL Ontologies with Evee

Research output: Contribution to conference proceedings › Conference contribution › peer-reviewed

Abstract

One of the advantages of formalizing domain knowledge in OWL ontologies is that reasoning systems can infer implicit information automatically. However, it is not always straightforward to understand why certain entailments are inferred and others are not. The popular ontology editor Protégé offers two explanation services to deal with this issue: justifications for OWL 2 DL ontologies, and proofs generated by the reasoner ELK for lightweight OWL 2 EL ontologies. Justifications, however, are often insufficient for explaining inferences; beyond them, there is little tool support for more comprehensive explanations in expressive ontology languages, and none at all for explaining why something was not derived. In this paper, we present Evee, a Java library and a collection of plug-ins for Protégé that offer advanced explanation services for both inferred and missing entailments. Evee explains inferred entailments using proofs in description logics up to ALCH. Missing entailments can be explained using counterexamples and abduction. We evaluated the effectiveness and the interface design of our plug-ins in two user studies with description logic experts, ontology engineers, and students. These experiments not only validated the tool but also yielded feedback and insights for improving the existing designs.

Details

Original language: English
Title of host publication: Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning -- KR in the Wild
Pages: 709–719
Number of pages: 11
Publication status: Published - 2024
Peer-reviewed: Yes

External IDs

ORCID: /0000-0001-9936-0943/work/173984750
Unpaywall: 10.24963/kr.2024/67
Mendeley: 211b8906-28b5-3e07-b5e4-9dc35f0d0dc7
