Explaining Reasoning Results for OWL Ontologies with Evee
Publication: Contribution in book/conference proceedings/edited volume/report › Contribution to conference proceedings › Contributed › Peer-reviewed
Contributors
Abstract
One of the advantages of formalizing domain knowledge in OWL ontologies is that one can use reasoning systems to infer implicit information automatically. However, it is not always straightforward to understand why certain entailments are inferred while others are not. The popular ontology editor Protégé offers two explanation services to deal with this issue: justifications for OWL 2 DL ontologies, and proofs generated by the reasoner ELK for lightweight OWL 2 EL ontologies. However, justifications are often insufficient for explaining inferences, there is little tool support for more comprehensive explanations in expressive ontology languages, and there is no tool support at all for explaining why an expected entailment was not derived. In this paper, we present Evee, a Java library and a collection of plug-ins for Protégé that offers advanced explanation services for both inferred and missing entailments. Evee explains inferred entailments using proofs in description logics up to ALCH. Missing entailments can be explained using counterexamples and abduction. We evaluated the effectiveness and the interface design of our plug-ins with description logic experts, ontology engineers, and students in two user studies. In these experiments, we were able not only to validate the tool but also to gather feedback and insights to improve the existing designs.
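Evee's own API and plug-in interfaces are not shown in this record. As a rough illustration of the reasoning task the abstract refers to, the following is a minimal sketch, assuming the OWL API and the ELK OWL API binding (ElkReasonerFactory) are on the classpath; the class names and IRIs are hypothetical. It builds a small EL ontology and asks ELK whether a subsumption holds (an inferred entailment) and whether its converse holds (a missing entailment), i.e., the two situations that Evee aims to explain.

```java
import org.semanticweb.elk.owlapi.ElkReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class EntailmentCheck {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology ontology = manager.createOntology();

        // Toy TBox with hypothetical class names: A ⊑ B and B ⊑ C.
        OWLClass a = factory.getOWLClass(IRI.create("http://example.org/A"));
        OWLClass b = factory.getOWLClass(IRI.create("http://example.org/B"));
        OWLClass c = factory.getOWLClass(IRI.create("http://example.org/C"));
        manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(a, b));
        manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(b, c));

        // Classify with ELK, the OWL 2 EL reasoner mentioned in the abstract.
        OWLReasoner reasoner = new ElkReasonerFactory().createReasoner(ontology);

        // Inferred entailment: A ⊑ C follows from the two asserted axioms.
        boolean aSubC = reasoner.getSuperClasses(a, false).containsEntity(c);
        System.out.println("A ⊑ C entailed: " + aSubC);

        // Missing entailment: C ⊑ A does not follow; explaining such cases
        // (e.g., via counterexamples or abduction) is what Evee targets.
        boolean cSubA = reasoner.getSuperClasses(c, false).containsEntity(a);
        System.out.println("C ⊑ A entailed: " + cSubA);

        reasoner.dispose();
    }
}
```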
Details
Original language | English |
---|---|
Title | Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning -- KR in the Wild |
Pages | 709–719 |
Publication status | Published - 2024 |
Peer-review status | Yes |
External IDs
ORCID | /0000-0001-9936-0943/work/173984750 |
---|---|