Modeling and Evaluating Personas with Software Explainability Requirements

Research output: Contribution to book/conference proceedings/anthology/report › Chapter in book/anthology/report › Contributed › peer-review

Abstract

This work focuses on software explainability, that is, the production of software capable of explaining to users the dynamics that govern its internal functioning. User models that include information about users' requirements and their perceptions of explainability are fundamental when building software with such a capability. This study investigates the process of creating personas that include information about users' explainability perceptions and needs. The proposed approach is based on data collection with questionnaires, modeling of empathy maps, grouping of the maps, generation of personas from the groups, and evaluation using the Persona Perception Scale method. In an empirical study, personas are created from the questionnaire responses of 61 users. The generated personas are evaluated by 60 users and 38 designers on attributes of the Persona Perception Scale method. The results include a set of 5 distinct personas that users rate as representative of them at an average level of 3.7 out of 5 and that designers rate as having a quality of 3.5 out of 5. The median rating is 4 out of 5 for the majority of the criteria judged by users and designers. Both the personas and the approach for creating and evaluating them are contributions of this study to the design of software that satisfies the explainability requirement.

Details

Original language: English
Title of host publication: Human-Computer Interaction. HCI-COLLAB 2021. Communications in Computer and Information Science
Publisher: Springer, Cham
Volume: 1478
ISBN (electronic): 978-3-030-92325-9
ISBN (print): 978-3-030-92324-2
Publication status: Published - 2021
Peer-reviewed: Yes

External IDs

Scopus: 85121869756
