Anger Against the Algorithm? – The Role of Mindful and Mindless Processing of Errors by Human-Like Generative AI
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
A growing number of organizations are implementing generative AI via chat interfaces, a technology commonly known as a conversational agent (CA), to support workers. Although AI is constantly improving, it is unlikely that it will ever be flawless. Generally, mistakes by humans are penalized less than mistakes by machines. However, CAs are frequently designed to be human-like, which raises the question: How does the perceived humanness of AI influence how users react to generative AI errors? We conducted a 2 × 2 experimental study with 210 participants, examining how perceived humanness and error affect perceived reliability, frustration, and anger along a cognitive and an affective pathway, drawing on algorithm aversion and computers-are-social-actors theory. We demonstrate that perceived humanness leads to higher perceived reliability and reduces users’ anger and frustration caused by the error. Therefore, we recommend designing AI interfaces to be human-like to reduce the negative emotions associated with AI errors.
Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | ICIS 2024 Proceedings |
| Place of publication | Bangkok |
| Publication status | Published - 2024 |
| Peer-reviewed | Yes |
External IDs
| Type | Identifier |
|---|---|
| ORCID | /0000-0002-0038-007X/work/171065261 |
| ORCID | /0009-0008-5147-2995/work/171066174 |