Determining the Effect of Feedback Quality on User Engagement on Online Idea Crowdsourcing Platforms Using an AI Model

Research output: Contribution to journal › Conference article › Contributed › peer-review

Abstract

The success of idea crowdsourcing platforms relies on fostering a collaborative environment that encourages active user participation, measured by the quality and quantity of contributions and interactions. However, understanding the impact of peer interactions on user engagement and the innovation process remains challenging. While previous studies have focused on sentiment analysis, the contextual interpretation of peer feedback and its effects on engagement and innovation have yet to be explored. To address this knowledge gap, we propose a feature-based AI model that categorizes peer feedback based on its quality and polarity. Our model achieves 96% accuracy in identifying feedback quality (constructive vs. non-constructive and toxic vs. non-toxic) and feedback polarity (negative vs. positive). This contextual feedback categorization provides a foundation for quantitatively analyzing the impacts of feedback on user engagement and the innovation process. Our results show that positive and constructive peer feedback has a significantly positive effect, while toxic and negative peer feedback has a significantly negative impact on the process of idea generation, idea evaluation, and idea selection. Our results also indicate that inexperienced users are more susceptible to toxic feedback than experienced users. Based on these findings, we suggest a recommendation and reward system for incentivizing constructive feedback and preventing toxic feedback.
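The abstract describes a feature-based model that labels each piece of peer feedback along three axes: constructive vs. non-constructive, toxic vs. non-toxic, and positive vs. negative. The paper's actual features and model are not given here; the sketch below is a minimal, purely illustrative stand-in that uses hypothetical keyword cue lists to show what such a three-axis categorization interface might look like.

```python
# Illustrative sketch of three-axis feedback categorization.
# The cue word lists and simple rules are hypothetical placeholders,
# NOT the features or model from the paper (which reports 96% accuracy
# with a trained feature-based AI model).

CONSTRUCTIVE_CUES = {"suggest", "consider", "improve", "could", "try"}
TOXIC_CUES = {"stupid", "idiot", "worthless", "garbage"}
POSITIVE_CUES = {"great", "good", "love", "interesting", "useful"}
NEGATIVE_CUES = {"bad", "boring", "useless", "weak"}


def categorize_feedback(text: str) -> dict:
    """Return quality and polarity labels for one feedback comment."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return {
        "constructive": bool(tokens & CONSTRUCTIVE_CUES),
        "toxic": bool(tokens & TOXIC_CUES),
        "polarity": (
            "positive"
            if len(tokens & POSITIVE_CUES) >= len(tokens & NEGATIVE_CUES)
            else "negative"
        ),
    }


print(categorize_feedback("Great idea, but consider a simpler prototype."))
```

In the study, labels like these feed a downstream quantitative analysis of how each feedback category correlates with user engagement across idea generation, evaluation, and selection.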

Details

Original language: English
Article number: 376
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 8
Issue number: CSCW2
Publication status: Published - 7 Nov 2024
Peer-reviewed: Yes

External IDs

unpaywall: 10.1145/3686915
Scopus: 85210321036

Keywords