A Comparison of Different Approaches of Model Editors for Automatic Item Generation (AIG)

Research output: Contribution to book/conference proceedings: Conference contribution, contributed, peer-reviewed

Contributors

Abstract

The Automatic Item Generation (AIG) approach allows users to generate tasks or items from user-defined knowledge models created with associated editors. The challenge is that these editors typically require a certain level of technical expertise, which limits the group of users who can benefit from AIG. To overcome this, editors can enforce strict user guidance, following a purist approach that avoids feature overload. However, once users are familiar with AIG, this purist approach may hinder their productivity. This paper examines the relationship between the users who can benefit from AIG, the AIG model editing approach used, and its usability aspects. In addition, it seeks to identify further perspectives for the development of AIG model editors that make them accessible to both experienced and novice users. For this purpose, we conceptualized an editor that allows greater modeling freedom and compared it with a previously developed editor that enforces strict user guidance. Our evaluation shows that the new editor makes more AIG features available but is harder to get used to, and that an appropriate approach may be to dynamically adapt the guidance and feature set to the user's goal and expertise.

Details

Original language: English
Title of host publication: Proceedings of the 17th International Conference on Computer Supported Education
Editors: Christoph Meinel, Benedict du Boulay, Tania Di Mascio, Edmundo Tovar
Publisher: SCITEPRESS - Science and Technology Publications
Pages: 765-776
Number of pages: 12
Volume: 1
ISBN (print): 978-989-758-746-7
Publication status: Published - 2025
Peer-reviewed: Yes

Keywords

  • Automatic Item Generation, AIG, Assessment, Cognitive Model, Item Model