A Comparison of Different Approaches of Model Editors for Automatic Item Generation (AIG)
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
The Automatic Item Generation (AIG) approach allows users to generate tasks or items from user-defined knowledge models created with associated editors. The challenge is that these editors typically require a certain level of technical expertise, which limits the range of users who can benefit from AIG. To address this, editors can enforce strict user guidance, following a purist approach that avoids feature overload. However, once users are familiar with AIG, this purist approach may hinder their productivity. This paper examines the relationship between the users who can benefit from AIG, the AIG model editing approach used, and its usability aspects. In addition, it seeks to identify further perspectives for the development of AIG model editors that make them accessible to both experienced and novice users. For this purpose, we conceptualized an editor that allows greater modeling freedom and compared it with a previously developed editor that enforces strict user guidance. Our evaluation shows that the new editor makes more AIG features usable but is harder to get used to, and that an appropriate approach may be to adapt the guidance and feature set dynamically to the user's goal and expertise.
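As general background (not drawn from the paper itself), template-based AIG typically instantiates an item model, i.e. a stem with variable slots and admissible values, to produce many concrete items. The sketch below illustrates this idea in Python; the `ItemModel` class and the arithmetic example are hypothetical and only meant to make the concept concrete, not the editors or model format evaluated in the paper.

```python
# Illustrative sketch of template-based Automatic Item Generation (AIG).
# NOTE: generic background only; the ItemModel class and example slots are hypothetical,
# not the model format or editors described in the paper.
from dataclasses import dataclass
from itertools import product


@dataclass
class ItemModel:
    stem: str    # item stem with named slots, e.g. "What is {a} + {b}?"
    slots: dict  # slot name -> list of admissible values

    def generate(self):
        """Yield one concrete item per combination of slot values."""
        names = list(self.slots)
        for values in product(*(self.slots[n] for n in names)):
            binding = dict(zip(names, values))
            yield {
                "stem": self.stem.format(**binding),
                "key": sum(binding.values()),  # correct answer for this toy model
            }


# Example: a tiny arithmetic item model producing 9 distinct items.
model = ItemModel(stem="What is {a} + {b}?", slots={"a": [2, 3, 4], "b": [5, 6, 7]})
for item in model.generate():
    print(item["stem"], "->", item["key"])
```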
Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the 17th International Conference on Computer Supported Education |
| Editors | Christoph Meinel, Benedict du Boulay, Tania Di Mascio, Edmundo Tovar |
| Publisher | SCITEPRESS - Science and Technology Publications |
| Pages | 765-776 |
| Number of pages | 12 |
| Volume | 1 |
| ISBN (print) | 978-989-758-746-7 |
| Publication status | Published - 2025 |
| Peer-reviewed | Yes |
Keywords
- Automatic Item Generation, AIG, Assessment, Cognitive Model, Item Model