Prompt the problem - investigating the mathematics educational quality of AI-supported problem solving by comparing prompt techniques
Publication: Contribution to journal › Research article › Contributed › Peer-reviewed
Contributors
Abstract
The use of and research on the large language model (LLM) Generative Pretrained Transformer (GPT) is growing steadily, especially in mathematics education. As students and teachers worldwide increasingly use this AI model for teaching and learning mathematics, the question of the quality of the generated output becomes important. Consequently, this study evaluates AI-supported mathematical problem solving with different GPT versions when the LLM is subjected to prompt techniques. To assess the mathematics educational quality (content related and process related) of the LLM's output, we applied four prompt techniques and investigated their effects in model validations (N = 1,080) using three mathematical problem-based tasks. Subsequently, human raters scored the mathematics educational quality of the AI output. The results showed that the content-related quality of AI-supported problem solving was not significantly affected by using various prompt techniques across GPT versions. However, certain prompt techniques, particularly Chain-of-Thought and Ask-me-Anything, notably improved process-related quality.
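To illustrate what comparing prompt techniques can look like in practice, the following is a minimal sketch of how prompts for a mathematical task might be wrapped in different techniques. The task text, template wording, and function names are illustrative assumptions, not the study's actual prompts or materials.

```python
# Hypothetical sketch: wrapping one math task in different prompt techniques.
# The templates below are assumptions for illustration, not the study's prompts.

TASK = ("A rectangle's perimeter is 24 cm and its length is twice its width. "
        "Find its area.")

def build_prompt(task: str, technique: str) -> str:
    """Return the task text wrapped in the named prompt technique."""
    if technique == "zero-shot":
        # Baseline: the task is sent without any additional instruction.
        return task
    if technique == "chain-of-thought":
        # Chain-of-Thought: ask the model to make its reasoning steps explicit.
        return f"{task}\nLet's think step by step."
    if technique == "ask-me-anything":
        # Ask-me-Anything: have the model reformulate the task as open-ended
        # questions before answering them.
        return ("Restate the following task as one or more open-ended "
                f"questions, then answer them:\n{task}")
    raise ValueError(f"unknown technique: {technique}")

for tech in ("zero-shot", "chain-of-thought", "ask-me-anything"):
    print(f"--- {tech} ---")
    print(build_prompt(TASK, tech))
```

In a study setting, each generated prompt would be sent to the respective GPT version and the responses scored by human raters, as described in the abstract.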
Details
| Original language | English |
|---|---|
| Article number | 1386075 |
| Number of pages | 15 |
| Journal | Frontiers in Education |
| Volume | 9 (2024) |
| Publication status | Published - 9 May 2024 |
| Peer-reviewed | Yes |
External IDs
| Scopus | 85193779734 |
|---|---|
| ORCID | /0000-0002-9898-8322/work/171554076 |
Keywords
- ChatGPT, Generative AI, Large language model, Mathematics education, Model validation, Problem solving, Prompt engineering