Mr-Fosdick at SemEval-2023 Task 5: Comparing Dataset Expansion Techniques for Non-Transformer and Transformer Models: Improving Model Performance through Data Augmentation
Publication: Contribution to book/conference proceedings/collection/report › Conference paper › Contributed › Peer-reviewed
Contributors
Abstract
Supervised learning requires a substantial amount of training data. To obtain it, we generated and evaluated additional datasets derived from the provided dataset using transformer and non-transformer models. Training new models on these generated datasets yields a higher balanced accuracy during validation than training on the original dataset alone.
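The abstract describes expanding the provided training set with model-generated examples before training. As a minimal illustrative sketch of one common transformer-based expansion technique (back-translation), and not necessarily the authors' exact pipeline, the following Python snippet paraphrases training texts by translating them to German and back to English; the translation model names are assumptions chosen for illustration.

```python
# Illustrative sketch of dataset expansion via back-translation.
# Not the authors' exact method; the Helsinki-NLP model names are assumptions.
from transformers import pipeline

# Hypothetical English -> German and German -> English translation pipelines.
to_de = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
to_en = pipeline("translation_de_to_en", model="Helsinki-NLP/opus-mt-de-en")

def back_translate(texts):
    """Paraphrase each text by round-tripping it through German."""
    german = [out["translation_text"] for out in to_de(texts)]
    english = [out["translation_text"] for out in to_en(german)]
    return english

# The paraphrased examples would be appended to the original training set
# before training a new classifier.
original = ["Scientists reveal a surprisingly simple trick to save energy."]
augmented = back_translate(original)
print(augmented)
```

Other expansion techniques compared in the paper (e.g., non-transformer generation) would plug into the same pattern: generate variants of the provided examples, then train on the combined original-plus-generated dataset.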
Details
Original language | English
---|---
Title | Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Editors | Atul Kr. Ojha, A. Seza Dogruoz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Publisher | Association for Computational Linguistics (ACL)
Pages | 88-93
Number of pages | 6
ISBN (electronic) | 9781959429999
Publication status | Published - 2023
Peer-reviewed | Yes
Conference
Title | 17th International Workshop on Semantic Evaluation, SemEval 2023, co-located with the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
---|---
Duration | 13-14 July 2023
City | Hybrid, Toronto
Country | Canada