Impact of Data Sampling on Performance and Robustness of Machine Learning Models in Production Engineering
Research output: Contribution to book/Conference proceedings/Anthology/Report › Chapter in book/Anthology/Report › Contributed › peer-reviewed
Abstract
The application of machine learning models in production systems is continuously growing. Ensuring a reliable estimate of model performance is therefore crucial, as all subsequent decisions regarding the deployment of a machine learning model depend on it. Especially when modelling with datasets of small sample size, commonly used train-test splitting techniques and model evaluation strategies introduce high variance into the estimate of the model's performance. This difficulty arises because the amount of meaningful data available in production engineering is severely limited, which can cause the model's actual performance to be greatly over- or underestimated. This work provides an experimental overview of different train-test splitting techniques and model evaluation strategies. Sophisticated statistical sampling methods are compared to simple random sampling, and their impact on performance evaluation in production datasets is analysed. The aim is to ensure that the evaluation of model performance remains robust even when working with small datasets, thereby improving the decision process for deploying machine learning models in production systems.
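The kind of comparison the abstract describes can be illustrated with a minimal sketch, assuming scikit-learn; the synthetic dataset, random-forest model, and split parameters below are illustrative assumptions, not the setup used in the paper. It repeats simple random and stratified train-test splits on a small dataset and reports the spread of the resulting accuracy estimates, which is the variance the paper is concerned with.

```python
# Illustrative sketch (not the authors' code): variance of performance
# estimates under repeated simple random vs. stratified train-test splits
# on a small synthetic classification dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import ShuffleSplit, StratifiedShuffleSplit

# Small, imbalanced sample, loosely mimicking the limited data that is
# typical for production-engineering datasets (hypothetical parameters).
X, y = make_classification(n_samples=80, n_features=10,
                           weights=[0.7, 0.3], random_state=0)

def score_spread(splitter):
    """Train and evaluate once per split; return mean and std of accuracy."""
    scores = []
    for train_idx, test_idx in splitter.split(X, y):
        model = RandomForestClassifier(random_state=0)
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    return np.mean(scores), np.std(scores)

for name, splitter in [
    ("simple random", ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)),
    ("stratified", StratifiedShuffleSplit(n_splits=50, test_size=0.3, random_state=0)),
]:
    mean, std = score_spread(splitter)
    print(f"{name:>14}: accuracy {mean:.3f} +/- {std:.3f}")
```

On a dataset this small, the standard deviation across splits is typically large relative to the mean, which is exactly why a single random train-test split can badly over- or underestimate the model's actual performance.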
Details
| Original language | English |
|---|---|
| Title of host publication | Lecture Notes in Production Engineering |
| Publisher | Springer Nature |
| Pages | 463-472 |
| Number of pages | 10 |
| Publication status | Published - 2023 |
| Peer-reviewed | Yes |
Publication series
| Series | Lecture Notes in Production Engineering |
|---|---|
| Volume | Part F1163 |
| ISSN | 2194-0525 |
External IDs
| ORCID | /0000-0001-7540-4235/work/160952791 |
|---|---|
Keywords
- Data sampling
- Performance evaluation
- Train-test-split
- Usable artificial intelligence