Improvement of Rejection for AI Safety through Loss-Based Monitoring
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
There are numerous promising applications for AI which are safety-critical, e.g. computer vision for automated driving. This requires safety measures for the underlying algorithm. Typically, the validity of a classification is based solely on the output probability of a network. The literature suggests that rejecting classifications below an a priori set probability threshold reduces the error rate of the network. However, this inherently does not catch errors where the output probability of a wrong classification exceeds the threshold. These are the most critical errors, since the system is erroneously overconfident. To close this gap, we present how the rejection idea can be improved by performing loss-based rejection. Our approach takes data as well as the pre-trained base model as input and yields a monitoring model as output. To train the monitoring model, the data samples are labeled based on the loss incurred by the base model. This way, overconfident misclassifications can be avoided and the overall error rate reduced. For evaluation, we applied the approach to two datasets, one of which is the German Traffic Sign Recognition Benchmark (GTSRB), which is used to train safety-critical traffic sign classifiers. The experiments show that this approach reduces the error rate by up to an order of magnitude, while a portion of inputs is rejected as a trade-off.
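The abstract describes labeling each training sample by the loss the pre-trained base model incurs on it and then fitting a separate monitoring model on those labels. The sketch below is a minimal illustration of that idea only, not the paper's implementation: the cross-entropy labeling rule, the `loss_threshold` value, and the choice of a random-forest monitor operating on input features are all assumptions made here for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def label_by_loss(base_model_probs, y_true, loss_threshold=0.5):
    # Per-sample cross-entropy loss of the pre-trained base model:
    # -log of the probability it assigned to the true class.
    eps = 1e-12
    true_class_probs = base_model_probs[np.arange(len(y_true)), y_true]
    losses = -np.log(true_class_probs + eps)
    # Monitoring label: 1 = accept (low loss), 0 = reject (high loss).
    return (losses <= loss_threshold).astype(int)


def train_monitor(features, base_model_probs, y_true, loss_threshold=0.5):
    # Train a monitoring model that predicts, from the input features,
    # whether the base model's classification should be accepted.
    monitor_labels = label_by_loss(base_model_probs, y_true, loss_threshold)
    monitor = RandomForestClassifier(n_estimators=100, random_state=0)
    monitor.fit(features, monitor_labels)
    return monitor


# At inference time, a prediction is rejected whenever the monitor outputs 0,
# even if the base model's own output probability is high, i.e. even when the
# base model is overconfident.
```

In this sketch the rejection decision no longer depends on the base model's output probability alone, which is the gap the abstract highlights for threshold-based rejection.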
Details
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022) |
| Editors | Gabriel Pedroza, Xin Cynthia Chen, José Hernández-Orallo, Xiaowei Huang, Huáscar Espinoza, Richard Mallah, John McDermid, Mauricio Castillo-Effen |
| Number of pages | 9 |
| Publication status | Published - 2022 |
| Peer-reviewed | Yes |
Publication series
| Series | CEUR Workshop Proceedings |
| --- | --- |
| Volume | 3215 |
| ISSN | 1613-0073 |
Conference
| Title | 2022 Workshop on Artificial Intelligence Safety, AISafety 2022 |
| --- | --- |
| Duration | 24 - 25 July 2022 |
| City | Vienna |
| Country | Austria |
Keywords
- AI Safety, Classification, Neural Networks, Rejection, Representation Learning, Robustness