Improvement of Rejection for AI Safety through Loss-Based Monitoring

Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › peer-review

Abstract

There are numerous promising applications for AI that are safety-critical, e.g., computer vision for automated driving. Such applications require safety measures for the underlying algorithms. Typically, the validity of a classification is judged solely on the output probability of a network. The literature suggests that rejecting classifications whose output probability falls below an a priori set threshold reduces the error rate of the network. This inherently fails to catch errors in which the output probability of a wrong classification exceeds the threshold. However, these are the most critical errors, since the system is erroneously overconfident. To close this gap, we present how the rejection idea can be improved by performing loss-based rejection. Our approach takes data as well as the pre-trained base model as input and yields a monitoring model as output. To train the monitoring model, the data samples are labeled based on the loss produced by the base model. In this way, overconfident misclassifications can be avoided and the overall error rate reduced. For evaluation, we applied the approach to two datasets, one of which is the German Traffic Sign Recognition Benchmark (GTSRB), which is used to train safety-critical traffic sign classifiers. The experiments show that the approach improves the error rate by up to an order of magnitude, with a portion of inputs being rejected as a trade-off.
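The abstract describes the approach only at a high level; the following is a minimal sketch of how loss-based labeling and rejection could look in PyTorch. The helper names (label_by_loss, should_reject), the fixed loss_threshold, and the binary accept/reject labeling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of loss-based rejection monitoring (illustrative only).
# Assumes a PyTorch classification base model; the helper names and the
# fixed loss_threshold are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def label_by_loss(base_model, dataset, loss_threshold):
    """Label each sample 0 ('accept') or 1 ('reject') according to the
    per-sample cross-entropy loss of the pre-trained base model."""
    base_model.eval()
    labels = []
    with torch.no_grad():
        for x, y in dataset:
            logits = base_model(x.unsqueeze(0))
            target = torch.as_tensor(y).reshape(1)
            loss = F.cross_entropy(logits, target)
            labels.append(int(loss.item() > loss_threshold))
    return labels

def should_reject(monitor, x):
    """Reject the base model's prediction whenever the trained
    monitoring model predicts the 'reject' class for the input."""
    monitor.eval()
    with torch.no_grad():
        return monitor(x.unsqueeze(0)).argmax(dim=1).item() == 1
```

The monitoring model itself would then be trained as an ordinary binary classifier on the (input, label) pairs produced by label_by_loss, and consulted at inference time via should_reject.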

Details

Original language: English
Title of host publication: Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022)
Editors: Gabriel Pedroza, Xin Cynthia Chen, José Hernández-Orallo, Xiaowei Huang, Huáscar Espinoza, Richard Mallah, John McDermid, Mauricio Castillo-Effen
Number of pages: 9
Publication status: Published - 2022
Peer-reviewed: Yes

Publication series

Series: CEUR Workshop Proceedings
Volume: 3215
ISSN: 1613-0073

Conference

Title: 2022 Workshop on Artificial Intelligence Safety, AISafety 2022
Duration: 24 - 25 July 2022
City: Vienna
Country: Austria

Keywords

  • AI Safety, Classification, Neural Networks, Rejection, Representation Learning, Robustness