InFlow: Robust outlier detection utilizing Normalizing Flows

Research output: Preprint/Documentation/Report › Preprint

Contributors

  • Nishant Kumar, Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Author)
  • Pia Hanfeld, Center for Advanced Systems Understanding (CASUS) (Author)
  • Michael Hecht, Center for Advanced Systems Understanding (CASUS) (Author)
  • Michael Bussmann, Center for Advanced Systems Understanding (CASUS) (Author)
  • Stefan Gumhold, Chair of Computer Graphics and Visualisation (Author)
  • Nico Hoffmann, Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Author)

Abstract

Normalizing flows are prominent deep generative models that provide tractable probability distributions and efficient density estimation. However, they are known to fail at detecting Out-of-Distribution (OOD) inputs because they directly encode the local features of the input representations in their latent space. In this paper, we address this overconfidence issue of normalizing flows by demonstrating that flows, if extended by an attention mechanism, can reliably detect outliers, including adversarial attacks. Our approach does not require outlier data for training, and we showcase the efficiency of our method for OOD detection by reporting state-of-the-art performance in diverse experimental settings. Code available at https://github.com/ComputationalRadiationPhysics/InFlow.
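The abstract rests on the standard flow-based OOD recipe: fit an invertible map to in-distribution data, score inputs by their log-likelihood under the change-of-variables formula, and flag low-likelihood inputs as outliers. The sketch below illustrates only that generic recipe with a trivial per-dimension affine "flow" in NumPy; it is not the InFlow model (the paper's attention mechanism and architecture are not reproduced here), and all function names are illustrative.

```python
import numpy as np

def fit_affine_flow(x):
    # Fit a trivial affine flow z = (x - mu) / sigma, per dimension.
    mu = x.mean(axis=0)
    sigma = x.std(axis=0) + 1e-8
    return mu, sigma

def log_likelihood(x, mu, sigma):
    # Change of variables: log p(x) = log N(z; 0, I) + log |det dz/dx|
    z = (x - mu) / sigma
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
    log_det = -np.log(sigma).sum()  # log-det Jacobian of the affine map
    return log_base + log_det

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(500, 2))  # toy in-distribution data
mu, sigma = fit_affine_flow(inliers)

# Threshold at a low percentile of the training log-likelihoods;
# no outlier data is needed to set it.
threshold = np.percentile(log_likelihood(inliers, mu, sigma), 1)

test_points = np.array([[0.0, 0.0], [8.0, 8.0]])
scores = log_likelihood(test_points, mu, sigma)
is_ood = scores < threshold
print(is_ood)  # the far-away point is flagged as OOD
```

The paper's contribution is precisely that this naive likelihood score is overconfident for deep flows on image data, which the attention extension corrects; the sketch only fixes the scoring convention being discussed.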

Details

Original language: English
Number of pages: 24
Publication status: Published - 10 Jun 2021

External IDs

ORCID /0000-0001-6684-2890/work/154742384

Keywords

  • cs.LG, cs.AI, cs.CR