On the pitfalls of Batch Normalization for end-to-end video learning: A study on surgical workflow analysis

Research output: Contribution to journal › Research article › Contributed › Peer-review

Abstract

Batch Normalization's (BN) unique property of depending on other samples in a batch is known to cause problems in several tasks, including sequence modeling. Yet, BN-related issues are hardly studied for long video understanding, despite the ubiquitous use of BN in CNNs (Convolutional Neural Networks) for feature extraction. Especially in surgical workflow analysis, where the lack of pretrained feature extractors has led to complex, multi-stage training pipelines, limited awareness of BN issues may have hidden the benefits of training CNNs and temporal models end to end. In this paper, we analyze pitfalls of BN in video learning, including issues specific to online tasks such as a 'cheating' effect in anticipation. We observe that BN's properties create major obstacles for end-to-end learning. However, using BN-free backbones, even simple CNN–LSTMs beat the state of the art on three surgical workflow benchmarks by utilizing adequate end-to-end training strategies that maximize temporal context. We conclude that awareness of BN's pitfalls is crucial for effective end-to-end learning in surgical tasks. By reproducing results on natural-video datasets, we hope our insights will benefit other areas of video learning as well. Code is available at: https://gitlab.com/nct_tso_public/pitfalls_bn.
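The batch dependence the abstract refers to can be illustrated with a minimal PyTorch sketch (not the paper's code): when a 2D CNN with BN extracts per-frame features and the frames of one clip are stacked along the batch axis, training-mode BN normalizes each frame with statistics computed over all frames in the batch, so earlier frames are influenced by later ones. The layer choice and tensor shapes below are illustrative assumptions.

```python
# Hedged sketch: shows how training-mode BatchNorm couples frames that share a batch.
import torch
import torch.nn as nn

torch.manual_seed(0)

bn = nn.BatchNorm2d(num_features=3)
bn.train()  # training mode: statistics come from the current batch, not running stats

# Pretend a clip of T frames is flattened into the batch dimension,
# as is common when a frame-wise CNN is trained end to end with a temporal model.
T, C, H, W = 8, 3, 4, 4
frames = torch.randn(T, C, H, W)

out_full = bn(frames)        # frame 0 normalized with statistics of all T frames
out_prefix = bn(frames[:1])  # frame 0 normalized using only its own statistics

# The same frame yields different features depending on which (future) frames
# share the batch -- the kind of leakage that matters for online tasks such as anticipation.
print(torch.allclose(out_full[0], out_prefix[0]))  # False in general
```

In an online or anticipation setting, future frames are not available at inference time, so this train-time coupling is one concrete mechanism behind the 'cheating' effect discussed in the paper.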

Details

Original language: English
Article number: 103126
Journal: Medical Image Analysis
Volume: 94
Publication status: Published - May 2024
Peer-reviewed: Yes

External IDs

PubMed: 38452578
ORCID: /0000-0002-4590-1908/work/163294134

Keywords

  • Anticipation, Batch normalization, BatchNorm, End-to-end, Surgical phase, Surgical workflow, Video learning, Neural Networks, Computer, Humans, Workflow