Inverse Dirichlet weighting enables reliable training of physics informed neural networks

Research output: Contribution to journal › Research article › Contributed › peer-review

Abstract

We characterize and remedy a failure mode that may arise from multi-scale dynamics with scale imbalances during training of deep neural networks, such as physics-informed neural networks (PINNs). PINNs are popular machine-learning templates that allow for seamless integration of physical equation models with data. Their training amounts to solving an optimization problem over a weighted sum of data-fidelity and equation-fidelity objectives. Conflicts between objectives can arise from scale imbalances, heteroscedasticity in the data, stiffness of the physical equation, or from catastrophic interference during sequential training. We explain the training pathology arising from these conflicts and propose a simple yet effective inverse Dirichlet weighting strategy to alleviate the issue. We compare with Sobolev training of neural networks, providing the baseline of analytically epsilon-optimal training. We demonstrate the effectiveness of inverse Dirichlet weighting in various applications, including a multi-scale model of active turbulence, where we show orders of magnitude improvement in accuracy and convergence over conventional PINN training. For inverse modeling using sequential training, we find that inverse Dirichlet weighting protects a PINN against catastrophic forgetting.
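The abstract frames PINN training as minimizing a weighted sum of data-fidelity and equation-fidelity objectives, with inverse Dirichlet weighting rebalancing the terms based on gradient statistics. The following is a minimal, hypothetical PyTorch sketch of such a scheme, assuming each term's weight is set inversely proportional to the standard deviation of its parameter gradient; the toy objectives and helper names (`loss_terms`, `grad_std`) are illustrative assumptions, not taken from the paper, and the actual method may differ in detail.

```python
import math
import torch

torch.manual_seed(0)

# Toy network; the two objectives below stand in for data- and equation-fidelity terms.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
x = torch.linspace(0.0, 1.0, 64).unsqueeze(1)
y = torch.sin(2.0 * math.pi * x)

def loss_terms():
    pred = net(x)
    data_loss = ((pred - y) ** 2).mean()      # data-fidelity objective
    residual_loss = 1e4 * (pred ** 2).mean()  # stand-in for a badly scaled equation residual
    return [data_loss, residual_loss]

def grad_std(term):
    # Standard deviation of this term's gradient over all network parameters.
    grads = torch.autograd.grad(term, list(net.parameters()), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads]).std()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
weights = [1.0, 1.0]

for step in range(1000):
    terms = loss_terms()
    if step % 10 == 0:
        # Inverse-Dirichlet-style weights: lambda_k = max_j std(grad L_j) / std(grad L_k),
        # so terms with small gradient variance are up-weighted.
        stds = [grad_std(t) for t in terms]
        s_max = max(stds)
        new_w = [(s_max / (s + 1e-12)).item() for s in stds]
        # Moving-average update of the weights (a stabilising choice, not prescribed here).
        weights = [0.5 * w + 0.5 * nw for w, nw in zip(weights, new_w)]
    loss = sum(w * t for w, t in zip(weights, terms))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The periodic re-estimation of the weights (every 10 steps here) is a common practical choice to keep the overhead of the extra gradient evaluations small; the update interval and smoothing factor are free parameters of this sketch.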

Details

Original language: English
Article number: 015026
Number of pages: 22
Journal: Machine Learning: Science and Technology
Volume: 3
Issue number: 1
Publication status: Published - 15 Feb 2022
Peer-reviewed: Yes

External IDs

Unpaywall: 10.1088/2632-2153/ac3712
Scopus: 85126707714
ORCID: /0000-0003-4414-4340/work/142252132