A novel self-adversarial training scheme for enhanced robustness of inelastic constitutive descriptions by neural networks

Research output: Contribution to journal › Research article › Contributed › peer-review

Abstract

This contribution presents a novel training algorithm to increase the robustness of recurrent neural networks (RNNs) used as constitutive descriptions that are subjected to perturbations induced by their own prior outputs. We propose to extend the data obtained from Numerical Material Tests (NMTs) on Representative Volume Elements (RVEs) by generating adversarial examples based on the prediction errors. This method introduces new hyperparameters, such as the training length before re-evaluating the errors and the fraction of adversarial examples contained in the dataset. Therefore, numerical investigations of an RVE, considering two different sets of materials with elasto-plastic behavior, are conducted, and a set of hyperparameters for the Self-Adversarial Training that results in high prediction robustness is identified. The capabilities and limitations of a neural-network-based constitutive description with enhanced robustness are evaluated and discussed for a numerical simulation on the structural level.
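The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a scalar autoregressive model stands in for the RNN, and all function names (`rollout_error`, `make_adversarial`, `self_adversarial_training`) are hypothetical. It does show the two hyperparameters the abstract names: the training length before re-evaluating the errors (`reeval_every`) and the fraction of adversarial examples in the dataset (`adv_fraction`).

```python
import numpy as np

def rollout_error(w, seq):
    """Mean squared error of an autoregressive rollout x_{t+1} = w * x_t,
    where each prediction is fed back as the next input -- the
    self-induced perturbation the training scheme targets."""
    x, err = seq[0], 0.0
    for target in seq[1:]:
        x = w * x                      # feed the model's own output back in
        err += (x - target) ** 2
    return err / (len(seq) - 1)

def make_adversarial(w, seq):
    """Build an adversarial example: inputs come from the model's own
    (possibly erroneous) rollout, targets stay at the ground-truth values."""
    rolled = [seq[0]]
    for _ in range(len(seq) - 2):
        rolled.append(w * rolled[-1])
    return np.array(rolled), np.array(seq[1:])

def self_adversarial_training(data, epochs=60, reeval_every=20,
                              adv_fraction=0.5, lr=0.05, w=0.0):
    """Every `reeval_every` epochs, re-evaluate the rollout errors on the
    original (NMT-like) sequences and mix the worst-predicted ones back
    into the training set as adversarial examples."""
    dataset = [(np.array(s[:-1]), np.array(s[1:])) for s in data]
    for epoch in range(epochs):
        for inp, tgt in dataset:       # one least-squares gradient step each
            grad = 2.0 * np.mean((w * inp - tgt) * inp)
            w -= lr * grad
        if (epoch + 1) % reeval_every == 0:
            errors = [rollout_error(w, s) for s in data]
            n_adv = int(adv_fraction * len(data))
            worst = np.argsort(errors)[::-1][:n_adv]
            dataset = ([(np.array(s[:-1]), np.array(s[1:])) for s in data]
                       + [make_adversarial(w, data[i]) for i in worst])
    return w
```

As a sanity check, training on sequences generated by the true dynamics x_{t+1} = 0.8 x_t recovers a coefficient close to 0.8, and the adversarial examples leave that fixed point unchanged because the rollout then matches the ground truth.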

Details

Original language: English
Article number: 106774
Journal: Computers and Structures
Volume: 265
Issue number: 265
Early online date: 1 Mar 2022
Publication status: Published - Jun 2022
Peer-reviewed: Yes

External IDs

Scopus 85125936052
WOS 000793229200006
Mendeley d2c2498b-84c1-3cc3-b15d-cac8193b2cb9

Keywords

  • Data-driven modeling, Machine Learning, Multiscale modeling, Neural network constitutive description, Recurrent neural network, Self-Adversarial Training
