A novel self-adversarial training scheme for enhanced robustness of inelastic constitutive descriptions by neural networks

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

This contribution presents a novel training algorithm to increase the robustness of recurrent neural networks (RNN) used as constitutive descriptions that are subjected to perturbations induced by their own prior output. We propose to extend the data obtained from Numerical Material Tests (NMT) on Representative Volume Elements (RVE) by generating adversarial examples based on the prediction errors. This method introduces new hyperparameters, such as the training length before reevaluating the errors and the fraction of adversarial examples contained in the dataset. Therefore, numerical investigations of an RVE, considering two different sets of materials with elasto-plastic behavior, are conducted, and a set of hyperparameters for the Self-Adversarial Training that results in high prediction robustness is identified. The capabilities and limitations of applying a neural network-based constitutive description with enhanced robustness are evaluated and discussed for a numerical simulation at the structural level.
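The abstract describes the training scheme only at a high level. The following is a minimal sketch, assuming a PyTorch GRU surrogate and synthetic strain/stress paths, of how such a self-adversarial loop could be organized: train for a fixed number of epochs, reevaluate the per-sequence prediction error, build perturbed copies of the worst-predicted paths, and mix a chosen fraction of them back into the dataset. All names, the perturbation rule, and the data are illustrative assumptions, not the published implementation.

```python
# Hedged sketch of a self-adversarial training loop for an RNN constitutive model.
# Every identifier, the perturbation rule, and the toy data are assumptions made
# for illustration; they do not reproduce the authors' implementation.
import torch
import torch.nn as nn


class RNNConstitutiveModel(nn.Module):
    """Maps a strain-increment sequence to a stress sequence (toy dimensions)."""

    def __init__(self, n_in=6, n_hidden=32, n_out=6):
        super().__init__()
        self.rnn = nn.GRU(n_in, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)


def per_sequence_error(model, strains, stresses):
    """Mean squared prediction error per sequence, used to rank which load
    paths receive adversarial copies."""
    with torch.no_grad():
        pred = model(strains)
        return ((pred - stresses) ** 2).mean(dim=(1, 2))


def self_adversarial_training(model, strains, stresses,
                              n_outer=5, n_epochs=50, adv_fraction=0.2, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    train_x, train_y = strains.clone(), stresses.clone()
    n_adv = int(adv_fraction * len(strains))
    for _ in range(n_outer):              # reevaluate errors after every training block
        for _ in range(n_epochs):         # ordinary supervised training on current data
            opt.zero_grad()
            loss = loss_fn(model(train_x), train_y)
            loss.backward()
            opt.step()
        # Perturb the worst-predicted paths proportionally to their error magnitude
        # (one possible, assumed, way to emulate errors fed back from prior outputs).
        err = per_sequence_error(model, strains, stresses)
        worst = torch.topk(err, n_adv).indices
        scale = err[worst].sqrt().view(-1, 1, 1)
        adv_x = strains[worst] + 0.1 * scale * torch.randn_like(strains[worst])
        train_x = torch.cat([strains, adv_x])
        train_y = torch.cat([stresses, stresses[worst]])  # targets stay the NMT stresses
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(64, 20, 6)   # 64 synthetic strain paths, 20 steps, 6 components
    y = torch.randn(64, 20, 6)   # matching synthetic stress paths
    self_adversarial_training(RNNConstitutiveModel(), x, y)
```

In this sketch, `n_epochs` plays the role of the training length before reevaluating the errors and `adv_fraction` the fraction of adversarial examples in the dataset, the two hyperparameters named in the abstract; how the perturbations are actually constructed in the paper is not specified here.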

Details

Original language: English
Article number: 106774
Journal: Computers and Structures
Volume: 265
Issue number: 265
Early online date: 1 March 2022
Publication status: Published - June 2022
Peer-review status: Yes

External IDs

Scopus 85125936052
WOS 000793229200006
Mendeley d2c2498b-84c1-3cc3-b15d-cac8193b2cb9

Keywords

  • Data-driven modeling, Machine Learning, Multiscale modeling, Neural network constitutive description, Recurrent neural network, Self-Adversarial Training
