A novel self-adversarial training scheme for enhanced robustness of inelastic constitutive descriptions by neural networks
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
This contribution presents a novel training algorithm that increases the robustness of recurrent neural networks (RNNs) used as constitutive descriptions subjected to perturbations induced by their own prior outputs. We propose to extend the data obtained from Numerical Material Tests (NMTs) on Representative Volume Elements (RVEs) by generating adversarial examples based on the prediction errors. This method introduces new hyperparameters, such as the training length before the errors are reevaluated and the fraction of adversarial examples contained in the dataset. Therefore, numerical investigations of an RVE, considering two different sets of materials with elasto-plastic behavior, are conducted, and a set of hyperparameters for the Self-Adversarial Training that results in high prediction robustness is identified. The capabilities and limitations of applying a neural network based constitutive description with enhanced robustness are evaluated and discussed for a numerical simulation at the structural level.
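The abstract only outlines the training scheme; a minimal, purely illustrative sketch of such a self-adversarial loop is given below. A toy linear regressor stands in for the RNN constitutive model, the labels of the adversarial examples come from the known reference response, and all names, the perturbation rule, and the cycle counts are assumptions for illustration, not the authors' implementation.

```python
import random

random.seed(0)

# Toy "NMT" dataset: strain-like inputs x, stress-like targets y = 2x + noise.
x = [random.gauss(0, 1) for _ in range(200)]
y = [2.0 * xi + 0.05 * random.gauss(0, 1) for xi in x]

w = 0.0                 # model parameter (stand-in for the network weights)
epochs_per_cycle = 10   # hyperparameter: training length before reevaluating errors
adv_fraction = 0.2      # hyperparameter: fraction of adversarial examples in the dataset
eps = 0.05              # assumed perturbation magnitude

x_train, y_train = list(x), list(y)
for cycle in range(5):
    # (1) Train for a fixed number of epochs (plain gradient descent here).
    for _ in range(epochs_per_cycle):
        grad = sum(2.0 * xi * (w * xi - yi)
                   for xi, yi in zip(x_train, y_train)) / len(x_train)
        w -= 0.1 * grad

    # (2) Reevaluate the prediction errors on the original data.
    err = [abs(w * xi - yi) for xi, yi in zip(x, y)]

    # (3) Build adversarial examples by perturbing the worst-predicted samples,
    #     emulating the network being fed its own erroneous prior output.
    n_adv = int(adv_fraction * len(x))
    worst = sorted(range(len(x)), key=lambda i: err[i])[-n_adv:]
    x_adv = [x[i] + eps * (1.0 if random.random() < 0.5 else -1.0) for i in worst]
    y_adv = [2.0 * xa for xa in x_adv]  # labels from the reference (RVE) response

    # (4) Mix the adversarial examples into the training set for the next cycle.
    x_train = x + x_adv
    y_train = y + y_adv

print(round(w, 3))  # learned slope, close to the true value 2
```

The two tunable quantities named in the abstract appear here as `epochs_per_cycle` and `adv_fraction`; in the paper these are studied for an RNN trained on RVE data rather than for this toy regression.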
Details
| Original language | English |
| --- | --- |
| Article number | 106774 |
| Journal | Computers and Structures |
| Volume | 265 |
| Issue number | 265 |
| Early online date | 1 Mar 2022 |
| Publication status | Published - Jun 2022 |
| Peer-reviewed | Yes |
External IDs
| Scopus | 85125936052 |
| --- | --- |
| WOS | 000793229200006 |
| Mendeley | d2c2498b-84c1-3cc3-b15d-cac8193b2cb9 |
Keywords
- Data-driven modeling, Machine Learning, Multiscale modeling, Neural network constitutive description, Recurrent neural network, Self-Adversarial Training