Multi-GPU Approach for Training of Graph ML Models on large CFD Meshes

Research output: Contribution to book/conference proceedings · Chapter in book/anthology/report · Contributed · peer-reviewed


Numerical solvers are an important part of many design tool chains. However, accurate simulations such as computational fluid dynamics (CFD) are time- and resource-consuming, which is why surrogate models are employed to speed up the solution process. Machine-learning-based surrogate models, on the other hand, predict approximate solutions quickly but often lack accuracy. The focus here is therefore the predictor in a predictor-corrector approach, where the surrogate model predicts a flow field and the numerical solver corrects it. This paper scales a state-of-the-art surrogate model from the domain of graph-based machine learning to industry-relevant mesh sizes of a numerical flow simulation. The approach partitions the flow domain, distributes the partitions across multiple GPUs, and provides halo exchange between these partitions during training. The graph neural network operates directly on the numerical mesh and preserves complex geometries as well as all other properties of the mesh. The proposed surrogate model is evaluated on a three-dimensional turbomachinery setup and compared to a traditionally trained distributed model. The results show that the traditional approach produces superior predictions and outperforms the proposed surrogate model. Possible explanations, improvements, and future directions are outlined.
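The partition-and-halo idea from the abstract can be illustrated with a minimal sketch. The function names, the toy round-robin ownership rule, and the dictionary-based "exchange" below are illustrative assumptions, not the paper's actual scheme (a production setup would typically use a graph partitioner such as METIS and NCCL/MPI communication between GPUs):

```python
import numpy as np

def partition_with_halos(edges, num_nodes, num_parts):
    """Assign mesh nodes to partitions and compute each partition's halo set.

    A halo (ghost) node is a neighbour, via any mesh edge, of an owned node
    that is owned by another partition; its features must be fetched before
    each message-passing layer of the GNN.
    """
    # Toy round-robin ownership; a real setup would use a graph partitioner.
    owner = np.arange(num_nodes) % num_parts
    halos = [set() for _ in range(num_parts)]
    for u, v in edges:
        if owner[u] != owner[v]:
            # Each side of a cut edge needs the other side as a halo node.
            halos[owner[u]].add(v)
            halos[owner[v]].add(u)
    return owner, [sorted(h) for h in halos]

def halo_exchange(features, owner, halos):
    """Return, per partition, the halo-node features copied from their
    owning partition (an in-process stand-in for an MPI/NCCL exchange)."""
    return [{n: features[n] for n in halo} for halo in halos]

# Tiny chain mesh 0-1-2-3 split across two "GPUs".
owner, halos = partition_with_halos([(0, 1), (1, 2), (2, 3)], 4, 2)
ghosts = halo_exchange(np.arange(4) * 10.0, owner, halos)
```

After the exchange, each partition holds local copies of the features it needs from cut-edge neighbours, so a message-passing layer can run on each partition independently.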


Original language: English
Title of host publication: AIAA SCITECH 2023 Forum
Number of pages: 1
Publication status: Published - 19 Jan 2023

External IDs

  • Mendeley: 881f696e-e61d-3360-a61e-847809c19dde
  • ORCID: /0000-0002-8817-750X/work/142234978


Research priority areas of TU Dresden


  • computational simulation, numerical solvers, surrogate models