Multi-GPU Approach for Training of Graph ML Models on large CFD Meshes
Research output: chapter in conference proceedings (contributed, peer-reviewed)
View Video Presentation: https://doi.org/10.2514/6.2023-1203.vid

Mesh-based numerical solvers are an important part of many design tool chains. However, accurate simulations such as computational fluid dynamics are time- and resource-consuming, which is why surrogate models are employed to speed up the solution process. Machine-learning-based surrogate models, on the other hand, are fast at predicting approximate solutions but often lack accuracy. The focus here is therefore the development of the predictor in a predictor-corrector approach, where the surrogate model predicts a flow field and the numerical solver corrects it. This paper scales a state-of-the-art surrogate model from the domain of graph-based machine learning to industry-relevant mesh sizes of a numerical flow simulation. The approach partitions the flow domain, distributes the partitions across multiple GPUs, and provides halo exchange between these partitions during training. The graph neural network operates directly on the numerical mesh and preserves complex geometries as well as all other properties of the mesh. The proposed surrogate model is evaluated on a three-dimensional turbomachinery setup and compared to a traditionally trained distributed model. The results show that the traditional approach produces superior predictions and outperforms the proposed surrogate model. Possible explanations, improvements, and future directions are outlined.
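The partition-and-halo-exchange scheme described in the abstract can be illustrated with a toy example. The function names, the contiguous block partitioning, and the mean aggregation below are illustrative assumptions, not the paper's actual implementation (which partitions large CFD meshes across GPUs):

```python
# Illustrative sketch (not the paper's code): split a mesh graph into
# partitions, find the "halo" nodes each partition needs from its
# neighbours, copy their features before a message-passing step, and
# aggregate locally.

def partition_nodes(num_nodes, num_parts):
    """Contiguous block partition; real setups would use e.g. METIS."""
    size = (num_nodes + num_parts - 1) // num_parts
    return [list(range(i, min(i + size, num_nodes)))
            for i in range(0, num_nodes, size)]

def halo_nodes(edges, own):
    """Nodes outside `own` that share an edge with a node in `own`."""
    own_set = set(own)
    halo = set()
    for u, v in edges:
        if u in own_set and v not in own_set:
            halo.add(v)
        if v in own_set and u not in own_set:
            halo.add(u)
    return sorted(halo)

def aggregate(edges, local_feats, own):
    """One mean-aggregation (message-passing) step over `own` nodes.
    `local_feats` must cover the own nodes plus their halo."""
    out = {}
    for n in own:
        neigh = ([v for u, v in edges if u == n]
                 + [u for u, v in edges if v == n])
        out[n] = (sum(local_feats[m] for m in neigh) / len(neigh)
                  if neigh else local_feats[n])
    return out

# Toy mesh graph: a path 0-1-2-3, node features equal to node ids.
edges = [(0, 1), (1, 2), (2, 3)]
feats = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
parts = partition_nodes(4, 2)        # [[0, 1], [2, 3]]
halo0 = halo_nodes(edges, parts[0])  # partition 0 needs node 2
# "Halo exchange": copy remote halo features into the local view.
local0 = {n: feats[n] for n in parts[0] + halo0}
out0 = aggregate(edges, local0, parts[0])
```

In a real multi-GPU setting, the dictionary copy above would be a device-to-device communication step repeated before every message-passing layer, which is the overhead the paper's approach must amortize.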
| Title of host publication | AIAA SCITECH 2023 Forum |
| Number of pages | 1 |
| Publication status | Published - 19 Jan 2023 |
Research priority areas of TU Dresden
- computational simulation, numerical solvers, surrogate models