Image Based Landing Site Detection on Planetary Surfaces by Vision Transformers and Nested Convolutional Neural Networks

Publication: Contribution to book/conference proceedings · Conference paper · Contributed · Peer-reviewed

Abstract

Landing site detection in future autonomous space missions is challenging, especially for hovering exploration robots in irregular, unstructured planetary environments on asteroids and comets. This paper describes an AI-based landing site detection with Convolutional Neural Networks (CNNs) in the field of view of the spacecraft's camera, using grayscale 2D images and corresponding 3D distance measurements (depth images). For the training and validation of the CNNs, a high-fidelity dataset generator produced a dedicated dataset of high-resolution 2D image data, 3D point cloud data, and 3D distance data inspired by the surface of the comet 67P/Churyumov-Gerasimenko. To test the suitability of CNNs for supporting autonomous navigation tasks on unstructured and irregular planetary surfaces, a CNN following an encoder-decoder structure, the Deeplabv3+ with the backbone net YOLO-NAS, was applied. Furthermore, the Vision Transformer ScalableViT was combined with a nested CNN as a backbone net of the Deeplabv3+ to process high-resolution 2D and 3D distance data. This paper also presents the results of investigations into the Deeplabv3+'s inference times for the object classification CNN YOLO-NAS, the ScalableViT, and the combination of the ScalableViT and the nested YOLO-NAS as backbone nets.
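
To make the described architecture concrete, the sketch below is a minimal, illustrative PyTorch approximation, not the authors' code, of a Deeplabv3+-style encoder-decoder that fuses a grayscale image with a depth image, with a tiny CNN standing in for the YOLO-NAS or ScalableViT backbone. All class names, channel sizes, the early-fusion choice, and the two-class (safe/unsafe) output are assumptions for illustration.

```python
# Minimal sketch (assumption: NOT the authors' code) of a Deeplabv3+-style
# encoder-decoder for landing-site segmentation from a grayscale image plus
# a depth image. A tiny CNN stands in for the YOLO-NAS / ScalableViT backbone.
import time

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBackbone(nn.Module):
    """Stand-in feature extractor; the paper uses YOLO-NAS or ScalableViT here."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.stem = nn.Sequential(  # 1/4 resolution, low-level features
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.deep = nn.Sequential(  # 1/16 resolution, high-level features
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        low = self.stem(x)
        high = self.deep(low)
        return low, high


class DeeplabV3PlusSketch(nn.Module):
    """Encoder-decoder with the Deeplabv3+ skip from low-level features."""

    def __init__(self, num_classes: int = 2):  # assumed: safe / unsafe site
        super().__init__()
        self.backbone = TinyBackbone(in_ch=2)   # grayscale + depth, fused early
        self.aspp = nn.Sequential(               # stand-in for the full ASPP head
            nn.Conv2d(256, 128, 1), nn.ReLU(),
        )
        self.reduce_low = nn.Conv2d(64, 48, 1)   # project low-level skip features
        self.decoder = nn.Sequential(
            nn.Conv2d(128 + 48, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, gray, depth):
        x = torch.cat([gray, depth], dim=1)       # stack 2D image and depth image
        low, high = self.backbone(x)
        h = self.aspp(high)
        h = F.interpolate(h, size=low.shape[-2:], mode="bilinear", align_corners=False)
        h = torch.cat([h, self.reduce_low(low)], dim=1)
        h = self.decoder(h)
        return F.interpolate(h, size=gray.shape[-2:], mode="bilinear", align_corners=False)


# Dummy batch: one 256x256 grayscale image with its matching depth image.
model = DeeplabV3PlusSketch().eval()
gray = torch.randn(1, 1, 256, 256)
depth = torch.randn(1, 1, 256, 256)

with torch.no_grad():                             # crude inference-time probe
    t0 = time.perf_counter()
    out = model(gray, depth)
    print(out.shape, f"{(time.perf_counter() - t0) * 1e3:.1f} ms")
```

Timing a single forward pass under torch.no_grad(), as at the end of the sketch, mirrors the kind of inference-time comparison the paper reports across the different backbone nets.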

Details

Original language: English
Title: AIAA SciTech Forum and Exposition, 2024
Publisher: American Institute of Aeronautics and Astronautics Inc. (AIAA)
ISBN (Print): 9781624107115
Publication status: Published - 2024
Peer-reviewed: Yes

Conference

Title: AIAA SciTech Forum and Exposition, 2024
Duration: 8–12 January 2024
City: Orlando
Country: United States
