Image Based Landing Site Detection on Planetary Surfaces by Vision Transformers and Nested Convolutional Neural Networks

Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › Peer-reviewed

Abstract

Landing site detection for future autonomous space missions is challenging, especially for hovering exploration robots in irregular, unstructured planetary environments on asteroids and comets. This paper describes AI-based landing site detection with Convolutional Neural Networks (CNNs) in the field of view of the spacecraft's camera, using grayscale 2D images and corresponding 3D distance measurements (depth images). For the training and validation of the CNNs, a high-fidelity dataset generator produced a dedicated dataset of high-resolution 2D image data, 3D point cloud data, and 3D distance data inspired by the surface of the comet 67P/Churyumov-Gerasimenko. To test the CNNs' suitability for supporting autonomous navigation tasks on unstructured, irregular planetary surfaces, a CNN with an encoder-decoder structure, DeepLabv3+ with the backbone network YOLO-NAS, was applied. In addition, the Vision Transformer ScalableViT was combined with a nested CNN as the backbone network of DeepLabv3+ to process high-resolution 2D and 3D distance data. The paper also reports the inference times of DeepLabv3+ with the object classification CNN YOLO-NAS, with ScalableViT, and with the combination of ScalableViT and the nested YOLO-NAS as backbone networks.
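The sketch below (PyTorch) illustrates the dual-input encoder-decoder idea described in the abstract. It is not the authors' implementation: the paper uses DeepLabv3+ with YOLO-NAS and ScalableViT backbones, whereas here a small CNN stem and a standard TransformerEncoder stand in for those components, and the decoder is reduced to a simple upsampling head. All module names, layer sizes, the early fusion of the grayscale and depth channels, and the two-class (safe/unsafe) output are assumptions made for illustration only.

# Minimal sketch of a dual-input (grayscale + depth) segmentation model with a
# hybrid CNN/transformer backbone and a DeepLabv3+-style upsampling decoder.
# Everything below is an illustrative stand-in, not the architecture from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridBackbone(nn.Module):
    """CNN stem followed by a transformer encoder over the fused 2D/depth input."""

    def __init__(self, embed_dim: int = 128, num_layers: int = 2):
        super().__init__()
        # Fuse the 1-channel grayscale image and 1-channel depth image early.
        self.stem = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, dim_feedforward=256, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, image: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = self.stem(torch.cat([image, depth], dim=1))   # (B, C, H/4, W/4)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)                 # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class LandingSiteSegmenter(nn.Module):
    """Backbone plus a lightweight decoder producing per-pixel landing-site logits."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = HybridBackbone()
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, image: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image, depth)
        logits = self.decoder(feats)
        # Upsample back to input resolution, as in DeepLabv3+-style decoders.
        return F.interpolate(
            logits, size=image.shape[-2:], mode="bilinear", align_corners=False
        )


if __name__ == "__main__":
    model = LandingSiteSegmenter()
    gray = torch.randn(1, 1, 256, 256)    # grayscale 2D image
    depth = torch.randn(1, 1, 256, 256)   # corresponding 3D distance (depth) image
    print(model(gray, depth).shape)       # torch.Size([1, 2, 256, 256])

Early fusion of the grayscale and depth channels is only one plausible way to combine the 2D and 3D distance data; the abstract does not specify how the two inputs are actually merged.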

Details

Original language: English
Title of host publication: AIAA SciTech Forum and Exposition, 2024
Publisher: American Institute of Aeronautics and Astronautics Inc. (AIAA)
ISBN (print): 9781624107115
Publication status: Published - 2024
Peer-reviewed: Yes

Conference

Title: AIAA SciTech Forum and Exposition, 2024
Duration: 8 - 12 January 2024
City: Orlando
Country: United States of America