An automatic workflow for orientation of historical images with large radiometric and geometric differences

Research output: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

This contribution proposes a workflow for the fully automatic orientation of historical terrestrial urban images. Automatic structure from motion (SfM) software packages often fail when applied to historical image pairs, because large radiometric and geometric differences make feature extraction and reliable matching difficult. As an innovative initialising step, the proposed method uses the neural network D2-Net for feature extraction together with Lowe's mutual nearest neighbour matcher. The principal distance of every camera is estimated using vanishing point detection. The results were compared to three state-of-the-art SfM workflows (Agisoft Metashape, Meshroom and COLMAP), with the proposed workflow outperforming the other SfM tools. The resulting camera orientation data are planned to be imported into a web and virtual/augmented reality (VR/AR) application for knowledge transfer in cultural heritage.
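Two technical building blocks mentioned in the abstract are descriptor matching with a mutual nearest-neighbour check and estimation of the principal distance from vanishing points. The sketch below illustrates both ideas in a generic form under stated assumptions; it is not the authors' implementation, and the function names, the descriptor dimensionality and the use of NumPy are illustrative choices only.

```python
# Minimal sketch (not the paper's code): mutual nearest-neighbour matching of
# dense descriptors and a textbook principal-distance estimate from two
# orthogonal vanishing points. All names here are hypothetical.
import numpy as np

def mutual_nearest_neighbours(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """Return index pairs (i, j) where descriptor i in desc_a and descriptor j
    in desc_b are each other's nearest neighbour (Euclidean distance)."""
    # Pairwise squared Euclidean distances between the two descriptor sets.
    d2 = (
        np.sum(desc_a**2, axis=1)[:, None]
        + np.sum(desc_b**2, axis=1)[None, :]
        - 2.0 * desc_a @ desc_b.T
    )
    nn_ab = np.argmin(d2, axis=1)        # nearest neighbour in B for each A
    nn_ba = np.argmin(d2, axis=0)        # nearest neighbour in A for each B
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a       # keep matches that agree both ways
    return np.stack([idx_a[mutual], nn_ab[mutual]], axis=1)

def principal_distance_from_vps(v1, v2, principal_point):
    """Principal distance in pixels from two vanishing points of orthogonal
    scene directions, assuming square pixels, zero skew and a known principal
    point (e.g. the image centre): f^2 = -(v1 - p) . (v2 - p)."""
    v1 = np.asarray(v1, dtype=float) - principal_point
    v2 = np.asarray(v2, dtype=float) - principal_point
    f2 = -np.dot(v1, v2)
    return float(np.sqrt(f2)) if f2 > 0 else None  # None: degenerate configuration

# Toy usage with random descriptors (512-dimensional, as for D2-Net features).
rng = np.random.default_rng(0)
matches = mutual_nearest_neighbours(rng.normal(size=(300, 512)),
                                    rng.normal(size=(280, 512)))
print(matches.shape)  # (n_matches, 2)
print(principal_distance_from_vps((2500.0, 900.0), (-400.0, 950.0),
                                  principal_point=np.array([960.0, 720.0])))
```

The mutual check discards ambiguous one-directional matches, which is one way to suppress wrong correspondences when the radiometry of the two images differs strongly; in practice a geometric verification step (e.g. RANSAC on the fundamental matrix) would follow.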

Details

Original language: English
Pages (from-to): 77-103
Number of pages: 27
Journal: The Photogrammetric Record
Volume: 36
Issue number: 174
Publication status: Published - Jun 2021
Peer-reviewed: Yes

External IDs

Scopus: 85107197879
ORCID: /0000-0002-2456-9731/work/153654799

Keywords

  • feature matching, historical images, image orientation, neural networks, structure from motion
