INTRODUCING A MULTIMODAL DATASET FOR THE RESEARCH OF ARCHITECTURAL ELEMENTS
Publication: Contribution to journal › Conference article › Contributed › Peer-reviewed
Contributors
Abstract
This article examines approaches, software solutions, standards, workflows, and quality criteria for creating a multimodal dataset of images, textual information, and 3D models for a small urban area. The goal is to improve art historical research on architectural elements by drawing on all three data modalities. A specific dataset with manually created annotations is introduced and made publicly available. The paper provides an overview of the available data and detailed information on the preparation of the different data types as well as the process of connecting them through annotations. It also discusses the relevance and creation of a controlled vocabulary. Furthermore, point cloud processing and neural network approaches that may replace manual labelling are discussed. Another focus is the analysis of linguistic similarities to identify whether annotations are actually connected and therefore relevant. Additionally, research scenarios highlight the relevance of the approach for art history and the contributions from computational linguistics and computer science.
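The abstract does not specify how linguistic similarity between annotations is measured. Purely as an illustration, the sketch below shows one common way such a check could be implemented, assuming TF-IDF vectors with cosine similarity over annotation texts; the example annotations and the threshold value are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: TF-IDF + cosine similarity as one possible measure
# of linguistic similarity between annotation texts. The measure, the example
# texts, and the threshold are assumptions, not the method used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

annotations = [
    "Gothic pointed arch above the western portal",    # hypothetical annotation texts
    "Pointed arch of the west portal, Gothic period",
    "Baroque stucco ceiling in the entrance hall",
]

# Vectorize the annotation texts and compute pairwise cosine similarities.
tfidf = TfidfVectorizer(lowercase=True, stop_words="english")
matrix = tfidf.fit_transform(annotations)
similarity = cosine_similarity(matrix)

# Pairs above a (hypothetical) threshold are treated as potentially connected.
THRESHOLD = 0.3
for i in range(len(annotations)):
    for j in range(i + 1, len(annotations)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Annotations {i} and {j} look related (score {similarity[i, j]:.2f})")
```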
Details
| Original language | English |
| --- | --- |
| Pages (from–to) | 325–331 |
| Number of pages | 7 |
| Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
| Volume | 48 |
| Issue number | M-2-2023 |
| Publication status | Published - 24 June 2023 |
| Peer-reviewed | Yes |
| Published externally | Yes |
External IDs
| Scopus | 85164674298 |
| --- | --- |
| Mendeley | 4dc78e2e-5324-3d74-82fe-43610d7c9d27 |
Keywords
- Computer vision, Annotations, Multimodal data, Art history, Artificial intelligence