Content-adaptive generation and parallel compositing of volumetric depth images for responsive visualization of large volume data

Publication: Preprint/Documentation/Report › Preprint

Contributors

Abstract

We present a content-adaptive generation and parallel compositing algorithm for view-dependent explorable representations of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. Volumetric Depth Images (VDIs), view-dependent piecewise-constant representations of volume data, offer a potential solution: they are more compact and less expensive to render than the original data. So far, however, there is no method to generate such representations on distributed data and to automatically adapt the representation to the contents of the data. We propose an approach that addresses both issues by enabling sort-last parallel generation of VDIs with content-adaptive parameters. The resulting VDIs can be streamed for display, providing responsive visualization of large, potentially distributed, volume data.
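To make the core idea concrete, below is a minimal Python sketch of the two concepts the abstract names: a VDI ray as a depth-sorted list of piecewise-constant segments, and sort-last compositing of the partial rays produced by different nodes. This is not the paper's implementation; all names (Segment, composite_rays) are illustrative, and it assumes pre-multiplied colors and a convex domain decomposition, so per-ray segments from different nodes do not overlap in depth and sorting by segment start gives front-to-back order.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Segment:
        """One piecewise-constant segment along a view ray (assumed layout)."""
        z_near: float                        # start depth along the ray
        z_far: float                         # end depth along the ray
        rgb: Tuple[float, float, float]      # pre-multiplied segment color
        alpha: float                         # accumulated segment opacity

    # A VDI stores, per pixel ray, a short depth-sorted list of segments.
    VDIRay = List[Segment]

    def composite_rays(a: VDIRay, b: VDIRay) -> Tuple[Tuple[float, float, float], float]:
        """Sort-last compositing of two partial rays: merge segments by depth,
        then accumulate front to back with the over operator."""
        segments = sorted(a + b, key=lambda s: s.z_near)
        rgb = [0.0, 0.0, 0.0]
        alpha = 0.0
        for s in segments:
            t = 1.0 - alpha                  # remaining transmittance
            rgb = [rgb[c] + t * s.rgb[c] for c in range(3)]
            alpha += t * s.alpha
            if alpha >= 0.999:               # early ray termination
                break
        return (rgb[0], rgb[1], rgb[2]), alpha

    # Example: composite one segment from each of two nodes.
    near = [Segment(0.1, 0.3, (0.4, 0.0, 0.0), 0.5)]
    far = [Segment(0.4, 0.7, (0.0, 0.3, 0.0), 0.4)]
    print(composite_rays(near, far))

Because the over operator is associative, partial rays can be composited pairwise in any bracketing, which is what makes parallel sort-last compositing of VDIs possible; the content-adaptive part of the paper concerns how segment boundaries are chosen, which this sketch does not model.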

Details

Original language: English
Number of pages: 10
Publication status: Published - 29 June 2022

External IDs

ORCID /0000-0003-4414-4340/work/142660388

Keywords

  • Human-centered computing → Visualization → Visualization theory, concepts and paradigms
  • Human-centered computing → Visualization → Visualization techniques