Content-adaptive generation and parallel compositing of volumetric depth images for responsive visualization of large volume data

Research output: Preprint/documentation/report › Preprint

Contributors

  • Aryaman Gupta, Chair of Scientific Computing for Systems Biology, Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics (Author)
  • Pietro Incardona, Chair of Scientific Computing for Systems Biology, Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics (Author)
  • Paul Hunt, TUD Dresden University of Technology, Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics (Author)
  • Guido Reina, University of Stuttgart (Author)
  • Steffen Frey, University of Groningen (Author)
  • Stefan Gumhold, Chair of Computer Graphics and Visualisation (Author)
  • Ulrik Günther, Center for Advanced Systems Understanding (CASUS), Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics (Author)
  • Ivo F. Sbalzarini, Chair of Scientific Computing for Systems Biology, Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics (Author)

Abstract

We present a content-adaptive generation and parallel compositing algorithm for view-dependent explorable representations of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. Volumetric Depth Images (VDIs), view-dependent piecewise-constant representations of volume data, offer a potential solution: they are more compact and less expensive to render than the original data. So far, however, there is no method to generate such representations on distributed data and to automatically adapt the representation to the contents of the data. We propose an approach that addresses both issues by enabling sort-last parallel generation of VDIs with content-adaptive parameters. The resulting VDIs can be streamed for display, providing responsive visualization of large, potentially distributed, volume data.
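To make the VDI idea more concrete, the sketch below shows one possible in-memory layout of a VDI, where each pixel's viewing ray is approximated by a short list of depth intervals ("supersegments") carrying pre-accumulated colour and opacity, together with standard front-to-back alpha compositing of depth-sorted supersegments as it would be used when merging per-node rays in a sort-last setting. This is a minimal illustrative sketch; the class and field names (Supersegment, depth_front, rgba, etc.) are assumptions for exposition, not the data layout or compositing code used in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Supersegment:
    """One piecewise-constant interval of a viewing ray (illustrative layout)."""
    depth_front: float                         # interval start along the ray
    depth_back: float                          # interval end along the ray
    rgba: Tuple[float, float, float, float]    # pre-accumulated, premultiplied colour + opacity

@dataclass
class VDIRay:
    """All supersegments belonging to one pixel of the generating view."""
    supersegments: List[Supersegment] = field(default_factory=list)

@dataclass
class VolumetricDepthImage:
    """A view-dependent, piecewise-constant representation: one ray per pixel."""
    width: int
    height: int
    rays: List[VDIRay] = field(default_factory=list)   # row-major, width * height entries

def composite_ray(segments: List[Supersegment]) -> Tuple[float, float, float, float]:
    """Front-to-back alpha compositing of depth-sorted supersegments,
    e.g. when merging the supersegments produced by different nodes for one ray."""
    r = g = b = a = 0.0
    for seg in sorted(segments, key=lambda s: s.depth_front):
        sr, sg, sb, sa = seg.rgba
        # Standard over operator with premultiplied alpha, accumulated front to back.
        r += (1.0 - a) * sr
        g += (1.0 - a) * sg
        b += (1.0 - a) * sb
        a += (1.0 - a) * sa
        if a >= 0.999:        # early termination once the ray is effectively opaque
            break
    return (r, g, b, a)
```

In a sort-last configuration, each node would produce such supersegment lists for its local portion of the volume, and a compositing step of this kind would merge them in depth order into the final per-pixel result.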

Details

Original language: English
Number of pages: 10
Publication status: Published - 29 Jun 2022

External IDs

ORCID /0000-0003-4414-4340/work/142660388

Keywords

  • Human-centered computing
  • Visualization
  • Visualization theory, concepts and paradigms
  • Visualization techniques