Publication


Featured research published by Johanna Beyer.


IEEE Transactions on Visualization and Computer Graphics | 2007

High-Quality Multimodal Volume Rendering for Preoperative Planning of Neurosurgical Interventions

Johanna Beyer; Markus Hadwiger; Stefan Wolfsberger; Katja Bühler

Surgical approaches tailored to an individual patient's anatomy and pathology have become standard in neurosurgery. Precise preoperative planning of these procedures, however, is necessary to achieve an optimal therapeutic effect. Therefore, multiple radiological imaging modalities are used prior to surgery to delineate the patient's anatomy, neurological function, and metabolic processes. Developing a three-dimensional perception of the surgical approach, however, is traditionally still done by mentally fusing multiple modalities. Concurrent 3D visualization of these datasets can, therefore, improve the planning process significantly. In this paper we introduce an application for planning individual neurosurgical approaches with high-quality interactive multimodal volume rendering. The application consists of three main modules which allow the surgeon to (1) plan the optimal skin incision and opening of the skull tailored to the underlying pathology; (2) visualize superficial brain anatomy, function, and metabolism; and (3) plan the patient-specific approach for surgery of deep-seated lesions. The visualization is based on direct multi-volume raycasting on graphics hardware, where multiple volumes from different modalities can be displayed concurrently at interactive frame rates. Graphics memory limitations are avoided by performing raycasting on bricked volumes. For preprocessing tasks such as registration or segmentation, the visualization modules are integrated into a larger framework, thus supporting the entire workflow of preoperative planning.
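
The core of the renderer is per-sample fusion of co-registered modalities during raycasting. As a rough illustration of that idea (not the paper's GPU implementation), the following CPU-side sketch composites one ray through two volumes front to back, with one hypothetical transfer function per modality; the struct layout and the fusion rule are assumptions for illustration only.

```cpp
// Minimal CPU sketch of multi-volume raycasting: two co-registered modalities
// (e.g. CT and MR) are sampled along the same ray, mapped through separate
// transfer functions, fused per sample, and composited front to back.
#include <array>
#include <vector>

struct Volume {
    int nx, ny, nz;
    std::vector<float> data;                     // normalized intensities in [0,1]
    float at(int x, int y, int z) const {        // nearest-neighbour lookup
        return data[(z * ny + y) * nx + x];
    }
};

struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

// Hypothetical per-modality transfer functions: intensity -> color and opacity.
RGBA transferCT(float v) { return {v, v, v, v * 0.05f}; }
RGBA transferMR(float v) { return {v, 0.2f * v, 0.2f * v, v * 0.10f}; }

RGBA castRay(const Volume& ct, const Volume& mr,
             std::array<float, 3> origin, std::array<float, 3> dir,
             int steps, float stepSize)
{
    RGBA out;                                     // accumulated color, front to back
    for (int i = 0; i < steps && out.a < 0.99f; ++i) {
        int x = static_cast<int>(origin[0] + dir[0] * i * stepSize);
        int y = static_cast<int>(origin[1] + dir[1] * i * stepSize);
        int z = static_cast<int>(origin[2] + dir[2] * i * stepSize);
        if (x < 0 || y < 0 || z < 0 || x >= ct.nx || y >= ct.ny || z >= ct.nz)
            continue;
        RGBA a = transferCT(ct.at(x, y, z));
        RGBA b = transferMR(mr.at(x, y, z));
        // Per-sample fusion rule (assumed): keep the modality with higher opacity.
        RGBA s = (a.a > b.a) ? a : b;
        float w = (1.0f - out.a) * s.a;           // front-to-back compositing
        out.r += w * s.r; out.g += w * s.g; out.b += w * s.b; out.a += w;
    }
    return out;
}
```

In the paper's system, this loop runs on graphics hardware and operates on bricked volumes rather than full volumes, which is what keeps the data within GPU memory limits.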


IEEE Computer Graphics and Applications | 2010

Ssecrett and NeuroTrace: Interactive Visualization and Analysis Tools for Large-Scale Neuroscience Data Sets

Won-Ki Jeong; Johanna Beyer; Markus Hadwiger; Rusty Blue; Amelio Vázquez-Reina; R. Clay Reid; Jeff W. Lichtman; Hanspeter Pfister

Data sets imaged with modern electron microscopes can range from tens of terabytes to about one petabyte. Two new tools, Ssecrett and NeuroTrace, support interactive exploration and analysis of large-scale optical- and electron-microscopy images to help scientists reconstruct complex neural circuits of the mammalian nervous system.


IEEE Transactions on Visualization and Computer Graphics | 2009

Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets

Won Ki Jeong; Johanna Beyer; Markus Hadwiger; Amelio Vazquez; Hanspeter Pfister; Ross T. Whitaker

Recent advances in scanning technology provide high-resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.
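
One ingredient named above is the local histogram edge metric evaluated on demand during rendering. The sketch below shows the general flavor of such a measure, assuming a window size, bin count, and L1 histogram distance that are illustrative rather than taken from the paper.

```cpp
// Illustrative local-histogram edge measure for membrane-like edges in EM data:
// build intensity histograms in two small windows on either side of a position
// and take their L1 distance as the edge strength.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

constexpr int kBins = 16;
using Histogram = std::array<float, kBins>;

Histogram localHistogram(const std::vector<float>& row, int begin, int end) {
    Histogram h{};                                   // zero-initialized bins
    int count = 0;
    begin = std::max(begin, 0);
    end = std::min(end, static_cast<int>(row.size()));
    for (int i = begin; i < end; ++i, ++count)
        h[std::min(kBins - 1, static_cast<int>(row[i] * kBins))] += 1.0f;
    for (float& b : h) b /= std::max(count, 1);      // normalize to a distribution
    return h;
}

// Edge strength at position x along one scanline of the volume (radius assumed).
float histogramEdgeMetric(const std::vector<float>& row, int x, int radius = 4) {
    Histogram left  = localHistogram(row, x - radius, x);
    Histogram right = localHistogram(row, x, x + radius);
    float d = 0.0f;
    for (int b = 0; b < kBins; ++b) d += std::fabs(left[b] - right[b]);
    return 0.5f * d;                                 // 0 = identical, 1 = disjoint
}
```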


IEEE Transactions on Visualization and Computer Graphics | 2012

Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach

Markus Hadwiger; Johanna Beyer; Won-Ki Jeong; Hanspeter Pfister

This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
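
A rough sketch of the virtual-memory idea described above: every ray-casting sample is translated through a page-table-like structure, and accesses to bricks that are not resident are recorded as cache misses to be resolved out of core, instead of precomputing a full octree. The class, brick size, and hash function below are assumptions for illustration, not the system's implementation.

```cpp
// Visualization-driven virtual memory, sketched on the CPU: sample() translates a
// virtual address; misses are queued so that only bricks actually touched by
// visible rays are later built from incoming 2D microscope tiles.
#include <cstddef>
#include <optional>
#include <unordered_map>
#include <vector>

struct BrickId {
    int level, bx, by, bz;
    bool operator==(const BrickId& o) const {
        return level == o.level && bx == o.bx && by == o.by && bz == o.bz;
    }
};
struct BrickIdHash {
    std::size_t operator()(const BrickId& b) const {
        return (static_cast<std::size_t>(b.level) * 73856093u) ^
               (static_cast<std::size_t>(b.bx) * 19349663u) ^
               (static_cast<std::size_t>(b.by) * 83492791u) ^
               static_cast<std::size_t>(b.bz);
    }
};

struct Brick { std::vector<float> voxels; };       // resident brick payload

class VirtualVolume {
public:
    // Called per ray sample: return data if the brick is resident, otherwise
    // record a cache miss so the brick can be built out of core on demand.
    std::optional<float> sample(int level, int x, int y, int z) {
        BrickId id{level, x / kBrick, y / kBrick, z / kBrick};
        auto it = resident_.find(id);
        if (it == resident_.end()) {
            misses_.push_back(id);                 // propagate miss backwards
            return std::nullopt;                   // renderer falls back to coarser data
        }
        int lx = x % kBrick, ly = y % kBrick, lz = z % kBrick;
        return it->second.voxels[(lz * kBrick + ly) * kBrick + lx];
    }
    void commitBrick(const BrickId& id, Brick b) { resident_[id] = std::move(b); }
    const std::vector<BrickId>& pendingMisses() const { return misses_; }

private:
    static constexpr int kBrick = 32;              // assumed brick edge length
    std::unordered_map<BrickId, Brick, BrickIdHash> resident_;
    std::vector<BrickId> misses_;
};
```

After a frame, pendingMisses() would drive on-demand construction of exactly those 3D bricks from the 2D image tiles, which is the sense in which the computation stays proportional to what is visible.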


IEEE Transactions on Visualization and Computer Graphics | 2013

ConnectomeExplorer: Query-Guided Visual Analysis of Large Volumetric Neuroscience Data

Johanna Beyer; Ali K. Al-Awami; Narayanan Kasthuri; Jeff W. Lichtman; Hanspeter Pfister; Markus Hadwiger

This paper presents ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) data sets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, which enable neuroscientists to pose and answer domain-specific questions in an intuitive manner. Queries are built step by step in a visual query builder, combining simpler queries into more complex ones. Our application is based on a scalable volume visualization framework that scales to multiple volumes of several teravoxels each, enabling the concurrent visualization and querying of the original EM volume, additional segmentation volumes, neuronal connectivity, and additional metadata comprising a variety of neuronal data attributes. We evaluate our application on a data set of roughly one terabyte of EM data and 750 GB of segmentation data, containing over 4,000 segmented structures and 1,000 synapses. We demonstrate typical use-case scenarios of our collaborators in neuroscience, where our system has enabled them to answer specific scientific questions using interactive querying and analysis on the full-size data for the first time.
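
The query algebra is the part that is easiest to illustrate in code. The sketch below shows one possible shape of such an algebra, with simple predicates over segmented structures combined into more complex queries by intersection and union; the Structure fields, query names, and combinators are hypothetical, not the system's actual schema or syntax.

```cpp
// Composable, set-valued queries over segmented structures, in the spirit of a
// knowledge-based query algebra: primitives are predicates, combinators build
// larger queries, evaluate() returns the matching subset.
#include <algorithm>
#include <functional>
#include <iterator>
#include <string>
#include <vector>

struct Structure {
    int id;
    std::string type;        // e.g. "axon", "dendrite", "synapse"
    double volumeUm3;        // segmented volume in cubic micrometers
    int synapseCount;
};

using Query = std::function<bool(const Structure&)>;

// Primitive queries (illustrative).
Query ofType(std::string t) { return [t](const Structure& s) { return s.type == t; }; }
Query largerThan(double v)  { return [v](const Structure& s) { return s.volumeUm3 > v; }; }
Query minSynapses(int n)    { return [n](const Structure& s) { return s.synapseCount >= n; }; }

// Algebraic combinators: complex queries from simpler ones.
Query operator&&(Query a, Query b) { return [a, b](const Structure& s) { return a(s) && b(s); }; }
Query operator||(Query a, Query b) { return [a, b](const Structure& s) { return a(s) || b(s); }; }

std::vector<Structure> evaluate(const std::vector<Structure>& all, const Query& q) {
    std::vector<Structure> result;
    std::copy_if(all.begin(), all.end(), std::back_inserter(result), q);
    return result;
}

// Example: all dendrites larger than 5 cubic micrometers carrying at least 3 synapses.
// auto hits = evaluate(structures, ofType("dendrite") && largerThan(5.0) && minSynapses(3));
```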


Eurographics | 2008

Smooth mixed-resolution GPU volume rendering

Johanna Beyer; Markus Hadwiger; Torsten Möller; Laura Fritz

We propose a mixed-resolution volume ray-casting approach that enables more flexibility in the choice of downsampling positions and filter kernels, allows freely mixing volume bricks of different resolutions during rendering, and does not require modifying the original sample values. A C0-continuous function is obtained everywhere with hardware-native filtering at full speed by simply warping the texture coordinates of samples in transition regions. Additionally, we propose a simple but powerful flat texture packing scheme that supports mixing different resolution levels in a single 3D volume cache texture with very simple and fast address translation. Although this packing constrains full scalability, it enables mixing different resolution levels in GPU-based ray-casting with only a single rendering pass. We demonstrate our approach on large real-world data, obtaining a continuous scalar function and shading at brick boundaries, using single-pass ray-casting at real-time frame rates.
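
The address translation mentioned above is what makes the flat packing usable during single-pass ray-casting. A minimal CPU-side sketch of that lookup, assuming a per-brick table storing a cache offset and a resolution scale (names and layout are illustrative, not the paper's implementation):

```cpp
// Flat packing with address translation: bricks of different resolution levels
// live side by side in one 3D cache texture; a small per-brick table maps a
// virtual brick index to its cache offset and stored resolution scale.
#include <array>
#include <vector>

struct BrickEntry {
    std::array<float, 3> cacheOrigin;  // offset of the packed block in the cache (texels)
    float scale;                       // 1.0 = full resolution, 0.5 = half, ...
};

struct AddressTranslator {
    int brickSize = 32;                // assumed virtual brick edge length (voxels)
    int bricksX = 8, bricksY = 8;      // virtual brick grid dimensions
    std::vector<BrickEntry> table;     // one entry per virtual brick

    // Translate a virtual volume coordinate (voxels) into a cache texture coordinate.
    std::array<float, 3> translate(float x, float y, float z) const {
        int bx = static_cast<int>(x) / brickSize;
        int by = static_cast<int>(y) / brickSize;
        int bz = static_cast<int>(z) / brickSize;
        const BrickEntry& e = table[(bz * bricksY + by) * bricksX + bx];
        // Local position inside the brick, scaled to the brick's stored resolution.
        float lx = (x - bx * brickSize) * e.scale;
        float ly = (y - by * brickSize) * e.scale;
        float lz = (z - bz * brickSize) * e.scale;
        return {e.cacheOrigin[0] + lx, e.cacheOrigin[1] + ly, e.cacheOrigin[2] + lz};
    }
};
```

The texture-coordinate warping in transition regions that yields the C0-continuous result is deliberately omitted here; the sketch only shows the brick-to-cache lookup.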


Computer Graphics Forum | 2015

State-of-the-Art in GPU-Based Large-Scale Volume Visualization

Johanna Beyer; Markus Hadwiger; Hanspeter Pfister

This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.


IEEE Transactions on Visualization and Computer Graphics | 2014

Design and Evaluation of Interactive Proofreading Tools for Connectomics.

Daniel Haehn; Seymour Knowles-Barley; Mike Roberts; Johanna Beyer; Narayanan Kasthuri; Jeff W. Lichtman; Hanspeter Pfister

Proofreading refers to the manual correction of automatic segmentations of image data. In connectomics, electron microscopy data is acquired at nanometer-scale resolution and results in very large image volumes of brain tissue that require fully automatic segmentation algorithms to identify cell boundaries. However, these algorithms require hundreds of corrections per cubic micron of tissue. Even though this task is time-consuming, it is fairly easy for humans to perform corrections through splitting, merging, and adjusting segments during proofreading. In this paper we present the design and implementation of Mojo, a fully-featured single-user desktop application for proofreading, and Dojo, a multi-user web-based application for collaborative proofreading. We evaluate the accuracy and speed of Mojo, Dojo, and Raveler, a proofreading tool from Janelia Farm, through a quantitative user study. We designed a between-subjects experiment and asked non-experts to proofread neurons in a publicly available connectomics dataset. Our results show a significant improvement of corrections using web-based Dojo, when given the same amount of time. In addition, all participants using Dojo reported better usability. We discuss our findings and provide an analysis of requirements for designing visual proofreading software.


Eurographics | 2014

A Survey of GPU-Based Large-Scale Volume Visualization

Johanna Beyer; Markus Hadwiger; Hanspeter Pfister

This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.


IEEE Transactions on Visualization and Computer Graphics | 2014

NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity

Ali K. Al-Awami; Johanna Beyer; Hendrik Strobelt; Narayanan Kasthuri; Jeff W. Lichtman; Hanspeter Pfister; Markus Hadwiger

We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose significant scalability challenges for existing visualization techniques. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.
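
The central abstraction step, flattening a 3D neurite onto a 1D "subway line" while preserving relative distances, can be sketched as follows; the skeleton representation and normalization below are assumptions for illustration, not NeuroLines' actual layout algorithm.

```cpp
// Relative distance-preserving 1D embedding: map each node of a 3D neurite
// skeleton to a position in [0,1] proportional to its arc length along the
// neurite, so that e.g. synapse locations keep their relative spacing.
#include <array>
#include <cmath>
#include <vector>

using Point3 = std::array<double, 3>;

std::vector<double> subwayPositions(const std::vector<Point3>& skeleton) {
    std::vector<double> cumulative(skeleton.size(), 0.0);
    for (std::size_t i = 1; i < skeleton.size(); ++i) {
        double dx = skeleton[i][0] - skeleton[i - 1][0];
        double dy = skeleton[i][1] - skeleton[i - 1][1];
        double dz = skeleton[i][2] - skeleton[i - 1][2];
        cumulative[i] = cumulative[i - 1] + std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    double total = cumulative.empty() ? 0.0 : cumulative.back();
    if (total > 0.0)
        for (double& c : cumulative) c /= total;   // normalize to [0,1] along the "line"
    return cumulative;
}
```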

Collaboration


Dive into Johanna Beyer's collaborations.

Top Co-Authors

Markus Hadwiger, King Abdullah University of Science and Technology
Ali K. Al-Awami, King Abdullah University of Science and Technology
Won-Ki Jeong, Ulsan National Institute of Science and Technology
Stefan Wolfsberger, Medical University of Vienna
Haneen Mohammed, King Abdullah University of Science and Technology