Publication


Featured research published by Steven P. Callahan.


International Conference on Management of Data | 2006

VisTrails: visualization meets data management

Steven P. Callahan; Juliana Freire; Emanuele Santos; Carlos Eduardo Scheidegger; Cláudio T. Silva; Huy T. Vo

Scientists are now faced with an incredible volume of data to analyze. To successfully analyze and validate various hypotheses, it is necessary to pose several queries, correlate disparate data, and create insightful visualizations of both the simulated processes and observed phenomena. Often, insight comes from comparing the results of multiple visualizations. Unfortunately, today this process is far from interactive and contains many error-prone and time-consuming tasks. As a result, the generation and maintenance of visualizations is a major bottleneck in the scientific process, hindering both the ability to mine scientific data and the actual use of the data. The VisTrails system represents our initial attempt to improve the scientific discovery process and reduce the time to insight. In VisTrails, we address the problem of visualization from a data management perspective: VisTrails manages the data and metadata of a visualization product. In this demonstration, we show the power and flexibility of our system by presenting actual scenarios in which scientific visualization is used and showing how our system improves usability, enables reproducibility, and greatly reduces the time required to create scientific visualizations.


IEEE Visualization | 2005

VisTrails: enabling interactive multiple-view visualizations

Louis Bavoil; Steven P. Callahan; Patricia Crossno; Juliana Freire; Carlos Eduardo Scheidegger; Cláudio T. Silva; Huy T. Vo

VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.
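
The caching idea (reusing results of overlapping pipeline subsequences) can be pictured with a small sketch. This is not VisTrails code: the Module class, the signature hashing, and the cache below are illustrative assumptions, but they show how two pipelines that share an upstream prefix execute that prefix only once.

import hashlib

class Module:
    """Hypothetical dataflow module: a named operation with upstream inputs."""
    def __init__(self, name, func, *upstream):
        self.name = name
        self.func = func
        self.upstream = upstream

    def signature(self):
        # Hash this module together with its entire upstream subpipeline, so
        # identical subsequences in different pipelines get the same key.
        h = hashlib.sha256(self.name.encode())
        for u in self.upstream:
            h.update(u.signature().encode())
        return h.hexdigest()

cache = {}

def execute(module):
    # Run a pipeline bottom-up, reusing cached results for any subsequence
    # that has already been executed by an earlier pipeline.
    sig = module.signature()
    if sig not in cache:
        inputs = [execute(u) for u in module.upstream]
        cache[sig] = module.func(*inputs)
    return cache[sig]

# Two pipelines share the 'read' and 'filter' stages: the shared prefix runs
# once, and only the two different render stages are executed separately.
read = Module("read", lambda: list(range(10)))
filt = Module("filter", lambda xs: [x for x in xs if x % 2 == 0], read)
render_a = Module("render_a", lambda xs: "image(%d)" % sum(xs), filt)
render_b = Module("render_b", lambda xs: "image(%d)" % max(xs), filt)
print(execute(render_a), execute(render_b))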


International Provenance and Annotation Workshop | 2006

Managing rapidly-evolving scientific workflows

Juliana Freire; Cláudio T. Silva; Steven P. Callahan; Emanuele Santos; Carlos Eduardo Scheidegger; Huy T. Vo

We give an overview of VisTrails, a system that provides an infrastructure for systematically capturing detailed provenance and streamlining the data exploration process. A key feature that sets VisTrails apart from previous visualization and scientific workflow systems is a novel action-based mechanism that uniformly captures provenance for data products and workflows used to generate these products. This mechanism not only ensures reproducibility of results, but it also simplifies data exploration by allowing scientists to easily navigate through the space of workflows and parameter settings for an exploration task.
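
A minimal sketch of the action-based idea, under the assumption that a workflow version is fully determined by the chain of actions leading to it. The VersionTree class and the action names ("add_module", "delete_module") are illustrative, not the actual VisTrails schema: each version stores only the change applied to its parent, and any version is rebuilt by replaying actions from the root.

class VersionTree:
    """Each version stores only the action that produced it from its parent."""
    def __init__(self):
        self.parent = {0: None}    # version id -> parent version id
        self.action = {0: None}    # version id -> (operation, payload)
        self.next_id = 1

    def add_version(self, parent_id, operation, payload):
        vid = self.next_id
        self.next_id += 1
        self.parent[vid] = parent_id
        self.action[vid] = (operation, payload)
        return vid

    def materialize(self, vid):
        # Rebuild the workflow for a version by replaying its chain of
        # actions from the root; nothing but the actions is ever stored.
        chain = []
        while vid is not None and self.action[vid] is not None:
            chain.append(self.action[vid])
            vid = self.parent[vid]
        workflow = set()
        for operation, payload in reversed(chain):
            if operation == "add_module":
                workflow.add(payload)
            elif operation == "delete_module":
                workflow.discard(payload)
        return workflow

tree = VersionTree()
v1 = tree.add_version(0, "add_module", "FileReader")
v2 = tree.add_version(v1, "add_module", "Isosurface")
v3 = tree.add_version(v1, "add_module", "VolumeRenderer")  # a branch: exploring an alternative
print(tree.materialize(v2))  # {'FileReader', 'Isosurface'}
print(tree.materialize(v3))  # {'FileReader', 'VolumeRenderer'}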


Computing in Science and Engineering | 2007

Provenance for Visualizations: Reproducibility and Beyond

Cláudio T. Silva; Juliana Freire; Steven P. Callahan

The demand for the construction of complex visualizations is growing in many disciplines of science, as users are faced with ever increasing volumes of data to analyze. In this paper, the authors present VisTrails, an open source provenance-management system that provides infrastructure for data exploration and visualization. VisTrails transparently records detailed provenance of exploratory computational tasks and leverages this information beyond just the ability to reproduce and share results. In particular, it uses this information to simplify the process of exploring data through visualization.


IEEE Transactions on Visualization and Computer Graphics | 2005

Hardware-assisted visibility sorting for unstructured volume rendering

Steven P. Callahan; Milan Ikits; João Luiz Dihl Comba; Cláudio T. Silva

Harvesting the power of modern graphics hardware to solve the complex problem of real-time rendering of large unstructured meshes is a major research goal in the volume visualization community. While, for regular grids, texture-based techniques are well-suited for current GPUs, the steps necessary for rendering unstructured meshes are not so easily mapped to current hardware. We propose a novel volume rendering technique that simplifies the CPU-based processing and shifts much of the sorting burden to the GPU, where it can be performed more efficiently. Our hardware-assisted visibility sorting algorithm is a hybrid technique that operates in both object-space and image-space. In object-space, the algorithm performs a partial sort of the 3D primitives in preparation for rasterization. The goal of the partial sort is to create a list of primitives that generate fragments in nearly sorted order. In image-space, the fragment stream is incrementally sorted using a fixed-depth sorting network. In our algorithm, the object-space work is performed by the CPU and the fragment-level sorting is done completely on the GPU. A prototype implementation of the algorithm demonstrates that the fragment-level sorting achieves rendering rates of between one and six million tetrahedral cells per second on an ATI Radeon 9800.
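
The image-space half of the hybrid sort can be sketched in a few lines. In the paper this stage runs as a fixed-depth sorting network on the GPU; the Python below is a CPU stand-in with an assumed buffer depth k, showing how a small per-pixel buffer turns the nearly sorted fragment stream produced by the object-space partial sort into a fully sorted one.

def composite_nearly_sorted(fragments, k=4):
    # Fragments arrive nearly in depth order because their primitives were
    # already partially sorted by centroid depth on the CPU. A fixed-depth
    # buffer of k entries repairs any remaining local disorder before each
    # fragment is handed to the compositor.
    buffer, composited = [], []
    for frag in fragments:                    # frag = (depth, color)
        buffer.append(frag)
        buffer.sort(key=lambda f: f[0])       # stands in for the k-entry sorting network
        if len(buffer) > k:
            composited.append(buffer.pop(0))  # emit the nearest buffered fragment
    composited.extend(buffer)                 # flush the buffer at end of frame
    return composited

# Fragments for one pixel, nearly sorted; local disorder never exceeds k = 2.
frags = [(0.30, "a"), (0.34, "b"), (0.32, "c"), (0.57, "d"), (0.55, "e")]
print(composite_nearly_sorted(frags, k=2))    # emitted in true depth order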


International Conference on Data Engineering | 2006

Managing the Evolution of Dataflows with VisTrails

Steven P. Callahan; Juliana Freire; Emanuele Santos; Carlos Eduardo Scheidegger; Cláudio T. Silva; Huy T. Vo

Scientists are now faced with an incredible volume of data to analyze. To successfully analyze and validate various hypotheses, it is necessary to pose several queries, correlate disparate data, and create insightful visualizations of both the simulated processes and observed phenomena. Data exploration through visualization requires scientists to go through several steps. In essence, they need to assemble complex workflows that consist of dataset selection, the specification of a series of operations that need to be applied to the data, and the creation of appropriate visual representations, before they can finally view and analyze the results. Often, insight comes from comparing the results of multiple visualizations that are created during the data exploration process.


Optics Express | 2011

Sample drift correction in 3D fluorescence photoactivation localization microscopy

Michael J. Mlodzianoski; John M. Schreiner; Steven P. Callahan; Katarína Smolková; Andrea Dlasková; Jitka Šantorová; Petr Ježek; Joerg Bewersdorf

The recent development of diffraction-unlimited far-field fluorescence microscopy has overcome the classical resolution limit of ~250 nm of conventional light microscopy by about a factor of ten. The improved resolution, however, reveals not only biological structures at an unprecedented resolution, but is also susceptible to sample drift on a much finer scale than previously relevant. Without correction, sample drift leads to smeared images with decreased resolution, and in the worst case to misinterpretation of the imaged structures. This poses a problem especially for techniques such as Fluorescence Photoactivation Localization Microscopy (FPALM/PALM) or Stochastic Optical Reconstruction Microscopy (STORM), which often require minutes of recording time. Here we discuss an approach that corrects for three-dimensional (3D) drift in images of fixed samples without the requirement for fiducial markers or instrument modifications. Drift is determined by calculating the spatial cross-correlation function between subsets of localized particles imaged at different times. Correction down to ~5 nm precision is achieved despite the fact that different molecules are imaged in each frame. We demonstrate the performance of our drift correction algorithm with different simulated structures and analyze its dependence on particle density and localization precision. By imaging mitochondria with Biplane FPALM we show our algorithm's feasibility in a practical application.
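
The core drift-estimation step, cross-correlating localizations from two time windows, can be sketched with NumPy. This is an illustrative 2D reimplementation of the general idea, not the authors' code; the bin size, field extent, and synthetic cross-shaped test structure are assumptions.

import numpy as np

def estimate_drift(xy_early, xy_late, bin_size=10.0, extent=1000.0):
    """Estimate the (dx, dy) drift of the later localizations relative to the
    earlier ones by cross-correlating 2D histograms of particle positions."""
    edges = np.arange(0.0, extent + bin_size, bin_size)
    h_early, _, _ = np.histogram2d(xy_early[:, 0], xy_early[:, 1], bins=(edges, edges))
    h_late, _, _ = np.histogram2d(xy_late[:, 0], xy_late[:, 1], bins=(edges, edges))
    # Circular cross-correlation via FFT; the peak position gives the shift in bins.
    corr = np.fft.ifft2(np.fft.fft2(h_late) * np.conj(np.fft.fft2(h_early))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift, dtype=float) * bin_size

# Synthetic check: molecules decorating the same cross-shaped structure are
# imaged in two time windows (different molecules each time), with the later
# window drifted by 30 nm in x and -20 nm in y.
rng = np.random.default_rng(1)
def sample_structure(n):
    t = rng.uniform(100.0, 900.0, n)
    horizontal = np.column_stack([t[: n // 2], np.full(n // 2, 500.0)])
    vertical = np.column_stack([np.full(n - n // 2, 500.0), t[n // 2 :]])
    return np.vstack([horizontal, vertical]) + rng.normal(0.0, 10.0, (n, 2))

early = sample_structure(5000) + rng.normal(0.0, 5.0, (5000, 2))   # 5 nm localization noise
late = sample_structure(5000) + np.array([30.0, -20.0]) + rng.normal(0.0, 5.0, (5000, 2))
print(estimate_drift(early, late))   # close to [30., -20.]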


Interactive 3D Graphics and Games | 2007

Multi-fragment effects on the GPU using the k-buffer

Louis Bavoil; Steven P. Callahan; Aaron E. Lefohn; João Luiz Dihl Comba; Cláudio T. Silva

Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location; however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and to show that its efficiency is superior to current multipass approaches.
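
A CPU-side sketch of the read-modify-write idea, with transparency as the example k-buffer program. The function names and the fixed pool size are illustrative rather than an actual GPU implementation: each incoming fragment is merged into a small per-pixel pool of k entries, and the pool is resolved by front-to-back alpha compositing at the end of the single geometry pass.

def kbuffer_insert(pool, fragment, k):
    # RMW step: merge the incoming fragment into the fixed-size pool for this
    # pixel, keeping only the k nearest fragments seen so far.
    pool.append(fragment)               # fragment = (depth, (r, g, b, a))
    pool.sort(key=lambda f: f[0])
    if len(pool) > k:
        pool.pop()                      # discard the farthest fragment

def resolve_transparency(pool):
    # Example k-buffer "program": front-to-back alpha compositing of the pool.
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for _, (r, g, b, a) in pool:        # pool is already depth-sorted
        weight = (1.0 - alpha) * a
        color = [c + weight * s for c, s in zip(color, (r, g, b))]
        alpha += weight
    return color, alpha

# One pixel covered by three translucent fragments arriving in arbitrary order.
pool, k = [], 4
for frag in [(0.7, (0.0, 0.0, 1.0, 0.5)),
             (0.2, (1.0, 0.0, 0.0, 0.5)),
             (0.5, (0.0, 1.0, 0.0, 0.5))]:
    kbuffer_insert(pool, frag, k)
print(resolve_transparency(pool))       # ([0.5, 0.25, 0.125], 0.875)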


IEEE Transactions on Visualization and Computer Graphics | 2008

VisComplete: Automating Suggestions for Visualization Pipelines

David Koop; Carlos Eduardo Scheidegger; Steven P. Callahan; Juliana Freire; Cláudio T. Silva

Building visualization and analysis pipelines is a large hurdle in the adoption of visualization and workflow systems by domain scientists. In this paper, we propose techniques to help users construct pipelines by consensus: automatically suggesting completions based on a database of previously created pipelines. In particular, we compute correspondences between existing pipeline subgraphs from the database, and use these to predict sets of likely pipeline additions to a given partial pipeline. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently because they can augment their normal work patterns with the suggested completions. We present an implementation of our technique in a publicly available, open-source scientific workflow system and demonstrate efficiency gains in real-world situations.
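
A toy sketch of the completion idea: learn from a database of previously created pipelines which module most often follows a given module, and suggest that as the next addition to a partial pipeline. VisComplete matches whole pipeline subgraphs rather than single edges, so the module names and the simple frequency model below are a deliberately simplified, hypothetical stand-in.

from collections import Counter, defaultdict

# A small database of previously created pipelines (as module sequences).
pipeline_db = [
    ["FileReader", "Contour", "Mapper", "Renderer"],
    ["FileReader", "Contour", "Smoother", "Mapper", "Renderer"],
    ["FileReader", "Slice", "Mapper", "Renderer"],
]

# Learn, for each module, how often each other module follows it.
successors = defaultdict(Counter)
for pipeline in pipeline_db:
    for a, b in zip(pipeline, pipeline[1:]):
        successors[a][b] += 1

def suggest_completions(partial_pipeline, n=3):
    # Rank likely next modules for the last module of a partial pipeline.
    last = partial_pipeline[-1]
    return [module for module, _ in successors[last].most_common(n)]

print(suggest_completions(["FileReader", "Contour"]))   # ['Mapper', 'Smoother']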


IEEE Transactions on Visualization and Computer Graphics | 2006

Progressive Volume Rendering of Large Unstructured Grids

Steven P. Callahan; Louis Bavoil; Valerio Pascucci; Cláudio T. Silva

We describe a new progressive technique that allows real-time rendering of extremely large tetrahedral meshes. Our approach uses a client-server architecture to incrementally stream portions of the mesh from a server to a client, which refines the quality of the approximate rendering until it converges to a full quality rendering. The results of previous steps are reused in each subsequent refinement, thus leading to an efficient rendering. Our novel approach keeps very little geometry on the client and works by refining a set of rendered images at each step. Our interactive representation of the dataset is efficient, lightweight, and high quality. We present a framework for the exploration of large datasets stored on a remote server with a thin client that is capable of rendering and managing full quality volume visualizations.
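
One plausible way to picture the client-side refinement loop, under the assumption that the server streams the mesh as depth-sorted layers and the client composites each newly rendered layer into its running approximation so earlier work is reused. The layer representation and the compositing step below are assumptions for illustration, not the paper's exact image-refinement scheme.

import numpy as np

def refine(accum_color, accum_alpha, layer_color, layer_alpha):
    # Composite one newly streamed layer (front-to-back) into the running
    # approximation, so work from earlier steps is reused rather than discarded.
    weight = (1.0 - accum_alpha) * layer_alpha
    accum_color += weight[..., None] * layer_color
    accum_alpha += weight
    return accum_color, accum_alpha

# Client loop over layers streamed from the server, nearest layer first.
height, width = 4, 4
color = np.zeros((height, width, 3))
alpha = np.zeros((height, width))
streamed_layers = [  # per-pixel color and opacity of each rendered layer
    (np.full((height, width, 3), [1.0, 0.0, 0.0]), np.full((height, width), 0.4)),
    (np.full((height, width, 3), [0.0, 0.0, 1.0]), np.full((height, width), 0.4)),
]
for layer_color, layer_alpha in streamed_layers:
    color, alpha = refine(color, alpha, layer_color, layer_alpha)
    # 'color' now holds the best approximation so far and can be displayed.
print(color[0, 0], alpha[0, 0])   # [0.4 0.  0.24] 0.64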

Collaboration


Dive into Steven P. Callahan's collaborations.

Top Co-Authors

João Luiz Dihl Comba

Universidade Federal do Rio Grande do Sul


Fábio F. Bernardon

Universidade Federal do Rio Grande do Sul
