Brian Summa
University of Utah
Publications
Featured research published by Brian Summa.
IEEE Symposium on Large Data Analysis and Visualization | 2011
Huy T. Vo; Jonathan R. Bronson; Brian Summa; João Luiz Dihl Comba; Juliana Freire; Bill Howe; Valerio Pascucci; Cláudio T. Silva
Large-scale visualization systems are typically designed to efficiently “push” datasets through the graphics hardware. However, exploratory visualization systems are increasingly expected to support scalable data manipulation, restructuring, and querying capabilities in addition to core visualization algorithms. We posit that new emerging abstractions for parallel data processing, in particular computing clouds, can be leveraged to support large-scale data exploration through visualization. In this paper, we take a first step in evaluating the suitability of the MapReduce framework to implement large-scale visualization techniques. MapReduce is a lightweight, scalable, general-purpose parallel data processing framework increasingly popular in the context of cloud computing. Specifically, we implement and evaluate a representative suite of visualization tasks (mesh rendering, isosurface extraction, and mesh simplification) as MapReduce programs, and report quantitative performance results applying these algorithms to realistic datasets. For example, we perform isosurface extraction of up to 16 isovalues for volumes composed of 27 billion voxels, simplification of meshes totaling 30 GB of data, and subsequent rendering with image resolutions up to 80,000² pixels. Our results indicate that the parallel scalability, ease of use, ease of access to computing resources, and fault-tolerance of MapReduce offer a promising foundation for a combined data manipulation and data visualization system deployed in a public cloud or a local commodity cluster.
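To make the MapReduce framing concrete, the toy Python sketch below phrases a crude isocontouring pass as a map function over volume blocks and a reduce function over isovalues. The block partitioning, the point-per-crossing-cell simplification, and all names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a toy map/reduce pipeline in plain Python that mimics
# how an isocontouring pass can be phrased as MapReduce. Not the paper's system.
import numpy as np
from collections import defaultdict

def map_block(block_id, block, isovalue):
    """Mapper: emit (isovalue, crossing-cell centers) for one volume block."""
    pts = []
    # A cell "crosses" the isovalue if its min and max straddle it.
    for z in range(block.shape[0] - 1):
        for y in range(block.shape[1] - 1):
            for x in range(block.shape[2] - 1):
                cell = block[z:z+2, y:y+2, x:x+2]
                if cell.min() <= isovalue <= cell.max():
                    pts.append((block_id, z + 0.5, y + 0.5, x + 0.5))
    yield isovalue, pts

def reduce_isovalue(isovalue, point_lists):
    """Reducer: concatenate per-block geometry for one isovalue."""
    return isovalue, [p for pts in point_lists for p in pts]

# Toy driver: the shuffle/sort phase is just a dict grouping by key.
volume = np.random.rand(16, 16, 16)
blocks = [(0, volume[:8]), (1, volume[8:])]
grouped = defaultdict(list)
for bid, blk in blocks:
    for key, val in map_block(bid, blk, isovalue=0.5):
        grouped[key].append(val)
results = [reduce_isovalue(k, v) for k, v in grouped.items()]
print({k: len(v) for k, v in results})
```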
International Conference on Computer Graphics and Interactive Techniques | 2012
Brian Summa; Julien Tierny; Valerio Pascucci
A fundamental step in stitching several pictures to form a larger mosaic is the computation of boundary seams that minimize the visual artifacts in the transition between images. Current seam computation algorithms use optimization methods that may be slow, sequential, memory intensive, and prone to finding suboptimal solutions related to local minima of the chosen energy function. Moreover, even when these techniques perform well, their solution may not be perceptually ideal (or even good). Such an inflexible approach does not allow the possibility of user-based improvement. This paper introduces the Panorama Weaving technique for seam creation and editing in an image mosaic. First, Panorama Weaving provides a procedure to create boundaries for panoramas that is fast, has low memory requirements, and is easy to parallelize. This technique often produces seams with lower energy than the competing global technique. Second, it provides the first interactive technique for the exploration of the seam solution space. This powerful editing capability allows the user to automatically extract energy-minimizing seams given a sparse set of constraints. With a variety of empirical results, we show how Panorama Weaving allows the computation and editing of a wide range of digital panoramas, including unstructured configurations.
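For readers unfamiliar with seam computation, the sketch below shows the standard dynamic-programming baseline for a single minimal-cost seam through the overlap of two images. It is a simplified, assumed stand-in for the general problem, not the Panorama Weaving algorithm itself.

```python
# Minimal sketch of the classic dynamic-programming seam baseline: find a
# top-to-bottom path through the overlap region that minimizes the color
# difference between two images. Shapes and the squared-difference energy
# are illustrative assumptions.
import numpy as np

def min_cost_seam(img_a, img_b):
    """Return one column index per row describing a minimal-difference seam."""
    energy = np.sum((img_a.astype(float) - img_b.astype(float)) ** 2, axis=-1)
    h, w = energy.shape
    cost = energy.copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    # Backtrack from the cheapest bottom-row cell.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam

overlap_a = np.random.rand(64, 32, 3)
overlap_b = np.random.rand(64, 32, 3)
print(min_cost_seam(overlap_a, overlap_b)[:5])
```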
IEEE Transactions on Visualization and Computer Graphics | 2011
Bei Wang; Brian Summa; Valerio Pascucci; Mikael Vejdemo-Johansson
Large observations and simulations in scientific research give rise to high-dimensional data sets that present many challenges and opportunities in data analysis and visualization. Researchers in application domains such as engineering, computational biology, climate study, imaging, and motion capture are faced with the problem of how to discover compact representations of high-dimensional data while preserving their intrinsic structure. In many applications, the original data is projected onto low-dimensional space via dimensionality reduction techniques prior to modeling. One problem with this approach is that the projection step in the process can fail to preserve structure in the data that is only apparent in high dimensions. Conversely, such techniques may create structural illusions in the projection, implying structure not present in the original high-dimensional data. Our solution is to utilize topological techniques to recover important structures in high-dimensional data that contains non-trivial topology. Specifically, we are interested in high-dimensional branching structures. We construct local circle-valued coordinate functions to represent such features. Subsequently, we perform dimensionality reduction on the data while ensuring such structures are visually preserved. Additionally, we study the effects of global circular structures on visualizations. Our results reveal never-before-seen structures on real-world data sets from a variety of applications.
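As a toy illustration only (not the paper's topological construction), the sketch below builds a loop-shaped data set in ten dimensions, reduces it with plain PCA, and reads off a circle-valued coordinate as the angle in the projected plane.

```python
# Toy illustration of a circle-valued coordinate on loop-shaped data after
# dimensionality reduction. Here the angle comes directly from a PCA projection,
# purely to make the idea concrete; the paper derives such coordinates with
# topological machinery.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
# Embed a noisy circle in 10 dimensions via a random linear map.
circle_2d = np.c_[np.cos(theta), np.sin(theta)]
lift = rng.normal(size=(2, 10))
data = circle_2d @ lift + 0.05 * rng.normal(size=(500, 10))

# PCA down to 2D (top two right singular vectors of the centered data).
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T

# Circle-valued coordinate: angle in the projected plane, in [0, 2*pi).
circ_coord = np.mod(np.arctan2(proj[:, 1], proj[:, 0]), 2 * np.pi)
print(circ_coord[:5])
```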
ACM Transactions on Graphics | 2011
Brian Summa; Giorgio Scorzelli; Ming Jiang; Peer-Timo Bremer; Valerio Pascucci
This article presents a simple framework for progressive processing of high-resolution images with minimal resources. We demonstrate this framework's effectiveness by implementing an adaptive, multi-resolution solver for gradient-based image processing that, for the first time, is capable of handling gigapixel imagery in real time. With our system, artists can use commodity hardware to interactively edit massive imagery and apply complex operators, such as seamless cloning, panorama stitching, and tone mapping. We introduce a progressive Poisson solver that processes images in a purely coarse-to-fine manner, providing near-instantaneous global approximations for interactive display (see Figure 1). We also allow for data-driven adaptive refinements to locally emulate the effects of a global solution. These techniques, combined with a fast, cache-friendly data access mechanism, allow the user to interactively explore and edit massive imagery, with the illusion of having a full solution at hand. In particular, we demonstrate the interactive modification of gigapixel panoramas that previously required extensive offline processing. Even with massive satellite images surpassing a hundred gigapixels in size, we enable repeated interactive editing in a dynamically changing environment. Images at these scales are significantly beyond the purview of previous methods yet are processed interactively using our techniques. Finally, our system provides a robust and scalable out-of-core solver that consistently offers high-quality solutions while maintaining strict control over system resources.
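The sketch below illustrates, under simplifying assumptions, the coarse-to-fine idea behind a progressive Poisson solve: restrict the divergence to a small grid, solve cheaply there, then upsample and refine with a few Jacobi sweeps. The fixed pyramid and plain Jacobi smoother are stand-ins for the paper's adaptive, out-of-core solver.

```python
# Hedged sketch of a coarse-to-fine gradient-domain solve: not the paper's
# progressive Poisson solver, just the basic idea on a tiny fixed pyramid.
import numpy as np

def jacobi(u, div, iters):
    """A few Jacobi sweeps for the discrete Poisson equation lap(u) = div."""
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] - div[1:-1, 1:-1])
    return u

def progressive_poisson(div, levels=3, iters=20):
    # Restrict the divergence by summing 2x2 blocks (crude multigrid-style restriction).
    pyr = [div]
    for _ in range(levels - 1):
        d = pyr[-1]
        pyr.append(d[::2, ::2] + d[1::2, ::2] + d[::2, 1::2] + d[1::2, 1::2])
    # Solve on the coarsest grid first, then upsample and refine level by level.
    u = np.zeros_like(pyr[-1])
    for level in reversed(range(levels)):
        u = jacobi(u, pyr[level], iters)
        if level > 0:
            u = np.kron(u, np.ones((2, 2)))  # nearest-neighbor upsample
    return u

div = np.random.rand(64, 64) - 0.5
print(progressive_poisson(div).shape)
```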
IEEE VGTC Conference on Visualization | 2010
Huy T. Vo; Daniel K. Osmari; Brian Summa; João Luiz Dihl Comba; Valerio Pascucci; Cláudio T. Silva
We propose a new framework design for exploiting multi-core architectures in the context of visualization dataflow systems. Recent hardware advancements have greatly increased the levels of parallelism available, with all indications showing this trend will continue in the future. Existing visualization dataflow systems have attempted to take advantage of these new resources, though they still have a number of limitations when deployed on shared-memory multi-core architectures. Ideally, visualization systems should be built on top of a parallel dataflow scheme that can optimally utilize CPUs and assign resources adaptively to pipeline elements. We propose the design of a flexible dataflow architecture aimed at addressing many of the shortcomings of existing systems, including a unified execution model for both demand-driven and event-driven models; a resource scheduler that can automatically make decisions on how to allocate computing resources; and support for more general streaming data structures which include unstructured elements. We have implemented our system on top of VTK with backward compatibility. In this paper, we provide evidence of performance improvements on a number of applications.
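A minimal sketch of a demand-driven dataflow executed on a thread pool is shown below; the Module class, scheduler, and pipeline are invented for illustration and are not the VTK-based system described in the paper.

```python
# Minimal sketch, under assumptions of my own, of a demand-driven dataflow on a
# thread pool: each module runs once its inputs are ready, and independent
# branches of the pipeline proceed in parallel.
from concurrent.futures import ThreadPoolExecutor

class Module:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self._future = None

    def request(self, pool):
        """Demand-driven pull: recursively request inputs, then schedule self."""
        if self._future is None:
            upstream = [m.request(pool) for m in self.inputs]
            self._future = pool.submit(
                lambda: self.fn(*[f.result() for f in upstream]))
        return self._future

# Toy pipeline: one source feeding two independent filters, merged at the end.
source = Module("source", lambda: list(range(10)))
smooth = Module("smooth", lambda d: [x * 0.5 for x in d], [source])
contour = Module("contour", lambda d: [x for x in d if x > 2], [source])
merge = Module("merge", lambda a, b: (a, b), [smooth, contour])

with ThreadPoolExecutor(max_workers=4) as pool:
    print(merge.request(pool).result())
```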
International Conference on Cluster Computing | 2011
Sidharth Kumar; Venkatram Vishwanath; Philip H. Carns; Brian Summa; Giorgio Scorzelli; Valerio Pascucci; Robert B. Ross; Jacqueline H. Chen; Hemanth Kolla; Ray W. Grout
The IDX data format provides efficient, cache-oblivious, and progressive access to large-scale scientific datasets by storing the data in a hierarchical Z (HZ) order. Data stored in IDX format can be visualized in an interactive environment allowing for meaningful explorations with minimal resources. This technology enables real-time, interactive visualization and analysis of large datasets on a variety of systems ranging from desktop and laptop computers to portable devices such as iPhones/iPads and over the web. While the existing ViSUS API for writing IDX data is serial, there are obvious advantages to applying the IDX format to the output of large-scale scientific simulations. We have therefore developed PIDX, a parallel API for writing data in the IDX format. With PIDX it is now possible to generate IDX datasets directly from large-scale scientific simulations, with the added advantage of real-time monitoring and visualization of the generated data. In this paper, we provide an overview of the IDX file format and how it is generated using PIDX. We then present a data model description and a novel aggregation strategy to enhance the scalability of the PIDX library. The S3D combustion application is used as an example to demonstrate the efficacy of PIDX for a real-world scientific simulation. S3D is used for fundamental studies of turbulent combustion requiring exceptionally high-fidelity simulations. PIDX achieves up to 18 GiB/s I/O throughput at 8,192 processes for S3D to write data out in the IDX format. This allows for interactive analysis and visualization of S3D data, thus enabling in situ analysis of the S3D simulation.
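The sketch below illustrates the Z-order (Morton) bit interleaving that underlies HZ ordering, plus a rough notion of hierarchy level derived from trailing zero bits. The real IDX/HZ bookkeeping used by ViSUS and PIDX differs in detail, so treat this purely as a conceptual toy.

```python
# Conceptual toy: Morton (Z-order) indexing and a crude "hierarchy level" from
# trailing zeros. Not the actual IDX/HZ file layout.
def morton3(x, y, z, bits=10):
    """Interleave the bits of x, y, z into a single Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def hz_level(code, max_level):
    """Coarser samples correspond to indices with more trailing zero bits."""
    if code == 0:
        return 0
    level = max_level
    while code & 1 == 0:
        code >>= 1
        level -= 1
    return level

codes = sorted(morton3(x, y, z) for x in range(4) for y in range(4) for z in range(4))
print(codes[:8], hz_level(codes[1], max_level=6))
```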
IEEE Transactions on Visualization and Computer Graphics | 2015
Sujin Philip; Brian Summa; Julien Tierny; Peer-Timo Bremer; Valerio Pascucci
Gigapixel panoramas are an increasingly popular digital image application. They are often created as a mosaic of many smaller images. The mosaic acquisition can take many hours, causing the individual images to differ in exposure and lighting conditions. A blending operation is often necessary to give the appearance of a seamless image. The blending quality depends on the magnitude of discontinuity along the image boundaries. Often, new boundaries, or seams, are first computed that minimize this transition. Current techniques based on multi-labeling Graph Cuts are too slow and memory intensive for gigapixel-sized panoramas. In this paper, we present a parallel, out-of-core seam computing technique that is fast, has a small memory footprint, and is capable of running efficiently on different types of parallel systems. Its maximum memory usage is configurable, in the form of a cache, which can improve performance by reducing redundant disk I/O and computations. It shows near-perfect scaling on symmetric multiprocessing systems and good scaling on clusters and distributed shared memory systems. Our technique improves the time required to compute seams for gigapixel imagery from many hours (or even days) to just a few minutes, while still producing boundaries with energy that is on par with Graph Cuts.
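As an illustration of the configurable cache mentioned above, the sketch below implements a generic least-recently-used tile cache with a fixed budget; the tile size, keys, and loader are assumptions, not the paper's implementation.

```python
# Hedged sketch of a bounded tile cache for out-of-core processing: tiles are
# loaded on demand and the least recently used tile is evicted when the budget
# is exceeded. File layout, tile size, and names are illustrative only.
from collections import OrderedDict
import numpy as np

class TileCache:
    def __init__(self, max_tiles, load_fn):
        self.max_tiles, self.load_fn = max_tiles, load_fn
        self._tiles = OrderedDict()

    def get(self, key):
        if key in self._tiles:
            self._tiles.move_to_end(key)          # mark as most recently used
        else:
            self._tiles[key] = self.load_fn(key)  # cache miss: load from storage
            if len(self._tiles) > self.max_tiles:
                self._tiles.popitem(last=False)   # evict least recently used
        return self._tiles[key]

# Stand-in loader: real code would read a tile of the panorama from disk.
def fake_load(key):
    return np.zeros((256, 256, 3), dtype=np.uint8)

cache = TileCache(max_tiles=64, load_fn=fake_load)
tile = cache.get((3, 7))
print(tile.shape, len(cache._tiles))
```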
Archive | 2012
Valerio Pascucci; Giorgio Scorzelli; Brian Summa; Peer-Timo Bremer; Attila Gyulassy; Cameron Christensen; Sujin Philip; Sidharth Kumar
IEEE VGTC Conference on Visualization | 2016
Shusen Liu; Peer-Timo Bremer; Jayaraman J. Thiagarajan; Bei Wang; Brian Summa; Valerio Pascucci
Linear projections are one of the most common approaches to visualize high-dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower-ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. The proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.
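To make the ranking step concrete, the sketch below samples random 2D orthonormal frames (each spanning a point on the Grassmannian), scores each with a toy variance-based quality measure, and keeps the top one. The measure and all names are assumptions; the Grassmannian Atlas itself analyzes the global structure of such scores rather than only the top of a ranking.

```python
# Toy illustration of ranking candidate 2D linear projections by a quality
# measure. The variance-based measure is an assumption for illustration.
import numpy as np

def random_projection(dim, rng):
    """Orthonormal 2-frame spanning a random 2D subspace of R^dim."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
    return q

def quality(data, frame):
    """Toy measure: fraction of total variance captured by the projection."""
    proj = data @ frame
    return proj.var(axis=0).sum() / data.var(axis=0).sum()

rng = np.random.default_rng(1)
data = rng.normal(size=(300, 8)) @ rng.normal(size=(8, 8))  # correlated toy data
data -= data.mean(axis=0)
frames = [random_projection(8, rng) for _ in range(200)]
scores = [quality(data, f) for f in frames]
best = int(np.argmax(scores))
print(f"best of 200 candidate projections: quality {scores[best]:.3f}")
```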
IEEE Transactions on Visualization and Computer Graphics | 2013
Atul Rungta; Brian Summa; Dogan Demir; Peer-Timo Bremer; Valerio Pascucci
As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users will often choose either substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued together through ad hoc scripts and extensive, manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows to optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this paper, we present the ManyVis framework, which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.
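Purely as an illustration of cross-application coordination (not the ManyVis mechanism), the sketch below runs a tiny local broker that relays line-delimited JSON events between two stand-in tools; the port and message format are invented for this example.

```python
# Illustrative sketch of a minimal coordination layer: unrelated tools connect
# to a small local broker and exchange JSON events (e.g. a user selection), so
# a single interface could drive them all. All names, the port, and the message
# schema are assumptions.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 7654   # assumed local endpoint for the toy broker

def relay(src, dst):
    """Forward line-delimited messages from one tool's socket to another's."""
    for line in src.makefile("r"):
        dst.sendall(line.encode())

def broker():
    """Accept two tool connections and relay events in both directions."""
    srv = socket.create_server((HOST, PORT))
    a, _ = srv.accept()
    b, _ = srv.accept()
    threading.Thread(target=relay, args=(a, b), daemon=True).start()
    threading.Thread(target=relay, args=(b, a), daemon=True).start()

threading.Thread(target=broker, daemon=True).start()
time.sleep(0.2)  # give the toy broker a moment to start listening

# Two stand-in "applications": one publishes a selection event, the other reacts.
tool_a = socket.create_connection((HOST, PORT))
tool_b = socket.create_connection((HOST, PORT))
tool_a.sendall((json.dumps({"event": "select", "region": [0, 0, 64, 64]}) + "\n").encode())
print(json.loads(tool_b.makefile("r").readline()))
```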