Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Tom Peterka is active.

Publication


Featured research published by Tom Peterka.


IEEE Transactions on Visualization and Computer Graphics | 2008

Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System

Tom Peterka; Robert Kooima; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Thomas A. DeFanti

A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
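
To make the per-pixel barrier control concrete, here is a minimal sketch, assuming a single row of front-LCD pixels, an invented stripe period and duty cycle, and a hypothetical barrier_row helper with a 2D-region mask; it illustrates the idea of disabling the barrier per pixel rather than reproducing the Dynallax rendering code.

```python
import numpy as np

def barrier_row(width, period, phase, duty=0.5, is_2d=None):
    """Return True where the front LCD should be opaque (a barrier stripe)."""
    x = (np.arange(width) + phase) % period
    opaque = x < duty * period                   # opaque stripes block rear-LCD light
    if is_2d is not None:
        opaque = np.where(is_2d, False, opaque)  # 2D regions: barrier disabled per pixel
    return opaque

# hypothetical example: the right half of a 16-pixel row runs in 2D mode
mask_2d = np.arange(16) >= 8
print(barrier_row(16, period=4, phase=0, is_2d=mask_2d).astype(int))
# [1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0]
```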


International Conference on Computer Graphics and Interactive Techniques | 2005

The Varrier™ autostereoscopic virtual reality display

Daniel J. Sandin; Todd Margolis; Jinghua Ge; Javier Girado; Tom Peterka; Thomas A. DeFanti

Virtual reality (VR) has long been hampered by the gear needed to make the experience possible; specifically, stereo glasses and tracking devices. Autostereoscopic display devices are gaining popularity by freeing the user from stereo glasses; however, few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and produced a large-scale, high-resolution, head-tracked, barrier-strip autostereoscopic display system that produces a VR immersive experience without requiring the user to wear any encumbrances. The resulting system, called Varrier, is a passive parallax barrier, 35-panel tiled display that produces a wide-field-of-view, head-tracked VR experience. This paper presents background material related to parallax barrier autostereoscopy, provides system configuration and construction details, examines the Varrier interleaving algorithms used to produce the stereo images, introduces calibration and testing, and discusses the camera-based tracking subsystem.
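
As a rough sketch of barrier-strip interleaving, the following example (with an invented interleave function, whole-column masking, and fixed period and phase parameters, all simplifying assumptions) selects left- or right-eye pixels column by column; the actual Varrier algorithms operate per subpixel and recompute the pattern from the tracked head position.

```python
import numpy as np

def interleave(left, right, period=6, phase=0):
    """Select left- or right-eye pixels column by column behind a line-screen barrier."""
    h, w, _ = left.shape
    cols = (np.arange(w) + phase) % period
    use_left = cols < period // 2                 # first half of each period -> left eye
    return np.where(use_left[None, :, None], left, right)

# toy example: 4x8 RGB images, left eye black, right eye white
left = np.zeros((4, 8, 3), dtype=np.uint8)
right = np.full((4, 8, 3), 255, dtype=np.uint8)
print(interleave(left, right)[0, :, 0])          # alternating runs of 0 (left) and 255 (right)
```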


Proceedings of SPIE | 2013

CAVE2: a hybrid reality environment for immersive simulation and information analysis

Alessandro Febretti; Arthur Nishimoto; Terrance Thigpen; Jonas Talandis; Lance Long; Jd Pirtle; Tom Peterka; Alan Verlo; Maxine D. Brown; Dana Plepys; Daniel J. Sandin; Luc Renambot; Andrew E. Johnson; Jason Leigh

Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In the 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Simultaneous cryo X-ray ptychographic and fluorescence microscopy of green algae

Junjing Deng; David J. Vine; Si Chen; Youssef S. G. Nashed; Qiaoling Jin; Nicholas W. Phillips; Tom Peterka; Robert B. Ross; Stefan Vogt; Chris Jacobsen

Significance: X-ray fluorescence microscopy provides unparalleled sensitivity for measuring the distribution of trace elements in many-micrometer-thick specimens, whereas ptychography offers a path to the imaging of weakly fluorescing biological ultrastructure at beyond-focusing-optic resolution. We demonstrate here for the first time, to our knowledge, the combination of fluorescence and ptychography for imaging frozen-hydrated specimens at cryogenic temperatures, with excellent structural and chemical preservation. This combined approach will have significant impact on studies of the intracellular localization of nanocomposites with attached therapeutic or diagnostic agents, help elucidate the roles of trace metals in cell development, and further the study of diseases where trace metal misregulation is suspected (including neurodegenerative diseases).

Trace metals play important roles in normal and in disease-causing biological functions. X-ray fluorescence microscopy reveals trace elements with no dependence on binding affinities (unlike with visible light fluorophores) and with improved sensitivity relative to electron probes. However, X-ray fluorescence is not very sensitive for showing the light elements that comprise the majority of cellular material. Here we show that X-ray ptychography can be combined with fluorescence to image both cellular structure and trace element distribution in frozen-hydrated cells at cryogenic temperatures, with high structural and chemical fidelity. Ptychographic reconstruction algorithms deliver phase and absorption contrast images at a resolution beyond that of the illuminating lens or beam size. Using 5.2-keV X-rays, we have obtained sub-30-nm resolution structural images and ~90-nm-resolution fluorescence images of several elements in frozen-hydrated green algae. This combined approach offers a way to study the role of trace elements in their structural context.


International Parallel and Distributed Processing Symposium | 2011

A Study of Parallel Particle Tracing for Steady-State and Time-Varying Flow Fields

Tom Peterka; Robert B. Ross; Boonthanome Nouanesengsy; Teng-Yok Lee; Han-Wei Shen; Wesley Kendall; Jian Huang

Particle tracing for streamline and pathline generation is a common method of visualizing vector fields in scientific data, but it is difficult to parallelize efficiently because of demanding and widely varying computational and communication loads. In this paper we scale parallel particle tracing for visualizing steady and unsteady flow fields well beyond previously published results. We configure the 4D domain decomposition into spatial and temporal blocks that combine in-core and out-of-core execution in a flexible way that favors faster run time or smaller memory. We also compare static and dynamic partitioning approaches. Strong and weak scaling curves are presented for tests conducted on an IBM Blue Gene/P machine at up to 32K processes using a parallel flow visualization library that we are developing. Datasets are derived from computational fluid dynamics simulations of thermal hydraulics, liquid mixing, and combustion.
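
The block-parallel structure described above can be sketched in a few lines, assuming a regular 1D spatial decomposition, a synthetic velocity field, and in-memory hand-offs in place of MPI messages; the trace and block_of helpers are hypothetical, and the paper's 4D space-time blocks, out-of-core execution, and dynamic partitioning are not modeled.

```python
def block_of(x, block_width):
    return int(x // block_width)

def trace(seeds, velocity, xmax=8.0, block_width=2.0, dt=0.1, max_steps=200):
    # particles grouped by owning block, as a block-parallel code would hold them
    blocks = {b: [] for b in range(int(xmax / block_width))}
    for x in seeds:
        blocks[block_of(x, block_width)].append(x)

    finished = []
    for _ in range(max_steps):
        moved = {b: [] for b in blocks}
        for b, particles in blocks.items():
            for x in particles:
                x_new = x + dt * velocity(x)                 # one advection step
                if not (0.0 <= x_new < xmax):
                    finished.append(x_new)                   # particle left the domain
                elif block_of(x_new, block_width) == b:
                    moved[b].append(x_new)                   # stays in this block
                else:
                    moved[block_of(x_new, block_width)].append(x_new)  # "send" to neighbor block
        blocks = moved
        if all(len(p) == 0 for p in blocks.values()):
            break
    return finished

print(len(trace(seeds=[0.5, 1.5, 3.2], velocity=lambda x: 1.0 + 0.1 * x)))  # 3 particles exit
```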


IEEE International Conference on High Performance Computing, Data and Analytics | 2009

A configurable algorithm for parallel image-compositing applications

Tom Peterka; David Goodell; Robert B. Ross; Han-Wei Shen; Rajeev Thakur

Collective communication operations can dominate the cost of large-scale parallel algorithms. Image compositing in parallel scientific visualization is a reduction operation where this is the case. We present a new algorithm called Radix-k that in many cases performs better than existing compositing algorithms. It does so through a set of configurable parameters, the radices, that determine the number of communication partners in each message round. The algorithm embodies and unifies binary swap and direct-send, two of the best-known compositing methods, and enables numerous other configurations through appropriate choices of radices. While the algorithm is not tied to a particular computing architecture or network topology, the selection of radices allows Radix-k to take advantage of new supercomputer interconnect features such as multiporting. We show scalability across image size and system size, including both power-of-two and non-power-of-two process counts.
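
The round structure can be illustrated with a small sketch of an assumed grouping scheme (the radix_k_groups function below is hypothetical, not the authors' implementation): a list of radices that factors the process count determines which ranks exchange image pieces in each round, with all-2 radices reducing to binary swap and a single radix equal to the process count reducing to direct-send.

```python
def radix_k_groups(p, radices):
    """Group p ranks into communication rounds given a factorization of p."""
    prod = 1
    for k in radices:
        prod *= k
    assert prod == p, "radices must multiply to the process count"

    rounds = []
    stride = 1                                   # spacing between group members this round
    for k in radices:
        groups = {}
        for rank in range(p):
            # ranks that share a group id exchange image pieces in this round
            gid = (rank // (stride * k)) * stride + rank % stride
            groups.setdefault(gid, []).append(rank)
        rounds.append(list(groups.values()))
        stride *= k
    return rounds

for rnd, groups in enumerate(radix_k_groups(8, [4, 2])):
    print("round", rnd, groups)
# round 0 [[0, 1, 2, 3], [4, 5, 6, 7]]      (groups of 4, direct-send-like within a group)
# round 1 [[0, 4], [1, 5], [2, 6], [3, 7]]  (pairs, binary-swap-like)
```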


IEEE Symposium on Large Data Analysis and Visualization | 2011

Scalable parallel building blocks for custom data analysis

Tom Peterka; Robert B. Ross; Attila Gyulassy; Valerio Pascucci; Wesley Kendall; Han-Wei Shen; Teng-Yok Lee; Abon Chaudhuri

We present a set of building blocks that provide scalable data movement capability to computational scientists and visualization researchers for writing their own parallel analysis. The set includes scalable tools for domain decomposition, process assignment, parallel I/O, global reduction, and local neighborhood communication, tasks that are common across many analysis applications. The global reduction is performed with a new algorithm, described in this paper, that efficiently merges blocks of analysis results into a smaller number of larger blocks. The merging is configurable in the number of blocks that are reduced in each round, the number of rounds, and the total number of resulting blocks. We highlight the use of our library in two analysis applications: parallel streamline generation and parallel Morse-Smale topological analysis. The first case uses an existing local neighborhood communication algorithm, whereas the latter uses the new merge algorithm.
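
A schematic sketch of such a round-configurable merge reduction follows, assuming in-memory blocks and a user-supplied merge function; the merge_reduce name and structure are illustrative, not the library's API.

```python
def merge_reduce(blocks, factors, merge):
    """Merge groups of k blocks per round; r rounds with factors [k1, ..., kr]
    reduce n blocks to n / (k1 * ... * kr) larger blocks."""
    for k in factors:
        assert len(blocks) % k == 0, "each round must divide the block count evenly"
        blocks = [merge(blocks[i:i + k]) for i in range(0, len(blocks), k)]
    return blocks

# toy example: 16 per-block histograms merged down to 2 blocks in 2 rounds
histograms = [[i, i * 2] for i in range(16)]
combine = lambda group: [sum(col) for col in zip(*group)]   # element-wise sum
print(merge_reduce(histograms, factors=[4, 2], merge=combine))
# [[28, 56], [92, 184]]
```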


International Conference on Parallel Processing | 2009

End-to-End Study of Parallel Volume Rendering on the IBM Blue Gene/P

Tom Peterka; Hongfeng Yu; Robert B. Ross; Kwan-Liu Ma; Robert Latham

In addition to their role as simulation engines, modern supercomputers can be harnessed for scientific visualization. Their extensive concurrency, parallel storage systems, and high-performance interconnects can mitigate the expanding size and complexity of scientific datasets and prepare for in situ visualization of these data. In ongoing research into testing parallel volume rendering on the IBM Blue Gene/P (BG/P), we measure performance of disk I/O, rendering, and compositing on large datasets, and evaluate bottlenecks with respect to system-specific I/O and communication patterns. To extend the scalability of the direct-send image compositing stage of the volume rendering algorithm, we limit the number of compositing cores when many small messages are exchanged. To improve the data-loading stage of the volume renderer, we study the I/O signatures of the algorithm in detail. The results of this research affirm that a distributed-memory computing architecture such as BG/P is a scalable platform for large visualization problems.
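
The idea of capping the number of compositing ranks can be sketched as follows, under the simplifying assumptions of 1D image strips, scalar pixels, and a serial stand-in for the message exchange; the direct_send function is hypothetical and is not the renderer's actual compositing code.

```python
def direct_send(partial_images, num_compositors, over):
    """Each renderer holds a full-width partial image; strips of the final image
    are assigned to at most num_compositors ranks, which blend the contributions
    they would receive from every renderer."""
    width = len(partial_images[0])
    strip = -(-width // num_compositors)            # ceiling division
    final = [None] * width
    for c in range(num_compositors):
        lo, hi = c * strip, min((c + 1) * strip, width)
        for x in range(lo, hi):                     # compositor c blends column x
            pixel = partial_images[0][x]
            for img in partial_images[1:]:
                pixel = over(pixel, img[x])
            final[x] = pixel
    return final

# toy example: 4 renderers, 2 compositors, additive "blend" on scalar pixels
partials = [[r] * 8 for r in range(4)]
print(direct_send(partials, num_compositors=2, over=lambda a, b: a + b))
# [6, 6, 6, 6, 6, 6, 6, 6]
```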


Eurographics Workshop on Parallel Graphics and Visualization | 2008

Parallel volume rendering on the IBM Blue Gene/P

Tom Peterka; Hongfeng Yu; Robert B. Ross; Kwan-Liu Ma

Parallel volume rendering is implemented and tested on an IBM Blue Gene distributed-memory parallel architecture. The goal of studying the cost of parallel rendering on a new class of supercomputers such as the Blue Gene/P is not necessarily to achieve real-time rendering rates. It is to identify and understand the extent of bottlenecks and interactions between various components that affect the design of future visualization solutions on these machines, solutions that may offer alternatives to hardware-accelerated volume rendering, for example, when large volumes, large image sizes, and very high quality results are dictated by peta- and exascale data. As a step in that direction, this study presents data from experiments under a number of conditions, including dataset size, number of processors, low- and high-quality rendering, offline storage of results, and streaming of images for remote display. Performance is divided into three main sections of the algorithm: disk I/O, rendering, and compositing. The dynamic balance among these tasks varies with the number of processors and other conditions. Lessons learned from the work include understanding the balance between parallel I/O, computation, and communication within the context of visualization on supercomputers; recommendations for tuning and optimization; and opportunities for further scaling. Extrapolating these results to very large data and image sizes suggests that a distributed-memory high-performance computing architecture such as the Blue Gene is a viable platform for some types of visualization at very large scales.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

An image compositing solution at scale

Kenneth Moreland; Wesley Kendall; Tom Peterka; Jian Huang

The only proven method for performing distributed-memory parallel rendering at large scales, tens of thousands of nodes, is a class of algorithms called sort last. The fundamental operation of sort-last parallel rendering is an image composite, which combines a collection of images generated independently on each node into a single blended image. Over the years numerous image compositing algorithms have been proposed, as well as several enhancements and rendering modes to these core algorithms. However, these algorithms have typically been tested with an arbitrary set of enhancements, if any are applied at all. In this paper we take a leading production-quality image compositing framework, IceT, and use it as a testing framework for the leading image compositing algorithms of today. As we scale IceT to ever-increasing job sizes, we consider the image compositing systems holistically, incorporate numerous optimizations, and discover several improvements to the process never considered before. We conclude by demonstrating our solution on 64K cores of the Intrepid Blue Gene/P at Argonne National Laboratory.

Collaboration


Dive into Tom Peterka's collaborations.

Top Co-Authors

Robert B. Ross, Argonne National Laboratory
Daniel J. Sandin, University of Illinois at Chicago
Andrew E. Johnson, University of Illinois at Chicago
Chris Jacobsen, Argonne National Laboratory
Junjing Deng, Northwestern University
Jason Leigh, University of Hawaii at Manoa
Jinghua Ge, University of Illinois at Chicago