Publication


Featured research published by Joseph A. Insley.


Review of Scientific Instruments | 2001

A high-throughput x-ray microtomography system at the Advanced Photon Source

Yuxin Wang; Francesco De Carlo; Derrick C. Mancini; Ian McNulty; Brian Tieman; John Bresnahan; Ian T. Foster; Joseph A. Insley; Peter Lane; Gregor von Laszewski; Carl Kesselman; Mei-Hui Su; Marcus Thiebaux

(Received 14 November 2000; accepted for publication 23 January 2001)

A third-generation synchrotron radiation source provides enough brilliance to acquire complete tomographic data sets at 100 nm or better resolution in a few minutes. To take advantage of such high-brilliance sources at the Advanced Photon Source, we have constructed a pipelined data acquisition and reconstruction system that combines a fast detector system, high-speed data networks, and massively parallel computers to rapidly acquire the projection data and perform the reconstruction and rendering calculations. With the current setup, a data set can be obtained and reconstructed in tens of minutes. A specialized visualization computer makes rendered three-dimensional (3D) images available to the beamline users minutes after the data acquisition is completed. This system is capable of examining a large number of samples at sub-μm 3D resolution or studying the full 3D structure of a dynamically evolving sample on a 10 min temporal scale. In the near future, we expect to increase the spatial resolution to below 100 nm by using zone-plate x-ray focusing optics and to improve the time resolution by the use of a broadband x-ray monochromator and a faster detector system.
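The acquire-while-reconstructing pipeline described in this abstract can be illustrated with a minimal producer-consumer sketch. The queue, the thread layout, and the trivial stand-in for the reconstruction step are illustrative placeholders, not the beamline's actual software:

```python
import queue
import threading

def acquire(projections, out_q):
    # Stand-in for the detector: push each projection downstream
    # as soon as it is read out, instead of after the full scan.
    for p in projections:
        out_q.put(p)
    out_q.put(None)  # sentinel: acquisition finished

def reconstruct(in_q, results):
    # Stand-in for the parallel reconstruction stage: it consumes
    # projections concurrently with the ongoing acquisition.
    while (p := in_q.get()) is not None:
        results.append(p * 2)  # placeholder for filtered back-projection

q = queue.Queue()
results = []
worker = threading.Thread(target=reconstruct, args=(q, results))
worker.start()
acquire([1, 2, 3], q)  # acquisition and reconstruction overlap in time
worker.join()
```

The point of the sketch is only the overlap: total time approaches max(acquisition, reconstruction) rather than their sum, which is what lets rendered volumes appear minutes after a scan finishes.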


IEEE Computer | 1999

Distance visualization: data exploration on the grid

Ian T. Foster; Joseph A. Insley; G. von Laszewski; Carl Kesselman; Marcus Thiebaux

Our increased ability to model and measure a wide variety of phenomena has left us awash in data. In the immediate future, the authors anticipate collecting data at the rate of terabytes per day from many classes of applications, including simulations running on teraFLOPS-class computers and experimental data produced by increasingly more sensitive and accurate instruments, such as telescopes, microscopes, particle accelerators and satellites. Generating or acquiring data is not an end in itself but a vehicle for obtaining insights. While data analysis and reduction have a role to play, in many situations we achieve understanding only when a human being interprets the data. Visualization has emerged as an important tool for extracting meaning from the large volumes of data that scientific instruments and simulations produce. The authors describe an online system that supports 3D tomographic image reconstruction, and subsequent collaborative analysis, of data from remote scientific instruments.


high performance distributed computing | 2002

GridMapper: a tool for visualizing the behavior of large-scale distributed systems

William E. Allcock; Joseph Bester; John Bresnahan; Ian T. Foster; Jarek Gawor; Joseph A. Insley; Joseph M. Link; Michael E. Papka

Grid applications can combine the use of computation, storage, network, and other resources. These resources are often geographically distributed, adding to application complexity and thus the difficulty of understanding application performance. We present GridMapper, a tool for monitoring and visualizing the behavior of such distributed systems. GridMapper builds on basic mechanisms for registering, discovering, and accessing performance information sources, as well as for mapping from domain names to physical locations. The visualization system itself then supports the automatic layout of distributed sets of such sources and animation of their activities. We use a set of examples to illustrate how the system can provide valuable insights into the behavior and performance of a range of different applications.


Future Generation Computer Systems | 2003

High-resolution remote rendering of large datasets in a collaborative environment

Nicholas T. Karonis; Michael E. Papka; Justin Binns; John Bresnahan; Joseph A. Insley; David Jones; Joseph M. Link

In a time when computational and data resources are distributed around the globe, users need to interact with these resources and each other easily and efficiently. The Grid, by definition, represents a connection of distributed resources that can be used regardless of the user's location. We have built a prototype visualization system using the Globus Toolkit, MPICH-G2, and the Access Grid in order to explore how future scientific collaborations may occur over the Grid. We describe our experience in demonstrating our system at iGrid 2002, where the United States and the Netherlands were connected via a high-latency, high-bandwidth network. In particular, we focus on issues related to a Grid-based application that couples a collaboration component (including a user interface to the Access Grid) with a high-resolution remote rendering component.


ieee international conference on high performance computing data and analytics | 2011

A new computational paradigm in multiscale simulations: application to brain blood flow

Leopold Grinberg; Joseph A. Insley; Vitali A. Morozov; Michael E. Papka; George Em Karniadakis; Dmitry A. Fedosov; Kalyan Kumaran

Interfacing atomistic-based with continuum-based simulation codes is now required in many multiscale physical and biological systems. We present the computational advances that have enabled the first multiscale simulation on 190,740 processors by coupling a high-order (spectral element) Navier-Stokes solver with a stochastic (coarse-grained) Molecular Dynamics solver based on Dissipative Particle Dynamics (DPD). The key contributions are proper interface conditions for overlapped domains, topology-aware communication, SIMDization, multiscale visualization and a new domain partitioning for atomistic solvers. We study blood flow in a patient-specific cerebrovasculature with a brain aneurysm, and analyze the interaction of blood cells with the arterial walls endowed with a glycocalyx causing thrombus formation and eventual aneurysm rupture. The macro-scale dynamics (about 3 billion unknowns) are resolved by NεκTαr - a spectral element solver; the micro-scale flow and cell dynamics within the aneurysm are resolved by an in-house version of DPD-LAMMPS (for an equivalent of about 100 billion molecules).


Future Generation Computer Systems | 2006

Simulating and visualizing the human arterial system on the TeraGrid

Suchuan Dong; Joseph A. Insley; Nicholas T. Karonis; Michael E. Papka; Justin Binns; George Em Karniadakis

We present a Grid solution to a grand challenge problem, the simulation and visualization of the human arterial system. We implemented our simulation and visualization system on the National Science Foundation's TeraGrid and demonstrated it at the iGrid 2005 conference in San Diego, California. We discuss our experience in running on a computational Grid and present observations and suggestions for improving similar experiences.


SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1999

Quasi-real-time x-ray microtomography system at the Advanced Photon Source

F. DeCarlo; Ian T. Foster; Joseph A. Insley; Carl Kesselman; Peter Lane; Derrick C. Mancini; Ian McNulty; Mei-Hui Su; Brian Tieman; Yuxin Wang; G. von Laszewski

The combination of high-brilliance x-ray sources, fast detector systems, wide-bandwidth networks, and parallel computers can substantially reduce the time required to acquire, reconstruct, and visualize high-resolution three-dimensional tomographic data sets. A quasi-real-time computed x-ray microtomography system has been implemented at the 2-BM beamline at the Advanced Photon Source at Argonne National Laboratory. With this system, a complete tomographic data set can be collected in about 15 minutes. Immediately after each projection is obtained, it is rapidly transferred to the Mathematics and Computing Sciences Division, where preprocessing and reconstruction calculations are performed concurrently with the data acquisition by an SGI parallel computer. The reconstruction results, once completed, are transferred to a visualization computer that performs the volume rendering calculations. Rendered images of the reconstructed data are available for viewing back at the beamline experiment station minutes after the data acquisition is complete. The fully pipelined data acquisition and reconstruction system also gives us the option to acquire the tomographic data set in several cycles, initially with coarse then with fine angular steps. At present the projections are acquired with a straight-ray projection imaging scheme using 5-20 keV hard x rays in either phase or amplitude contrast mode at a 1-10 micrometer resolution. In the future, we expect to increase the resolution of the projections to below 100 nm by using a focused x-ray beam at the 2-ID-B beamline and to reduce the combined acquisition and computation time to the 1 min scale with improvements in the detectors, network links, software pipeline, and computation algorithms.


eurographics workshop on parallel graphics and visualization | 2015

Large-scale parallel visualization of particle-based simulations using point sprites and level-of-detail

Silvio Rizzi; Mark Hereld; Joseph A. Insley; Michael E. Papka; Thomas D. Uram; Venkatram Vishwanath

Recent large-scale particle-based simulations are generating vast amounts of data, posing a challenge to visualization algorithms. One possibility for addressing this challenge is to map particles into a regular grid for volume rendering, which carries the disadvantages of inefficient use of memory and undesired losses of dynamic range. As an alternative, we propose a method to efficiently visualize these massive particle datasets using point rendering techniques with neither loss of dynamic range nor memory overheads. In addition, a hierarchical reorganization of the data is desired to deliver meaningful visual representations of a large number of particles in a limited number of pixels, preserving point locality and also helping achieve interactive frame rates. In this paper, we present a framework for parallel rendering of large-scale particle data sets combining point sprites and z-ordering. The latter is used to create a multi-level representation of the data which helps improve frame rates. Performance and scalability are evaluated on a GPU-based visualization cluster, scaling up to 128 GPUs. Results using particle datasets of up to 32 billion particles are shown.
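The z-ordering mentioned here refers to sorting particles by a Morton key, which interleaves the bits of their coordinates so that points close in space land close together in the sorted order. A common 30-bit encoding (assuming coordinates quantized to 10 bits each, a detail not taken from the paper) can be sketched as:

```python
def morton3d(x: int, y: int, z: int) -> int:
    """Interleave the bits of three 10-bit coordinates into a 30-bit Morton key."""
    def spread(v: int) -> int:
        # Spread the 10 low bits of v so they occupy every third bit position.
        v &= 0x3FF
        v = (v | (v << 16)) & 0x030000FF
        v = (v | (v << 8)) & 0x0300F00F
        v = (v | (v << 4)) & 0x030C30C3
        v = (v | (v << 2)) & 0x09249249
        return v
    # x takes bits 0, 3, 6, ...; y bits 1, 4, 7, ...; z bits 2, 5, 8, ...
    return spread(x) | (spread(y) << 1) | (spread(z) << 2)
```

Sorting particles by this key yields the spatial locality that a level-of-detail hierarchy needs: any contiguous run of the sorted array covers a compact region of space, so coarser levels can be built by subsampling runs.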


eurographics workshop on parallel graphics and visualization | 2014

Performance modeling of vl3 volume rendering on GPU-based clusters

Silvio Rizzi; Mark Hereld; Joseph A. Insley; Michael E. Papka; Thomas D. Uram; Venkatram Vishwanath

This paper presents an analytical model for parallel volume rendering of large datasets using GPU-based clusters. The model is focused on the parallel volume rendering and compositing stages and predicts their performance requiring only a few input parameters. We also present vl3, a novel parallel volume rendering framework for visualization of large datasets. Its performance is evaluated on a GPU-based cluster, weak and strong scaling are studied, and model predictions are validated with experimental results on up to 128 GPUs.
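As a rough illustration of what such an analytical model can look like (the functional form and every constant below are hypothetical placeholders, not vl3's published model), a sort-last parallel renderer is often modeled as a per-GPU rendering term plus a compositing term that grows with the log2(N) stages of a binary-swap exchange:

```python
import math

def predict_frame_time(n_voxels, n_pixels, n_gpus,
                       t_sample=2e-9, t_blend=1e-9, t_exchange=5e-3):
    """Toy frame-time model: each GPU renders its share of the volume,
    then log2(n_gpus) binary-swap stages blend and exchange pixels.
    All per-unit costs are illustrative, not measured values."""
    render = (n_voxels / n_gpus) * t_sample
    if n_gpus > 1:
        stages = math.log2(n_gpus)
        composite = stages * (n_pixels * t_blend / n_gpus + t_exchange)
    else:
        composite = 0.0
    return render + composite

# A model of this shape predicts where strong scaling flattens out:
# rendering shrinks with 1/N while compositing grows with log2(N).
t_1 = predict_frame_time(1e9, 1e6, 1)
t_128 = predict_frame_time(1e9, 1e6, 128)
```

With only a few measured per-unit costs as inputs, such a model can be validated against experimental frame times, which is the methodology the abstract describes.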


IEEE Transactions on Visualization and Computer Graphics | 2007

Runtime Visualization of the Human Arterial Tree

Joseph A. Insley; Michael E. Papka; Suchuan Dong; George Em Karniadakis; Nicholas T. Karonis

Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.

Collaboration


Dive into Joseph A. Insley's collaborations.

Top Co-Authors

Mark Hereld

Argonne National Laboratory


Silvio Rizzi

Argonne National Laboratory


Ian T. Foster

Argonne National Laboratory


John Bresnahan

Argonne National Laboratory


Vitali A. Morozov

Argonne National Laboratory


Thomas D. Uram

Argonne National Laboratory