Publication


Featured research published by William Scullin.


Scientific Reports | 2018

Low-dose x-ray tomography through a deep convolutional neural network

Xiaogang Yang; Vincent De Andrade; William Scullin; Eva L. Dyer; Narayanan Kasthuri; Francesco De Carlo; Doga Gursoy

Synchrotron-based X-ray tomography offers the potential for rapid large-scale reconstructions of the interiors of materials and biological tissue at fine resolution. However, for radiation-sensitive samples, there remain fundamental trade-offs between damaging samples during longer acquisition times and reducing signals with shorter acquisition times. We present a deep convolutional neural network (CNN) method that increases the acquired X-ray tomographic signal by at least a factor of 10 during low-dose fast acquisition by improving the quality of recorded projections. Short-exposure-time projections enhanced with CNNs show signal-to-noise ratios similar to long-exposure-time projections. They also show lower noise and more structural information than low-dose short-exposure acquisitions post-processed by other techniques. We evaluated this approach using simulated samples and further validated it with experimental data from radiation-sensitive mouse brains acquired in a tomographic setting with transmission X-ray microscopy. We demonstrate that automated algorithms can reliably trace brain structures in low-dose datasets enhanced with CNNs. This method can be applied to other tomographic or scanning-based X-ray imaging techniques and has great potential for studying faster dynamics in specimens.
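The core technique is supervised image-to-image denoising: a CNN is trained on paired short-exposure (noisy) and long-exposure (clean) projections so it can restore low-dose acquisitions before reconstruction. As a rough illustration only, the PyTorch sketch below shows a small residual denoising network of this kind; the layer counts, loss, and stand-in data are assumptions for the sketch, not the architecture published in the paper.

```python
# Illustrative sketch only: a small residual denoising CNN trained on paired
# short-exposure (noisy) and long-exposure (clean) projections. Layer sizes,
# loss, and data below are assumptions, not the network from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiseCNN(nn.Module):
    def __init__(self, width=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual formulation: the network predicts the noise, which is
        # subtracted from the noisy input to give the denoised projection.
        return x - self.body(x)

def train_step(model, optimizer, noisy, clean):
    """One optimization step on a batch of projection pairs."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DenoiseCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Stand-in tensors; real training would pair registered short- and
    # long-exposure projections of the same sample.
    noisy, clean = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)
    print("loss:", train_step(model, opt, noisy, clean))
```

A residual formulation (predicting the noise rather than the clean image) is a common design choice for denoising CNNs, since the network only has to learn the perturbation rather than reproduce the full image content.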


Advanced Structural and Chemical Imaging | 2017

Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

Tekin Bicer; Doga Gursoy; Vincent De Andrade; Rajkumar Kettimuthu; William Scullin; Francesco De Carlo; Ian T. Foster

Background: Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis.

Methods: We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source.

Results: Our experimental evaluations show that our optimizations and parallelization techniques can provide a 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration.

Conclusion: The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
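The key data structure is the replicated reconstruction object: each worker keeps its own full copy of the reconstruction grid, accumulates updates from its assigned projections without fine-grained locking, and the replicas are then merged in one collective step. (For scale, the reported 158× speedup on 384 cores corresponds to roughly 41% parallel efficiency.) The mpi4py sketch below illustrates this pattern at the process level only; it is a minimal sketch under those assumptions, not the Trace implementation, which is a compiled engine that also exploits thread-level shared-memory parallelism and further optimizations.

```python
# Minimal sketch of the replicated-reconstruction-object pattern at the
# process level (an illustration under stated assumptions, not Trace itself).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

GRID = (256, 256)   # illustrative reconstruction grid size
N_ANGLES = 360      # projection angles, dealt round-robin to processes
my_angles = range(rank, N_ANGLES, nprocs)

# The replicated reconstruction object: each process holds a private,
# full-size copy of the grid and updates it without any locking.
local_update = np.zeros(GRID, dtype=np.float64)
for angle in my_angles:
    # Placeholder for a real per-angle back-projection update; a dummy
    # contribution keeps the sketch self-contained and runnable.
    local_update += 1.0 / N_ANGLES

# Merge the replicas: one collective reduction replaces per-voxel locking.
combined = np.empty_like(local_update)
comm.Allreduce(local_update, combined, op=MPI.SUM)

if rank == 0:
    print("combined update mean:", combined.mean())
```

Run with, e.g., `mpiexec -n 4 python sketch.py`. Replication trades memory (one grid copy per worker) for synchronization: every worker updates privately and pays a single collective reduction per iteration instead of contending for a shared grid.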


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Toward a Proof of Concept Implementation of a Cloud Infrastructure on the Blue Gene/Q

Patrick Dreher; William Scullin; Mladen A. Vouk

Conventional cloud computing architectures may seriously constrain computational throughput for HPC applications and data analysis. The traditional approach to circumvent such problems has been to map such applications onto specialized hardware and co-processor architectures. These shortcomings have given rise to calls for software that can provide richer environments for implementing clouds on these HPC architectures. It was recently reported that a proof-of-concept cloud computing system was successfully embedded in a standard Blue Gene/P HPC supercomputer. This software-defined system rearranged user access to nodes and dynamically customized features of the BG/P architecture to map cloud systems and applications onto the BG/P. This paper reports on efforts to extend the results achieved on the BG/P to the newer BG/Q architecture. This work demonstrates a potential for a cloud to capitalize on the BG/Q infrastructure and provides a platform for developing better hybrid workflows and for experimentation with new schedulers and operating systems within a working HPC environment.


Proceedings of the HPC Systems Professionals Workshop | 2017

Lessons from the IBM Blue Gene Series of Supercomputers

William Scullin; Adam Scovel

The Argonne Leadership Computing Facility has operated IBM Blue Gene/L, /P, and /Q series supercomputers for over a decade. This paper discusses the lessons the authors learned from the Blue Gene architecture that are generally applicable to the design and operation of large high performance computing systems.


Journal of Physics: Conference Series | 2015

Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

Patrick Dreher; William Scullin; Mladen A. Vouk

Traditional high performance supercomputers are capable of delivering large, sustained, state-of-the-art computational resources to physics applications over extended periods of time using batch-mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch-oriented HPC environment. This paper reports on progress toward a proof-of-concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof-of-concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and on flexible cloud configurations without resorting to virtualization. Initial testing of the proof-of-concept system is done using the lattice QCD MILC code. These types of user-reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.


USENIX Large Installation System Administration Conference | 2008

Petascale system management experiences

Narayan Desai; Rick Bradshaw; Cory Lueninghoener; Andrew Cherry; Susan Coghlan; William Scullin


Chemical Communications | 2017

Fast Mg²⁺ diffusion in Mo₃(PO₄)₃O for Mg batteries

Ziqin Rong; Penghao Xiao; Miao Liu; Wenxuan Huang; Daniel C. Hannah; William Scullin; Kristin A. Persson; Gerbrand Ceder


Microscopy and Microanalysis | 2018

A Pipeline for Distributed Segmentation of Teravoxel Tomography Datasets

Mehdi Tondravi; William Scullin; Ming Du; Rafael Vescovi; Vincent De Andrade; Chris Jacobsen; Konrad P. Körding; Doga Gursoy; Eva L. Dyer


Materials Characterization | 2018

Automated correlative segmentation of large Transmission X-ray Microscopy (TXM) tomograms using deep learning

C. Shashank Kaira; Xiaogang Yang; Vincent De Andrade; Francesco De Carlo; William Scullin; Doga Gursoy; N. Chawla


Journal of Synchrotron Radiation | 2018

Tomosaic: efficient acquisition and reconstruction of teravoxel tomography data using limited-size synchrotron X-ray beams

Rafael Vescovi; Ming Du; Vincent De Andrade; William Scullin; Doğa Gürsoy; Chris Jacobsen

Collaboration


Dive into William Scullin's collaborations.

Top Co-Authors

Vincent De Andrade, Argonne National Laboratory
Doga Gursoy, Argonne National Laboratory
Francesco De Carlo, Argonne National Laboratory
Bill Spotz, Sandia National Laboratories
Chris Jacobsen, Argonne National Laboratory
Ming Du, Northwestern University
Mladen A. Vouk, North Carolina State University
Patrick Dreher, Massachusetts Institute of Technology
Rafael Vescovi, Argonne National Laboratory