
Publications


Featured research published by Tekin Bicer.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Smart: a MapReduce-like framework for in-situ scientific analytics

Yi Wang; Gagan Agrawal; Tekin Bicer; Wei Jiang

In-situ analytics has lately been shown to be an effective approach to reduce both I/O and storage costs for scientific analytics. Developing an efficient in-situ implementation, however, involves many challenges, including parallelization, data movement or sharing, and resource allocation. Based on the premise that MapReduce can be an appropriate API for specifying scientific analytics applications, we present a novel MapReduce-like framework that supports efficient in-situ scientific analytics, and address several challenges that arise in applying the MapReduce idea for in-situ processing. Specifically, our implementation can load simulated data directly from distributed memory, and it uses a modified API that helps meet the strict memory constraints of in-situ analytics. The framework is designed so that analytics can be launched from the parallel code region of a simulation program. We have developed both time sharing and space sharing modes for maximizing the performance in different scenarios, with the former even avoiding any copying of data from simulation to the analytics program. We demonstrate the functionality, efficiency, and scalability of our system, by using different simulation and analytics programs, executed on clusters with multi-core and many-core nodes.
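
As a rough illustration of the in-situ MapReduce model described above, the sketch below applies map and reduce functions directly to simulation data held in memory (here, a simple histogram). The function names and launching convention are hypothetical and are not Smart's actual API; it only conveys that no file-system round trip is involved.

import numpy as np

def map_func(value, emit):
    # Emit (bin, count) pairs for a coarse histogram of the simulated field.
    emit(int(value * 10), 1)

def reduce_func(key, counts):
    return key, sum(counts)

def run_in_situ(sim_chunk):
    # sim_chunk is this process's partition of the simulation's in-memory data;
    # nothing is written to, or read back from, the file system.
    partial = {}
    def emit(k, v):
        partial.setdefault(k, []).append(v)
    for value in np.ravel(sim_chunk):
        map_func(value, emit)
    return dict(reduce_func(k, v) for k, v in partial.items())

# Conceptually launched from the parallel region of the simulation after a time step:
histogram = run_in_situ(np.random.rand(1000))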


Optics Express | 2015

Hyperspectral image reconstruction for x-ray fluorescence tomography

D. Gürsoy; Tekin Bicer; Antonio Lanzirotti; Matthew Newville; Francesco De Carlo

A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allows for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without compromising reconstruction artifacts that impact the interpretation of results.
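
As a rough sketch of the estimator described above (the paper's exact penalty is not reproduced here), the reconstruction solves a penalized Poisson negative log-likelihood problem of the generic form:

\hat{x} = \arg\min_{x \ge 0} \; \sum_i \Big[ (Ax)_i - y_i \log (Ax)_i \Big] \;+\; \beta \big[ R_{\mathrm{spatial}}(x) + R_{\mathrm{spectral}}(x) \big]

where y_i are the observed fluorescence photon counts, A is the tomographic projection operator, and the two penalty terms sum differences between neighboring voxels and neighboring energy channels, so that a single weight beta enforces local continuity in both dimensions simultaneously.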


European Conference on Parallel Processing | 2015

Rapid Tomographic Image Reconstruction via Large-Scale Parallelization

Tekin Bicer; Doga Gursoy; Rajkumar Kettimuthu; Francesco De Carlo; Gagan Agrawal; Ian T. Foster

Synchrotron (x-ray) light sources permit investigation of the structure of matter at extremely small length and time scales. Advances in detector technologies enable increasingly complex experiments and more rapid data acquisition. However, analysis of the resulting data then becomes a bottleneck, preventing near-real-time error detection or experiment steering. We present here methods that leverage highly parallel computers to improve the performance of iterative tomographic image reconstruction applications. We apply these methods to the conventional per-slice parallelization approach and use them to implement a novel in-slice approach that can use many more processors. To address programmability, we implement the introduced methods in a high-performance MapReduce-like computing middleware, which is further optimized for reconstruction operations. Experiments with four reconstruction algorithms and two large datasets show that our methods can scale up to 8K cores on an IBM BG/Q supercomputer with almost perfect speedup and can reduce total reconstruction times for large datasets by more than 95.4% on 32K cores relative to 1K cores. Moreover, the average reconstruction times are improved from ~2 h (256 cores) to ~1 min (32K cores), thus enabling near-real-time use.
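
For concreteness, here is a minimal mpi4py sketch of the conventional per-slice parallelization mentioned above: each MPI rank reconstructs a disjoint subset of sinogram slices. This is not the paper's middleware, and reconstruct_slice is a placeholder; the in-slice approach goes further by also partitioning the work within a single slice across processes.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

num_slices = 2048                          # slices (sinograms) in the dataset
my_slices = range(rank, num_slices, size)  # round-robin slice assignment

def reconstruct_slice(s):
    # Placeholder for an iterative reconstruction (e.g., SIRT or MLEM) of slice s.
    return np.zeros((512, 512))

local_results = {s: reconstruct_slice(s) for s in my_slices}
comm.Barrier()  # all ranks finish before the full volume is assembled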


Philosophical Transactions of the Royal Society A | 2015

Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography

Doğa Gürsoy; Tekin Bicer; Jonathan Almer; Raj Kettimuthu; Stuart R. Stock; Francesco De Carlo

A maximum a posteriori approach is proposed for X-ray diffraction tomography to reconstruct the three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density, which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source at Argonne National Laboratory. The reconstruction results show a significant reduction of aliasing and streaking artefacts and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times and significantly improve beamtime efficiency.
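
In compact form, and with the same caveat as above (the exact prior used in the paper is not reproduced here), the estimate maximizes a log-posterior of the generic form:

\hat{x} = \arg\max_{x} \; \underbrace{\sum_i \big[ y_i \log (Ax)_i - (Ax)_i \big]}_{\text{Poisson log-likelihood}} \;-\; \underbrace{\beta\, R(x)}_{-\log \text{prior}}

where y are the measured diffraction counts, A is the forward projection operator, and R(x) is a penalty that reinforces smoothness or local continuity of the reconstructed phase and orientation maps.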


Advanced Structural and Chemical Imaging | 2017

Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

Tekin Bicer; Doga Gursoy; Vincent De Andrade; Rajkumar Kettimuthu; William Scullin; Francesco De Carlo; Ian T. Foster

Background: Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis.

Methods: We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source.

Results: Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to less than 5 min per iteration.

Conclusion: The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
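
The replicated reconstruction object idea can be illustrated with a small sketch: each thread accumulates updates into its own private copy of the reconstruction arrays, avoiding synchronization on shared voxels, and the replicas are merged afterwards. This mirrors the concept only; the names and the synthetic ray data below are made up, and Trace itself is an MPI+threads engine rather than Python.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

num_threads, shape = 4, (512, 512)
replicas = [np.zeros(shape) for _ in range(num_threads)]   # one replica per thread

def process_chunk(tid, rays):
    # Each thread back-projects its share of rays into its own replica, lock-free.
    for (i, j, weight) in rays:
        replicas[tid][i, j] += weight

# Synthetic "rays" (pixel index pairs with weights), partitioned across threads.
rays_per_thread = [[(k % 512, (k * 7) % 512, 1.0) for k in range(t, 10000, num_threads)]
                   for t in range(num_threads)]

with ThreadPoolExecutor(num_threads) as pool:
    pool.map(lambda t: process_chunk(t, rays_per_thread[t]), range(num_threads))

reconstruction_update = np.sum(replicas, axis=0)  # reduction step: merge the replicas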


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2013

A Compression Framework for Multidimensional Scientific Datasets

Tekin Bicer; Gagan Agrawal

Scientific simulations and instruments can generate tremendous amounts of data in short periods of time. Since the generated data are used for inferring new knowledge, it is important to store them efficiently and make them readily available to scientific endeavors. Although parallel and distributed systems can help ease the management of such data, transmission and storage remain challenging problems. Compression is a popular approach for reducing data transfer overheads and storage requirements. However, effectively supporting compression for scientific simulation data and integrating compression algorithms with simulation applications remain a challenge. In this work, we focus on the management of multidimensional scientific datasets using domain-specific compression algorithms. We propose a compression framework and methodology to maximize bandwidth and storage utilization. We port our framework into PnetCDF and present preliminary experimental results.
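
A hypothetical sketch of what a pluggable, domain-specific compressor for multidimensional chunks might look like is given below, using delta encoding plus deflate as a stand-in scheme. The class and method names are illustrative only, and the actual framework's PnetCDF integration is not shown.

import zlib
import numpy as np

class DeltaCompressor:
    # Example domain-specific scheme: delta-encode along the fastest-varying
    # dimension (exploiting smoothness of simulation fields), then deflate.
    def compress(self, chunk: np.ndarray) -> bytes:
        deltas = np.diff(chunk, axis=-1, prepend=0)
        return zlib.compress(deltas.astype(chunk.dtype).tobytes())

    def decompress(self, data: bytes, shape, dtype) -> np.ndarray:
        deltas = np.frombuffer(zlib.decompress(data), dtype=dtype).reshape(shape)
        return np.cumsum(deltas, axis=-1, dtype=dtype)

chunk = np.arange(64, dtype=np.int64).reshape(4, 4, 4)
comp = DeltaCompressor()
restored = comp.decompress(comp.compress(chunk), chunk.shape, chunk.dtype)
assert np.array_equal(chunk, restored)  # lossless round trip for this example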


International Conference on e-Science | 2017

Real-Time Data Analysis and Autonomous Steering of Synchrotron Light Source Experiments

Tekin Bicer; Doga Gursoy; Rajkumar Kettimuthu; Ian T. Foster; Bin Ren; Vincent De Andrade; Francesco De Carlo

Modern scientific instruments, such as detectors at synchrotron light sources, can generate data at tens of GB/s. Current experimental protocols typically process and validate data only after an experiment has completed, which can lead to undetected errors and prevents online steering. Real-time data analysis can enable both detection of, and recovery from, errors, as well as optimization of data acquisition. We thus propose an autonomous stream processing system that allows data streamed from beamline computers to be processed in real time on a remote supercomputer, with a control feedback loop used to make decisions during experimentation. We evaluate our system using two iterative tomographic reconstruction algorithms and varying data generation rates. These experiments are performed in a real-world environment in which data are streamed from a light source to a cluster for analysis and experimental control. We demonstrate that our system can sustain analysis rates of hundreds of projections per second by using up to 1,200 cores, while meeting stringent data quality constraints.
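
A conceptual sketch of such a feedback loop is shown below: projections are consumed as they arrive, a running reconstruction is updated, and a steering decision is sent back once a convergence proxy is met. The queues, the outer-product update, and the stopping criterion are all stand-ins, not the paper's actual streaming protocol or quality metric.

import queue
import numpy as np

incoming = queue.Queue()   # stands in for the beamline-to-cluster data stream
steering = queue.Queue()   # stands in for the cluster-to-beamline feedback channel

for _ in range(100):
    incoming.put(np.random.rand(256))          # simulated projections

recon = np.zeros((256, 256))
while not incoming.empty():
    projection = incoming.get()
    update = np.outer(projection, projection)  # stand-in for one iterative update
    recon += update
    change = np.linalg.norm(update) / (np.linalg.norm(recon) + 1e-12)
    if change < 0.02:                          # made-up convergence criterion
        steering.put("image quality sufficient: stop acquisition or move to the next sample")
        break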


International Conference on Parallel Architectures and Compilation Techniques | 2018

GraphPhi: efficient parallel graph processing on emerging throughput-oriented architectures

Zhen Peng; Alexander Powell; Bo Wu; Tekin Bicer; Bin Ren

Modern parallel architecture design has increasingly turned to throughput-oriented devices to address concerns about energy efficiency and power consumption. However, graph applications cannot tap into the full potential of such architectures because of highly unstructured computations and irregular memory accesses. In this paper, we present GraphPhi, a new approach to graph processing on emerging Intel Xeon Phi-like architectures that addresses the restrictions encountered when migrating existing shared-memory multi-core graph processing frameworks to this new architecture. Specifically, GraphPhi consists of 1) an optimized hierarchically blocked graph representation to enhance the data locality for both edges and vertices within and among threads, 2) a hybrid vertex-centric and edge-centric execution to efficiently find and process active edges, and 3) a uniform MIMD-SIMD scheduler integrated with lock-free update support to achieve both good thread-level load balance and SIMD-level utilization. In addition, our efficient MIMD-SIMD execution is capable of hiding memory latency by increasing the number of concurrent memory access requests, thus benefiting more from the latest High-Bandwidth Memory technology. We evaluate GraphPhi on six graph processing applications. Compared to two state-of-the-art shared-memory graph processing frameworks, it achieves speedups of up to 4X and 35X, respectively.
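
The blocked-representation idea can be conveyed with a small sketch: edges are grouped by (source block, destination block) so that each bucket touches a bounded range of vertices, which keeps per-bucket updates cache-resident when buckets are assigned to threads. This is an illustration of the general technique, not GraphPhi's actual data structure or block size.

from collections import defaultdict

BLOCK = 1024  # vertices per block; a tunable locality parameter (illustrative value)

def block_edges(edges):
    # Group directed edges into (source block, destination block) buckets.
    buckets = defaultdict(list)
    for u, v in edges:
        buckets[(u // BLOCK, v // BLOCK)].append((u, v))
    return buckets

edges = [(0, 5), (3, 2000), (1500, 7), (1025, 2047)]
for (sb, db), bucket in sorted(block_edges(edges).items()):
    # Sources fall in [sb*BLOCK, (sb+1)*BLOCK) and destinations in
    # [db*BLOCK, (db+1)*BLOCK), bounding the working set per bucket.
    print(sb, db, bucket)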


Grid Computing | 2011

An Autonomic Framework for Time and Cost Driven Execution of MPI Programs on Cloud Environments

Aarthi Raveendran; Tekin Bicer; Gagan Agrawal

This paper gives an overview of a framework for making existing MPI applications elastic and executing them under user-specified time and cost constraints in a cloud environment. Considering the limitations of currently available MPI implementations, we support adaptation by terminating one execution and restarting the program on a different number of instances. The key component of our system is a decision layer. Based on the time and cost constraints, this layer decides whether to use a smaller or larger number of instances for the application and, when appropriate, chooses to migrate the application to a different type of instance. Among other factors, the decision layer also models the redistribution costs.
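
A hedged sketch of such decision-layer logic is shown below; the thresholds, cost model, and function names are illustrative assumptions rather than the framework's actual policy.

def decide(progress, elapsed_s, deadline_s, budget_usd, spent_usd,
           n_instances, price_per_hr, restart_cost_s):
    # Projected completion time if the current allocation is kept.
    rate = progress / max(elapsed_s, 1e-9)          # fraction of work done per second
    remaining_s = (1.0 - progress) / max(rate, 1e-9)

    if elapsed_s + remaining_s > deadline_s:
        # Behind schedule: add instances, assuming near-linear speedup and paying
        # the one-time cost of terminating and restarting the MPI job.
        return {"action": "scale_up", "instances": n_instances * 2,
                "overhead_s": restart_cost_s}

    projected_cost = spent_usd + (remaining_s / 3600.0) * n_instances * price_per_hr
    if projected_cost > budget_usd:
        # Over budget but ahead of schedule: trade time for money.
        return {"action": "scale_down", "instances": max(1, n_instances // 2),
                "overhead_s": restart_cost_s}

    return {"action": "keep", "instances": n_instances, "overhead_s": 0}

print(decide(progress=0.25, elapsed_s=600, deadline_s=1500, budget_usd=10,
             spent_usd=2, n_instances=4, price_per_hr=0.5, restart_cost_s=60))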


SAE International Journal of Engines | 2015

Time-resolved X-ray Tomography of Gasoline Direct Injection Sprays

Daniel Duke; Andrew B. Swantek; Nicolas Sovis; F. Zak Tilocco; Christopher F. Powell; Alan L. Kastengren; Doga Gursoy; Tekin Bicer

Collaboration


Top co-authors of Tekin Bicer:

Francesco De Carlo, Argonne National Laboratory
Doga Gursoy, Argonne National Laboratory
Ian T. Foster, Argonne National Laboratory
Bin Ren, Ohio State University
Alan L. Kastengren, Argonne National Laboratory
Andrew B. Swantek, Argonne National Laboratory