
Publication


Featured research published by Robert Henschel.


Nature | 2010

Systems survey of endocytosis by multiparametric image analysis

Claudio Collinet; Martin Stöter; Charles R. Bradshaw; Nikolay Samusik; Jochen C. Rink; Denise Kenski; Bianca Habermann; Frank Buchholz; Robert Henschel; Matthias S. Mueller; Wolfgang E. Nagel; Eugenio Fava; Yannis Kalaidzidis; Marino Zerial

Endocytosis is a complex process fulfilling many cellular and developmental functions. Understanding how it is regulated and integrated with other cellular processes requires a comprehensive analysis of its molecular constituents and general design principles. Here, we developed a new strategy to phenotypically profile the human genome with respect to transferrin (TF) and epidermal growth factor (EGF) endocytosis by combining RNA interference, automated high-resolution confocal microscopy, quantitative multiparametric image analysis and high-performance computing. We identified several novel components of endocytic trafficking, including genes implicated in human diseases. We found that signalling pathways such as Wnt, integrin/cell adhesion, transforming growth factor (TGF)-β and Notch regulate the endocytic system, and identified new genes involved in cargo sorting to a subset of signalling endosomes. A systems analysis by Bayesian networks further showed that the number, size, concentration of cargo and intracellular position of endosomes are not determined randomly but are subject to specific regulation, thus uncovering novel properties of the endocytic system.
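
The quantitative multiparametric analysis described above reduces each image to per-endosome descriptors such as number, size, cargo concentration and position. Below is a minimal sketch of that kind of feature extraction in Python, assuming a pre-segmented binary mask and a single cargo-intensity channel; the arrays, threshold and function name are illustrative and not the paper's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def endosome_features(cargo_intensity, binary_mask):
    """Compute simple per-endosome descriptors from a segmented 2D image.

    cargo_intensity : 2D float array, e.g. a TF or EGF fluorescence channel
    binary_mask     : 2D bool array, True where an endosome was segmented
    """
    labels, n_endosomes = ndimage.label(binary_mask)
    idx = np.arange(1, n_endosomes + 1)

    sizes = ndimage.sum(binary_mask, labels, idx)            # area in pixels
    total_cargo = ndimage.sum(cargo_intensity, labels, idx)  # integrated intensity
    concentration = total_cargo / sizes                      # mean cargo per pixel
    positions = ndimage.center_of_mass(cargo_intensity, labels, idx)

    return {
        "count": n_endosomes,
        "size": sizes,
        "cargo_concentration": concentration,
        "position": np.array(positions),
    }

# Toy example: a synthetic image with two bright spots.
img = np.zeros((64, 64))
img[10:14, 10:14] = 5.0
img[40:46, 40:46] = 2.0
features = endosome_features(img, img > 0)
print(features["count"], features["size"], features["cargo_concentration"])
```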


International Conference on Cloud Computing | 2011

Analysis of Virtualization Technologies for High Performance Computing Environments

Andrew J. Younge; Robert Henschel; James T. Brown; Gregor von Laszewski; Judy Qiu; Geoffrey C. Fox

As Cloud computing emerges as a dominant paradigm in distributed systems, it is important to fully understand the underlying technologies that make Clouds possible. One technology, and perhaps the most important, is virtualization. Recently virtualization, through the use of hypervisors, has become widely used and well understood by many. However, there is a large spread of different hypervisors, each with their own advantages and disadvantages. This paper provides an in-depth analysis of some of today's commonly accepted virtualization technologies, from feature comparison to performance analysis, focusing on the applicability to High Performance Computing environments using FutureGrid resources. The results indicate that virtualization sometimes introduces slight performance impacts depending on the hypervisor type; however, the benefits of such technologies are profound, and not all virtualization technologies are equal. From our experience, the KVM hypervisor is the optimal choice for supporting HPC applications within a Cloud infrastructure.
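
To illustrate the kind of comparison the paper performs, the sketch below computes per-benchmark virtualization overhead relative to bare metal. The hypervisor names reflect commonly used options, but all timings are hypothetical placeholders rather than results from the study.

```python
# Hypothetical benchmark wall-clock times (seconds); illustrative only.
bare_metal = {"linpack": 100.0, "fft": 80.0, "io": 60.0}
hypervisors = {
    "kvm": {"linpack": 103.0, "fft": 84.0, "io": 66.0},
    "xen": {"linpack": 108.0, "fft": 90.0, "io": 75.0},
}

def overhead(vm_times, native_times):
    """Per-benchmark slowdown of a virtualized run relative to bare metal."""
    return {name: vm_times[name] / native_times[name] - 1.0 for name in native_times}

for hv, times in hypervisors.items():
    slow = overhead(times, bare_metal)
    mean = sum(slow.values()) / len(slow)
    print(f"{hv}: mean overhead {mean:.1%}, "
          + ", ".join(f"{k} {v:.1%}" for k, v in slow.items()))
```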


International Workshop on OpenMP | 2012

SPEC OMP2012 -- an application benchmark suite for parallel systems using OpenMP

Matthias S. Müller; John Baron; William C. Brantley; Huiyu Feng; Daniel Hackenberg; Robert Henschel; Gabriele Jost; Daniel Molka; Chris Parrott; Joe Robichaux; Pavel Shelepugin; G. Matthijs van Waveren; Brian Whitney; Kalyan Kumaran

This paper describes SPEC OMP2012, a benchmark developed by the SPEC High Performance Group. It consists of 15 OpenMP parallel applications from a wide range of fields. In addition to a performance metric based on the run time of the applications, the benchmark adds an optional energy metric. The accompanying run rules detail how the benchmarks are executed and how the results are reported; they also cover the energy measurements. The first set of results provides scalability data for three different platforms.
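
SPEC suites generally report a score as the geometric mean of per-application ratios of a reference run time to the measured run time. Below is a minimal sketch of that style of scoring; the application names and times are made up, not the official SPEC OMP2012 reference values.

```python
from math import prod

def spec_style_score(reference_seconds, measured_seconds):
    """Geometric mean of per-application ratios (reference time / measured time)."""
    ratios = [reference_seconds[app] / measured_seconds[app] for app in reference_seconds]
    return prod(ratios) ** (1.0 / len(ratios))

# Placeholder application names and times.
reference = {"app_a": 7200.0, "app_b": 5400.0, "app_c": 3600.0}
measured  = {"app_a": 1800.0, "app_b": 1500.0, "app_c":  900.0}

print(f"base score ~ {spec_style_score(reference, measured):.2f}")
```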


Extreme Science and Engineering Discovery Environment | 2012

Trinity RNA-Seq assembler performance optimization

Robert Henschel; Matthias Lieber; Le-Shin Wu; Phillip M. Nista; Brian J. Haas; Richard D. LeDuc

RNA sequencing is a technique to study RNA expression in biological material. It is quickly gaining popularity in the field of transcriptomics. Trinity is a software tool that was developed for efficient de novo reconstruction of transcriptomes from RNA-Seq data. In this paper we first conduct a performance study of Trinity and compare it to previously published data from 2011. The version from 2011 is much slower than many other de novo assemblers, and biologists have thus been forced to choose between quality and speed. We examine the runtime behavior of Trinity as a whole as well as of its individual components, and then optimize the most performance-critical parts. We find that standard best practices for HPC applications can also be applied to Trinity, especially on systems with large amounts of memory. When combining best practices for HPC applications with our specific performance optimizations, we can decrease the runtime of Trinity by a factor of 3.9. This brings the runtime of Trinity in line with other de novo assemblers while maintaining superior quality. The purpose of this paper is to describe a series of improvements to Trinity, quantify the execution improvements achieved, and document the new version of the software.
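
The overall speedup reported above is the ratio of total run time before and after optimization. A small sketch of that bookkeeping follows, using Trinity's Inchworm, Chrysalis and Butterfly stage names but purely hypothetical timings.

```python
# Hypothetical per-stage wall-clock times (hours) before and after tuning.
# The stage names follow Trinity's Inchworm / Chrysalis / Butterfly pipeline,
# but the numbers are illustrative, not the measurements from this paper.
before = {"inchworm": 10.0, "chrysalis": 30.0, "butterfly": 24.0}
after  = {"inchworm":  4.0, "chrysalis":  8.0, "butterfly":  4.5}

for stage in before:
    print(f"{stage}: {before[stage] / after[stage]:.1f}x faster")

overall = sum(before.values()) / sum(after.values())
print(f"overall speedup: {overall:.1f}x")   # ~3.9x with these example numbers
```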


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Demonstrating Lustre over a 100 Gbps wide area network of 3,500 km

Robert Henschel; Stephen C. Simms; David Y. Hancock; Scott Michael; Tom Johnson; Nathan Heald; Thomas William; Donald K. Berry; Matthew Allen; Richard Knepper; Matt Davy; Matthew R. Link; Craig A. Stewart

As part of the SCinet Research Sandbox at the Supercomputing 2011 conference, Indiana University (IU) demonstrated use of the Lustre high performance parallel file system over a dedicated 100 Gbps wide area network (WAN) spanning more than 3,500 km (2,175 mi). This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, low-level iperf network tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system were run over a latency of 50.5 ms. In this article we describe the configuration and constraints of the demonstration and outline key findings.
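
A useful back-of-the-envelope figure for such a demonstration is the bandwidth-delay product, which sets the TCP window needed to keep a long fat pipe full. The sketch below computes it for a 100 Gbps link, treating the quoted 50.5 ms latency as a round-trip time (an assumption).

```python
# Back-of-the-envelope bandwidth-delay product for the demonstration link,
# treating the quoted 50.5 ms latency as a round-trip time (an assumption).
link_bps = 100e9          # 100 Gbps
rtt_s = 50.5e-3           # 50.5 ms

bdp_bytes = link_bps * rtt_s / 8
print(f"bandwidth-delay product ~ {bdp_bytes / 1e6:.0f} MB")
# A single TCP stream needs roughly this much window/buffer space to fill the
# pipe, which is why WAN tests at this scale rely on careful buffer tuning
# and parallel streams.
```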


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

SPEC ACCEL: A Standard Application Suite for Measuring Hardware Accelerator Performance

Guido Juckeland; William C. Brantley; Sunita Chandrasekaran; Barbara M. Chapman; Shuai Che; Mathew E. Colgrove; Huiyu Feng; Alexander Grund; Robert Henschel; Wen-mei W. Hwu; Huian Li; Matthias S. Müller; Wolfgang E. Nagel; Maxim Perminov; Pavel Shelepugin; Kevin Skadron; John A. Stratton; Alexey Titov; Ke Wang; G. Matthijs van Waveren; Brian Whitney; Sandra Wienke; Rengan Xu; Kalyan Kumaran

Hybrid nodes with hardware accelerators are becoming very common in systems today. Users often find it difficult to characterize and understand the performance advantage of such accelerators for their applications. The SPEC High Performance Group (HPG) has developed a set of performance metrics to evaluate the performance and power consumption of accelerators for various science applications. The new benchmark comprises two suites of applications written in OpenCL and OpenACC and measures the performance of accelerators with respect to a reference platform. The first set of published results demonstrates the viability and relevance of the new metrics in comparing accelerator performance. This paper discusses the benchmark suites and selected published results in detail.
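
To make the two metrics concrete, the sketch below computes a performance ratio against a reference platform and a simple energy figure from mean power and run time. The benchmark names and all numbers are placeholders, not SPEC ACCEL reference values.

```python
runs = {
    # benchmark: (reference_seconds, measured_seconds, mean_power_watts)
    "bench_a": (1200.0, 300.0, 250.0),
    "bench_b": (1500.0, 420.0, 240.0),
    "bench_c": ( 900.0, 260.0, 230.0),
}

for name, (ref_s, meas_s, watts) in runs.items():
    ratio = ref_s / meas_s               # > 1 means faster than the reference
    energy_kj = watts * meas_s / 1000.0  # energy consumed by the measured run
    print(f"{name}: ratio {ratio:.2f}, energy {energy_kj:.0f} kJ")
```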


Extreme Science and Engineering Discovery Environment | 2012

Exploiting HPC resources for the 3D-time series analysis of caries lesion activity

Hui Zhang; Huian Li; Michael Boyles; Robert Henschel; Eduardo Kazuo Kohara; Masatoshi Ando

We present a research framework to analyze 3D-time series caries lesion activity based on collections of SkyScan® μ-CT images taken at different times during the dynamic caries process. Analyzing caries progression (or reversal) is data-driven and computationally demanding. It involves segmenting high-resolution μ-CT images, constructing 3D models suitable for interactive visualization, and analyzing 3D and 4D (3D + time) dental images. Our development exploits XSEDE's supercomputing, storage, and visualization resources to facilitate the knowledge discovery process. In this paper, we describe the required image processing algorithms and then discuss the parallelization of these methods to utilize XSEDE's high performance computing resources. We then present a workflow for visualization and analysis using ParaView. This workflow enables quantitative analysis as well as three-dimensional comparison of multiple temporal datasets from longitudinal dental research studies. Such quantitative assessment and visualization can help us to understand and evaluate the underlying processes that arise from dental treatment, and therefore can have significant impact on the clinical decision-making process and caries diagnosis.
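
As a rough illustration of the segmentation step, the sketch below thresholds a gray-value μ-CT volume to estimate lesion volume and compares two time points. The threshold, voxel size and synthetic data are hypothetical; the real pipeline involves filtering, registration and parallel execution.

```python
import numpy as np
from scipy import ndimage

def lesion_volume(ct_volume, mineral_threshold, voxel_mm3):
    """Estimate demineralized (lesion) volume from a gray-value mu-CT volume.

    Voxels below a mineral-density threshold are treated as lesion; a binary
    opening removes speckle noise. This is only a stand-in for the paper's
    full segmentation pipeline.
    """
    lesion_mask = ndimage.binary_opening(ct_volume < mineral_threshold)
    return lesion_mask.sum() * voxel_mm3

# Toy 3D + time comparison on synthetic volumes (all values hypothetical).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=1.0, scale=0.1, size=(64, 64, 64))
followup = baseline - 0.05            # pretend demineralization over time
voxel_mm3 = 0.01 ** 3                 # 10-micron isotropic voxels

v0 = lesion_volume(baseline, 0.8, voxel_mm3)
v1 = lesion_volume(followup, 0.8, voxel_mm3)
print(f"lesion volume change: {v1 - v0:.6f} mm^3")
```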


Proceedings of SPIE | 2014

ODI - Portal, Pipeline, and Archive (ODI-PPA): a web-based astronomical compute archive, visualization, and analysis service

Arvind Gopu; Soichi Hayashi; Michael D. Young; Daniel R. Harbeck; Todd A. Boroson; Wilson M. Liu; Ralf Kotulla; Richard A. Shaw; Robert Henschel; Jayadev Rajagopal; Elizabeth B. Stobie; Patricia Marie Knezek; R. Pierre Martin; Kevin Archbold

The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers with a modern web interface acting as a single point of access to their data, along with rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets and to enhance WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive with built-in frameworks including: (1) Collections, which allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or the execution of data processing tasks; (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5-capable web browser, with overlaid standard-catalog and Source Extractor-generated source markers; and (3) a Workflow framework, which enables rapid integration of data processing pipelines on an associated compute cluster and lets users request that such pipelines be executed on their data via custom user interfaces. ODI-PPA is made up of several lightweight services connected by a message bus; the web portal is built using the Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, and the backend services are written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.
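
The portal-to-cluster interaction follows a familiar publish/consume pattern over a message bus. The sketch below is a schematic of that pattern only; the message fields, pipeline name and in-process queue are invented for illustration and are not ODI-PPA's actual schema or transport.

```python
import json
import queue

# Schematic of the message-bus pattern described above: the portal publishes a
# pipeline request and a worker on the compute side consumes it.
bus = queue.Queue()

def submit_pipeline_request(user, collection_id, pipeline, parameters):
    bus.put(json.dumps({
        "user": user,
        "collection": collection_id,
        "pipeline": pipeline,
        "parameters": parameters,
    }))

def worker():
    while not bus.empty():
        job = json.loads(bus.get())
        print(f"running {job['pipeline']} for {job['user']} "
              f"on collection {job['collection']}")

submit_pipeline_request("astro_user", "C1234", "quick_reduce", {"flat_correct": True})
worker()
```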


Future Generation Computer Systems | 2013

Performance and quality of service of data and video movement over a 100 Gbps testbed

Michael Kluge; Stephen C. Simms; Thomas William; Robert Henschel; Andy Georgi; Christian Meyer; Matthias S. Mueller; Craig A. Stewart; Wolfgang Wünsch; Wolfgang E. Nagel

Digital instruments and simulations are creating an ever-increasing amount of data. The need for institutions to acquire these data and transfer them for analysis, visualization, and archiving is growing as well. In parallel, networking technology is evolving, but at a much slower rate than our ability to create and store data. Single fiber 100 Gbps networking solutions have recently been deployed as national infrastructure. This article describes our experiences with data movement and video conferencing across a networking testbed, using the first commercially available single fiber 100 Gbps technology. The testbed is unique in its ability to be configured for a total length of 60, 200, or 400 km, allowing for tests with varying network latency. We performed low-level TCP tests and were able to use more than 99.9% of the theoretical available bandwidth with minimal tuning efforts. We used the Lustre file system to simulate how end users would interact with a remote file system over such a high performance link. We were able to use 94.4% of the theoretical available bandwidth with a standard file system benchmark, essentially saturating the wide area network. Finally, we performed tests with H.323 video conferencing hardware and quality of service (QoS) settings, showing that the link can reliably carry a full high-definition stream. Overall, we demonstrated the practicality of 100 Gbps networking and Lustre as excellent tools for data management. Highlights: the need for institutions to acquire and transfer data is growing; we tested data transfer on the first commercial single fiber 100 Gbps network; we used Lustre to simulate user interaction with a remote file system; we were able to use more than 94.4% of the theoretical available bandwidth; 100 Gbps networking and Lustre are excellent tools for data management.
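
For context, the round-trip time of each testbed configuration can be estimated from the fiber length, assuming a propagation speed of roughly two thirds of the speed of light in fiber (an assumption; the article reports measured figures). The same numbers give the TCP window needed to saturate the link at each length.

```python
# Rough round-trip-time estimates for the three testbed lengths.
speed_km_per_s = 2.0e5    # assumed propagation speed in fiber (~2/3 c)
link_bps = 100e9          # 100 Gbps

for length_km in (60, 200, 400):
    rtt_s = 2 * length_km / speed_km_per_s
    window_mb = link_bps * rtt_s / 8 / 1e6   # TCP window needed to fill the pipe
    print(f"{length_km:>3} km: RTT ~ {rtt_s * 1e3:.1f} ms, "
          f"window ~ {window_mb:.0f} MB")
```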


High Performance Distributed Computing | 2010

A distributed workflow for an astrophysical OpenMP application: using the Data Capacitor over WAN to enhance productivity

Robert Henschel; Scott Michael; Stephen C. Simms

Astrophysical simulations of protoplanetary disks and gas giant planet formation are being performed with a variety of numerical methods. Some of the codes in use today have been producing scientifically significant results for several years, or even decades. Each must simulate millions of resolution elements for millions of time steps, capture and store output data, and rapidly and efficiently analyze this data. To do this effectively, a parallel code is needed that scales to tens or hundreds of processors. Furthermore, an efficient workflow for the transport, analysis, and interpretation of the output data is needed to achieve scientifically meaningful results. Since such simulations are usually performed on moderate to large parallel systems, the compute system is generally located at a remote institution. However, analysis of results is typically performed interactively, and because most supercomputing centers do not offer dedicated interactive nodes, the transfer of simulation output data to local resources becomes necessary. Even if interactive resources were available, typical network latencies make X-forwarded displays nearly impossible to work with. Since data sets can be quite large and traditional transfer mechanisms such as scp and sftp offer relatively low throughput, this transfer of data sets becomes a bottleneck in the research workflow. In this article we measure the scalability of the Computational HYdrodynamics with MultiplE Radiation Algorithms (CHYMERA) code on the SGI Altix architecture. We find that it scales well up to 64 threads for moderate and large problem sizes. We also present a novel approach to enable rapid transfer and analysis of simulation data via the Data Capacitor (DC) and Lustre WAN (Wide Area Network) [17]. The use of a WAN file system to tie batch-operated compute resources together with interactive analysis and visualization resources is of general interest and can be applied broadly.
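
Scaling claims such as "scales well up to 64 threads" are usually quantified as strong-scaling speedup and parallel efficiency. The sketch below shows that calculation with hypothetical wall-clock times, not the Altix measurements reported in the article.

```python
# Strong-scaling speedup and efficiency for an OpenMP code such as CHYMERA.
# The wall-clock times are hypothetical placeholders.
timings = {1: 6400.0, 8: 840.0, 16: 440.0, 32: 235.0, 64: 130.0}  # seconds

t1 = timings[1]
for threads, t in sorted(timings.items()):
    speedup = t1 / t
    efficiency = speedup / threads
    print(f"{threads:>2} threads: speedup {speedup:5.1f}x, efficiency {efficiency:.0%}")
```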

Collaboration


Dive into Robert Henschel's collaborations.

Top Co-Authors

Craig A. Stewart, Indiana University Bloomington
Matthew R. Link, Indiana University Bloomington
Scott Michael, Indiana University Bloomington
Stephen C. Simms, Indiana University Bloomington
David Y. Hancock, Indiana University Bloomington
Ben Fulton, Indiana University Bloomington
Matthias S. Mueller, Dresden University of Technology
Thomas William, Dresden University of Technology
Abhinav Thota, Indiana University Bloomington
Thomas G. Doak, Indiana University Bloomington