Publication


Featured research published by David E. Hudak.


International Journal of Parallel Programming | 2009

A computational science IDE for HPC systems: design and applications

David E. Hudak; Neil Ludban; Ashok K. Krishnamurthy; Vijay Gadepally; Siddharth Samsi; John Nehrbass

Software engineering studies have shown that programmer productivity is improved through the use of computational science integrated development environments (or CSIDE, pronounced “sea side”) such as MATLAB. Scientists often desire to use high-performance computing (HPC) systems to run their existing CSIDE scripts with large data sets. ParaM is a CSIDE distribution that provides parallel execution of MATLAB scripts on HPC systems at large shared computer centers. ParaM runs on a range of processor architectures (e.g., x86, x64, Itanium, PowerPC) and its MPI binding, known as bcMPI, supports a number of interconnect architectures (e.g., Myrinet and InfiniBand). On a cluster at Ohio Supercomputer Center, bcMPI with blocking communication has achieved 60% of the bandwidth of an equivalent C/MPI benchmark. In this paper, we describe goals and status for the ParaM project and the development of applications in signal and image processing that use ParaM.
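
bcMPI gives high-level-language scripts access to MPI-style message passing. As a hedged illustration of that programming model (using Python's mpi4py as a stand-in, not the actual bcMPI MATLAB API), here is a minimal sketch of the blocking collective pattern whose bandwidth the abstract benchmarks against C/MPI:

```python
# Illustration of the MPI-style programming model that a binding such as
# bcMPI exposes to high-level-language scripts. This uses Python's mpi4py,
# NOT the actual bcMPI MATLAB API.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.random.rand(1_000_000)      # each rank's block of a large data set
local_sum = np.array([local.sum()])
total = np.zeros(1)

# Blocking collective communication, the style benchmarked in the abstract.
comm.Reduce(local_sum, total, op=MPI.SUM, root=0)

if rank == 0:
    print(f"global sum across {comm.Get_size()} ranks: {total[0]:.3f}")
```

Run with, e.g., `mpiexec -n 4 python sketch.py`.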


IEEE International Conference on Cloud Computing Technology and Science | 2014

Simulation as a service (SMaaS): a cloud-based framework to support the educational use of scientific software

Thomas Bitterman; Prasad Calyam; Alex Berryman; David E. Hudak; Lin Li; Alan Chalker; Steve Gordon; Da Zhang; Da Cai; Changpil Lee; Rajiv Ramnath

Large manufacturers increasingly leverage modelling and simulation to improve quality and reduce cost. Small manufacturers have not adopted these techniques due to sizable upfront costs for expertise, software and hardware. The software as a service (SaaS) model provides access to applications hosted in a cloud environment, allowing users to try services at low cost and scale as needed. We have extended SaaS to include high-performance computing-hosted applications, thus creating simulation as a service (SMaaS). The Polymer Portal is a first-generation SMaaS platform designed to integrate access to multiple modelling, simulation and training services. It provides an e-commerce front end, a common AAA (authentication, authorization, and accounting) service, and support for both cloud-hosted virtual machine (VM) images and high-performance computing (HPC) jobs. It has been deployed for six months and has been used successfully for a number of training and simulation activities. This paper describes the requirements, challenges, design and implementation of the Polymer Portal.
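
The abstract does not show the portal's internals; as a hedged sketch of the SMaaS idea, the fragment below turns a web-facing simulation request into an HPC batch job. The function name, the `polymer-sim` module, and the job parameters are all hypothetical:

```python
# Minimal sketch of the SMaaS idea: a web-facing service renders a batch
# script for one simulation run and hands it to the cluster scheduler.
# submit_simulation and the polymer-sim module are hypothetical names.
import subprocess
import tempfile
import textwrap

def submit_simulation(user: str, input_file: str, hours: int = 1) -> str:
    """Submit one simulation as a Slurm batch job; return the job id."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name=smaas-{user}
        #SBATCH --time={hours}:00:00
        #SBATCH --nodes=1
        module load polymer-sim          # hypothetical simulation package
        polymer-sim --input {input_file}
        """)
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    # sbatch prints "Submitted batch job <id>"; keep the id for tracking/billing.
    out = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return out.stdout.strip().split()[-1]
```

The returned job id is the natural handle for the usage-billing and tracking features the abstract mentions.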


IEEE International Conference on eScience | 2008

Experiences from Cyberinfrastructure Development for Multi-user Remote Instrumentation

Prasad Calyam; Abdul Kalash; Neil Ludban; Sowmya Gopalan; Siddharth Samsi; Karen A. Tomko; David E. Hudak; Ashok K. Krishnamurthy

Computer-controlled scientific instruments such as electron microscopes, spectrometers, and telescopes are expensive to purchase and maintain. They also generate large amounts of raw and processed data that must be annotated and archived. Cyber-enabling these instruments and their data sets through remote instrumentation cyberinfrastructures can improve user convenience and significantly reduce costs. In this paper, we discuss our experiences in gathering the technical and policy requirements of remote instrumentation for research and training purposes. Next, we describe the cyberinfrastructure solutions we are developing to support the related multi-user workflows. Finally, we present our solution-deployment experiences in the form of case studies. The case studies cover both technical issues (bandwidth provisioning, collaboration tools, data management, system security) and policy issues (service level agreements, use policy, usage billing). Our experiences suggest that developing cyberinfrastructures for remote instrumentation requires: (a) understanding and overcoming multi-disciplinary challenges, (b) developing reconfigurable and integrated solutions, and (c) close collaboration between instrument labs, infrastructure providers, and application developers.


Extreme Science and Engineering Discovery Environment (XSEDE) | 2013

OSC OnDemand: a web platform integrating access to HPC systems, web and VNC applications

David E. Hudak; Thomas Bitterman; Patricia Carey; Douglas Johnson; Eric Franz; Shaun Brady; Piyush Diwan

In this paper, we describe the OnDemand web platform for providing OSC users integrated access to HPC systems, web applications and VNC services. We present the user experience and implementation of OnDemand and compare it with existing science gateway approaches.
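
The abstract does not detail OnDemand's implementation; as a hedged sketch of the integration idea, the snippet below shows how a per-user platform might map an authenticated user and application to the cluster node and port where that user's web or VNC backend is running, so a reverse proxy can forward the browser's connection. The structure and names are illustrative, not OnDemand's actual code:

```python
# Sketch of the routing idea behind a per-user web platform: map
# (user, app) to the node/port of that user's running backend, then
# reverse-proxy browser traffic there. Hypothetical structure only.
from dataclasses import dataclass

@dataclass
class Backend:
    host: str
    port: int

# Populated when a user's interactive job starts on the cluster.
sessions: dict[tuple[str, str], Backend] = {
    ("alice", "vnc"):     Backend("node0123", 5901),
    ("alice", "jupyter"): Backend("node0123", 8888),
}

def route(user: str, app: str) -> str:
    """Return the upstream URL the reverse proxy should forward to."""
    b = sessions[(user, app)]
    return f"http://{b.host}:{b.port}/"

print(route("alice", "jupyter"))   # -> http://node0123:8888/
```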


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

Enabling High-Productivity SIP Application Development: Modeling and Simulation of Superconducting Quantum Interference Filters

Juan Carlos Chaves; Alan Chalker; David E. Hudak; Vijay Gadepally; Fernando Escobar; Patrick Longhini

The inherent complexity of utilizing and programming high performance computing (HPC) systems is the main obstacle to widespread exploitation of HPC resources and technologies in the Department of Defense (DoD). Consequently, there is a persistent need to simplify the programming interface for the generic user. This need is particularly acute in the Signal/Image Processing (SIP), Integrated Modeling and Test Environments (IMT), and related DoD communities, where typical users have heterogeneous, unconsolidated needs. Mastering the complexity of traditional programming tools (C, MPI, etc.) is often seen as a diversion of energy that could be applied to the study of the given scientific domain. Many SIP users instead prefer high-level languages (HLLs) within integrated development environments, such as MATLAB. We report on our collaborative effort to use an HLL distribution for HPC systems called ParaM to optimize and parallelize a compute-intensive Superconducting Quantum Interference Filter (SQIF) application provided by the Navy SPAWAR Systems Center in San Diego, CA. ParaM is an open-source HLL distribution developed at the Ohio Supercomputer Center (OSC); it includes support for processor architectures not supported by MATLAB (e.g., Itanium and POWER5) as well as for high-speed interconnects (e.g., InfiniBand and Myrinet). We used the ParaM installations available at the Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC) and at OSC to successfully optimize and parallelize the SQIF application. This work may be used to assess the feasibility of using SQIF devices as extremely sensitive detectors of electromagnetic radiation, which is of great importance to the Navy and the DoD in general.
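
The abstract does not spell out the parallelization strategy; under stated assumptions, the sketch below shows the usual shape of such an effort: a data-parallel parameter sweep, with mpi4py standing in for ParaM's HLL bindings and a placeholder function in place of the real SQIF model:

```python
# Hedged sketch of a data-parallel parameter sweep. mpi4py stands in for
# ParaM's HLL bindings; sqif_response is a placeholder, not the actual
# Navy SQIF simulation.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

params = np.linspace(0.0, 2.0, 1024)     # candidate device bias values
my_params = params[rank::size]           # round-robin split of the sweep

def sqif_response(bias: float) -> float:
    return float(np.cos(bias) ** 2)      # placeholder model only

# Each rank evaluates its share independently, then rank 0 picks the winner.
my_best = max((sqif_response(b), b) for b in my_params)
all_best = comm.gather(my_best, root=0)
if rank == 0:
    score, bias = max(all_best)
    print(f"best response {score:.4f} at bias {bias:.4f}")
```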


ACM Symposium on Applied Computing | 2013

Applying software product line engineering in building web portals for supercomputing services

Piyush Diwan; Patricia Carey; Eric Franz; Yixue Li; Thomas Bitterman; David E. Hudak; Rajiv Ramnath

Supercomputing centers, typically non-profit, government, or university-based organizations with scarce resources, are increasingly being asked to provide customized web portals for user-centered access to their services in order to support a demanding customer base. These portals often have very similar architectures and meet similar requirements, with the variations lying primarily in the specialized analysis applications and in the input and output of those applications. Given these characteristics, Software Product Line Engineering (SPLE) approaches can be valuable in enabling development teams to meet demand cost-effectively. In this paper, we demonstrate a suite of web portals developed at the Ohio Supercomputer Center (OSC) by applying SPLE methodologies. We show how we applied feature modeling to these applications to identify commonalities in their application-level features despite differences in their problem domains. We describe a common framework (which we term Per User DrupaL, or PUDL) that serves as the foundation for these portals. We demonstrate the effectiveness of SPLE in terms of reduced development time and effort, and discuss the technical challenges faced in the process. Finally, we propose, as an extension to our work, an automation framework for portal generation with which users could build their own customized portals.
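
As a hedged illustration of the feature-modeling step described above (the portal and feature names are invented, not taken from the paper), the fragment below computes the commonality a shared framework such as PUDL would factor out, and the per-portal variation points left as customizations:

```python
# Toy feature model: each portal is a set of named features. The
# intersection is the product-line core a framework like PUDL factors out;
# the differences are each portal's variation points. Names are invented.
COMMON = {"auth", "job_submission", "file_browser", "accounting"}

portals = {
    "weather_sim":  COMMON | {"wrf_input_builder", "map_view"},
    "bio_pipeline": COMMON | {"fastq_upload", "genome_browser"},
}

shared = set.intersection(*portals.values())   # candidate framework features
print("framework features:", sorted(shared))
for name, feats in portals.items():
    print(name, "variation points:", sorted(feats - shared))
```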


OMICS: A Journal of Integrative Biology | 2011

Enabling Data-Intensive Biomedical Science: Gaps, Opportunities, and Challenges

David E. Hudak; Don Stredney; Prasad Calyam; Kevin Wohlever; Thomas Bitterman; Bradley Hittle; Thomas Kerwin; Ashok K. Krishnamurthy

The challenges of data-intensive computing have been summarized (Gorton et al., 2008) as "managing and processing exponentially growing data volumes, often arriving in time-sensitive streams from arrays of sensors and instruments" and "significantly reducing data analysis cycles so that researchers can make timely decisions." The management of such data requires integrated services for the (high-speed) transfer, storage, indexing, and retrieval of data. Enabling technologies for data management are under active development and investigation, including high-speed networks such as those studied by GENI (http://www.geni.net/), high-performance file systems, and semantic ontologies for data access. In addition to existing cluster-based high-performance computing solutions, data-intensive cloud programming environments (e.g., MapReduce and Dryad) are emerging technologies that show promise. The Ohio Supercomputer Center (OSC) has supported data-intensive science projects in the physical sciences [e.g., ALICE, A Large Ion Collider Experiment (http://www.osc.edu/press/releases/2010/supercollider.shtml)] and the environmental sciences [e.g., ASR, Arctic System Reanalysis (http://www.osc.edu/press/releases/2007/bromwich.shtml)]. In the biomedical sciences, OSC is actively supporting data-intensive biomedical research groups at the Comprehensive Cancer Center (CCC) at The Ohio State University's Medical Center as well as those at the Research Institute at Nationwide Children's Hospital (RINCH). These organizations contain a number of core facilities, common laboratories providing analysis to a collection of research and clinical groups. Currently, OSC is engaged with the following core facilities at the CCC:
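
MapReduce, named above as a promising data-intensive programming model, reduces to a simple contract: map each record to key/value pairs, group by key, then reduce each group. A self-contained toy version in Python (the sequence-tag records are invented stand-ins for a real biomedical workload):

```python
# MapReduce in miniature: map -> group by key -> reduce. This toy counts
# sequence tags; the records are invented stand-ins, not project data.
from collections import defaultdict
from itertools import chain

records = ["ACGT", "ACGT", "TTGA", "ACGT", "TTGA"]

def mapper(rec):
    yield (rec, 1)                      # emit (key, value) pairs per record

def reducer(key, values):
    return (key, sum(values))           # combine all values for one key

groups = defaultdict(list)
for k, v in chain.from_iterable(mapper(r) for r in records):
    groups[k].append(v)                 # the "shuffle" step, done in memory

print([reducer(k, vs) for k, vs in groups.items()])  # [('ACGT', 3), ('TTGA', 2)]
```

In a real framework the map, shuffle, and reduce phases run distributed across a cluster; the in-memory grouping here only illustrates the contract.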


Proceedings of the Practice and Experience on Advanced Research Computing | 2018

Supporting distributed, interactive Jupyter and RStudio in a scheduled HPC environment with Spark using Open OnDemand

Jeremy W. Nicklas; Douglas Johnson; Shameema Oottikkal; Eric Franz; Brian McMichael; Alan Chalker; David E. Hudak

Open OnDemand supports interactive HPC web applications, enabling interactive and distributed environments for Jupyter and RStudio running on an HPC cluster. These web applications provide a simple user interface for building and submitting the batch job that launches the interactive environment, and they proxy the connection between the user's browser and the web server running on the cluster. Support for distributed computing from a Jupyter notebook or RStudio session is provided by an Apache Spark cluster launched concurrently in standalone mode on the nodes allocated within the batch job. Alternatively, users can directly use the corresponding MPI bindings for either R or Python. This paper describes the design of interactive HPC web applications on an Open OnDemand deployment for launching and connecting to Jupyter notebooks and RStudio sessions, as well as the architecture and software required to support Jupyter, RStudio, and Apache Spark on the corresponding HPC cluster. Singularity can be leveraged to package this architecture and make it portable across HPC clusters. The paper also discusses challenges encountered in providing interactive access to HPC resources that still lack general solutions.
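
To make the architecture concrete, here is a hedged sketch of the notebook side: once the batch job has launched a standalone Spark master and workers on its allocated nodes, the Jupyter session attaches to that master. The master hostname below is a placeholder; in practice the job script records the real value:

```python
# Sketch of a notebook attaching to the standalone Spark cluster that the
# batch job launched on its allocated nodes. "node0123" is a placeholder
# for the master host recorded by the job script.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("spark://node0123:7077")   # standalone master in this allocation
         .appName("ondemand-notebook")
         .getOrCreate())

# Distributed work now runs on the job's compute nodes, not the login host.
print(spark.sparkContext.parallelize(range(1_000_000)).sum())
spark.stop()
```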


IEEE International Conference on High Performance Computing, Data, and Analytics | 2007

Developing a Computational Science IDE for HPC Systems

David E. Hudak; Neil Ludban; Vijay Gadepally; Ashok K. Krishnamurthy


Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016

Open OnDemand: Transforming Computational Science Through Omnidisciplinary Software Cyberinfrastructure

David E. Hudak; Douglas Johnson; Jeremy W. Nicklas; Eric Franz; Brian McMichael; Basil M Gohar

Collaboration


Dive into David E. Hudak's collaborations.

Top Co-Authors

Alan Chalker (Ohio Supercomputer Center)
Eric Franz (Ohio Supercomputer Center)
Neil Ludban (Ohio Supercomputer Center)
Vijay Gadepally (Massachusetts Institute of Technology)
Douglas Johnson (Ohio Supercomputer Center)
Siddharth Samsi (Massachusetts Institute of Technology)