Alan Chalker
Ohio Supercomputer Center
Publications
Featured research published by Alan Chalker.
hpcmp users group conference | 2006
Juan Carlos Chaves; John Nehrbass; Brian Guilfoos; Judy Gardiner; Stanley C. Ahalt; Ashok K. Krishnamurthy; Jose Unpingco; Alan Chalker; Andy Warnock; Siddharth Samsi
Octave and Python are open source alternatives to MATLAB, which is widely used by the High Performance Computing Modernization Program (HPCMP) community. These languages are two well-known examples of high-level scripting languages that promise to increase productivity without compromising performance on HPC systems. In this paper, we report our work and experience with these two non-traditional programming languages at the HPCMP Centers. We used a representative sample of signal/image processing (SIP) codes for the study, with special emphasis given to the understanding of issues such as portability, degree of complexity, productivity and suitability of Octave and Python to address SIP problems on the HPCMP HPC platforms. We implemented a relatively simple two-dimensional (2D) FFT and a more complex image enhancement algorithm in Octave and Python and benchmarked these SIP codes on several HPCMP platforms, paying special attention to usability, productivity and performance aspects. Moreover, we performed a thorough benchmark containing important low-level SIP core functions and algorithms and compared the outcome with the corresponding results for MATLAB. We found that the capabilities of these languages are comparable to MATLAB and they are powerful enough to efficiently implement complex SIP algorithms. Productivity and performance results for each language vary depending on the specific task and the availability of high-level functions in each system to address such tasks. Therefore, the choice of the best language to use in a particular instance will strongly depend upon the specifics of the SIP application that needs to be addressed. We concluded that Octave and Python look like promising tools that may provide an alternative to MATLAB without compromising performance and productivity. Their syntax and functionality are similar enough to MATLAB to present a very shallow learning curve for experienced MATLAB users.
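The paper's original benchmark codes are not included here; as a minimal sketch of the kind of 2D FFT timing loop described above, written in Python with NumPy, one might do something like the following (array size and repeat count are illustrative only):

```python
# Minimal sketch of a 2D FFT timing loop in Python with NumPy; the
# problem size and repeat count are illustrative, not those of the study.
import time
import numpy as np

def benchmark_fft2(n=2048, repeats=5):
    """Time a 2D FFT on an n-by-n random image and return the best run."""
    image = np.random.rand(n, n)
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        np.fft.fft2(image)
        timings.append(time.perf_counter() - start)
    return min(timings)

if __name__ == "__main__":
    print(f"Best 2D FFT time: {benchmark_fft2():.4f} s")
```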
hpcmp users group conference | 2006
John Nehrbass; Siddharth Samsi; Juan Carlos Chaves; Jose Unpingco; Brian Guilfoos; Ashok K. Krishnamurthy; Alan Chalker; Judy Gardiner
Many DoD HPC users, particularly in the SIP area, run codes developed with MATLAB and related applications (MatlabMPI, StarP, pMatlab, etc.). There is a desire to run codes from a desktop instance of MATLAB and connect to and interact with codes running on HPC resources. The PET SIP team has developed and demonstrated technology that makes this possible. The SSH toolbox for MATLAB enables users to connect to and use HPC resources using SSH without leaving the MATLAB environment. The toolbox uses a freely available implementation of SSH, a modified version of which is also used by the DoD HPCMP. The SSH toolbox consists of a Windows DLL written in C, which is used by MATLAB to communicate with the SSH client. The toolbox provides simple MATLAB commands for users to connect to remote resources, run code, retrieve results and end the SSH session. The complexity of the DLL interface and most of the security needs are hidden from the user, making this a very easy-to-use and powerful toolbox. Since the main component of the toolbox is written in C and packaged as a DLL, the toolbox can also be extended to work with other programming languages such as Java, Python and Octave. MATLAB-style documentation for the toolbox also makes it easy to obtain help on various aspects of the toolbox, and a GUI-based installer makes distribution easier. This technology provides a revolutionary way of providing support to the DoD. Software developers are now able to provide all the hooks to a complicated HPC environment, thus removing that burden from end users.
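The toolbox itself is a MATLAB front end over a C DLL and is not reproduced here; as an analogous illustration of the connect/run/retrieve/end workflow in Python (one of the languages the abstract mentions as a possible extension), a sketch using the third-party paramiko library might look like the following. Host name, user name, and paths are placeholders.

```python
# Analogous workflow in Python using the third-party paramiko library:
# connect, run a remote command, retrieve a result file, end the session.
# Host name, user name, and paths are placeholders.
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hpc.example.org", username="user",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

# Run a command on the remote resource and read its output.
stdin, stdout, stderr = client.exec_command("ls -l results")
print(stdout.read().decode())

# Retrieve a result file over SFTP.
sftp = client.open_sftp()
sftp.get("results/output.mat", "output.mat")
sftp.close()

client.close()
```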
ieee international conference on cloud computing technology and science | 2014
Thomas Bitterman; Prasad Calyam; Alex Berryman; David E. Hudak; Lin Li; Alan Chalker; Steve Gordon; Da Zhang; Da Cai; Changpil Lee; Rajiv Ramnath
Large manufacturers increasingly leverage modelling and simulation to improve quality and reduce cost. Small manufacturers have not adopted these techniques due to sizable upfront costs for expertise, software and hardware. The software as a service (SaaS) model provides access to applications hosted in a cloud environment, allowing users to try services at low cost and scale as needed. We have extended SaaS to include high-performance computing-hosted applications, thus creating simulation as a service (SMaaS). The Polymer Portal is a first-generation SMaaS platform designed to integrate access to multiple modelling, simulation and training services. The Polymer Portal provides a number of features including an e-commerce front end, a common authentication, authorization and accounting (AAA) service, and support for both cloud-hosted virtual machine (VM) images and high-performance computing (HPC) jobs. It has been deployed for six months and has been used successfully for a number of training and simulation activities. This paper describes the requirements, challenges, design and implementation of the Polymer Portal.
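The portal's internal interfaces are not public; the following hypothetical Python sketch only illustrates the general dispatch idea described above, routing a service request either to a cloud-hosted VM image or to an HPC batch job. All names are invented for illustration and do not reflect the portal's actual API.

```python
# Hypothetical dispatch sketch: route a service request either to a
# cloud-hosted VM image or to an HPC batch job. Names are illustrative
# only and do not reflect the portal's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    name: str
    kind: str                 # "vm" or "hpc"
    options: dict = field(default_factory=dict)

def dispatch(request: ServiceRequest) -> str:
    if request.kind == "vm":
        # e.g., ask the cloud layer to start the service's VM image
        return f"launching VM image for {request.name}"
    if request.kind == "hpc":
        # e.g., generate and submit a batch script to the HPC scheduler
        return f"submitting batch job for {request.name} ({request.options})"
    raise ValueError(f"unknown request kind: {request.kind}")

print(dispatch(ServiceRequest("injection-molding-sim", "hpc", {"cores": 32})))
```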
ieee international conference on high performance computing data and analytics | 2009
Juan Carlos Chaves; Alan Chalker; David E. Hudak; Vijay Gadepally; Fernando Escobar; Patrick Longhini
The inherent complexity in utilizing and programming high performance computing (HPC) systems is the main obstacle to widespread exploitation of HPC resources and technologies in the Department of Defense (DoD). Consequently, there is the persistent need to simplify the programming interface for the generic user. This need is particularly acute in the Signal/Image Processing (SIP), Integrated Modeling and Test Environments (IMT), and related DoD communities where typical users have heterogeneous unconsolidated needs. Mastering the complexity of traditional programming tools (C, MPI, etc.) is often seen as a diversion of energy that could be applied to the study of the given scientific domain. Many SIP users instead prefer high-level languages (HLLs) within integrated development environments, such as MATLAB. We report on our collaborative effort to use a HLL distribution for HPC systems called ParaM to optimize and parallelize a compute-intensive Superconducting Quantum Interference Filter (SQIF) application provided by the Navy SPAWAR Systems Center in San Diego, CA. ParaM is an open-source HLL distribution developed at the Ohio Supercomputer Center (OSC), and includes support for processor architectures not supported by MATLAB (e.g., Itanium and POWER5) as well as support for high-speed interconnects (e.g., InfiniBand and Myrinet). We make use of ParaM installations available at the Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC) and OSC to perform a successful optimization/parallelization of the SQIF application. This optimization/parallelization may be used to assess the feasibility of using SQIF devices as extremely sensitive detectors for electromagnetic radiation, which is of great importance to the Navy and DoD in general.
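The SQIF application itself is not available here; as a generic illustration of the data-parallel pattern such HLL parallelizations follow, splitting an independent parameter sweep across workers, here is a minimal Python sketch using the standard multiprocessing module (the model function is a stand-in, not the SQIF model).

```python
# Generic data-parallel sketch with Python's multiprocessing module:
# each parameter point is an independent, compute-intensive evaluation,
# so the sweep can be split across worker processes. The model function
# below is a stand-in, not the SQIF model.
from multiprocessing import Pool
import numpy as np

def evaluate_point(param):
    """Stand-in for one compute-intensive model evaluation."""
    x = np.linspace(0.0, 1.0, 100_000)
    return float(np.mean(np.sin(param * x) ** 2))

if __name__ == "__main__":
    params = np.linspace(0.1, 10.0, 64)
    with Pool() as pool:
        results = pool.map(evaluate_point, params)
    print(f"evaluated {len(results)} parameter points")
```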
hpcmp users group conference | 2006
Judy Gardiner; John Nehrbass; Juan Carlos Chaves; Brian Guilfoos; Ashok K. Krishnamurthy; Jose Unpingco; Alan Chalker; Siddharth Samsi
This paper provides a brief overview of several enhancements made to the MatlabMPI suite. MatlabMPI is a pure MATLAB code implementation of the core parts of the MPI specifications. The enhancements provide a more attractive option for HPCMP users to design parallel MATLAB code. Intelligent compiler configuration tools have also been delivered to further isolate MatlabMPI users from the complexities of the UNIX environments on the various HPCMP systems. Users are now able to install and use MatlabMPI with less difficulty, greater flexibility, and increased portability. Collective communication functions were added to MatlabMPI to expand functionality beyond the core implementation. Profiling capabilities, producing TAU (tuning and analysis utility) trace files, are now offered to support parallel code optimization. All of these enhancements have been tested and documented on a variety of HPCMP systems. All material, including commented example code to demonstrate the usefulness of MatlabMPI, is available by contacting the authors.
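MatlabMPI is pure MATLAB, so the following is only an analogous Python illustration using mpi4py: it shows the core point-to-point send/receive alongside the kind of collective operations (broadcast, reduce) that the enhancements added.

```python
# Analogous mpi4py sketch: core point-to-point send/receive plus the kind
# of collective operations (broadcast, reduce) the enhancements added.
# Run with, e.g.: mpiexec -n 4 python mpi_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Point-to-point communication (the core MatlabMPI-style send/receive).
if rank == 0:
    comm.send({"step": 1, "data": [1, 2, 3]}, dest=1, tag=11)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    print(f"rank 1 received {msg}")

# Collective communication: broadcast a value, then reduce partial sums.
value = comm.bcast(42 if rank == 0 else None, root=0)
total = comm.reduce(rank * value, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of rank*value over all ranks: {total}")
```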
ieee international conference on high performance computing data and analytics | 2009
Bracy H. Elton; Siddharth Samsi; Harrison Ben Smith; Laura Humphrey; Stanley C. Ahalt; Alan Chalker; Niraj Srivastava; Aquil H. Abdullah; Patrick Boyle
This paper provides a step-by-step demonstration of a Very High Level Language system, Star-P, on Department of Defense (DoD) high performance computing (HPC) systems. Specifically, we demonstrate how to effect parallel computing in MATLAB and Python via Star-P on the Army Research Laboratory (ARL) DoD Supercomputing Resource Center (DSRC) 4,488-core Intel Woodcrest MJM system. We demonstrate how to run various Star-P/MATLAB and Star-P/Python programs in parallel on the ARL DSRC MJM system. The results focus on the use of the Star-P software platform and how it delivers mission tempo by enabling rapid application prototyping and allowing transparent use of DSRC HPC resources from familiar desktop environments, such as Microsoft Windows and Linux.
ieee international conference on high performance computing data and analytics | 2007
Brian Guilfoos; Siddharth Samsi; Juan Carlos Chaves; Jose Unpingco; John Nehrbass; Alan Chalker; Stanley C. Ahalt; Ashok K. Krishnamurthy
The resource description framework (RDF) is a language for representing information about resources on the web. However, RDF can also be used to describe other data and relationships between objects in the data. Many applications in the signal/image processing (SIP) community (such as radar imaging, electromagnetics, etc.) generate large amounts of data. Researchers would like to have online access to this data as well as the ability to easily explore and mine the data. Our application's RDF metadata representation is similar to that of a conventional database, and users can use forms to search the database, or use the standard RDF query language, SPARQL, to create queries. In most cases, all the data as well as the RDF description of the data resides on secure Department of Defense (DoD) major shared resource center (MSRC) resources. In order to provide a web interface for exploring this data, we need a secure way to access the user data. Towards this goal, we use the user interface toolkit (UIT) to provide a web application that allows users to browse and search the RDF metadata of large SIP databases securely and conveniently on their desktop. The UIT uses the same Kerberos technology and Secure ID cards that are used to access all MSRC machines and provides an application programming interface (API) for building clients to access computing resources in the DoD high performance computing and modernization program (HPCMP).
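The DoD databases and the UIT are not shown here; as a small, hedged illustration of the general approach, describing data products with RDF metadata and querying them with SPARQL, here is a Python sketch using the third-party rdflib package, with an invented vocabulary and file names.

```python
# Small illustration with the third-party rdflib package: describe data
# products with RDF triples, then find a subset with a SPARQL query.
# The vocabulary and file names are invented for this sketch.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/sip/")
g = Graph()

# Describe two hypothetical data products.
for name, sensor in [("run_001.dat", "radar"), ("run_002.dat", "em")]:
    item = EX[name]
    g.add((item, EX.sensorType, Literal(sensor)))
    g.add((item, EX.sizeMB, Literal(512)))

# Find all radar data products.
query = """
PREFIX ex: <http://example.org/sip/>
SELECT ?item WHERE { ?item ex:sensorType "radar" . }
"""
for row in g.query(query):
    print(row.item)
```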
hpcmp users group conference | 2006
G Kenneth; Ken Yetzer; Mike Stokes; Ashok K. Krishnamurthy; Alan Chalker
At the heart of Network Centric Warfare is the ability for all assets on the battlefield to communicate and coordinate their actions. Therefore, as these systems are being developed they must be tested and evaluated together along with other assets in a networked environment. The key requirement to conducting this type of test and evaluation (i.e., distributed testing) is having the necessary expertise to combine networking, security, high performance computing (HPC), and simulation experience as needed. The Army began preparation for testing in a distributed environment more than a decade ago when the Army Test and Evaluation Command created the virtual proving ground. An outgrowth of this technology investment was a series of increasingly complex distributed test events or exercises whose purpose was to provide technology integration points and demonstrate and document the capabilities and methodologies for conducting distributed testing. The experience gained in performing these exercises over the past ten years raises important questions regarding interoperability of network-centric assets, performance of spatially separated systems (especially those involving hardware-in-the-loop (HWIL) assets), and high-bandwidth requirements such as video and audio streaming feeds. This paper seeks to expound on a few of these issues as observed in the most recent tests by the US Army Redstone Technical Test Center (RTTC). The latest exercise, Distributed Test Event 5 (DTE-5), occurred in August/September of 2005.
Proceedings of the Practice and Experience on Advanced Research Computing | 2018
Jeremy W. Nicklas; Douglas Johnson; Shameema Oottikkal; Eric Franz; Brian McMichael; Alan Chalker; David E. Hudak
Open OnDemand supports interactive HPC web applications that enable interactive and distributed environments for Jupyter and RStudio running on an HPC cluster. These web applications provide a simple user interface for building and submitting the batch job responsible for launching the interactive environment, as well as proxying the connection between the user's browser and the web server running on the cluster. Support for distributed computing through a Jupyter notebook or RStudio session is provided by an Apache Spark cluster launched concurrently in standalone mode on the allocated nodes within the batch job. Alternatively, users can directly use the corresponding MPI bindings for either R or Python. This paper describes the design of interactive HPC web applications on an Open OnDemand deployment for launching and connecting to Jupyter notebooks and RStudio sessions, as well as the architecture and software required for supporting Jupyter, RStudio, and Apache Spark on the corresponding HPC cluster. Singularity can be leveraged for packaging and portability of this architecture across HPC clusters. This paper also discusses the challenges encountered in providing interactive access to HPC resources that still need general solutions.
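As a rough illustration (not the Open OnDemand code itself), once the batch job has started a standalone Spark master on the allocated nodes, a notebook cell using pyspark might connect to it roughly as follows; the master URL is a placeholder for the host and port reported by the job.

```python
# Notebook-side sketch with pyspark: attach to a standalone Spark master
# started by the batch job on the allocated nodes. The master URL is a
# placeholder for the host/port reported by the job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("spark://node0001:7077")      # standalone master inside the job
    .appName("ondemand-notebook-demo")
    .getOrCreate()
)

# A simple distributed computation across the allocated nodes.
rdd = spark.sparkContext.parallelize(range(1_000_000))
print(rdd.map(lambda x: x * x).sum())

spark.stop()
```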
Journal of Open Source Software | 2018
Dave Hudak; Doug Johnson; Alan Chalker; Jeremy W. Nicklas; Eric Franz; Trey Dockendorf; Brian McMichael
The web has become the dominant access mechanism for remote compute services in every computing area except high-performance computing (HPC). Accessing HPC resources, either at the campus or national level, typically requires advanced knowledge of Linux, familiarity with command-line interfaces, and installation and configuration of custom client software (e.g., Secure Shell (SSH) and Virtual Network Computing (VNC)). These additional requirements create an accessibility gap for HPC. To help address this gap, we have created the Open OnDemand Project (Hudak et al. 2016), an open-source software project based on the proven Ohio Supercomputer Center (OSC) OnDemand platform (Hudak et al. 2013), to allow HPC centers to provide advanced web and graphical interfaces for their users.