Publications


Featured research published by Brian Guilfoos.


HPCMP Users Group Conference | 2006

Octave and Python: High-Level Scripting Languages Productivity and Performance Evaluation

Juan Carlos Chaves; John Nehrbass; Brian Guilfoos; Judy Gardiner; Stanley C. Ahalt; Ashok K. Krishnamurthy; Jose Unpingco; Alan Chalker; Andy Warnock; Siddharth Samsi

Octave and Python are open-source alternatives to MATLAB, which is widely used by the High Performance Computing Modernization Program (HPCMP) community. These languages are two well-known examples of high-level scripting languages that promise to increase productivity without compromising performance on HPC systems. In this paper, we report our work and experience with these two non-traditional programming languages at the HPCMP Centers. We used a representative sample of signal/image processing (SIP) codes for the study, with special emphasis on understanding issues such as portability, degree of complexity, productivity, and the suitability of Octave and Python for addressing SIP problems on the HPCMP HPC platforms. We implemented a relatively simple two-dimensional (2D) FFT and a more complex image enhancement algorithm in Octave and Python and benchmarked these SIP codes on several HPCMP platforms, paying special attention to usability, productivity, and performance. Moreover, we performed a thorough benchmark of important low-level SIP core functions and algorithms and compared the outcome with the corresponding results for MATLAB. We found that the capabilities of these languages are comparable to MATLAB and that they are powerful enough to efficiently implement complex SIP algorithms. Productivity and performance results for each language vary depending on the specific task and the availability of high-level functions in each system to address such tasks. Therefore, the choice of the best language to use in a particular instance will strongly depend upon the specifics of the SIP application that needs to be addressed. We concluded that Octave and Python are promising tools that may provide an alternative to MATLAB without compromising performance and productivity. Their syntax and functionality are similar enough to MATLAB's to present a very shallow learning curve for experienced MATLAB users.
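
The 2D FFT kernel mentioned above is small enough to illustrate. Below is a minimal Python/NumPy sketch of a 2D FFT timing loop in the spirit of the benchmarks described; the array sizes and repeat count are arbitrary choices for illustration, not the paper's actual benchmark parameters.

```python
import time
import numpy as np

def time_fft2(n=1024, repeats=10):
    """Return the best wall-clock time for a 2D FFT on an n-by-n complex array."""
    x = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.fft.fft2(x)                      # forward 2D FFT
        best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    for n in (256, 512, 1024):
        print(f"{n}x{n} 2D FFT: {time_fft2(n):.4f} s")
```

Taking the best of several repeats, rather than the mean, is a common way to reduce timing noise on shared HPC nodes.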


HPCMP Users Group Conference | 2006

Interfacing PC-based MATLAB Directly to HPC Resources

John Nehrbass; Siddharth Samsi; Juan Carlos Chaves; Jose Unpingco; Brian Guilfoos; Ashok K. Krishnamurthy; Alan Chalker; Judy Gardiner

Many DoD HPC users, particularly in the SIP area, run codes developed with MATLAB and related applications (MatlabMPI, Star-P, pMatlab, etc.). There is a desire to run codes from a desktop instance of MATLAB and to connect to and interact with codes running on HPC resources. The PET SIP team has developed and demonstrated technology that makes this possible. The SSH toolbox for MATLAB enables users to connect to and use HPC resources over SSH without leaving the MATLAB environment. The toolbox uses a freely available implementation of SSH, a modified version of which is also used by the DoD HPCMP. The SSH toolbox consists of a Windows DLL written in C, which is used by MATLAB to communicate with the SSH client. The toolbox provides simple MATLAB commands for users to connect to remote resources, run code, retrieve results, and end the SSH session. The complexity of the DLL interface and most of the security details are hidden from the user, making this a powerful and very easy-to-use toolbox. Since the main component of the toolbox is written in C and packaged as a DLL, the toolbox can also be extended to work with other programming languages such as Java, Python, and Octave. MATLAB-style documentation makes it easy to obtain help on various aspects of the toolbox, and a GUI-based installer makes distribution easier. This technology provides a revolutionary way of providing support to the DoD: software developers are now able to provide all the hooks into a complicated HPC environment, removing that burden from end users.
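
The toolbox itself is a MATLAB front end over a C DLL, and its command names are not given here; the following hedged Python sketch, using the paramiko SSH library, merely illustrates the connect / run / retrieve / disconnect workflow the abstract describes. The host name, user name, and file paths are placeholders, and key-based authentication is assumed rather than the Kerberized setup used by the HPCMP.

```python
import paramiko

HOST, USER = "hpc.example.org", "username"   # placeholders, not real HPCMP endpoints

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)          # connect to the remote resource

# Run a command on the remote system and collect its output.
stdin, stdout, stderr = client.exec_command("qstat -u $USER")
print(stdout.read().decode())

# Retrieve a result file, then end the session.
sftp = client.open_sftp()
sftp.get("results/output.mat", "output.mat")
sftp.close()
client.close()
```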


HPCMP Users Group Conference | 2006

Enhancements to MatlabMPI: Easier Compilation, Collective Communication, and Profiling

Judy Gardiner; John Nehrbass; Juan Carlos Chaves; Brian Guilfoos; Ashok K. Krishnamurthy; Jose Unpingco; Alan Chalker; Siddharth Samsi

This paper provides a brief overview of several enhancements made to the MatlabMPI suite. MatlabMPI is a pure-MATLAB implementation of the core parts of the MPI specification. The enhancements make MatlabMPI a more attractive option for HPCMP users designing parallel MATLAB code. Intelligent compiler configuration tools have also been delivered to further isolate MatlabMPI users from the complexities of the UNIX environments on the various HPCMP systems. Users are now able to install and use MatlabMPI with less difficulty, greater flexibility, and increased portability. Collective communication functions were added to MatlabMPI to expand functionality beyond the core implementation. Profiling capabilities, producing TAU (Tuning and Analysis Utilities) trace files, are now offered to support parallel code optimization. All of these enhancements have been tested and documented on a variety of HPCMP systems. All material, including commented example code demonstrating the usefulness of MatlabMPI, is available by contacting the authors.
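
MatlabMPI's collective routines are pure MATLAB and their interface is not reproduced here; as a language-neutral illustration of what collective communication adds beyond point-to-point messaging, here is a minimal Python sketch using mpi4py (the use of mpi4py is an assumption of this example, not a MatlabMPI dependency).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Broadcast: rank 0 distributes a parameter set to every process in one call,
# instead of looping over point-to-point sends.
params = {"n": 1024, "iters": 10} if rank == 0 else None
params = comm.bcast(params, root=0)

# Reduce: every process contributes a partial result; rank 0 receives the sum.
partial = rank * params["n"]
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("combined result:", total)
```

Run with, for example, `mpiexec -n 4 python collectives.py`.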


IEEE International Conference on High Performance Computing Data and Analytics | 2007

Web Interface for Querying/Searching RDF Database

Brian Guilfoos; Siddharth Samsi; Juan Carlos Chaves; Jose Unpingco; John Nehrbass; Alan Chalker; Stanley C. Ahalt; Ashok K. Krishnamurthy

The Resource Description Framework (RDF) is a language for representing information about resources on the web. However, RDF can also be used to describe other data and the relationships between objects in the data. Many applications in the signal/image processing (SIP) community (such as radar imaging, electromagnetics, etc.) generate large amounts of data. Researchers would like to have online access to this data as well as the ability to easily explore and mine it. Our application's RDF metadata representation is similar to that of a conventional database, and users can use forms to search the database or use the standard RDF query language, SPARQL, to create queries. In most cases, all the data as well as its RDF description resides on secure Department of Defense (DoD) major shared resource center (MSRC) resources. In order to provide a web interface for exploring this data, we need a secure way to access the user data. Toward this goal, we use the user interface toolkit (UIT) to provide a web application that allows users to browse and search the RDF metadata of large SIP databases securely and conveniently from their desktop. The UIT uses the same Kerberos technology and SecurID cards that are used to access all MSRC machines and provides an application programming interface (API) for building clients that access computing resources in the DoD High Performance Computing Modernization Program (HPCMP).
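
As a small illustration of the forms-versus-SPARQL distinction, the sketch below runs a SPARQL query over an RDF file using Python's rdflib. The file name, namespace, and predicate names are invented for the example and are not the application's actual schema.

```python
from rdflib import Graph

g = Graph()
g.parse("sip_metadata.rdf", format="xml")    # hypothetical RDF metadata file

# Find every dataset record and its creation date (illustrative predicates only).
query = """
    PREFIX ex: <http://example.org/sip#>
    SELECT ?dataset ?created
    WHERE { ?dataset ex:created ?created . }
    ORDER BY ?created
"""
for dataset, created in g.query(query):
    print(dataset, created)
```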


Extreme Science and Engineering Discovery Environment | 2012

Computational science certificates for the current workforce: lessons learned

Steven I. Gordon; Judith D. Gardiner; Brian Guilfoos

One of the keys to the future competitiveness of U.S. industry is the integration of modeling and simulation into the development, design, and manufacturing process. A related challenge is to retrain the current workforce in the use of computational modeling so that it can be applied effectively in the workplace. We review our implementation of a new computational science certificate program aimed at the current workforce in Ohio. The structure of the current program is discussed along with the problems associated with meeting the educational needs of this population.


IEEE International Conference on High Performance Computing Data and Analytics | 2009

A Java-Based Interface for Creating and Mining RDF Database

Siddharth Samsi; Brian Guilfoos; Harrison Ben Smith; Jose Unpingco; Alan Chalker

The Resource Description Framework (RDF) language can be used to describe data and the relationships between different objects in the data. As larger amounts of data are generated, many applications in the signal and image processing areas, such as radar image processing, electromagnetics, etc., present users with the challenge of representing and mining the data. In many cases, this data resides on secure Department of Defense Supercomputing Resource Centers (DSRCs). Our earlier work developed a web interface for querying and searching this RDF data and also allowed users to transfer the data between DSRCs. In this paper, we describe the architecture improvements in the web interface that make the application easier to deploy, maintain, and modify. The entire application has been refactored in an object-oriented manner, making it easier to customize and re-use parts of the application in other Java-based tools. Additionally, we have developed a Java-based tool for creating the metadata associated with existing data. The RDF creation tool makes it easy for users to create RDF databases without the need to learn the intricacies of the RDF language. This new tool can also be integrated into the web application, thus allowing users to generate RDF databases for new data. Pilot studies were also conducted to enable the use of mpscp for high-bandwidth data transfers, with promising results.
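
The RDF creation tool described above is Java-based; the hedged Python/rdflib sketch below shows the analogous idea of programmatically generating RDF metadata for an existing data file. The namespace, class, and property names are invented for illustration and do not reflect the tool's actual schema.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sip#")    # illustrative namespace only

g = Graph()
g.bind("ex", EX)

record = URIRef("http://example.org/sip/dataset/0001")
g.add((record, RDF.type, EX.RadarImage))
g.add((record, EX.path, Literal("/data/run0001/image.dat")))
g.add((record, EX.created, Literal("2009-03-15", datatype=XSD.date)))

g.serialize(destination="dataset0001.rdf", format="xml")
```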


IEEE International Conference on High Performance Computing Data and Analytics | 2009

Evaluating Parallel Extensions to High Level Languages Using the HPC Challenge Benchmarks

Laura Humphrey; Brian Guilfoos; Harrison Ben Smith; Andrew Warnock; Jose Unpingco; Bracy H. Elton; Alan Chalker

Recent years have seen the development of many new parallel extensions to high-level languages. However, there does not yet seem to have been a concentrated effort to quantify their performance or qualify their usability. Toward this end, we have used several parallel extensions to implement four of the High Performance Computing (HPC) Challenge benchmarks (FFT, HPL, RandomAccess, and STREAM) according to the Class 2 specifications. The parallel extensions used here include pMatlab, Star-P, and the official Parallel Computing Toolbox for MATLAB; pMatlab for Octave; and Star-P for Python. We have recorded performance results for the benchmarks using these extensions on the Ohio Supercomputer Center's supercomputer Glenn as well as several of the Department of Defense Supercomputing Resource Centers (DoD DSRCs). These results are compared to those of the original C benchmarks as run on Glenn. We also highlight some of the features of these parallel extensions, as well as those of gridMathematica for Mathematica and IPython for Python, which have not yet been fully benchmarked.
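
For reference, the simplest of the four kernels, STREAM, measures sustainable memory bandwidth with a "triad" update. The serial Python/NumPy sketch below shows that kernel only as an illustration; the array length is arbitrary, and this is not the Class 2 parallel implementation benchmarked in the paper.

```python
import time
import numpy as np

N = 10_000_000                       # array length, arbitrary for illustration
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c                # STREAM triad: a = b + q*c
elapsed = time.perf_counter() - t0

# The triad touches three arrays of 8-byte floats: two reads and one write.
bandwidth_gbs = 3 * N * 8 / elapsed / 1e9
print(f"triad bandwidth: {bandwidth_gbs:.2f} GB/s")
```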


IEEE International Conference on High Performance Computing Data and Analytics | 2009

A Scalability Study (as a Guide for HPC Operations at a Remote Test Facility) on DSRC HPC Systems of Radio Frequency Tomography Code Written for MATLAB® and Parallelized via Star-P®

Bracy H. Elton; Siddharth Samsi; Harrison Ben Smith; Laura Humphrey; Brian Guilfoos; Stanley C. Ahalt; Alan Chalker; Kevin M. Magde; Niraj Srivastava; Aquil H. Abdullah; Patrick Boyle

A team of researchers at the Air Force Research Laboratory in Rome, NY is building a remote test facility for developing a radio frequency (RF) tomography imaging capability. While conducting operations at the test site, and via batch reservations, they plan to employ the MJM distributed-memory system at the Army Research Laboratory Department of Defense Supercomputing Resource Center. We present a scalability study of example RF tomography code, written in the M language of MATLAB and parallelized via Star-P, on the MJM system. The team can use the study to help guide operations while at the remote test facility. We are not attempting to show that the RF tomography code scales well; indeed, it suffers from communication bottlenecks in parts of the algorithms. Nonetheless, this is the code the team uses and, for planning purposes, the team needs to know how long it takes to produce images of a given size for a given number of processors with the existing algorithms.
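
As a worked illustration of how such a study is read for planning purposes, parallel speedup and efficiency follow directly from wall-clock times. The timings below are invented placeholders, not measurements from MJM.

```python
# Hypothetical wall-clock times (seconds) to reconstruct one image on p processors.
timings = {1: 3600.0, 8: 520.0, 32: 160.0, 128: 70.0}

t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp                # how much faster than the serial run
    efficiency = speedup / p         # fraction of ideal linear scaling achieved
    print(f"p={p:4d}  time={tp:7.1f}s  speedup={speedup:6.1f}  efficiency={efficiency:5.2f}")
```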


HPCMP Users Group Conference | 2006

Applications in Parallel MATLAB

Brian Guilfoos; Judy Gardiner; Juan Carlos Chaves; John Nehrbass; Ashok K. Krishnamurthy; Jose Unpingco; Alan Chalker; Laura Humphrey; Siddharth Samsi

The parallel MATLAB implementations used for this project are MatlabMPI and pMatlab, both developed by Dr. Jeremy Kepner at MIT Lincoln Laboratory (MIT-LL). MatlabMPI is based on the message passing interface (MPI) standard, in which processes coordinate their work and communicate by passing messages among themselves. The pMatlab library supports parallel array programming in MATLAB: the user program defines arrays that are distributed among the available processes, and although communication between processes is actually done through message passing, the details are hidden from the user. The objective of this PET project was to develop parallel MATLAB code for selected algorithms that are of interest to the Department of Defense (DoD) signal/image processing (SIP) community and to run the code on HPCMP systems. The algorithms selected for parallel MATLAB implementation were a support vector machine (SVM) classifier, Metropolis-Hastings Markov chain Monte Carlo (MCMC) simulation, and content-based image compression (CBIC).
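
Of the three selected algorithms, Metropolis-Hastings MCMC is compact enough to sketch. Below is a minimal serial Python/NumPy random-walk sampler targeting a standard normal distribution; the target density and proposal width are illustrative, and this is not the project's parallel MATLAB implementation.

```python
import numpy as np

def metropolis_hastings(log_target, x0=0.0, steps=10_000, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D target density."""
    rng = np.random.default_rng(seed)
    samples = np.empty(steps)
    x, logp = x0, log_target(x0)
    for i in range(steps):
        proposal = x + proposal_sd * rng.standard_normal()
        logp_prop = log_target(proposal)
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        samples[i] = x
    return samples

# Example: sample a standard normal, log p(x) = -x^2/2 up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x)
print(draws.mean(), draws.std())
```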


Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016

The Advanced Cyberinfrastructure Research and Education Facilitators Virtual Residency: Toward a National Cyberinfrastructure Workforce

Henry Neeman; Aaron Bergstrom; Dana Brunson; Carrie L. Ganote; Zane Gray; Brian Guilfoos; Robert Kalescky; Evan C. Lemley; Brian Moore; Sai Kumar Ramadugu; Alana Romanella; Johnathan Rush; Andrew H. Sherman; Brian Stengel; Dan Voss

Collaboration


Dive into Brian Guilfoos's collaborations.

Top Co-Authors

Alan Chalker (Ohio Supercomputer Center)
Jose Unpingco (Ohio Supercomputer Center)
Siddharth Samsi (Massachusetts Institute of Technology)
Judy Gardiner (Ohio Supercomputer Center)
Laura Humphrey (Ohio Supercomputer Center)
John Nehrbass (Ohio Supercomputer Center)