Preston M. Smith
Purdue University
Publications
Featured research published by Preston M. Smith.
IEEE International Conference on Cloud Computing Technology and Science | 2010
Adam G. Carlyle; Stephen Lien Harrell; Preston M. Smith
The increasing availability of commercial cloud computing resources in recent years has caught the attention of the high-performance computing (HPC) and scientific computing community. Many researchers have subsequently examined the relative computational performance of commercially available cloud computing offerings across a number of HPC application benchmarks and scientific workflows, but the analogous cost comparisons—i.e., comparisons between the cost of doing scientific computation in traditional HPC environments vs. cloud computing environments—are less frequently discussed and are difficult to make in meaningful ways. Such comparisons are of interest to traditional HPC resource providers as well as to members of the scientific research community who need access to HPC resources on a routine basis. This paper is a case study of costs incurred by faculty end-users of Purdue University’s HPC “community cluster” program. We develop and present a per node-hour cloud computing equivalent cost that is based upon actual usage patterns of the community cluster participants and is suitable for direct comparison to hourly costs charged by one commercial cloud computing provider. We find that the majority of community cluster participants incur substantially lower out-of-pocket costs in this community cluster program than in purchasing cloud computing HPC products.
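As a rough illustration of how such a per node-hour equivalence works, one can amortize a node's purchase price over the hours it is actually used and compare that to a cloud provider's hourly rate. The sketch below is hypothetical; every price, lifetime, and utilization figure is invented for demonstration and is not the paper's data.

```python
# Hypothetical sketch of a per node-hour cost comparison; all figures
# here are invented, not taken from the paper's usage records.

def cluster_cost_per_node_hour(purchase_price, lifetime_years, utilization):
    """Amortize a node's purchase price over the hours it is actually used."""
    hours_available = lifetime_years * 365 * 24
    return purchase_price / (hours_available * utilization)

# Assumed: a $6,000 node, a five-year lifetime, 80% utilization.
cluster = cluster_cost_per_node_hour(6000.0, 5, 0.80)

# Assumed: a comparable cloud HPC instance billed at $1.60/hour.
cloud = 1.60

print(f"community cluster: ${cluster:.2f} per node-hour")
print(f"cloud equivalent:  ${cloud:.2f} per node-hour")
```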
Challenges of Large Applications in Distributed Environments | 2006
S.L. Gooding; L. Arns; Preston M. Smith; J. Tillotson
This paper discusses the implementation of a distributed rendering environment (DRE) utilizing the TeraGrid. Using the new system, researchers and students across the TeraGrid have access to available resources for distributed rendering. Previously, researchers at universities and national labs using high-end rendering software, such as the RenderMan-compliant Pixie, were often limited by the amount of time it takes to calculate (render) their final images. The amount of time required to render introduces several potential complications in a research setting. In contrast, a typical animation studio has a render farm, consisting of a cluster of computers (nodes) used to render 3D images, known as a distributed rendering environment. By spreading the rendering across hundreds of machines, the overall render time is reduced significantly. Unfortunately, most researchers do not have access to a distributed rendering environment. Our university has been developing a DRE for local use. However, because we are a TeraGrid site, we recently modified our DRE implementation to make use of open source rendering tools and grid tools such as Condor, in order to make the DRE available to other TeraGrid users.
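The core mechanic of a render farm is splitting a frame range across many nodes so each renders a slice in parallel. A minimal sketch of that partitioning, with a hypothetical renderer command standing in for the actual Pixie/Condor tooling:

```python
# Sketch of splitting an animation's frame range into per-node jobs,
# in the spirit of a Condor-style DRE. The render command is hypothetical.

def partition_frames(first, last, n_nodes):
    """Split an inclusive frame range into near-equal chunks, one per node."""
    total = last - first + 1
    base, extra = divmod(total, n_nodes)
    chunks, start = [], first
    for i in range(n_nodes):
        size = base + (1 if i < extra else 0)
        if size == 0:
            break
        chunks.append((start, start + size - 1))
        start += size
    return chunks

for lo, hi in partition_frames(1, 240, 8):
    # Each line would become one queued job on a render node.
    print(f"render --frames {lo}-{hi} scene.rib")
```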
International Parallel and Distributed Processing Symposium | 2008
Preston M. Smith; Thomas J. Hacker; Carol Song
Purdue University operates one of the largest cycle recovery systems in academia, based on the Condor workload management system. This system represents a valuable and useful cyberinfrastructure (CI) resource supporting research and education for campus and national users. During the construction and operation of this CI, we encountered many unforeseen challenges and benefits unique to an actively used infrastructure of this size. The most significant problems were integrating Condor with existing campus HPC resources, managing resource and user growth, coping with the distributed ownership of compute resources around campus, and integrating this CI with the TeraGrid and Open Science Grid. In this paper, we describe some of our experiences and establish some best practices, which we believe will be valuable and useful to other academic institutions seeking to operate a production campus cyberinfrastructure of a similar scale and utility.
TeraGrid Conference | 2011
Stephen Lien Harrell; Preston M. Smith; Doug Smith; Torsten Hoefler; Anna A. Labutina; Trinity Overmyer
This paper describes methods that can be used to create new Student Cluster Competition teams, from the standpoint of the team advisors. The purpose is to share these methods in order to create an easier path toward organizing a successful team. These methods were gleaned from a survey of advisors who have formed teams in the last four years. Four advisors responded to the survey, and those responses fit into five categories: (1) early preparation, (2) coursework specific to the competition, (3) close relationships with the hardware vendors, (4) concentration on the applications over the hardware, and (5) the need to encourage team members to write papers about their experiences. In addition to these commonalities, which may be best practices, there are a few divergent but intriguing techniques that may also prove useful for potential advisors. Both are discussed here, and these methods can serve as a primer for anyone looking to start a new Student Cluster Competition team.
2016 New York Scientific Data Summit (NYSDS) | 2016
Boyu Zhang; Line C. Pouchard; Preston M. Smith; Amandine Gasc; Bryan C. Pijanowski
Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service at Purdue University that provides shared space within a group, shared applications, flexible access patterns, and ease of transfer. We evaluate Depot as a solution for storing and sharing the multi-terabyte datasets produced in the long tail of science, with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, and integrate their workflows into a High Performance Computing environment.
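The abstract does not specify how the fine-grained access control is implemented; one common way to realize it on a POSIX filesystem is with access control lists. A hedged sketch, with hypothetical paths and group names:

```python
# Hypothetical sketch: granting a lab group read/write/traverse access
# to a shared project directory with POSIX ACLs (via setfacl).
import subprocess

def grant_group_access(path, group, perms="rwx"):
    """Apply an ACL for `group` on `path`, plus a default ACL so newly
    created files and directories inherit the same permissions."""
    subprocess.run(["setfacl", "-m", f"g:{group}:{perms}", path], check=True)
    subprocess.run(["setfacl", "-d", "-m", f"g:{group}:{perms}", path], check=True)

# Assumed path and group name, for illustration only.
grant_group_access("/depot/soundscapes/data", "hema-lab")
```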
Proceedings of the First International Workshop on HPC User Support Tools | 2014
Kevin D. Colby; Daniel T. Dietz; Preston M. Smith; Donna D. Cumberland
Under a shared campus cluster model, with many different investing research groups, annual cluster acquisitions, and constant additions and removals of students and collaborators on resources owned by partner faculty, campus IT staff set out to design a cluster management solution that empowers faculty to manage access to their purchased resources. This system also needed to allow IT staff to quickly provision resources, provide accurate accounting and tracking of faculty purchases over time, and provide a single location for data about the cluster program. This paper describes the applications developed to let faculty directly grant and remove access to their resources and to improve the ability of IT staff to provision, track, and manage these resources.
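The paper's applications are not described at the code level, but the bookkeeping at their core, which faculty own which nodes and which users they have granted access to, can be modeled simply. A hypothetical sketch:

```python
# Hypothetical data model for faculty-managed access to purchased nodes.
from dataclasses import dataclass, field

@dataclass
class Partition:
    """Nodes bought by one research group in one cluster acquisition."""
    owner: str
    cluster: str
    nodes: int
    members: set = field(default_factory=set)

    def grant(self, username):
        self.members.add(username)

    def revoke(self, username):
        self.members.discard(username)

# Illustrative names only.
part = Partition(owner="pi_smith", cluster="cluster-2014", nodes=16)
part.grant("new_grad_student")
part.revoke("former_collaborator")
print(sorted(part.members))
```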
Proceedings of the Practice and Experience on Advanced Research Computing | 2018
Richard H. Grant; Stuart D. Smith; Stephen Lien Harrell; Alex Younts; Preston M. Smith
Scientists with non-traditional computational and workflow needs are a growing demographic in research computing. In order to serve these scientific communities, we must create new ways to leverage existing resources. One set of problems that is not properly served revolves around Microsoft Windows-based software. This software may be legacy software with no modern counterpart, or traditionally GUI-driven software into which batch components have been integrated. In this paper we describe a general solution, and three successful use cases, using Microsoft Windows virtual machines in both interactive and non-interactive batch jobs on Linux-based Beowulf-style clusters to complete workflows based on Microsoft Windows software. With this general solution, the utility of ubiquitous Beowulf clusters can be extended.
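For the non-interactive case, the essential move is booting a Windows image headlessly from inside a Linux batch job. A minimal sketch using QEMU, with an illustrative image path; the paper's exact mechanism and any automation inside the guest are not shown here:

```python
# Hypothetical batch-job wrapper that runs a Windows-only tool inside
# a throwaway VM on a Linux compute node.
import subprocess

def run_windows_vm(image, memory_mb=4096, cpus=4):
    """Boot a Windows image headless; -snapshot discards all writes, so
    a shared base image is never modified by concurrent jobs."""
    cmd = [
        "qemu-system-x86_64",
        "-m", str(memory_mb),
        "-smp", str(cpus),
        "-hda", image,
        "-snapshot",        # copy-on-write: the base image stays pristine
        "-display", "none", # headless, suitable for a batch node
    ]
    subprocess.run(cmd, check=True)

# Assumed image path, for illustration only.
run_windows_vm("/apps/images/windows-workflow.qcow2")
```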
Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact | 2017
Gladys K. Andino; Marisa Brazil; Michael Gribskov; Preston M. Smith
We seek to describe the efforts that our research computing team at Purdue is pursuing to advance and support the representation and diversity of women in High Performance Computing (HPC).
Proceedings of the HPC Systems Professionals Workshop | 2017
Preston M. Smith; Jason St. John; Stephen Lien Harrell
Configuration management is a critical tool in the management of the large groups of computer systems that are vital to the deployment of High Performance Computing (HPC). In this paper, we describe the history, architecture, overarching goals, and outcomes of the various configuration management systems used to support HPC at Purdue University. Additionally, we enumerate best practices of configuration management discovered in the strongly iterative HPC environment at Purdue.
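At the heart of any such system is the idempotent "ensure" pattern: declare the desired state and act only when the observed state differs, so runs are safe to repeat. A toy sketch of the pattern, not any specific tool used at Purdue:

```python
# Toy illustration of idempotent configuration enforcement.
import os

def ensure_file(path, content, mode=0o644):
    """Make `path` exist with exactly `content` and `mode`;
    do nothing if it already complies."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != content:
        with open(path, "w") as f:
            f.write(content)
    if os.stat(path).st_mode & 0o777 != mode:
        os.chmod(path, mode)

# Re-running this converges the node without side effects.
ensure_file("/etc/motd", "HPC community cluster node\n")
```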
Nuclear Instruments & Methods in Physics Research Section B: Beam Interactions with Materials and Atoms | 2007
Xiuzeng Ma; Yingkui Li; Mike Bourgeois; Marc W. Caffee; David Elmore; D. Granger; Paul Muzikar; Preston M. Smith