George Turner
Indiana University Bloomington
Publications
Featured research published by George Turner.
Extreme Science and Engineering Discovery Environment | 2015
Craig A. Stewart; Timothy Cockerill; Ian T. Foster; David Y. Hancock; Nirav Merchant; Edwin Skidmore; Dan Stanzione; James Taylor; Steven Tuecke; George Turner; Matthew W. Vaughn; Niall Gaffney
Jetstream will be the first production cloud resource supporting general science and engineering research within the XD ecosystem. In this report we describe the motivation for proposing Jetstream, the configuration of the Jetstream system as funded by the NSF, the team that is implementing Jetstream, and the communities we expect to use this new system. Our hope and plan is that Jetstream, which will become available for production use in 2016, will aid thousands of researchers who need modest amounts of computing power interactively. The implementation of Jetstream should increase the size and disciplinary diversity of the US research community that makes use of the resources of the XD ecosystem.
International Parallel and Distributed Processing Symposium | 2004
Peng Wang; George Turner; Daniel A. Lauer; Matthew Allen; Stephen C. Simms; David Hart; Mary Papakhian; Craig A. Stewart
Summary form only given. As the first geographically distributed supercomputer on the TOP500 list, the AVIDD facility of Indiana University ranked 50th in June 2003, achieving 1.169 teraflops on the LINPACK benchmark. Here, we report our work on improving LINPACK performance, and analyze the impact of the math kernel, the LINPACK problem size, and network tuning based on the LINPACK performance model.
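The problem-size tuning mentioned in the abstract above follows a widely used HPL rule of thumb: size the N x N double-precision matrix to fill roughly 80% of aggregate memory. This sketch is illustrative only; the memory figure and fill factor below are assumptions, not values taken from the paper.

```python
import math

def hpl_problem_size(total_mem_bytes, fill=0.8):
    """Rule-of-thumb HPL problem size N: the N x N matrix of
    8-byte doubles should occupy about `fill` of aggregate memory."""
    return math.isqrt(int(fill * total_mem_bytes) // 8)

def hpl_flop_count(n):
    """Dominant floating-point operation count for HPL: (2/3)n^3 + 2n^2."""
    return (2 / 3) * n**3 + 2 * n**2

# Hypothetical cluster with 1 TiB of aggregate memory
n = hpl_problem_size(1 << 40)
ops = hpl_flop_count(n)
```

Dividing `ops` by the measured wall-clock time of a run gives the sustained flop rate reported by the benchmark.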
SIGUCCS: User Services Conference | 2017
Jeremy Fischer; David Y. Hancock; John Michael Lowe; George Turner; Winona Snapp-Childs; Craig A. Stewart
Jetstream is the first production cloud funded by the NSF for conducting general-purpose science and engineering research, as well as an easy-to-use platform for education activities. Unlike many high-performance computing systems, Jetstream uses the interactive Atmosphere graphical user interface developed as part of the iPlant (now CyVerse) project and focuses on interactive uniprocessor and multiprocessor use. This interface lowers the barrier of entry for educators, students, practicing scientists, and engineers. A key part of Jetstream's mission is to extend the reach of the NSF's eXtreme Digital (XD) program to a community of users who have not previously utilized NSF XD program resources, including those communities and institutions that traditionally lack significant cyberinfrastructure resources. One manner in which Jetstream eases this access is via virtual desktops, facilitating use in education and research at small colleges and universities, including Historically Black Colleges and Universities (HBCUs), Minority Serving Institutions (MSIs), Tribal colleges, and higher education institutions in states designated by the NSF as eligible for funding via the Experimental Program to Stimulate Competitive Research (EPSCoR). Jetstream entered full production in September 2016 and in its first six months supported more than a dozen educational efforts across the United States. Here, we discuss how educators at institutions of higher education have been using Jetstream in the classroom and at student-focused workshops. Specifically, we explore success stories, difficulties encountered, and everything in between. We also discuss plans for increasing the use of cloud-based systems in higher education. A primary goal of this paper is to spark discussions between educators and information technologists on how to improve the use of cloud resources in education.
Proceedings of the Practice and Experience on Advanced Research Computing | 2018
Jeremy Fischer; Brian W. Beck; Sanjana Sudarshan; George Turner; Winona Snapp-Childs; Craig A. Stewart; David Y. Hancock
There are numerous domains of science that have been using high performance computing (HPC) systems for decades. Historically, when new HPC resources are introduced, specific variations may require researchers to make minor adjustments to their workflows, but the general usage and expectations remain much the same. This consistency means that domain scientists can generally move from system to system as necessary, and as new resources come online they can be fairly easily adopted by these researchers. However, as novel resources such as cloud computing systems become available, additional work may be required to help researchers find and use them. When the goal of a system's funding and deployment is to reach non-traditional research groups that have been under-served by the national cyberinfrastructure, a different approach to system adoption and training is required. When Jetstream was funded by the NSF as the first production research cloud, it became clear that attracting non-traditional or under-served researchers would require a very proactive approach. Here we show how the Jetstream team 1) developed methods and practices for increasing awareness of the system among both traditional HPC users and under-served, non-traditional users of HPC systems, and 2) developed training approaches that highlight the capabilities a cloud system offers that differ from traditional HPC systems. We also discuss areas of success and failure, and plans for future efforts.
Proceedings of the HPC Systems Professionals Workshop | 2017
David Akin; Mehmet Belgin; Timothy A. Bouvet; Neil Bright; Stephen Lien Harrell; Brian Haymore; Michael Jennings; Rich Knepper; Daniel LaPine; Fang Cherry Liu; Amiya Kumar Maji; Henry Neeman; Resa Reynolds; Andrew H. Sherman; Michael Showerman; Jenett Tillotson; John Towns; George Turner; Brett Zimmerman
We discuss training workshops run by the Linux Clusters Institute (LCI), which provides education and advanced technical training for IT professionals who deploy and support High Performance Computing (HPC) Linux clusters, which have become the most ubiquitous tools for HPC worldwide. The LCI offers workshops that cover the basics of Linux HPC cluster system administration, including hardware (computing, storage, and networking); system-level software (e.g., provisioning systems, resource managers, and job schedulers); system security; and user support. These workshops also aim to seed an HPC systems professional community of practice, by bringing together groups of research computing professionals to obtain essential training in cluster administration, while interacting with the experienced HPC systems professionals who serve as instructors and mentors.
Scientific Cloud Computing | 2018
John Michael Lowe; Jeremy Fischer; Sanjana Sudarshan; George Turner; Craig A. Stewart; David Y. Hancock
Research computing has traditionally relied on high performance computing (HPC) clusters, a service that could not offer high availability without doubling computational and storage capacity. System maintenance such as security patching, firmware updates, and other system upgrades generally meant that the system would be unavailable for the duration of the work unless one had redundant HPC systems and storage. While efforts were often made to limit downtimes, when maintenance became necessary the windows might run from one or two hours to as much as an entire day. As the National Science Foundation (NSF) began funding non-traditional research systems, providing higher availability for researchers became one focus for service providers. One of the design elements of Jetstream was geographic dispersion to maximize availability. This was the first of a number of design elements intended to make Jetstream exceed the NSF's availability requirements. We examine the design steps employed, the components of the system and how availability for each was considered in deployment, how maintenance is handled, and the lessons learned from the design and implementation of the Jetstream cloud.
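The benefit of the geographic dispersion described in the abstract above can be illustrated with the standard redundancy calculation: a service survives as long as at least one replicated site is up, so overall downtime is the product of per-site downtimes. The per-site availability figures below are hypothetical, not values from the paper.

```python
def parallel_availability(*site_availabilities):
    """Availability of a service that remains up while at least one
    replicated site is up: 1 minus the product of per-site downtimes."""
    downtime = 1.0
    for a in site_availabilities:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# Two geographically dispersed sites, each hypothetically at 99% availability,
# yield roughly 99.99% combined availability.
combined = parallel_availability(0.99, 0.99)
```

The calculation assumes site failures are independent, which geographic dispersion is meant to approximate; correlated failures (e.g., a shared network backbone outage) would reduce the combined figure.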
Concurrency and Computation: Practice and Experience | 2018
David Y. Hancock; Craig A. Stewart; Matthew W. Vaughn; Jeremy Fischer; John Michael Lowe; George Turner; Tyson L. Swetnam; Tyler K. Chafin; Enis Afgan; Marlon E. Pierce; Winona Snapp-Childs
Jetstream is a first-of-its-kind system for the NSF: a distributed production cloud resource. We review the purpose for creating Jetstream, discuss Jetstream's key characteristics, describe our experiences from the first year of maintaining an OpenStack-based cloud environment, and share some of the early scientific impacts achieved by Jetstream users. Jetstream offers a unique capability within the XSEDE-supported US national cyberinfrastructure, delivering interactive virtual machines (VMs) via the Atmosphere interface. As a multi-region deployment that operates as an integrated system, Jetstream is proving effective in supporting modes and disciplines of research traditionally underrepresented on larger XSEDE-supported clusters and supercomputers. Already, Jetstream has been used to perform research and education in biology, biochemistry, atmospheric science, earth science, and computer science.
Icarus | 1998
Gregor E. Morfill; Richard H. Durisen; George Turner
Archive | 2002
Beth Plale; George Turner; Akshay Sharma
Archive | 2016
Craig A. Stewart; Dan Stanzione; Timothy Cockerill; Edwin Skidmore; Jeremy Fischer; John Michael Lowe; Bret Hammond; George Turner; David Y. Hancock; Therese Miller