Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard Knepper is active.

Publication


Featured research published by Richard Knepper.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Demonstrating Lustre over a 100 Gbps wide area network of 3,500 km

Robert Henschel; Stephen C. Simms; David Y. Hancock; Scott Michael; Tom Johnson; Nathan Heald; Thomas William; Donald K. Berry; Matthew Allen; Richard Knepper; Matt Davy; Matthew R. Link; Craig A. Stewart

As part of the SCinet Research Sandbox at the Supercomputing 2011 conference, Indiana University (IU) demonstrated use of the Lustre high performance parallel file system over a dedicated 100 Gbps wide area network (WAN) spanning more than 3,500 km (2,175 mi). This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, low-level iperf network tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system were run over a latency of 50.5 ms. In this article we describe the configuration and constraints of the demonstration and outline key findings.
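
A quick back-of-the-envelope sketch shows why a link like this is hard to fill: the bandwidth-delay product is enormous. The 100 Gbps and 50.5 ms figures below come from the abstract; the iperf invocation in the closing comment is a hypothetical example, not the study's actual command.

```python
# Sketch: bandwidth-delay product (BDP) of the link described above.
# The 100 Gbps and 50.5 ms figures come from the abstract; everything
# else here is illustrative.
bandwidth_bps = 100e9        # 100 Gbps WAN link
rtt_s = 50.5e-3              # 50.5 ms latency

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"~{bdp_bytes / 2**20:.0f} MiB must be in flight to fill the pipe")
# ~602 MiB -- far beyond default TCP buffers, which is why WAN tests
# typically use many parallel streams, e.g. (hypothetical invocation):
#   iperf -c <server> -P 32 -t 60 -w 512M
```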


Extreme Science and Engineering Discovery Environment | 2014

Methods For Creating XSEDE Compatible Clusters

Jeremy Fischer; Richard Knepper; Matthew Standish; Craig A. Stewart; Resa Alvord; David Lifka; Barbara Hallock; Victor Hazlewood

The Extreme Science and Engineering Discovery Environment has created a suite of software that is collectively known as the basic XSEDE-compatible cluster build. It has been distributed as a Rocks roll for some time, and is now available as individual RPM packages that can be downloaded and installed piecemeal, as appropriate, on existing, working clusters. In this paper, we explain the concept of the XSEDE-compatible cluster and explain how to install individual components as RPMs through the use of Puppet and the XSEDE-compatible cluster YUM repository.
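
As a rough illustration of the RPM-based route the abstract describes, the sketch below drops a YUM repository definition in place and installs one package from it. The repository URL and package name are placeholders, not the actual XSEDE repository, and the real procedure described in the paper drives this through Puppet.

```python
# Minimal sketch of the piecemeal RPM install path described above.
# ASSUMPTIONS: the repo URL and package name are placeholders, not the
# real XSEDE-compatible cluster repository. Run as root.
import subprocess
from pathlib import Path

REPO_FILE = Path("/etc/yum.repos.d/xcbc-example.repo")
REPO_FILE.write_text(
    "[xcbc-example]\n"
    "name=XSEDE-compatible cluster (placeholder)\n"
    "baseurl=https://repo.example.org/xcbc/el6/$basearch/\n"
    "enabled=1\n"
    "gpgcheck=1\n"
)

# Install a single component instead of the all-at-once Rocks roll.
subprocess.run(["yum", "-y", "install", "xcbc-example-component"], check=True)
```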


TeraGrid Conference | 2011

The shape of the TeraGrid: analysis of TeraGrid users and projects as an affiliation network

Richard Knepper

I examine the makeup of the users and projects of the TeraGrid using social network analysis techniques. Analyzing the TeraGrid as an affiliation (two-mode) network allows for understanding the relationship between types of users, field of science, and allocation size of projects. The TeraGrid data shows that while less than half of TeraGrid users are involved in projects that are connected to each other, a considerable core of the TeraGrid emerges that constitutes the most commonly related projects. The largest complete subgraph of TeraGrid users and projects constitutes a denser and more centralized network core of TeraGrid users. I perform social network analysis on the largest complete subgraph in order to identify additional groupings of projects and users within the TeraGrid. This analysis of users and projects provides substantive information about the connections of individual scientists, project groups, and fields of science in a large-scale environment that incorporates both competition and cooperation between actors.
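
A minimal sketch of a two-mode analysis of this kind, using networkx. The toy users and projects are invented, and the paper's actual TeraGrid dataset and measures may differ; this only illustrates building an affiliation network, projecting it onto users, and measuring the core.

```python
# Sketch of an affiliation (two-mode) network analysis like the one
# described above. Toy data; the paper uses real TeraGrid user/project data.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
users = ["u1", "u2", "u3", "u4"]
projects = ["pA", "pB"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(projects, bipartite=1)
B.add_edges_from([("u1", "pA"), ("u2", "pA"), ("u2", "pB"), ("u3", "pB")])
# u4 is an isolate: a user outside the connected core, as in the paper

# Project the two-mode network onto users: users linked by shared projects.
user_net = bipartite.projected_graph(B, users)

# Restrict to the largest connected piece (the "core") and measure it.
core_nodes = max(nx.connected_components(user_net), key=len)
core = user_net.subgraph(core_nodes)
print("core density:", nx.density(core))
print("degree centrality:", nx.degree_centrality(core))
```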


SIGUCCS: User Services Conference | 2005

PubsOnline: open source bibliography database

Scott A. Myron; Richard Knepper; Matthew R. Link; Craig A. Stewart

Universities and colleges, departments within them, and individual researchers often desire the ability to provide online listings, via the Web, of citations to publications and other forms of information dissemination. Cataloging citations to publications or other forms of information dissemination by a particular organization facilitates access to the information, its use, and its citation in subsequent publications. Listing, searching, and indexing of citations is further improved when citations can be searched by additional keys, such as grant, university resource, or research lab. This paper describes PubsOnline, an open source tool for management and presentation of databases of citations via the Web. Citations with bibliographic information are kept in the database and associated with attributes that are grouped by category and usable as search keys. Citations may optionally be linked to files containing an entire article. PubsOnline was developed with PHP and MySQL, and may be downloaded from http://pubsonline.indiana.edu/.
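
PubsOnline itself is PHP and MySQL; purely to illustrate the data model the abstract describes (citations joined to categorized attributes usable as search keys), here is a toy schema in Python's sqlite3. All table, column, and sample values are invented.

```python
# Toy version of the data model described above: citations associated with
# categorized attributes usable as search keys. PubsOnline itself is
# PHP/MySQL; this sqlite3 schema and its names are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE citation (id INTEGER PRIMARY KEY, bibliographic TEXT,
                       fulltext_path TEXT);  -- optional link to full article
CREATE TABLE attribute (id INTEGER PRIMARY KEY, category TEXT, value TEXT);
CREATE TABLE citation_attribute (citation_id INTEGER, attribute_id INTEGER);
""")
db.execute("INSERT INTO citation VALUES (1, 'Myron et al., SIGUCCS 2005', NULL)")
db.execute("INSERT INTO attribute VALUES (1, 'grant', 'GRANT-0001')")  # invented
db.execute("INSERT INTO citation_attribute VALUES (1, 1)")

# Search citations by an attribute key, e.g. all publications under a grant.
rows = db.execute("""
    SELECT c.bibliographic FROM citation c
    JOIN citation_attribute ca ON ca.citation_id = c.id
    JOIN attribute a ON a.id = ca.attribute_id
    WHERE a.category = 'grant' AND a.value = 'GRANT-0001'
""").fetchall()
print(rows)
```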


International Conference on Cluster Computing | 2015

XCBC and XNIT - Tools for Cluster Implementation and Management in Research and Training

Jeremy Fischer; Eric Coulter; Richard Knepper; Charles Peck; Craig A. Stewart

The Extreme Science and Engineering Discovery Environment has created a suite of software designed to facilitate the local management of computer clusters for scientific research and the integration of such clusters with the US open research national cyberinfrastructure. This suite is distributed in two ways. One distribution, the XSEDE-compatible basic cluster (XCBC), is a Rocks roll that performs an “all at once, from scratch” installation of core components. The other, the XSEDE National Integration Toolkit (XNIT), allows specific tools to be downloaded and installed piecemeal, as appropriate, on existing clusters. In this paper, we describe the software included in XCBC and XNIT, and examine the use of XCBC installed on the LittleFe cluster design created by the Earlham College Cluster Computing Group as a teaching tool, showing the deployment of XCBC from Rocks. In addition, the commercial Limulus HPC200 Deskside Cluster is shown to be a viable off-the-shelf solution that can be adapted into an XSEDE-like cluster through the use of the XNIT repository. We demonstrate that both approaches to cluster management - using XCBC to build clusters from scratch and using XNIT to expand the capabilities of existing clusters - help administrators run clusters that are valuable locally while facilitating the integration and interoperability of campus clusters with national cyberinfrastructure. We also demonstrate that very economical clusters can be useful tools in education and research.

Categories and Subject Descriptors: Theory of computation - Parallel computing models; Computer systems organization - Grid computing; Computer systems organization - Special purpose systems.


Networking, Architecture and Storage | 2012

The Lustre File System and 100 Gigabit Wide Area Networking: An Example Case from SC11

Richard Knepper; Scott Michael; William Johnson; Robert Henschel; Matthew R. Link

As part of the SCinet Research Sandbox at the IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), Indiana University utilized a dedicated 100 Gbps wide area network (WAN) link spanning more than 3,500 km (2,175 mi) to demonstrate the capabilities of the Lustre high performance parallel file system in a high bandwidth, high latency WAN environment. This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, a series of benchmarks and tests were undertaken. These included low-level iperf network tests, Lustre networking tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system. All of the tests and benchmarks were run over the WAN link with a latency of 50.5 ms. In this article we describe the configuration and constraints of the demonstration and focus on the key findings regarding the networking layer for this extremely high bandwidth, high latency connection. Of particular interest are the challenges presented by link aggregation for a relatively small number of high bandwidth connections, and the specifics of virtual local area network routing for 100 Gbps routing elements.
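
The link-aggregation challenge mentioned here (few, very fast flows) can be seen with a short simulation: when flows are hashed onto member links, a small flow count leaves some links idle. The link counts, speeds, and the assumption that each flow can saturate a member link are illustrative, not the SC11 configuration.

```python
# Illustration of the link-aggregation issue noted above: hashing a small
# number of high-bandwidth flows onto aggregated links strands capacity.
# Link count/speed below are illustrative, not the actual SC11 setup.
import random

LINKS = 10                # e.g. 10 aggregated member links
LINK_GBPS = 10            # each member link's capacity
FLOWS = 8                 # a "relatively small number" of fat flows

def achieved_gbps(trials=10_000):
    total = 0.0
    for _ in range(trials):
        # Typical LAG hashing pins each flow to one member link.
        used_links = {random.randrange(LINKS) for _ in range(FLOWS)}
        total += len(used_links) * LINK_GBPS
    return total / trials

print(f"mean achieved: {achieved_gbps():.1f} of {LINKS * LINK_GBPS} Gbps")
# With 8 flows on 10 links, expect ~57 of 100 Gbps: hash collisions leave
# links idle, so flow count and hashing need care at these speeds.
```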


International Conference on e-Science | 2016

Campus Compute Co-operative (CCC): A service-oriented cloud federation

Andrew S. Grimshaw; Anindya Prodhan; Alexander Thomas; Craig A. Stewart; Richard Knepper

Universities struggle to provide both the quantity and diversity of compute resources that their researchers need, when they need them. Purchasing resources to meet peak demand for all resource types is cost-prohibitive for all but a few institutions. Renting capacity on commercial clouds is seen as an alternative to owning; commercial clouds, though, expect to be paid. The Campus Compute Cooperative (CCC) offers an alternative to purchasing capacity from commercial providers, delivering increased value to member institutions at reduced cost. Member institutions trade their resources with one another both to meet local peak demand and to provide access to resource types not available on the local campus but available elsewhere. Participating institutions have dual roles: first as consumers of resources, when their researchers use CCC machines, and second as producers of resources, when CCC users from other institutions use their resources. To avoid a tragedy of the commons in which everyone only wants to consume, resource providers receive credit when their resources are used by others. The consumer is charged based on the quality of service (high, medium, low) and the particulars of the resource provided (speed, interconnection network, memory, etc.). Account balances are cleared monthly. This paper describes solutions to both the technical and socio-political challenges of federating university resources, along with early results from the CCC. Technical issues include the security model, accounting, job specification/management, and user interfaces. Socio-political issues include institutional risk management, how to manage market forces and incentives to avoid sub-optimal outcomes, and budget predictability.
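
A toy sketch of the charging model described in the abstract: consumers are charged by quality of service and resource particulars, providers are credited the same amount, and balances clear monthly. All multipliers, rates, and institution names are invented.

```python
# Toy ledger for the trading model described above: consumers are charged
# by quality of service and resource particulars; providers are credited.
# All multipliers, rates, and names are invented for illustration.
QOS_MULTIPLIER = {"high": 2.0, "medium": 1.0, "low": 0.5}

def charge(cpu_hours: float, qos: str, resource_rate: float) -> float:
    """Credits charged for a job: hours x resource rate x QoS multiplier."""
    return cpu_hours * resource_rate * QOS_MULTIPLIER[qos]

balances = {"uni_A": 0.0, "uni_B": 0.0}

def settle(consumer: str, provider: str, cpu_hours: float, qos: str,
           resource_rate: float) -> None:
    """Debit the consumer and credit the provider by the same amount."""
    cost = charge(cpu_hours, qos, resource_rate)
    balances[consumer] -= cost
    balances[provider] += cost

settle("uni_A", "uni_B", cpu_hours=1000, qos="high", resource_rate=0.1)
print(balances)   # balances would be cleared (paid out) monthly
```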


PLOS ONE | 2016

Comparing the Consumption of CPU Hours with Scientific Output for the Extreme Science and Engineering Discovery Environment (XSEDE).

Richard Knepper; Katy Börner

This paper presents the results of a study that compares resource usage with publication output using data about the consumption of CPU cycles from the Extreme Science and Engineering Discovery Environment (XSEDE) and resulting scientific publications for 2,691 institutions/teams. Specifically, the datasets comprise a total of 5,374,032,696 central processing unit (CPU) hours run in XSEDE during July 1, 2011 to August 18, 2015 and 2,882 publications that cite the XSEDE resource. Three types of studies were conducted: a geospatial analysis of XSEDE providers and consumers, co-authorship network analysis of XSEDE publications, and bi-modal network analysis of how XSEDE resources are used by different research fields. Resulting visualizations show that a diverse set of consumers make use of XSEDE resources, that users of XSEDE publish together frequently, and that the users of XSEDE with the highest resource usage tend to be “traditional” high-performance computing (HPC) community members from astronomy, atmospheric science, physics, chemistry, and biology.
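
As a back-of-the-envelope check using only the figures quoted in the abstract, the sketch below derives average consumption per citing publication and per institution/team. These are crude ratios, not one of the paper's geospatial or network analyses.

```python
# Back-of-the-envelope ratios from the figures quoted above; crude averages,
# not one of the paper's geospatial or network analyses.
cpu_hours = 5_374_032_696      # XSEDE CPU hours, 2011-07-01 to 2015-08-18
publications = 2_882           # publications citing the XSEDE resource
institutions = 2_691           # institutions/teams in the study

print(f"~{cpu_hours / publications:,.0f} CPU hours per citing publication")
print(f"~{cpu_hours / institutions:,.0f} CPU hours per institution/team")
# ~1.86M CPU hours per publication; heavy users skew this average, which fits
# the finding that traditional HPC fields dominate resource usage.
```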


international conference on conceptual structures | 2015

Big Data on Ice: The Forward Observer System for In-Flight Synthetic Aperture Radar Processing

Richard Knepper; Matthew Standish; Matthew R. Link

We introduce the Forward Observer system, which is designed to provide data assurance in field data acquisition while receiving significant amounts (several terabytes per flight) of Synthetic Aperture Radar data during flights over the polar regions, an environment that imposes unique requirements on data collection and processing systems. Given polar field conditions and the difficulty and expense of collecting data, data retention is absolutely critical. Our system provides a storage and analysis cluster with software that connects to field instruments via standard protocols, replicates data to multiple stores automatically as soon as it is written, and pre-processes data so that initial visualizations are available immediately after collection, where they can provide feedback to researchers in the aircraft during the flight.
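
A rough sketch of the replicate-on-write idea described above, not the actual Forward Observer implementation: newly arrived files are copied to multiple stores and each copy is verified with a checksum. Paths are placeholders, and the polling loop stands in for the real system's protocol-driven ingest.

```python
# Rough sketch of replicate-on-write as described above -- NOT the actual
# Forward Observer implementation. Paths and the polling loop are placeholders.
import hashlib
import shutil
import time
from pathlib import Path

INCOMING = Path("/data/incoming")                  # instrument writes here
REPLICAS = [Path("/data/replica1"), Path("/data/replica2")]

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

seen = set()
while True:
    for src in INCOMING.glob("*.dat"):
        if src in seen:
            continue
        digest = sha256(src)
        for store in REPLICAS:
            dst = store / src.name
            shutil.copy2(src, dst)
            assert sha256(dst) == digest   # verify: data retention is critical
        seen.add(src)                      # replicated as soon as it appears
    time.sleep(1)                          # simple poll stands in for the real
                                           # system's event-driven handling
```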


Extreme Science and Engineering Discovery Environment | 2014

XSEDE Campus Bridging Pilot Case Study

Barbara Hallock; Richard Knepper; James Ferguson; Craig A. Stewart

The major goals of the XSEDE Campus Bridging pilot were to simplify the transition between resources local to the researcher, those at the national scale, and the resources intermediate between them; to put in place software and other resources that facilitate diverse researcher workflows; and to begin resolving programming and usability issues with the software selected for these purposes. In this paper, we situate the pilot within the landscape of existing research cyberinfrastructure (and the context of campus bridging) and examine the process by which the pilot program was completed and evaluated. We then present a status update for the selected software packages and explore further advancements to be made in this realm.

Collaboration


Dive into Richard Knepper's collaborations.

Top Co-Authors

Craig A. Stewart, Indiana University Bloomington
Matthew R. Link, Indiana University Bloomington
David Y. Hancock, Indiana University Bloomington