Publication


Featured research published by Matthew R. Link.


SIGUCCS: User Services Conference | 2010

What is cyberinfrastructure

Craig A. Stewart; Stephen C. Simms; Beth Plale; Matthew R. Link; David Y. Hancock; Geoffrey C. Fox

Cyberinfrastructure is a word commonly used but lacking a single, precise definition. One recognizes intuitively the analogy with infrastructure, and the use of cyber to refer to thinking or computing -- but what exactly is cyberinfrastructure as opposed to information technology infrastructure? Indiana University has developed one of the more widely cited definitions of cyberinfrastructure: Cyberinfrastructure consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high performance networks to improve research productivity and enable breakthroughs not otherwise possible. A second definition, more inclusive of scholarship generally and educational activities, has also been published and is useful in describing cyberinfrastructure: Cyberinfrastructure consists of computational systems, data and information management, advanced instruments, visualization environments, and people, all linked together by software and advanced networks to improve scholarly productivity and enable knowledge breakthroughs and discoveries not otherwise possible. In this paper, we describe the origin of the term cyberinfrastructure based on the history of the root word infrastructure, discuss several terms related to cyberinfrastructure, and provide several examples of cyberinfrastructure.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Demonstrating lustre over a 100Gbps wide area network of 3,500km

Robert Henschel; Stephen C. Simms; David Y. Hancock; Scott Michael; Tom Johnson; Nathan Heald; Thomas William; Donald K. Berry; Matthew Allen; Richard Knepper; Matt Davy; Matthew R. Link; Craig A. Stewart

As part of the SCinet Research Sandbox at the Supercomputing 2011 conference, Indiana University (IU) demonstrated use of the Lustre high performance parallel file system over a dedicated 100 Gbps wide area network (WAN) spanning more than 3,500 km (2,175 mi). This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, low level iperf network tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system were run over a link with a latency of 50.5 ms. In this article we describe the configuration and constraints of the demonstration and outline key findings.
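
A back-of-the-envelope calculation, not part of the demonstration itself, shows why a link like this cannot be kept full without tuning: the quoted rate and latency imply a very large bandwidth-delay product. Treating the 50.5 ms figure as the round-trip time is an assumption made only for this sketch.

# Back-of-the-envelope bandwidth-delay product (BDP) for a 100 Gbps link
# with 50.5 ms of latency, treated here (an assumption) as the round-trip time.
link_rate_bps = 100e9        # 100 Gbps
rtt_s = 50.5e-3              # 50.5 ms

bdp_bytes = link_rate_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")
# Roughly 630 MB must be in flight at any instant to keep the link full,
# far beyond default TCP window sizes, which is why WAN file-system
# demonstrations like this one require careful tuning.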


Conference on High Performance Computing (Supercomputing) | 2006

All in a day's work: advancing data-intensive research with the data capacitor

Stephen C. Simms; Matt Davy; Bret Hammond; Matthew R. Link; Craig A. Stewart; Randall Bramley; Beth Plale; Dennis Gannon; Mu-Hyun Baik; Scott Teige; John C. Huffman; Rick McMullen; Doug Balog; Greg Pike

Indiana University provides powerful compute, storage, and network resources to a diverse local and national research community every day. IU's facilities have been used to support data-intensive applications ranging from digital humanities to computational biology. For this year's bandwidth challenge, several IU researchers will conduct experiments from the exhibit floor utilizing the resources that University Information Technology Services currently provides. Using IU's newly constructed 535 TB Data Capacitor and an additional component installed on the exhibit floor, we will use Lustre across the wide area network to simultaneously facilitate dynamic weather modeling, protein analysis, instrument data capture, and the production, storage, and analysis of simulation data.


International Workshop on Data Intensive Distributed Computing | 2012

A study of lustre networking over a 100 gigabit wide area network with 50 milliseconds of latency

Scott Michael; Liang Zhen; Robert Henschel; Stephen C. Simms; Eric Barton; Matthew R. Link

As part of the SCinet Research Sandbox at the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), Indiana University utilized a dedicated 100 Gbps wide area network (WAN) link spanning more than 3,500 km (2,175 mi) to demonstrate the capabilities of the Lustre high performance parallel file system in a high bandwidth, high latency WAN environment. This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, a series of benchmarks and tests were undertaken. These included low level iperf network tests, Lustre networking (LNET) tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system. All of the benchmarks were run over the WAN link with a latency of 50.5 ms. In this article, we describe the configuration and constraints of the demonstration, and focus on the key findings made regarding the Lustre networking layer for this extremely high bandwidth, high latency connection. Of particular interest is the relationship between the peer_credits and max_rpcs_in_flight settings when considering LNET performance.
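
To make the role of settings such as peer_credits and max_rpcs_in_flight concrete, the following sketch (not taken from the paper) estimates how many bulk requests must be outstanding to keep the link busy; the 1 MB request size and the round-trip interpretation of the 50.5 ms figure are assumptions used only for illustration.

# Hypothetical sketch: how many bulk requests must be in flight to keep a
# 100 Gbps, 50.5 ms link busy?  The 1 MB request size is an assumption.
link_rate_bps = 100e9
rtt_s = 50.5e-3
request_bytes = 1 << 20      # assumed 1 MB bulk I/O request

bdp_bytes = link_rate_bps * rtt_s / 8
requests_in_flight = bdp_bytes / request_bytes
print(f"~{requests_in_flight:.0f} concurrent requests needed to fill the pipe")
# If the credit/RPC limits cap concurrency well below this number, the link
# idles for most of each 50.5 ms round trip; this is the intuition behind
# tuning peer_credits and max_rpcs_in_flight together.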


TeraGrid Conference | 2010

A compelling case for a centralized filesystem on the TeraGrid: enhancing an astrophysical workflow with the data capacitor WAN as a test case

Scott Michael; Stephen C. Simms; W. B. Breckenridge Iii; Roger Smith; Matthew R. Link

In this article we explore the utility of a centralized filesystem provided by the TeraGrid to both TeraGrid and non-TeraGrid sites. We highlight several common cases in which such a filesystem would be useful in obtaining scientific insight. We present results from a test case using Indiana University's Data Capacitor over the wide area network as a central filesystem for simulation data generated at multiple TeraGrid sites and analyzed at Mississippi State University. Statistical analysis of the I/O patterns and rates, via detailed trace records generated with VampirTrace, is provided for both the Data Capacitor and a local Lustre filesystem. The benefits of a centralized filesystem and potential hurdles in adopting such a system for both TeraGrid and non-TeraGrid sites are discussed.


SIGUCCS: User Services Conference | 2005

PubsOnline: open source bibliography database

Scott A. Myron; Richard Knepper; Matthew R. Link; Craig A. Stewart

Universities and colleges, departments within universities and colleges, and individual researchers often desire the ability to provide online listings, via the Web, of citations to publications and other forms of information dissemination. Cataloging citations to publications or other forms of information dissemination by a particular organization facilitates access to the information, its use, and citation in subsequent publications. Listing, searching, and indexing of citations is further improved when citations can be searched by additional key information, such as by grant, university resource, or research lab. This paper describes PubsOnline, an open source tool for management and presentation of databases of citations via the Web. Citations with bibliographic information are kept in the database and associated with attributes that are grouped by category and usable as search keys. Citations may optionally be linked to files containing an entire article. PubsOnline was developed with PHP and MySQL, and may be downloaded from http://pubsonline.indiana.edu/.
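
As an illustration of the data model the abstract describes (citations, category-grouped attributes usable as search keys, and optional links to full-text files), the following is a minimal sketch in Python with SQLite. The table and column names are hypothetical and are not taken from PubsOnline itself, which is implemented in PHP against MySQL.

# Hypothetical schema sketch; names are illustrative, not PubsOnline's own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE citation (
    id INTEGER PRIMARY KEY,
    entry TEXT NOT NULL,          -- the formatted bibliographic citation
    fulltext_path TEXT            -- optional link to a file with the article
);
CREATE TABLE attribute (
    id INTEGER PRIMARY KEY,
    category TEXT NOT NULL,       -- e.g. grant, university resource, lab
    value TEXT NOT NULL
);
CREATE TABLE citation_attribute ( -- attributes act as search keys
    citation_id INTEGER REFERENCES citation(id),
    attribute_id INTEGER REFERENCES attribute(id)
);
""")

# Search: find all citations tagged with a given attribute.
def search(category, value):
    return conn.execute("""
        SELECT c.entry FROM citation c
        JOIN citation_attribute ca ON ca.citation_id = c.id
        JOIN attribute a ON a.id = ca.attribute_id
        WHERE a.category = ? AND a.value = ?""", (category, value)).fetchall()

print(search("university resource", "Data Capacitor"))   # [] until populated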


Networking, Architecture, and Storage | 2012

The Lustre File System and 100 Gigabit Wide Area Networking: An Example Case from SC11

Richard Knepper; Scott Michael; William Johnson; Robert Henschel; Matthew R. Link

As part of the SCinet Research Sandbox at the IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), Indiana University utilized a dedicated 100 Gbps wide area network (WAN) link spanning more than 3,500 km (2,175 mi) to demonstrate the capabilities of the Lustre high performance parallel file system in a high bandwidth, high latency WAN environment. This demonstration functioned as a proof of concept and provided an opportunity to study Lustre's performance over a 100 Gbps WAN. To characterize the performance of the network and file system, a series of benchmarks and tests were undertaken. These included low level iperf network tests, Lustre networking tests, file system tests with the IOR benchmark, and a suite of real-world applications reading and writing to the file system. All of the tests and benchmarks were run over the WAN link with a latency of 50.5 ms. In this article we describe the configuration and constraints of the demonstration and focus on the key findings regarding the networking layer for this extremely high bandwidth and high latency connection. Of particular interest are the challenges presented by link aggregation for a relatively small number of high bandwidth connections, and the specifics of virtual local area network routing for 100 Gbps routing elements.
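
The link aggregation point can be pictured with a small, purely hypothetical simulation: hash-based aggregation pins each flow to one member link, so a handful of bulk transfers can land unevenly and leave part of the aggregate capacity idle. The member count, flow count, and CRC32 stand-in hash below are assumptions, not details from the paper.

# Hypothetical illustration of hash-based link aggregation with few flows.
# Member count, flow count, and the CRC32 stand-in hash are all assumptions.
import zlib
from collections import Counter

lag_members = 4              # assumed number of aggregated member links
flows = [(f"10.0.0.{i}", f"10.0.1.{i}", 50000 + i) for i in range(6)]

def member_for(flow):
    # Stand-in for a switch's flow hash: CRC32 of the flow identifier.
    return zlib.crc32(repr(flow).encode()) % lag_members

assignment = Counter(member_for(flow) for flow in flows)
for m in range(lag_members):
    print(f"member link {m}: {assignment.get(m, 0)} flow(s)")
# With only six flows, some member links inevitably carry two or more flows
# while others may carry none, so the aggregate never reaches its nominal
# capacity unless more flows are used or traffic is balanced another way.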


SIGUCCS: User Services Conference | 2016

A PetaFLOPS Supercomputer as a Campus Resource: Innovation, Impact, and Models for Locally-Owned High Performance Computing at Research Colleges and Universities

Abhinav Thota; Ben Fulton; Le Mai Weakley; Robert Henschel; David Y. Hancock; Matthew Allen; Jenett Tillotson; Matthew R. Link; Craig A. Stewart

In 1997, Indiana University (IU) began a purposeful and steady drive to expand the use of supercomputers and what we now call cyberinfrastructure. In 2001, IU implemented the first 1 TFLOPS supercomputer owned by and operated for a single US university. In 2013, IU made an analogous investment and achievement at the 1 PFLOPS level: Big Red II, a Cray XE6/XK7, was the first supercomputer capable of 1 PFLOPS (theoretical) performance that was a dedicated university resource. IU's high performance computing (HPC) resources have fostered innovation in disciplines from biology to chemistry to medicine. Currently, 185 disciplines and sub-disciplines are represented on Big Red II with a wide variety of usage needs. Quantitative data suggest that investment in this supercomputer has been a good value to IU in terms of academic achievement and federal grant income. Here we will discuss how investment in Big Red II has benefited IU, and argue that locally-owned computational resources (scaled appropriately to needs and budgets) may be of benefit to many colleges and universities. We will also discuss software tools under development that will aid others in quantifying the benefit of investment in high performance computing to their campuses.


International Conference on Conceptual Structures | 2015

Big Data on Ice: The Forward Observer System for In-Flight Synthetic Aperture Radar Processing

Richard Knepper; Matthew Standish; Matthew R. Link

We introduce the Forward Observer system, which is designed to provide data assurance in field data acquisition while receiving significant amounts (several terabytes per flight) of Synthetic Aperture Radar data during flights over the polar regions, which impose unique requirements on data collection and processing systems. Under polar conditions in the field, and given the difficulty and expense of collecting data, data retention is absolutely critical. Our system provides a storage and analysis cluster with software that connects to field instruments via standard protocols, replicates data to multiple stores automatically as soon as it is written, and provides pre-processing of data so that initial visualizations are available immediately after collection, where they can provide feedback to researchers in the aircraft during the flight.
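
The replicate-on-arrival behavior described above can be sketched as follows; the directory layout, polling approach, and file suffix are illustrative assumptions and are not taken from the Forward Observer implementation.

# Hypothetical sketch of replicate-as-soon-as-written logic; paths, the
# polling interval, and the ".dat" suffix are illustrative assumptions.
import shutil
import time
from pathlib import Path

INCOMING = Path("/data/incoming")                          # instruments write here
REPLICAS = [Path("/data/store_a"), Path("/data/store_b")]  # independent stores

def replicate_new_files(seen):
    for src in INCOMING.glob("*.dat"):
        if src.name in seen:
            continue
        for store in REPLICAS:
            store.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, store / src.name)            # copy to every store
        seen.add(src.name)
        # A quick-look pre-processing / visualization hook would go here.

if __name__ == "__main__":
    seen = set()
    while True:
        replicate_new_files(seen)
        time.sleep(5)                                      # simple polling loop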


SIGUCCS: User Services Conference | 2006

Research data storage available to researchers throughout the U.S. via the TeraGrid

D. Scott McCaulay; Matthew R. Link

Many faculty members at small to mid-size colleges and universities do important, high quality research that requires significant storage. In many cases, such storage requirements are difficult to meet with local resources; even when local resources suffice, data integrity is best ensured by maintenance of a remote copy. Via the nationally-funded TeraGrid, Indiana University offers researchers at colleges and universities throughout the US the opportunity to easily store up to 1 TB of data within the IU data storage system. The TeraGrid is the National Science Foundation's flagship effort to create a national research cyberinfrastructure, and one key goal of the TeraGrid is to provide facilities that improve the productivity of the US research community generally. Providing facilities that improve the capacity and reliability of research data storage is an important part of this. This paper will describe the process for storing data at IU via the TeraGrid, and will in general discuss how this capability is part of a larger TeraGrid-wide data storage strategy.

Collaboration


Dive into Matthew R. Link's collaborations.

Top Co-Authors

Craig A. Stewart, Indiana University Bloomington
David Y. Hancock, Indiana University Bloomington
Stephen C. Simms, Indiana University Bloomington
Scott Michael, Indiana University Bloomington
Ben Fulton, Indiana University Bloomington
George Turner, Indiana University Bloomington