Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Scott Teige is active.

Publication


Featured research published by Scott Teige.


Proceedings of the 2007 workshop on Service-oriented computing performance: aspects, issues, and approaches | 2007

Empowering distributed workflow with the data capacitor: maximizing lustre performance across the wide area network

Stephen C. Simms; Gregory G. Pike; Scott Teige; Bret Hammond; Yu Ma; Larry L. Simms; C. Westneat; Douglas A. Balog

The Indiana University Data Capacitor is a 535 TB distributed parallel filesystem constructed for short- to mid-term storage of large research data sets. Spanning multiple, geographically distributed compute, storage, and visualization resources and showing unprecedented performance across the wide area network, the Data Capacitor's Lustre filesystem can be used as a powerful tool to accommodate loosely coupled, service-oriented computing. In this paper we demonstrate single file/single client write performance from Oak Ridge National Laboratory to Indiana University in excess of 750 MB/s. We evaluate client parameters that will allow widely distributed services to achieve data transfer rates closely matching those of local services. Finally, we outline the tuning strategy used to maximize performance, and present the results of this tuning.
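The tuning strategy described above centers on adjusting Lustre client parameters so that wide-area clients keep enough data in flight to hide the network round-trip time. The sketch below shows, under stated assumptions, how such client tunables can be inspected and adjusted with the standard lctl utility; the parameter names (osc.*.max_rpcs_in_flight, osc.*.max_dirty_mb) are ordinary Lustre client settings, while the commented example values are illustrative and not the values reported in the paper.

    # Hedged sketch: inspect (and optionally raise) Lustre client tunables that
    # matter for high-latency wide-area transfers. Parameter names are standard
    # Lustre client settings; the example values are illustrative only.
    import subprocess

    def get_param(pattern):
        """Return `lctl get_param` output lines for a parameter pattern."""
        result = subprocess.run(["lctl", "get_param", pattern],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip().splitlines()

    def set_param(name, value):
        """Set a Lustre client tunable (requires root privileges)."""
        subprocess.run(["lctl", "set_param", f"{name}={value}"], check=True)

    if __name__ == "__main__":
        # Show current per-target RPC concurrency and dirty-page cache limits.
        for line in get_param("osc.*.max_rpcs_in_flight") + get_param("osc.*.max_dirty_mb"):
            print(line)
        # On a high-latency WAN link, more RPCs in flight and a larger dirty-page
        # cache help keep the pipe full; the values below are examples only.
        # set_param("osc.*.max_rpcs_in_flight", 32)
        # set_param("osc.*.max_dirty_mb", 256)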


Conference on High Performance Computing (Supercomputing) | 2006

All in a day's work: advancing data-intensive research with the data capacitor

Stephen C. Simms; Matt Davy; Bret Hammond; Matthew R. Link; Craig A. Stewart; Randall Bramley; Beth Plale; Dennis Gannon; Mu-Hyun Baik; Scott Teige; John C. Huffman; Rick McMullen; Doug Balog; Greg Pike

Indiana University provides powerful compute, storage, and network resources to a diverse local and national research community every day. IU's facilities have been used to support data-intensive applications ranging from digital humanities to computational biology. For this year's bandwidth challenge, several IU researchers will conduct experiments from the exhibit floor utilizing the resources that University Information Technology Services currently provides. Using IU's newly constructed 535 TB Data Capacitor and an additional component installed on the exhibit floor, we will use Lustre across the wide area network to simultaneously facilitate dynamic weather modeling, protein analysis, instrument data capture, and the production, storage, and analysis of simulation data.


Journal of Physics: Conference Series | 2014

OASIS: a data and software distribution service for Open Science Grid

B Bockelman; J Caballero Bejar; J De Stefano; John Hover; R Quick; Scott Teige

The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.
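OASIS, as deployed by OSG, distributes published content through the CernVM File System (CVMFS), so client sites see the software as a read-only filesystem tree. The sketch below is a minimal illustration of that publish-once, read-everywhere model from a worker node's point of view; the mount point, VO name, package name, and directory layout are assumptions for illustration, not the authoritative OASIS layout.

    # Illustrative sketch of the client side of a publish-once / read-everywhere
    # model like OASIS: a worker-node job locates a VO's software area under a
    # read-only distributed filesystem mount. Paths below are assumptions.
    import os

    OASIS_ROOT = "/cvmfs/oasis.opensciencegrid.org"   # typical OASIS mount point

    def vo_software_dir(vo, package, version):
        """Build the expected path of one published package version."""
        return os.path.join(OASIS_ROOT, vo, package, version)

    def is_available(path):
        """A client only ever checks and reads; publishing happens centrally."""
        return os.path.isdir(path) and os.access(path, os.R_OK)

    if __name__ == "__main__":
        # Hypothetical VO and package names, for illustration only.
        path = vo_software_dir("myvo", "analysis-toolkit", "v1.2.3")
        if is_available(path):
            print(f"using software from {path}")
        else:
            print("software not yet distributed to this site")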


Nuclear Physics B - Proceedings Supplements | 1991

Preliminary partial-wave analysis of the K+K0S π− system produced in 8 GeV/c K−p interactions

E. King; Z. Bar-Yam; S. Blessing; A. Boehnlein; D. Boehnlein; B. E. Bonner; S. U. Chung; J.M. Clement; R.R. Crittenden; J. Dowd; A. Dzierba; R. Fernow; J. H. Goldman; V. Hagopian; W. Kern; H. Kirk; N. Krishna; T. Marshall; G. S. Mutchler; S. Protopopescu; J.B. Roberts; H. Rudnicka; P. Rulon; Scott Teige; Dennis P. Weygand; H.J. Willutzki; D. Zieminska

We have performed a partial-wave analysis of the K+K0S π− system produced in the reaction K−p → K+K0S π− Λ/Σ at 8 GeV/c. We present results of a preliminary analysis of approximately 2000 events in the KKπ mass range 1.24–1.64 GeV. In the 1.28 GeV mass region, we observe a small JPG = 0−+ contribution but little indication of a 1++ wave. We observe a large enhancement of events at the threshold of KK* production. Above this threshold the JPG = 1+−, 1++ and 0−+ waves all contribute, and we see evidence for a 1+− resonance below the KK* threshold.
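For readers outside the field, a partial-wave analysis of this kind fits the observed angular distribution as a coherent sum of amplitudes with definite JPG quantum numbers. A generic form of the fitted intensity is sketched below in standard notation; it is an illustration of the method, not the collaboration's specific parameterization.

    % Generic partial-wave intensity: production amplitudes V_{\alpha k} for each
    % wave \alpha, decay amplitudes A_\alpha(\Omega), summed coherently over waves
    % and incoherently over the rank index k.
    I(\Omega) \;=\; \sum_{k} \Bigl|\, \sum_{\alpha} V_{\alpha k}\, A_{\alpha}(\Omega) \Bigr|^{2}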


22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016 | 2017

One network metric datastore to track them all: The OSG network metric service

Robert Quick; Marián Babik; Edgar Fajardo; Kyle Gross; Soichi Hayashi; Marina Krenz; Thomas Lee; Shawn McKee; Christopher Pipes; Scott Teige

The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012, OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing, and providing network metrics to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. In September of 2015, this service was deployed into the OSG production environment. We will report on the creation, implementation, testing, and deployment of the OSG Networking Service. Starting from organizing the deployment of perfSONAR toolkits within OSG and its partners, to the challenges of orchestrating regular testing between sites, to reliably gathering the resulting network metrics and making them available for users, virtual organizations, and higher-level services, all aspects of implementation will be reviewed. In particular, several higher-level services were developed to bring the OSG network service to its full potential. These include a web-based mesh configuration system, which allows central scheduling and management of all the network tests performed by the instances; a set of probes to continually gather metrics from the remote instances and publish them to different sources; a central network datastore (esmond), which provides interfaces to access the network monitoring information in close to real time and historically (up to a year), giving the state of the tests; and a perfSONAR infrastructure monitoring system, ensuring the current perfSONAR instances are correctly configured and operating as intended. We will also describe the challenges we encountered in ongoing operations of the network service and how we have evolved our procedures to address those challenges. Finally, we will describe our plans for future extensions and improvements to the service.
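The central datastore mentioned above (esmond) exposes the collected perfSONAR metrics over a REST interface. The sketch below illustrates, under stated assumptions, how a client might list measurement metadata for one source/destination pair; the hostname is a placeholder, and the exact query parameters and response fields should be checked against the esmond documentation for the deployed version.

    # Hedged sketch of pulling network metrics from a central perfSONAR/esmond
    # measurement archive over its REST interface. Host and parameters are
    # assumptions for illustration.
    import json
    import urllib.parse
    import urllib.request

    ARCHIVE = "https://esmond.example.org/esmond/perfsonar/archive/"  # placeholder host

    def query_measurements(source, destination, event_type="throughput"):
        """List measurement metadata records for one source/destination pair."""
        params = urllib.parse.urlencode({
            "source": source,
            "destination": destination,
            "event-type": event_type,
            "format": "json",
        })
        with urllib.request.urlopen(f"{ARCHIVE}?{params}") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # Placeholder perfSONAR host names.
        for meta in query_measurements("ps1.site-a.example", "ps2.site-b.example"):
            print(meta.get("metadata-key"), meta.get("source"), "->", meta.get("destination"))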


Proceedings of International Symposium on Grids and Clouds 2015 — PoS(ISGC2015) | 2016

Building a Chemical-Protein Interactome on the Open Science Grid

Robert Quick; Scott Teige; Soichi Hayashi; David Yu; Samy O. Meroueh; Mats Rynge; Bo Wang

The Structural Protein-Ligand Interactome (SPLINTER) project predicts the interaction of thousands of small molecules with thousands of proteins. These interactions are predicted using the three-dimensional structure of the bound complex between each pair of protein and compound that is predicted by molecular docking. These docking runs consist of millions of individual short jobs, each lasting only minutes. However, computing resources to execute these jobs (which cumulatively take tens of millions of CPU hours) are not readily or easily available in a cost-effective manner. By looking to national cyberinfrastructure resources, and specifically the Open Science Grid (OSG), we have been able to harness CPU power for researchers at the Indiana University School of Medicine to provide a quick and efficient solution to their unmet computing needs. Using the job submission infrastructure provided by the OSG, the docking data and simulation executable were sent to more than 100 universities and research centers worldwide. These opportunistic resources provided millions of CPU hours in a matter of days, greatly reducing docking simulation time for the research group. The overall impact of this approach allows researchers to identify small molecule candidates for individual proteins, or new protein targets for existing FDA-approved drugs and biologically active compounds.
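Work of this shape, millions of short independent tasks, is usually expressed on the OSG as a single HTCondor submit description that fans out one job per input. The sketch below generates such a description; the executable name, file layout, and resource requests are hypothetical placeholders, not the SPLINTER project's actual configuration.

    # Hedged sketch: package many short, independent docking tasks as one
    # HTCondor submit description. File names and resource requests are
    # illustrative assumptions.

    SUBMIT_TEMPLATE = """\
    executable              = run_docking.sh
    arguments               = ligand_$(Process).mol2 receptor.pdb
    transfer_input_files    = ligand_$(Process).mol2, receptor.pdb
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output                  = out/dock_$(Process).out
    error                   = err/dock_$(Process).err
    log                     = dock.log
    request_cpus            = 1
    request_memory          = 1 GB
    queue {count}
    """

    def write_submit_file(path, count):
        """Write a submit description that fans out `count` short docking jobs."""
        with open(path, "w") as f:
            f.write(SUBMIT_TEMPLATE.format(count=count))

    if __name__ == "__main__":
        write_submit_file("dock.submit", count=100000)  # then: condor_submit dock.submit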


Journal of Physics: Conference Series | 2012

Open Science Grid (OSG) Ticket Synchronization: Keeping Your Home Field Advantage In A Distributed Environment

Kyle Gross; Soichi Hayashi; Scott Teige; Robert Quick

Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and to rapidly bring new ticket synchronization online. Synchronization can be triggered by different methods, including mail, a web-services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
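One way to read the GOC-TX design described above is as an adapter pattern: each ticketing system (GGUS, Remedy, RT, ServiceNow) is wrapped behind the same small web-service-backed interface, so synchronizing any pair of systems follows one code path. The sketch below is a hedged illustration of that idea; the class and method names are hypothetical and are not GOC-TX internals.

    # Illustrative adapter-pattern sketch for ticket synchronization.
    # Names are hypothetical, not GOC-TX internals.
    from dataclasses import dataclass

    @dataclass
    class TicketUpdate:
        ticket_id: str
        author: str
        body: str

    class TicketSystemAdapter:
        """Minimal interface each backend (e.g. GGUS, Remedy, RT) would implement."""
        def fetch_updates(self, since):
            raise NotImplementedError
        def push_update(self, update: TicketUpdate):
            raise NotImplementedError

    def synchronize(source: TicketSystemAdapter, target: TicketSystemAdapter, since):
        """Copy every new update on the source system onto the matching target ticket."""
        for update in source.fetch_updates(since):
            target.push_update(update)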


GLUEBALLS, HYBRIDS, AND EXOTIC HADRONS | 2008

Mass dependent fits of the partial wave analysis of the K+K̄0π− system

S. Blessing; R.R. Crittenden; A. Dzierba; T. Marshall; Scott Teige; D. Zieminska; A. Birman; S. U. Chung; D. C. Peaslee; R. Fernow; H. Kirk; S. Protopopescu; Dennis P. Weygand; H.J. Willutzki; A. Boehnlein; D. Boehnlein; J. H. Goldman; V. Hagopian; D. Reeves; Z. Bar-Yam; J. Dowd; W. Kern; E. King; H. Rudnicka; P. Rulon

We have performed a partial-wave analysis of the K+K0Sπ− system produced in the reaction π−p → K+K̄0π−n at GeV/c. We present the mass-dependent fits to the PWA results for the waves JPG(isobar) = 0−+(a0), 0−+(K*), 1++(a0) and 1++(K*).
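Mass-dependent fits of this kind typically describe each resonant wave with a relativistic Breit-Wigner amplitude whose mass and width are extracted from the fit. A generic form is sketched below as an illustration; it is not necessarily the exact parameterization used in this analysis.

    % Generic relativistic Breit-Wigner amplitude for a resonance of nominal mass
    % m_0 and width \Gamma_0, with a mass-dependent width \Gamma(m).
    \mathrm{BW}(m) \;=\; \frac{m_0\,\Gamma_0}{m_0^{2} - m^{2} - i\,m_0\,\Gamma(m)}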


Physical Review Letters | 1985

Spin and parity analysis of KK̄π system in the D and E/ι regions.

S. U. Chung; R. Fernow; H. Kirk; S. Protopopescu; Dennis P. Weygand; D. Boehnlein; J. H. Goldman; V. Hagopian; D. Reeves; R.R. Crittenden; A. Dzierba; T. Marshall; Scott Teige; D. Zieminska; Z. Bar-Yam; J. Dowd; W. Kern; H. Rudnicka


Physical Review Letters | 1988

Partial-wave analysis of the K+K̄0π− system.

Amnon Birman; S. U. Chung; D. C. Peaslee; R. Fernow; H. Kirk; S. D. Protopopescu; D. P. Weygand; H.J. Willutzki; A. Boehnlein; D. Boehnlein; J. H. Goldman; V. Hagopian; D. Reeves; S. Blessing; R.R. Crittenden; A. Dzierba; T. Marshall; Scott Teige; D. Zieminska; Z. Bar-Yam; J. Dowd; W. Kern; Edmund King; H. Rudnicka; P. Rulon

Collaboration


Dive into Scott Teige's collaborations.

Top Co-Authors

A. Dzierba
Indiana University Bloomington

R. Fernow
Brookhaven National Laboratory

R.R. Crittenden
Indiana University Bloomington

S. U. Chung
Brookhaven National Laboratory

J. H. Goldman
Florida State University

S. Blessing
Indiana University Bloomington

T. Marshall
Indiana University Bloomington

V. Hagopian
Florida State University