
Publications


Featured research published by Wayne Schroeder.


Journal of Physics: Conference Series | 2005

Grid portal architectures for scientific applications

Mary Thomas; J Burruss; L Cinquini; Geoffrey C. Fox; Dennis Gannon; L Gilbert; G. von Laszewski; Keith Jackson; D Middleton; Reagan Moore; Marlon E. Pierce; Beth Plale; Arcot Rajasekar; R Regno; E Roberts; D Schissel; A Seth; Wayne Schroeder

Computational scientists often develop large models and codes intended to be used by larger user communities or for repetitive tasks such as parametric studies. Lowering the barrier of entry for access to these codes is often a technical and sociological challenge. Portals help bridge the gap because they are well-known interfaces enabling access to a large variety of resources, services, applications, and tools for private, public, and commercial entities, while hiding the complexities of the underlying software systems from the user. This paper presents an overview of the current state of the art in grid portals, based on a component approach that utilizes portlet frameworks and the most recent Grid standards, including the Web Services Resource Framework, and a summary of current DOE portal efforts.


International Conference on e-Science | 2006

Production Storage Resource Broker Data Grids

Reagan Moore; Sheau Yen Chen; Wayne Schroeder; Arcot Rajasekar; Michael Wan; Arun Jagatheesan

International data grids are now being built that support joint management of shared collections. An emerging strategy is to build multiple independent data grids, each managed by the local institution. The data grids are then federated to enable controlled sharing of files. We examine the management issues associated with maintaining federations of production data grids, including management of access controls, coordinated sharing of name spaces, replication of data between data grids, and expansion of the data grid federation.


IEEE Conference on Mass Storage Systems and Technologies | 1999

Configuring and tuning archival storage systems

Reagan Moore; Joseph Lopez; Charles Lofton; Wayne Schroeder; George Kremenek; Michael K. Gleicher

Archival storage systems must operate under stringent requirements, providing 100% availability while guaranteeing that data will not be lost. In this paper we explore the multiple interconnected subsystems that must be tuned to simultaneously provide high data transfer rates, high transaction rates, and guaranteed meta-data backup. We examine how resources must be allocated to the subsystems to keep the archive operational, while simultaneously allocating resources to support the user I/O demands. Based on practical experience gained running one of the largest High Performance Storage Systems, we propose tuning guidelines that should be considered by any group that is seeking to improve the performance of an archival storage system.


International Conference on Intelligent Systems, Modelling and Simulation | 2010

Applying Rules as Policies for Large-Scale Data Sharing

Arcot Rajasekar; Reagan Moore; Mike Wan; Wayne Schroeder; Adil Hasan

Large scientific projects need collaborative data-sharing environments. For projects like the Ocean Observatories Initiative (OOI), the Temporal Dynamics of Learning Center (TDLC), and the Large Synoptic Survey Telescope (LSST), the amount of data collected will be on the order of petabytes, stored across distributed heterogeneous resources under multiple administrative organizations. Policy-oriented data management is essential in such collaborations. The integrated Rule-Oriented Data System (iRODS) is a peer-to-peer, federated server-client architecture that uses a distributed rule engine to apply data-management policies encoded as rules. The rules are triggered by data-management events (ingestion, access, modification, annotation, format conversion, etc.) as well as periodically (to check the integrity of data collections, perform intelligent data archiving and placement, balance load, etc.). Rules are applied by system administrators (e.g., for resource creation and user management) and by individual users, groups, and data providers to tailor the sharing and access of data to their own needs. In this paper, we discuss the architecture of the iRODS middleware system and some applications of the software.
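The event-triggered policy idea the abstract describes can be illustrated with a minimal sketch. This is not the iRODS rule language or its API; the `PolicyEngine` class, the `"ingest"` event, and both rule functions are hypothetical names invented for illustration.

```python
# Minimal sketch of event-triggered policy rules, loosely modeled on the
# idea of firing data-management rules on events such as ingestion.
# All names here are illustrative, not iRODS APIs.

class PolicyEngine:
    def __init__(self):
        self._rules = {}  # event name -> list of rule callables

    def register(self, event, rule):
        """Attach a policy rule to a data-management event."""
        self._rules.setdefault(event, []).append(rule)

    def fire(self, event, context):
        """Run every rule registered for an event; collect their actions."""
        actions = []
        for rule in self._rules.get(event, []):
            actions.extend(rule(context))
        return actions

def replicate_on_ingest(ctx):
    # Example policy: replicate a newly ingested file to a second resource.
    return [("replicate", ctx["path"], "archive-resource")]

def checksum_on_ingest(ctx):
    # Example policy: record a checksum for later integrity checking.
    return [("checksum", ctx["path"])]

engine = PolicyEngine()
engine.register("ingest", replicate_on_ingest)
engine.register("ingest", checksum_on_ingest)

actions = engine.fire("ingest", {"path": "/zone/home/alice/data.dat"})
```

Because rules are plain callables keyed by event, administrators and individual users can each register their own policies on the same event, which is the tailoring behavior the abstract describes.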


Concurrency and Computation: Practice and Experience | 1999

The SDSC encryption/authentication (SEA) system

Wayne Schroeder

As part of the Distributed Object Computation Testbed project (DOCT) and the Data Intensive Computing initiative of the National Partnership for Advanced Computational Infrastructure (NPACI), the San Diego Supercomputer Center has designed and implemented a multi-platform encryption and authentication system referred to as the SDSC Encryption and Authentication, or SEA, system. The SEA system is based on RSA and RC5 encryption capabilities and is designed for use in an HPC/WAN environment containing diverse hardware architectures and operating systems (including Cray T90, Cray T3E, Cray J90, SunOS, Solaris, AIX, SGI, HP, NextStep, and Linux). The system includes the SEA library, which provides reliable, efficient, and flexible authentication and encryption capabilities between two processes communicating via TCP/IP sockets, and SEA utilities/daemons, which provide a simple key management system. It is currently in use by the SDSC Storage Resource Broker (SRB), as well as by user interface utilities to SDSC's installation of the High Performance Storage System (HPSS). This paper presents the design and capabilities of the SEA system and discusses future plans for enhancing it.
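The core pattern of authenticating two processes over a socket can be sketched as a challenge-response exchange. Note the hedge: SEA itself is built on RSA and RC5, whereas this illustration substitutes a shared-key HMAC from the Python standard library so it stays self-contained; all function names are invented for the sketch.

```python
# Conceptual sketch of shared-key challenge-response authentication of the
# kind performed between two processes before they trust a connection.
# This uses stdlib HMAC-SHA256 for illustration; the real SEA system is
# based on RSA and RC5.

import hashlib
import hmac
import os

def make_challenge():
    # Server side: send the client a fresh random nonce.
    return os.urandom(16)

def prove(shared_key, challenge):
    # Client side: prove knowledge of the key without transmitting it.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    # Server side: recompute the expected response and compare in
    # constant time to avoid timing leaks.
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)          # distributed out of band by a key manager
challenge = make_challenge()
response = prove(key, challenge)
assert verify(key, challenge, response)
```

A fresh nonce per connection is what prevents replay: capturing one valid response does not let an attacker answer the next challenge.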


IEEE Conference on Mass Storage Systems and Technologies | 1999

Analysis of HPSS performance based on per-file transfer logs

Wayne Schroeder; Richard Marciano; Joseph Lopez; Michael K. Gleicher; George Kremenek; Chaitan Baru; Reagan Moore

This paper analyses high performance storage system (HPSS) performance and, to a lesser extent, characterizes the San Diego Supercomputer Center (SDSC) HPSS workload, utilizing per-file transfer logs. The performance examined includes disk cache hit rates, disk file open times, library tape mount times, manual tape mount times, transfer speeds, and latencies. For workload characterization, we examine daily activity in terms of file counts for get and put, and total bytes transferred, as well as activity loads in 10-minute intervals. Our results largely confirm our expectations but provide more accurate and complete descriptions, with unexpected subtleties. The visual representations provide additional insights into the functioning of the system.
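Two of the metrics the abstract names, disk-cache hit rate and transfer speed, can be computed from per-file logs with a short sketch. The four-field log format below is invented for illustration; actual HPSS transfer logs differ.

```python
# Hedged sketch of per-file transfer-log analysis: compute the disk-cache
# hit rate and the mean per-file transfer speed. The log format
# "operation cache-status bytes seconds" is hypothetical.

def analyze(log_lines):
    hits = total = 0
    speeds = []
    for line in log_lines:
        op, cache, nbytes, seconds = line.split()
        total += 1
        if cache == "hit":
            hits += 1
        if float(seconds) > 0:
            # Per-file transfer speed in bytes per second.
            speeds.append(int(nbytes) / float(seconds))
    hit_rate = hits / total if total else 0.0
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return hit_rate, mean_speed

logs = [
    "get hit  1048576 0.5",   # served from disk cache
    "get miss 1048576 8.0",   # much slower: staged from tape
    "put hit  2097152 1.0",
]
hit_rate, mean_speed = analyze(logs)
```

Even this toy example shows the pattern the paper exploits: cache misses stand out as order-of-magnitude latency outliers, so per-file logs separate cache behavior from raw device speed.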


Collaboration Technologies and Systems | 2009

Universal view and open policy: Paradigms for collaboration in data grids

Arcot Rajasekar; Reagan Moore; Michael Wan; Wayne Schroeder

Large-scale Data Grid Systems (LDGS) facilitate collaborative sharing of large collections (petabytes and 100s of millions of objects) containing files, databases, and data streams that are geographically distributed across heterogeneous resources and multiple administrative domains. LDGS provide a “universal view” of the distributed data, resources, users, and methods, and hide the idiosyncrasies and heterogeneity of the underlying infrastructure and protocols, enhancing user collaborations. To improve transparency, an “open policy” system is needed by which data providers and administrators can describe the exact processes and policies that implement LDGS services. We consider policies and processes to be the essential defining characteristics of a productive LDGS collaboration. We have implemented an LDGS, called the integrated Rule-Oriented Data System (iRODS), which provides a universal view while enabling an open policy environment for publishing descriptions of the available services. The open policy environment is supported by a distributed workflow/rule engine. The services are encoded as rules in a high-level workflow language that transparently describes the underlying functionality. Well-defined semantics are used to control the composition of the workflow functions, called micro-services, to map to the desired client-level actions. In this paper, we describe the iRODS system from the “universal view” and “open policy” perspective and show its scalability for managing more than 10 million files.


International Geoscience and Remote Sensing Symposium | 2010

Cyber infrastructure for Community Remote Sensing

Arcot Rajasekar; Reagan Moore; Mike Wan; Wayne Schroeder

Community Remote Sensing (CRS) is an emerging field in which information about the environment is collected by the general public and then integrated into collections to provide a holistic view of the environment with local detail. We argue the need for a common architecture for the cyber-infrastructure required to serve Community Remote Sensing systems. We identify the challenges that such a cyber-infrastructure (CRS-CI) must meet and propose five principles as solutions to these challenges. Finally, we describe the integrated Rule-Oriented Data System, a data grid middleware built upon these principles that provides an ideal, exemplary implementation for CRS-CI.


SIGUCCS: User Services Conference | 1991

Electronic consulting: software tools to enhance consulting at the San Diego Supercomputer Center

Mark Sheddon; Wayne Schroeder

This paper describes some of the software tools and methods used at the San Diego Supercomputer Center (SDSC) to help users accomplish their computational science tasks on the center's CRAY Y-MP8/864 supercomputer, which runs under UNICOS, Cray Research's version of UNIX. These software tools fall into two main classes: tools used by the users themselves and tools used by the SDSC user consulting staff to assist users. The former includes the UNIX manual page system, SDSC's online document program called dot, the news utility, electronic mail, SDSC's give, givedti, takedti, j7r, and error utilities, and SDSC's CPU-time quota system utilities (reslist, resalloc, and acctrep). The latter class includes software and methods to access information on disk hogs, system-call hogs, held jobs (CPU-time quota system), Network Queuing System entries, archived e-mail, software-configuration management, and software-requests management.
SDSC serves the University of California and over 40 industrial partners. Most researchers connect to SDSC via NSFNET, which provides high-speed access among the National Science Foundation supercomputer centers, mid-level regional networks, and other research-center networks such as SPAN/HEPnet, ESnet, and CSUnet. SDSC researchers have widely varied computing experience and are pursuing projects as diverse as global-climate-change modeling, rational drug design, elucidation of high-temperature superconductivity, seismic analysis of structures, and quantitative genetics of natural plant populations. One of SDSC's most important tasks is to help these researchers make effective use of the center's CRAY Y-MP8/864 supercomputer, which runs under UNICOS, Cray Research's UNIX operating system. The SDSC CRAY is used heavily. Every month about 120 new users are given logins and about 1,100 people use some time on it. Typically there are 60 to 110 active login sessions during the day. There is almost never any idle time due to lack of user load.
To assist with training these users, the center's staff offers a three-day training program and encourages the use of its extensive documentation.


Archive | 2010

iRODS Primer: integrated Rule-Oriented Data System

Arcot Rajasekar; Reagan Moore; Chien-Yi Hou; Christopher A. Lee; Richard Marciano; Antoine de Torcy; Michael Wan; Wayne Schroeder; Sheau-Yen Chen; Lucas Gilbert; Paul Tooby; Bing Zhu

Collaboration


Dive into Wayne Schroeder's collaborations.

Top Co-Authors

Reagan Moore (University of California)
Michael Wan (San Diego Supercomputer Center)
Chaitan Baru (University of California)
Mike Wan (University of California)
Amarnath Gupta (University of California)