Publication


Featured research published by Richard Wolski.


Future Generation Computer Systems | 1999

The network weather service: a distributed resource performance forecasting service for metacomputing

Richard Wolski; Neil Spring; Jim Hayes

The goal of the Network Weather Service is to provide accurate forecasts of dynamically changing performance characteristics from a distributed set of metacomputing resources. Providing a ubiquitous service that can both track dynamic performance changes and remain stable in spite of them requires adaptive programming techniques, an architectural design that supports extensibility, and internal abstractions that can be implemented efficiently and portably. In this paper, we describe the current implementation of the NWS for Unix and TCP/IP sockets and provide examples of its performance monitoring and forecasting capabilities.
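
The abstract does not spell out the forecasting models; as a minimal sketch of one lightweight, portable technique a service of this kind could apply per resource, the snippet below produces one-step-ahead exponential-smoothing forecasts of a bandwidth series and reports their mean square prediction error. The measurement values and the smoothing factor are hypothetical.

```python
# Illustrative sketch only, not the NWS forecaster: one-step-ahead exponential
# smoothing over a series of (hypothetical) bandwidth measurements.

def smooth_forecast(series, alpha=0.3):
    """Return (forecasts, next_forecast).

    forecasts[i] is the prediction that was made for series[i + 1];
    next_forecast predicts the not-yet-seen next value.
    """
    forecast = series[0]                  # seed with the first observation
    forecasts = []
    for actual in series[1:]:
        forecasts.append(forecast)        # prediction made before seeing `actual`
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecasts, forecast

if __name__ == "__main__":
    # Hypothetical bandwidth measurements (Mbit/s) taken at a fixed interval.
    bandwidth = [8.1, 7.9, 8.4, 6.2, 6.0, 6.3, 7.8, 8.0, 7.7, 5.9]
    preds, next_pred = smooth_forecast(bandwidth)
    mse = sum((p - a) ** 2 for p, a in zip(preds, bandwidth[1:])) / len(preds)
    print(f"forecast for the next interval: {next_pred:.2f} Mbit/s")
    print(f"mean square prediction error:   {mse:.3f}")
```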


IEEE Transactions on Parallel and Distributed Systems | 2003

Adaptive computing on the Grid using AppLeS

Francine Berman; Richard Wolski; Henri Casanova; Walfredo Cirne; Holly Dail; Marcio Faerman; Silvia Figueira; Jim Hayes; Graziano Obertelli; Jennifer M. Schopf; Gary Shao; Shava Smallen; Neil Spring; Alan Su; Dmitrii Zagorodnov

Ensembles of distributed, heterogeneous resources, also known as computational grids, have emerged as critical platforms for high-performance and resource-intensive applications. Such platforms provide the potential for applications to aggregate enormous bandwidth, computational power, memory, secondary storage, and other resources during a single execution. However, achieving this performance potential in dynamic, heterogeneous environments is challenging. Recent experience with distributed applications indicates that adaptivity is fundamental to achieving application performance in dynamic grid environments. The AppLeS (Application Level Scheduling) project provides a methodology, application software, and software environments for adaptively scheduling and deploying applications in heterogeneous, multiuser grid environments. We discuss the AppLeS project and outline our findings.
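
As a loose illustration of adaptivity (not the AppLeS software), the sketch below chooses where to place a task by estimating completion time from hypothetical forecasts of per-host CPU availability and bandwidth, rather than from nominal capacities alone; all names and numbers are invented.

```python
# Illustrative sketch, not AppLeS itself: place a task on the host with the best
# estimated completion time under forecasted conditions. Numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class HostForecast:
    name: str
    cpu_fraction: float    # forecasted fraction of the CPU available to us (0..1)
    peak_mflops: float     # nominal compute rate of the host
    bandwidth_mbps: float  # forecasted bandwidth from the data source to the host

def estimated_completion_time(host, work_mflop, input_mbit):
    transfer = input_mbit / host.bandwidth_mbps
    compute = work_mflop / (host.peak_mflops * host.cpu_fraction)
    return transfer + compute

def pick_host(hosts, work_mflop, input_mbit):
    return min(hosts, key=lambda h: estimated_completion_time(h, work_mflop, input_mbit))

if __name__ == "__main__":
    hosts = [
        HostForecast("fast-but-loaded", cpu_fraction=0.2, peak_mflops=900, bandwidth_mbps=80),
        HostForecast("slow-but-idle",   cpu_fraction=0.9, peak_mflops=300, bandwidth_mbps=40),
    ]
    best = pick_host(hosts, work_mflop=50_000, input_mbit=400)
    print("schedule task on:", best.name)   # the nominally slower, idle host wins here
```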


conference on high performance computing (supercomputing) | 1996

Application-Level Scheduling on Distributed Heterogeneous Networks

Fran Berman; Richard Wolski; Silvia Figueira; Jennifer M. Schopf; Gary Shao

Heterogeneous networks are increasingly being used as platforms for resource-intensive distributed parallel applications. A critical contributor to the performance of such applications is the scheduling of constituent application tasks on the network. Since often the distributed resources cannot be brought under the control of a single global scheduler, the application must be scheduled by the user. To obtain the best performance, the user must take into account both application-specific and dynamic system information in developing a schedule which meets his or her performance criteria. In this paper, we define a set of principles underlying application-level scheduling and describe our work-in-progress building AppLeS (application-level scheduling) agents. We illustrate the application-level scheduling approach with a detailed description and results for a distributed 2D Jacobi application on two production heterogeneous platforms.
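
The paper combines application-specific structure with dynamic system information. Purely as a sketch of that idea (not the AppLeS agents themselves), the snippet below strip-partitions the rows of a 2D Jacobi grid in proportion to hypothetical forecasts of each host's effective speed; the grid size and speeds are invented.

```python
# Illustrative sketch: strip-partition the rows of an N x N Jacobi grid in
# proportion to forecasted effective speed, so faster hosts get more rows.
# Grid size and speeds are hypothetical.

def partition_rows(n_rows, speeds):
    """Return (start, end) row ranges, one per host, sized in proportion to `speeds`."""
    total = sum(speeds)
    bounds, start, acc = [], 0, 0.0
    for i, speed in enumerate(speeds):
        acc += n_rows * speed / total
        end = n_rows if i == len(speeds) - 1 else round(acc)  # last host absorbs rounding slack
        bounds.append((start, end))
        start = end
    return bounds

if __name__ == "__main__":
    speeds = [120.0, 45.0, 45.0, 90.0]      # hypothetical effective Mflop/s per host
    for host, (lo, hi) in enumerate(partition_rows(600, speeds)):
        print(f"host {host}: rows {lo}..{hi - 1} ({hi - lo} rows)")
```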


Cluster Computing | 1998

Dynamically forecasting network performance using the Network Weather Service

Richard Wolski

The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling (Berman et al., 1996) and, by the metacomputing software infrastructure, to develop quality-of-service guarantees (DeFanti et al., to appear; Grimshaw et al., 1994).
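
A forecast of end-to-end throughput and latency has to start from measurements; as an illustrative sketch only (not the NWS network sensor), the snippet below runs an active TCP probe against a throwaway local sink server, timing a one-byte echo for latency and a bulk transfer for throughput. The probe size and loopback setup are assumptions for the example.

```python
# Illustrative sketch, not the NWS sensor: an active TCP probe that measures
# small-message round-trip time and bulk-transfer throughput against a local
# echo/sink server, so the example is self-contained.

import socket
import threading
import time

PROBE_BYTES = 4 * 1024 * 1024  # hypothetical probe size for the throughput test

def sink_server(listener):
    """Accept one connection: echo one byte, drain the bulk probe, then ack."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1))       # echo a single byte for the RTT probe
        while conn.recv(65536):          # drain the throughput probe to completion
            pass
        conn.sendall(b"k")               # tell the client everything has arrived

def probe(host, port):
    with socket.create_connection((host, port)) as s:
        t0 = time.perf_counter()
        s.sendall(b"x")
        s.recv(1)                        # one-byte echo round trip: latency
        rtt = time.perf_counter() - t0

        t0 = time.perf_counter()
        s.sendall(b"\0" * PROBE_BYTES)   # bulk transfer: throughput
        s.shutdown(socket.SHUT_WR)       # mark end of the probe stream
        s.recv(1)                        # wait for the server's "drained" ack
        elapsed = time.perf_counter() - t0
    return rtt, (PROBE_BYTES * 8) / (elapsed * 1e6)   # (seconds, Mbit/s)

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=sink_server, args=(listener,), daemon=True).start()

    rtt, mbps = probe("127.0.0.1", listener.getsockname()[1])
    print(f"round-trip latency: {rtt * 1000:.3f} ms")
    print(f"throughput:         {mbps:.1f} Mbit/s (loopback, illustration only)")
```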


ieee international conference on high performance computing data and analytics | 2001

Analyzing Market-Based Resource Allocation Strategies for the Computational Grid

Richard Wolski; James S. Plank; John Brevik; Todd Bryan

In this paper, the authors investigate G-commerce—computational economies for controlling resource allocation in computational Grid settings. They define hypothetical resource consumers (representing users and Grid-aware applications) and resource producers (representing resource owners who “sell” their resources to the Grid). The authors then measure the efficiency of resource allocation under two different market conditions—commodities markets and auctions—and compare both market strategies in terms of price stability, market equilibrium, consumer efficiency, and producer efficiency. The results indicate that commodities markets are a better choice for controlling Grid resources than previously defined auction strategies.
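
As a purely illustrative sketch of a commodity-market mechanism (not the paper's G-commerce model), the loop below adjusts a single resource price in proportion to excess demand until supply and demand roughly balance; the demand and supply curves are invented.

```python
# Illustrative sketch of commodity-market price adjustment (tatonnement), not the
# G-commerce model from the paper. The supply/demand curves are hypothetical.

def demand(price):
    """Aggregate CPU-hours consumers want at this price (made-up curve)."""
    return max(0.0, 1000.0 - 40.0 * price)

def supply(price):
    """Aggregate CPU-hours producers offer at this price (made-up curve)."""
    return 25.0 * price

def adjust_price(price=5.0, step=0.01, tolerance=1.0, max_iters=10_000):
    for _ in range(max_iters):
        excess = demand(price) - supply(price)
        if abs(excess) < tolerance:
            break
        price = max(0.0, price + step * excess)   # raise price when demand exceeds supply
    return price

if __name__ == "__main__":
    p = adjust_price()
    print(f"approximate clearing price: {p:.2f}")
    print(f"demand {demand(p):.1f} vs supply {supply(p):.1f} CPU-hours")
```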


high performance distributed computing | 1997

Forecasting network performance to support dynamic scheduling using the network weather service

Richard Wolski

The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling, and by the metacomputing software infrastructure to develop quality-of-service guarantees.


european conference on parallel processing | 2005

Modeling machine availability in enterprise and wide-area distributed computing environments

Daniel Nurmi; John Brevik; Richard Wolski

In this paper, we consider the problem of modeling machine availability in enterprise-area and wide-area distributed computing settings. Using availability data gathered from three different environments, we detail the suitability of four potential statistical distributions for each data set: exponential, Pareto, Weibull, and hyperexponential. In each case, we use software we have developed to determine the necessary parameters automatically from each data collection. To gauge suitability, we present both graphical and statistical evaluations of the accuracy with which each distribution fits each data set. For all three data sets, we find that a hyperexponential model fits slightly more accurately than a Weibull, but that both are substantially better choices than either an exponential or Pareto. These results indicate that either a hyperexponential or Weibull model effectively represents machine availability in enterprise and Internet computing environments.
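
The paper fits the candidate distributions with its own software; as an illustration of the general approach only, the sketch below fits Weibull and exponential models to a synthetic set of availability durations with SciPy and compares their log-likelihoods. Hyperexponential fitting, which typically needs an EM-style procedure, is omitted, and the synthetic data and parameters are assumptions.

```python
# Illustrative sketch only, not the paper's fitting software: fit Weibull and
# exponential models to synthetic machine-availability durations and compare
# log-likelihoods. Requires numpy and scipy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic uptimes (hours): Weibull with shape < 1, i.e. many short lives plus a heavy tail.
uptimes = stats.weibull_min.rvs(c=0.6, scale=120.0, size=2000, random_state=rng)

# Fit with the location pinned at zero (durations cannot be negative).
wb_shape, _, wb_scale = stats.weibull_min.fit(uptimes, floc=0)
_, exp_scale = stats.expon.fit(uptimes, floc=0)

wb_ll = stats.weibull_min.logpdf(uptimes, wb_shape, loc=0, scale=wb_scale).sum()
exp_ll = stats.expon.logpdf(uptimes, loc=0, scale=exp_scale).sum()

print(f"Weibull fit:     shape={wb_shape:.2f}, scale={wb_scale:.1f}, log-likelihood={wb_ll:.1f}")
print(f"Exponential fit: scale={exp_scale:.1f}, log-likelihood={exp_ll:.1f}")
print("better fit by log-likelihood:", "Weibull" if wb_ll > exp_ll else "exponential")
```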


conference on high performance computing (supercomputing) | 1997

Implementing a Performance Forecasting System for Metacomputing: The Network Weather Service

Richard Wolski; Neil Spring; Christopher N. Peterson

In this paper we describe the design and implementation of a system called the Network Weather Service (NWS) that takes periodic measurements of deliverable resource performance from distributed networked resources, and uses numerical models to dynamically generate forecasts of future performance levels. These performance forecasts, along with measures of performance fluctuation (e.g. the mean square prediction error) and forecast lifetime that the NWS generates, are made available to schedulers and other resource management mechanisms at runtime so that they may determine the quality-of-service that will be available from each resource. We describe the architecture of the NWS and implementations that we have developed and are currently deploying for the Legion [13] and Globus/Nexus [7] metacomputing infrastructures. We also detail NWS forecasts of resource performance using both the Legion and Globus/Nexus implementations. Our results show that simple forecasting techniques substantially outperform measurements of current conditions (commonly used to gauge resource availability and load) in terms of prediction accuracy. In addition, the techniques we have employed are almost as accurate as substantially more complex modeling methods. We compare our techniques to a sophisticated time-series analysis system in terms of forecasting accuracy and computational complexity.
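
To illustrate how simple predictors can be compared by mean square prediction error (the comparison is the point; these are not the NWS forecasters), the sketch below scores a last-measurement predictor against a running mean and a sliding median on an invented noisy trace.

```python
# Illustrative sketch, not the NWS forecasters: compare three simple one-step
# predictors on a noisy synthetic trace by mean square prediction error.

import random
import statistics

def last_value(history):
    return history[-1]

def running_mean(history):
    return sum(history) / len(history)

def sliding_median(history):
    return statistics.median(history[-20:])   # 20-sample window is an arbitrary choice

def mse(predictor, series):
    """Mean square error of one-step-ahead predictions over the series."""
    errors = [(predictor(series[:i]) - series[i]) ** 2 for i in range(1, len(series))]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    random.seed(1)
    # Synthetic load trace: a slowly drifting level with occasional sharp dips.
    trace, level = [], 0.6
    for _ in range(500):
        level = min(1.0, max(0.0, level + random.gauss(0, 0.01)))
        dip = -0.4 if random.random() < 0.05 else 0.0
        trace.append(min(1.0, max(0.0, level + dip + random.gauss(0, 0.02))))

    for name, predictor in [("last value", last_value),
                            ("running mean", running_mean),
                            ("sliding median", sliding_median)]:
        print(f"{name:<15} mean square prediction error = {mse(predictor, trace):.5f}")
```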


high performance distributed computing | 1996

Scheduling from the perspective of the application

Francine Berman; Richard Wolski

Metacomputing is the aggregation of distributed and high-performance resources on coordinated networks. With careful scheduling, resource-intensive applications can be implemented efficiently on metacomputing systems at the sizes of interest to developers and users. In this paper, we focus on the problem of scheduling applications on metacomputing systems. We introduce the concept of application-centric scheduling in which everything about the system is evaluated in terms of its impact on the application. Application-centric scheduling is used by virtually all metacomputer programmers to achieve performance on metacomputing systems. We describe two successful metacomputing applications to illustrate this approach, and describe AppLeS (Application-Level Scheduling) agents which generalize the application-centric scheduling approach. Finally, we show preliminary results which compare AppLeS-derived schedules with conventional strip and blocked schedules for a 2D Jacobi code.
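
As a sketch of evaluating a schedule purely by its impact on the application (not the AppLeS agents themselves), the snippet below scores two candidate work partitions of a stencil-style computation by predicted completion time, where a bulk-synchronous step finishes only when the slowest host does; the rates and partition sizes are hypothetical.

```python
# Illustrative sketch of application-centric schedule evaluation: rank candidate
# work partitions by the metric the application cares about, predicted completion
# time. All rates and sizes are hypothetical.

def completion_time(rows_per_host, cols, rates_cells_per_sec):
    """A bulk-synchronous step finishes when the slowest host finishes."""
    return max(rows * cols / rate
               for rows, rate in zip(rows_per_host, rates_cells_per_sec))

if __name__ == "__main__":
    cols = 600
    rates = [3.0e6, 1.0e6, 1.0e6, 2.0e6]       # forecasted cells/second per host

    strip = [150, 150, 150, 150]                # equal strips, ignores the forecasts
    weighted = [257, 86, 86, 171]               # rows roughly proportional to the forecasts

    for name, plan in [("equal strips", strip), ("forecast-weighted", weighted)]:
        print(f"{name:<18} predicted step time = {completion_time(plan, cols, rates):.3f} s")
```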


Cluster Computing | 2000

Predicting the CPU availability of time-shared Unix systems on the computational grid

Richard Wolski; Neil Spring; Jim Hayes

In this paper we focus on the problem of making short and medium term forecasts of CPU availability on time-shared Unix systems. We evaluate the accuracy with which availability can be measured using Unix load average, the Unix utility vmstat, and the Network Weather Service CPU sensor that uses both. We also examine the autocorrelation between successive CPU measurements to determine their degree of self-similarity. While our observations show a long-range autocorrelation dependence, we demonstrate how this dependence manifests itself in the short and medium term predictability of the CPU resources in our study.
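
As a small illustration of the autocorrelation analysis mentioned above (not the paper's measurement study), the sketch below computes the lag-k autocorrelation of a synthetic, strongly persistent availability trace.

```python
# Illustrative sketch, not the paper's analysis: lag-k autocorrelation of a
# synthetic CPU-availability series, the kind of statistic used to judge how
# predictable successive measurements are.

import random

def autocorrelation(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

if __name__ == "__main__":
    random.seed(7)
    # Synthetic availability trace with strong persistence (AR(1)-like).
    avail, x = [], 0.5
    for _ in range(2000):
        x = 0.95 * x + 0.05 * random.uniform(0.0, 1.0)
        avail.append(x)

    for lag in (1, 5, 20, 60):
        print(f"lag {lag:>3}: autocorrelation = {autocorrelation(avail, lag):.3f}")
```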

Collaboration


Dive into Richard Wolski's collaborations.

Top Co-Authors

John Brevik | California State University
Daniel Nurmi | University of California
Chandra Krintz | University of California
Henri Casanova | University of California
D. Martin Swany | Indiana University Bloomington
Gary Shao | University of California