Publication


Featured research published by Silvia Figueira.


IEEE Transactions on Parallel and Distributed Systems | 2003

Adaptive computing on the Grid using AppLeS

Francine Berman; Richard Wolski; Henri Casanova; Walfredo Cirne; Holly Dail; Marcio Faerman; Silvia Figueira; Jim Hayes; Graziano Obertelli; Jennifer M. Schopf; Gary Shao; Shava Smallen; Neil Spring; Alan Su; Dmitrii Zagorodnov

Ensembles of distributed, heterogeneous resources, also known as computational grids, have emerged as critical platforms for high-performance and resource-intensive applications. Such platforms provide the potential for applications to aggregate enormous bandwidth, computational power, memory, secondary storage, and other resources during a single execution. However, achieving this performance potential in dynamic, heterogeneous environments is challenging. Recent experience with distributed applications indicates that adaptivity is fundamental to achieving application performance in dynamic grid environments. The AppLeS (Application Level Scheduling) project provides a methodology, application software, and software environments for adaptively scheduling and deploying applications in heterogeneous, multiuser grid environments. We discuss the AppLeS project and outline our findings.
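
As a rough illustration of the adaptive, application-level approach described above (not the AppLeS code itself), the sketch below re-partitions work whenever fresh load forecasts make a noticeably better schedule available; all host names, loads, and thresholds are invented for the example.

# Illustrative sketch of an adaptive application-level scheduling loop
# (not the AppLeS implementation). An agent repeatedly gathers dynamic
# resource information, builds a candidate schedule, and redeploys only
# when the predicted benefit is significant.
import random
import time

def forecast_loads(hosts):
    # Stand-in for a dynamic resource forecast; returns load in [0, 1).
    return {h: random.random() for h in hosts}

def build_schedule(work_units, loads):
    # Give each host a share proportional to its currently unloaded fraction.
    free = {h: 1.0 - l for h, l in loads.items()}
    total = sum(free.values())
    return {h: work_units * free[h] / total for h in free}

def predicted_time(schedule, loads, unit_cost=0.01):
    # The slowest host determines the completion time of one iteration.
    return max(schedule[h] * unit_cost / (1.0 - loads[h]) for h in schedule)

if __name__ == "__main__":
    hosts, work = ["hostA", "hostB", "hostC"], 1000
    current = build_schedule(work, forecast_loads(hosts))
    for _ in range(3):                         # adaptive re-scheduling loop
        loads = forecast_loads(hosts)
        candidate = build_schedule(work, loads)
        if predicted_time(candidate, loads) < 0.9 * predicted_time(current, loads):
            current = candidate                # redeploy only if clearly better
        time.sleep(0.1)                        # longer interval in practice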


Conference on High Performance Computing (Supercomputing) | 1996

Application-Level Scheduling on Distributed Heterogeneous Networks

Fran Berman; Richard Wolski; Silvia Figueira; Jennifer M. Schopf; Gary Shao

Heterogeneous networks are increasingly being used as platforms for resource-intensive distributed parallel applications. A critical contributor to the performance of such applications is the scheduling of constituent application tasks on the network. Since often the distributed resources cannot be brought under the control of a single global scheduler, the application must be scheduled by the user. To obtain the best performance, the user must take into account both application-specific and dynamic system information in developing a schedule which meets his or her performance criteria. In this paper, we define a set of principles underlying application-level scheduling and describe our work-in-progress building AppLeS (application-level scheduling) agents. We illustrate the application-level scheduling approach with a detailed description and results for a distributed 2D Jacobi application on two production heterogeneous platforms.
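
The partitioning decision at the heart of the 2D Jacobi example can be sketched as follows; this is an illustration only, not the paper's AppLeS agent, and the machine names and speeds are assumed values.

# Minimal sketch of the data-partitioning decision for a distributed 2D
# Jacobi solver: contiguous row strips are sized in proportion to each
# machine's measured speed, so per-iteration compute time is roughly balanced.

def strip_partition(n_rows, speeds):
    """Return {host: (first_row, last_row)} with strip sizes ~ speed share."""
    total = sum(speeds.values())
    assignment, start = {}, 0
    hosts = list(speeds)
    for i, h in enumerate(hosts):
        size = round(n_rows * speeds[h] / total)
        if i == len(hosts) - 1:            # last host takes the remainder
            size = n_rows - start
        assignment[h] = (start, start + size - 1)
        start += size
    return assignment

if __name__ == "__main__":
    speeds = {"sun1": 40.0, "sun2": 25.0, "sparc10": 15.0}  # assumed MFLOPS
    print(strip_partition(1024, speeds))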


Future Generation Computer Systems | 2007

Elastic reservations for efficient bandwidth utilization in LambdaGrids

Sumit Naiksatam; Silvia Figueira

We introduce the concept of elastic reservation of bandwidth capacity to mitigate the problem of bandwidth fragmentation in LambdaGrids and present a network model which can support elastic reservations. We also define the Elastic Scheduling Problem (ESP), which succinctly captures the optimal utilization objective of elastic reservations. Analysis of ESP reveals that it is an NP-complete problem. Hence we present a heuristic algorithm, Squeeze In Stretch Out (SISO), for tackling ESP in polynomial time. SISO achieves good bandwidth utilization in simulation and efficiently handles the dynamic sharing of bandwidth between advance and immediate reservation requests. We also explore the impact of cost incentives for adopting elastic reservations on both the service provider and the user. In general, the approach for elastic reservation and scheduling presented in this paper is applicable to any concurrently accessible resource where the usage characteristics are quasi-flexible.
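
The elastic idea can be illustrated with a toy sketch (this is not the SISO algorithm itself): if a request's desired rate does not fit in the free capacity, it is squeezed to a lower rate and stretched to a longer duration so that the same volume of data is moved.

# Toy sketch of an elastic bandwidth reservation (illustrative only).

def elastic_fit(volume_gb, desired_rate_gbps, free_rate_gbps, min_rate_gbps):
    """Return (rate, duration_s) that fits the free capacity, or None if
    even the minimum acceptable rate exceeds what is available."""
    rate = min(desired_rate_gbps, free_rate_gbps)
    if rate < min_rate_gbps:
        return None                        # cannot squeeze below the floor
    duration = volume_gb * 8.0 / rate      # GB -> Gb, then divide by Gb/s
    return rate, duration

if __name__ == "__main__":
    # 500 GB transfer, would like 10 Gb/s, but only 4 Gb/s is currently free.
    print(elastic_fit(500, 10.0, 4.0, 1.0))   # -> (4.0, 1000.0 seconds)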


Cluster Computing and the Grid | 2004

DWDM-RAM: enabling Grid services with dynamic optical networks

Silvia Figueira; Sumit Naiksatam; Howard J. Cohen; Doug Cutrell; Paul Daspit; David Gutierrez; Doan B. Hoang; Tal Lavian; Joe Mambretti; Steve Merrill; Franco Travostino

Advances in Grid technology enable the deployment of data-intensive distributed applications, which require moving terabytes or even petabytes of data between data banks. The current underlying networks cannot provide dedicated links with adequate end-to-end sustained bandwidth to support the requirements of these Grid applications. DWDM-RAM is a novel service-oriented architecture, which harnesses the enormous bandwidth potential of optical networks and demonstrates their on-demand usage on the OMNInet. Preliminary experiments suggest that dynamic optical networks, such as the OMNInet, are the ideal option for transferring such massive amounts of data. DWDM-RAM incorporates an OGSI/OGSA compliant service interface and promotes greater convergence between dynamic optical networks and data intensive Grid computing.
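
Purely as an illustration of the service-oriented style described above, the sketch below shows the kind of lightpath-leasing interface a data-intensive application might call; it is not the actual DWDM-RAM/OGSI interface, and all names and fields are invented.

# Hypothetical lightpath-leasing service interface (illustrative only).
from dataclasses import dataclass

@dataclass
class LightpathLease:
    path_id: str
    src: str
    dst: str
    rate_gbps: float
    start: float          # seconds since epoch
    duration_s: float

class LightpathService:
    def __init__(self):
        self._next_id = 0

    def request(self, src, dst, rate_gbps, start, duration_s):
        # A real service would consult the optical control plane and either
        # grant the lease or propose an alternative time window.
        self._next_id += 1
        return LightpathLease(f"lp-{self._next_id}", src, dst,
                              rate_gbps, start, duration_s)

    def release(self, lease: LightpathLease):
        pass   # tear down the wavelength path

if __name__ == "__main__":
    svc = LightpathService()
    lease = svc.request("storage-site-A", "compute-site-B", 10.0, 0.0, 3600.0)
    print(lease)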


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2006

Flexible Time-Windows for Advance Reservation Scheduling

Neena R. Kaushik; Silvia Figueira; Stephen A. Chiappari

Advance reservation is an essential feature of any system in which resources may need to be co-allocated at predetermined times. In this paper, we discuss unconstrained advance reservations, which use flexible time-windows to lower blocking probability and, consequently, increase resource utilization. We show through simulation that, in a first-come-first-served advance-reservation model without time slots, the minimum window size that theoretically brings the blocking probability to zero equals the waiting time in a queue-based on-demand model. We also show, via simulation, the effect of the window size on the blocking probability and on resource utilization for an advance-reservation model with time slots, under different types of arrival and service times. We then compare the blocking probabilities obtained with on-demand reservations, advance reservations, and unconstrained (flexible) advance reservations.
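
The effect of a flexible time-window can be sketched with a toy slotted model (illustrative only, not the paper's model): a request may start anywhere within its window, and is blocked only if no start time in the window keeps demand within capacity.

# Toy flexible-window advance reservation over discrete time slots.

def fits(schedule, capacity, start, duration, demand):
    return all(schedule.get(t, 0) + demand <= capacity
               for t in range(start, start + duration))

def reserve(schedule, capacity, requested_start, duration, demand, window):
    for start in range(requested_start, requested_start + window + 1):
        if fits(schedule, capacity, start, duration, demand):
            for t in range(start, start + duration):
                schedule[t] = schedule.get(t, 0) + demand
            return start          # accepted at this (possibly shifted) start
    return None                   # blocked

if __name__ == "__main__":
    sched, cap = {}, 2
    print(reserve(sched, cap, 10, 5, 1, window=0))   # 10
    print(reserve(sched, cap, 10, 5, 1, window=0))   # 10
    print(reserve(sched, cap, 10, 5, 1, window=0))   # None: blocked
    print(reserve(sched, cap, 10, 5, 1, window=10))  # 15: window avoids blocking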


IEEE Transactions on Parallel and Distributed Systems | 2001

A slowdown model for applications executing on time-shared clusters of workstations

Silvia Figueira; Francine Berman

Distributed applications executing on clustered environments typically share resources (computers and network links) with other applications. In such systems, application execution may be retarded by the competition for these shared resources. In this paper, we define a model that calculates the slowdown imposed on applications in time-shared multi-user clusters. Our model focuses on three kinds of slowdown: local slowdown, which synthesizes the effect of contention for CPU in a single workstation; communication slowdown, which synthesizes the effect of contention for the workstations and network links on communication costs; and aggregate slowdown, which determines the effect of contention on a parallel task caused by other applications executing on the entire cluster, i.e., on the nodes used by the parallel application. We verify empirically that this model provides an accurate estimate of application performance for a set of compute-intensive parallel applications on different clusters with a variety of emulated loads.
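
A back-of-the-envelope sketch of how such slowdown factors might be combined (illustrative; the paper's model is more detailed, and the factors below are assumed values):

# Combining local and communication slowdown to predict contended run time.

def contended_time(t_comp, t_comm, local_slowdown, comm_slowdown):
    """Dedicated compute/communication times are inflated by the contention
    factors observed on the node and on the links, respectively."""
    return t_comp * local_slowdown + t_comm * comm_slowdown

def aggregate_time(per_node_times):
    # For a bulk-synchronous parallel task, the most-slowed node dominates.
    return max(per_node_times)

if __name__ == "__main__":
    # Two nodes: one lightly loaded, one sharing its CPU with two other jobs.
    nodes = [contended_time(10.0, 2.0, 1.1, 1.3),
             contended_time(10.0, 2.0, 3.0, 1.5)]
    print(aggregate_time(nodes))   # 33.0 seconds per iteration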


High Performance Distributed Computing | 1996

Modeling the effects of contention on the performance of heterogeneous applications

Silvia Figueira; Francine Berman

Fast networks have made it possible to coordinate distributed heterogeneous CPU, memory and storage resources to provide a powerful platform for executing high-performance applications. However, the performance of these applications on such systems is highly dependent on the allocation and efficient coordination of application tasks. A key component for a performance-efficient allocation strategy is a predictive model which provides a realistic estimate of application performance under varying resource loads. In this paper, we present a model for predicting the effects of contention on application behavior in heterogeneous systems. In particular, our model calculates the slowdown imposed on communication and computation for non-dedicated two-machine heterogeneous platforms. We describe the model for the Sun/CM2 and Sun/Paragon coupled heterogeneous systems. We present experiments on production systems with emulated contention which show the predicted communication and computation costs to be within 15% on average of the actual costs.


Cluster Computing and the Grid | 2005

Analyzing the advance reservation of lightpaths in lambda-grids

Sumit Naiksatam; Silvia Figueira; Stephen A. Chiappari; Nirdosh Bhatnagar

Advance reservation of lightpaths in dynamically provisioned optical networks is a novel scheme, and no grid-based applications are yet designed to take advantage of it. We formally define and analyze this scheme and present a constrained mathematical model for advance reservations. We also introduce FONTS, the Flexible Optical Network Traffic Simulator, a tool for simulating advance-reservation, on-demand, and periodic data-transfer requests. FONTS is based on a stochastic model and incorporates a variety of variables identified to accurately model advance-reservation requests. FONTS validates the mathematical model and also helps to analyze complex scenarios beyond the scope of this paper.
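
The kind of synthetic request stream such a simulator consumes can be sketched as follows; the actual FONTS stochastic model and its parameters are not reproduced here, and the distributions below are assumptions.

# Synthetic advance-reservation requests: Poisson arrivals with
# exponentially distributed book-ahead (lead) and holding times.
import random

def generate_requests(n, arrival_rate, mean_lead, mean_duration, seed=42):
    """Return a list of (submit_time, start_time, end_time) tuples."""
    random.seed(seed)
    requests, t = [], 0.0
    for _ in range(n):
        t += random.expovariate(arrival_rate)        # next submission
        lead = random.expovariate(1.0 / mean_lead)   # how far in advance
        duration = random.expovariate(1.0 / mean_duration)
        requests.append((t, t + lead, t + lead + duration))
    return requests

if __name__ == "__main__":
    for req in generate_requests(5, arrival_rate=0.1,
                                 mean_lead=300.0, mean_duration=600.0):
        print([round(x, 1) for x in req])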


IEEE International Conference on Cloud Computing Technology and Science | 2012

Hadoop and memcached: Performance and power characterization and analysis

Joseph Issa; Silvia Figueira

Given the rapid expansion of cloud computing in the past few years, there is a pressing need to analyze and characterize the performance and power consumption of cloud workloads running on backend servers. In this research, we focus on Hadoop and Memcached, two distributed frameworks for processing large-scale, data-intensive applications with different purposes. Hadoop, a popular open-source implementation of MapReduce for the analysis of large datasets, is used for short jobs requiring low response time, while Memcached is a high-performance distributed memory-object caching system that can increase the throughput of web applications by reducing the load bottleneck on the database. In this paper, we characterize different workloads running on Hadoop and Memcached for different processor configurations and microarchitecture parameters. We implement an analytical estimation model for performance and power based on server-processor microarchitecture parameters. The performance model analytically scales microarchitecture parameters such as CPI with respect to processor core frequency, and a companion analytical model estimates how power consumption scales with core frequency. Combining the two models enables the estimation of performance per watt for different cloud benchmarks. The proposed models are verified to estimate power and performance with less than 10% error.
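
As a rough illustration of the kind of frequency-scaling estimate such a model produces, the sketch below scales CPI with core frequency and combines it with a simple power model; all coefficients are invented for the example and are not the paper's fitted parameters.

# Hedged sketch of a frequency-scaling performance/power estimate.

def cpi_at(freq_ghz, cpi_base, base_freq_ghz, mem_fraction):
    """CPI grows with frequency because the memory-bound fraction of the
    workload does not speed up with the core clock."""
    return cpi_base * ((1 - mem_fraction) + mem_fraction * freq_ghz / base_freq_ghz)

def exec_time_s(instructions, freq_ghz, cpi):
    return instructions * cpi / (freq_ghz * 1e9)

def power_w(freq_ghz, p_static_w, k_dynamic):
    # Dynamic power roughly tracks f^3 under DVFS (voltage scales with f).
    return p_static_w + k_dynamic * freq_ghz ** 3

if __name__ == "__main__":
    instr = 5e12                        # assumed instruction count of a job
    for f in (1.6, 2.0, 2.4, 2.8):
        cpi = cpi_at(f, cpi_base=1.2, base_freq_ghz=1.6, mem_fraction=0.3)
        t = exec_time_s(instr, f, cpi)
        p = power_w(f, p_static_w=40.0, k_dynamic=3.0)
        print(f"{f:.1f} GHz: time={t:.1f}s  power={p:.1f}W  "
              f"perf/W={1.0 / (t * p):.2e} jobs/(s*W)")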


High Performance Computing for Computational Science (Vector and Parallel Processing) | 2008

Data Replication and the Storage Capacity of Data Grids

Silvia Figueira; Tan Trieu

Storage is undoubtedly one of the main resources in data grids, and planning the capacity of storage nodes is an important step in any data-grid design. This paper focuses on storage-capacity planning for data grids. We have developed a tool to calculate, for a specific scenario, the minimum capacity required for each storage node in a grid, and we have used this tool to show that different strategies used for data replication may lead to different storage requirements, affecting the storage-capacity planning.
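
The capacity-planning computation can be sketched as a simple trace replay (illustrative; the replication strategy and trace below are invented): replaying replica creations and deletions yields the peak space each storage node must provision.

# Minimal sketch: peak per-node storage from a replica-placement trace.

def peak_capacity(events):
    """events: list of (node, size_gb, +1 for create / -1 for delete),
    in time order. Returns {node: peak_gb}."""
    current, peak = {}, {}
    for node, size_gb, sign in events:
        current[node] = current.get(node, 0) + sign * size_gb
        peak[node] = max(peak.get(node, 0), current[node])
    return peak

if __name__ == "__main__":
    trace = [("site-A", 200, +1), ("site-B", 200, +1),  # initial replicas
             ("site-B", 150, +1), ("site-A", 200, -1),  # cache at B, evict at A
             ("site-A", 150, +1)]
    print(peak_capacity(trace))   # {'site-A': 200, 'site-B': 350}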

Collaboration


Dive into Silvia Figueira's collaborations.

Top Co-Authors

Joseph Issa

Santa Clara University

Unyoung Kim

Santa Clara University
