Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adolfo Rodriguez is active.

Publication


Featured research published by Adolfo Rodriguez.


Symposium on Operating Systems Principles | 2003

Bullet: high bandwidth data dissemination using an overlay mesh

Dejan Kostic; Adolfo Rodriguez; Jeannie R. Albrecht; Amin Vahdat

In recent years, overlay networks have become an effective alternative to IP multicast for efficient point-to-multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high-bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel. Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment, revealing up to a factor-of-two bandwidth improvement under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent deliver a high rate of application data to each child. In Bullet, however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.
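
The core Bullet idea, spreading disjoint pieces of the data so that any object is equally likely to appear at any node and letting receivers recover the rest from peers, can be sketched roughly as follows. The function names, random assignment policy, and sequential recovery loop here are illustrative assumptions, not Bullet's actual protocol:

```python
import random

def disseminate(blocks, nodes, fraction=0.5):
    """Push each block to a random subset of nodes, so that any block is
    equally likely to appear at any node (illustrative policy)."""
    holdings = {n: set() for n in nodes}
    per_block = max(1, int(len(nodes) * fraction))
    for b in blocks:
        for n in random.sample(nodes, per_block):
            holdings[n].add(b)
    return holdings

def recover(holdings):
    """Each node locates its missing blocks at peers and retrieves them
    (Bullet does this in parallel; modeled sequentially here)."""
    all_blocks = set().union(*holdings.values())
    for node, have in holdings.items():
        for b in all_blocks - have:
            # any peer already holding the block can serve it
            source = next(n for n, held in holdings.items() if b in held)
            have.add(b)  # fetch the block from source
    return holdings
```

Because every block is assigned to at least one node, the recovery pass leaves every node with the full data set, which is the property the mesh exploits.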


2002 IEEE Open Architectures and Network Programming Proceedings. OPENARCH 2002 (Cat. No.02EX571) | 2002

Opus: an overlay peer utility service

Rebecca Braynard; Dejan Kostic; Adolfo Rodriguez; Jeffrey S. Chase; Amin Vahdat

Today, an increasing number of important network services, such as content distribution, replicated services, and storage systems, are deploying overlays across multiple Internet sites to deliver better performance, reliability, and adaptability. Currently, however, such network services must individually reimplement substantially similar functionality. For example, applications must configure the overlay to meet their specific demands for scale, service quality, and reliability. Further, they must dynamically map data and functions onto network resources, including servers, storage, and network paths, to adapt to changes in load or network conditions. In this paper, we present Opus, a large-scale overlay utility service that provides a common platform and the necessary abstractions for simultaneously hosting multiple distributed applications. In our utility model, wide-area resource mapping is guided by an application's specification of performance and availability targets. Opus then allocates available nodes to meet the requirements of competing applications based on dynamically changing system characteristics. Specifically, we describe issues and initial results associated with: i) developing a general architecture that enables a broad range of applications to push their functionality across the network, ii) constructing overlays that match both the performance and reliability characteristics of individual applications and scale to thousands of participating nodes, iii) using Service Level Agreements to dynamically allocate utility resources among competing applications, and iv) developing decentralized techniques for tracking global system characteristics through the use of hierarchy, aggregation, and approximation.


ACM Transactions on Computer Systems | 2008

High-bandwidth data dissemination for large-scale distributed systems

Dejan Kostic; Alex C. Snoeren; Amin Vahdat; Ryan Braud; Charles Edwin Killian; James W. Anderson; Jeannie R. Albrecht; Adolfo Rodriguez; Erik Vandekieft

This article focuses on the multireceiver data dissemination problem. Initially, IP multicast formed the basis for efficiently supporting such distribution. More recently, overlay networks have emerged to support point-to-multipoint communication. Both techniques focus on constructing trees rooted at the source to distribute content among all interested receivers. We argue, however, that trees have two fundamental limitations for data dissemination. First, since all data comes from a single parent, participants must often continuously probe in search of a parent with an acceptable level of bandwidth. Second, due to packet losses and failures, available bandwidth is monotonically decreasing down the tree. To address these limitations, we present Bullet, a data dissemination mesh that takes advantage of the computational and storage capabilities of end hosts to create a distribution structure where a node receives data in parallel from multiple peers. For the mesh to deliver improved bandwidth and reliability, we need to solve several key problems: (i) disseminating disjoint data over the mesh, (ii) locating missing content, (iii) finding who to peer with (peering strategy), (iv) retrieving data at the right rate from all peers (flow control), and (v) recovering from failures and adapting to dynamically changing network conditions. Additionally, the system should be self-adjusting and should have few user-adjustable parameter settings. We describe our approach to addressing all of these problems in a working, deployed system across the Internet. Bullet outperforms state-of-the-art systems, including BitTorrent, by 25-70% and exhibits strong performance and reliability in a range of deployment settings. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.


International Conference on Distributed Computing Systems | 2004

Scalability in adaptive multi-metric overlays

Adolfo Rodriguez; Dejan Kostic; Amin Vahdat

Increasing application requirements have placed heavy emphasis on building overlay networks to efficiently deliver data to multiple receivers. A key performance challenge is simultaneously achieving adaptivity to changing network conditions and scalability to large numbers of users. In addition, most current algorithms focus on a single performance metric, such as delay or bandwidth, particular to individual application requirements. We introduce a two-fold approach for creating robust, high-performance overlays called adaptive multi-metric overlays (AMMO). First, AMMO uses an adaptive, highly parallel, and metric-independent protocol, TreeMaint, to build and maintain overlay trees. Second, AMMO provides a mechanism for comparing overlay edges along specified application performance goals to guide TreeMaint transformations. We have used AMMO to implement and evaluate a single-metric (bandwidth-optimized) tree similar to Overcast and a two-metric (delay-constrained, cost-optimized) overlay.


International Conference on Communications | 2008

An Autonomic Service Delivery Platform for Service-Oriented Network Environments

Robert D. Callaway; Michael Devetsikiotis; Yannis Viniotis; Adolfo Rodriguez

In this paper, we propose a novel autonomic service delivery platform for service-oriented network environments. The platform enables a self-optimizing infrastructure that balances the goals of maximizing the business value derived from processing service requests and the optimal utilization of IT resources. We believe that our proposal is the first of its kind to integrate several well-established theoretical and practical techniques from networking, microeconomics, and service-oriented computing to form a fully distributed service delivery platform. The principal component of the platform is a utility-based cooperative service routing protocol that disseminates congestion-based prices among intermediaries to enable the dynamic routing of service requests from consumers to providers. We provide the motivation for such a platform and formally present our proposed architecture. We discuss the underlying analytical framework for the service routing protocol, as well as key methodologies that together provide a robust framework for our service delivery platform that is applicable to the next generation of middleware and telecommunications architectures. We discuss issues regarding the fairness of service rate allocations, as well as the use of nonconcave utility functions in the service routing protocol. We also provide numerical results that demonstrate the ability of the platform to provide optimal routing of service requests.
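
The congestion-based pricing idea can be illustrated with a toy model. The price function and greedy routing rule below are assumptions for illustration, not the paper's protocol: each intermediary advertises a price that grows with its utilization, and each request flows to the cheapest one.

```python
def congestion_price(load, capacity):
    """Price that grows without bound as utilization nears capacity
    (an assumed convex form, not the paper's pricing function)."""
    u = load / capacity
    return u / (1.0 - u) if u < 1.0 else float("inf")

def route_request(providers):
    """Send the request to the provider advertising the lowest price,
    then account for the added load."""
    best = min(providers, key=lambda p: congestion_price(p["load"], p["capacity"]))
    best["load"] += 1
    return best["name"]
```

Routing against rising prices naturally spreads requests away from congested intermediaries, which is the equilibrating behavior the protocol relies on.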


International Workshop on Peer-to-Peer Systems | 2002

Self-Organizing Subsets: From Each According to His Abilities, to Each According to His Needs

Amin Vahdat; Jeffrey S. Chase; Rebecca Braynard; Dejan Kostic; Patrick Reynolds; Adolfo Rodriguez

The key principles behind current peer-to-peer research include fully distributing service functionality among all nodes participating in the system and routing individual requests based on a small amount of locally maintained state. The goals extend much further than just improving raw system performance: such systems must survive massive concurrent failures, denial of service attacks, etc. These efforts are uncovering fundamental issues in the design and deployment of distributed services. However, the work ignores a number of practical issues with the deployment of general peer-to-peer systems, including i) the overhead of maintaining consistency among peers replicating mutable data and ii) the resource waste incurred by the replication necessary to counteract the loss in locality that results from random content distribution. We argue that the key challenge in peer-to-peer research is not to distribute service functions among all participants, but rather to distribute functions to meet target levels of availability, survivability, and performance. In many cases, only a subset of participating hosts should take on server roles. The benefit of peer-to-peer architectures then comes from massive diversity rather than massive decentralization: with high probability, there is always some node available to provide the required functionality should the need arise.
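
The closing claim, that massive diversity makes some live node almost always available, is an independence argument. As a back-of-the-envelope sketch (the availability figures are made up, and independence of failures is assumed):

```python
import math

def replicas_needed(node_availability, target):
    """Smallest k with 1 - (1 - p)**k >= target: how many independently
    failing hosts must take on the server role so that at least one is up
    with the target probability."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - node_availability))
```

For example, with nodes that are each up only half the time, seven replicas already push the chance that at least one is available above 99%, far fewer than "all participants."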


Global Communications Conference | 2006

QRP01-2: Challenges in Service-Oriented Networking

Robert D. Callaway; Adolfo Rodriguez; Michael Devetsikiotis; Gennaro A. Cuomo

We believe that application-aware networks will be a core component in the development and deployment of emerging network services. However, previous attempts at enabling application-awareness in the network have failed due to issues with security, resource allocation, and cost of deployment. The emergence of the Extensible Markup Language (XML), an open standard that enables data interoperability, along with advances in hardware, software, and networking technologies, serves as the catalyst for the development of service-oriented networking (SON). SON enables network components to become application-aware, so that they are able to understand data encoded in XML and act upon that data intelligently to make routing decisions, enforce QoS or security policies, or transform the data into an alternate representation. This paper describes the motivation behind service-oriented networking, the potential benefits of introducing application-aware network devices into service-oriented architectures, and discusses research challenges in the development of SON-enabled network appliances.


SPE/DOE Symposium on Improved Oil Recovery | 2006

Upscaling of Hydraulic Properties of Fractured Porous Media: Full Permeability Tensor and Continuum Scale Simulations

Adolfo Rodriguez; Hector Klie; Shuyu Sun; Xiuli Gai; Mary F. Wheeler; Horacio Florez

The simulation of flow and transport phenomena in fractured media is a challenging problem. Despite existing advances in computer capabilities, the fact that fractures can occur over a wide range of scales within porous media compromises the development of detailed flow simulations. Current discrete approaches are limited to systems that contain a small number of fractures. Alternatively, continuum approaches require the input of effective parameters that must be obtained as accurately as possible, based on the actual fracture network or its statistical description. In this work, a novel method based on the utilization of the Delta-Y transformation is introduced for obtaining the effective permeability tensor of a 2D fracture network. This approach entails a detailed description of the fracture network, where each fracture is represented as a segment with a given length, orientation and permeability value. A fine rectangular grid is then superimposed on the network, and the fractures are discretized so that each one of them is represented as a connected sequence of bonds on the grid with a hydraulic conductivity proportional to the ratio of effective permeability over fracture discretization length. The next step consists of the selection of a coarser rectangular grid on which the continuum simulation is performed. In order to obtain the permeability tensor for each one of the resulting blocks, the Delta-Y method is used. Finally, the resulting continuum permeability tensor is used to simulate the steady-state flow problem, and the results are compared with the actual flow pattern yielded by the fracture network simulation. The results obtained with both methods follow a similar flux pattern across the reservoir system. This shows that the proposed approach allows for efficient upscaling of hydraulic properties while honoring both the underlying physics and the details of fracture network connectivity.
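
The Delta-Y transformation the method builds on is the classical network-reduction identity; for three hydraulic resistances forming a triangle between nodes A, B, and C it reads as below. This is a generic textbook sketch of the identity, not the paper's implementation:

```python
def delta_to_star(r_ab, r_bc, r_ca):
    """Classical Delta-Y reduction: replace a triangle of resistances
    (hydraulic or electrical) with an equivalent three-legged star
    around a new center node."""
    s = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / s  # leg from A to the star center
    r_b = r_ab * r_bc / s  # leg from B to the star center
    r_c = r_bc * r_ca / s  # leg from C to the star center
    return r_a, r_b, r_c
```

Applying such reductions repeatedly collapses the fine bond grid into one effective resistance (and hence permeability) per coarse block, since the transformation preserves the two-terminal resistance between every pair of external nodes.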


Global Communications Conference | 2009

Evaluation of Multi-Point to Single-Point Service Traffic Shaping in an Enterprise Network

Keerthana Boloor; Marcelo Dias de Amorim; Bob Callaway; Adolfo Rodriguez; Yannis Viniotis

Service providers within an enterprise network are often governed by client service contracts (CSC) that specify, among other constraints, the rate at which a particular service instance may be accessed. The service can be accessed via multiple points (typically middleware appliances) in a proxy tier configuration. The CSC, and thus the rate specified, has to be collectively respected by all the middleware appliances. The appliances locally shape the service requests to respect the global contract. We investigate the case where the CSC limits the rate to a service to X requests with an enforcement/observation interval of T seconds across all the middleware appliances. In this paper, we extend, implement, and investigate the Credit-based Algorithm for Service Traffic Shaping (CASTS) in a production-level enterprise network setting. CASTS is a decentralized algorithm for service traffic shaping in middleware appliances. We show that CASTS respects the CSC and improves the responsiveness of the system to the variations of the input rate and leads to larger service capacity when compared to the traditional static allocation approach.
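
As a rough illustration of the credit-based idea (the class names and grant policy below are invented for illustration; CASTS itself is decentralized, whereas this toy uses a central pool), the contract of X requests per interval can be handed out in credit chunks that appliances spend locally:

```python
class CreditManager:
    """Toy central pool for the contract: X requests per enforcement
    interval T, handed out in fixed-size credit chunks."""
    def __init__(self, x_per_interval, chunk):
        self.remaining = x_per_interval
        self.chunk = chunk

    def grant(self):
        granted = min(self.chunk, self.remaining)
        self.remaining -= granted
        return granted

class Appliance:
    """A middleware appliance shapes traffic locally by spending credits."""
    def __init__(self, manager):
        self.manager = manager
        self.credits = 0
        self.admitted = 0

    def admit(self):
        if self.credits == 0:
            self.credits = self.manager.grant()
        if self.credits > 0:
            self.credits -= 1
            self.admitted += 1
            return True
        return False  # contract exhausted for this interval: request is shaped
```

However the requests are spread across appliances, no more than X of them can be admitted per interval, which is the global-contract property the paper evaluates.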


ECMOR X - 10th European Conference on the Mathematics of Oil Recovery | 2006

A Learning Computational Engine for History Matching

Rafael Banchs; Hector Klie; Adolfo Rodriguez; Sunil G. Thomas; Mary F. Wheeler

The main objective of the present work is to propose and evaluate a learning computational engine for history matching, which is based on a hybrid multilevel search methodology. According to this methodology, the parameter space is globally explored and sampled by the simultaneous perturbation stochastic approximation (SPSA) algorithm at a given resolution level. This estimation is followed by further analysis using a neural learning engine that evaluates the sensitivity of the objective function to variations of each individual model parameter in the vicinity of the promising optimal solution explored by the SPSA algorithm. The proposed methodology is used to numerically determine how additional sources of information may aid in reducing the ill-posedness associated with permeability estimation via conventional history matching procedures. The additional sources of information considered in this work are pressures, concentrations, and fluid velocities known reliably at given locations, which in practical scenarios might be estimated from high-resolution seismic surveys or directly obtained as in situ measurements provided by sensors. This additional information is incorporated, along with production data, into a multi-objective function that measures the mismatch between the observed and the predicted data. The preliminary results presented in this work shed light on future research avenues for optimizing the use of additional sources of information such as seismic or sensor data in history matching procedures.
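
SPSA itself estimates the gradient from only two objective evaluations per iteration, regardless of the number of parameters, by perturbing all parameters simultaneously with random signs. A generic textbook sketch follows; the gain constants and schedule exponents are standard defaults, not the paper's tuning:

```python
import random

def spsa_minimize(f, theta, a=0.1, c=0.1, iters=200):
    """Bare-bones SPSA: each iteration perturbs every parameter at once
    with a random +/-1 sign and uses two evaluations of f to form a
    stochastic gradient estimate."""
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602   # decaying step size (standard gain schedule)
        ck = c / k ** 0.101   # decaying perturbation size
        delta = [random.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2.0 * ck)  # directional difference
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta
```

The two-evaluation cost per step is what makes SPSA attractive for history matching, where each objective evaluation requires a full reservoir simulation.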

Collaboration


Dive into Adolfo Rodriguez's collaborations.

Top Co-Authors

Dejan Kostic
Royal Institute of Technology

Mary F. Wheeler
University of Texas at Austin

Michael Devetsikiotis
North Carolina State University