Diana J. Arroyo
IBM
Publication
Featured research published by Diana J. Arroyo.
IBM Journal of Research and Development | 2014
William C. Arnold; Diana J. Arroyo; Wolfgang Segmuller; Mike Spreitzer; Malgorzata Steinder; Asser N. Tantawi
The software defined environment (SDE) provides a powerful programmable interface to a cloud infrastructure through an abstraction of compute, network, and storage resources. A workload refers to the application to be deployed in such an infrastructure. To take advantage of the SDE interface, the workload is described using a declarative workload definition language and is then deployed in the infrastructure through an automated workload orchestration and optimization layer. This paper describes the architecture and algorithms that make up this layer. Given a definition of the workload, including the virtual components of the application and their resource needs, as well as other meta-information relating to factors such as performance, availability, and privacy, the function of the workload orchestration and optimization layer is to map virtual resources to physical resources and realize such a mapping in the infrastructure. This mapping, known as placement, is optimized so that the infrastructure is efficiently utilized, and the workload requirements are satisfied. We present the overall architecture of the workload orchestration and optimization runtime. We focus on the workload placement problem and describe our optimization framework. Then, we consider a real application, IBM Connections, as a use-case to demonstrate the orchestration and optimization functionalities.
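The placement step described above can be illustrated with a minimal sketch, assuming a hypothetical first-fit strategy over CPU and memory demands; the optimization framework in the paper is far more general and also accounts for meta-information such as performance, availability, and privacy.

```python
# Minimal first-fit placement sketch: map virtual entities (VMs) to physical
# hosts so that CPU and memory demands fit within remaining host capacities.
# Hypothetical illustration only, not the optimization algorithm from the paper.

def place(vms, hosts):
    """vms: list of dicts with 'name', 'cpu', 'mem' (demands).
    hosts: list of dicts with 'name', 'cpu', 'mem' (remaining capacity).
    Returns a dict mapping VM name -> host name, or raises if no host fits."""
    placement = {}
    for vm in vms:
        for host in hosts:
            if host["cpu"] >= vm["cpu"] and host["mem"] >= vm["mem"]:
                host["cpu"] -= vm["cpu"]          # reserve capacity on the chosen host
                host["mem"] -= vm["mem"]
                placement[vm["name"]] = host["name"]
                break
        else:
            raise RuntimeError(f"no feasible host for {vm['name']}")
    return placement

vms = [{"name": "web", "cpu": 2, "mem": 4}, {"name": "db", "cpu": 4, "mem": 8}]
hosts = [{"name": "pm1", "cpu": 4, "mem": 8}, {"name": "pm2", "cpu": 8, "mem": 16}]
print(place(vms, hosts))   # e.g. {'web': 'pm1', 'db': 'pm2'}
```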
Winter Simulation Conference | 2011
Zohar Feldman; Michael Masin; Asser N. Tantawi; Diana J. Arroyo; Malgorzata Steinder
In this work, we optimize the admission policy for application deployment requests submitted to data centers. Data centers typically comprise many physical servers, but their resources are limited, and demand can occasionally exceed what the system can handle, resulting in lost opportunities. Since different requests typically have different revenue margins and resource requirements, the decision whether to admit a deployment, made at the time of submission, is not trivial. We model this problem using the Markov Decision Process (MDP) framework and draw upon the Approximate Dynamic Programming (ADP) paradigm to devise optimized admission policies. We resort to approximate methods because typical data centers are too large to solve by standard methods. We show that our algorithms achieve substantial revenue improvements and scale to large data centers.
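A toy version of the underlying admission-control MDP can be solved exactly by backward induction, as in the sketch below. All parameters (two request classes, capacities, revenues, arrival probabilities) are invented for illustration; the paper relies on ADP precisely because realistic data centers have state spaces far too large for this exact computation.

```python
# Toy finite-horizon admission-control MDP solved by backward induction.
# Hypothetical model: at most one request arrives per epoch; admitting it
# consumes capacity and earns revenue. We compute the optimal admit/reject
# policy's expected revenue.

CAPACITY = 10                     # total resource units in the "data center"
HORIZON = 20                      # number of decision epochs
CLASSES = [                       # (arrival prob, capacity demand, revenue)
    (0.5, 1, 1.0),                # small, low-revenue request
    (0.3, 4, 5.0),                # large, high-revenue request
]

# V[t][free] = maximal expected future revenue with `free` units left at epoch t
V = [[0.0] * (CAPACITY + 1) for _ in range(HORIZON + 1)]
for t in range(HORIZON - 1, -1, -1):
    for free in range(CAPACITY + 1):
        p_none = 1.0 - sum(p for p, _, _ in CLASSES)
        value = p_none * V[t + 1][free]           # no arrival this epoch
        for p, demand, revenue in CLASSES:
            reject = V[t + 1][free]
            admit = (revenue + V[t + 1][free - demand]) if free >= demand else float("-inf")
            value += p * max(reject, admit)       # optimal admit/reject decision
        V[t][free] = value

print(f"expected revenue starting from an empty system: {V[0][CAPACITY]:.2f}")
```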
Network Operations and Management Symposium | 2012
Claris Castillo; Asser N. Tantawi; Diana J. Arroyo; Malgorzata Steinder
In this work we are concerned with the cost of replicating intermediate data for dataflows in cloud environments. This cost is attributed to the extra resources required to create and maintain the additional replicas for a given data set. Existing data-analytic platforms such as Hadoop provide fault-tolerance guarantees by relying on aggressive replication of intermediate data. We argue that the decision to replicate, along with the number of replicas, should be a function of the resource usage and the utility of the data, in order to minimize the cost of reliability. Furthermore, the utility of the data is determined by the structure of the dataflow and the reliability of the system. We propose a replication technique that takes into account resource usage, system reliability, and the characteristics of the dataflow to decide what data to replicate and when to replicate it. The replication decision is obtained by solving a constrained integer programming problem given information about the dataflow up to a decision point. In addition, we built CARDIO, a working prototype of our technique, and show through experimental evaluation on a real testbed that it finds an optimal solution.
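The flavor of the constrained integer program can be conveyed with a brute-force sketch over binary replicate-or-not decisions. The cost model below (storage cost per replica, loss probability, recomputation time, reliability budget) is hypothetical; CARDIO's formulation additionally incorporates the dataflow structure and system reliability.

```python
# Tiny brute-force replicate-or-not decision for intermediate datasets.
# Hypothetical model: replicating dataset i costs extra storage cost[i]; if it
# is not replicated, a failure (probability p_fail) forces recomputation taking
# recompute[i]. Minimize storage cost subject to a bound on expected
# recomputation time.

from itertools import product

cost = [2.0, 5.0, 1.0]            # extra storage cost of keeping a replica
recompute = [30.0, 8.0, 20.0]     # time to regenerate the dataset if lost
p_fail = 0.05                     # probability the dataset is lost before reuse
budget = 2.0                      # allowed expected recomputation time

best = None
for decision in product([0, 1], repeat=len(cost)):        # 1 = replicate
    storage = sum(c for c, d in zip(cost, decision) if d)
    exp_recompute = sum(p_fail * r for r, d in zip(recompute, decision) if not d)
    if exp_recompute <= budget and (best is None or storage < best[0]):
        best = (storage, decision)

print(best)   # cheapest replication plan that meets the reliability budget
```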
Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2014
Diana J. Arroyo; Iqbal Mohomed
We demonstrate a prototype system called COLD, under development at IBM Research, which provides optimized deployment of workloads in the cloud. A workload refers to an application, consisting of virtual entities (e.g., VMs, volumes), to be deployed in a cloud infrastructure, consisting of physical entities (e.g., PMs, storage). The resource requirements of the virtual entities, as well as metadata describing relations among virtual entities (e.g., location proximity requirements), are described using a declarative workload definition language, namely a HOT template. COLD provides a clean separation between the underlying mechanisms offered by a given cloud environment for lifecycle management of virtual resources and the policies that influence optimal placement. The current implementation of COLD is tied to OpenStack, though we plan to make it cloud-agnostic in the future. It is designed to be fault-tolerant. In a software defined environment of an OpenStack cloud with various hypervisors, we show COLD placing various Heat stacks that contain specific policies such as rack-level anti-collocation.
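A rack-level anti-collocation policy of the kind mentioned above can be checked with the short sketch below. In practice such a policy would be declared in a Heat/HOT YAML template; the group membership, host names, and rack map here are invented purely for illustration of the constraint a placement engine like COLD must enforce.

```python
# Hypothetical rack-level anti-collocation check: virtual machines in the same
# anti-collocation group may not be placed on hosts that share a rack.

def violates_anti_collocation(placement, host_rack, group):
    """placement: VM -> host, host_rack: host -> rack, group: set of VM names."""
    racks_used = [host_rack[placement[vm]] for vm in group]
    return len(racks_used) != len(set(racks_used))   # True if two VMs share a rack

host_rack = {"pm1": "rack-a", "pm2": "rack-a", "pm3": "rack-b"}
placement = {"app-1": "pm1", "app-2": "pm3"}
print(violates_anti_collocation(placement, host_rack, {"app-1", "app-2"}))  # False
```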
Archive | 2009
Diana J. Arroyo; Steven D. Clay; Malgorzata Steinder; Ian Whalley; Brian L. White Eagle
Archive | 2006
Diana J. Arroyo; George Robert Blakley; Damir A. Jamsek; Sridhar R. Muppidi; Kimberly D. Simon; Ronald B. Williams
Archive | 2012
Diana J. Arroyo; Claris Castillo; James E. Hanson; Wolfgang Segmuller; Michael J. Spreitzer; Malgorzata Steinder; Asser N. Tantawi; Ian Whalley
Archive | 2006
Diana J. Arroyo; George Robert Blakley; Damir A. Jamsek; Sridhar R. Muppidi; Kimberly D. Simon; Ronald B. Williams
Archive | 2006
Diana J. Arroyo; George Robert Blakley; Damir A. Jamsek; Sridhar R. Muppidi; Kimberly D. Simon; Ronald B. Williams
Archive | 2005
Diana J. Arroyo; George Robert Blakley; Damir A. Jamsek; Sridhar R. Muppidi; Kimberly D. Simon; Ronald B. Williams