
Publication


Featured research published by Cipriano A. Santos.


Integrated Network Management | 2005

Quartermaster - a resource utility system

Sharad Singhal; Martin F. Arlitt; Dirk Beyer; Sven Graupner; Vijay Machiraju; Jim Pruyne; Jerry Rolia; Akhil Sahai; Cipriano A. Santos; Julie Ward; Xiaoyun Zhu

Utility computing is envisioned as the future of enterprise IT environments. Achieving utility computing is a daunting task, because enterprise users have diverse and complex needs. In this paper we describe Quartermaster, an integrated set of tools that addresses some of these needs. Quartermaster supports the entire lifecycle of computing tasks, including design, deployment, operation, and decommissioning of each task. Although individual components of this lifecycle have been addressed in earlier work, Quartermaster integrates them in a unified framework using model-based automation. All tools within Quartermaster are integrated using models based on the Common Information Model (CIM), an industry-standard model from the Distributed Management Task Force (DMTF). The paper discusses the Quartermaster implementation and describes two case studies using it.


International Journal of Network Management | 2008

Automated application component placement in data centers using mathematical programming

Xiaoyun Zhu; Cipriano A. Santos; Dirk Beyer; Julie Ward; Sharad Singhal

In this article we address the application component placement (ACP) problem for a data center. The problem is defined as follows: given the topology of a network consisting of switches, servers, and storage devices with varying capabilities, and given the specification of a component-based distributed application, decide which physical server should be assigned to each application component, such that the application's processing, communication, and storage requirements are satisfied without creating bottlenecks in the infrastructure, and scarce resources are used efficiently. We explain how the ACP problem differs from traditional task assignment in distributed systems and from existing grid scheduling problems. We describe our approach of formalizing the problem in a mathematical optimization framework and formulating it as a mixed integer program (MIP). We then present our ACP solver, which uses GAMS and CPLEX to automate the decision-making process. The solver was tested numerically on a number of examples, ranging from a 125-server real data center to a set of hypothetical data centers of increasing size. In all cases the ACP solver found an optimal solution within a reasonably short time. In a numerical simulation comparing our solver to a random selection algorithm, our solver used scarce network resources much more efficiently and allowed more applications to be placed in the same infrastructure.
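
The abstract describes the MIP formulation only at a high level, so the following toy model is merely a sketch of the general technique: a component-to-server assignment MIP with a capacity constraint, written with the open-source PuLP modeler rather than the GAMS/CPLEX toolchain used in the paper. All component names, server names, and capacities are invented, and the network and storage dimensions of the real formulation are omitted.

```python
# Hypothetical sketch of a component-placement MIP; not the paper's formulation.
import pulp

components = {"web": 2, "app": 4, "db": 6}   # CPU cores each component needs (invented)
servers = {"s1": 8, "s2": 8, "s3": 4}        # CPU cores each server offers (invented)

prob = pulp.LpProblem("acp_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(components), list(servers)), cat="Binary")
used = pulp.LpVariable.dicts("used", list(servers), cat="Binary")

# Each component is placed on exactly one server.
for c in components:
    prob += pulp.lpSum(x[c][s] for s in servers) == 1

# Server CPU capacity is respected, and a server counts as "used" if it hosts anything.
for s in servers:
    prob += pulp.lpSum(components[c] * x[c][s] for c in components) <= servers[s]
    for c in components:
        prob += x[c][s] <= used[s]

# Stand-in objective for "use scarce resources efficiently": minimize servers used.
prob += pulp.lpSum(used[s] for s in servers)
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for c in components:
    for s in servers:
        if pulp.value(x[c][s]) > 0.5:
            print(f"{c} -> {s}")
```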


ACM Symposium on Parallel Algorithms and Architectures | 2005

Value-maximizing deadline scheduling and its application to animation rendering

Eric Anderson; Dirk Beyer; Kamalika Chaudhuri; Terence Kelly; Norman Salazar; Cipriano A. Santos; Ram Swaminathan; Robert Endre Tarjan; Janet L. Wiener; Yunhong Zhou

We describe a new class of utility-maximization scheduling problem with precedence constraints, the disconnected staged scheduling problem (DSSP). DSSP is a nonpreemptive multiprocessor deadline scheduling problem that arises in several commercially important applications, including animation rendering, protein analysis, and seismic signal processing. DSSP differs from most previously studied deadline scheduling problems because the graph of precedence constraints among tasks within jobs is disconnected, with one component per job. Another difference is that in practice we often lack accurate estimates of task execution times, so purely offline solutions are not possible. However, we do know the set of jobs and their precedence constraints up front, and therefore some offline planning is possible. Our solution decomposes DSSP into an offline job selection phase followed by an online task dispatching phase. We model the former as a knapsack problem and explore several solutions to it, describe a new dispatching algorithm for the latter, and compare both with existing methods. Our theoretical results show that while DSSP is NP-hard and inapproximable in general, our two-phase scheduling method guarantees a good performance bound for many special cases. Our empirical results include an evaluation of scheduling algorithms on a real animation-rendering workload; we present a characterization of this workload in a companion paper. The workload records eight weeks of activity on a 1,000-CPU cluster used to render portions of the full-length animated feature film Shrek 2 in 2004. We show that our improved scheduling algorithms can substantially increase the aggregate value of completed jobs compared to existing practices. Our new task dispatching algorithm, LCPF, performs well by several metrics, including job completion times and the aggregate value of completed jobs.
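
The abstract characterizes the offline job-selection phase as a knapsack problem; the sketch below shows that phase only, as a plain 0/1 knapsack over invented job values and estimated CPU-hour demands. The online dispatching phase and the LCPF algorithm are not shown, and the paper explores several knapsack solutions rather than this single dynamic program.

```python
# Offline job selection sketched as a 0/1 knapsack: pick the subset of jobs whose
# estimated CPU demand fits the night's capacity and whose total value is maximal.
# Values and demands below are invented; the dispatching phase (LCPF) is not shown.
def select_jobs(jobs, capacity):
    """jobs: list of (name, value, estimated_cpu_hours); capacity: total CPU-hours."""
    best = [[0] * (capacity + 1) for _ in range(len(jobs) + 1)]
    for i, (_, value, demand) in enumerate(jobs, start=1):
        for cap in range(capacity + 1):
            best[i][cap] = best[i - 1][cap]
            if demand <= cap:
                best[i][cap] = max(best[i][cap], best[i - 1][cap - demand] + value)
    # Recover the chosen set by walking the table backwards.
    chosen, cap = [], capacity
    for i in range(len(jobs), 0, -1):
        if best[i][cap] != best[i - 1][cap]:
            name, _, demand = jobs[i - 1]
            chosen.append(name)
            cap -= demand
    return best[len(jobs)][capacity], chosen

value, picked = select_jobs([("clip_a", 10, 300), ("clip_b", 7, 250), ("clip_c", 4, 200)], 500)
print(value, picked)
```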


Distributed Systems Operations and Management | 2004

Policy-based resource assignment in utility computing environments

Cipriano A. Santos; Akhil Sahai; Xiaoyun Zhu; Dirk Beyer; Vijay Machiraju; Sharad Singhal

In utility computing environments, multiple users and applications are served from the same resource pool. To meet service-level objectives and maintain high utilization of the resource pool, it is desirable that resources be assigned in a manner consistent with operator policies, while ensuring that shared resources (e.g., networks) within the pool do not become bottlenecks. This paper addresses how operator policies (preferences) can be included in the resource assignment problem as soft constraints. We provide the problem formulation and use two examples of soft constraints to illustrate the method. Experimental results demonstrate the impact of policies on the solution.
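
The abstract does not reproduce the formulation, so the fragment below only illustrates the general mechanism of a soft constraint: an operator preference (here, a hypothetical "keep the db component off server s3") enters the model through a penalty variable in the objective rather than as a hard restriction, so violating it costs something but never makes the model infeasible. Names and the penalty weight are invented, and the surrounding assignment model is reduced to a single placement constraint.

```python
# Hypothetical soft-constraint sketch; not the paper's formulation.
import pulp

prob = pulp.LpProblem("policy_sketch", pulp.LpMinimize)
servers = ["s1", "s2", "s3"]
x = pulp.LpVariable.dicts("x", (["db"], servers), cat="Binary")
violate = pulp.LpVariable("violate_db_off_s3", lowBound=0)

prob += pulp.lpSum(x["db"][s] for s in servers) == 1   # hard: db must be placed somewhere
prob += x["db"]["s3"] <= violate                       # soft: placing db on s3 forces a violation
prob += 100 * violate                                  # the policy's penalty is the objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: pulp.value(x["db"][s]) for s in servers}, pulp.value(violate))
```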


Workshop on Software and Performance | 2002

Web transaction analysis and optimization (TAO)

Pankaj K. Garg; Ming Hao; Cipriano A. Santos; Hsiu-Khuern Tang; Alex Zhang

In the TAO project we develop metrics, models, and infrastructure to effectively manage the performance of Web applications. We use WebMon, a novel instrumentation tool, to obtain profile data for web interactions from end-user and system-component perspectives. Our analysis techniques help determine important classes of web users and their transactions. The analysis is embedded in visualization and optimization modules, enabling efficient reporting for system and business administrators, and automated resource scheduling and planning. In this paper we present an overview of TAO and highlight some of its novel aspects, e.g., the use of pixel-bar charts, web request classification, and integrated demand and capacity planning.


Interfaces | 2013

HP Enterprise Services Uses Optimization for Resource Planning

Cipriano A. Santos; Tere Gonzalez; Haitao Li; Kay-Yut Chen; Dirk Beyer; Sundaresh Biligi; Qi Feng; Ravindra Kumar; Shelen Jain; Ranga Ramanujam; Alex Zhang

The main responsibility of resource and delivery managers at Hewlett-Packard (HP) Enterprise Services (HPES) is matching resources (skilled professionals) with the jobs that project opportunities require. The previous Solution Opportunity Approval and Review (SOAR) process at HPES addressed uncertainty by producing decentralized project staffing decisions. This often led to many last-minute, subjective, and sometimes costly resource allocation decisions. Based on our research, we developed a decision support tool for resource planning (RP) to enhance the SOAR process. It optimizes the matching of professionals who have diverse delivery roles and skills to jobs and projects across geographical locations, while explicitly accounting for both demand and supply uncertainties. It also embeds capabilities for managers to incorporate tacit human knowledge and judgment into the decision-making process. Since its 2009 deployment in the Best Shore (Bangalore) operations of HPES, the RP tool's benefits have included reduced service delivery costs, increased workforce utilization, and improved profitability.
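
The abstract stays at the business level, so the snippet below is only a toy illustration of the underlying matching idea: professionals are assigned to project roles so that a total mismatch cost is minimized, here as a plain bipartite assignment on an invented cost matrix solved with SciPy. The real RP tool optimizes over multiple skills, delivery roles, and geographies and explicitly models demand and supply uncertainty, none of which appears here.

```python
# Toy bipartite matching on an invented cost matrix; not the RP tool's model.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are professionals, columns are project roles; entries are mismatch costs.
cost = np.array([
    [1, 4, 6],
    [5, 2, 3],
    [4, 3, 1],
])
rows, cols = linear_sum_assignment(cost)
for p, r in zip(rows, cols):
    print(f"professional {p} -> role {r} (cost {cost[p, r]})")
```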


Soft Computing | 2018

Solving binary cutting stock with matheuristics using particle swarm optimization and simulated annealing

Ivan Adrian Lopez Sanchez; Jaime Mora Vargas; Cipriano A. Santos; Miguel González Mendoza; Cesar J. Montiel Moctezuma

In the last decade, researchers have focused on improving existing methodologies through hybrid algorithms, which combine a metaheuristic with another metaheuristic or with an exact method to solve combinatorial optimization problems as well as possible. This work presents a benchmark of different methodologies for solving the binary cutting stock problem within a column generation framework. The framework is divided into a master problem and a subproblem: the master problem is solved with classical integer linear programming, and the subproblem is solved with metaheuristic algorithms (genetic algorithm, simulated annealing, and particle swarm optimization). The benchmark analysis compares the results of the hybrid metaheuristics with an exact methodology.
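
The abstract describes the column generation framework without giving details, so the sketch below only mirrors its structure: a restricted master LP (solved here with PuLP) alternates with a pricing subproblem that searches for a cutting pattern with negative reduced cost. In the paper the pricing step is handled by metaheuristics (GA, simulated annealing, PSO); here a simple greedy knapsack heuristic stands in for them, and the roll length, item sizes, and demands are invented.

```python
# Column-generation skeleton for cutting stock; the greedy pricing step is a
# stand-in for the paper's metaheuristics, and all data are invented.
import pulp

roll_len = 100
sizes = [45, 36, 31, 14]
demands = [97, 610, 395, 211]

# Start with one trivial pattern per item size (cut as many of that size as fit).
patterns = [[roll_len // sizes[j] if i == j else 0 for i in range(len(sizes))]
            for j in range(len(sizes))]

def solve_master(patterns):
    """Restricted master LP: cover all demands using known patterns, minimizing rolls."""
    prob = pulp.LpProblem("master", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{p}", lowBound=0) for p in range(len(patterns))]
    prob += pulp.lpSum(x)
    cons = []
    for i, d in enumerate(demands):
        c = pulp.lpSum(patterns[p][i] * x[p] for p in range(len(patterns))) >= d
        prob += c
        cons.append(c)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [c.pi for c in cons], pulp.value(prob.objective)

def price(duals):
    """Greedy stand-in for the metaheuristic pricing step: build one new pattern."""
    order = sorted(range(len(sizes)), key=lambda i: -duals[i] / sizes[i])
    pattern, remaining = [0] * len(sizes), roll_len
    for i in order:
        pattern[i] = remaining // sizes[i]
        remaining -= pattern[i] * sizes[i]
    reduced_cost = 1 - sum(duals[i] * pattern[i] for i in range(len(sizes)))
    return pattern, reduced_cost

for _ in range(20):
    duals, obj = solve_master(patterns)
    pattern, reduced_cost = price(duals)
    if reduced_cost >= -1e-6 or pattern in patterns:
        break
    patterns.append(pattern)

print(f"LP bound: {obj:.2f} rolls using {len(patterns)} patterns")
```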


Mexican International Conference on Artificial Intelligence | 2014

Solving Binary Cutting Stock with Matheuristics

Ivan Adrian Lopez Sanchez; Jaime Mora Vargas; Cipriano A. Santos; Miguel González Mendoza

Many combinatorial optimization (CO) problems are classified as NP-complete. Solving CO problems efficiently is important because several industrial, governmental, and scientific problems can be stated in this form. This work presents a benchmark of three methodologies for solving the Binary Cutting Stock (BCS) problem: an exact methodology applying Column Generation (CG), a Genetic Algorithm (GA), and a hybrid of exact methods and genetic algorithms within a column generation framework, which we call a Matheuristic (MA). The benchmark analysis aims to show that the matheuristic's solution quality is as good as that obtained by the exact methodology. Details of the implementation and computational performance are discussed.


Measurement and Modeling of Computer Systems | 2005

Deadline scheduling for animation rendering

Eric Anderson; Dirk Beyer; Kamalika Chaudhuri; Terence Kelly; Norman Salazar; Cipriano A. Santos; Ram Swaminathan; Robert Endre Tarjan; Janet L. Wiener; Yunhong Zhou

We describe a new class of scheduling problem with precedence constraints, the disconnected staged scheduling problem (DSSP). DSSP is a nonpreemptive multiprocessor deadline scheduling problem; we seek to maximize the aggregate value of jobs that complete by a specified deadline. It arises in many commercially important domains, including bioinformatics and seismic signal processing. Our interest in DSSP began with the practical problem of scheduling computer animation rendering jobs. Each job represents a brief film clip and consists of several stages that must be processed in order (e.g., physical simulation, model baking, frame rendering, and clip assembly). Each stage in turn consists of computational tasks that may be run in parallel; all tasks in a stage must finish before any task in the next stage can start. A job completes if and only if all of its tasks complete. Precedence constraints exist among tasks within a job, but not among tasks in different jobs. Jobs run overnight and yield value only if they complete before the artists who submitted them return the following morning. Demand frequently exceeds available CPU capacity, making it impossible to complete all submitted jobs by the deadline. The set of jobs is known in advance, but their computational demands (e.g., the run times of tasks) are not precisely known. Existing scheduling practices rely on priority schedulers, which are not well suited to DSSP because ordinal priorities cannot adequately express the value of jobs. Furthermore, priority schedulers make job selection decisions as byproducts of task sequencing decisions. Our approach is to assign to jobs completion rewards whose sums and ratios are meaningful, and to perform job selection and task sequencing separately. We present both theoretical analysis and empirical evaluation of our DSSP solution. Our empirical results are based on an eight-week trace of 2,388 jobs collected in a 1,000-CPU production system that rendered part of the film Shrek 2 in 2004. We show that our two-phase method improves aggregate reward and achieves near-optimal performance under …


File and Storage Technologies | 2004

Designing for Disasters

Kimberly Keeton; Cipriano A. Santos; Dirk Beyer; Jeffrey S. Chase; John Wilkes
