Luis Diego Briceno
Colorado State University
Publications
Featured research published by Luis Diego Briceno.
International Parallel and Distributed Processing Symposium | 2007
Jay Smith; Luis Diego Briceno; Anthony A. Maciejewski; Howard Jay Siegel; Timothy Renner; Vladimir Shestak; Joshua Ladd; Andrew M. Sutton; David L. Janovy; Sudha Govindasamy; Amin Alqudah; Rinku Dewri; Puneet Prakash
Heterogeneous distributed computing systems often must operate in an environment where system parameters are subject to uncertainty. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. We present a methodology for quantifying the robustness of resource allocations in a dynamic environment where task execution times are stochastic. The methodology is evaluated by measuring the robustness of three different resource allocation heuristics within the context of a stochastic dynamic environment. A Bayesian regression model is fit to the combined results of the three heuristics to demonstrate the correlation between the stochastic robustness metric and the presented performance metric. The correlation results demonstrate the significant potential of the stochastic robustness metric to predict the relative performance of the three heuristics given a common objective function.
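The abstract does not spell out how the stochastic robustness metric is computed; the sketch below shows one common formulation under stated assumptions: the probability that every machine's completion time stays below a makespan limit when task execution times are modeled as independent normal random variables. The function names, the normal model, and the numbers are illustrative, not taken from the paper.

```python
import math

def machine_completion_stats(allocation, exec_stats):
    """Per-machine mean and variance of completion time, assuming independent,
    normally distributed task execution times (an illustrative model)."""
    mean, var = {}, {}
    for task, machine in allocation.items():
        mu, sigma = exec_stats[task]
        mean[machine] = mean.get(machine, 0.0) + mu
        var[machine] = var.get(machine, 0.0) + sigma ** 2
    return mean, var

def stochastic_robustness(allocation, exec_stats, makespan_limit):
    """P(every machine finishes by makespan_limit): the product of per-machine
    normal CDF values, assuming the machines are independent."""
    mean, var = machine_completion_stats(allocation, exec_stats)
    prob = 1.0
    for m in mean:
        z = (makespan_limit - mean[m]) / math.sqrt(var[m])
        prob *= 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return prob

# Illustrative values: task -> (mean, std dev) of its time on its assigned machine.
exec_stats = {"t1": (4.0, 0.5), "t2": (3.0, 0.8), "t3": (5.0, 1.0)}
alloc = {"t1": "m1", "t2": "m1", "t3": "m2"}
print(stochastic_robustness(alloc, exec_stats, makespan_limit=10.0))
```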
The Journal of Supercomputing | 2013
B. Dalton Young; Jonathan Apodaca; Luis Diego Briceno; Jay Smith; Sudeep Pasricha; Anthony A. Maciejewski; Howard Jay Siegel; Bhavesh Khemka; Shirish Bahirat; Adrian Ramirez; Yong Zou
Energy-efficient resource allocation within clusters and data centers is important because of the growing cost of energy. We study the problem of energy-constrained dynamic allocation of tasks to a heterogeneous cluster computing environment. Our goal is to complete as many tasks as possible by their individual deadlines and within the system energy constraint, given that task execution times are uncertain and the system is oversubscribed at times. We use Dynamic Voltage and Frequency Scaling (DVFS) to balance the energy consumption and execution time of each task. We design and evaluate (via simulation) a set of heuristics and filtering mechanisms for making allocations in our system. We show that the appropriate choice of filtering mechanisms improves performance more than the choice of heuristic (among the heuristics we tested).
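The filtering mechanisms are not detailed in the abstract; a minimal sketch of one plausible combination is given below: a feasibility filter that skips tasks that cannot meet their deadlines even at full speed, followed by choosing the slowest (lowest-energy) DVFS state that still meets each deadline. The P-state table, the linear time and energy scaling, and all names are assumptions for illustration only.

```python
from dataclasses import dataclass

# Illustrative DVFS states: (relative speed, relative power).
P_STATES = [(1.0, 1.0), (0.8, 0.6), (0.6, 0.35)]

@dataclass
class Task:
    name: str
    base_time: float   # estimated execution time at full speed
    deadline: float    # absolute deadline

def filter_and_scale(tasks, machine_ready_time):
    """Drop infeasible tasks, then pick the lowest-energy P-state meeting each deadline."""
    schedule = []
    t = machine_ready_time
    for task in sorted(tasks, key=lambda x: x.deadline):
        # Feasibility filter: skip tasks that cannot finish even at full speed.
        if t + task.base_time > task.deadline:
            continue
        # Among P-states that still meet the deadline, keep the lowest-energy one.
        best = None
        for speed, power in P_STATES:
            runtime = task.base_time / speed
            if t + runtime <= task.deadline:
                energy = power * runtime
                if best is None or energy < best[2]:
                    best = (speed, runtime, energy)
        speed, runtime, energy = best
        schedule.append((task.name, speed, t, t + runtime, energy))
        t += runtime
    return schedule

tasks = [Task("a", 2.0, 5.0), Task("b", 3.0, 4.0), Task("c", 1.0, 12.0)]
for entry in filter_and_scale(tasks, machine_ready_time=0.0):
    print(entry)
```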
IEEE Transactions on Parallel and Distributed Systems | 2011
Luis Diego Briceno; Howard Jay Siegel; Anthony A. Maciejewski; Mohana Oltikar; Jeff Brateman; Joe White; Jonathan R. Martin; Keith Knapp
This work considers the satellite data processing portion of a space-based weather monitoring system that uses a heterogeneous distributed processing platform. There is uncertainty in the arrival time of new data sets to be processed, and the resource allocation must be robust with respect to this uncertainty. The tasks to be executed by the platform are classified into two broad categories: high priority (e.g., telemetry, tracking, and control) and revenue generating (e.g., data processing and data research). In this environment, the resource allocation of the high-priority tasks must be done before the resource allocation of the revenue-generating tasks. A two-part allocation scheme is presented in this research. The goal of the first part is to find a resource allocation that minimizes the makespan of the high-priority tasks. The robustness of the first part of the mapping is defined as the difference between this time and the expected arrival of the next data set. For the second part, the robustness of the mapping is the difference between the expected arrival time and the time at which the revenue earned equals the operating cost. Thus, the heuristics for the second part find a mapping that minimizes the time for the revenue (gained by completing revenue-generating tasks) to equal the cost. Different resource allocation heuristics are designed and evaluated using simulations, and their performance is compared to a mathematical bound.
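A small sketch of the two robustness values described above, under illustrative assumptions: the first is the slack between the high-priority makespan and the expected arrival of the next data set, and the second is the slack between the break-even point (where accumulated revenue covers the operating cost) and that expected arrival. The per-task revenue list and the numbers are hypothetical.

```python
def part1_robustness(high_priority_makespan, expected_next_arrival):
    """Slack between finishing the high-priority tasks and the next data set."""
    return expected_next_arrival - high_priority_makespan

def part2_robustness(completions, operating_cost, expected_next_arrival):
    """completions: list of (completion_time, revenue) pairs for the
    revenue-generating tasks under a given mapping.  The break-even time is the
    earliest completion at which cumulative revenue covers the operating cost."""
    earned = 0.0
    for finish_time, revenue in sorted(completions):
        earned += revenue
        if earned >= operating_cost:
            return expected_next_arrival - finish_time
    return float("-inf")  # never breaks even with the given tasks

# Illustrative numbers only.
print(part1_robustness(high_priority_makespan=40.0, expected_next_arrival=60.0))  # 20.0
print(part2_robustness([(45.0, 30.0), (50.0, 25.0), (55.0, 10.0)],
                       operating_cost=50.0, expected_next_arrival=60.0))          # 10.0
```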
IEEE Transactions on Computers | 2015
Bhavesh Khemka; Ryan Friese; Luis Diego Briceno; Howard Jay Siegel; Anthony A. Maciejewski; Gregory A. Koenig; Chris Groër; Gene Okonski; Marcia Hilton; Rajendra Rambharos; Steve Poole
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. The ability to drop low utility-earning tasks allows the heuristics to tolerate high oversubscription while still earning significant utility.
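A minimal sketch of the task-dropping idea, under assumptions not taken from the paper: each pending task has a time-varying utility, and a greedy single-machine scheduler discards any task whose remaining earnable utility has fallen below a threshold before picking the most valuable task to run next. The exponential utility shape, the threshold, and all names are illustrative.

```python
import math

def utility(task, completion_time):
    """Time-varying utility: starts at task['max_utility'] and decays
    exponentially after the task's arrival (illustrative shape)."""
    elapsed = completion_time - task["arrival"]
    return task["max_utility"] * math.exp(-task["decay"] * max(elapsed, 0.0))

def schedule_with_dropping(tasks, exec_time, drop_threshold, start_time=0.0):
    """Greedy single-machine sketch: drop tasks whose achievable utility has
    fallen below drop_threshold, then run the most valuable remaining task."""
    now = start_time
    pending = list(tasks)
    earned, dropped = 0.0, []
    while pending:
        keep = []
        for t in pending:
            if utility(t, now + exec_time[t["name"]]) < drop_threshold:
                dropped.append(t["name"])   # no longer worth executing
            else:
                keep.append(t)
        pending = keep
        if not pending:
            break
        best = max(pending, key=lambda t: utility(t, now + exec_time[t["name"]]))
        pending.remove(best)
        now += exec_time[best["name"]]
        earned += utility(best, now)
    return earned, dropped

tasks = [{"name": "a", "arrival": 0.0, "max_utility": 10.0, "decay": 0.1},
         {"name": "b", "arrival": 0.0, "max_utility": 8.0, "decay": 0.5},
         {"name": "c", "arrival": 0.0, "max_utility": 2.0, "decay": 0.05}]
print(schedule_with_dropping(tasks, {"a": 3.0, "b": 2.0, "c": 4.0}, drop_threshold=1.5))
```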
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2011
Luis Diego Briceno; Bhavesh Khemka; Howard Jay Siegel; Anthony A. Maciejewski; Christopher S. Groer; Gregory A. Koenig; Gene Okonski; Stephen W. Poole
This study considers a heterogeneous computing system and corresponding workload being investigated by the Extreme Scale Systems Center (ESSC) at Oak Ridge National Laboratory (ORNL). The ESSC is part of a collaborative effort between the Department of Energy (DOE) and the Department of Defense (DoD) to deliver research, tools, software, and technologies that can be integrated, deployed, and used in both DOE and DoD environments. The heterogeneous system and workload described here are representative of a prototypical computing environment being studied as part of this collaboration. Each task can exhibit a time-varying importance or utility to the overall enterprise. In this system, an arriving task has an associated priority and precedence. The priority is used to describe the importance of a task, and precedence is used to describe how soon the task must be executed. These two metrics are combined to create a utility function curve that indicates how valuable it is for the system to complete a task at any given moment. This research focuses on using time-utility functions to generate a metric that can be used to compare the performance of different resource schedulers in a heterogeneous computing system. The contributions of this paper are: (a) a mathematical model of a heterogeneous computing system where tasks arrive dynamically and need to be assigned based on their priority, precedence, utility characteristic class, and task execution type, (b) the use of priority and precedence to generate time-utility functions that describe the value a task has at any given time, (c) the derivation of a metric based on the total utility gained from completing tasks to measure the performance of the computing environment, and (d) a comparison of the performance of resource allocation heuristics in this environment.
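The abstract states that priority and precedence are combined into a time-utility function but does not give its form; one plausible illustrative form is sketched below, in which priority sets the utility earned for immediate completion and precedence sets how quickly that utility decays. The exponential shape and the parameter values are assumptions, not the ESSC workload's actual curves.

```python
import math

def time_utility(priority, precedence, arrival_time, completion_time):
    """Illustrative time-utility function.

    priority:   higher values mean a more important task, so it sets the
                maximum utility earned if the task completes immediately.
    precedence: higher values mean the task must be done sooner, so it sets
                the rate at which the earnable utility decays over time.
    """
    delay = max(completion_time - arrival_time, 0.0)
    return priority * math.exp(-precedence * delay)

# The same task completed later earns less utility.
for finish in (0.0, 5.0, 20.0):
    print(finish, round(time_utility(priority=100.0, precedence=0.1,
                                     arrival_time=0.0, completion_time=finish), 2))

# The performance metric is the total utility earned over all completed tasks.
completed = [(100.0, 0.1, 0.0, 4.0), (50.0, 0.5, 2.0, 3.0)]
print(sum(time_utility(*args) for args in completed))
```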
ACS/IEEE International Conference on Computer Systems and Applications | 2011
Jonathan Apodaca; Dalton Young; Luis Diego Briceno; Jay Smith; Sudeep Pasricha; Anthony A. Maciejewski; Howard Jay Siegel; Shirish Bahirat; Bhavesh Khemka; Adrian Ramirez; Yong Zou
In a heterogeneous environment, uncertainty in system parameters may cause performance to degrade considerably. It then becomes necessary to design a system that is robust. Robustness can be defined as the degree to which a system can function in the presence of inputs different from those assumed. In this research, we focus on the design of robust static resource allocation heuristics, suitable for a heterogeneous compute cluster, that minimize the energy required to complete a given workload. In this study, we mathematically model and simulate a heterogeneous computing system that is assumed to be part of a larger warehouse-scale computing environment. Task execution times and energy consumption may vary significantly across different data sets in our heterogeneous cluster; therefore, the execution time of each task on each node is modeled as a random variable. A resource allocation is considered robust if the probability that all tasks complete by a system deadline is at least 90%. To minimize the energy consumption of a specific resource allocation, dynamic voltage and frequency scaling (DVFS) is employed. However, other factors, such as system overhead (spent on fans, disks, memory, etc.), must also be mathematically modeled when considering minimization of energy consumption. In this research, we propose three different heuristics that employ DVFS to minimize the energy consumed by a set of tasks in our heterogeneous computing system. Finally, a lower bound on energy consumption is provided to gauge the performance of our heuristics.
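A compact sketch of how the 90% robustness constraint could interact with DVFS, under illustrative assumptions: for each candidate P-state, estimate by sampling the probability that a node finishes its assigned tasks by the system deadline, and keep the lowest-energy state whose estimate is at least 0.9. The exponential execution-time model, the P-state table, and the crude expected-energy estimate are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative P-states: (relative speed, relative power).
P_STATES = [(1.0, 1.0), (0.8, 0.55), (0.6, 0.3)]

def robustness(task_means, deadline, speed, n_samples=5000):
    """P(all tasks assigned to one node finish by the deadline) at a DVFS speed.
    Execution times are drawn from exponential distributions around the means
    (an illustrative stochastic model); tasks run back to back on the node."""
    samples = rng.exponential(task_means, size=(n_samples, len(task_means)))
    finish = samples.sum(axis=1) / speed
    return np.mean(finish <= deadline)

def pick_p_state(task_means, deadline, required=0.9):
    """Lowest-energy P-state whose robustness still meets the 90% requirement."""
    best = None
    for speed, power in P_STATES:
        rob = robustness(task_means, deadline, speed)
        if rob >= required:
            energy = power * sum(task_means) / speed   # crude expected energy
            if best is None or energy < best[1]:
                best = ((speed, power), energy, rob)
    return best

print(pick_p_state(np.array([2.0, 3.0, 1.5]), deadline=15.0))
```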
International Parallel and Distributed Processing Symposium | 2007
Luis Diego Briceno; Mohana Oltikar; Howard Jay Siegel; Anthony A. Maciejewski
Heterogeneous computing (HC) is the coordinated use of different types of machines, networks, and interfaces to maximize the combined performance and/or cost effectiveness of the system. Heuristics for allocating resources in an HC system have different optimization criteria. A common optimization criterion is to minimize the completion time of the machine that finishes last (the makespan). In some environments, it is useful to also minimize the finishing times of the other machines in the system, i.e., those machines that are not the last to finish. Consider a production environment where a set of known tasks is to be mapped to resources off-line before execution begins. Minimizing the finishing times of all the machines will provide the earliest available ready time for these machines to execute tasks that were not initially considered. In this study, we examine an iterative approach that decreases machine finishing times by repeatedly running a resource allocation heuristic. The goal of this study is to investigate whether this iterative procedure can reduce the finishing times of some machines compared to the mapping initially generated by the heuristic. We show that the effectiveness of the iterative approach is heuristic dependent and study the behavior of the iterative approach for each of the chosen heuristics. This work identifies which heuristics can and cannot attain improvements in the completion times of non-makespan machines using this iterative approach.
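The abstract does not describe the iterative procedure in detail; the sketch below is one plausible reading offered only as an illustration: after each mapping, the machine that finishes last is frozen together with its tasks, and the heuristic is re-run on the remaining tasks and machines so that their finishing times can shrink. Min-Min is used here purely as an illustrative baseline heuristic, and all names and numbers are hypothetical.

```python
def min_min(tasks, machines, etc, ready):
    """Illustrative baseline heuristic (Min-Min): repeatedly assign the task with
    the smallest minimum completion time.  etc[t][m] is the estimated time to
    compute task t on machine m; ready holds current machine ready times."""
    ready = dict(ready)
    mapping = {}
    unassigned = list(tasks)
    while unassigned:
        t, m, finish = min(((t, m, ready[m] + etc[t][m])
                            for t in unassigned for m in machines),
                           key=lambda x: x[2])
        mapping[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return mapping, ready

def iterative_remap(tasks, machines, etc):
    """One plausible reading of the iterative idea: after each mapping, freeze the
    machine that finishes last (and its tasks), then re-run the heuristic on the
    remaining tasks and machines so their finishing times can shrink."""
    tasks, machines = list(tasks), list(machines)
    frozen_ready, final_mapping = {}, {}
    while machines:
        mapping, ready_out = min_min(tasks, machines, etc, {m: 0.0 for m in machines})
        last = max(machines, key=lambda m: ready_out[m])
        frozen_ready[last] = ready_out[last]
        final_mapping.update({t: m for t, m in mapping.items() if m == last})
        tasks = [t for t in tasks if mapping.get(t) != last]
        machines.remove(last)
    return final_mapping, frozen_ready

etc = {"t1": {"m1": 2, "m2": 5}, "t2": {"m1": 4, "m2": 3},
       "t3": {"m1": 6, "m2": 2}, "t4": {"m1": 3, "m2": 7}}
print(iterative_remap(["t1", "t2", "t3", "t4"], ["m1", "m2"], etc))
```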
IEEE Transactions on Parallel and Distributed Systems | 2015
Mark A. Oxley; Sudeep Pasricha; Anthony A. Maciejewski; Howard Jay Siegel; Jonathan Apodaca; Dalton Young; Luis Diego Briceno; Jay Smith; Shirish Bahirat; Bhavesh Khemka; Adrian Ramirez; Yong Zou
Today’s data centers face the issue of balancing electricity use and the completion times of their workloads. Rising electricity costs are forcing data center operators either to operate within an electricity budget or to reduce electricity use as much as possible while still maintaining service agreements. Energy-aware resource allocation is one technique a system administrator can employ to address both problems: optimizing the workload completion time (makespan) when given an energy budget, or minimizing energy consumption subject to service guarantees (such as adhering to deadlines). In this paper, we study the problem of energy-aware static resource allocation in an environment where a collection of independent (non-communicating) tasks (a “bag-of-tasks”) is assigned to a heterogeneous computing system. Computing systems often operate in environments where task execution times vary (e.g., due to cache misses or data-dependent execution times). We model these execution times stochastically, using probability density functions. We want our resource allocations to be robust against these variations, where we define energy-robustness as the probability that the energy budget is not violated, and makespan-robustness as the probability that a makespan deadline is not violated. We develop and analyze several heuristics for energy-aware resource allocation for both the energy-constrained and deadline-constrained problems.
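The two robustness definitions above translate directly into probabilities that can be estimated by sampling the stochastic execution times; the sketch below does so for a fixed allocation, assuming normally distributed execution times and a simple power model in which machine energy is power draw times busy time (idle power ignored). All distributions, parameters, and names are illustrative, not the paper's models.

```python
import numpy as np

def robustness_estimates(alloc, time_params, power, energy_budget, deadline,
                         n_samples=10000, seed=0):
    """Estimate energy-robustness = P(total energy <= energy_budget) and
    makespan-robustness = P(makespan <= deadline) for a fixed allocation.

    alloc:       dict task -> machine
    time_params: dict task -> (mean, std) of its execution time on that machine
    power:       dict machine -> power draw while computing
    """
    rng = np.random.default_rng(seed)
    machines = sorted(set(alloc.values()))
    energy_ok = makespan_ok = 0
    for _ in range(n_samples):
        busy = {m: 0.0 for m in machines}
        for task, m in alloc.items():
            mean, std = time_params[task]
            busy[m] += max(rng.normal(mean, std), 0.0)
        energy = sum(power[m] * busy[m] for m in machines)
        if energy <= energy_budget:
            energy_ok += 1
        if max(busy.values()) <= deadline:
            makespan_ok += 1
    return energy_ok / n_samples, makespan_ok / n_samples

# Illustrative numbers only.
alloc = {"t1": "m1", "t2": "m2", "t3": "m2"}
time_params = {"t1": (4.0, 0.5), "t2": (2.0, 0.4), "t3": (3.0, 0.6)}
print(robustness_estimates(alloc, time_params, {"m1": 120.0, "m2": 90.0},
                           energy_budget=1000.0, deadline=6.0))
```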
Journal of Parallel and Distributed Computing | 2013
Luis Diego Briceno; Jay Smith; Howard Jay Siegel; Anthony A. Maciejewski; Paul Maxwell; Russ Wakefield; Abdulla M. Al-Qawasmeh; Ron Chi-Lung Chiang; Jiayin Li
In this study, we consider an environment composed of a heterogeneous cluster of multicore-based machines used to analyze satellite data. The workload involves large data sets and is subject to a deadline constraint. Multiple applications, each represented by a directed acyclic graph (DAG), are allocated to a dedicated heterogeneous distributed computing system. Each vertex in the DAG represents a task that needs to be executed and task execution times vary substantially across machines. The goal of this research is to assign the tasks in applications to a heterogeneous multicore-based parallel system in such a way that all applications complete before a common deadline, and their completion times are robust against uncertainties in execution times. We define a measure that quantifies robustness in this environment. We design, compare, and evaluate five static resource allocation heuristics that attempt to maximize robustness. We consider six different scenarios with different ratios of computation versus communication, and loose and tight deadlines.
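A compact sketch of the completion-time side of this setting, under simplifying assumptions: each application's finish time is computed from its DAG in topological order (a task starts when all of its predecessors have finished), machine contention is ignored, and the slack to the common deadline serves as a simple robustness indicator. The graph, the execution times, and the contention-free simplification are illustrative, not the paper's robustness measure.

```python
from collections import defaultdict, deque

def dag_finish_time(edges, exec_time):
    """Finish time of a DAG application, ignoring machine contention: each task
    starts when all of its predecessors have finished (Kahn's topological order)."""
    preds, succs, indeg = defaultdict(list), defaultdict(list), defaultdict(int)
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
        indeg[v] += 1
    finish = {}
    queue = deque(t for t in exec_time if indeg[t] == 0)
    while queue:
        t = queue.popleft()
        start = max((finish[p] for p in preds[t]), default=0.0)
        finish[t] = start + exec_time[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return max(finish.values())

# Illustrative application: t1 -> {t2, t3} -> t4, with per-task execution times.
edges = [("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")]
exec_time = {"t1": 2.0, "t2": 4.0, "t3": 3.0, "t4": 1.0}
deadline = 10.0
finish = dag_finish_time(edges, exec_time)
print(finish, "slack to deadline:", deadline - finish)   # 7.0, 3.0
```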
International Conference on Parallel Processing | 2011
B. Dalton Young; Jonathan Apodaca; Luis Diego Briceno; Jay Smith; Sudeep Pasricha; Anthony A. Maciejewski; Howard Jay Siegel; Bhavesh Khemka; Shirish Bahirat; Adrian Ramirez; Yong Zou
Energy-efficient resource allocation within clusters and data centers is important because of the growing cost of energy. We study the problem of energy-constrained dynamic allocation of tasks to a heterogeneous cluster computing environment. Our goal is to complete as many tasks as possible by their individual deadlines and within the system energy constraint, given that task execution times are uncertain and the system is oversubscribed at times. We use Dynamic Voltage and Frequency Scaling (DVFS) to balance the energy consumption and execution time of each task. We design and evaluate (via simulation) a set of heuristics and filtering mechanisms for making allocations in our system. We show that the appropriate choice of filtering mechanisms improves performance more than the choice of heuristic (among the heuristics we tested).