
Publication


Featured research published by Neal Barcelo.


Latin American Symposium on Theoretical Informatics | 2016

Chasing Convex Bodies and Functions

Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Kevin Schewior; Michele Scquizzato

We consider three related online problems: Online Convex Optimization, Convex Body Chasing, and Lazy Convex Body Chasing. In Online Convex Optimization the input is an online sequence of convex functions over some Euclidean space. In response to a function, the online algorithm can move to any destination point in the Euclidean space. The cost is the total distance moved plus the sum of the function costs at the destination points. Lazy Convex Body Chasing is a special case of Online Convex Optimization in which each function is zero on some convex region and grows linearly with the distance from this region, and Convex Body Chasing is in turn the special case in which the destination point must lie in the convex region. We show that these problems are equivalent in the sense that if any one of them admits an O(1)-competitive algorithm, then all of them do. Leveraging these results, we obtain the first O(1)-competitive algorithm for Online Convex Optimization in two dimensions and the first O(1)-competitive algorithm for chasing linear subspaces. We also give a simple algorithm and O(1)-competitiveness analysis for chasing lines.
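The cost model above is easy to make concrete. The sketch below (illustrative only; `oco_cost` and `lazy_body` are hypothetical names, not from the paper) computes the cost of a 1-D Online Convex Optimization play, using the Lazy Convex Body Chasing functions described in the abstract: zero on a convex region, growing linearly with the distance from it.

```python
def oco_cost(start, destinations, functions):
    """Total cost of an Online Convex Optimization play in 1-D:
    distance moved plus the function value at each destination."""
    cost, pos = 0.0, start
    for x, f in zip(destinations, functions):
        cost += abs(x - pos) + f(x)   # movement cost + service cost
        pos = x
    return cost

def lazy_body(lo, hi):
    """A Lazy Convex Body Chasing function in 1-D: zero on the convex
    region [lo, hi], growing linearly with the distance from it."""
    return lambda x: max(lo - x, 0.0, x - hi)

funcs = [lazy_body(2, 3), lazy_body(5, 6)]
# Touching the near edge of each region incurs only movement cost.
print(oco_cost(0.0, [2.0, 5.0], funcs))  # 5.0
```

Note how a "lazy" algorithm may also stop short of a region and pay the linear function cost instead of the extra movement.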


Workshop on Approximation and Online Algorithms | 2014

A o(n)-Competitive Deterministic Algorithm for Online Matching on a Line

Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Michele Scquizzato

Online matching on a line involves matching an online stream of items of various sizes to stored items of various sizes, with the objective of minimizing the average discrepancy in size between matched items. The best previously known upper and lower bounds on the optimal deterministic competitive ratio are linear in the number of items and constant, respectively. We show that online matching on a line is essentially equivalent to a particular search problem, which we call \(k\)-lost cows. We then obtain the first deterministic sub-linearly competitive algorithm for online matching on a line by giving such an algorithm for the \(k\)-lost cows problem.
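The search problem the abstract reduces to generalizes the classic single "lost cow" problem: a searcher on a line must find a target at an unknown distance in an unknown direction, and the textbook doubling strategy is 9-competitive. The sketch below simulates that classic strategy for background; it is not the paper's \(k\)-lost-cows algorithm.

```python
def lost_cow_search(cow):
    """Classic doubling ('lost cow') search on a line: starting from the
    origin, walk to +1, then -2, then +4, ... until the target (at a
    nonzero position `cow`) is found. Returns total distance walked."""
    walked, pos = 0.0, 0.0
    step, direction = 1.0, 1.0
    while True:
        target = direction * step
        if min(pos, target) <= cow <= max(pos, target):
            return walked + abs(cow - pos)   # found on this leg
        walked += abs(target - pos)          # walk the full leg, turn
        pos = target
        step *= 2.0
        direction *= -1.0

# The doubling strategy is 9-competitive: total walk <= 9 * |cow|.
for cow in [0.5, -3.0, 7.0, -100.0]:
    assert lost_cow_search(cow) <= 9 * abs(cow)
print(lost_cow_search(-3.0))  # 17.0
```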


Symposium on Theoretical Aspects of Computer Science | 2014

Efficient Computation of Optimal Energy and Fractional Weighted Flow Trade-off Schedules

Antonios Antoniadis; Neal Barcelo; Mario E. Consuegra; Peter Kling; Michael Nugent; Kirk Pruhs; Michele Scquizzato

We give a polynomial time algorithm to compute an optimal energy and fractional weighted flow trade-off schedule for a speed-scalable processor with discrete speeds. Our algorithm uses a geometric approach that is based on structural properties obtained from a primal-dual formulation of the problem.


Latin American Symposium on Theoretical Informatics | 2014

Packet Forwarding Algorithms in a Line Network

Antonios Antoniadis; Neal Barcelo; Daniel G. Cole; Kyle Fox; Benjamin Moseley; Michael Nugent; Kirk Pruhs

We initiate a competitive analysis of packet forwarding policies for maximum and average flow in a line network. We show that the policies Earliest Arrival and Furthest-To-Go are scalable, but not constant competitive, for maximum flow. We show that there is no constant competitive algorithm for average flow.
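To make the Furthest-To-Go policy concrete, here is a toy simulation (the setup details are illustrative assumptions, not the paper's model): packets travel left to right along a line of nodes, each edge carries one packet per step, and each node forwards the buffered packet with the furthest destination.

```python
def simulate_line_ftg(n_nodes, arrivals, steps):
    """Toy Furthest-To-Go simulation on a line network. `arrivals` maps
    time -> list of (source, destination) packets; each step every node
    forwards one buffered packet to its right neighbor. Returns delivery
    times keyed by (source, destination, arrival_time)."""
    buffers = [[] for _ in range(n_nodes)]
    delivered = {}
    for t in range(steps):
        for src, dst in arrivals.get(t, []):
            buffers[src].append((dst, (src, dst, t)))
        moved = []
        for node in range(n_nodes - 1):
            if buffers[node]:
                # Furthest-To-Go: forward the packet going furthest.
                pkt = max(buffers[node], key=lambda p: p[0])
                buffers[node].remove(pkt)
                moved.append((node + 1, pkt))
        for nxt, (dst, ident) in moved:   # apply moves simultaneously
            if nxt == dst:
                delivered[ident] = t + 1
            else:
                buffers[nxt].append((dst, ident))
    return delivered

# Two packets contend at node 0; FTG sends the further one first.
times = simulate_line_ftg(4, {0: [(0, 3), (0, 1)]}, 10)
print(times)
```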


Mathematical Foundations of Computer Science | 2015

On the Complexity of Speed Scaling

Neal Barcelo; Peter Kling; Michael Nugent; Kirk Pruhs; Michele Scquizzato

The most commonly studied energy management technique is speed scaling, which involves operating the processor in a slow, energy-efficient mode at non-critical times and in a fast, energy-inefficient mode at critical times. The resulting optimization problems involve scheduling jobs on a speed-scalable processor and have the conflicting dual objectives of minimizing energy usage and minimizing waiting times. One can formulate many different optimization problems depending on how one models the processor (e.g., whether allowed speeds are discrete or continuous, and the nature of the relationship between speed and power), the performance objective (e.g., whether jobs are of equal or unequal importance, and whether one is interested in minimizing the waiting times of jobs or of work), and how one handles the dual objective (e.g., whether the two are combined into a single objective, or one is turned into a constraint). There are a handful of papers in the algorithmic literature that each give an efficient algorithm for a particular formulation. In contrast, the goal of this paper is to look at a reasonably full landscape of all the possible formulations. We give several general reductions which, in some sense, reduce the number of problems that are distinct in a complexity-theoretic sense. We show that some of the problems, for which there are efficient algorithms on a fixed-speed processor, turn out to be NP-hard. We give efficient algorithms for some of the other problems. Finally, we identify those problems that appear not to be resolvable by standard techniques or by the techniques we develop in this paper for the other problems.
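The tension between the dual objectives is already visible in the standard power model, where power grows as speed to some exponent alpha > 1 (a common modeling assumption; the sketch below is illustrative, not from the paper): running slower always saves energy but delays completion.

```python
def energy(work, speed, alpha=3.0):
    """Energy to process `work` at constant `speed` under the common
    power model power = speed**alpha: running time is work/speed, so
    energy = speed**alpha * (work/speed) = work * speed**(alpha - 1)."""
    return work * speed ** (alpha - 1)

def completion_time(work, speed):
    return work / speed

# With alpha = 3, halving the speed cuts energy by 4x but doubles the
# completion time -- the two objectives pull in opposite directions.
print(energy(1.0, 2.0), completion_time(1.0, 2.0))  # 4.0 0.5
print(energy(1.0, 1.0), completion_time(1.0, 1.0))  # 1.0 1.0
```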


Conference on Innovations in Theoretical Computer Science | 2014

Energy-efficient circuit design

Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Michele Scquizzato

We initiate the theoretical investigation of energy-efficient circuit design. We assume that the circuit design specifies the circuit layout as well as the supply voltages for the gates. To obtain maximum energy efficiency, the circuit design must balance the conflicting demands of minimizing the energy used per gate and minimizing the number of gates in the circuit: if the energy supplied to the gates is small, then functional failures are likely, necessitating a circuit layout that is more fault-tolerant and thus has more gates. By leveraging previous work on fault-tolerant circuit design, we show general upper and lower bounds on the amount of energy required by a circuit to compute a given relation. We show that some circuits would be asymptotically more energy-efficient if heterogeneous supply voltages were allowed, and that for some circuits the most energy-efficient supply voltages are homogeneous over all gates.


International Green Computing Conference | 2014

Complexity-theoretic obstacles to achieving energy savings with near-threshold computing

Antonios Antoniadis; Neal Barcelo; Michael Nugent; Kirk Pruhs; Michele Scquizzato

In the traditional approach to circuit design, the supply voltages for each transistor/gate are set sufficiently high that, with sufficiently high probability, no transistor fails. One potential method to attain more energy-efficient circuits is Near-Threshold Computing, which simply means that the supply voltages are designed to be closer to the threshold voltage. However, this energy saving comes at the cost of a greater probability of functional failure, which necessitates that the circuits be more fault-tolerant, and thus contain more gates. Achieving energy savings with Near-Threshold Computing therefore involves properly balancing the energy used per gate with the number of gates used. We show that if there is a better method than the traditional approach (in terms of worst-case relative error with respect to energy), then P = NP; thus there is a complexity-theoretic obstacle to achieving energy savings with Near-Threshold Computing.


Mathematical Foundations of Computer Science | 2015

Almost All Functions Require Exponential Energy

Neal Barcelo; Michael Nugent; Kirk Pruhs; Michele Scquizzato

One potential method to attain more energy-efficient circuits with current technology is Near-Threshold Computing, which means using less energy per gate by designing the supply voltages to be closer to the threshold voltage of the transistors. However, this energy saving comes at the cost of a greater probability of gate failure, which necessitates that the circuits be more fault-tolerant, and thus contain more gates. Achieving energy savings with Near-Threshold Computing therefore involves properly balancing the energy used per gate with the number of gates used. The main result of this paper is that almost all Boolean functions require circuits that use exponential energy, even if the circuits are allowed to use heterogeneous supply voltages. This is not an immediate consequence of Shannon's classic result that almost all functions require exponential-size circuits of faultless gates because, as we show, the same circuit layout can compute many different functions, depending on the values of the supply voltages. The key step in the proof is to upper-bound the number of different functions that one circuit layout can compute. We also show that the Boolean functions that require exponential energy are exactly the Boolean functions that require exponentially many faulty gates.
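The counting step alluded to here follows Shannon's classic argument: there are 2^(2^n) Boolean functions on n inputs but only a limited number of small circuits, so most functions need large circuits. The sketch below illustrates that comparison with a deliberately crude circuit-counting bound (the bound and the numbers are illustrative, not the paper's).

```python
def num_functions(n):
    """Number of distinct Boolean functions on n inputs: 2^(2^n)."""
    return 2 ** (2 ** n)

def circuit_count_upper_bound(n, s):
    """Crude upper bound on the number of distinct circuits with s
    two-input gates over n inputs: each gate chooses one of the 16
    binary Boolean operations and two predecessors among the n inputs
    and s gates."""
    return (16 * (n + s) ** 2) ** s

# If there are fewer small circuits than functions, some function on
# n inputs has no circuit that small -- Shannon's counting argument.
n = 5
s = 1
while circuit_count_upper_bound(n, s + 1) < num_functions(n):
    s += 1
print(s)  # 3: some function on 5 inputs needs more than 3 gates
```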


MedAlg'12: Proceedings of the First Mediterranean Conference on Design and Analysis of Algorithms | 2012

Energy efficient caching for phase-change memory

Neal Barcelo; Miao Zhou; Daniel G. Cole; Michael Nugent; Kirk Pruhs

Phase-Change Memory (PCM) has the potential to replace DRAM as the primary memory technology due to its non-volatility, scalability, and high energy efficiency. However, the adoption of PCM will require technological solutions to surmount some of its deficiencies, such as writes requiring significantly more energy and time than reads. One way to limit the number of writes is to adopt a last-level cache replacement policy that is aware of the asymmetric nature of PCM read/write costs. We first develop a cache replacement algorithm, Asymmetric Landlord (AL), and show that it is theoretically optimal in that it gives the best possible guarantee on relative error. We also propose a second algorithm, Variable Aging (VA), a variation of AL. We carried out a simulation analysis comparing the algorithms LRU, N-Chance, AL, and VA. For benchmarks that are a mixture of reads and writes, VA is comparable to or better than N-Chance, even for the best choice of N, and uses at least 11% less energy than LRU. For read-dominated benchmarks, we find that AL and VA are comparable to LRU, while N-Chance (using the N that was best for mixed read/write benchmarks) uses at least 20% more energy.
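Cost-aware replacement of this flavor builds on Young's classic Landlord algorithm, which charges cached items "rent" against a credit equal to their fetch cost, so expensive items survive longer. The sketch below implements plain Landlord with unit-size items for background; the paper's Asymmetric Landlord and Variable Aging are PCM-specific variants not reproduced here.

```python
def landlord(requests, cost, capacity):
    """Classic Landlord caching with per-item costs and unit sizes.
    On a miss with a full cache, charge every cached item the minimum
    remaining credit and evict items whose credit reaches zero.
    Returns the total fetch cost paid."""
    credit = {}           # cached item -> remaining credit
    paid = 0.0
    for item in requests:
        if item in credit:
            credit[item] = cost[item]      # refresh credit on a hit
            continue
        paid += cost[item]                 # pay to fetch on a miss
        while len(credit) >= capacity:
            delta = min(credit.values())   # charge "rent" to everyone
            for k in list(credit):
                credit[k] -= delta
                if credit[k] <= 0:
                    del credit[k]
        credit[item] = cost[item]
    return paid

# A write-heavy (expensive) item outlives a stream of cheap reads.
costs = {"w": 4.0, "r1": 1.0, "r2": 1.0, "r3": 1.0}
print(landlord(["w", "r1", "r2", "r3", "w"], costs, capacity=2))  # 7.0
```

With capacity 2, the cheap reads evict each other while "w" retains credit, so its second request is a hit.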


Conference on Combinatorial Optimization and Applications | 2016

Optimal Speed Scaling with a Solar Cell

Neal Barcelo; Peter Kling; Michael Nugent; Kirk Pruhs

We consider the setting of a sensor that consists of a speed-scalable processor, a battery, and a solar cell that harvests energy from its environment at a time-invariant recharge rate. The processor must process a collection of jobs of various sizes. Jobs arrive at different times and have different deadlines. The objective is to minimize the *recharge rate*, which is the rate at which the device has to harvest energy in order to feasibly schedule all jobs. The main result is a polynomial-time combinatorial algorithm for processors with a natural set of discrete speed/power pairs.
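The feasibility constraint behind the recharge-rate objective is simple to state: the energy consumed by any time t must never exceed the initial battery charge plus R·t. The sketch below (an illustrative helper, not the paper's algorithm, which also chooses the processor's speeds) computes the minimum rate R for a fixed, piecewise-constant power profile.

```python
def min_recharge_rate(power_profile, dt, battery):
    """Smallest time-invariant recharge rate R keeping the battery
    nonnegative for a fixed schedule: energy used by time t may never
    exceed battery + R*t. With piecewise-constant power, checking the
    constraint at slot boundaries suffices."""
    used, rate, t = 0.0, 0.0, 0.0
    for p in power_profile:
        used += p * dt
        t += dt
        rate = max(rate, (used - battery) / t)
    return max(rate, 0.0)

# A bursty schedule needs a higher rate than its average power (1.0).
print(min_recharge_rate([4.0, 0.0, 0.0, 0.0], 1.0, 1.0))  # 3.0
```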

Collaboration


Dive into Neal Barcelo's collaborations.

Top Co-Authors

Kirk Pruhs (University of Pittsburgh)

Michael Nugent (University of Pittsburgh)

Peter Kling (University of Paderborn)

Daniel G. Cole (University of Pittsburgh)

Benjamin Moseley (Washington University in St. Louis)

Miao Zhou (University of Pittsburgh)