Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John D. Lafferty is active.

Publications


Featured research published by John D. Lafferty.


Architectural Support for Programming Languages and Operating Systems | 2015

A Probabilistic Graphical Model-based Approach for Minimizing Energy Under Performance Constraints

Nikita Mishra; Huazhe Zhang; John D. Lafferty; Henry Hoffmann

In many deployments, computer systems are underutilized -- meaning that applications have performance requirements that demand less than full system capacity. Ideally, we would take advantage of this underutilization by allocating system resources so that the performance requirements are met and energy is minimized. This optimization problem is complicated by the fact that the performance and power consumption of various system configurations are often application- or even input-dependent. Thus, practically, minimizing energy for a performance constraint requires fast, accurate estimates of application-dependent performance and power tradeoffs. This paper investigates machine learning techniques that enable energy savings by learning Pareto-optimal power and performance tradeoffs. Specifically, we propose LEO, a probabilistic graphical model-based learning system that provides accurate online estimates of an application's power and performance as a function of system configuration. We compare LEO to (1) offline learning, (2) online learning, (3) a heuristic approach, and (4) the true optimal solution. We find that LEO produces the most accurate estimates and near-optimal energy savings.
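The allocation step that such estimates feed into can be sketched as follows. The configuration names and numbers below are hypothetical, not from the paper; this only illustrates picking the minimum-energy configuration that satisfies a performance constraint.

```python
# Hypothetical sketch: given per-configuration (performance, power)
# estimates, pick the configuration that meets the performance
# constraint with minimal energy. For a fixed amount of work, energy
# is proportional to power / performance, so we minimize that ratio.

def min_energy_config(estimates, perf_target):
    """estimates: dict mapping config name -> (performance, power)."""
    feasible = {c: (perf, pwr) for c, (perf, pwr) in estimates.items()
                if perf >= perf_target}
    if not feasible:
        return None  # no configuration meets the constraint
    # Minimize energy per unit of work = power / performance.
    return min(feasible, key=lambda c: feasible[c][1] / feasible[c][0])

# Illustrative numbers only (normalized performance, watts).
estimates = {
    "little-4cores": (0.8, 2.0),
    "big-2cores":    (1.2, 5.0),
    "big-4cores":    (2.0, 9.0),
}
print(min_energy_config(estimates, perf_target=1.0))  # prints big-2cores
```

Note that the feasible configuration with the lowest power ("big-2cores") is not necessarily the one with the lowest power overall; infeasible configurations are excluded first.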


Architectural Support for Programming Languages and Operating Systems | 2018

CALOREE: Learning Control for Predictable Latency and Low Energy

Nikita Mishra; Connor Imes; John D. Lafferty; Henry Hoffmann

Many modern computing systems must provide reliable latency with minimal energy. Two central challenges arise when allocating system resources to meet these conflicting goals: (1) complexity: modern hardware exposes diverse resources with complicated interactions, and (2) dynamics: latency must be maintained despite unpredictable changes in operating environment or input. Machine learning accurately models the latency of complex, interacting resources, but does not address system dynamics; control theory adjusts to dynamic changes, but struggles with complex resource interactions. We therefore propose CALOREE, a resource manager that learns key control parameters to meet latency requirements with minimal energy in complex, dynamic environments. CALOREE breaks resource allocation into two sub-tasks: learning how interacting resources affect speedup, and controlling speedup to meet latency requirements with minimal energy. CALOREE defines a general control system whose parameters are customized by a learning framework while maintaining control-theoretic formal guarantees that the latency goal will be met. We test CALOREE's ability to deliver reliable latency on heterogeneous ARM big.LITTLE architectures in both single- and multi-application scenarios. Compared to the best prior learning and control solutions, CALOREE reduces deadline misses by 60% and energy consumption by 13%.
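The control half of such a learning/control split can be illustrated with a minimal sketch. The deadbeat-style multiplicative update, the pole parameter, and the latency model below are illustrative assumptions, not CALOREE's actual controller design.

```python
# Minimal sketch of a speedup controller driving observed latency
# toward a goal, in the spirit of a learning/control split: a learned
# model would supply the pole; here it is fixed for illustration.

def speedup_controller(latency_goal):
    speedup = 1.0
    pole = 0.0  # 0 => deadbeat response; a learned model would tune this
    while True:
        observed = yield speedup
        # latency error translates into a multiplicative speedup change
        error = observed / latency_goal
        speedup = max(0.1, (1 - pole) * speedup * error + pole * speedup)

ctrl = speedup_controller(latency_goal=10.0)
s = next(ctrl)
base = 25.0  # simulate a system whose latency is base / speedup
for _ in range(5):
    s = ctrl.send(base / s)
print(round(s, 3))  # prints 2.5: the speedup that hits the 10.0 goal
```

With the pole at 0 the controller converges in one step; a pole closer to 1 would react more slowly but tolerate noisier latency measurements.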


International Conference on Autonomic Computing | 2017

ESP: A Machine Learning Approach to Predicting Application Interference

Nikita Mishra; John D. Lafferty; Henry Hoffmann

Independent applications co-scheduled on the same hardware will interfere with one another, affecting performance in complicated ways. Predicting this interference is key to efficiently scheduling applications on shared hardware, but forming accurate predictions is difficult because there are many shared hardware features that could lead to the interference. In this paper we investigate machine learning approaches (specifically, regularization) to understand the relation between those hardware features and application interference. We propose ESP, a highly accurate and fast regularization technique for application interference prediction. To demonstrate its practicality, we implement ESP and integrate it into a scheduler for both single- and multi-node Linux/x86 systems and compare its scheduling performance to state-of-the-art heuristics. We find that ESP-based schedulers increase throughput by 1.25-1.8× depending on the scheduling scenario. Additionally, we find that ESP's accurate predictions allow schedulers to avoid catastrophic decisions, which heuristic approaches fundamentally cannot detect.
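The core idea, regularized regression from shared-hardware features to an interference slowdown, can be sketched as follows. The feature interpretation, toy data, and plain ridge penalty are illustrative assumptions; ESP's actual estimator is more sophisticated.

```python
# Hedged sketch of regularized interference prediction: fit a linear
# map from hardware feature measurements to a slowdown factor, with an
# L2 penalty to keep weights on uninformative features small.

def ridge_fit(X, y, lam=0.01, lr=0.01, steps=5000):
    """Gradient descent on (1/n) * ||Xw - y||^2 + lam * ||w||^2."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(d):
                grad[j] += 2 * (pred - yi) * xi[j]
        for j in range(d):
            w[j] -= lr * (grad[j] / n + 2 * lam * w[j])
    return w

# Toy data: slowdown tracks the first feature (say, cache-miss
# pressure) and ignores the second (a noise feature).
X = [[0.1, 0.5], [0.4, 0.2], [0.8, 0.9], [1.0, 0.1]]
y = [0.2, 0.8, 1.6, 2.0]  # slowdown = 2 * first feature
w = ridge_fit(X, y)
print([round(v, 2) for v in w])
```

The fitted weight on the first feature lands near 2 while the noise feature's weight stays near 0; stronger regularization would shrink both further.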


International Symposium on Information Theory | 2013

Mismatched estimation and relative entropy in vector Gaussian channels

Minhua Chen; John D. Lafferty

We derive a novel relation between mismatched estimation and relative entropy (KL divergence) in vector Gaussian channels under the mean squared estimation criterion. This relation includes as special cases several previous results connecting estimation theory and information theory. A direct proof is provided, together with a verification using Gaussian inputs. An interesting relationship between the KL divergence and Fisher divergence is derived as a direct consequence of our work. The relations established here are potentially useful for inference in graphical models and the design of information systems.
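One of the previous scalar results that relations of this kind generalize is Verdú's mismatched-estimation identity for the scalar Gaussian channel, which can be written as:

```latex
% Scalar Gaussian channel Y = \sqrt{\gamma}\, X + N with N \sim \mathcal{N}(0,1).
% mse_P(\gamma): MMSE of the estimator matched to the true input law P;
% mse_Q(\gamma): MSE incurred when the estimator assumes the input law Q
% while the input is actually distributed as P.
D(P \,\|\, Q) \;=\; \frac{1}{2} \int_0^{\infty}
  \bigl[\, \mathrm{mse}_Q(\gamma) - \mathrm{mse}_P(\gamma) \,\bigr]\, d\gamma
```

The identity expresses the relative entropy as the integrated excess mean squared error caused by the mismatched prior, over all signal-to-noise ratios \(\gamma\).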


IEEE Transactions on Information Theory | 2018

Denoising Flows on Trees

Sabyasachi Chatterjee; John D. Lafferty

We study the estimation of flows on trees, a structured generalization of isotonic regression. A tree flow is defined recursively as a positive flow value into a node that is partitioned into an outgoing flow to the children nodes, with some amount of the flow possibly leaking outside. We study the behavior of the least squares estimator for flows, and the associated minimax lower bounds. We characterize the risk of the least squares estimator in two regimes. In the first regime, the diameter of the tree grows at most logarithmically with the number of nodes. In the second regime, the tree contains many long paths. The results are compared with known risk bounds for isotonic regression. In the many long paths regime, we find that the least squares estimator is not minimax rate optimal for flow estimation.
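The recursive flow constraint can be made concrete with a small validity check. The tree, flow values, and function below are illustrative, not from the paper.

```python
# Illustrative sketch of the tree-flow constraint described above: each
# node's positive flow is partitioned among its children, with any
# remainder "leaking" out at that node.

def is_valid_flow(children, flow):
    """children: node -> list of child nodes; flow: node -> value.
    Valid iff every flow is positive and each node's flow covers the
    sum of its children's flows (the difference is the leakage)."""
    return all(
        flow[v] > 0 and flow[v] >= sum(flow[c] for c in children.get(v, []))
        for v in flow
    )

children = {"root": ["a", "b"], "a": ["c"]}
flow = {"root": 10.0, "a": 6.0, "b": 3.0, "c": 6.0}  # root leaks 1.0
print(is_valid_flow(children, flow))  # prints True
```

Isotonic regression on a path is the special case where every node has one child and no flow leaks, which is why flow estimation is described as a structured generalization of it.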


Neural Information Processing Systems | 2015

A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

Qinqing Zheng; John D. Lafferty


Neural Information Processing Systems | 2012

Exponential Concentration for Mutual Information Estimation with Application to Forests

Han Liu; Larry Wasserman; John D. Lafferty


International Conference on Machine Learning | 2012

Sparse Additive Functional and Kernel CCA

Sivaraman Balakrishnan; Kriti Puniyani; John D. Lafferty


International Conference on Machine Learning | 2013

The Bigraphical Lasso

Alfredo A. Kalaitzis; John D. Lafferty; Neil D. Lawrence; Shuheng Zhou


International Conference on Machine Learning | 2012

The Nonparanormal SKEPTIC

Han Liu; Fang Han; Ming Yuan; Larry Wasserman; John D. Lafferty

Collaboration


Dive into John D. Lafferty's collaborations.

Top Co-Authors

Larry Wasserman, Carnegie Mellon University

Yuancheng Zhu, University of Pennsylvania

Han Liu, Princeton University

Haijie Gu, Carnegie Mellon University