David Wingate
Analog Devices
Publication
Featured research published by David Wingate.
international conference on autonomic computing | 2010
Jonathan Eastep; David Wingate; Marco D. Santambrogio; Anant Agarwal
As multicore processors become increasingly prevalent, system complexity is skyrocketing. The advent of the asymmetric multicore compounds this: it is no longer practical for an average programmer to balance the system constraints associated with today's multicores and worry about new problems like asymmetric partitioning and thread interference. Adaptive, or self-aware, computing has been proposed as one method to help application and system programmers confront this complexity. These systems take some of the burden off of programmers by monitoring themselves and optimizing or adapting to meet their goals. This paper introduces a self-aware synchronization library for multicores and asymmetric multicores called Smartlocks. Smartlocks is a spin-lock library that adapts its internal implementation during execution using heuristics and machine learning to optimize toward a user-defined goal, which may relate to performance or problem-specific criteria. Smartlocks builds upon adaptation techniques from prior work like reactive locks [1], but introduces a novel form of adaptation, termed lock acquisition scheduling, designed specifically to address asymmetries in multicores. When multiple threads (or processes) are spinning for a lock, lock acquisition scheduling chooses which waiter will get the lock next for the best long-term effect. This work demonstrates that lock scheduling is important for addressing asymmetries in multicores. We study scenarios where core speeds vary both dynamically and intrinsically under thermal throttling and manufacturing variability, respectively, and we show that Smartlocks significantly outperforms conventional spin-locks and reactive locks. Based on our findings, we provide guidelines for the application scenarios where Smartlocks works best and those where it is less effective.
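To make the lock acquisition scheduling idea concrete, here is a minimal Python sketch: a toy scheduler that picks the next lock holder among waiting threads using a multiplicative-weights preference learned from observed progress. The class, its fields, and the simulated core speeds are illustrative assumptions; Smartlocks itself uses its own heuristics and machine-learning engine inside a real spin-lock.

```python
import random

class WaiterScheduler:
    """Toy lock-acquisition scheduler: chooses which waiting thread gets the
    lock next, favoring threads whose past critical sections made the most
    progress (e.g., threads pinned to faster cores). Not the Smartlocks code."""

    def __init__(self, n_threads, lr=0.1):
        self.weights = [1.0 / n_threads] * n_threads  # preference per thread
        self.lr = lr

    def pick_next(self, waiters):
        # Sample a waiter with probability proportional to its weight.
        total = sum(self.weights[t] for t in waiters)
        r = random.uniform(0.0, total)
        for t in waiters:
            r -= self.weights[t]
            if r <= 0:
                return t
        return waiters[-1]

    def feedback(self, thread_id, reward):
        # Multiplicative-weights update, then renormalize so the weights
        # stay a readable preference distribution.
        self.weights[thread_id] *= (1.0 + self.lr * reward)
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

# Simulated use: thread 0 sits on a fast core, thread 3 on a throttled one.
sched = WaiterScheduler(n_threads=4)
speeds = [2.0, 1.0, 1.0, 0.5]
for _ in range(500):
    chosen = sched.pick_next([0, 1, 2, 3])
    sched.feedback(chosen, reward=speeds[chosen])
print("learned preferences:", [round(w, 2) for w in sched.weights])
```

The point of the sketch is only that which waiter to serve next can itself be treated as a learning problem driven by asymmetry in the cores.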
international conference on machine learning | 2006
David Wingate; Satinder P. Singh
The recent Predictive Linear Gaussian model (or PLG) improves upon traditional linear dynamical system models by using a predictive representation of state, which makes consistent parameter estimation possible without any loss of modeling power and while using fewer parameters. In this paper we extend the PLG to model stochastic, nonlinear dynamical systems by using kernel methods. With a Gaussian kernel, the model admits closed form solutions to the state update equations due to conjugacy between the dynamics and the state representation. We also explore an efficient sigma-point approximation to the state updates, and show how all of the model parameters can be learned directly from data (and can be learned on-line with the Kernel Recursive Least-Squares algorithm). We empirically compare the model and its approximation to the original PLG and discuss their relative advantages.
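The sigma-point approximation mentioned here is, at its core, the standard unscented transform: push a small, deterministically chosen set of points through the nonlinearity and re-estimate the mean and covariance. The sketch below shows that generic construction with conventional (assumed) scaling parameters; the kernel-PLG paper specializes it to its own state-update equations.

```python
import numpy as np

def sigma_point_transform(mu, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Generic unscented (sigma-point) transform: approximate the mean and
    covariance of f(x) for x ~ N(mu, cov) using 2n+1 deterministic points."""
    n = mu.shape[0]
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # The mean plus symmetric offsets along the columns of the matrix root.
    points = [mu]
    for i in range(n):
        points.append(mu + sqrt_cov[:, i])
        points.append(mu - sqrt_cov[:, i])
    points = np.array(points)

    # Standard mean and covariance weights for the scaled transform.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    # Push the points through the nonlinearity and re-estimate the moments.
    y = np.array([f(p) for p in points])
    mean = wm @ y
    diff = y - mean
    covariance = (wc[:, None] * diff).T @ diff
    return mean, covariance

# Example: propagate a 2-D Gaussian belief through a mildly nonlinear map.
mu = np.array([1.0, 0.5])
cov = np.array([[0.10, 0.02],
                [0.02, 0.20]])
f = lambda x: np.array([np.sin(x[0]) + x[1], 0.9 * x[1]])
mean, covariance = sigma_point_transform(mu, cov, f)
print("predicted mean:", mean)
print("predicted covariance:\n", covariance)
```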
international joint conference on artificial intelligence | 2011
David Wingate; Noah D. Goodman; Daniel M. Roy; Leslie Pack Kaelbling; Joshua B. Tenenbaum
We consider the problem of learning to act in partially observable, continuous-state-and-action worlds where we have abstract prior knowledge about the structure of the optimal policy in the form of a distribution over policies. Using ideas from planning-as-inference reductions and Bayesian unsupervised learning, we cast Markov chain Monte Carlo as a stochastic, hill-climbing policy search algorithm. Importantly, this algorithm's search bias is directly tied to the prior and its MCMC proposal kernels, which means we can draw on the full Bayesian toolbox to express the search bias, including nonparametric priors and structured, recursive processes like grammars over action sequences. Furthermore, we can reason about uncertainty in the search bias itself by constructing a hierarchical prior and reasoning about latent variables that determine the abstract structure of the policy. This yields an adaptive search algorithm: our algorithm learns to learn a structured policy efficiently. We show how inference over the latent variables in these policy priors enables intra- and inter-task transfer of abstract knowledge. We demonstrate the flexibility of this approach by learning meta search biases, by constructing a nonparametric finite state controller to model memory, by discovering motor primitives using a simple grammar over primitive actions, and by combining all three.
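A minimal sketch of the planning-as-inference idea, assuming a toy chain environment, a simple Gaussian prior over policy parameters, and a Metropolis-Hastings kernel whose target density weights the prior by exponentiated return. The paper's point is precisely that this prior slot can hold far richer structured, nonparametric, and hierarchical priors; everything below is an illustrative stand-in, not the paper's algorithm or benchmarks.

```python
import math
import random

def rollout(theta, length=8, episodes=20):
    """Noisy Monte Carlo estimate of the return of a per-state 'go right'
    policy on a 6-state chain with reward +1 for reaching the last state."""
    total = 0.0
    for _ in range(episodes):
        s = 0
        for _ in range(length):
            p_right = 1.0 / (1.0 + math.exp(-theta[s]))
            s = min(s + 1, 5) if random.random() < p_right else max(s - 1, 0)
            if s == 5:
                total += 1.0
                break
    return total / episodes

def log_prior(theta):
    # Independent Gaussian prior over policy parameters; the search bias
    # lives entirely in this function and in the proposal kernel below.
    return -0.5 * sum(t * t for t in theta)

def mcmc_policy_search(iters=2000, temperature=0.05, step=0.5):
    theta = [0.0] * 6
    score = rollout(theta)
    for _ in range(iters):
        proposal = [t + random.gauss(0.0, step) for t in theta]  # symmetric kernel
        prop_score = rollout(proposal)
        # Metropolis-Hastings on a target density proportional to
        # prior(theta) * exp(return(theta) / temperature).
        log_accept = (log_prior(proposal) - log_prior(theta)
                      + (prop_score - score) / temperature)
        if math.log(random.random() + 1e-300) < log_accept:
            theta, score = proposal, prop_score
    return theta, score

theta, score = mcmc_policy_search()
print("average return of sampled policy:", round(score, 2))
```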
adaptive agents and multi-agents systems | 2007
David Wingate; Satinder P. Singh
Models of agent-environment interaction that use predictive state representations (PSRs) have mainly focused on the case of discrete observations and actions. The theory of discrete PSRs uses an elegant construct called the system dynamics matrix and derives the notion of predictive state as a sufficient statistic via the rank of the matrix. With continuous observations and actions, such a matrix and its rank no longer exist. In this paper, we show how to define an analogous construct for the continuous case, called the system dynamics distributions, and use information theoretic notions to define a sufficient statistic and thus state. Given this new construct, we use kernel density estimation to learn approximate system dynamics distributions from data, and use information-theoretic tools to derive algorithms for discovery of state and learning of model parameters. We illustrate our new modeling method on two example problems.
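As a concrete (assumed) stand-in for estimating one of these system dynamics distributions, the sketch below uses Gaussian kernel density estimation to approximate p(next observation | previous observation) for a toy one-dimensional, action-free system; the paper's construction covers action-conditional distributions over longer histories and tests.

```python
import numpy as np

def gaussian_kde(samples, query, bandwidth):
    """Product-Gaussian kernel density estimate at `query` from `samples`
    (rows are data points)."""
    diffs = (samples - query) / bandwidth
    kernels = np.exp(-0.5 * np.sum(diffs**2, axis=1))
    norm = (np.sqrt(2 * np.pi) * bandwidth) ** samples.shape[1]
    return kernels.mean() / norm

def conditional_density(history_obs, future_obs, h_query, f_query, bandwidth=0.3):
    """Estimate p(future | history) = p(history, future) / p(history)."""
    joint = gaussian_kde(np.hstack([history_obs, future_obs]),
                         np.concatenate([h_query, f_query]), bandwidth)
    marginal = gaussian_kde(history_obs, h_query, bandwidth)
    return joint / max(marginal, 1e-12)

# Toy dynamical system: next observation is a noisy sine of the previous one.
rng = np.random.default_rng(0)
prev = rng.uniform(-2, 2, size=(2000, 1))
nxt = np.sin(prev) + 0.1 * rng.standard_normal((2000, 1))

# Predictive density of the next observation given the last one was 1.0.
for y in np.linspace(-1.5, 1.5, 7):
    d = conditional_density(prev, nxt, np.array([1.0]), np.array([y]))
    print(f"estimated p(next = {y:+.2f} | prev = 1.0) = {d:.3f}")
```

The estimated density peaks near sin(1.0), which is the kind of predictive statistic the continuous PSR treats as state.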
international conference on machine learning | 2004
David Wingate; Kevin D. Seppi
We present an examination of the state-of-the-art for using value iteration to solve large-scale discrete Markov Decision Processes. We introduce an architecture which combines three independent performance enhancements (the intelligent prioritization of computation, state partitioning, and massively parallel processing) into a single algorithm. We show that each idea improves performance in a different way, meaning that algorithm designers do not have to trade one improvement for another. We give special attention to parallelization issues, discussing how to efficiently partition states, distribute partitions to processors, minimize message passing and ensure high scalability. We present experimental results which demonstrate that this approach solves large problems in reasonable time.
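A compact sketch of two of the three ingredients, prioritized computation and state partitioning, on a tiny assumed chain MDP; the massively parallel distribution of partitions across processors is what the paper adds on top and is omitted here.

```python
import numpy as np

def bellman_backup(V, P, R, states, gamma):
    """One synchronous backup restricted to a block of states.
    P: (A, S, S) transition tensor, R: (S, A) rewards, V: value vector."""
    Q = R[states] + gamma * np.einsum("aps,s->pa", P[:, states, :], V)
    return Q.max(axis=1)

def prioritized_partitioned_vi(P, R, partitions, gamma=0.95, tol=1e-6):
    """Value iteration that repeatedly selects the partition with the largest
    Bellman residual and sweeps it to local convergence."""
    V = np.zeros(P.shape[1])
    while True:
        # Priority of a partition = largest Bellman residual inside it.
        residuals = [np.max(np.abs(bellman_backup(V, P, R, blk, gamma) - V[blk]))
                     for blk in partitions]
        best = int(np.argmax(residuals))
        if residuals[best] < tol:
            return V
        blk = partitions[best]
        while True:                      # sweep the chosen partition
            new_vals = bellman_backup(V, P, R, blk, gamma)
            delta = np.max(np.abs(new_vals - V[blk]))
            V[blk] = new_vals
            if delta < tol:
                break

# Tiny 4-state, 2-action chain MDP split into two partitions.
S, A = 4, 2
P = np.zeros((A, S, S))
for s in range(S):
    P[0, s, max(s - 1, 0)] = 1.0      # action 0: move left
    P[1, s, min(s + 1, S - 1)] = 1.0  # action 1: move right
R = np.zeros((S, A))
R[S - 1, :] = 1.0                     # reward for being in the last state
V = prioritized_partitioned_vi(P, R, [np.array([0, 1]), np.array([2, 3])])
print("V =", np.round(V, 3))
```

In the paper, each partition would additionally be assigned to a processor, with message passing only when cross-partition values change.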
international conference on autonomic computing | 2011
Jonathan Eastep; David Wingate; Anant Agarwal
As multicores become prevalent, the complexity of programming is skyrocketing. One major difficulty is efficiently orchestrating collaboration among threads through shared data structures. Unfortunately, choosing and hand-tuning data structure algorithms to get good performance across a variety of machines and inputs is a herculean task to add to the fundamental difficulty of getting a parallel program correct. To help mitigate these complexities, this work develops a new class of parallel data structures called Smart Data Structures that leverage online machine learning to adapt automatically. We prototype and evaluate an open source library of Smart Data Structures for common parallel programming needs and demonstrate significant improvements over the best existing algorithms under a variety of conditions. Our results indicate that learning is a promising technique for balancing and adapting to complex, time-varying tradeoffs and achieving the best performance available.
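The sketch below illustrates the adaptation idea in miniature: an epsilon-greedy learner keeps routing a workload to whichever of two interchangeable FIFO backends has proven fastest so far. It is a single-threaded toy under assumed names and backends, not the library's concurrent data structures or its learning engine.

```python
import random
import time
from collections import deque

class ListFIFO:                      # list.pop(0) is O(n) per dequeue
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop(0)
    def __len__(self):
        return len(self.items)

class DequeFIFO:                     # deque.popleft() is O(1) per dequeue
    def __init__(self):
        self.items = deque()
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.popleft()
    def __len__(self):
        return len(self.items)

class SmartSelector:
    """Epsilon-greedy chooser over interchangeable backends: time each run and
    keep favoring the backend with the lowest running-average cost."""
    def __init__(self, backends, epsilon=0.1):
        self.backends = backends
        self.avg_cost = {name: 0.0 for name in backends}
        self.counts = {name: 0 for name in backends}
        self.epsilon = epsilon

    def choose(self):
        if min(self.counts.values()) == 0 or random.random() < self.epsilon:
            return random.choice(list(self.backends))
        return min(self.avg_cost, key=self.avg_cost.get)

    def run(self, workload):
        name = self.choose()
        start = time.perf_counter()
        workload(self.backends[name]())
        cost = time.perf_counter() - start
        self.counts[name] += 1
        self.avg_cost[name] += (cost - self.avg_cost[name]) / self.counts[name]
        return name

def workload(q):                     # enqueue then drain 10,000 items
    for i in range(10000):
        q.push(i)
    while len(q):
        q.pop()

selector = SmartSelector({"list_fifo": ListFIFO, "deque_fifo": DequeFIFO})
for _ in range(30):
    selector.run(workload)
print({k: round(v, 4) for k, v in selector.avg_cost.items()})
```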
international conference on machine learning | 2008
David Wingate; Satinder P. Singh
Exponential Family PSR (EFPSR) models capture stochastic dynamical systems by representing state as the parameters of an exponential family distribution over a short-term window of future observations. They are appealing from a learning perspective because they are fully observed (meaning expressions for maximum likelihood do not involve hidden quantities), but are still expressive enough to both capture existing models and predict new models. While maximum-likelihood learning algorithms for EFPSRs exist, they are not computationally feasible. We present a new, computationally efficient learning algorithm based on an approximate likelihood function. The algorithm can be interpreted as attempting to induce stationary distributions of observations, features, and states which match their empirically observed counterparts. The approximate likelihood, and the idea of matching stationary distributions, may apply to other models.
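The "matching stationary distributions" idea rests on a familiar exponential-family fact: the gradient of the log-likelihood with respect to the natural parameters is the gap between empirical and model feature expectations. The static sketch below shows that core computation on an assumed three-step binary window with hand-picked features; the EFPSR algorithm applies the same matching idea to its dynamical, approximate likelihood.

```python
import itertools
import numpy as np

def features(x):
    """Features of a 3-step binary observation window: the bits themselves
    plus pairwise products (a small stand-in for EFPSR feature vectors)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [x[0] * x[1], x[1] * x[2], x[0] * x[2]]])

def fit_natural_params(data, lr=0.5, iters=500):
    """Fit natural parameters theta of p(x) proportional to exp(theta . phi(x))
    by gradient ascent: the gradient is the difference between empirical and
    model feature expectations, so at convergence the model's feature
    statistics match the data's."""
    outcomes = list(itertools.product([0, 1], repeat=3))
    Phi = np.array([features(x) for x in outcomes])          # (8, 6)
    empirical = np.mean([features(x) for x in data], axis=0)
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        logits = Phi @ theta
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        model_expect = probs @ Phi
        theta += lr * (empirical - model_expect)              # moment matching
    return theta

# Windows drawn from a process where neighboring observations tend to agree.
rng = np.random.default_rng(1)
data = []
for _ in range(2000):
    first = rng.integers(0, 2)
    second = first if rng.random() < 0.8 else 1 - first
    third = second if rng.random() < 0.8 else 1 - second
    data.append((first, second, third))

print("learned natural parameters:", np.round(fit_natural_params(data), 2))
```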
Archive | 2012
David Wingate
The concept of state is central to dynamical systems. In any time-series problem, such as filtering, planning, or forecasting, models and algorithms summarize important information from the past into some sort of state variable. In this chapter, we start with a broad examination of the concept of state, with emphasis on the fact that there are many possible representations of state for a given dynamical system, each with different theoretical and computational properties. We then focus on models with predictively defined representations of state, which represent state as a set of statistics about the short-term future, as opposed to the classic approach of treating state as a latent, unobservable quantity. In other words, the past is summarized into predictions about the actions and observations in the short-term future, which can be used to make further predictions about the infinite future. While this representational idea applies to any dynamical system problem, it is particularly useful in a model-based RL context, when an agent must learn a representation of state and a model of system dynamics online: because the representation (and hence all of the model's parameters) is defined using only statistics of observable quantities, the associated learning algorithms are often straightforward and have attractive theoretical properties. Here, we survey the basic concepts of predictively defined representations of state, important auxiliary constructs (such as the system dynamics matrix), and theoretical results on their representational power and learnability.
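To make the system dynamics matrix concrete, the sketch below builds it for a small assumed hidden Markov model: rows are histories, columns are tests (short futures), entries are p(test | history), and its rank, which here matches the number of hidden states, is what bounds the dimension of a predictive state.

```python
import itertools
import numpy as np

# A small 2-hidden-state HMM with binary observations (an assumed example).
T = np.array([[0.9, 0.1],        # hidden-state transition matrix
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],        # O[s, o] = p(observation o | hidden state s)
              [0.3, 0.7]])
init = np.array([0.5, 0.5])

def sequence_prob(obs):
    """p(observation sequence) via the forward algorithm."""
    alpha = init.copy()
    for o in obs:
        alpha = (alpha * O[:, o]) @ T
    return alpha.sum()

# System dynamics matrix: rows are histories, columns are tests,
# entry = p(test | history) = p(history + test) / p(history).
seqs = [tuple(s) for k in (1, 2) for s in itertools.product([0, 1], repeat=k)]
D = np.array([[sequence_prob(h + t) / sequence_prob(h) for t in seqs]
              for h in seqs])

# The rank bounds the dimension of a predictive state for this system.
print("system dynamics matrix rank:", np.linalg.matrix_rank(D, tol=1e-8))
```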
international conference on machine learning and applications | 2004
Christopher K. Monson; David Wingate; Kevin D. Seppi; Todd S. Peterson
We present JoSTLe, an algorithm that performs value iteration on control problems with continuous actions, allowing this useful reinforcement learning technique to be applied to problems where a priori action discretization is inadequate. The algorithm is an extension of a variable resolution technique that works for problems with continuous states and discrete actions [6]. Results are given that indicate that JoSTLe is a promising step toward reinforcement learning in a fully continuous domain.
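The sketch below illustrates only the continuous-action side of the problem: a crude iterative zoom that approximates the maximization over a continuous action interval inside a Bellman backup, instead of fixing an a priori discretization. It is a generic stand-in under assumed bounds and a toy Q-function, not JoSTLe's variable resolution construction, which refines state and action resolution jointly.

```python
import numpy as np

def continuous_action_backup(q_value, lo=-1.0, hi=1.0, depth=5, samples=9):
    """Approximate max over a continuous action interval: evaluate a grid of
    candidate actions, then repeatedly zoom in around the best one."""
    for _ in range(depth):
        actions = np.linspace(lo, hi, samples)
        values = np.array([q_value(a) for a in actions])
        best = actions[values.argmax()]
        width = (hi - lo) / (samples - 1)
        lo, hi = best - width, best + width   # refine around the best action
    return best, values.max()

# Example backup for one state: a toy Q-function that is smooth in the action.
q = lambda a: -(a - 0.37)**2 + 0.95 * np.cos(3 * a)
a_star, v = continuous_action_backup(q)
print(f"greedy continuous action = {a_star:.3f}, backed-up value = {v:.3f}")
```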
Mathematical Geosciences | 2016
David Wingate; Jonathan Kane; Matt Wolinsky; Zoltán Sylvester
Generating a realistic earth model that simultaneously fits data observed at multiple well locations has been a long-standing problem in petroleum geology. Two insights are offered for solving this problem in a Bayesian framework. The first is conceptual—it connects geologic inversion to the new field of probabilistic programming and shows that the usual description of a Bayesian problem in terms of a graphical model is inadequate for describing a process-based geologic model due to the dynamics of the generative algorithm. This is a paradigm shift in probabilistic modeling where stochastic generative models are represented using a syntax resembling modern programming languages. Probabilistic programming allows one to generalize this structure to include complex programming concepts, while also simplifying the process of developing new inference algorithms. The second insight is algorithmic and involves using variational inference to derive a simpler, more computationally tractable approximation to the posterior probability density function. If this surrogate distribution is close to the true posterior, it allows for very fast simulation of an arbitrary number of models that all fit the data equally well. This study focuses on the particular geologic formation known as submarine lobes: elongated pancake-like formations which are sequentially laid down, one on top of the other over geologic time, forming potential petroleum reservoirs. The location and orientation of the lobes at each time step are the variables that are optimized so that, at the final time step, all available well data are approximately fit. The methodology is illustrated on synthetic data as a proof-of-concept, and compared to several alternatives. An important conclusion is that, even though the variational approximation is crude, it produces better predictions than any point-based method, including maximum likelihood. The fact that probabilistic programming outperforms conventional Bayesian approaches in the case of lobe models offers the potential for attacking more complicated forward models where multiple geologic processes are simultaneously active.
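For the algorithmic insight, the sketch below fits a mean-field Gaussian surrogate to the posterior of a deliberately tiny stand-in inversion problem (a single Gaussian-shaped deposit observed at three "wells") by stochastic gradient ascent on the ELBO with the reparameterization trick. The forward model, priors, and step sizes are all assumptions for illustration; the paper's surrogate wraps a full process-based lobe simulator expressed as a probabilistic program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in inversion problem: infer the location and amplitude of one
# Gaussian-shaped deposit from noisy thickness measurements at three wells.
wells = np.array([-1.0, 0.2, 1.5])
true_theta = np.array([0.5, 2.0])                       # [location, amplitude]
forward = lambda th: th[1] * np.exp(-(wells - th[0])**2 / (2 * 0.5**2))
noise = 0.1
data = forward(true_theta) + noise * rng.standard_normal(3)

def log_joint(theta):
    log_prior = -0.5 * np.sum(theta**2 / 4.0)           # broad Gaussian prior
    resid = data - forward(theta)
    return log_prior - 0.5 * np.sum(resid**2) / noise**2

def grad_log_joint(theta, eps=1e-5):
    # Finite differences keep the sketch short; autodiff would be used in practice.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (log_joint(theta + d) - log_joint(theta - d)) / (2 * eps)
    return g

def fit_gaussian_surrogate(iters=3000, lr=1e-4, batch=8):
    """Mean-field Gaussian q(theta) = N(mu, diag(sigma^2)) fit by stochastic
    gradient ascent on the ELBO via the reparameterization trick. Once fit,
    the surrogate is cheap to sample, producing many models that
    approximately honor the well data."""
    mu, log_sigma = np.zeros(2), np.full(2, -1.0)
    for _ in range(iters):
        eps = rng.standard_normal((batch, 2))
        theta = mu + np.exp(log_sigma) * eps            # reparameterized samples
        grads = np.array([grad_log_joint(t) for t in theta])
        mu += lr * grads.mean(axis=0)
        # The entropy term contributes +1 to each log-sigma gradient.
        log_sigma += lr * ((grads * eps).mean(axis=0) * np.exp(log_sigma) + 1.0)
    return mu, np.exp(log_sigma)

mu, sigma = fit_gaussian_surrogate()
print("surrogate mean:", np.round(mu, 2), " surrogate std:", np.round(sigma, 3))
print("cheap posterior samples:")
print(np.round(mu + sigma * rng.standard_normal((3, 2)), 2))
```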