
Publication


Featured research published by Vikash K. Mansinghka.


International Conference on Artificial Intelligence and Statistics (AISTATS) | 2014

A New Approach to Probabilistic Programming Inference

Frank D. Wood; Jan-Willem van de Meent; Vikash K. Mansinghka

We introduce and demonstrate a new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo. Our approach is simple to implement and easy to parallelize. It applies to Turing-complete probabilistic programming languages and supports accurate inference in models that make use of complex control flow, including stochastic recursion. It also includes primitives from Bayesian nonparametric statistics. Our experiments show that this approach can be more efficient than previously introduced single-site Metropolis-Hastings methods.
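
As a rough illustration of the particle-filtering machinery underlying particle MCMC, the sketch below runs a bootstrap particle filter on a toy Gaussian state-space model. The model, noise levels, and function names are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def smc(observations, num_particles=100):
    """Bootstrap particle filter: propose from the prior, weight each
    particle by the observation likelihood, resample, repeat."""
    particles = [0.0] * num_particles
    for y in observations:
        # Propagate each particle through the (toy) transition model.
        particles = [x + random.gauss(0, 1) for x in particles]
        # Weight particles by how well they explain the observation.
        log_w = [normal_logpdf(y, x, 0.5) for x in particles]
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        # Multinomial resampling.
        particles = random.choices(particles, weights=w, k=num_particles)
    return particles

if __name__ == "__main__":
    ys = [0.3, 0.9, 1.4, 2.1]
    posterior = smc(ys)
    print("posterior mean of final state:", sum(posterior) / len(posterior))
```

Particle MCMC wraps steps like this inside an outer Markov chain over program traces, which is what lets the approach scale to Turing-complete languages with stochastic recursion.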


Psychological Review | 2013

Reconciling intuitive physics and Newtonian mechanics for colliding objects

Adam N. Sanborn; Vikash K. Mansinghka; Thomas L. Griffiths

People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.
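
The paper's central computation, posterior inference over physical variables given noisy percepts, can be sketched for a 1D elastic collision. The noise level, the grid of mass ratios, and the uniform prior below are illustrative assumptions.

```python
import math

def elastic_collision(m1, m2, u1, u2):
    """Outgoing velocities for a 1D perfectly elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

def normal_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def posterior_mass_ratio(obs_v1, obs_v2, u1, u2, noise=0.2):
    """Posterior over the mass ratio m1/m2 given noisy observed outgoing
    velocities, with a uniform prior on a coarse grid of ratios."""
    ratios = [0.25, 0.5, 1.0, 2.0, 4.0]
    log_post = []
    for r in ratios:
        v1, v2 = elastic_collision(r, 1.0, u1, u2)
        log_post.append(normal_logpdf(obs_v1, v1, noise) +
                        normal_logpdf(obs_v2, v2, noise))
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]
    z = sum(w)
    return {r: wi / z for r, wi in zip(ratios, w)}

if __name__ == "__main__":
    # Object 1 moves right at 1.0, object 2 is at rest; the noisy outgoing
    # velocities are consistent with object 1 being about twice as heavy.
    print(posterior_mass_ratio(obs_v1=0.35, obs_v2=1.3, u1=1.0, u2=0.0))
```

Graded, heuristic-looking judgments fall out of exact Newtonian constraints once sensory noise is marginalized over, which is the paper's key point.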


Computer Vision and Pattern Recognition (CVPR) | 2015

Picture: A probabilistic programming language for scene perception

Tejas D. Kulkarni; Pushmeet Kohli; Joshua B. Tenenbaum; Vikash K. Mansinghka

Recent progress on probabilistic modeling and statistical learning, coupled with the availability of large training datasets, has led to remarkable progress in computer vision. Generative probabilistic models, or “analysis-by-synthesis” approaches, can capture rich scene structure but have been less widely applied than their discriminative counterparts, as they often require considerable problem-specific engineering in modeling and inference, and inference is typically seen as requiring slow, hypothesize-and-test Monte Carlo methods. Here we present Picture, a probabilistic programming language for scene understanding that allows researchers to express complex generative vision models, while automatically solving them using fast general-purpose inference machinery. Picture provides a stochastic scene language that can express generative models for arbitrary 2D/3D scenes, as well as a hierarchy of representation layers for comparing scene hypotheses with observed images by matching not simply pixels, but also more abstract features (e.g., contours, deep neural network activations). Inference can flexibly integrate advanced Monte Carlo strategies with fast bottom-up data-driven methods. Thus both representations and inference strategies can build directly on progress in discriminatively trained systems to make generative vision more robust and efficient. We use Picture to write programs for 3D face analysis, 3D human pose estimation, and 3D object reconstruction - each competitive with specially engineered baselines.
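
The hypothesize-and-test loop that Picture builds on can be illustrated with a deliberately tiny example: Metropolis-Hastings over a one-dimensional "scene" whose renderer paints a 3-pixel object into a 16-pixel image. Everything here (the renderer, noise model, and proposal) is a toy assumption standing in for Picture's 2D/3D scene language and feature-based likelihoods.

```python
import math
import random

WIDTH = 16

def render(pos):
    """Render a 3-pixel-wide object centered at integer position pos."""
    return [1.0 if abs(i - pos) <= 1 else 0.0 for i in range(WIDTH)]

def log_likelihood(observed, rendered, noise=0.3):
    """Score a rendering against the observed image, pixel by pixel."""
    return sum(-(o - r)**2 / (2 * noise**2) for o, r in zip(observed, rendered))

def metropolis_hastings(observed, steps=2000):
    pos = random.randrange(WIDTH)
    ll = log_likelihood(observed, render(pos))
    for _ in range(steps):
        # Hypothesize a nearby scene, render it, and test it.
        proposal = min(WIDTH - 1, max(0, pos + random.choice([-1, 1])))
        ll_new = log_likelihood(observed, render(proposal))
        if math.log(random.random()) < ll_new - ll:
            pos, ll = proposal, ll_new
    return pos

if __name__ == "__main__":
    true_scene = render(10)
    noisy = [p + random.gauss(0, 0.1) for p in true_scene]
    print("inferred object position:", metropolis_hastings(noisy))
```

Picture's contribution is to keep this generative structure while replacing the slow random-walk search with data-driven bottom-up proposals and comparisons over abstract features rather than raw pixels.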


Philosophical Transactions of the Royal Society A | 2014

Markov chain algorithms: A template for building future robust low-power systems

Biplab Deka; Alex Aaen Birklykke; Henry Duwe; Vikash K. Mansinghka; Rakesh Kumar

Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems.
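
A minimal way to see the claimed error tolerance is to inject faults into a random-walk SAT solver, which is itself a Markov chain over assignments. The fault model and parameters below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def unsatisfied(clauses, assignment):
    """Return the clauses not satisfied by the current assignment."""
    return [c for c in clauses
            if not any(assignment[abs(l)] == (l > 0) for l in c)]

def walksat(clauses, num_vars, error_rate=0.05, max_flips=10000):
    """Random-walk SAT: repeatedly flip a variable from an unsatisfied
    clause. Each step may additionally be corrupted by a fault that
    flips a random variable, modeling an unreliable substrate."""
    assignment = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
    for _ in range(max_flips):
        unsat = unsatisfied(clauses, assignment)
        if not unsat:
            return assignment
        var = abs(random.choice(random.choice(unsat)))
        assignment[var] = not assignment[var]
        # Inject a transition error: occasionally flip the wrong variable.
        if random.random() < error_rate:
            bad = random.randrange(1, num_vars + 1)
            assignment[bad] = not assignment[bad]
    return None

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    cnf = [(1, 2), (-1, 3), (-2, -3)]
    print(walksat(cnf, num_vars=3))
```

Because the algorithm's correctness comes from the chain's stationary behavior rather than from any single transition, occasional corrupted steps slow convergence but do not break it, which is the property the paper exploits.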


Asilomar Conference on Signals, Systems and Computers | 2013

Markov chain algorithms: A template for building future robust low power systems

Biplab Deka; Alex Aaen Birklykke; Henry Duwe; Vikash K. Mansinghka; Rakesh Kumar

Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications - Boolean satisfiability (SAT), sorting, LDPC decoding and clustering - how applications can be cast as Markov chain algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using Markov chains as an algorithmic template for future robust low-power systems.


Programming Language Design and Implementation (PLDI) | 2018

Incremental inference for probabilistic programs

Marco F. Cusumano-Towner; Benjamin Bichsel; Timon Gehr; Martin T. Vechev; Vikash K. Mansinghka

We present a novel approach for approximate sampling in probabilistic programs based on incremental inference. The key idea is to adapt the samples for a program P into samples for a program Q, thereby avoiding the expensive sampling computation for program Q. To enable incremental inference in probabilistic programming, our work: (i) introduces the concept of a trace translator which adapts samples from P into samples of Q, (ii) phrases this translation approach in the context of sequential Monte Carlo (SMC), which gives theoretical guarantees that the adapted samples converge to the distribution induced by Q, and (iii) shows how to obtain a concrete trace translator by establishing a correspondence between the random choices of the two probabilistic programs. We implemented our approach in two different probabilistic programming systems and showed that, compared to methods that sample the program Q from scratch, incremental inference can lead to orders of magnitude increase in efficiency, depending on how closely related P and Q are.
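
The core move, reweighting samples drawn for P so that they target Q, can be sketched in a few lines. The Gaussian stand-ins for P and Q and the identity trace translator below are illustrative assumptions; real traces record all of a program's random choices.

```python
import math
import random

def logpdf_p(x):   # P: latent ~ Normal(0, 1)
    return -0.5 * math.log(2 * math.pi) - x**2 / 2

def logpdf_q(x):   # Q: a slightly edited program, latent ~ Normal(0.5, 1)
    return -0.5 * math.log(2 * math.pi) - (x - 0.5)**2 / 2

def translate(samples_p):
    """Reweight samples of P by q/p (an identity trace translator) and
    resample, yielding approximate samples of Q without rerunning Q
    from scratch: one sequential Monte Carlo step."""
    log_w = [logpdf_q(x) - logpdf_p(x) for x in samples_p]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    return random.choices(samples_p, weights=w, k=len(samples_p))

if __name__ == "__main__":
    samples_p = [random.gauss(0, 1) for _ in range(10000)]
    samples_q = translate(samples_p)
    print("mean under Q (expect about 0.5):",
          sum(samples_q) / len(samples_q))
```

The paper's trace translators generalize this by mapping between the random choices of two structurally different programs, with the SMC phrasing supplying the convergence guarantee.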


Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages | 2018

A design proposal for Gen: probabilistic programming with fast custom inference via code generation

Marco F. Cusumano-Towner; Vikash K. Mansinghka

Probabilistic programming languages have the potential to make probabilistic modeling and inference easier to use in practice, but only if inference is sufficiently fast and accurate for real applications. Thus far, this has only been possible for domain-specific languages that focus on a restricted class of models and inference algorithms. This paper proposes a design for a probabilistic programming language called Gen, embedded in Julia, that aims to be sufficiently expressive and performant for general-purpose use. The language provides constructs for automatically generating optimized implementations of custom inference tactics based on static analysis of the target probabilistic model. This paper informally describes a language design for Gen, and shows that Gen is more expressive than Stan, a widely used language for hierarchical Bayesian modeling. A first benchmark shows that a prototype implementation of Gen can be as fast as Stan, only ∼1.4x slower than a hand-coded sampler in Julia, and ∼7,500x faster than Venture, one of the only other probabilistic languages with support for custom inference.
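
One way to picture the proposed code generation, as a hedged Python pantomime rather than Gen's actual Julia API: static analysis of a factorized model yields, for each variable, a specialized Metropolis-Hastings update that rescores only the factors mentioning that variable. The model and names below are illustrative assumptions.

```python
import math
import random

# Model description: each factor names the variables it reads.
FACTORS = {
    "prior_a": (("a",), lambda t: -t["a"]**2 / 2),
    "prior_b": (("b",), lambda t: -t["b"]**2 / 2),
    "lik":     (("a", "b"), lambda t: -(2.0 - (t["a"] + t["b"]))**2 / 2),
}

def generate_update(var):
    """'Compile' an MH update for var: static analysis picks out just the
    factors that mention it, since all other factors cancel in the
    acceptance ratio."""
    touched = [f for (deps, f) in FACTORS.values() if var in deps]
    def update(trace):
        old = trace[var]
        score_old = sum(f(trace) for f in touched)
        trace[var] = old + random.gauss(0, 0.5)
        score_new = sum(f(trace) for f in touched)
        if math.log(random.random()) >= score_new - score_old:
            trace[var] = old   # reject and restore
    return update

if __name__ == "__main__":
    update_a, update_b = generate_update("a"), generate_update("b")
    trace = {"a": 0.0, "b": 0.0}
    for _ in range(10000):
        update_a(trace)
        update_b(trace)
    print("final trace sample:", trace)
```

Doing this specialization ahead of time, instead of rescoring the whole model on every move as a fully dynamic system must, is one source of the performance gap relative to Venture that the benchmark reports.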


Programming Language Design and Implementation (PLDI) | 2018

Probabilistic programming with programmable inference

Vikash K. Mansinghka; Ulrich Schaechtle; Shivam Handa; Alexey Radul; Yutian Chen; Martin C. Rinard

We introduce inference metaprogramming for probabilistic programming languages, including new language constructs, a formalism, and the first demonstration of effectiveness in practice. Instead of relying on rigid black-box inference algorithms hard-coded into the language implementation as in previous probabilistic programming languages, inference metaprogramming enables developers to 1) dynamically decompose inference problems into subproblems, 2) apply inference tactics to subproblems, 3) alternate between incorporating new data and performing inference over existing data, and 4) explore multiple execution traces of the probabilistic program at once. Implemented tactics include gradient-based optimization, Markov chain Monte Carlo, variational inference, and sequential Monte Carlo techniques. Inference metaprogramming enables the concise expression of probabilistic models and inference algorithms across diverse fields, such as computer vision, data science, and robotics, within a single probabilistic programming language.
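
The subproblem-decomposition idea can be sketched as follows. The two-latent model and the pairing of a Metropolis tactic on one subproblem with a MAP-style gradient tactic on the other are illustrative assumptions, not Venture's constructs (and mixing a deterministic gradient step with MH yields a MAP-flavored hybrid rather than an exact posterior sampler).

```python
import math
import random

OBS = 3.0  # observed value; toy model: obs ~ Normal(a + b, 0.5)

def log_joint(a, b):
    prior = -(a**2 + b**2) / 2                    # a, b ~ Normal(0, 1)
    lik = -(OBS - (a + b))**2 / (2 * 0.5**2)      # obs ~ Normal(a + b, 0.5)
    return prior + lik

def mh_on_a(trace):
    """Tactic 1: Metropolis-Hastings restricted to the subproblem {a}."""
    a_new = trace["a"] + random.gauss(0, 0.5)
    log_ratio = log_joint(a_new, trace["b"]) - log_joint(trace["a"], trace["b"])
    if math.log(random.random()) < log_ratio:
        trace["a"] = a_new

def gradient_on_b(trace, lr=0.05):
    """Tactic 2: a MAP-style gradient step restricted to the subproblem {b}."""
    a, b = trace["a"], trace["b"]
    grad = -b + (OBS - (a + b)) / 0.5**2          # d(log_joint)/db
    trace["b"] = b + lr * grad

if __name__ == "__main__":
    trace = {"a": 0.0, "b": 0.0}
    for _ in range(5000):
        mh_on_a(trace)        # alternate tactics across subproblems
        gradient_on_b(trace)
    print("final trace:", trace)
```

The language constructs the paper introduces make this kind of per-subproblem tactic selection a first-class, dynamically scriptable operation rather than something baked into the runtime.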


Uncertainty in Artificial Intelligence (UAI) | 2008

Church: a language for generative models

Noah D. Goodman; Vikash K. Mansinghka; Daniel M. Roy; Keith Bonawitz; Joshua B. Tenenbaum


arXiv: Artificial Intelligence | 2014

Venture: a higher-order probabilistic programming platform with programmable inference

Vikash K. Mansinghka; Daniel Selsam; Yura N. Perov

Collaboration


Dive into Vikash K. Mansinghka's collaborations.

Top Co-Authors

Joshua B. Tenenbaum, Massachusetts Institute of Technology
Marco F. Cusumano-Towner, Massachusetts Institute of Technology
Alexey Radul, Massachusetts Institute of Technology
Charles Kemp, Carnegie Mellon University
Patrick Shafto, University of Louisville
Daniel M. Roy, Massachusetts Institute of Technology
Eric Jonas, Massachusetts Institute of Technology
Tejas D. Kulkarni, Massachusetts Institute of Technology
Yutian Chen, University of California