Alexey Radul
Massachusetts Institute of Technology
Publications
Featured research published by Alexey Radul.
Language and Technology Conference | 2006
Gregory Marton; Alexey Radul
The TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses. Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. While human evaluation is the most accurate way to compare systems, approximate automatic evaluation becomes critical during system development. We present Nuggeteer, a new automatic evaluation tool for nugget-based tasks. Like the first such tool, Pourpre, Nuggeteer uses words in common between a candidate answer and the answer key to approximate human judgements. Unlike Pourpre, but like human assessors, Nuggeteer creates a judgement for each candidate-nugget pair, and can use existing judgements instead of guessing. This creates a more readily interpretable aggregate score, and allows developers to track individual nuggets through the variants of their system. Nuggeteer is quantitatively comparable in performance to Pourpre, and provides qualitatively better feedback to developers.
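A minimal sketch of the word-overlap judging idea described in the abstract; the function names and the 0.5 threshold are illustrative assumptions, not Nuggeteer's actual interface or parameters:

```python
# Hypothetical sketch of word-overlap nugget judging (not Nuggeteer's code).

def judge(nugget_description: str, response: str, threshold: float = 0.5) -> bool:
    """Judge a single candidate-nugget pair by word overlap with the nugget description."""
    nugget_words = set(nugget_description.lower().split())
    response_words = set(response.lower().split())
    if not nugget_words:
        return False
    overlap = len(nugget_words & response_words) / len(nugget_words)
    return overlap >= threshold

def score_response(nuggets: list[str], response: str) -> float:
    """Fraction of nuggets judged present in the response (a crude recall proxy)."""
    if not nuggets:
        return 0.0
    return sum(judge(n, response) for n in nuggets) / len(nuggets)

# Example: a response covering one of two nuggets scores 0.5.
nuggets = ["discovered radium and polonium", "won two Nobel prizes"]
print(score_response(nuggets, "Marie Curie discovered radium and polonium."))
```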
Dynamic Languages Symposium | 2007
Alexey Radul
Reasoning with probabilistic models is a widespread and successful technique in areas ranging from computer vision, to natural language processing, to bioinformatics. Currently, these reasoning systems are either coded from scratch in general-purpose languages or use formalisms such as Bayesian networks that have limited expressive power. In both cases, the resulting systems are difficult to modify, maintain, compose, and interoperate with. This work presents Probabilistic Scheme, an embedding of probabilistic computation into Scheme. This gives programmers an expressive language for implementing modular probabilistic models that integrate naturally with the rest of Scheme.
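A rough Python sketch of the kind of embedded probabilistic program the abstract describes: ordinary code may make random choices anywhere, and an inference query (here, naive rejection sampling) conditions on an observation. The names and the sampling strategy are illustrative assumptions, not Probabilistic Scheme's actual API:

```python
import random

# Illustrative sketch only: flip() is an elementary random choice usable in
# ordinary code; query() conditions on an observation by rejection sampling.

def flip(p: float) -> bool:
    """An elementary random choice."""
    return random.random() < p

def model() -> tuple[bool, bool]:
    """A hidden event observed by two noisy sensors."""
    event = flip(0.1)
    sensor_a = flip(0.9) if event else flip(0.05)
    sensor_b = flip(0.9) if event else flip(0.05)
    return event, (sensor_a and sensor_b)

def query(n: int = 100_000) -> float:
    """Estimate P(event | both sensors fired) by rejection sampling."""
    accepted = [event for event, both_fired in (model() for _ in range(n)) if both_fired]
    return sum(accepted) / len(accepted) if accepted else float("nan")

print(query())  # approximately 0.97 for these parameters
```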
International Workshop on RESTful Design | 2010
Ian Jacobi; Alexey Radul
Traditionally, distributed computing problems have been solved by partitioning data into chunks small enough to be handled by commodity hardware. However, such partitioning is not possible in cases with a large number of dependencies or high dimensionality, such as reasoning and expert systems, rendering such problems less tractable for distributed systems. By partitioning the problem, rather than the data, we can achieve a more general application of distributed computing. Partitioning the problem rather than the data may require tighter communication between members of the network, even though many networks can only be assumed to be weakly connected. We believe that a decentralized implementation of propagator networks may resolve this problem. By placing several constraints on the merging of data transmitted over the network, we can easily synchronize information and achieve eventual convergence without implementing the mechanisms needed for serialization. To this end, we present the design of a RESTful messaging mechanism, currently being implemented, that allows distributed propagator networks to be created, using mechanisms that result in eventual convergence of knowledge across a weakly connected network. A RESTful design also reduces bandwidth usage during synchronization through the use of caching.
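The merge constraints mentioned above can be illustrated with a small sketch: if a cell's merge operation is idempotent, commutative, and associative (here, interval intersection), then duplicated or reordered messages cannot change the final cell contents, so peers converge without global serialization. The class and method names below are hypothetical, not the paper's implementation:

```python
# Hypothetical sketch of a propagator cell whose merge is a lattice join.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    low: float
    high: float

    def merge(self, other: "Interval") -> "Interval":
        """Intersection: combining partial information can only narrow it."""
        return Interval(max(self.low, other.low), min(self.high, other.high))

class Cell:
    def __init__(self) -> None:
        self.content = Interval(float("-inf"), float("inf"))  # "know nothing"

    def receive(self, info: Interval) -> None:
        """Apply an incoming message; safe to replay or reorder."""
        self.content = self.content.merge(info)

# Two peers receive the same messages in different orders (one duplicated)
# and still end up with identical contents.
a, b = Cell(), Cell()
msgs = [Interval(0, 10), Interval(3, 12), Interval(3, 12)]
for m in msgs:
    a.receive(m)
for m in reversed(msgs):
    b.receive(m)
assert a.content == b.content == Interval(3, 10)
```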
Programming Language Design and Implementation | 2018
Vikash K. Mansinghka; Ulrich Schaechtle; Shivam Handa; Alexey Radul; Yutian Chen; Martin C. Rinard
We introduce inference metaprogramming for probabilistic programming languages, including new language constructs, a formalism, and the first demonstration of effectiveness in practice. Instead of relying on rigid black-box inference algorithms hard-coded into the language implementation as in previous probabilistic programming languages, inference metaprogramming enables developers to 1) dynamically decompose inference problems into subproblems, 2) apply inference tactics to subproblems, 3) alternate between incorporating new data and performing inference over existing data, and 4) explore multiple execution traces of the probabilistic program at once. Implemented tactics include gradient-based optimization, Markov chain Monte Carlo, variational inference, and sequential Monte Carlo techniques. Inference metaprogramming enables the concise expression of probabilistic models and inference algorithms across diverse fields, such as computer vision, data science, and robotics, within a single probabilistic programming language.
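A hypothetical Python sketch of the decompose-and-apply-tactics idea: the inference program selects a subproblem (a subset of latent variables) and applies a generic Metropolis-Hastings tactic to it, rather than invoking one monolithic built-in algorithm. The model, names, and tactic are illustrative assumptions, not the paper's language constructs:

```python
import math
import random

# Illustrative sketch only: a generic MH "tactic" applied to a user-chosen
# subproblem (here, the single latent variable "mu").

def log_density(state: dict) -> float:
    """Toy model: mu ~ Normal(0, 1); observations 2.0 and 2.5 with unit noise."""
    mu = state["mu"]
    log_prior = -0.5 * mu * mu
    log_lik = -0.5 * ((2.0 - mu) ** 2 + (2.5 - mu) ** 2)
    return log_prior + log_lik

def mh_tactic(state: dict, subproblem: list, log_density, steps: int = 500) -> dict:
    """Random-walk Metropolis-Hastings restricted to the chosen variables."""
    state = dict(state)
    for _ in range(steps):
        proposal = dict(state)
        var = random.choice(subproblem)
        proposal[var] = state[var] + random.gauss(0.0, 0.5)
        log_ratio = log_density(proposal) - log_density(state)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            state = proposal
    return state

# "Inference program": pick the subproblem {mu} and apply the MH tactic to it.
trace = {"mu": 0.0}
trace = mh_tactic(trace, ["mu"], log_density)
print(trace["mu"])  # samples concentrate near the exact posterior mean, 1.5
```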
Archive | 2009
Gerald Jay Sussman; Alexey Radul
Text Retrieval Conference | 2006
Boris Katz; Gregory Marton; Sue Felshin; Daniel Loreto; Ben Lu; Federico Mora; Özlem Uzuner; Michael McGraw-Herdeg; Natalie Cheung; Alexey Radul; Yuan Kui Shen; Yuan Luo; Gabriel Zaccak
arXiv: Learning | 2015
Ulrich Schaechtle; Ben Zinberg; Alexey Radul; Kostas Stathis; Vikash K. Mansinghka
arXiv: Artificial Intelligence | 2017
Marco F. Cusumano-Towner; Alexey Radul; David Wingate; Vikash K. Mansinghka
arXiv: Machine Learning | 2016
Ulrich Schaechtle; Feras Saad; Alexey Radul; Vikash K. Mansinghka
Archive | 2012
Alex Shinn; John Cowan; Arthur A. Gleckler; Steven Ganz; Alexey Radul; Olin Shivers; Aaron W. Hsu; Jeffrey T. Read; Alaric Snell-Pym; Bradley Lucier; David R. Rush; Gerald Jay Sussman; Emmanuel Medernach; Benjamin L. Russel; Richard Kelsey; William D. Clinger; Jonathan Rees; Michael Sperber; R. Kent Dybvig; Matthew Flatt; Anton Van Straaten