Ben Poole
Stanford University
Publications
Featured research published by Ben Poole.
european conference on computer vision | 2016
Jonathan T. Barron; Ben Poole
We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth superresolution, colorization, and semantic segmentation) while being 10–1000× faster than baseline techniques with comparable accuracy, and producing lower-error output than techniques with comparable runtimes. The bilateral solver is fast, robust, straightforward to generalize to new domains, and simple to integrate into deep learning pipelines.
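The core idea, an edge-aware smoothness objective balanced against a fidelity term, can be illustrated with a toy 1D weighted-least-squares smoother. This is a minimal sketch in the same spirit as the bilateral solver, not the paper's bilateral-grid formulation; all parameter names and values (`lam`, `sigma`, the signal sizes) are illustrative assumptions.

```python
import numpy as np

n = 100
ref = np.zeros(n)
ref[n // 2:] = 1.0                            # reference signal with one sharp edge
rng = np.random.default_rng(0)
target = ref + 0.3 * rng.standard_normal(n)   # noisy target to be smoothed

lam = 0.1     # fidelity weight (how strongly output must match the target)
sigma = 0.2   # edge sensitivity of the smoothness weights

# Bilateral-style affinities: neighbor weights fall off across reference edges,
# so smoothing is strong on flat regions and nearly zero across the edge.
w = np.exp(-((ref[1:] - ref[:-1]) ** 2) / (2 * sigma**2))

# Minimize  sum_i w_i (x_i - x_{i+1})^2 + lam * sum_i (x_i - target_i)^2
# by solving the normal equations (lam*I + L) x = lam * target,
# where L is the weighted graph Laplacian of the 1D chain.
A = lam * np.eye(n)
for i in range(n - 1):
    A[i, i] += w[i]
    A[i + 1, i + 1] += w[i]
    A[i, i + 1] -= w[i]
    A[i + 1, i] -= w[i]
smoothed = np.linalg.solve(A, lam * target)
```

On flat regions the weights are near one and the noise is averaged away, while the near-zero weight at the reference edge keeps the step sharp, which is the "edge-aware" behavior the abstract describes.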
Journal of Cognitive Neuroscience | 2011
John R. Anderson; Daniel Bothell; Jon M. Fincham; Abraham R. Anderson; Ben Poole; Yulin Qin
Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two part-games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and for activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions were confirmed about how tonic and phasic activation in different regions would vary with condition. These results support the Decomposition Hypothesis: that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed the highest tonic activity. This individual-difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits.
The Journal of Neuroscience | 2016
Jonathan C.S. Leong; Jennifer Judson Esch; Ben Poole; Surya Ganguli; Thomas R. Clandinin
Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in space-time. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein–Reichardt correlator and the Barlow–Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point space-time correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input-pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila resembles that of simple cells in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation.

SIGNIFICANCE STATEMENT: Feature-selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premier experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges.
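How an obliquely space-time-oriented linear filter produces direction selectivity can be sketched with the classic motion-energy quadrature construction (Adelson–Bergen style). This is a textbook illustration of the principle, not the paper's fitted T5 model; the grid sizes, frequencies, and envelope width are all assumptions chosen for clarity.

```python
import numpy as np

nx, nt = 64, 64
x = np.arange(nx)[:, None]
t = np.arange(nt)[None, :]
k = 2 * np.pi * 4 / nx    # spatial frequency
w = 2 * np.pi * 4 / nt    # temporal frequency

# Gaussian envelope centered in the space-time window.
env = np.exp(-(((x - nx / 2) ** 2) + ((t - nt / 2) ** 2)) / (2 * 12**2))

# Quadrature pair of filters obliquely oriented in space-time: the phase
# argument (k*x - w*t) encodes a preferred (rightward) velocity.
f_sin = env * np.sin(k * x - w * t)
f_cos = env * np.cos(k * x - w * t)

def energy(stim):
    """Motion energy: summed squared outputs of the quadrature filter pair."""
    return np.sum(f_sin * stim) ** 2 + np.sum(f_cos * stim) ** 2

grating_preferred = np.sin(k * x - w * t)   # drifts in the preferred direction
grating_null = np.sin(k * x + w * t)        # drifts in the null direction

print(energy(grating_preferred) > energy(grating_null))
```

The grating drifting in the filter's preferred direction stays phase-aligned with the quadrature pair and yields a large energy, while the null-direction grating decorrelates from both filters, giving the directional asymmetry the abstract describes.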
bioRxiv | 2017
Logan Grosenick; Michael Broxton; Christina K. Kim; Conor Liston; Ben Poole; Samuel Yang; Aaron S. Andalman; Edward Scharff; Noy Cohen; Ofer Yizhar; Charu Ramakrishnan; Surya Ganguli; Patrick Suppes; Marc Levoy; Karl Deisseroth
Tracking the coordinated activity of cellular events through volumes of intact tissue is a major challenge in biology that has inspired significant technological innovation. Yet scanless measurement of the high-speed activity of individual neurons across three dimensions in scattering mammalian tissue remains an open problem. Here we develop and validate a computational imaging approach (SWIFT) that integrates high-dimensional, structured statistics with light field microscopy to allow the synchronous acquisition of single-neuron resolution activity throughout intact tissue volumes as fast as a camera can capture images (currently up to 100 Hz at full camera resolution), attaining rates needed to keep pace with emerging fast calcium and voltage sensors. We demonstrate that this large field-of-view, single-snapshot volume acquisition method—which requires only a simple and inexpensive modification to a standard fluorescence microscope—enables scanless capture of coordinated activity patterns throughout mammalian neural volumes. Further, the volumetric nature of SWIFT also allows fast in vivo imaging, motion correction, and cell identification throughout curved subcortical structures like the dorsal hippocampus, where cellular-resolution dynamics spanning hippocampal subfields can be simultaneously observed during a virtual context learning task in a behaving animal. SWIFT’s ability to rapidly and easily record from volumes of many cells across layers opens the door to widespread identification of dynamical motifs and timing dependencies among coordinated cell assemblies during adaptive, modulated, or maladaptive physiological processes in neural systems.
international conference on learning representations | 2017
Vincent Dumoulin; Ishmael Belghazi; Ben Poole; Alex Lamb; Martín Arjovsky; Olivier Mastropietro; Aaron C. Courville
international conference on learning representations | 2017
Eric Jang; Shixiang Gu; Ben Poole
international conference on learning representations | 2017
Luke Metz; Ben Poole; David Pfau; Jascha Sohl-Dickstein
international conference on machine learning | 2017
Maithra Raghu; Ben Poole; Jon M. Kleinberg; Surya Ganguli; Jascha Sohl-Dickstein
international conference on machine learning | 2014
Jascha Sohl-Dickstein; Ben Poole; Surya Ganguli
neural information processing systems | 2016
Ben Poole; Subhaneil Lahiri; Maithra Raghu; Jascha Sohl-Dickstein; Surya Ganguli