Network


Latest external collaborations at the country level.

Hotspot


Research topics in which H. Sebastian Seung is active.

Publication


Featured research published by H. Sebastian Seung.


Machine Learning | 1997

Selective Sampling Using the Query by Committee Algorithm

Yoav Freund; H. Sebastian Seung; Eli Shamir; Naftali Tishby

We analyze the “query by committee” algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons.
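The two-member committee can be sketched on synthetic data (an illustrative toy, not the paper's analysis): two perceptrons trained from different random initializations stand in for samples from the version space, and a label from the stream is queried only where they disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceptron_fit(X, y, epochs=15, seed=0):
    """Classic perceptron; the random init is what makes committee
    members differ when several weight vectors fit the labels."""
    r = np.random.default_rng(seed)
    w = r.normal(size=X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if np.sign(w @ xi) != yi:
                w += yi * xi
    return w

# Synthetic stream labeled by a hidden "teacher" perceptron.
d = 5
w_true = rng.normal(size=d)
stream = rng.normal(size=(600, d))
labels = np.sign(stream @ w_true)

X_lab, y_lab = list(stream[:5]), list(labels[:5])  # small labeled seed
queries = 0
for x, y in zip(stream[5:], labels[5:]):
    # Two committee members (approximately) consistent with labels so far.
    w1 = perceptron_fit(np.array(X_lab), np.array(y_lab), seed=2 * queries)
    w2 = perceptron_fit(np.array(X_lab), np.array(y_lab), seed=2 * queries + 1)
    if np.sign(w1 @ x) != np.sign(w2 @ x):  # committee disagrees: query label
        X_lab.append(x)
        y_lab.append(y)
        queries += 1

w_hat = perceptron_fit(np.array(X_lab), np.array(y_lab), epochs=60)
agreement = float(np.mean(np.sign(stream @ w_hat) == labels))
print(f"queried {queries} of {len(stream) - 5} labels")
```

Only the informative (disputed) points are queried, so the committee requests a small fraction of the stream's labels while the final classifier still agrees with the teacher on most inputs.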


Nature | 2013

Connectomic reconstruction of the inner plexiform layer in the mouse retina

Moritz Helmstaedter; Kevin L. Briggman; Srinivas C. Turaga; Viren Jain; H. Sebastian Seung; Winfried Denk

Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce as they challenge data acquisition and analysis capabilities. Here we present for the mouse inner plexiform layer—the main computational neuropil region in the mammalian retina—the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data. We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.


Neuron | 2000

Stability of the Memory of Eye Position in a Recurrent Network of Conductance-Based Model Neurons

H. Sebastian Seung; Daniel D. Lee; Ben Y. Reis; David W. Tank

Studies of the neural correlates of short-term memory in a wide variety of brain areas have found that transient inputs can cause persistent changes in rates of action potential firing, through a mechanism that remains unknown. In a premotor area that is responsible for holding the eyes still during fixation, persistent neural firing encodes the angular position of the eyes in a characteristic manner: below a threshold position the neuron is silent, and above it the firing rate is linearly related to position. Both the threshold and linear slope vary from neuron to neuron. We have reproduced this behavior in a biophysically plausible network model. Persistence depends on precise tuning of the strength of synaptic feedback, and a relatively long synaptic time constant improves the robustness to mistuning.
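The core idea, persistence through precisely tuned synaptic feedback, can be shown in a one-unit rate model (a minimal sketch, not the paper's conductance-based network): with feedback gain tuned to 1 the unit integrates transient inputs and holds its rate; mistuning makes the memory leak away.

```python
import numpy as np

def integrator(w, pulses, dt=0.01, tau=0.1, steps=2000):
    """Single-unit rate model: tau * dr/dt = -r + w*r + input.
    With feedback gain w = 1 the leak is exactly cancelled, so a brief
    pulse leaves a persistent change in rate; w < 1 gives a leaky memory."""
    r = 0.0
    trace = []
    for t in range(steps):
        inp = pulses.get(t, 0.0)
        r += dt / tau * (-r + w * r + inp)
        trace.append(r)
    return np.array(trace)

pulses = {100: 1.0}                  # brief "saccade command" input
tuned = integrator(1.0, pulses)      # perfect tuning: persistent firing
leaky = integrator(0.9, pulses)      # 10% mistuning: memory decays
print(round(float(tuned[-1]), 3), round(float(leaky[-1]), 3))
```

Even a 10% mistuning erases the memory within seconds, which is why the abstract emphasizes both precise tuning and the stabilizing effect of a long synaptic time constant.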


Nature | 2000

Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit

Richard H. R. Hahnloser; Rahul Sarpeshkar; Misha Mahowald; Rodney J. Douglas; H. Sebastian Seung

Digital circuits such as the flip-flop use feedback to achieve multi-stability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals span a continuous range and the response to an input is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
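The hybrid digital/analogue behaviour can be illustrated with a small threshold-linear rate network (a toy sketch, not the silicon circuit): self-excitation plus uniform inhibition makes the differential modes unstable, so the dynamics select the unit with the strongest input (digital choice) and amplify it linearly (analogue gain).

```python
import numpy as np

def relu_net(W, x, steps=5000, dt=0.01):
    """Threshold-linear recurrent dynamics: dr/dt = -r + relu(W r + x)."""
    r = np.zeros(len(x))
    for _ in range(steps):
        r += dt * (-r + np.maximum(0.0, W @ r + x))
    return r

n = 5
# Self-excitation (1.2) plus broad inhibition (-0.5): the differential
# modes of -I + W are unstable (eigenvalue +0.2), driving selection.
W = 1.2 * np.eye(n) - 0.5 * np.ones((n, n))
x = np.array([0.1, 0.3, 0.9, 0.2, 0.05])
r = relu_net(W, x)
print(np.argmax(r), np.round(r, 2))
```

The winner settles at r = x / (1 - 0.7) = 3.0, an analogue amplification of its input by a factor of about 3.3, while the inhibition it generates silences every other unit.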


Nature Methods | 2012

Serial two-photon tomography for automated ex vivo mouse brain imaging

Timothy Ragan; Lolahon R. Kadiri; Kannan Umadevi Venkataraju; Karsten Bahlmann; Jason Sutin; Julian Taranda; Ignacio Arganda-Carreras; Yongsoo Kim; H. Sebastian Seung; Pavel Osten

Here we describe an automated method, named serial two-photon (STP) tomography, that achieves high-throughput fluorescence imaging of mouse brains by integrating two-photon microscopy and tissue sectioning. STP tomography generates high-resolution datasets that are free of distortions and can be readily warped in three dimensions, for example, for comparing multiple anatomical tracings. This method opens the door to routine systematic studies of neuroanatomy in mouse models of human brain disorders.


Nature | 2014

Space-time wiring specificity supports direction selectivity in the retina

Jinseop S. Kim; Matthew J. Greene; Aleksandar Zlateski; Kisuk Lee; Mark F. Richardson; Srinivas C. Turaga; Michael Purcaro; Matthew Balkam; Amy Robinson; Bardia Fallah Behabadi; Michael Campos; Winfried Denk; H. Sebastian Seung; EyeWirers

How does the mammalian retina detect motion? This classic problem in visual neuroscience has remained unsolved for 50 years. In search of clues, here we reconstruct Off-type starburst amacrine cells (SACs) and bipolar cells (BCs) in serial electron microscopic images with help from EyeWire, an online community of ‘citizen neuroscientists’. On the basis of quantitative analyses of contact area and branch depth in the retina, we find evidence that one BC type prefers to wire with a SAC dendrite near the SAC soma, whereas another BC type prefers to wire far from the soma. The near type is known to lag the far type in time of visual response. A mathematical model shows how such ‘space–time wiring specificity’ could endow SAC dendrites with receptive fields that are oriented in space–time and therefore respond selectively to stimuli that move in the outward direction from the soma.
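The space-time mechanism can be caricatured in a few lines (an illustrative toy with made-up onset times and kernels, not the paper's fitted model): the near-soma BC input responds with a lag, the far input responds fast, so an edge moving outward activates near before far and the two transients arrive at the dendrite in coincidence.

```python
import numpy as np

def dendrite_peak(direction, lag=10):
    """Toy SAC dendrite: sum a lagged 'near' BC transient and a fast
    'far' BC transient; return the peak of the summed signal."""
    t = np.arange(100)
    kernel = lambda t0: np.exp(-0.5 * ((t - t0) / 3.0) ** 2)  # input transient
    if direction == "outward":
        near_onset, far_onset = 20, 30   # moving edge reaches near input first
    else:
        near_onset, far_onset = 30, 20
    near = kernel(near_onset + lag)      # slow near-soma input, delayed by lag
    far = kernel(far_onset)              # fast far input
    return float((near + far).max())

out_peak = dendrite_peak("outward")   # lag cancels the travel time: coincidence
in_peak = dendrite_peak("inward")     # transients arrive separated in time
print(round(out_peak, 2), round(in_peak, 2))
```

Outward motion yields roughly double the peak of inward motion, the signature of a receptive field oriented in space-time.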


Science | 2014

Distinct Profiles of Myelin Distribution Along Single Axons of Pyramidal Neurons in the Neocortex

Giulio Srubek Tomassy; Daniel R. Berger; Hsu-Hsin Chen; Narayanan Kasthuri; Kenneth J. Hayworth; Alessandro Vercelli; H. Sebastian Seung; Jeff W. Lichtman; Paola Arlotta

Patchy Insulation (editor's summary): Myelin insulates neuronal axons so that their electrical signals travel faster and more efficiently. However, not all axons are myelinated equally. Tomassy et al. (p. 319; see the Perspective by Fields) obtained detailed images from two snippets of the adult mouse brain and generated three-dimensional reconstructions of individual neurons and their myelination patterns. The images show that some axons have long, unmyelinated stretches, which might offer sites for building new connections. Thus, myelination is not an all-or-none phenomenon but rather a characteristic of what may be a specific dialogue between the neuron and the surrounding myelin-producing cells. Mouse neurons display different and distinctive patterns of myelination.

Myelin is a defining feature of the vertebrate nervous system. Variability in the thickness of the myelin envelope is a structural feature affecting the conduction of neuronal signals. Conversely, the distribution of myelinated tracts along the length of axons has been assumed to be uniform. Here, we traced high-throughput electron microscopy reconstructions of single axons of pyramidal neurons in the mouse neocortex and built high-resolution maps of myelination. We find that individual neurons have distinct longitudinal distributions of myelin. Neurons in the superficial layers displayed the most diversified profiles, including a new pattern in which myelinated segments are interspersed with long, unmyelinated tracts. Our data indicate that the profile of longitudinal distribution of myelin is an integral feature of neuronal identity and may have evolved as a strategy to modulate long-distance communication in the neocortex.


Neural Computation | 2010

Convolutional networks can learn to generate affinity graphs for image segmentation

Srinivas C. Turaga; Joseph F. Murray; Viren Jain; Fabian Roth; Moritz Helmstaedter; Kevin L. Briggman; Winfried Denk; H. Sebastian Seung

Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
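A minimal sketch of the affinity-graph representation (the convolutional network itself is omitted): target affinities are derived from a ground-truth segmentation, and the thresholded graph is partitioned with connected components via union-find, the simplest of the standard partitioning algorithms the abstract mentions.

```python
import numpy as np

def affinities_from_labels(seg):
    """Training targets for an affinity-predicting network: an edge between
    neighboring pixels is 1 iff both lie in the same (nonzero) segment."""
    aff_x = (seg[:, 1:] == seg[:, :-1]) & (seg[:, 1:] > 0)
    aff_y = (seg[1:, :] == seg[:-1, :]) & (seg[1:, :] > 0)
    return aff_x.astype(float), aff_y.astype(float)

def segment(aff_x, aff_y, shape, threshold=0.5):
    """Partition the thresholded affinity graph with connected components."""
    H, W = shape
    parent = list(range(H * W))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for r in range(H):
        for c in range(W - 1):
            if aff_x[r, c] > threshold:
                union(r * W + c, r * W + c + 1)
    for r in range(H - 1):
        for c in range(W):
            if aff_y[r, c] > threshold:
                union(r * W + c, (r + 1) * W + c)
    return np.array([find(i) for i in range(H * W)]).reshape(shape)

seg = np.array([[1, 1, 0, 2],
                [1, 1, 0, 2],
                [0, 0, 0, 2]])
ax, ay = affinities_from_labels(seg)
out = segment(ax, ay, seg.shape)
print(out)
```

Round-tripping the ground truth through its own affinities recovers the two segments; in the paper's setting, the network's predicted affinities replace the ground-truth ones.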


Neuron | 2007

Motor learning with unstable neural representations

Uri Rokni; Andrew G. Richardson; Emilio Bizzi; H. Sebastian Seung

It is often assumed that learning takes place by changing an otherwise stable neural representation. To test this assumption, we studied changes in the directional tuning of primate motor cortical neurons during reaching movements performed in familiar and novel environments. During the familiar task, tuning curves exhibited slow random drift. During learning of the novel task, random drift was accompanied by systematic shifts of tuning curves. Our analysis suggests that motor learning is based on a surprisingly unstable neural representation. To explain these results, we propose that motor cortex is a redundant neural network, i.e., any single behavior can be realized by multiple configurations of synaptic strengths. We further hypothesize that synaptic modifications underlying learning contain a random component, which causes wandering among synaptic configurations with equivalent behaviors but different neural representations. We use a simple model to explore the implications of these assumptions.
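The redundancy hypothesis can be shown with a two-synapse toy (an illustrative caricature, not the paper's model): if behaviour depends only on the sum of the weights, a noisy random walk confined to the null space re-shapes each synapse, and hence the "tuning" it supports, while behaviour stays fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Redundant readout: behaviour = w1 + w2, so any step of the form
# (+s, -s) lies in the null space and leaves behaviour unchanged.
w = np.array([0.5, 0.5])
for _ in range(1000):
    step = rng.normal(scale=0.01)
    w += np.array([step, -step])   # noisy plasticity along w1 + w2 = const

print(round(float(w.sum()), 6), round(float(w[0]), 3))
```

After a thousand noisy updates the individual weights have wandered far from their starting values while the behavioural readout is unchanged, mirroring the slow tuning-curve drift reported during the familiar task.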


Computer Vision and Pattern Recognition | 2010

Boundary Learning by Optimization with Topological Constraints

Viren Jain; Benjamin Bollmann; Mark Richardson; Daniel R. Berger; Moritz Helmstaedter; Kevin L. Briggman; Winfried Denk; Jared B. Bowden; John M. Mendenhall; Wickliffe C. Abraham; Kristen M. Harris; Narayanan Kasthuri; Kenneth J. Hayworth; Richard Schalek; Juan Carlos Tapia; Jeff W. Lichtman; H. Sebastian Seung

Recent studies have shown that machine learning can improve the accuracy of detecting object boundaries in images. In the standard approach, a boundary detector is trained by minimizing its pixel-level disagreement with human boundary tracings. This naive metric is problematic because it is overly sensitive to boundary locations. This problem is solved by metrics provided with the Berkeley Segmentation Dataset, but these can be insensitive to topological differences, such as gaps in boundaries. Furthermore, the Berkeley metrics have not been useful as cost functions for supervised learning. Using concepts from digital topology, we propose a new metric called the warping error that tolerates disagreements over boundary location, penalizes topological disagreements, and can be used directly as a cost function for learning boundary detection, in a method that we call Boundary Learning by Optimization with Topological Constraints (BLOTC). We trained boundary detectors on electron microscopic images of neurons, using both BLOTC and standard training. BLOTC produced substantially better performance on a 1.2 million pixel test set, as measured by both the warping error and the Rand index evaluated on segmentations generated from the boundary labelings. We also find our approach yields significantly better segmentation performance than either gPb-OWT-UCM or multiscale normalized cut, as well as Boosted Edge Learning trained directly on our data.
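The Rand index used for evaluation has a simple pairwise definition, sketched here in a brute-force form (fine for small arrays; real evaluations use contingency-table formulas):

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index between two segmentations: the fraction of pixel pairs
    on which the labelings agree about same-segment vs different-segment.
    Label values themselves don't matter, only the induced partition."""
    a, b = np.ravel(a), np.ravel(b)
    agree = total = 0
    for i, j in combinations(range(len(a)), 2):
        total += 1
        if (a[i] == a[j]) == (b[i] == b[j]):
            agree += 1
    return agree / total

print(rand_index([0, 0, 1, 1], [5, 5, 7, 7]))  # same partition, relabeled
```

Because it scores partitions rather than pixel values, the Rand index rewards topologically correct segmentations even when boundary locations disagree slightly, which is why it complements the warping error here.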

Collaboration


Explore H. Sebastian Seung's collaborations.

Top Co-Authors

Aleksandar Zlateski, Massachusetts Institute of Technology

Srinivas C. Turaga, Massachusetts Institute of Technology

Viren Jain, Massachusetts Institute of Technology

Kisuk Lee, Massachusetts Institute of Technology

Daniel R. Berger, Massachusetts Institute of Technology

Ignacio Arganda-Carreras, Massachusetts Institute of Technology