
Publication


Featured research published by Ueli Rutishauser.


Computer Vision and Pattern Recognition | 2004

Is bottom-up attention useful for object recognition?

Ueli Rutishauser; Dirk Walther; Christof Koch; Pietro Perona

A key problem in learning multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter that is not associated with the objects. We investigate empirically to what extent pure bottom-up attention can extract useful information about the location, size and shape of objects from images and demonstrate how this information can be utilized to enable unsupervised learning of objects from unlabeled images. Our experiments demonstrate that the proposed approach to using bottom-up attention is indeed useful for a variety of applications.
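The bottom-up selection step can be sketched as a center-surround contrast computation: regions that differ strongly from their surroundings are candidate object locations. This is a minimal illustration, not the saliency model used in the paper; the `box_blur` and `saliency_map` helpers, kernel sizes, and the single intensity channel are all assumptions for the sketch.

```python
import numpy as np

def box_blur(img, k):
    """Crude box blur via separable running means (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, p)
    return p

def saliency_map(img):
    """Center-surround contrast: |fine-scale - coarse-scale| intensity."""
    center = box_blur(img, 3)     # fine ("center") scale
    surround = box_blur(img, 15)  # coarse ("surround") scale
    return np.abs(center - surround)

# A dark scene with one bright blob: the blob should be the most salient region.
img = np.zeros((64, 64))
img[20:28, 40:48] = 1.0
sal = saliency_map(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(y, x)  # peak falls inside the bright blob
```

The location of the saliency peak would then seed the region handed to an unsupervised object-learning stage, standing in for manual labeling.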


Nature | 2010

Human memory strength is predicted by theta-frequency phase-locking of single neurons

Ueli Rutishauser; Ian B. Ross; Adam N. Mamelak; Erin M. Schuman

Learning from novel experiences is a major task of the central nervous system. In mammals, the medial temporal lobe is crucial for this rapid form of learning. The modification of synapses and neuronal circuits through plasticity is thought to underlie memory formation. The induction of synaptic plasticity is favoured by coordinated action-potential timing across populations of neurons. Such coordinated activity of neural populations can give rise to oscillations of different frequencies, recorded in local field potentials. Brain oscillations in the theta frequency range (3–8 Hz) are often associated with the favourable induction of synaptic plasticity as well as behavioural memory. Here we report the activity of single neurons recorded together with the local field potential in humans engaged in a learning task. We show that successful memory formation in humans is predicted by a tight coordination of spike timing with the local theta oscillation. More stereotyped spiking predicts better memory, as indicated by higher retrieval confidence reported by subjects. These findings provide a link between the known modulation of theta oscillations by many memory-modulating behaviours and circuit mechanisms of plasticity.
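The "tight coordination of spike timing with the local theta oscillation" described above is commonly quantified as the length of the mean resultant vector of spike phases. A minimal sketch with simulated data (the phase distributions, jitter, and sample counts are invented for illustration and are not the study's data or analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Spike phases relative to the theta cycle (radians), clustered near pi
# with modest jitter: a tightly phase-locked neuron.
locked = (np.pi + 0.3 * rng.standard_normal(200)) % (2 * np.pi)
# A neuron firing at uniformly random theta phases: no locking.
unlocked = rng.uniform(0, 2 * np.pi, 200)

def phase_locking(phases):
    """Mean resultant vector length: 1 = perfect locking, ~0 = none."""
    return np.abs(np.mean(np.exp(1j * phases)))

print(phase_locking(locked))    # near 1
print(phase_locking(unlocked))  # near 0
```

In the framing of the abstract, a larger resultant vector (more stereotyped spiking relative to theta) would predict stronger memory.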


Computer Vision and Image Understanding | 2005

Selective visual attention enables learning and recognition of multiple objects in cluttered scenes

Dirk Walther; Ueli Rutishauser; Christof Koch; Pietro Perona

A key problem in learning representations of multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter. Distinguishing individual objects in a scene would allow unsupervised learning of multiple objects from unlabeled images. There is psychophysical and neurophysiological evidence that the brain employs visual attention to select relevant parts of the image and to serialize the perception of individual objects. We propose a method for the selection of salient regions likely to contain objects, based on bottom-up visual attention. By comparing the performance of David Lowe's recognition algorithm with and without attention, we demonstrate in our experiments that the proposed approach can enable one-shot learning of multiple objects from complex scenes, and that it can strongly improve learning and recognition performance in the presence of large amounts of clutter.


Neuron | 2006

Single-Trial Learning of Novel Stimuli by Individual Neurons of the Human Hippocampus-Amygdala Complex

Ueli Rutishauser; Adam N. Mamelak; Erin M. Schuman

The ability to distinguish novel from familiar stimuli allows nervous systems to rapidly encode significant events following even a single exposure to a stimulus. This detection of novelty is necessary for many types of learning. Neurons in the medial temporal lobe (MTL) are critically involved in the acquisition of long-term declarative memories. During a learning task, we recorded from individual MTL neurons in vivo using microwire electrodes implanted in human epilepsy surgery patients. We report here the discovery of two classes of neurons in the hippocampus and amygdala that exhibit single-trial learning: novelty and familiarity detectors, which show a selective increase in firing for new and old stimuli, respectively. The neurons retain memory for the stimulus for 24 hr. Thus, neurons in the MTL contain information sufficient for reliable novelty-familiarity discrimination and also show rapid plasticity as a result of single-trial learning.


Systems, Man, and Cybernetics | 2005

Control and learning of ambience by an intelligent building

Ueli Rutishauser; Josef M. Joller; Rodney J. Douglas

Modern approaches to the architecture of living and working environments emphasize the dynamic reconfiguration of space and function to meet the needs, comfort, and preferences of their inhabitants. Although it is possible for a human operator to specify a configuration explicitly, the size, sophistication, and dynamic requirements of modern buildings demand that they have autonomous intelligence capable of satisfying the needs of their inhabitants without human intervention. We describe a multiagent framework for such intelligent building control that is deployed in a commercial building equipped with sensors and effectors. Multiple agents control subparts of the environment using fuzzy rules that link sensors and effectors. The agents communicate with one another by asynchronous, interest-based messaging. They implement a novel unsupervised online real-time learning algorithm that constructs a fuzzy rule-base, derived from very sparse data in a nonstationary environment. We have developed methods for evaluating the performance of systems of this kind. Our results demonstrate that the framework and the learning algorithm significantly improve the performance of the building.
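A fuzzy rule linking a sensor to an effector, of the general kind the agents use, can be sketched as follows. Everything here is invented for illustration: the `tri` membership function shape, the lux thresholds, and the `blind_position` consequents are assumptions, not the deployed rule base or the paper's learning algorithm.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blind_position(lux):
    """Two fuzzy rules: 'if bright, close the blinds; if dim, open them'.
    Consequents: closed = 1.0, open = 0.0; weighted-average defuzzification."""
    bright = tri(lux, 300.0, 800.0, 1300.0)
    dim = tri(lux, -500.0, 0.0, 500.0)
    total = bright + dim
    return (bright * 1.0 + dim * 0.0) / total if total > 0 else 0.5

print(blind_position(800.0))  # fully 'bright' -> blinds closed (1.0)
print(blind_position(100.0))  # 'dim' -> blinds open (0.0)
print(blind_position(400.0))  # both rules partly active -> intermediate position
```

The online learning algorithm described above would, in effect, construct and reweight rules like these from sparse sensor data rather than having them hand-coded.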


Journal of Vision | 2007

Probabilistic modeling of eye movement data during conjunction search via feature-based attention.

Ueli Rutishauser; Christof Koch

Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. We here engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings of V4 and frontal eye fields (FEF) neurons and predicts the gain modulation of these cells.
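The conditional probabilities at the heart of this analysis can be estimated by simple counting over fixation data: for each feature dimension, how often does a fixation land on an item sharing that feature with the target? This sketch uses invented data; the `p_share` helper, the feature values, and the two-dimension display are illustrative, not the study's stimuli or generative model.

```python
# Target in a conjunction search display, defined by color and size.
target = {"color": "red", "size": "large"}

# Simulated record of which item each fixation landed on.
fixated = [
    {"color": "red", "size": "small"},
    {"color": "red", "size": "large"},
    {"color": "green", "size": "large"},
    {"color": "red", "size": "small"},
    {"color": "red", "size": "large"},
]

def p_share(feature):
    """P(fixated item shares `feature` with the target)."""
    hits = sum(1 for item in fixated if item[feature] == target[feature])
    return hits / len(fixated)

print(p_share("color"))  # 0.8 -> color strongly guides fixations
print(p_share("size"))   # 0.6 -> size guides fixations less
```

In the abstract's terms, a higher conditional probability for one dimension (here color) is the evidence that top-down attention biases that dimension more strongly.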


Proceedings of the National Academy of Sciences of the United States of America | 2013

Synthesizing cognition in neuromorphic electronic systems

Emre Neftci; Jonathan Binas; Ueli Rutishauser; Elisabetta Chicca; Giacomo Indiveri; Rodney J. Douglas

Significance: Neuromorphic emulations express the dynamics of neural systems in analogous electronic circuits, offering a distributed, low-power technology for constructing intelligent systems. However, neuromorphic circuits are inherently imprecise and noisy, and there has been no systematic method for configuring reliable behavioral dynamics on these substrates. We describe such a method, which is able to install simple cognitive behavior on the neuromorphic substrate. Our approach casts light on the general question of how the neuronal circuits of the brain, and also future neuromorphic technologies, could implement cognitive behavior in a principled manner.

The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Activity of human hippocampal and amygdala neurons during retrieval of declarative memories

Ueli Rutishauser; Erin M. Schuman; Adam N. Mamelak

Episodic memories allow us to remember not only that we have seen an item before but also where and when we have seen it (context). Sometimes, we can confidently report that we have seen something (familiarity) but cannot recollect where or when it was seen. Thus, the two components of episodic recall, familiarity and recollection, can be behaviorally dissociated. It is not clear, however, whether these two components of memory are represented separately by distinct brain structures or different populations of neurons in a single anatomical structure. Here, we report that the spiking activity of single neurons in the human hippocampus and amygdala [the medial temporal lobe (MTL)] contain information about both components of memory. We analyzed a class of neurons that changed its firing rate to the second presentation of a previously novel stimulus. We found that the neuronal activity evoked by the presentation of a familiar stimulus (during retrieval) distinguishes stimuli that will be successfully recollected from stimuli that will not be recollected. Importantly, the ability to predict whether a stimulus is familiar is not influenced by whether the stimulus will later be recollected. We thus conclude that human MTL neurons contain information about both components of memory. These data support a continuous strength of memory model of MTL function: the stronger the neuronal response, the better the memory.


Current Biology | 2011

Single-Unit Responses Selective for Whole Faces in the Human Amygdala

Ueli Rutishauser; Oana Tudusciuc; Dirk Neumann; Adam N. Mamelak; A. Christopher Heller; Ian B. Ross; Linda Philpott; William W. Sutherling; Ralph Adolphs

The human amygdala is critical for social cognition from faces, as borne out by impairments in recognizing facial emotion following amygdala lesions [1] and differential activation of the amygdala by faces [2-5]. Single-unit recordings in the primate amygdala have documented responses selective for faces, their identity, or emotional expression [6, 7], yet how the amygdala represents face information remains unknown. Does it encode specific features of faces that are particularly critical for recognizing emotions (such as the eyes), or does it encode the whole face, a level of representation that might be the proximal substrate for subsequent social cognition? We investigated this question by recording from over 200 single neurons in the amygdalae of seven neurosurgical patients with implanted depth electrodes [8]. We found that approximately half of all neurons responded to faces or parts of faces. Approximately 20% of all neurons responded selectively only to the whole face. Although responding most to whole faces, these neurons paradoxically responded more when only a small part of the face was shown compared to when almost the entire face was shown. We suggest that the human amygdala plays a predominant role in representing global information about faces, possibly achieved through inhibition between individual facial features.


Neural Computation | 2009

State-dependent computation using coupled recurrent networks

Ueli Rutishauser; Rodney J. Douglas

Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
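The key persistence property, that a network keeps expressing its current state after the eliciting input is withdrawn, can be illustrated with a minimal rate model. This is a sketch, not the paper's equations: the two-unit reduction, the parameter values, the Euler step, and the hard firing-rate ceiling standing in for saturation are all assumptions. Transitions between states, which the paper implements with dedicated transition neurons, are not shown.

```python
import numpy as np

def swta_step(r, x, w_self=2.0, w_inh=0.5, r_max=1.0, dt=0.1):
    """One Euler step of a soft winner-take-all rate network:
    leak + self-excitation + input - shared inhibition, with rates
    rectified at 0 and capped at r_max."""
    drive = w_self * r + x - w_inh * r.sum()
    return np.clip(r + dt * (-r + drive), 0.0, r_max)

# Two units representing two states of a state machine.
r = np.zeros(2)
for t in range(300):
    # A transient input selects state 0; it is withdrawn at t = 100.
    x = np.array([1.0, 0.0]) if t < 100 else np.zeros(2)
    r = swta_step(r, x)

print(r)  # the selected state persists after the input is withdrawn
```

Because self-excitation outweighs the leak for the winner while shared inhibition keeps the loser silent, the activity pattern acts as a memory of the last selected state, which is what lets coupled sWTAs embed the states of a finite state machine.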

Collaboration


Dive into Ueli Rutishauser's collaborations.

Top Co-Authors

Adam N. Mamelak (City of Hope National Medical Center)
Christof Koch (Allen Institute for Brain Science)
Ralph Adolphs (California Institute of Technology)
Jan Kamiński (California Institute of Technology)
Jean-Jacques E. Slotine (Massachusetts Institute of Technology)
Jeffrey M. Chung (Cedars-Sinai Medical Center)
Oana Tudusciuc (California Institute of Technology)