Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Misha Mahowald is active.

Publication


Featured research published by Misha Mahowald.


Nature | 2000

Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit

Richard H. R. Hahnloser; Rahul Sarpeshkar; Misha Mahowald; Rodney J. Douglas; H. Sebastian Seung

Digital circuits such as the flip-flop use feedback to achieve multistability, and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are represented over a continuous range and the response to a given input is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
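
As a rough illustration of the mechanism sketched in this abstract, the following Python toy model (my own sketch with assumed parameters; a rate model, not the paper's silicon circuit) uses linear-threshold units with self-excitation and uniform inhibition: the recurrent feedback digitally selects the more strongly driven neuron, and the selected neuron's response is an analogue-amplified copy of its input, so doubling the input doubles the output.

import numpy as np

# Illustrative parameters (assumed, not from the paper)
N = 16
w_e, w_i = 0.9, 0.5                       # self-excitation and uniform inhibition
W = w_e * np.eye(N) - w_i * np.ones((N, N))
tau, dt, steps = 10.0, 0.1, 5000

def steady_state(b):
    """Integrate tau * dx/dt = -x + max(0, b + W x) from rest."""
    x = np.zeros(N)
    for _ in range(steps):
        x += (dt / tau) * (-x + np.maximum(0.0, b + W @ x))
    return x

b = np.zeros(N)
b[3], b[11] = 1.0, 0.6                    # two competing inputs
x = steady_state(b)
print(np.argmax(x), round(float(x.max()), 2))    # neuron 3 is selected, rate ~1.67 (gain > 1)
x2 = steady_state(2.0 * b)
print(np.argmax(x2), round(float(x2.max()), 2))  # same winner, response roughly doubled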


Archive | 1994

An Analog VLSI System for Stereoscopic Vision

Misha Mahowald

Foreword. Preface.
1: Synthesis.
2: The Silicon Retina. 2.1. Anatomical Models. 2.2. Architecture of the Silicon Retina. 2.3. Photoreceptors. 2.4. Horizontal Cells. 2.5. Bipolar Cells. 2.6. Physical Constraints on Information Processing. 2.7. Emergent Properties.
3: The Silicon Optic Nerve. 3.1. Summary of Existing Techniques. 3.2. Address-Event Representation. 3.3. Model of Data-Transfer Timing Efficiency. 3.4. Data Transfer in One Dimension. 3.5. Two-Dimensional Retina–Receiver System. 3.6. Advantages of Address Events.
4: Stereopsis. 4.1. Stereocorrespondence. 4.2. Neurophysiology. 4.3. Stereocorrespondence Algorithms. 4.4. Stereocorrespondence Chip. 4.5. Experiments. 4.6. Stereocorrespondence as a Model of Cortical Function.
5: System.
A: Simple Circuits. A.1. Transistors. A.2. Current Mirrors. A.3. Differential Pairs. A.4. Transconductance Amplifiers. A.5. Low-Pass Filter. A.6. Resistor.
Bibliography. Index.
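
Chapter 3's address-event representation (AER) can be illustrated with a small software toy (my own sketch, not the book's circuit design): each spike is transmitted as the address of the unit that produced it, so traffic on the shared "optic nerve" bus scales with activity rather than with the size of the pixel array.

import numpy as np

ROWS, COLS = 8, 8   # toy pixel array

def encode_aer(spike_frames):
    """spike_frames: iterable of (t, boolean ROWSxCOLS array); returns a list of (t, row, col) events."""
    events = []
    for t, frame in spike_frames:
        for r, c in zip(*np.nonzero(frame)):
            events.append((t, int(r), int(c)))   # the transmitted 'address' is just (row, col)
    return events

def decode_aer(events):
    """Rebuild per-timestep spike frames from the event stream on the receiver side."""
    frames = {}
    for t, r, c in events:
        frames.setdefault(t, np.zeros((ROWS, COLS), dtype=bool))[r, c] = True
    return frames

frame = np.zeros((ROWS, COLS), dtype=bool)
frame[1, 2] = frame[4, 4] = frame[7, 0] = True   # a sparse frame: only three pixels spike
events = encode_aer([(0, frame)])
print(events)                                    # three events instead of 64 scanned samples
print(int(decode_aer(events)[0].sum()))          # 3 spikes recovered at the receiver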


Scientific American | 1991

The Silicon Retina

Misha Mahowald; Carver A. Mead

Robotic vision is the most fascinating and feasible application for neuromorphic engineering, since processing images in real time with low power consumption is the field's most critical requirement. Conventional machine vision systems have been implemented using CMOS (complementary metal-oxide-semiconductor) imagers or CCD (charge-coupled device) cameras interfaced to digital processing systems running serial algorithms. These systems often consume too much power, are too large, and compute at too high a cost [1]. Though neuromorphic technology has advantages in these areas, there are some disadvantages too: current implementations have less programmable architectures, for example, than digital processing technologies. In addition, digital image processing has a long history, and highly developed hardware and software for pattern recognition are readily available. We therefore think it is practical, at least at the current stage of progress in neuromorphic engineering, to combine neuromorphic sensors with conventional digital technology to implement, for robot vision, the computational essence of what the brain does.

On this basis, we designed a neuromorphic vision system consisting of analog VLSI (very-large-scale integration) neuromorphic chips and field-programmable gate array (FPGA) circuits. Figure 1 shows a block diagram of the system, which consists of silicon retinas, 'simple-cell' chips (named after the simple cells in the V1 area of the brain) and FPGA circuits. The silicon retina is implemented with active pixel sensors (conventionally sampled photosensors) [2] and has a concentric center-surround, Laplacian-of-Gaussian-like receptive field [2]. Its output image is transferred to the simple-cell chips serially. These chips then aggregate analog pixel outputs from the silicon retina to generate an orientation-selective response similar to the simple-cell response in the primary visual cortex [3]. The architecture mimics the feed-forward model proposed by Hubel and Wiesel [4], and efficiently computes the two-dimensional Gabor-like receptive field using the concentric center-surround receptive field. Signal transfer from the silicon retina to the simple-cell chip is performed using analog voltages, aided by analog memories embedded in each pixel of the simple-cell chip. The output image of the simple-cell chip is then converted into a digital signal and fed into the FPGA circuits, where the image is further processed with programmable logic.
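
The feed-forward scheme described above can be mimicked numerically; the sketch below (my own illustration with assumed filter sizes and image dimensions, not the chips themselves) uses a difference-of-Gaussians filter as the center-surround "retina" stage and sums its output along a short line segment to obtain an elongated, orientation-selective "simple cell" response, in the spirit of the Hubel and Wiesel model.

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians stand-in for the retina's center-surround receptive field."""
    return gaussian_filter(img, sigma_c) - gaussian_filter(img, sigma_s)

def simple_cell(img, row, col, length=7, vertical=True):
    """Sum center-surround outputs along a short line segment (feed-forward Hubel-Wiesel model)."""
    cs = center_surround(img)
    offsets = range(-(length // 2), length // 2 + 1)
    if vertical:
        return sum(cs[row + d, col] for d in offsets)
    return sum(cs[row, col + d] for d in offsets)

img = np.zeros((32, 32))
img[8:24, 16] = 1.0                                               # a vertical bright bar
print(round(float(simple_cell(img, 16, 16, vertical=True)), 2))   # strong response to matching orientation
print(round(float(simple_cell(img, 16, 16, vertical=False)), 2))  # much weaker response across the bar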


Nature Neuroscience | 1999

Feedback interactions between neuronal pointers and maps for attentional processing

Richard H. R. Hahnloser; Rodney J. Douglas; Misha Mahowald; Klaus Hepp

Neural networks combining local excitatory feedback with recurrent inhibition are valuable models of neocortical processing. However, incorporating the attentional modulation observed in cortical neurons is problematic. We propose a simple architecture for attentional processing. Our network consists of two reciprocally connected populations of excitatory neurons; a large population (the map) processes a feedforward sensory input, and a small population (the pointer) modulates location and intensity of this processing in an attentional manner dependent on a control input to the pointer. This pointer-map network has rich dynamics despite its simple architecture and explains general computational features related to attention/intention observed in neocortex, making it interesting both theoretically and experimentally.
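
A toy version of the pointer-map architecture can be written down directly from this description; the rate-model sketch below (illustrative parameters of my own choosing, not the authors' values) connects a single pointer unit reciprocally to one location of a 50-unit map, and driving the pointer boosts the map's response to the stimulus at the attended location relative to an equal stimulus elsewhere.

import numpy as np

N = 50
locs = np.arange(N)

def bump(center, width=3.0):
    """Gaussian bump of activity or weights centred on a map location."""
    return np.exp(-0.5 * ((locs - center) / width) ** 2)

sensory = bump(15) + bump(35)      # two equal feedforward stimuli on the map
attend = 35                        # the control input aims the pointer at location 35
w_pm = bump(attend)                # reciprocal pointer-map weights (assumed Gaussian)

g = 0.6                            # strength of the reciprocal loop (assumed, < 1 for stability)
tau, dt, steps = 10.0, 0.1, 4000
m = np.zeros(N)                    # map firing rates
p = 0.0                            # pointer firing rate

for _ in range(steps):
    dm = -m + np.maximum(0.0, sensory + g * p * w_pm)
    dp = -p + max(0.0, 1.0 + g * float(w_pm @ m) / w_pm.sum())
    m += (dt / tau) * dm
    p += (dt / tau) * dp

print(round(float(m[15]), 2), round(float(m[35]), 2))   # the attended location ends up stronger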


Journal of Neurocytology | 1996

The role of synapses in cortical computation.

Rodney J. Douglas; Misha Mahowald; Kevan A. C. Martin; Kenneth J. Stratford

The synapse, first introduced as a physiological hypothesis by C. S. Sherrington at the close of the nineteenth century, has, 100 years on, become the nexus for anatomical and functional investigations of interneuronal communication. A number of hypotheses have been proposed that give local synaptic interactions specific roles in generating an algebra or logic for computations in the neocortex. Experimental work, however, has provided little support for such schemes. Instead, both structural and functional studies indicate that characteristically cortical functions, e.g., the identification of the motion or orientation of objects, involve computations that must be achieved with high accuracy through the collective action of hundreds or thousands of neurons connected in recurrent microcircuits. Some important principles that emerge from this collective action can effectively be captured by simple electronic models. More detailed models explain the nature of the complex computations performed by the cortical circuits and how the computations remain so remarkably robust in the face of a number of sources of noise, including variability in the anatomical connections, large variance in the synaptic responses and in the trial-to-trial output of single neurons, and weak or degraded input signals.
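
One aspect of this argument, that accuracy can emerge from the collective action of hundreds of noisy neurons, is easy to illustrate with a toy population code (a sketch of my own, not the authors' cortical-circuit models): single cells with Poisson-like trial-to-trial variability are individually unreliable, yet a population-vector readout over a few hundred of them recovers the stimulus orientation consistently.

import numpy as np

rng = np.random.default_rng(0)
N = 300                                              # a few hundred neurons
pref = np.linspace(0.0, np.pi, N, endpoint=False)    # preferred orientations

def responses(theta):
    """One trial of noisy orientation-tuned spike counts (Poisson-like variability)."""
    mean = 10.0 * (1.0 + np.cos(2.0 * (pref - theta)))   # assumed cosine tuning, rates up to 20
    return rng.poisson(mean)

def decode(r):
    """Population-vector estimate of the stimulus orientation from the whole ensemble."""
    z = np.sum(r * np.exp(2j * pref))
    return (np.angle(z) / 2.0) % np.pi

theta = np.deg2rad(30.0)
estimates = [np.rad2deg(decode(responses(theta))) for _ in range(20)]
print(np.round(estimates, 1))   # single cells are noisy, but the estimates cluster near 30 degrees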


NeuroImage | 1996

Neuroinformatics as Explanatory Neuroscience

Rodney J. Douglas; Misha Mahowald; Kevan A. C. Martin

There are two points of view about the meaning of neuroinformatics, which we may write neuro-Informatics and Neuro-informatics to reflect their different emphases. The proponents of neuro-Informatics hold that it is the application of conventional informatics to the domain of neuroscience. By contrast, the proponents of Neuro-informatics hold that it studies information processing by nervous systems. There is a very significant conceptual difference between these two views, which arises very naturally out of two contrary views of science. The first view, neuro-Informatics, arises out of the philosophy that science is description, and so the major task in modern science is to accumulate and catalogue data. Thus, neuro-Informaticians look to informatics as a maturing information technology based on general-purpose computing principles. For the proponents of this goal, informatics is a tool to aid neuroscience: it catalogues and manipulates neuroscientific data. The hidden assumption is that scientific data are absolute, and that once we have enough data, we will inevitably be able to answer the hard questions. The second view, Neuro-informatics, arises out of the philosophy that science is explanation, and so the major task is to extract predictive principles. Neuroinformaticians take the view that nervous systems are probably qualitatively different from the general-purpose computing principles that have dominated the past few decades. Reasons for anticipating these differences are not hard to find; indeed, many of them were pointed out by von Neumann, the very inventor of general-purpose computers. In the view of Neuroinformaticians, resources should be focused on the substantive problem of neuroscience: what is the nature of computation in biological nervous systems?

Our research, at the Institute of Neuroinformatics in Zurich, follows the latter point of view. More specifically, we aim to cast neural computational processes in an electronic medium, using analog very-large-scale integration (aVLSI) technology. Carver Mead (1989) introduced the term neuromorphic engineering for this new approach based on the design and fabrication of artificial neural systems, such as vision systems, head–eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems (Douglas et al., 1995). Neuromorphic systems try to emulate the organization and function of biological nervous systems; they are a method of exploring the principles of neural computation from the vantage points of both neuroscience on the one hand and engineering and computer science on the other. Implicit in neuromorphic engineering is the hypothesis that neural computation may be qualitatively different from classical computers and computation. The enormous success of digital technology and general-purpose computers in performing abstract tasks bred confidence that neural computation could be simply captured by those tools. In fact, general-purpose computers have been quite unsuccessful at autonomously performing tasks that require any degree of sophisticated sensorimotor interaction with the real world. Even rather primitive biological nervous systems are able to extract meaningful information from a noisy world in real time, but artificial systems still lag far behind such performance.


Nature | 1991

A silicon neuron

Misha Mahowald; Rodney J. Douglas


Annual Review of Neuroscience | 1995

Neuromorphic Analogue VLSI

Rodney J. Douglas; Misha Mahowald; Carver A. Mead


Neural Information Processing Systems | 1996

A Spike Based Learning Neuron in Analog VLSI

Philipp Häfliger; Misha Mahowald; Lloyd Watts


Collaboration


Dive into Misha Mahowald's collaborations.

Top Co-Authors

Rahul Sarpeshkar (Massachusetts Institute of Technology)
Christof Koch (Allen Institute for Brain Science)
Humbert Suarez (California Institute of Technology)
Lloyd Watts (California Institute of Technology)