Minjoon Kouh
Massachusetts Institute of Technology
Publications
Featured research published by Minjoon Kouh.
Neural Computation | 2008
Minjoon Kouh; Tomaso Poggio
A few distinct cortical operations have been postulated over the past few years, suggested by experimental data on nonlinear neural response across different areas in the cortex. Among these, the energy model proposes the summation of quadrature pairs following a squaring nonlinearity in order to explain phase invariance of complex V1 cells. The divisive normalization model assumes a gain-controlling, divisive inhibition to explain sigmoid-like response profiles within a pool of neurons. A gaussian-like operation hypothesizes a bell-shaped response tuned to a specific, optimal pattern of activation of the presynaptic inputs. A max-like operation assumes the selection and transmission of the most active response among a set of neural inputs. We propose that these distinct neural operations can be computed by the same canonical circuitry, involving divisive normalization and polynomial nonlinearities, for different parameter values within the circuit. Hence, this canonical circuit may provide a unifying framework for several circuit models, such as the divisive normalization and the energy models. As a case in point, we consider a feedforward hierarchical model of the ventral pathway of the primate visual cortex, which is built on a combination of the gaussian-like and max-like operations. We show that when the two operations are approximated by the circuit proposed here, the model is capable of generating selective and invariant neural responses and performing object recognition, in good agreement with neurophysiological data.
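A minimal numerical sketch of the idea that a single divisive-normalization circuit can shift between a max-like and a tuning-like operation as its exponents change. The specific functional form y = sum_j w_j x_j^p / (k + (sum_j x_j^q)^r) and the exponent values below are illustrative assumptions based on the abstract, not the parameters fitted in the paper.

```python
import numpy as np

def canonical_circuit(x, w, p=1.0, q=2.0, r=0.5, k=1e-6):
    """Static divisive-normalization circuit of the general form
        y = sum_j w_j * x_j**p / (k + (sum_j x_j**q)**r),
    where different exponent settings approximate different operations."""
    x = np.asarray(x, dtype=float)
    return np.dot(w, x**p) / (k + np.sum(x**q)**r)

rng = np.random.default_rng(0)
x = rng.random(8)                      # nonnegative presynaptic activities

# Max-like regime: matched high exponents (p = q + 1, r = 1) give a soft
# approximation of max(x); the approximation sharpens as the exponents grow.
print(canonical_circuit(x, np.ones(8), p=4.0, q=3.0, r=1.0), x.max())

# Tuning-like regime: with p = 1, q = 2, r = 1/2 the response is a normalized
# dot product (proportional to the cosine of the angle between input and
# weights), so it peaks when the input *pattern* matches the weight vector,
# regardless of overall input strength.
w_pref = rng.random(8)
print(canonical_circuit(w_pref, w_pref))            # preferred pattern
print(canonical_circuit(rng.random(8), w_pref))     # other pattern: smaller
```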
The Journal of Neuroscience | 2007
Davide Zoccolan; Minjoon Kouh; Tomaso Poggio; James J. DiCarlo
Object recognition requires both selectivity among different objects and tolerance to vastly different retinal images of the same object, resulting from natural variation in (e.g.) position, size, illumination, and clutter. Thus, discovering neuronal responses that have object selectivity and tolerance to identity-preserving transformations is fundamental to understanding object recognition. Although selectivity and tolerance are found at the highest level of the primate ventral visual stream [the inferotemporal cortex (IT)], both properties are highly varied and poorly understood. If an IT neuron has very sharp selectivity for a unique combination of object features (“diagnostic features”), this might automatically endow it with high tolerance. However, this relationship cannot be taken as given; although some IT neurons are highly object selective and some are highly tolerant, the empirical connection of these key properties is unknown. In this study, we systematically measured both object selectivity and tolerance to different identity-preserving image transformations in the spiking responses of a population of monkey IT neurons. We found that IT neurons with high object selectivity typically have low tolerance (and vice versa), regardless of how object selectivity was quantified and the type of tolerance examined. The discovery of this trade-off illuminates object selectivity and tolerance in IT and unifies a range of previous, seemingly disparate results. This finding also argues against the idea that diagnostic conjunctions of features guarantee tolerance. Instead, it is naturally explained by object recognition models in which object selectivity is built through AND-like tuning mechanisms.
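The abstract notes that the trade-off held regardless of how object selectivity was quantified; one common choice is a sparseness index computed over a neuron's responses to an object set. The sketch below illustrates that kind of index only (a Vinje-Gallant-style measure chosen here for illustration); it is not the paper's specific analysis, and the tolerance measurements are not reproduced.

```python
import numpy as np

def sparseness(responses):
    """Vinje-Gallant-style sparseness of a response vector: near 0 for a flat
    response profile, approaching 1 for a highly selective one."""
    r = np.asarray(responses, dtype=float)
    n = r.size
    a = (r.sum() / n) ** 2 / (np.sum(r ** 2) / n + 1e-12)
    return (1.0 - a) / (1.0 - 1.0 / n)

# A neuron responding to only one of ten objects is far sparser (more
# object selective) than one responding broadly to all of them.
print(sparseness([1, 0, 0, 0, 0, 0, 0, 0, 0, 0]))   # ~1.0
print(sparseness([5, 4, 6, 5, 5, 4, 6, 5, 5, 5]))   # ~0.0
```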
Network: Computation In Neural Systems | 2009
Minjoon Kouh; Tatyana O. Sharpee
This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear–nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér–Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear–nonlinear models from neural data.
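A toy sketch of the order-1 case: for a simulated linear-nonlinear neuron, the Kullback-Leibler divergence (the Rényi divergence of order 1, i.e., the information per spike) between the spike-conditioned and prior distributions of stimulus projections is largest along the true relevant dimension. The Gaussian stimuli, logistic nonlinearity, and binning scheme are assumptions made for illustration; the actual method optimizes this quantity over candidate dimensions using natural stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated LN neuron: the spike probability depends on the stimulus only
# through its projection onto one relevant dimension v.
dim, n_stim = 20, 50000
v = rng.standard_normal(dim)
v /= np.linalg.norm(v)
stimuli = rng.standard_normal((n_stim, dim))
proj = stimuli @ v
p_spike = 1.0 / (1.0 + np.exp(-3.0 * (proj - 0.5)))
spikes = rng.random(n_stim) < p_spike

def kl_per_spike(direction, stimuli, spikes, bins=25):
    """Order-1 Rényi (KL) divergence between the spike-conditioned and prior
    distributions of stimulus projections onto a candidate direction."""
    z = stimuli @ (direction / np.linalg.norm(direction))
    edges = np.quantile(z, np.linspace(0, 1, bins + 1))
    p_all, _ = np.histogram(z, edges)
    p_spk, _ = np.histogram(z[spikes], edges)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_all > 0)
    return np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_all[mask]))

# The true relevant dimension carries far more information per spike
# than a random direction does.
print(kl_per_spike(v, stimuli, spikes))
print(kl_per_spike(rng.standard_normal(dim), stimuli, spikes))
```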
Proceedings of the National Academy of Sciences of the United States of America | 2013
Tatyana O. Sharpee; Minjoon Kouh; John H. Reynolds
Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values.
Journal of Vision | 2010
Charles F. Cadieu; Minjoon Kouh; Maximilian Riesenhuber; Tomaso Poggio
The computational processes in the intermediate stages of the ventral pathway responsible for visual object recognition are not well understood. A recent physiological study by A. Pasupathy and C. Connor in intermediate area V4 using contour stimuli proposes that a population of V4 neurons displays object-centered, position-specific curvature tuning. The standard model of object recognition, a recently developed model to account for recognition properties of IT cells (extending classical suggestions by Hubel, Wiesel, and others), is used here to model the responses of the V4 cells described by Pasupathy and Connor. Our results show that a feedforward, network-level mechanism can exhibit selectivity and invariance properties that correspond to the responses of the V4 cells. These results suggest how object-centered, position-specific curvature tuning of V4 cells may arise from combinations of complex V1 cell responses. Furthermore, the model makes predictions about the responses of the same V4 cells studied by Pasupathy and Connor to novel gray-level patterns, such as gratings and natural images. These predictions suggest specific experiments to further explore shape representation in V4.
The Physics Teacher | 2011
Alae Kawam; Minjoon Kouh
In an introductory physics course where students first learn about vectors, they oftentimes struggle with the concept of vector addition and decomposition. For example, the classic physics problem involving a mass on an inclined plane requires the decomposition of the force of gravity into two directions that are parallel and perpendicular to the ramp. It takes time and effort for the students to become proficient with such a vector decomposition process. Here, we present simple lab experiments to help students learn and practice the vector concepts, by working with a familiar low-cost accelerometer, the Wii Remote (“Wiimote”).
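As a worked illustration of the decomposition students practice, the sketch below computes the components of gravity parallel and perpendicular to a ramp for a few arbitrary angles; a Wiimote resting on the incline reports approximately these values on its ramp-aligned axes. The angles and setup details here are assumptions for illustration, not the paper's specific procedure.

```python
import numpy as np

# Decomposing gravity on an incline of angle theta:
#   a_parallel      = g * sin(theta)   (along the ramp)
#   a_perpendicular = g * cos(theta)   (into the ramp, balanced by the normal force)
g = 9.81                                   # m/s^2
for theta_deg in (10, 20, 30, 45):
    theta = np.radians(theta_deg)
    a_par = g * np.sin(theta)
    a_perp = g * np.cos(theta)
    print(f"{theta_deg:2d} deg:  a_parallel = {a_par:5.2f} m/s^2,"
          f"  a_perpendicular = {a_perp:5.2f} m/s^2")
# A Wiimote held at rest on the ramp reads these two components on the axes
# aligned with and normal to its case, so students can check their
# decomposition directly against measurement.
```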
The Physics Teacher | 2013
Minjoon Kouh; Danielle Holz; Alae Kawam; Mary Lamont
The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a bicycle wheel.
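A back-of-the-envelope check of what the accelerometer should read: for uniform circular motion the centripetal acceleration is a = omega^2 * r. The radii below are arbitrary illustrative values, not measurements from the paper.

```python
import numpy as np

# Expected centripetal acceleration for a Wiimote fixed to a turntable,
# at the standard record-player speeds.
for rpm in (33 + 1 / 3, 45.0, 78.0):
    omega = 2 * np.pi * rpm / 60.0          # angular speed in rad/s
    for r in (0.05, 0.10, 0.15):            # distance from the axis in metres
        a = omega ** 2 * r
        print(f"{rpm:5.1f} rpm, r = {r:.2f} m  ->  a = {a:5.2f} m/s^2")
# The Wiimote axis pointing toward the axle should report roughly these values,
# while the tangential axis stays near zero at constant rotation speed.
```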
Neural Computation | 2017
Minjoon Kouh
In a sensory neural network where a population of presynaptic neurons sends information to a downstream neuron, maximizing information transmission depends on utilizing the full operating range of the postsynaptic neuron's output. Because the convergence of presynaptic inputs naturally biases the output toward high values, a sparse input distribution counters this bias and optimizes information transmission.
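A toy simulation of this idea, not the paper's specific model or information measure: when many inputs converge onto a saturating output, a sparse input distribution with the same mean activity spreads the summed input over a wider range, so the output covers more of its operating range and has higher entropy.

```python
import numpy as np

rng = np.random.default_rng(2)

def output_entropy(inputs, c=50.0, bins=32):
    """Entropy (bits) of a saturating postsynaptic response r = s / (s + c),
    where s is the summed presynaptic input, estimated over many trials."""
    s = inputs.sum(axis=1)
    r = s / (s + c)
    p, _ = np.histogram(r, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

n_trials, n_inputs = 20000, 100

# Dense inputs: every presynaptic neuron is moderately active on every trial,
# so the summed input is tightly clustered and the output barely varies.
dense = rng.uniform(0.0, 1.0, size=(n_trials, n_inputs))

# Sparse inputs with the same mean activity: mostly silent, occasionally strong,
# so the summed input (and hence the output) spans a wider range.
active = rng.random((n_trials, n_inputs)) < 0.1
sparse = np.where(active, rng.exponential(5.0, size=(n_trials, n_inputs)), 0.0)

print("dense :", output_entropy(dense))
print("sparse:", output_entropy(sparse))   # higher output entropy
```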
The Physics Teacher | 2016
Minjoon Kouh
Typically, introductory physics courses are taught with a combination of lectures and laboratories in which students have opportunities to discover the natural laws through hands-on activities in small groups. This article reports the use of Google Drive, a free online document-sharing tool, in physics laboratories for pooling experimental data from the whole class. This pedagogical method was reported earlier, and the present article offers a few more examples of such “whole class” laboratories.
Journal of Neurophysiology | 2007
Charles F. Cadieu; Minjoon Kouh; Anitha Pasupathy; Charles E. Connor; Maximilian Riesenhuber; Tomaso Poggio