Publication


Featured research published by Ennio Mingolla.


Journal of Vision | 2012

Dynamic coding of border-ownership in visual cortex

Oliver W. Layton; Ennio Mingolla; Arash Yazdanbakhsh

Humans are capable of rapidly determining whether regions in a visual scene appear as figures in the foreground or as background, yet how figure-ground segregation occurs in the primate visual system is unknown. Figures in the environment are perceived to own their borders, and recent neurophysiology has demonstrated that certain cells in primate visual area V2 have border-ownership selectivity. We present a dynamic model based on physiological data that indicates areas V1, V2, and V4 act as an interareal network to determine border-ownership. Our model predicts that competition between curvature-sensitive cells in V4 that have on-surround receptive fields of different sizes can determine likely figure locations and rapidly propagate the information interareally to V2 border-ownership cells that receive contrast information from V1. In the model, border-ownership is an emergent property produced by the dynamic interactions between V1, V2, and V4, one which could not be determined by any single cortical area alone.
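
As an illustration of the competition the abstract describes, here is a minimal sketch (my own construction, not the authors' published equations): contrast evidence is pooled in Gaussian windows of two different sizes on each side of a border, standing in for V4 cells with on-surround receptive fields of different sizes, and each border is assigned to the side with the stronger pooled evidence.

```python
import numpy as np

# Toy 1-D scene: a bright figure (positions 80-119) on a dark background.
scene = np.zeros(200)
scene[80:120] = 1.0
edges = np.flatnonzero(np.abs(np.diff(scene)))  # borders at 79 and 119

def pooled_evidence(pos, side, sigma):
    """Contrast evidence in a Gaussian window of size sigma placed on one
    side of a border (side = +1 for right, -1 for left)."""
    xs = np.arange(scene.size)
    w = np.exp(-0.5 * ((xs - (pos + side * sigma)) / sigma) ** 2)
    return float((w * scene).sum() / w.sum())

for pos in edges:
    # Two window sizes stand in for V4 units with small and large
    # on-surround receptive fields; their evidence is summed.
    left = sum(pooled_evidence(pos, -1, s) for s in (5.0, 15.0))
    right = sum(pooled_evidence(pos, +1, s) for s in (5.0, 15.0))
    side = "left" if left > right else "right"
    print(f"border at {pos}: owned by the {side} side")
```

Both borders come out owned by the figure side, the signature the model attributes to V2 border-ownership cells.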


Neural Networks | 2013

A neural model of visual figure-ground segregation from kinetic occlusion

Timothy Barnes; Ennio Mingolla

Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segmentation between identically textured surfaces--percepts which all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region appears to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture. A computational model of visual areas V1 and V2 shows how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal kinetic occlusion at that boundary. Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. A weak speed-depth bias brings faster-moving texture regions forward in depth in the absence of occlusion (shearing motion). These processes together reproduce human psychophysical reports of depth ordering for key cases of kinetic occlusion displays.
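
The depth-ordering rule in the abstract can be written compactly. The sketch below is a deliberate simplification (my own, with made-up velocity arguments), not the V1/V2 circuit itself: a region whose texture velocity differs from the boundary velocity is being accreted or deleted and is assigned to the far depth; without occlusion, a weak speed-depth bias brings the faster region forward.

```python
# Toy depth-ordering rule for two abutting texture regions.
def depth_order(v_left, v_right, v_boundary, eps=1e-6):
    """Return ('near'/'far') labels for the (left, right) regions.
    A region's texture is accreted or deleted when its velocity
    differs from the boundary velocity."""
    left_occluded = abs(v_left - v_boundary) > eps
    right_occluded = abs(v_right - v_boundary) > eps
    if left_occluded and not right_occluded:
        return ("far", "near")   # left texture deleted -> left is ground
    if right_occluded and not left_occluded:
        return ("near", "far")
    # Shearing motion (no accretion/deletion): weak speed-depth bias.
    return ("near", "far") if abs(v_left) > abs(v_right) else ("far", "near")

# Moving texture, stationary boundary: the moving region is the ground.
print(depth_order(v_left=2.0, v_right=0.0, v_boundary=0.0))  # ('far', 'near')
# Boundary moves with the texture: the moving region is the figure.
print(depth_order(v_left=2.0, v_right=0.0, v_boundary=2.0))  # ('near', 'far')
```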


Frontiers in Psychology | 2014

Neural dynamics of feedforward and feedback processing in figure-ground segregation

Oliver W. Layton; Ennio Mingolla; Arash Yazdanbakhsh

Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in receptive field (RF) center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. Their activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.
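
The balancing of feedforward drive with excitatory feedback can be captured by a leaky integrator. The sketch below is an assumed functional form, not the paper's equations: a "convex cell" receives constant feedforward input, and closure-dependent feedback multiplies its own rectified activity back in, enhancing interior responses at equilibrium.

```python
# Minimal leaky-integrator sketch: da/dt = -a + feedforward + g * max(a, 0)
def settle(feedforward, feedback_gain, steps=200, dt=0.05):
    """Integrate to equilibrium; feedback_gain > 0 only when closure
    feedback (from the hypothesized 'teardrop' units) is present."""
    a = 0.0
    for _ in range(steps):
        feedback = feedback_gain * max(a, 0.0)  # rectified self-excitation
        a += dt * (-a + feedforward + feedback)
    return a

inside = settle(feedforward=1.0, feedback_gain=0.5)   # RF inside the figure
outside = settle(feedforward=1.0, feedback_gain=0.0)  # RF outside the figure
print(f"interior {inside:.2f} vs exterior {outside:.2f}")  # ~2.00 vs ~1.00
```

With the gain below 1, the loop settles at feedforward / (1 - gain), so feedback enhances interior activity without running away.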


Computational Intelligence and Neuroscience | 2016

Mitigation of Effects of Occlusion on Object Recognition with Deep Neural Networks through Low-Level Image Completion

Benjamin Chandler; Ennio Mingolla

Heavily occluded objects are more difficult for classification algorithms to identify correctly than unoccluded objects. Owing to biases in human-generated image pose selection, however, heavy occlusion is rare in datasets like ImageNet and PASCAL VOC and thus hard to measure. We introduce a dataset that emphasizes occlusion, together with additions to a standard convolutional neural network aimed at increasing invariance to occlusion. An unmodified convolutional neural network trained and tested on the new dataset rapidly degrades to chance-level accuracy as occlusion increases. Training with occluded data slows this decline but still yields poor performance with high occlusion. Integrating novel preprocessing stages to segment the input and inpaint occlusions is an effective mitigation. A convolutional network so modified is nearly as effective with more than 81% of pixels occluded as it is with no occlusion. Such a network is also more accurate on unoccluded images than an otherwise identical network that has been trained with only unoccluded images. These results depend on successful segmentation. The occlusions in our dataset are deliberately easy to segment from the figure and background. Achieving similar results on a more challenging dataset would require finding a method to split figure, background, and occluding pixels in the input.
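
The preprocessing the abstract describes, segmenting the occluder and inpainting it before classification, can be sketched with standard tools. The snippet below is illustrative only: it assumes the occluder has a known uniform color (the paper's occlusions are deliberately easy to segment), and it uses OpenCV's generic cv2.inpaint rather than the paper's own completion stage.

```python
import numpy as np
import cv2  # OpenCV

def complete_occlusions(image_bgr, occluder_color=(0, 0, 0), tol=10):
    """Mask pixels close to a known occluder color, then inpaint them.
    image_bgr must be an 8-bit BGR image (H x W x 3, dtype uint8)."""
    diff = np.abs(image_bgr.astype(int) - np.array(occluder_color)).sum(axis=2)
    mask = (diff < tol).astype(np.uint8) * 255   # 8-bit single-channel mask
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# Usage (hypothetical names): completed = complete_occlusions(img); cnn.predict(completed)
```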


Journal of Vision | 2013

Modeling a space-variant cortical representation for apparent motion

Jeremy Wurbs; Ennio Mingolla; Arash Yazdanbakhsh

Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements, while the periphery is suited for fast, coarse movements. In either the fovea or the periphery, discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitatively linear increase of Dmax without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
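
The linear dependence of Dmax on eccentricity falls out of two ingredients named in the abstract: receptive field size grows roughly linearly with eccentricity, and two flashes support a motion percept only while their activity profiles overlap. The sketch below uses made-up parameter values, not the paper's fitted ones:

```python
import numpy as np

def rf_sigma(eccentricity_deg, sigma0=0.3, slope=0.15):
    """Gaussian RF size (deg) growing linearly with eccentricity (deg).
    sigma0 and slope are illustrative values, not fitted parameters."""
    return sigma0 + slope * eccentricity_deg

def dmax(eccentricity_deg, overlap_threshold=0.2):
    """Largest flash separation d whose activity profiles still overlap
    above threshold: solve exp(-d^2 / (4 sigma^2)) = threshold for d."""
    sigma = rf_sigma(eccentricity_deg)
    return 2.0 * sigma * np.sqrt(-np.log(overlap_threshold))

for ecc in (0.0, 5.0, 10.0, 20.0):
    print(f"eccentricity {ecc:4.1f} deg -> Dmax ~ {dmax(ecc):.2f} deg")
# Dmax grows linearly with eccentricity, as Baker and Braddick reported.
```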


PLOS ONE | 2015

Tuning Properties of MT and MSTd and Divisive Interactions for Eye-Movement Compensation

Bo Cao; Ennio Mingolla; Arash Yazdanbakhsh

The primate brain intelligently processes visual information from the world as the eyes move constantly. The brain must take into account visual motion induced by eye movements, so that visual information about the outside world can be recovered. Certain neurons in the dorsal part of monkey medial superior temporal area (MSTd) play an important role in integrating information about eye movements and visual motion. When a monkey tracks a moving target with its eyes, these neurons respond to visual motion as well as to smooth pursuit eye movements. Furthermore, the responses of some MSTd neurons to the motion of objects in the world are very similar during pursuit and during fixation, even though the visual information on the retina is altered by the pursuit eye movement. We call these neurons compensatory pursuit neurons. In this study we develop a computational model of MSTd compensatory pursuit neurons based on physiological data from single unit studies. Our model MSTd neurons can simulate the velocity tuning of monkey MSTd neurons. The model MSTd neurons also show the pursuit compensation property. We find that pursuit compensation can be achieved by divisive interaction between signals coding eye movements and signals coding visual motion. The model generates two implications that can be tested in future experiments: (1) compensatory pursuit neurons in MSTd should have the same direction preference for pursuit and retinal visual motion; (2) there should be non-compensatory pursuit neurons that show opposite preferred directions of pursuit and retinal visual motion.
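
To make the divisive idea concrete, here is a toy formulation (my own algebra, not the published model): the visual drive is divided by a pursuit-dependent term chosen so that, at the unit's preferred world velocity, the pursuit contribution cancels exactly and the response is identical during fixation and pursuit.

```python
def mstd_response(v_world, v_pursuit, v_pref=8.0, c=2.0):
    """Toy compensatory unit. Retinal velocity is world velocity minus
    pursuit velocity; a divisive pursuit term restores invariance at the
    preferred world velocity v_pref. All parameters are illustrative."""
    v_retinal = v_world - v_pursuit
    visual_drive = v_retinal + c
    pursuit_division = 1.0 - v_pursuit / (v_pref + c)
    return visual_drive / pursuit_division

for p in (0.0, 2.0, 4.0):   # fixation and two pursuit speeds (deg/s)
    print(f"pursuit {p}: response {mstd_response(8.0, p):.2f}")  # 10.00 each
# Compensation here is exact only at v_pref; in the real data it is
# approximate and cell-specific, hence the non-compensatory cells.
```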


Journal of Vision | 2013

A computational study of brightness-related responses in visual cortex

Bo Cao; Ennio Mingolla; Arash Yazdanbakhsh


Journal of Vision | 2015

Computational Modeling of Depth-Ordering in Occlusion through Accretion or Deletion of Texture

Harald Ruda; Gennady Livitz; Guillaume Riesen; Ennio Mingolla


Journal of Vision | 2015

Effect of achromatic afterimage on spatial chromatic induction

Guillaume Riesen; Gennady Livitz; Rhea T. Eskew; Ennio Mingolla


Journal of Vision | 2014

Rapidly estimating numerosity independent of size-related distance or occlusion

Guillaume Riesen; Harald Ruda; Ennio Mingolla

Collaboration


Dive into Ennio Mingolla's collaborations.

Top Co-Authors

Harald Ruda

Northeastern University

Bo Cao

University of Texas Health Science Center at Houston
