Publication


Featured research published by Jonathan A. Marshall.


Neurocomputing | 2000

Neural model of temporal and stochastic properties of binocular rivalry

George J. Kalarickal; Jonathan A. Marshall

There is an intriguing temporal and stochastic relationship between stimulus strength and dominance duration in binocular rivalry (Blake, Psychol. Rev. 96 (1989) 145; Blake, Fox, McIntyre, J. Exp. Psychol. 88 (1971) 327; Levelt, 1965; Muller, Blake, Biol. Cybernet. 61 (1989) 223). Increasing the stimulus strength in the ipsilateral eye decreases the dominance duration of the stimulus in the contralateral eye, yet the dominance duration of the stimulus in the ipsilateral eye remains unchanged, and the alternation rate increases. In addition, successive dominance durations during binocular rivalry are independent. A simple, neurobiologically plausible cortical neural circuit exhibiting these characteristic properties is presented. In the model, the ipsilateral stimulus dominance duration is controlled by the contralateral stimulus strength.
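For a concrete feel for the kind of dynamics involved, the sketch below is a generic mutual-inhibition oscillator with adaptation and noise, used only as a harness for measuring mean dominance durations as a function of the two stimulus strengths. It is not the circuit proposed in the paper, and every parameter value is an invented placeholder.

```python
import numpy as np

def dominance_durations(I_left, I_right, T=300.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    beta, g, tau_a, sigma = 3.0, 4.0, 10.0, 0.1   # mutual inhibition, adaptation gain/time, noise
    r = np.zeros(2)                               # firing rates of the two monocular populations
    a = np.zeros(2)                               # adaptation variables
    I = np.array([I_left, I_right])
    dominant, switches = 0, []
    for step in range(int(T / dt)):
        drive = I - beta * r[::-1] - g * a + sigma * rng.standard_normal(2)
        r += dt * (-r + np.maximum(drive, 0.0))
        a += dt * (r - a) / tau_a
        d = int(r[1] > r[0])
        if d != dominant:                         # perceptual switch
            dominant = d
            switches.append((step * dt, d))
    durations = {0: [], 1: []}
    for (t0, eye), (t1, _) in zip(switches[:-1], switches[1:]):
        durations[eye].append(t1 - t0)            # epoch during which `eye` was dominant
    mean = lambda v: float(np.mean(v)) if v else float("nan")
    return mean(durations[0]), mean(durations[1])

for I_left in (1.0, 1.2, 1.4):
    left_dur, right_dur = dominance_durations(I_left, I_right=1.0)
    print(f"I_left={I_left:.1f}  mean dominance duration: left={left_dur:.2f}  right={right_dur:.2f}")
```

How closely such a minimal oscillator reproduces the specific stimulus-strength dependence described in the abstract depends on its parameters; capturing that dependence is the modeling question the paper addresses.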


Neural Networks | 1995

Adaptive perceptual pattern recognition by self-organizing neural networks: context, uncertainty, multiplicity, and scale

Jonathan A. Marshall

A new context-sensitive neural network, called an EXIN (excitatory + inhibitory) network, is described. EXIN networks self-organize in complex perceptual environments, in the presence of multiple superimposed patterns, multiple scales, and uncertainty. The networks use a new inhibitory learning rule, in addition to an excitatory learning rule, to allow superposition of multiple simultaneous neural activations (multiple winners), under strictly regulated circumstances, instead of forcing winner-take-all pattern classifications. The multiple activations represent uncertainty or multiplicity in perception and pattern recognition. Perceptual scission (breaking of linkages) between independent category groupings thus arises and allows effective global context-sensitive segmentation, constraint satisfaction, and exclusive credit attribution. A Weber Law neuron growth rule lets the network learn and classify input patterns despite variations in their spatial scale. Applications of the new techniques include segmentation of superimposed auditory or biosonar signals, segmentation of visual regions, and representation of visual transparency.
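As a toy illustration of the multiple-winner behavior described in the abstract (not the EXIN equations themselves), the sketch below settles a two-unit rate network under fixed lateral inhibition: strong mutual inhibition forces a single winner, while weak mutual inhibition lets both units remain active. All numbers are invented for illustration.

```python
import numpy as np

def settle(inputs, W_inh, steps=2000, dt=0.01):
    """Relax each rate toward max(0, input - inhibition received) and return the fixed point."""
    r = np.zeros(len(inputs))
    for _ in range(steps):
        r += dt * (-r + np.maximum(inputs - W_inh @ r, 0.0))
    return r

inputs = np.array([1.0, 0.9])            # two units with slightly different bottom-up support

# Strong mutual inhibition: the network is forced into winner-take-all,
# and only the more strongly supported unit stays active.
strong = np.array([[0.0, 2.0],
                   [2.0, 0.0]])
print("strong inhibition:", settle(inputs, strong).round(3))

# Weak mutual inhibition: both units remain active (multiple winners),
# which is how superimposed patterns or uncertainty can be represented.
weak = np.array([[0.0, 0.2],
                 [0.2, 0.0]])
print("weak inhibition:  ", settle(inputs, weak).round(3))
```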


International Symposium on Neural Networks | 1992

Development of perceptual context-sensitivity in unsupervised neural networks: parsing, grouping, and segmentation

Jonathan A. Marshall

A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner is described. Exposure to a perceptual environment during a developmental period configures the network to perform appropriate organization of sensory data. An anti-Hebbian learning rule causes some lateral inhibitory connection weights to weaken, thereby letting multiple neurons become simultaneously active. The rule lets other inhibitory weights remain strong; these enforce specific contextual consistency constraints on allowable combinations of simultaneous activations. EXIN networks perform near-optimal parallel parsing of multiple superimposed patterns, by simultaneous distributed activation of multiple neurons. EXIN networks implement a form of credit assignment.
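The sketch below is a deliberately crude caricature of the anti-Hebbian idea described above, not the published EXIN learning rule: lateral inhibitory weights are decremented in proportion to the coactivation of the units they connect, so inhibition fades between units whose patterns co-occur (permitting multiple simultaneous winners), while inhibition between units that are never coactive stays strong. The network setup and constants are assumptions made for the demonstration.

```python
import numpy as np

def settle(x, W_inh, steps=500, dt=0.02):
    """Relax rates toward max(0, input - lateral inhibition) under the current weights."""
    r = np.zeros(W_inh.shape[0])
    for _ in range(steps):
        r += dt * (-r + np.maximum(x - W_inh @ r, 0.0))
    return r

# Hypothetical setup: units 0 and 1 code two patterns that always co-occur in the
# environment; unit 2 codes a pattern that never appears with them.
inputs = np.array([1.0, 1.0, 0.0])
W = np.full((3, 3), 1.0)                 # moderate initial all-to-all lateral inhibition
np.fill_diagonal(W, 0.0)

eta = 0.05
for _ in range(200):                     # repeated exposure to the environment
    r = settle(inputs, W)
    W -= eta * np.outer(r, r)            # anti-Hebbian decrement on coactive pairs
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, None)            # inhibitory weights stay non-negative

print("inhibition 0<->1 (patterns co-occur):", round(W[0, 1], 3))   # weakened toward 0
print("inhibition 0<->2 (never coactive):   ", round(W[0, 2], 3))   # still at its initial value
print("activations after learning:", settle(inputs, W).round(3))    # units 0 and 1 coactive
```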


International Journal of Neural Systems | 2000

Curved trajectory prediction using a self-organizing neural network.

Jonathan A. Marshall; Viswanath Srikanth

Existing neural network models are capable of tracking linear trajectories of moving visual objects. This paper describes an additional neural mechanism, disfacilitation, that enhances the ability of a visual system to track curved trajectories. The added mechanism combines information about an object's trajectory with information about changes in the object's trajectory, to improve the estimates for the object's next probable location. Computational simulations are presented that show how the neural mechanism can learn to track the speed of objects and how the network operates to predict the trajectories of accelerating and decelerating objects.
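The following is a plain numerical sketch of the computation the abstract describes, rather than the neural mechanism itself: predicting the next location from the current displacement alone (linear extrapolation) versus from the current displacement plus its recent change, evaluated on an invented circular trajectory.

```python
import numpy as np

def trajectory(n, radius=1.0, omega=0.3):
    """Points sampled along a circular path (an invented test trajectory)."""
    t = np.arange(n)
    return np.c_[radius * np.cos(omega * t), radius * np.sin(omega * t)]

pts = trajectory(60)
lin_err, curv_err = [], []
for k in range(2, len(pts) - 1):
    v = pts[k] - pts[k - 1]                # current displacement (velocity estimate)
    dv = v - (pts[k - 1] - pts[k - 2])     # recent change in displacement (curvature cue)
    lin_pred = pts[k] + v                  # linear extrapolation only
    curv_pred = pts[k] + v + dv            # extrapolation that also uses the change term
    lin_err.append(np.linalg.norm(lin_pred - pts[k + 1]))
    curv_err.append(np.linalg.norm(curv_pred - pts[k + 1]))

print(f"mean one-step error, linear only      : {np.mean(lin_err):.4f}")
print(f"mean one-step error, with change term : {np.mean(curv_err):.4f}")
```

On a curved path, including the change term reduces the one-step prediction error, which is the kind of benefit the disfacilitation mechanism aims to provide.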


Neural and Stochastic Methods in Image and Signal Processing | 1992

Unsupervised learning of contextual constraints in neural networks for simultaneous visual processing of multiple objects

Jonathan A. Marshall

A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning both implements a uniqueness constraint and permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a network would also be able to represent effectively the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.


Network: Computation In Neural Systems | 1998

Generalization and exclusive allocation of credit in unsupervised category learning

Jonathan A. Marshall; Vinay S Gupta

A new way of measuring generalization in unsupervised learning is presented. The measure is based on an exclusive allocation, or credit assignment, criterion. In a classifier that satisfies the criterion, input patterns are parsed so that the credit for each input feature is assigned exclusively to one of multiple, possibly overlapping, output categories. Such a classifier achieves context-sensitive, global representations of pattern data. Two additional constraints, sequence masking and uncertainty multiplexing, are described; these can be used to refine the measure of generalization. The generalization performance of EXIN networks, winner-take-all competitive learning networks, linear decorrelator networks, and Nigrin's SONNET-2 network is compared.
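To make the exclusive-allocation idea concrete, here is a toy scoring function (an assumed formulation for illustration, not the paper's measure): a parse records which categories claim each input feature, and a feature's credit is exclusively allocated only if exactly one category claims it. The pattern and category names are invented.

```python
from collections import Counter

def exclusive_allocation_score(features, parse):
    """
    features: set of feature labels present in the input pattern.
    parse:    dict mapping category name -> set of features that category claims.
    Returns the fraction of present features claimed by exactly one category.
    """
    claims = Counter(f for claimed in parse.values() for f in claimed)
    return sum(1 for f in features if claims[f] == 1) / len(features)

# Invented example: pattern "ABC" parsed against overlapping categories "AB" and "BC".
pattern = {"A", "B", "C"}
good_parse = {"AB": {"A", "B"}, "BC": {"C"}}        # feature B credited once: exclusive
bad_parse = {"AB": {"A", "B"}, "BC": {"B", "C"}}    # feature B credited twice: violation

print(exclusive_allocation_score(pattern, good_parse))   # 1.0
print(exclusive_allocation_score(pattern, bad_parse))    # about 0.67
```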


CNS '96: Proceedings of the Annual Conference on Computational Neuroscience: Trends in Research, 1997 | 1997

Modeling dynamic receptive field changes in primary visual cortex using inhibitory learning

Jonathan A. Marshall; George J. Kalarickal

The position, size, and shape of the visual receptive field (RF) of some primary visual cortical neurons change dynamically, in response to artificial scotoma conditioning in cats [7] and to retinal lesions in cats and monkeys [3]. The “EXIN” learning rules [6] are used to model dynamic RF changes. The EXIN model is compared with an adaptation model [11] and the LISSOM model [9,10]. To emphasize the role of the lateral inhibitory learning rules, the EXIN and the LISSOM simulations were done with only lateral inhibitory learning. During scotoma conditioning, the EXIN model without feedforward learning produces centrifugal expansion of RFs initially inside the scotoma region, accompanied by increased responsiveness, without changes in spontaneous activation. The EXIN model without feedforward learning is more consistent with the neurophysiological data than are the adaptation model and the LISSOM model. The comparison between the EXIN and the LISSOM models suggests experiments to determine the role of feedforward excitatory and lateral inhibitory learning in producing dynamic RF changes during scotoma conditioning.
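The sketch below illustrates, in a deliberately simplified setting, why weakening the lateral inhibition received by a deprived unit yields both RF expansion and increased responsiveness, as reported above. It replaces the EXIN learning dynamics with a hand-set reduction of the unit's incoming inhibitory weights, and all kernels and thresholds are invented for the demonstration.

```python
import numpy as np

N = 40
pos = np.arange(N)
unit = 20                                  # the cortical unit whose RF is probed
ff_sigma, inh_sigma = 2.0, 6.0             # invented kernel widths

# Fixed feedforward kernel: W_ff[i, x] is the drive of unit i from a point probe at x.
W_ff = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / ff_sigma) ** 2)

def measure_rf(inh_amp, threshold=0.05):
    """Probe every position once (single feedforward + inhibition pass, no settling);
    return (number of probes evoking a response above threshold, peak response)."""
    w_inh = inh_amp * np.exp(-0.5 * ((unit - pos) / inh_sigma) ** 2)   # inhibition received by `unit`
    w_inh[unit] = 0.0
    responses = []
    for x in range(N):
        drive = W_ff[:, x]                                  # feedforward drive of every unit
        responses.append(max(drive[unit] - w_inh @ drive, 0.0))
    responses = np.array(responses)
    return int(np.sum(responses > threshold)), responses.max()

width_before, peak_before = measure_rf(inh_amp=0.12)   # intact lateral inhibition
width_after, peak_after = measure_rf(inh_amp=0.02)     # incoming inhibition weakened by conditioning

print(f"before: RF width = {width_before} probe positions, peak response = {peak_before:.2f}")
print(f"after : RF width = {width_after} probe positions, peak response = {peak_after:.2f}")
```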


CNS '97: Proceedings of the Sixth Annual Conference on Computational Neuroscience: Trends in Research, 1998 | 1998

Neural model of transfer-of-binding in visual relative motion perception

Jonathan A. Marshall; Charles Schmitt; George J. Kalarickal; Richard K. Alley

Human visual systems are much more sensitive to relative motion than to absolute motion. For example, the relative motion of two dots on a blank background is more easily detected than the motion of a single dot on a blank background. If both dots are also moved relative to the background, then their motion relative to each other remains more easily detected than their motion relative to the background. Each dot thus seems to provide a reference frame for the other’s motion.


CNS '97: Proceedings of the Sixth Annual Conference on Computational Neuroscience: Trends in Research, 1998 | 1998

Modeling dynamic receptive field changes produced by intracortical microstimulation

George J. Kalarickal; Jonathan A. Marshall

Intracortical microstimulation (ICMS) of a localized site in the somatosensory cortex of rats and monkeys produces reorganization of receptive field (RF) topography over a large region of the cortex [1]. ICMS excites nearly all afferents and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. This stimulation produces nearly simultaneous activation of all pre- and postsynaptic elements and modulatory inputs close to the ICMS site. In addition, some of the afferent terminals receive ortho- and antidromic excitation. However, not all anti- and orthodromically excited afferents succeed in driving their target neurons above threshold [1]. ICMS of the cortex for 2–6 hours produces a large (2-fold to over 20-fold) increase in the cortical representation of the skin region represented by the ICMS-site neurons before ICMS [1].


Neural Information Processing Systems | 1992

Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections

Kevin E. Martin; Jonathan A. Marshall

Collaboration


Dive into Jonathan A. Marshall's collaborations.

Top Co-Authors

George J. Kalarickal
University of North Carolina at Chapel Hill

Charles Schmitt
Renaissance Computing Institute

Richard K. Alley
University of North Carolina at Chapel Hill

Kevin E. Martin
University of North Carolina at Chapel Hill

Christina A. Burbeck
University of North Carolina at Chapel Hill

Elizabeth B Graves
University of North Carolina at Chapel Hill

Robert S. Hubbard
University of North Carolina at Chapel Hill

Vinay S Gupta
University of North Carolina at Chapel Hill