
Publication


Featured research published by John A. Perrone.


Vision Research | 1994

A model of self-motion estimation within primate extrastriate visual cortex

John A. Perrone; Leland S. Stone

Perrone [(1992) Journal of the Optical Society of America A, 9, 177-194] recently proposed a template-based model of self-motion estimation which uses direction- and speed-tuned input sensors similar to neurons in area MT of primate visual cortex. Such an approach would generally require an unrealistically large number of templates (five continuous dimensions). However, because primates, including humans, have a number of oculomotor mechanisms which stabilize gaze during locomotion, we can greatly reduce the number of templates required (two continuous dimensions and one compressed and bounded dimension). We therefore refined the model to deal with the gaze-stabilization case and extended it to extract heading and relative depth simultaneously. The new model is consistent with previous human psychophysics and has the emergent property that its output detectors have similar response properties to neurons in area MST.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1992

Model for the computation of self-motion in biological systems

John A. Perrone

I present a method by which direction- and speed-tuned cells, such as those commonly found in the middle temporal area of the primate brain, can be used to analyze the patterns of retinal image motion that are generated during observer movement through the environment. For pure translation, the retinal image motion is radial in nature and expands out from a point that corresponds to the direction of heading. This heading direction can be found by the use of translation detectors that act as templates for the radial image motion. Each translation detector sums the outputs of direction- and speed-tuned motion sensors arranged such that their preferred direction of motion lies along the radial direction out from the detector center. The most active detector signifies the heading direction. Rotation detectors can be constructed in a similar fashion to detect areas of uniform image speed and direction in the motion field produced by observer rotation. A model consisting of both detector types can determine the heading direction independently of any rotational motion of the observer. The model can achieve this from the outputs of the two-dimensional motion sensors directly and does not assume the existence of accurate estimates of image speed and direction. It is robust to the aperture problem and is biologically realistic. The basic elements of the model have been shown to exist in the primate visual cortex.
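The template scheme this abstract describes lends itself to a compact numerical sketch. The Python below is an illustration only, not the paper's implementation: all names are my own, and the flow field is simplified to pure 2-D translation. Each "translation detector" is a template of motion sensors whose preferred directions point radially out from a candidate heading; its activity is the summed projection of the observed flow onto those preferred directions, and the most active detector signals the heading.

```python
import numpy as np

def radial_flow(heading, points, speed=1.0):
    """Pure-translation retinal flow: image motion expands radially
    from the point corresponding to the heading direction."""
    return speed * (points - heading)

def detector_activity(flow, points, heading_candidate):
    """One 'translation detector': sums the projection of the flow at
    each sample point onto the template's preferred (radial) direction
    out from the detector centre."""
    radial = points - heading_candidate
    norms = np.linalg.norm(radial, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0            # avoid 0/0 at the detector centre
    preferred = radial / norms           # unit radial preferred directions
    return float(np.sum(flow * preferred))

def estimate_heading(flow, points, candidates):
    """The most active detector signifies the heading direction."""
    activities = [detector_activity(flow, points, c) for c in candidates]
    return candidates[int(np.argmax(activities))]
```

By the Cauchy-Schwarz inequality, the template centred on the true heading attains the maximal projection sum for a pure-translation flow field, so the argmax recovers the heading exactly on a grid of candidates; handling added observer rotation requires the rotation detectors the abstract also describes.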


Vision Research | 1997

Human heading estimation during visually simulated curvilinear motion

Leland S. Stone; John A. Perrone

Recent studies have suggested that humans cannot estimate their direction of forward translation (heading) from the resulting retinal motion (flow field) alone when rotation rates are higher than approximately 1 deg/sec. It has been argued that either oculomotor or static depth cues are necessary to disambiguate the rotational and translational components of the flow field and, thus, to support accurate heading estimation. We have re-examined this issue using visually simulated motion along a curved path towards a layout of random points as the stimulus. Our data show that, in this curvilinear motion paradigm, five of six observers could estimate their heading relatively accurately and precisely (error and uncertainty < approximately 4 deg), even for rotation rates as high as 16 deg/sec, without the benefit of either oculomotor or static depth cues signaling rotation rate. Such performance is inconsistent with models of human self-motion estimation that require rotation information from sources other than the flow field to cancel the rotational flow.


Vision Research | 2004

A visual motion sensor based on the properties of V1 and MT neurons.

John A. Perrone

The motion response properties of neurons increase in complexity as one moves from primary visual cortex (V1) up to higher cortical areas such as the middle temporal (MT) and medial superior temporal (MST) areas. Many of the features of V1 neurons can now be replicated using computational models based on spatiotemporal filters. However, until recently, relatively little was known about how the motion analysing properties of MT neurons could originate from the V1 neurons that provide their inputs. This has constrained the development of models of the MT-MST stages which have been linked to higher-level motion processing tasks such as self-motion perception and depth estimation. I describe the construction of a motion sensor built up in stages from two spatiotemporal filters with properties based on V1 neurons. The resulting composite sensor is shown to have spatiotemporal frequency response profiles and speed and direction tuning responses that are comparable to those of MT neurons. The sensor is designed to work with digital images and can therefore be used as a realistic front-end to models of MT and MST neuron processing; it can be probed with the same two-dimensional motion stimuli used to test the neurons and has the potential to act as a building block for more complex models of motion processing.


Journal of Vision | 2008

Spatial integration by MT pattern neurons: a closer look at pattern-to-component effects and the role of speed tuning.

John A. Perrone; Richard J. Krauzlis

The primate visual system faces a difficult problem whenever it encounters the motion of an object moving over a patch of the retina. Objects typically contain a number of edges at different orientations, and so a range of image velocities is generated within the receptive field of a neuron processing the object movement. How these different velocities are combined into one unified and correct velocity remains a mystery. Neurons in area MT (V5) are considered to be the neural substrate for this motion integration process. Some MT neurons (pattern type) respond selectively to the correct global motion of an object, whereas others respond primarily to the individual components making up the pattern (component type). Recent findings from MT pattern cells tested with small patches of motion (N. J. Majaj, M. Carandini, & J. A. Movshon, 2007) have put further constraints on the possible mechanisms underlying MT pattern motion integration. We tested and refined an existing model of MT pattern neurons (J. A. Perrone, 2004) using these same small-patch stimuli and found that it can accommodate these new findings. We also discovered that the speed of the test stimuli may have had an impact on the N. J. Majaj et al. (2007) results and that MT direction and speed tuning may be more closely linked than previously thought.


Perception | 1982

Visual Slant Underestimation: A General Model

John A. Perrone

A general model of visual slant underestimation is presented. It is based on the idea that two specific types of perceptual error occur when the observer evaluates the slant angle. These errors are postulated to arise because reduced viewing conditions cause the observer's perceived straight-ahead direction to deviate from the true direction. Specifically, this deviation is postulated to be towards the nearest part of the surface, in accord with conditions that exist in our everyday environment. In the case of a slanted rectangle, correct registration of the projected length of half of the surface and correct registration of the appropriate angle of convergence will result in veridical perception. A mechanism is outlined which indicates how both of these factors can be in error, and an equation is developed which enables the predicted slant estimates to be calculated given the dimensions of the rectangle and its distance from the eye. Equations for the case of slanted surfaces viewed through apertures are also developed. The model is assessed against past slant-perception experiments and is found to be a good predictor of the large amount of previously unexplained underestimation that occurred in those studies.


Journal of Vision | 2005

Economy of scale: a motion sensor with variable speed tuning.

John A. Perrone

We have previously presented a model of how neurons in the primate middle temporal (MT/V5) area can develop selectivity for image speed by using common properties of the V1 neurons that precede them in the visual motion pathway (J. A. Perrone & A. Thiele, 2002). The motion sensor developed in this model is based on two broad classes of V1 complex neurons (sustained and transient). The S-type neuron has low-pass temporal frequency tuning, p(omega), and the T-type has band-pass temporal frequency tuning, m(omega). The outputs from the S and T neurons are combined in a special way (the weighted intersection mechanism, WIM) to generate a sensor tuned to a particular speed, v. Here I go on to show that if the S and T temporal frequency tuning functions have a particular form (i.e., p(omega)/m(omega) = k/omega), then a motion sensor with variable speed tuning can be generated from just two V1 neurons. A simple scaling of the S- or T-type neuron output before it is incorporated into the WIM model produces a motion sensor that can be tuned to a wide continuous range of optimal speeds.
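The scaling idea can be illustrated numerically. The sketch below is my own simplification, not the paper's model: the filter shapes are illustrative, min() is a crude stand-in for the weighted intersection mechanism, and k is set to 1. Given the constraint p(omega)/m(omega) = k/omega, scaling the S output by a gain g moves the intersection of the two curves, and hence the response peak, to temporal frequency omega = g*k.

```python
import numpy as np

K = 1.0  # constant in the assumed ratio p(w)/m(w) = K/w

def p_sustained(w):
    """Illustrative low-pass (S-type) temporal frequency tuning."""
    return 1.0 / (1.0 + w ** 2)

def m_transient(w):
    """Band-pass (T-type) tuning chosen so that p(w)/m(w) = K/w holds exactly."""
    return (w / K) * p_sustained(w)

def wim_response(w, gain):
    """Stand-in for the weighted intersection mechanism: the combined
    response peaks where the scaled S curve and the T curve cross,
    which min() captures for these rising/falling branches."""
    return np.minimum(gain * p_sustained(w), m_transient(w))

def preferred_temporal_frequency(gain, w=np.linspace(0.01, 3.0, 300)):
    """Peak of the combined response; analytically it sits at w = gain*K."""
    return float(w[int(np.argmax(wim_response(w, gain)))])
```

For a drifting grating of spatial frequency f, temporal frequency omega = v*f, so a peak at omega = g*k corresponds to a preferred speed of g*k/f; sweeping the single gain g therefore sweeps the speed tuning continuously, which is the "economy of scale" the title refers to.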


Perception & Psychophysics | 1986

Anisotropic responses to motion toward and away from the eye

John A. Perrone

When a rigid object moves toward the eye, it is usually perceived as being rigid. However, in the case of motion away from the eye, the motion and structure of the object are perceived nonveridically, with the percept tending to reflect the nonrigid transformations that are present in the retinal image. This difference in response to motion to and from the observer was quantified in an experiment using wire-frame computer-generated boxes which moved toward and away from the eye. Two theoretical systems are developed by which uniform three-dimensional velocity can be recovered from an expansion pattern of nonuniform velocity vectors. It is proposed that the human visual system uses two similar systems for processing motion in depth. The mechanism used for motion away from the eye produces perceptual errors because it is not suited to objects with a depth component.


Perception | 1980

Slant Underestimation: A Model Based on the Size of the Viewing Aperture

John A. Perrone

By analyzing the projection plane in terms of the projected size of different elements on a surface, it is shown how the direction of the perpendicular, from the station point to the surface, is an important variable in the derivation of the slant angle. It is also shown that the test surfaces used in traditional slant-perception experiments contain no information about this direction. A model is proposed which is based on the idea that the direction of the line from the eye to one edge of the viewing aperture is mistaken for the perpendicular, and two options are derived to show how the information in the optical array could be interpreted on the basis of the perpendicular lying in this new direction. It is shown that both of these options are dependent upon the size of the field of view of the test surface and both are underestimations as long as half of the angle measuring the field of view is less than the actual slant of the surface. The model is tested against some data from previously reported experiments and is found to provide a close fit.


The Journal of Neuroscience | 2006

A Single Mechanism Can Explain the Speed Tuning Properties of MT and V1 Complex Neurons

John A. Perrone

A recent study by Priebe et al. (2006) has shown that a small proportion (27%) of primate directionally selective, complex V1 neurons are tuned for the speed of image motion. In this study, I show that the weighted intersection mechanism (WIM) model, which was previously proposed to explain speed tuning in middle temporal neurons, can also explain the tuning found in complex V1 neurons. With the addition of a contrast gain mechanism, this model is able to replicate the effects of contrast on V1 speed tuning, a phenomenon that was recently discovered by Priebe et al. (2006). The WIM model simulations also indicate that V1 neuron spatiotemporal frequency response maps may be asymmetrical in shape and hence poorly characterized by the symmetrical two-dimensional Gaussian fitting function used by Priebe et al. (2006) to classify their cells. Therefore, the actual proportion of speed tuning among directional complex V1 cells may be higher than the 27% estimate suggested by these authors.

Collaboration


Dive into John A. Perrone's collaboration.

Top Co-Authors


Richard J. Krauzlis

Salk Institute for Biological Studies


David G. Smith

National Institute of Water and Atmospheric Research
