Patrick A. Shoemaker
San Diego State University
Publications
Featured research published by Patrick A. Shoemaker.
PLOS ONE | 2008
Steven D. Wiederman; Patrick A. Shoemaker; David C. O'Carroll
We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as ‘small target motion detectors’ (STMD), that respond robustly to moving features, even when the velocity of the target is matched to the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. This full-wave rectifying, transient cell (RTC) reveals independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From these physiological data we have created a numerical model for target discrimination. This model includes nonlinear filtering based on the fly optics, the photoreceptors, the first-order interneurons (large monopolar cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear ‘matched filter’ to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues.
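As a rough illustration of the RTC stage described above, the sketch below splits a luminance signal into independently adapting ON and OFF channels before full-wave recombination. This is a minimal sketch in Python: the adaptation rule, time constants, and divisive form are illustrative assumptions, not the fitted parameters of the published model.

```python
import numpy as np

def rtc_response(luminance, dt=1e-3, tau_adapt=0.05):
    """Minimal RTC sketch: luminance change is split into ON (brightening)
    and OFF (dimming) channels, each adapts independently to its own recent
    activity, and the two are recombined (full-wave rectification).
    All parameter values here are illustrative, not fitted to data."""
    d = np.diff(luminance, prepend=luminance[0])      # temporal change
    on, off = np.maximum(d, 0.0), np.maximum(-d, 0.0)
    a_on = a_off = 0.0                                # per-channel adaptation states
    alpha = dt / tau_adapt
    out = np.empty(len(luminance))
    for i in range(len(luminance)):
        a_on += alpha * (on[i] - a_on)
        a_off += alpha * (off[i] - a_off)
        out[i] = on[i] / (1.0 + a_on) + off[i] / (1.0 + a_off)
    return out
```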
Biological Cybernetics | 2005
Patrick A. Shoemaker; David C. O’Carroll; Andrew D. Straw
The tangential neurons in the lobula plate region of the fly are known to respond to visual motion across broad receptive fields in visual space. When intracellular recordings are made from tangential neurons while the intact animal is stimulated visually with moving natural imagery, we find that neural response depends upon the speed of motion but is nearly invariant with respect to variations in natural scenery. We refer to this invariance as velocity constancy. It is remarkable because natural scenes, in spite of similarities in spatial structure, vary considerably in contrast, and contrast dependence is a feature of neurons in the early visual pathway as well as of most models for the elementary operations of visual motion detection. Thus, we expect that operations must be present in the processing pathway that reduce contrast dependence in order to approximate velocity constancy. We consider models for such operations, including spatial filtering, motion adaptation, saturating nonlinearities, and nonlinear spatial integration by the tangential neurons themselves, and evaluate their effects in simulations of a tangential neuron and precursor processing in response to animated natural imagery. We conclude that all such features reduce interscene variance in response, but that the model system does not approach velocity constancy as closely as the biological tangential cell.
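To make the contrast-dependence problem concrete, here is a minimal sketch of a Hassenstein-Reichardt correlator, one standard model of the elementary motion operation the abstract refers to. The raw output of such a detector scales roughly with the square of stimulus contrast; an optional saturating nonlinearity (one of the operations considered in the paper) compresses that dependence. The time constants and the tanh saturation are illustrative choices, not the paper's fitted model.

```python
import numpy as np

def hr_emd(left, right, dt=1e-3, tau=0.035, sat=None):
    """Delay-and-correlate elementary motion detector (two inputs,
    mirror-subtracted). `sat`, if given, applies a saturating tanh."""
    def lowpass(x):                       # first-order low-pass = "delay"
        y, acc, a = np.empty(len(x)), 0.0, dt / tau
        for i, v in enumerate(x):
            acc += a * (v - acc)
            y[i] = acc
        return y
    out = lowpass(left) * right - lowpass(right) * left
    return np.tanh(out / sat) if sat else out

# Doubling contrast roughly quadruples the raw (unsaturated) response:
t = np.arange(0.0, 1.0, 1e-3)
for c in (0.5, 1.0):
    l = c * np.sin(2 * np.pi * 5 * t)
    r = c * np.sin(2 * np.pi * 5 * t - np.pi / 4)  # phase-lagged copy = motion
    print(c, hr_emd(l, r).mean())
```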
Current Biology | 2012
Jacob W. Aptekar; Patrick A. Shoemaker; Mark A. Frye
Visual figures may be distinguished based on elementary motion or higher-order non-Fourier features, and flies track both. The canonical elementary motion detector, a compact computation for Fourier motion direction and amplitude, can also encode higher-order signals given elaborate preprocessing. However, the way in which a fly tracks a moving figure containing both elementary and higher-order signals has not been investigated. Using a novel white-noise approach, we demonstrate that (1) the composite response to an object containing both elementary motion (EM) and uncorrelated higher-order figure motion (FM) reflects the linear superposition of each component; (2) the EM-driven component is velocity-dependent, whereas the FM component is driven by retinal position; (3) retinotopic variations in the EM and FM responses differ from one another; (4) the FM subsystem superimposes saccadic turns upon smooth pursuit; and (5) the two systems in combination are necessary and sufficient to predict the full range of figure-tracking behaviors, including those that generate no EM cues at all. This analysis requires an extension of the model in which fly motion vision is based solely on simple elementary motion detectors, and it provides a novel method to characterize the subsystems responsible for the pursuit of visual figures.
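Results (1) and (2) suggest a simple predictive form: total steering is a velocity-driven EM term plus a position-driven FM term. The sketch below writes that superposition out; the gains, and the use of instantaneous velocity and position in place of the measured impulse-response kernels, are simplifying assumptions for illustration only.

```python
import numpy as np

def predicted_steering(position, dt, k_em=1.0, k_fm=0.5):
    """Sketch of the superposition result: an EM component that follows
    figure velocity plus an FM component that follows retinal position.
    Gains k_em and k_fm are illustrative placeholders."""
    velocity = np.gradient(position, dt)       # EM pathway: velocity-dependent
    return k_em * velocity + k_fm * position   # FM pathway: position-dependent
```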
The Journal of Neuroscience | 2013
Steven D. Wiederman; Patrick A. Shoemaker; David C. O'Carroll
In both vertebrates and invertebrates, evidence supports separation of luminance increments and decrements (ON and OFF channels) in early stages of visual processing (Hartline, 1938; Joesch et al., 2010); however, less is known about how these parallel pathways are recombined to encode form and motion. In Drosophila, genetic knockdown of inputs to putative ON and OFF pathways and direct recording from downstream neurons in the wide-field motion pathway reveal that local elementary motion detectors exist in pairs that separately correlate contrast polarity channels, ON with ON and OFF with OFF (Joesch et al., 2013). However, behavioral responses to reverse-phi motion of discrete features reveal additional correlations of opposite sign (Clark et al., 2011). Here we present intracellular recordings from feature-detecting neurons in the dragonfly that provide direct physiological evidence for the correlation of OFF and ON pathways. These neurons show clear polarity selectivity for feature contrast, responding strongly to targets that are darker than the background and only weakly to dark contrasting edges. These dark-target responses are much stronger than the linear combination of responses to ON and OFF edges. We compare these data with output from elementary motion detector-based models (Eichner et al., 2011; Clark et al., 2011), with and without stages of strong center-surround antagonism. Our data support an alternative elementary small target motion detector model, which derives dark-target selectivity from the correlation of a delayed OFF signal with an undelayed ON signal at each individual visual processing unit (Wiederman et al., 2008, 2009).
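The alternative model named at the end of the abstract correlates a delayed OFF signal with an undelayed ON signal at each visual processing unit, so a small dark feature (an OFF edge followed shortly by an ON edge at the same point) drives a strong response while isolated edges do not. A minimal single-unit sketch, with an assumed delay constant:

```python
import numpy as np

def estmd_unit(luminance, dt=1e-3, tau_delay=0.025):
    """Single-point sketch of the elementary small-target motion detector:
    multiply the undelayed ON signal by a low-pass-delayed OFF signal.
    tau_delay is an assumed value for illustration."""
    d = np.diff(luminance, prepend=luminance[0])
    on, off = np.maximum(d, 0.0), np.maximum(-d, 0.0)
    delayed_off, acc, a = np.empty(len(off)), 0.0, dt / tau_delay
    for i, v in enumerate(off):
        acc += a * (v - acc)
        delayed_off[i] = acc
    return on * delayed_off    # strong only for OFF-then-ON (dark target)
```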
Frontiers in Neural Circuits | 2012
James R. Dunbier; Steven D. Wiederman; Patrick A. Shoemaker; David C. O'Carroll
Dragonflies detect and pursue targets such as other insects for feeding and conspecific interaction. They have a class of neurons highly specialized for this task in their lobula, the “small target motion detecting” (STMD) neurons. One such neuron, CSTMD1, reaches maximum response slowly over hundreds of milliseconds of target motion. Recording the intracellular response from CSTMD1 and a second neuron in this system, BSTMD1, we determined that for the neurons to reach maximum response levels, target motion must produce sequential local activation of elementary motion detecting elements. This facilitation effect is most pronounced when targets move at velocities slower than what was previously thought to be optimal. It is completely disrupted if targets are instantaneously displaced a few degrees from their current location. Additionally, we use a simple computational model to discount the parsimonious hypothesis that CSTMD1's slow build-up to maximum response is due to its incorporating a sluggish neural delay filter. Whilst the observed facilitation may be too slow to play a role in prey pursuit flights, which are typically rapidly resolved, we hypothesize that it helps maintain elevated sensitivity during prolonged, aerobatically intricate conspecific pursuits. Since the effect appears to be localized, it most likely enhances the relative salience of the most recently “seen” locations during such pursuit flights.
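The "sluggish neural delay filter" hypothesis that the model discounts would make the slow build-up a property of time alone: a first-order low-pass reaches a fraction 1 - exp(-t/tau) of its final level after t seconds of motion, regardless of the path the target takes. The observed reset after a small instantaneous displacement is spatial, which such a filter cannot reproduce. A trivial numerical illustration (tau is an assumed value):

```python
import numpy as np

tau = 0.3                             # assumed time constant, in seconds
t = np.array([0.1, 0.3, 0.6, 1.0])    # durations of continuous motion
print(1.0 - np.exp(-t / tau))         # fraction of maximum response reached
```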
The Journal of Experimental Biology | 2014
Jessica L. Fox; Jacob W. Aptekar; Nadezhda M. Zolotova; Patrick A. Shoemaker; Mark A. Frye
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input.
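For readers unfamiliar with the systems-identification step: with an approximately white velocity-impulse stimulus, the first-order (Wiener) kernel of the steering response is proportional to the stimulus-response cross-correlation, and repeating the measurement with figure motion centered at each azimuthal location builds up the STAF. The sketch below shows only this generic kernel estimate; it is not the paper's exact analysis pipeline.

```python
import numpy as np

def impulse_response(stimulus, response, n_lags):
    """First-order Wiener kernel estimate via cross-correlation of a
    (near-)white stimulus with the measured response."""
    s = stimulus - stimulus.mean()
    r = response - response.mean()
    n = len(s)
    # kernel at lag k is E[s(t) * r(t + k)] / var(s)
    return np.array([np.dot(s[:n - k], r[k:]) for k in range(n_lags)]) / (np.var(s) * n)
```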
Frontiers in Behavioral Neuroscience | 2010
Jamie C. Theobald; Patrick A. Shoemaker; Dario L. Ringach; Mark A. Frye
The tiny brains of insects presumably impose significant computational limitations on algorithms controlling their behavior. Nevertheless, they perform fast and sophisticated visual maneuvers. This includes tracking features composed of second-order motion, in which the feature is defined by higher-order image statistics, but not simple correlations in luminance. Flies can track the true direction of even theta motions, in which the first-order (luminance) motion is directed opposite the second-order moving feature. We exploited this paradoxical feature tracking response to dissect the particular image properties that flies use to track moving objects. We find that theta motion detection is not simply a result of steering toward any spatially restricted flicker. Rather, our results show that fly high-order feature tracking responses can be broken down into positional and velocity components – in other words, the responses can be modeled as a superposition of two independent steering efforts. We isolate these elements to show that each has differing influence on phase and amplitude of steering responses, and together they explain the time course of second-order motion tracking responses during flight. These observations are relevant to natural scenes, where moving features can be much more complex.
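One way to see how the positional and velocity components separate in phase: for a feature oscillating as x(t) = A sin(2 pi f t), a position-driven steering effort is in phase with x, while a velocity-driven effort follows dx/dt and leads by 90 degrees, so the relative weighting of the two sets both the phase and the amplitude of the summed response. A minimal sketch with placeholder gains:

```python
import numpy as np

A, f = 1.0, 1.0                                    # illustrative amplitude, Hz
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = A * np.sin(2 * np.pi * f * t)                  # feature position
v = 2 * np.pi * f * A * np.cos(2 * np.pi * f * t)  # feature velocity (90 deg lead)
steer = 0.5 * x + 0.2 * v   # superposed steering efforts; gains are placeholders
# the phase of `steer` relative to x is set by the ratio of the two gains
```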
International Symposium on Microelectronics and MEMS | 2001
Patrick A. Shoemaker; David C. O'Carroll; Andrew D. Straw
Visual detection and processing of motion in insects is thought to occur based on an elementary delay-and-correlate operation at an early stage in the visual pathway. The correlational elementary motion detector (EMD) indicates the presence of moving stimuli on the retina and is directionally sensitive, but it is a complex spatiotemporal filter and does not inherently encode important motion parameters such as velocity. However, additional processing, in combination with natural visual stimuli, may allow computation of useful motion parameters. One such feature is adaptation in response to motion, until recently thought to occur by modification of the delay time constant, but now shown to arise due mainly to adjustment of contrast gain. This adaptation renders EMD output less dependent on scene contrast and enables it to carry some velocity information. We describe an ongoing effort to characterize this system in engineering terms, and to implement an analog VLSI model of it. Building blocks for a correlational EMD, and a mechanism for computing and implementing adjustment of contrast gain are described. This circuitry is intended as front-end processing for classes of higher-level visual motion computation also performed by insects, including estimation of egomotion by optical flow, and detection of moving targets.
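A minimal sketch of the contrast-gain adjustment described above: divide the input by a slowly adapting estimate of its own recent magnitude, so that downstream EMD output depends less on scene contrast and more on velocity. The divisive form, time constant, and epsilon floor are illustrative assumptions, not the analog circuit implementation described in the paper.

```python
import numpy as np

def contrast_gain_control(signal, dt=1e-3, tau=0.5, eps=1e-3):
    """Divisive normalization by a running estimate of signal magnitude.
    All constants are illustrative choices."""
    gain, a = eps, dt / tau
    out = np.empty(len(signal))
    for i, v in enumerate(signal):
        gain += a * (abs(v) - gain)   # slow running contrast estimate
        out[i] = v / (eps + gain)     # normalized drive to the EMD stage
    return out
```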
International Conference on Intelligent Sensors, Sensor Networks and Information Processing | 2011
James R. Dunbier; Steven D. Wiederman; Patrick A. Shoemaker; David C. O'Carroll
Insects are an excellent model system for investigating computational mechanisms evolved for the challenging task of visualising and tracking small moving targets. We examined a well-characterised small target motion detector (STMD) neuron, the dragonfly centrifugal STMD 1 (CSTMD1). This neuron has an unusually slow response onset, with a time course on the order of hundreds of milliseconds. A parsimonious explanation for this slow onset would be temporal low-pass filtering. However, other authors have dismissed this and instead proposed a facilitation mechanism derived from second-order motion detectors. We tested the spatial locality of the response to continuous motion on non-contiguous paths and found that spatial discontinuities in otherwise continuous motion reset the neuronal response. We modelled an array of elementary motion detectors (EMDs) in the insect visual pathway. We found that whilst individual components of the response can be explained simply by modifying the properties of the EMDs, the neuron's response considered as a whole requires further elaborations within the system, such as the proposed second-order motion pathway.
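The facilitation idea can be caricatured as a gain map that builds up around recently stimulated retinal locations and decays elsewhere: a contiguous path keeps entering facilitated territory, while a displaced target lands on unfacilitated ground and the response resets. The sketch below is an illustrative caricature only; the build, decay, and spread constants are assumptions, not values from the model in the paper.

```python
import numpy as np

def facilitated_gain(path, n_locs=100, build=0.2, decay=0.02, spread=3):
    """Response gain along a path of retinal locations, with facilitation
    deposited in a small neighborhood of each visited location."""
    gain = np.zeros(n_locs)
    trace = []
    for loc in path:
        trace.append(1.0 + gain[loc])          # response scaled by local gain
        gain *= (1.0 - decay)                  # gain decays everywhere
        lo, hi = max(0, loc - spread), min(n_locs, loc + spread + 1)
        gain[lo:hi] += build                   # build-up near the target
    return np.array(trace)

contiguous = list(range(40))                    # smooth path
jumped = list(range(20)) + list(range(60, 80))  # displaced mid-path
print(facilitated_gain(contiguous)[20], facilitated_gain(jumped)[20])
# the jump lands on unfacilitated ground, so the response resets to baseline
```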
Biological Cybernetics | 2011
Patrick A. Shoemaker; Andrew M. Hyslop; J. Sean Humbert
We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance, with feedback signals corresponding to the true optic flow that would be induced on an imager (computed from the known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between their outputs and the true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas no such improvement occurred for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglion of flies.
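The evaluation metric, mutual information between an estimator's output and the true optic flow, can be approximated from a joint histogram of the two signals. A generic sketch follows; the bin count is an arbitrary choice, and the paper's exact estimator is not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate, in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty bins in the sum
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```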