
Publication


Featured research published by Paul R. Schrater.


Proceedings of the National Academy of Sciences of the United States of America | 2002

Shape perception reduces activity in human primary visual cortex.

Scott O. Murray; Daniel Kersten; Bruno A. Olshausen; Paul R. Schrater; David L. Woods

Visual perception involves the grouping of individual elements into coherent patterns that reduce the descriptive complexity of a visual scene. The physiological basis of this perceptual simplification remains poorly understood. We used functional MRI to measure activity in a higher object processing area, the lateral occipital complex, and in primary visual cortex in response to visual elements that were either grouped into objects or randomly arranged. We observed significant activity increases in the lateral occipital complex and concurrent reductions of activity in primary visual cortex when elements formed coherent shapes, suggesting that activity in early visual areas is reduced as a result of grouping processes performed in higher areas. These findings are consistent with predictive coding models of vision that postulate that inferences of high-level areas are subtracted from incoming sensory information in lower areas through cortical feedback.


Journal of Cognitive Neuroscience | 2003

Patterns of Activity in the Categorical Representations of Objects

Thomas A. Carlson; Paul R. Schrater; Sheng He

Object perception has been a subject of extensive fMRI studies in recent years. Yet the nature of the cortical representation of objects in the human brain remains controversial. Analyses of fMRI data have traditionally focused on the activation of individual voxels associated with presentation of various stimuli. The current analysis approaches functional imaging data as collective information about the stimulus. Linking activity in the brain to a stimulus is treated as a pattern-classification problem. Linear discriminant analysis was used to reanalyze a set of data originally published by Ishai et al. (2000), available from the fMRIDC (accession no. 2-2000-1113D). Results of the new analysis reveal that patterns of activity that distinguish one category of objects from other categories are largely independent of one another, both in terms of the activity and spatial overlap. The information used to detect objects from phase-scrambled control stimuli is not essential in distinguishing one object category from another. Furthermore, performing an object-matching task during the scan significantly improved the ability to predict objects from controls, but had minimal effect on object classification, suggesting that the task-based attentional benefit was non-specific to object categories.
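The pattern-classification framing above can be illustrated with a toy decoder. This is a minimal sketch on simulated voxel data (not the study's actual dataset), using a hand-rolled Fisher linear discriminant in place of the paper's full analysis pipeline; all variable names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 100

# Simulated voxel activity patterns for two object categories:
# each category adds a distinct mean pattern on top of trial noise.
mu_a = rng.normal(0, 1, n_voxels)
mu_b = rng.normal(0, 1, n_voxels)
X_a = rng.normal(mu_a, 2.0, (n_trials, n_voxels))
X_b = rng.normal(mu_b, 2.0, (n_trials, n_voxels))

# Fisher's linear discriminant: w = S_w^{-1} (mu_b - mu_a),
# the voxel weighting that best separates the two categories.
S_w = np.cov(X_a, rowvar=False) + np.cov(X_b, rowvar=False)
w = np.linalg.solve(S_w, X_b.mean(0) - X_a.mean(0))
threshold = w @ (X_a.mean(0) + X_b.mean(0)) / 2

# Classify each trial by which side of the boundary its projection falls on.
pred_a = X_a @ w < threshold
pred_b = X_b @ w > threshold
accuracy = (pred_a.sum() + pred_b.sum()) / (2 * n_trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

The key point mirrored from the abstract is that the classifier operates on the whole pattern of activity across voxels, not on any single voxel's activation.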


Annual Review of Neuroscience | 2012

Brain plasticity through the life span: learning to learn and action video games

Daphne Bavelier; C. Shawn Green; Alexandre Pouget; Paul R. Schrater

The ability of the human brain to learn is exceptional. Yet, learning is typically quite specific to the exact task used during training, a limiting factor for practical applications such as rehabilitation, workforce training, or education. The possibility of identifying training regimens that have a broad enough impact to transfer to a variety of tasks is thus highly appealing. This work reviews how complex training environments such as action video game play may actually foster brain plasticity and learning. This enhanced learning capacity, termed learning to learn, is considered in light of its computational requirements and putative neural mechanisms.


IEEE Transactions on Multimedia | 2002

Spatial contextual classification and prediction models for mining geospatial data

Shashi Shekhar; Paul R. Schrater; Ranga Raju Vatsavai; Weili Wu; Sanjay Chawla

Modeling spatial context (e.g., autocorrelation) is a key challenge in classification problems that arise in geospatial domains. The Markov random field (MRF) is a popular model for incorporating spatial context into image segmentation and land-use classification problems. The spatial autoregression (SAR) model, which extends the classical regression model to incorporate spatial dependence, is popular for prediction and classification of spatial data in regional economics, natural resources, and ecological studies. There is little literature comparing these alternative approaches to facilitate the exchange of ideas. We argue that the SAR model makes more restrictive assumptions about the distribution of feature values and class boundaries than the MRF. The relationship between SAR and MRF is analogous to the relationship between regression and Bayesian classifiers. This paper provides comparisons between the two models using a probabilistic and an experimental framework.
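For concreteness, the SAR model's generative form is y = rho*W*y + X*beta + eps, which rearranges to y = (I - rho*W)^(-1)(X*beta + eps). A small sketch simulating SAR data on a 5x5 grid with a row-normalized rook-adjacency matrix; all parameter values here are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25  # sites on a 5x5 grid

# Row-normalized contiguity matrix W (rook adjacency on the grid).
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, 5)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < 5 and 0 <= cc < 5:
            W[i, rr * 5 + cc] = 1.0
W /= W.sum(axis=1, keepdims=True)

# SAR: y = rho*W*y + X*beta + eps  =>  y = (I - rho*W)^{-1}(X*beta + eps)
rho, beta = 0.6, 2.0
X = rng.normal(size=(n, 1))
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, X[:, 0] * beta + eps)

# Neighboring values end up correlated: the spatial dependence OLS ignores.
moran = np.corrcoef(y, W @ y)[0, 1]
print(f"correlation between y and its neighborhood average: {moran:.2f}")
```

Setting rho = 0 recovers the classical regression model, which is the sense in which SAR extends it.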


The Journal of Neuroscience | 2012

Multisensory Decision-Making in Rats and Humans

David Raposo; John P. Sheppard; Paul R. Schrater; Anne K. Churchland

We report a novel multisensory decision task designed to encourage subjects to combine information across both time and sensory modalities. We presented subjects, humans and rats, with multisensory event streams, consisting of a series of brief auditory and/or visual events. Subjects made judgments about whether the event rate of these streams was high or low. We have three main findings. First, we report that subjects can combine multisensory information over time to improve judgments about whether a fluctuating rate is high or low. Importantly, the improvement we observed was frequently close to, or better than, the statistically optimal prediction. Second, we found that subjects showed a clear multisensory enhancement both when the inputs in each modality were redundant and when they provided independent evidence about the rate. This latter finding suggests a model where event rates are estimated separately for each modality and fused at a later stage. Finally, because a similar multisensory enhancement was observed in both humans and rats, we conclude that the ability to optimally exploit sequentially presented multisensory information is not restricted to a particular species.
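The "statistically optimal prediction" for combining two independent cues is the inverse-variance-weighted average, which also sets the benchmark for the combined uncertainty. A minimal sketch (all numbers hypothetical, for illustration only):

```python
# Single-cue rate estimates and their standard deviations (hypothetical).
rate_a, sigma_a = 11.0, 2.0   # auditory estimate, events/s
rate_v, sigma_v = 13.0, 1.0   # visual estimate, events/s

# Inverse-variance weighting: more reliable cues get more weight.
w_a = sigma_a**-2 / (sigma_a**-2 + sigma_v**-2)
w_v = 1.0 - w_a
rate_av = w_a * rate_a + w_v * rate_v

# The optimal combined uncertainty is below either single cue's.
sigma_av = (sigma_a**-2 + sigma_v**-2) ** -0.5

print(f"combined estimate: {rate_av:.1f} events/s, sigma = {sigma_av:.2f}")
```

A subject whose multisensory performance approaches sigma_av is said to combine the cues near-optimally, which is the comparison the study makes.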


Vision Research | 1998

Local Velocity Representation: Evidence from Motion Adaptation

Paul R. Schrater; Eero P. Simoncelli

Adaptation to a moving visual pattern induces shifts in the perceived motion of subsequently viewed moving patterns. Explanations of such effects are typically based on adaptation-induced sensitivity changes in spatio-temporal frequency tuned mechanisms (STFMs). An alternative hypothesis is that adaptation occurs in mechanisms that independently encode direction and speed (DSMs). Yet a third possibility is that adaptation occurs in mechanisms that encode 2D pattern velocity (VMs). We performed a series of psychophysical experiments to examine predictions made by each of the three hypotheses. The results indicate that: (1) adaptation-induced shifts are relatively independent of spatial pattern of both adapting and test stimuli; (2) the shift in perceived direction of motion of a plaid stimulus after adaptation to a grating indicates a shift in the motion of the plaid pattern, and not a shift in the motion of the plaid components; and (3) the 2D pattern of shift in perceived velocity radiates away from the adaptation velocity, and is inseparable in speed and direction of motion. Taken together, these results are most consistent with the VM adaptation hypothesis.


The Journal of Neuroscience | 2007

Humans Trade Off Viewing Time and Movement Duration to Improve Visuomotor Accuracy in a Fast Reaching Task

Peter Battaglia; Paul R. Schrater

Previous research has shown that the brain uses statistical knowledge of both sensory and motor accuracy to optimize behavioral performance. Here, we present the results of a novel experiment in which participants could control both of these quantities at once. Specifically, maximum performance demanded the simultaneous choice of viewing and movement durations, which directly impacted visual and motor accuracy. Participants reached to a target indicated imprecisely by a two-dimensional distribution of dots within a 1200 ms time limit. By choosing when to reach, participants selected the quality of visual information regarding target location as well as the remaining time available to execute the reach. New dots, and consequently more visual information, appeared until the reach was initiated; after reach initiation, no new dots appeared. However, speed-accuracy trade-offs in motor control make early reaches (much remaining time) precise and late reaches (little remaining time) imprecise. Based on each participant's visual-only and motor-only target-hitting performance, we computed an “ideal reacher” that selects reach initiation times that minimize predicted reach endpoint deviations from the true target location. The participants' timing choices were qualitatively consistent with the ideal predictions: choices varied with stimulus changes (but less than the predicted magnitude) and resulted in near-optimal performance despite the absence of direct feedback defining ideal performance. Our results suggest that visual estimates and their respective accuracies are passed to motor planning systems, which in turn predict the precision of potential reaches and control viewing and movement timing to favorably trade off visual and motor accuracy.
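The ideal-reacher logic can be sketched in a few lines: viewing longer shrinks visual uncertainty (more dots accumulate) but leaves less movement time, which inflates motor uncertainty, so the ideal timing minimizes the sum of the two variances. The variance functions and parameters below are illustrative stand-ins, not the paper's fitted models:

```python
import numpy as np

T = 1.2  # total time budget in seconds (the task's 1200 ms limit)

# Hypothetical accuracy models (illustrative parameters only).
def visual_var(t_view):
    # Visual uncertainty falls as more dots are seen.
    return 4.0 / (1.0 + 20.0 * t_view)

def motor_var(t_move):
    # Motor uncertainty grows as the movement is rushed.
    return 0.5 / t_move**2

# The ideal reacher picks the viewing time minimizing predicted
# total endpoint variance (visual + motor contributions).
t_view = np.linspace(0.05, T - 0.05, 500)
total_var = visual_var(t_view) + motor_var(T - t_view)
t_star = t_view[np.argmin(total_var)]
print(f"ideal viewing time: {t_star:.2f} s of {T} s")
```

The minimum sits strictly inside the interval: waiting as long as possible or reaching immediately are both predicted to be worse than the intermediate trade-off.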


Journal of Intelligent and Robotic Systems | 2007

Optimal Camera Placement for Automated Surveillance Tasks

Robert Bodor; Andrew Drenner; Paul R. Schrater; Nikolaos Papanikolopoulos

Camera placement has an enormous impact on the performance of vision systems, but the best placement to maximize performance depends on the purpose of the system. As a result, this paper focuses largely on the problem of task-specific camera placement. We propose a new camera placement method that optimizes views to provide the highest-resolution images of objects and motions in the scene that are critical for the performance of some specified task (e.g., motion recognition, visual metrology, or part identification). A general analytical formulation of the observation problem is developed in terms of the motion statistics of a scene and the resolution of observed actions, resulting in an aggregate observability measure. The goal of this system is to optimize, across multiple cameras, the aggregate observability of the set of actions performed in a defined area. The method considers dynamic and unpredictable environments, where the subject of interest changes in time. It does not attempt to measure or reconstruct surfaces or objects, and does not use an internal model of the subjects for reference. As a result, this method differs significantly in its core formulation from camera placement solutions applied to problems such as inspection, reconstruction, or the Art Gallery class of problems. We present tests of the system's optimized camera placement solutions using real-world data in both indoor and outdoor situations, and robot-based experimentation using an ATRV-Jr all-terrain robot vehicle in an indoor setting.
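The aggregate observability idea reduces to scoring each candidate camera pose by a frequency-weighted sum over the scene's motion statistics. This toy version uses an inverse-squared-distance proxy for resolution and invented geometry; it illustrates the optimization structure, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scene: motion-path centroids and how often each occurs.
paths = rng.uniform(-1, 1, size=(20, 2))   # path locations in the workspace
freq = rng.dirichlet(np.ones(20))          # motion statistics: p(path)

def observability(cam, paths, freq):
    # Resolution of an observed action falls off with distance from the
    # camera; score each path by a resolution proxy, weighted by frequency.
    d = np.linalg.norm(paths - cam, axis=1)
    return np.sum(freq / d**2)

# Candidate camera positions on a circle of radius 3 around the workspace.
angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
cams = 3.0 * np.c_[np.cos(angles), np.sin(angles)]
scores = np.array([observability(c, paths, freq) for c in cams])
best = cams[np.argmax(scores)]
print(f"best camera position: ({best[0]:.2f}, {best[1]:.2f})")
```

Because the score is driven by where motion actually happens, the chosen pose adapts to the scene's statistics rather than to any model of the subjects themselves, matching the paper's model-free emphasis.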


Journal of Vision | 2008

Perceptual multistability predicted by search model for Bayesian decisions.

Rashmi Sundareswara; Paul R. Schrater

Perceptual multistability refers to the phenomenon of spontaneous perceptual switching between two or more likely interpretations of an image. Although frequently explained by processes of adaptation or hysteresis, we show that perceptual switching can arise as a natural byproduct of perceptual decision making based on probabilistic (Bayesian) inference, which interprets images by combining probabilistic models of image formation with knowledge of scene regularities. Empirically, we investigated the effect of introducing scene regularities on Necker cube bistability by flanking the Necker cube with fields of unambiguous cubes that are oriented to coincide with one of the Necker cube percepts. We show that background cubes increase the time spent in percepts most similar to the background. To characterize changes in the temporal dynamics of the perceptual alternations beyond percept durations, we introduce Markov Renewal Processes (MRPs). MRPs provide a general mathematical framework for describing probabilistic switching behavior in finite state processes. Additionally, we introduce a simple theoretical model consistent with Bayesian models of vision that involves searching for good interpretations of an image by sampling a posterior distribution, coupled with a decay process that favors recent over old interpretations. The model has the same quantitative characteristics as our human data, and variation in model parameters can capture between-subject variation. Because the model produces the same kind of stochastic process found in human perceptual behavior, we conclude that multistability may represent an unavoidable by-product of normal perceptual (Bayesian) decision making with ambiguous images.
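The sampling-plus-decay account can be caricatured in a few lines: draw interpretations from the posterior, weight recent samples more heavily than old ones, and report whichever interpretation currently dominates. All parameter values below are hypothetical, and this two-state caricature stands in for the paper's full model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two interpretations of the ambiguous figure; the posterior is biased
# toward one of them (e.g. by the background cubes). Values hypothetical.
p_up = 0.6          # posterior probability of the "up" percept
decay = 0.9         # exponential decay favoring recent samples
n_steps = 5000

score = np.zeros(2)                       # decayed evidence per percept
percepts = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    s = 0 if rng.random() < p_up else 1   # sample from the posterior
    score = decay * score                 # old samples fade
    score[s] += 1.0
    percepts[t] = np.argmax(score)        # report the dominant percept

switches = np.count_nonzero(np.diff(percepts))
frac_up = np.mean(percepts == 0)
print(f"{switches} switches; fraction of time in biased percept: {frac_up:.2f}")
```

Even with a fixed posterior, sampling noise plus decay produces spontaneous alternations, with the context-favored percept holding the majority of the time, which is the qualitative pattern the paper reports.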


Vision Research | 2004

BOLD fMRI and psychophysical measurements of contrast response to broadband images

Cheryl A. Olman; Kamil Ugurbil; Paul R. Schrater; Daniel Kersten

We have measured the relationship between image contrast, perceived contrast, and BOLD fMRI activity in human early visual areas, for natural, whitened, pink noise, and white noise images. As root-mean-square contrast increases, the BOLD response to natural images is stronger and saturates more rapidly than the response to the whitened images. Perceived contrast and BOLD fMRI responses are higher for pink noise than for white noise patterns, by the same ratio as between natural and whitened images. Spatial phase structure has no measurable effect on perceived contrast or BOLD fMRI response. The fMRI and perceived contrast response results can be described by models of spatial frequency response in V1 that match the contrast sensitivity function at low contrasts and have a more uniform spatial frequency response at high contrasts.
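For reference, root-mean-square contrast is the standard deviation of luminance divided by its mean. A small sketch (helper names and the test image are hypothetical) for measuring it and rescaling an image to a target contrast, the quantity varied in experiments like this one:

```python
import numpy as np

def rms_contrast(img):
    """Root-mean-square contrast: std of luminance / mean luminance."""
    return img.std() / img.mean()

def set_rms_contrast(img, target):
    """Rescale luminance about its mean to hit a target RMS contrast."""
    mean = img.mean()
    return mean + (img - mean) * (target / rms_contrast(img))

rng = np.random.default_rng(3)
img = rng.uniform(0.2, 0.8, size=(64, 64))   # hypothetical luminance image

low = set_rms_contrast(img, 0.1)
high = set_rms_contrast(img, 0.4)
print(f"{rms_contrast(low):.2f}, {rms_contrast(high):.2f}")
```

Scaling deviations about the mean changes contrast without changing mean luminance, so the manipulation is confined to the variable of interest.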

Collaboration


Dive into Paul R. Schrater's collaborations.

Top Co-Authors

Alok Gupta, University of Minnesota
John Collins, University of Minnesota
Wolfgang Ketter, Erasmus University Rotterdam
C. Shawn Green, University of Wisconsin-Madison