Publication


Featured research published by Joseph Schmidt.


Quarterly Journal of Experimental Psychology | 2009

Search guidance is proportional to the categorical specificity of a target cue

Joseph Schmidt; Gregory J. Zelinsky

Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.


NeuroImage | 2013

Neural correlates of attentional deployment within unpleasant pictures.

Jamie Ferri; Joseph Schmidt; Greg Hajcak; Turhan Canli

Attentional deployment is an emotion regulation strategy that involves shifting attentional focus towards or away from particular aspects of emotional stimuli. Previous studies have highlighted the prevalence of attentional deployment and demonstrated that it can have a significant impact on brain activity and behavior. However, little is known about the neural correlates of this strategy. The goal of the present studies was to examine the effect of attentional deployment on neural activity by directing attention to more or less arousing portions of unpleasant images. In Studies 1 and 2, participants passively viewed counterbalanced blocks of unpleasant images without a focus, unpleasant images with an arousing focus, unpleasant images with a non-arousing focus, neutral images without a focus, and neutral images with a non-arousing focus for 4000 ms each. In Study 2, eye-tracking data were collected on all participants during image acquisition. In both studies, affect ratings following each block indicated that participants felt significantly less negative affect after viewing unpleasant images with a non-arousing focus compared to unpleasant images with an arousing focus. In both studies, the unpleasant non-arousing focus condition compared to the unpleasant arousing focus condition was associated with increased activity in frontal and parietal regions implicated in inhibitory control and visual attention. In Study 2, the unpleasant non-arousing focus condition compared to the unpleasant arousing focus condition was associated with reduced activity in the amygdala and visual cortex. Collectively these data suggest that attending to a non-arousing portion of an unpleasant image successfully reduces subjective negative affect and recruits fronto-parietal networks implicated in inhibitory control. Moreover, when ensuring task compliance by monitoring eye movements, attentional deployment modulates amygdala activity.


Visual Cognition | 2009

An effect of referential scene constraint on search implies scene segmentation

Gregory J. Zelinsky; Joseph Schmidt

Subjects searched aerial images for a UFO target, which appeared hovering over one of five scene regions: water, fields, foliage, roads, or buildings. Prior to search scene onset, subjects were either told the scene region where the target could be found (specified condition) or not (unspecified condition). Search times were faster and fewer eye movements were needed to acquire targets when the target region was specified. Subjects also distributed their fixations disproportionately in this region and tended to fixate the cued region sooner. We interpret these patterns as evidence for the use of referential scene constraints to partially confine search to a specified scene region. Importantly, this constraint cannot be due to learned associations between the scene and its regions, as these spatial relationships were unpredictable. These findings require the modification of existing theories of scene constraint to include segmentation processes that can rapidly bias search to cued regions.


Frontiers in Systems Neuroscience | 2013

Co-registration of eye movements and event-related potentials in connected-text paragraph reading.

John M. Henderson; Steven G. Luke; Joseph Schmidt; John E. Richards

Eyetracking during reading has provided a critical source of on-line behavioral data informing basic theory in language processing. Similarly, event-related potentials (ERPs) have provided an important on-line measure of the neural correlates of language processing. Recently there has been strong interest in co-registering eyetracking and ERPs from simultaneous recording to capitalize on the strengths of both techniques, but a challenge has been devising approaches for controlling artifacts produced by eye movements in the EEG waveform. In this paper we describe our approach to correcting for eye movements in EEG and demonstrate its applicability to reading. The method is based on independent components analysis, and uses three criteria for identifying components tied to saccades: (1) component loadings on the surface of the head are consistent with eye movements; (2) source analysis localizes component activity to the eyes; and (3) the temporal activation of the component occurs at the time of the eye movement and differs for right and left eye movements. We demonstrate this method's applicability to reading by comparing ERPs time-locked to fixation onset in two reading conditions. In the text-reading condition, participants read paragraphs of text. In the pseudo-reading control condition, participants moved their eyes through spatially similar pseudo-text that preserved word locations, word shapes, and paragraph spatial structure, but eliminated meaning. The corrected EEG, time-locked to fixation onsets, showed effects of reading condition in early ERP components. The results indicate that co-registration of eyetracking and EEG in connected-text paragraph reading is possible, and has the potential to become an important tool for investigating the cognitive and neural bases of on-line language processing in reading.
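
Because this abstract describes a concrete analysis pipeline, a brief illustration may help. Below is a minimal sketch of ICA-based ocular-artifact removal using the open-source MNE-Python library. It is not the authors' implementation: the file name and EOG channel name are hypothetical, and component selection here uses MNE's EOG-correlation heuristic as a stand-in for the paper's three criteria (scalp topography, source localization to the eyes, and saccade-locked time courses).

# Minimal sketch of ICA-based ocular-artifact removal with MNE-Python.
# NOT the authors' pipeline: components are flagged via correlation with
# EOG channels rather than the paper's three selection criteria.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("reading_session_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=None)  # high-pass filtering improves the ICA decomposition

ica = ICA(n_components=20, method="infomax", random_state=97)
ica.fit(raw)

# Flag components correlated with the EOG channels, then inspect their
# scalp topographies before excluding them from the reconstruction.
eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name="EOG061")  # channel name is an assumption
ica.exclude = eog_inds
raw_clean = ica.apply(raw.copy())  # EEG with ocular components removed

Fixation-locked ERPs would then be computed from raw_clean by epoching on fixation-onset events from the co-registered eye tracker.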


Vision Research | 2011

Visual search guidance is best after a short delay.

Joseph Schmidt; Gregory J. Zelinsky

Search displays are typically presented immediately after a target cue, but in the real world, delays often exist between target designation and search. Experiments 1 and 2 asked how search guidance changes with delay. Targets were cued using a picture or text label, each for 3000 ms, followed by a delay of up to 9000 ms before the search display. Search stimuli were realistic objects, and guidance was quantified using multiple eye movement measures. Text-based cues showed a non-significant trend towards greater guidance following any delay relative to a no-delay condition. However, guidance from a pictorial cue increased sharply 300–600 ms after preview offset. Experiment 3 replicated this guidance enhancement using shorter preview durations while equating the time from cue onset to search onset, demonstrating that the guidance benefit is linked to preview offset rather than to more complete encoding of the target. Experiment 4 showed that enhanced guidance persists even with a mask flashed at preview offset, suggesting an explanation other than visual priming. We interpret our findings as evidence for the rapid consolidation of target information into a guiding representation, which attains its maximum effectiveness shortly after preview offset.


Journal of Experimental Psychology: Human Perception and Performance | 2015

Classifying mental states from eye movements during scene viewing.

Omid Kardan; Marc G. Berman; Grigori Yourganov; Joseph Schmidt; John M. Henderson

How eye movements reflect underlying cognitive processes during scene viewing has been a topic of considerable theoretical interest. In this study, we used eye-movement features and their distributions over time to successfully classify mental states as indexed by the behavioral task performed by participants. We recorded eye movements from 72 participants performing 3 scene-viewing tasks: visual search, scene memorization, and aesthetic preference. To classify these tasks, we used statistical features (mean, standard deviation, and skewness) of fixation durations and saccade amplitudes, as well as the total number of fixations. The same set of visual stimuli was used in all tasks to exclude the possibility that different salient scene features influenced eye movements across tasks. All of the tested classification algorithms were successful in predicting the task within a single participant. The linear discriminant algorithm was also successful in predicting the task for each participant when the training data came from other participants, suggesting some generalizability across participants. The number of fixations contributed most to task classification; however, the remaining features and, in particular, their covariance provided important task-specific information. These results provide evidence on how participants perform different visual tasks. In the visual search task, for example, participants exhibited more variance and skewness in fixation durations and saccade amplitudes, but also showed heightened correlation between fixation durations and the variance in fixation durations. In summary, these results point to the possibility that eye-movement features and their distributional properties can be used to classify mental states both within and across individuals.
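
Since the abstract names the exact feature set (mean, standard deviation, and skewness of fixation durations and saccade amplitudes, plus fixation count) and a linear discriminant classifier evaluated across participants, a short sketch may clarify the setup. This uses scikit-learn; the toy data, array shapes, and the trial_features helper are illustrative assumptions, not the paper's code.

# Sketch of the task-classification analysis described above: summary
# statistics of eye-movement features fed to a linear discriminant
# classifier with leave-one-participant-out evaluation. The data and
# feature-extraction details are assumptions, not the paper's pipeline.
import numpy as np
from scipy.stats import skew
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def trial_features(fix_durs, sacc_amps):
    """Mean, SD, and skewness of fixation durations and saccade
    amplitudes, plus the total number of fixations (7 features)."""
    return np.array([
        fix_durs.mean(), fix_durs.std(), skew(fix_durs),
        sacc_amps.mean(), sacc_amps.std(), skew(sacc_amps),
        len(fix_durs),
    ])

# Toy data standing in for real eye-tracking features:
# X is (n_trials, 7), y is the task label, groups is the participant ID.
rng = np.random.default_rng(0)
X = rng.normal(size=(216, 7))
y = rng.integers(0, 3, size=216)      # search / memorization / preference
groups = np.repeat(np.arange(72), 3)  # 72 participants, 3 trials each

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"mean leave-one-participant-out accuracy: {scores.mean():.2f}")

LeaveOneGroupOut keeps each participant's trials out of the folds used to train their classifier, which is the across-participant generalization the abstract reports.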


Biological Psychology | 2012

Electrocortical and ocular indices of attention to fearful and neutral faces presented under high and low working memory load

Annmarie MacNamara; Joseph Schmidt; Gregory J. Zelinsky; Greg Hajcak

Working memory load reduces the late positive potential (LPP), consistent with the notion that functional activation of the DLPFC attenuates neural indices of sustained attention. Visual attention also modulates the LPP. In the present study, we sought to determine whether working memory load might exert its influence on ERPs by reducing fixations to arousing picture regions. We simultaneously recorded eye-tracking and EEG while participants performed a working memory task interspersed with the presentation of task-irrelevant fearful and neutral faces. As expected, fearful compared to neutral faces elicited larger N170 and LPP amplitudes; in addition, working memory load reduced the N170 and the LPP. Participants made more fixations to arousing regions of neutral faces and faces presented under high working memory load. Therefore, working memory load did not induce avoidance of arousing picture regions and visual attention cannot explain load effects on the N170 and LPP.


Visual Cognition | 2014

Eye movement control during scene viewing: Immediate degradation and enhancement effects of spatial frequency filtering

John M. Henderson; Jennifer Olejarczyk; Steven G. Luke; Joseph Schmidt

What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.
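
The blur manipulation described above is spatial low-pass filtering: stronger blur removes more high spatial frequencies. As a rough illustration only (the sigma values are arbitrary, not the experiment's filter settings), a Gaussian filter with SciPy:

# Gaussian blur as spatial low-pass filtering; larger sigma removes
# more high-frequency detail. Illustrative values, not the study's.
import numpy as np
from scipy import ndimage

def low_pass(scene: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian low-pass filter; sigma (in pixels) sets the cutoff."""
    return ndimage.gaussian_filter(scene, sigma=sigma)

scene = np.random.rand(768, 1024)           # stand-in for a grayscale photograph
mildly_blurred = low_pass(scene, sigma=2)   # degrades fine detail
heavily_blurred = low_pass(scene, sigma=8)  # stronger degradation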


Journal of Vision | 2014

Dissociating temporal inhibition of return and saccadic momentum across multiple eye-movement tasks

Steven G. Luke; Tim J. Smith; Joseph Schmidt; John M. Henderson

Saccade latencies are longer prior to an eye movement to a recently fixated location than to control locations, a phenomenon known as oculomotor inhibition of return (O-IOR). There are theoretical reasons to expect that O-IOR would vary in magnitude across different eye movement tasks, but previous studies have produced contradictory evidence. However, this may have been because previous studies have not dissociated O-IOR and a related phenomenon, saccadic momentum, which is a bias to repeat saccade programs that also influences saccade latencies. The present study dissociated the influence of O-IOR and saccadic momentum across three complex visual tasks: scene search, scene memorization, and scene aesthetic preference. O-IOR was of similar magnitude across all three tasks, while saccadic momentum was weaker in scene search.


Visual Cognition | 2014

Are summary statistics enough? Evidence for the importance of shape in guiding visual search

Robert Alexander; Joseph Schmidt; Gregory J. Zelinsky

Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: One that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search.

Collaboration


Dive into Joseph Schmidt's collaborations.

Top Co-Authors

Steven G. Luke (Brigham Young University)
Greg Hajcak (Florida State University)
Jennifer Olejarczyk (University of South Carolina)
Ashley Ercolino (University of Central Florida)
Jamie Ferri (Stony Brook University)
John E. Richards (University of South Carolina)