Publication


Featured research published by Gerald P. McDonnell.


Basic and Applied Social Psychology | 2014

Perceptual Processes in the Cross-Race Effect: Evidence From Eyetracking

Gerald P. McDonnell; Brian H. Bornstein; Cindy Laub; Mark Mills; Michael D. Dodd

The cross-race effect (CRE) is the tendency to recognize same-race faces more accurately than other-race faces, a difference attributed to differential encoding strategies. Research exploring the nature of these encoding differences has yielded few definitive conclusions. The present experiments explored this issue using an eyetracker during a recognition task in which White participants viewed White and African American faces. Participants fixated faster and longer on the upper features of White faces and the lower features of African American faces. When participants were instructed to attend to certain features of African American faces, this pattern was exaggerated. Gaze patterns were related to improved recognition accuracy.


Acta Psychologica | 2012

Gaze cues influence memory…but not for long

Michael D. Dodd; Noah A. Weiss; Gerald P. McDonnell; Amara Sarwal; Alan Kingstone

Many factors influence the manner in which material is encoded into memory, with one of the most important determinants of subsequent memorability being the degree to which an item is attended at study. Attentional gaze manipulations, which occur when a task-irrelevant face at fixation looks towards or away from a target, have been shown to enhance attention such that gazed-at stimuli elicit quicker responses. In the present study, four experiments were conducted to determine whether gaze cues can also influence recall of items appearing at gazed-at or gazed-away-from locations. In Experiment 1, an irrelevant gaze cue at fixation preceded the presentation of to-be-remembered items, each of which remained on screen for 1000 ms; gaze direction had no effect on memory for words. In Experiment 2, the presentation time for to-be-remembered items was reduced to 250 ms or 500 ms, and gazed-at items were now more memorable. In Experiment 3, we manipulated the intentionality of the memory instruction and demonstrated that gaze cues influence memory even when participants are not explicitly attempting to memorize items. Finally, Experiment 4 demonstrated that these findings are specific to gaze cues, as no memory effect was observed when arrow cues were presented. It is argued that gaze cues can modify memory for items, but that this effect is primarily attributable to shifts of attention away from target items when a gaze cue is invalid.


Teaching of Psychology | 2017

Should Students Have the Power to Change Course Structure?

Gerald P. McDonnell; Michael D. Dodd

In the present article, we describe a course exercise in which students were administered four course evaluation forms throughout the semester, on which they provided their overall impressions of the class as well as their desire to change certain aspects of the course. Critically, during the semester, a total of three changes were made to the structure of the course as voted on by the students. Compared to the previous semester, in which students completed only end-of-semester evaluations, improvements in exam performance as well as instructor ratings were observed. Furthermore, students indicated that the changes made throughout the semester improved the course, and they hoped that other classes would adopt a similar classroom developmental strategy. This supports a growing body of evidence suggesting that midsemester feedback is crucial for optimizing the learning environment for the student, particularly when concrete changes are made after the administration of course feedback.


Visual Cognition | 2018

You detect while I search: examining visual search efficiency in a joint search task

Gerald P. McDonnell; Mark Mills; Jordan Marshall; Joshua Zosky; Michael D. Dodd

Numerous factors impact attentional allocation, with behaviour being strongly influenced by the interaction between individual intent and our visual environment. Traditionally, visual search efficiency has been studied under solo search conditions. Here, we propose a novel joint search paradigm where one individual controls the visual input available to another individual via a gaze contingent window (e.g., Participant 1 controls the window with their eye movements and Participant 2, in an adjoining room, sees only stimuli that Participant 1 is fixating and responds to the target accordingly). Pairs of participants completed three blocks of a detection task that required them to: (1) search and detect the target individually, (2) search the display while their partner performed the detection task, or (3) detect while their partner searched. Search was most accurate when the person detecting was doing so for the second time while the person controlling the visual input was doing so for the first time, even when compared to participants with advanced solo or joint task experience (Experiments 2 and 3). Through surrendering control of one's search strategy, we posit that there is a benefit of a reduced working memory load for the detector, resulting in more accurate search. This paradigm creates a counterintuitive speed/accuracy trade-off which combines the heightened ability that comes from task experience (discrimination task) with the slower performance times associated with a novel task (the initial search) to create a potentially more efficient method of visual search.


Psychological Research (Psychologische Forschung) | 2015

How does implicit learning of search regularities alter the manner in which you search?

Gerald P. McDonnell; Mark Mills; Leslie McCuller; Michael D. Dodd

Individuals are highly sensitive to statistical regularities in their visual environment, even when these patterns do not reach conscious awareness. Here, we examine whether oculomotor behavior is systematically altered when distractor/target configurations rarely repeat, but target location on an initial trial predicts the location of a target on the subsequent trial. The purpose of the current study was to explore whether this temporal-spatial contextual cueing in a conjunction search task influences both reaction time to the target and participants' search strategy. Participants searched for a target through a gaze-contingent window in a display consisting of a large number of distractors, providing a target-present/absent response. Participants were faster to respond to the target on the predicted trial relative to the predictor trial in an implicit contextual cueing task but were no more likely to fixate first on the target quadrant on the predicted trial (Experiment 1). Furthermore, implicit learning was interrupted when participants were instructed to vary their search strategy across trials to eliminate visual scan similarity (Experiment 2). In Experiment 3, when participants were explicitly informed that a pattern was present at the start of the experiment, explicit learning was observed in both reaction time and eye movements. The present experiments provide evidence that implicit learning of sequential regularities regarding target locations is not based on learning more efficient scan paths, but is due to some other mechanism.


Journal of Vision | 2015

Examining the Relative Strength of Attentional Cues and the Time Course of Exogenous Orienting

Gerald P. McDonnell; Michael D. Dodd

The current study examined the relationship between endogenous, exogenous, and symbolic attention as it relates to working memory and attentional allocation. Previous research has established that exogenous cues result in early facilitation and later inhibition at cued locations (Posner & Cohen, 1984). In contrast, endogenous cues result in long-lasting facilitation, but not inhibition of return (IOR), when responding to the location of a target. This pattern of results remains consistent even when participants are required to hold nonpredictive arrow cues in working memory (McDonnell & Dodd, 2013). The present experiments examined the characteristics of these attentional effects when the irrelevant spatial cue is relevant to a secondary memory task, requiring it to be processed. In Experiment 1, participants completed a standard Posner cueing task while holding in memory a colored placeholder that either matched or did not match the subsequent color of the exogenous cue. This is a departure from previous research, in which participants are normally instructed to ignore the exogenous cue or told that it is irrelevant. Surprisingly, no facilitation was observed at early SOAs (200 ms), but standard IOR was observed at intermediate SOAs (500 ms) and late SOAs (800 ms; these effects replicated when the difficulty of the memory test was decreased). In Experiment 2, when participants maintained in memory an irrelevant arrow cue while responding to a target preceded by an exogenous cue, standard exogenous IOR effects were observed at intermediate and late SOAs, but once again no facilitation appeared at early SOAs (presumably due to the attentional demands of the working memory load), independent of the nonpredictive arrow cue (25% or 50% predictive). However, when the arrow cue was made 75% predictive, exogenous inhibition was not observed at intermediate SOAs when the arrow cue was valid.
The present experiments provide important insight into the interaction between working memory and attentional orienting. Meeting abstract presented at VSS 2015.


Journal of Vision | 2015

Human classifier: Can individuals deduce which task someone was performing based solely on their eye movements?

Michael D. Dodd; Brett Bahle; Mark Mills; Monica Rosen; Gerald P. McDonnell; Joseph MacInnes

Numerous investigations have revealed that eye movements and fixation locations differ as a function of how an individual is processing a scene (e.g., Castelhano et al., 2009; Dodd et al., 2009; Land & Hayhoe, 2001; Mills et al., 2011; Yarbus, 1967). As a consequence, a common question of interest is whether a participant's task can be predicted from their observed pattern of eye movements. To that end, a number of researchers have taken a cue from the machine learning literature and attempted to train a task set classifier with varying degrees of success (e.g., Borji & Itti, 2014; Greene et al., 2012; Henderson et al., 2013). In the present experiments, we examine whether human participants can effectively classify task set based on the eye movements of others and how their performance compares to that of a recent classifier (MacInnes et al., VSS, 2014). Participants view either (a) the fixation locations and fixation durations of an individual scanning a scene (independent of scanpath), (b) the scanpaths of an individual scanning a scene (independent of fixation durations), or (c) video playback of eye movement locations (preserving scanpath and duration information), as they attempt to determine whether the original task was visual search, memorization, or pleasantness rating. Moreover, eye movement information is provided to participants under conditions in which the original scene is present or absent. Participants perform this task at above-chance levels, though there is considerable variability in performance as a function of task type (e.g., better at identifying search), whether the scene is present or absent, and whether the original task was performed under blocked or mixed (task-switching) conditions. These results provide important insight into our understanding of scene perception and the manner in which individuals interpret the eye movements of others. Meeting abstract presented at VSS 2015.


Journal of Experimental Psychology: Human Perception and Performance | 2013

Examining the Influence of a Spatially Irrelevant Working Memory Load on Attentional Allocation

Gerald P. McDonnell; Michael D. Dodd

The present study examined the influence of holding task-relevant gaze cues in working memory during a target detection task. Gaze cues shift attention in gaze-consistent directions, even when they are irrelevant to a primary detection task. It is unclear, however, whether gaze cues need to be perceived online to elicit these effects, or how these effects may be moderated if the gaze cues are relevant to a secondary task. In Experiment 1, participants encoded a face for a subsequent memory task, after which they performed an unrelated target detection task. Critically, gaze direction was irrelevant to the target detection task, but memory for the perceived face was tested at trial conclusion. Surprisingly, participants exhibited inhibition of return (IOR) rather than facilitation, with slower response times for the gazed-at location. In Experiment 2, presentation duration and cue-target stimulus-onset asynchrony were manipulated, and we continued to observe IOR with no early facilitation. Experiment 3 revealed facilitation but not IOR when the memory task was removed; Experiment 4 also revealed facilitation when the gaze cue memory task was replaced with arrow cues. The present experiments provide an important dissociation between perceiving cues online versus holding them in memory as it relates to attentional allocation.


Journal of Vision | 2014

Examining the influence of nonpredictive arrow cues and a working memory load on visual search performance

Gerald P. McDonnell; Michael D. Dodd


F1000Research | 2013

I still haven't found what you're looking for: searching for myself and then searching for you too

Michael D. Dodd; Mark Mills; Gerald P. McDonnell

Collaboration


Dive into Gerald P. McDonnell's collaboration.

Top Co-Authors

Michael D. Dodd (University of Nebraska–Lincoln)
Mark Mills (University of Nebraska–Lincoln)
Brett Bahle (University of Nebraska–Lincoln)
Brian H. Bornstein (University of Nebraska–Lincoln)
Cindy Laub (University of Nebraska–Lincoln)
Jordan Marshall (University of Nebraska–Lincoln)
Joshua Zosky (University of Nebraska–Lincoln)
Leslie McCuller (University of Nebraska–Lincoln)
Monica Rosen (University of Nebraska–Lincoln)