Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mark Mills is active.

Publication


Featured research published by Mark Mills.


Journal of Vision | 2011

Examining the influence of task set on eye movements and fixations.

Mark Mills; Andrew Hollingworth; Stefan Van der Stigchel; Lesa Hoffman; Michael D. Dodd

The purpose of the present study was to examine the influence of task set on the spatial and temporal characteristics of eye movements during scene perception. In previous work, when strong control was exerted over the viewing task via specification of a target object (as in visual search), task set biased spatial, rather than temporal, parameters of eye movements. Here, we find that more participant-directed tasks (in which the task establishes general goals of viewing rather than specific objects to fixate) affect not only spatial (e.g., saccade amplitude) but also temporal parameters (e.g., fixation duration). Further, task set influenced the rate of change in fixation duration over the course of viewing but not saccade amplitude, suggesting independent mechanisms for control of these parameters.


Attention, Perception, & Psychophysics | 2010

Shift and deviate: Saccades reveal that shifts of covert attention evoked by trained spatial stimuli are obligatory

Stefan Van der Stigchel; Mark Mills; Michael D. Dodd

The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems.


Attention, Perception, & Psychophysics | 2017

Human classifier: Observers can deduce task solely from eye movements

Brett Bahle; Mark Mills; Michael D. Dodd

Computer classifiers have been successful at classifying various tasks using eye movement statistics. However, the question of whether humans can classify task from eye movements has rarely been studied. Across two experiments, we examined whether humans could classify task based solely on the eye movements of other individuals. In Experiment 1, human classifiers were shown one of three sets of eye movements: Fixations, which were displayed as blue circles, with larger circles indicating longer fixation durations; Scanpaths, which were displayed as yellow arrows; and Videos, in which a neon green dot moved around the screen. There was an additional Scene manipulation in which eye movement properties were displayed either on the original scene where the task (Search, Memory, or Rating) was performed or on a black background in which no scene information was available. Experiment 2 used similar methods but only displayed Fixations and Videos with the same Scene manipulation. The results of both experiments showed successful classification of Search. Interestingly, Search was best classified in the absence of the original scene, particularly in the Fixation condition. Memory was also classified above chance, with the strongest classification occurring with Videos in the presence of the scene. Additional analyses of the pattern of correct responses in these two conditions demonstrated which eye movement properties successful classifiers were using. These findings demonstrate conditions under which humans can extract information from eye movement characteristics, in addition to providing insight into the relative success/failure of previous computer classifiers.


Behavioural Brain Research | 2016

Political conservatism predicts asymmetries in emotional scene memory

Mark Mills; Frank J. Gonzalez; Karl Giuseffi; Benjamin Sievert; Kevin B. Smith; John R. Hibbing; Michael D. Dodd

Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should likewise be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes, and participants responded whether each scene was old or new. Results showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives; that is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the between-subjects variance in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that examining how well conservatism predicts individual variation in the interplay of emotion, attention, and memory may provide new insights into theories of political ideology.


Journal of Experimental Psychology: General | 2014

The politics of the face-in-the-crowd.

Mark Mills; Kevin B. Smith; John R. Hibbing; Michael D. Dodd

Recent work indicates that the more conservative one is, the faster one is to fixate on negative stimuli, whereas the less conservative one is, the faster one is to fixate on positive stimuli. The present series of experiments used the face-in-the-crowd paradigm to examine whether variability in the efficiency with which positive and negative stimuli are detected underlies such speed differences. Participants searched for a discrepant facial expression (happy or angry) amid a varying number of neutral distractors (Experiments 1 and 4). A combination of response time and eye movement analyses indicated that variability in search efficiency explained speed differences for happy expressions, whereas variability in post-selectional processes explained speed differences for angry expressions. These results appear to be emotionally mediated, as search performance did not vary with political temperament when displays were inverted (Experiment 2) or when controlled processing was required for successful task performance (Experiment 3). Taken together, the present results suggest that political temperament is at least partially instantiated by attentional biases for emotional material.


Journal of Experimental Psychology: General | 2014

Which Way Is Which? Examining Global/Local Processing With Symbolic Cues

Mark Mills; Michael D. Dodd

A new method combining spatial-cueing and compound-stimulus paradigms draws on involuntary attentional orienting elicited by a spatially uninformative central arrow cue to investigate global/local processing under incidental processing conditions, wherein global/local levels were uninformative (they did not aid performance) and task-irrelevant (they need not be processed to perform the task). The task was peripheral target detection. Cues were compound arrows, which were either consistent (global/local arrows oriented in the same direction) or inconsistent (global/local arrows oriented in opposite directions). Global/local processing was measured by spatial-cueing effects (the response time [RT] difference between targets at locations validly cued by an arrow and targets at other locations), with the test of global/local advantage represented by the effect of cue level for inconsistent cues (the RT difference between global-valid and local-valid cues). The cue-target interval (stimulus onset asynchrony [SOA]) was manipulated to test whether global/local advantage varied with relative stimulus availability. Experiment 1 revealed a Cue-Level × SOA interaction such that an early, large global cueing effect was followed by a later, smaller local cueing effect, indicative of a global-to-local shift in advantage. This occurred despite participants' knowledge that the global/local arrows were uninformative and task-irrelevant and could therefore be ignored, thus displaying key properties of an involuntary process. Experiment 2 added neutral cues (an arrow at one level, a rectangle at the other) and determined that the reversal was not due to inhibition of the globally cued location or to attenuation of global information but rather to the presence of conflicting spatial information. Experiments 3 and 4 ruled out alternative accounts for these results. These data indicate global precedence in attended but incidentally processed objects.


Basic and Applied Social Psychology | 2014

Perceptual Processes in the Cross-Race Effect: Evidence From Eyetracking

Gerald P. McDonnell; Brian H. Bornstein; Cindy Laub; Mark Mills; Michael D. Dodd

The cross-race effect (CRE) is the tendency to have better recognition accuracy for same-race than for other-race faces, attributed to differential encoding strategies. Research exploring the nature of these encoding differences has yielded few definitive conclusions. The present experiments explored this issue using an eyetracker during a recognition task in which White participants viewed White and African American faces. Participants fixated faster and longer on the upper features of White faces and the lower features of African American faces. When participants were instructed to attend to certain features of African American faces, this pattern was exaggerated. Gaze patterns were related to improved recognition accuracy.


Journal of Experimental Psychology: General | 2018

Attention goes both ways: Shifting attention influences lexical decisions.

Mark Mills; Paul Boychuk; Alison L. Chasteen; Jay Pratt

Spatial components of concepts can influence the speed with which peripheral targets are responded to (e.g., the word God speeds responses to targets presented above fixation; devil speeds responses to targets presented below fixation). The basic premise underlying these conceptual cueing effects is that thinking of a spatial metaphor activates an internal spatial representation which in turn influences the allocation of attention in the visual field. An important step forward in understanding conceptual cues is determining whether the underlying process is bidirectional: Do shifts of attention facilitate activation of corresponding conceptual information? To test this, a peripheral cue was used to induce shifts of attention to a peripheral location, and the effect of this shift on concept processing was measured with a standard lexical-decision task in which participants made word/nonword responses to a letter string presented at fixation (Experiments 1 and 3), or with a modified lexical-decision task in which participants made English/Dutch judgments of a word presented auditorily (Experiment 2). If shifts of attention activate spatially compatible concepts, then shifting attention to a peripheral location should speed lexical decisions for spatially compatible concepts such that leftward shifts lead to faster lexical decisions of left relative to right concepts (and likewise for rightward, upward, and downward shifts). Our results support this prediction, suggesting that behaviors in the visual field can influence the activation of internal representations.


Journal of Experimental Psychology: Human Perception and Performance | 2017

Cerebral Hemodynamics During Scene Viewing: Hemispheric Lateralization Predicts Temporal Gaze Behavior Associated With Distinct Modes of Visual Processing

Mark Mills; Mohammed Alwatban; Benjamin Hage; Erin Barney; Edward Truemper; Gregory R. Bashford; Michael D. Dodd

Systematic patterns of eye movements during scene perception suggest a functional distinction between 2 viewing modes: an ambient mode (characterized by short fixations and large saccades) thought to reflect dorsal activity involved with spatial analysis, and a focal mode (characterized by long fixations and small saccades) thought to reflect ventral activity involved with object analysis. Little neuroscientific evidence exists to support this claim. Here, functional transcranial Doppler ultrasound (fTCD) was used to investigate whether these modes show hemispheric specialization. Participants viewed scenes for 20 s under instructions to search or memorize. Overall, early viewing was right lateralized, whereas later viewing was left lateralized. This right-to-left shift interacted with viewing task (more pronounced in the memory task). Importantly, changes in lateralization correlated with changes in eye movements. This is the first demonstration of a right hemisphere bias for eye movements serving spatial analysis and a left hemisphere bias for eye movements serving object analysis.


IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control | 2016

Functional Transcranial Doppler Ultrasound for Measurement of Hemispheric Lateralization During Visual Memory and Visual Search Cognitive Tasks

Benjamin Hage; Mohammed Alwatban; Erin Barney; Mark Mills; Michael D. Dodd; Edward Truemper; Gregory R. Bashford

Functional transcranial Doppler ultrasound (fTCD) is a noninvasive sensing modality that measures cerebral blood flow velocity (CBFV) with high temporal resolution. CBFV change is correlated with changes in cerebral oxygen uptake, enabling fTCD to measure brain activity and lateralization with high accuracy. However, few studies have compared CBFV changes during visual search and visual memory tasks. Here a protocol to compare lateralization between these two similar cognitive tasks using fTCD is demonstrated. Ten healthy volunteers (age 21±2 years) were shown visual scenes on a computer and performed visual search and visual memory tasks while CBFV in the bilateral middle cerebral arteries was monitored with fTCD. Each subject completed 40 trials, consisting of baseline (25 s), calibration (variable), instruction (2.5 s), and task (20 s) epochs. Lateralization was computed for each task by calculating the bilateral CBFV envelope percent change from baseline and subtracting the right side from the left side. The results showed significant lateralization (p < 0.001) of the visual memory and visual search tasks, with memory reaching lateralization of 1.6% and search reaching lateralization of 0.5%, suggesting that search is more right lateralized (and therefore may be related to "holistic" or global perception) and memory is more left lateralized (and therefore may be related to local perception). This method could be used to compare cerebral activity for any related cognitive tasks as long as the same stimulus is used in all tasks. The protocol is straightforward and the equipment is inexpensive, introducing a low-cost, high-temporal-resolution technique for further study of brain lateralization.

Collaboration


Dive into Mark Mills's collaborations.

Top Co-Authors

Michael D. Dodd, University of Nebraska–Lincoln
Brett Bahle, University of Nebraska–Lincoln
Gerald P. McDonnell, University of Nebraska–Lincoln
Benjamin Hage, University of Nebraska–Lincoln
Edward Truemper, Boston Children's Hospital
Gregory R. Bashford, University of Nebraska–Lincoln
Mohammed Alwatban, University of Nebraska–Lincoln