Publications


Featured research published by Laura E. Thomas.


Psychonomic Bulletin & Review | 2007

Moving eyes and moving thought: On the spatial compatibility between eye movements and cognition

Laura E. Thomas; Alejandro Lleras

Grant and Spivey (2003) proposed that eye movement trajectories can influence spatial reasoning by way of an implicit eye-movement-to-cognition link. We tested this proposal and investigated the nature of this link by continuously monitoring eye movements and asking participants to perform a problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task), either in a pattern related to the problem’s solution or in unrelated patterns. Although participants reported that they were not aware of any relationship between the tracking task and the problem, those who moved their eyes in a pattern related to the problem’s solution were the most successful problem solvers. Our results support the existence of an implicit compatibility between spatial cognition and the eye movement patterns that people use to examine a scene.


Psychonomic Bulletin & Review | 2006

Spatial updating relies on an egocentric representation of space: Effects of the number of objects

Ranxiao Frances Wang; James A. Crowell; Daniel J. Simons; David E. Irwin; Arthur F. Kramer; Michael S. Ambinder; Laura E. Thomas; Jessica L. Gosney; Brian R. Levinthal; Brendon Hsieh

Models of spatial updating attempt to explain how representations of spatial relationships between the actor and objects in the environment change as the actor moves. In allocentric models, object locations are encoded in an external reference frame, and only the actor’s position and orientation in that reference frame need to be updated. Thus, spatial updating should be independent of the number of objects in the environment (set size). In egocentric updating models, object locations are encoded relative to the actor, so the location of each object relative to the actor must be updated as the actor moves. Thus, spatial updating efficiency should depend on set size. We examined which model better accounts for human spatial updating by having people reconstruct the locations of varying numbers of virtual objects either from the original study position or from a changed viewing position. Consistent with the egocentric updating model, object localization following a viewpoint change was affected by the number of objects in the environment.


Psychonomic Bulletin & Review | 2009

Swinging into thought: Directed movement guides insight in problem solving

Laura E. Thomas; Alejandro Lleras

Can directed actions unconsciously influence higher order cognitive processing? We investigated how movement interventions affected participants’ ability to solve a classic insight problem. The participants attempted to solve Maier’s two-string problem while occasionally taking exercise breaks during which they moved their arms either in a manner related to the problem’s solution (swing group) or in a manner inconsistent with the solution (stretch group). Although most of the participants were unaware of the relationship between their arm movement exercises and the problem-solving task, the participants who moved their arms in a manner that suggested the problem’s solution were more likely to solve the problem than were those who moved their arms in other ways. Consistent with embodied theories of cognition, these findings show that actions influence thought and, furthermore, that we can implicitly guide people toward insight by directing their actions.


Psychonomic Bulletin & Review | 2006

Fruitful visual search: Inhibition of return in a virtual foraging task

Laura E. Thomas; Michael S. Ambinder; Brendon Hsieh; Brian R. Levinthal; James A. Crowell; David E. Irwin; Arthur F. Kramer; Alejandro Lleras; Daniel J. Simons; Ranxiao Frances Wang

Inhibition of return (IOR) has long been viewed as a foraging facilitator in visual search. We investigated the contribution of IOR in a task that approximates natural foraging more closely than typical visual search tasks. Participants in a fully immersive virtual reality environment manually searched an array of leaves for a hidden piece of fruit, using a wand to select and examine each leaf location. Search was slower than in typical IOR paradigms, taking seconds instead of a few hundred milliseconds. Participants also made a speeded response when they detected a flashing leaf that either was or was not in a previously searched location. Responses were slower when the flashing leaf was in a previously searched location than when it was in an unvisited location. These results generalize IOR to an approximation of a naturalistic visual search setting and support the hypothesis that IOR can facilitate foraging. The experiment also constitutes the first use of a fully immersive virtual reality display in the study of IOR.


Psychological Science | 2015

Grasp Posture Alters Visual Processing Biases Near the Hands

Laura E. Thomas

Observers experience biases in visual processing for objects within easy reach of their hands; these biases may assist them in evaluating items that are candidates for action. I investigated the hypothesis that hand postures that afford different types of actions differentially bias vision. Across three experiments, participants performed global-motion-detection and global-form-perception tasks while their hands were positioned (a) near the display in a posture affording a power grasp, (b) near the display in a posture affording a precision grasp, or (c) in their laps. Although the power-grasp posture facilitated performance on the motion-detection task, the precision-grasp posture instead facilitated performance on the form-perception task. These results suggest that the visual system weights processing on the basis of an observer’s current affordances for specific actions: Fast and forceful power grasps enhance temporal sensitivity, whereas detail-oriented precision grasps enhance spatial sensitivity.


Attention, Perception, & Psychophysics | 2006

Voluntary eyeblinks disrupt iconic memory

Laura E. Thomas; David E. Irwin

In the present research, we investigated whether eyeblinks interfere with cognitive processing. In Experiment 1, the participants performed a partial-report iconic memory task in which a letter array was presented for 106 msec, followed 50, 150, or 750 msec later by a tone that cued recall of one row of the array. At a cue delay of 50 msec between array offset and cue onset, letter report accuracy was lower when the participants blinked following array presentation than under no-blink conditions; the participants made more mislocation errors under blink conditions. This result suggests that blinking interferes with the binding of object identity and object position in iconic memory. Experiment 2 demonstrated that interference due to blinks was not due merely to changes in light intensity. Experiments 3 and 4 demonstrated that other motor responses did not interfere with iconic memory. We propose a new phenomenon, cognitive blink suppression, in which blinking inhibits cognitive processing. This phenomenon may be due to neural interference: blinks reduce activation in area V1, which may interfere with the representation of information in iconic memory. This research was supported by NSF Grant BCS 01-32292.


Attention, Perception, & Psychophysics | 2009

Inhibitory tagging in an interrupted visual search

Laura E. Thomas; Alejandro Lleras

Inhibition of return facilitates visual search, biasing attention away from previously examined locations. Prior research has shown that, as a result of inhibitory tags associated with rejected distractor items, observers are slower to detect small probes presented at these tagged locations than they are to detect probes presented at locations that were unoccupied during visual search, but only when the search stimuli remain visible during the probe-detection task. Using an interrupted visual search task, in which search displays alternated with blank displays, we found that inhibitory tagging occurred in the absence of the search array when probes were presented during these blank displays. Furthermore, by manipulating participants’ attentional set, we showed that these inhibitory tags were associated only with items that the participants actively searched. Finally, by probing before the search was completed, we also showed that, early in search, processing at distractor locations was actually facilitated, and only as the search progressed did evidence for inhibitory tagging arise at those locations. These results suggest that the context of a visual search determines the presence or absence of inhibitory tagging, as well as demonstrating for the first time the temporal dynamics of location prioritization while search is ongoing.


Attention, Perception, & Psychophysics | 2009

Visual direction constancy across eyeblinks

J. Stephen Higgins; David E. Irwin; Ranxiao Frances Wang; Laura E. Thomas

When a visual target is displaced during a saccade, the perception of its displacement is suppressed. Its movement can usually only be detected if the displacement is quite large. This suppression can be eliminated by introducing a short blank period after the saccade and before the target reappears in a new location. This has been termed the blanking effect and has been attributed to the use of otherwise ignored extraretinal information. We examined whether similar effects occur with eyeblinks and other visual distractions. We found that suppression of displacement perception can also occur due to a blink (both immediately prior to the blink and during the blink), and that introducing a blank period after a blink reduces the displacement suppression in much the same way as after a saccade. The blanking effect does not occur when other visual distractions are used. This provides further support for the conclusion that the blanking effect arises from extraretinal signals about eye position.


Attention, Perception, & Psychophysics | 2007

The effect of saccades on number processing

David E. Irwin; Laura E. Thomas

University of Illinois at Urbana-Champaign, Urbana, Illinois

Recent research has shown that saccadic eye movements interfere with dorsal-stream tasks such as judgments of object orientation, but not with ventral-stream tasks such as object recognition. Because saccade programming and execution also rely on the dorsal stream, it has been hypothesized that cognitive saccadic suppression occurs as a result of dual-task interference within the dorsal stream. Judging whether one number is larger or smaller than another (magnitude comparison) is a dorsal-stream task that relies especially on the right parietal cortex. In contrast, judging whether a number is odd or even (parity judgment) does not involve the dorsal stream. In the present study, one group of subjects judged whether two-digit numbers were greater than or less than 65, whereas another group judged whether two-digit numbers were odd or even. Subjects in both groups made these judgments while making no, short, or long saccades. Saccade distance had no effect on parity judgments, but reaction times to make magnitude comparison judgments increased with saccade distance when the eyes moved from right to left. Because the right parietal cortex is instrumental in generating leftward saccades, these results provide further evidence for the hypothesis that cognitive suppression during saccades occurs as a result of dual-task interference within the dorsal stream.


Psychonomic Bulletin & Review | 2013

Interacting with Objects Compresses Environmental Representations in Spatial Memory

Laura E. Thomas; Christopher C. Davoli; James R. Brockmole

People perceive individual objects as being closer when they have the ability to interact with the objects than when they do not. We asked how interaction with multiple objects impacts representations of the environment. Participants studied multiple-object layouts, by manually exploring or simply observing each object, and then drew a scaled version of the environment (Exp. 1) or reconstructed a copy of the environment and its boundaries (Exp. 2) from memory. The participants who interacted with multiple objects remembered these objects as being closer together and reconstructed smaller environment boundaries than did the participants who looked without touching. These findings provide evidence that action-based perceptual distortions endure in memory over a moving observer’s multiple interactions, compressing not only representations between touched objects, but also untouched environmental boundaries.

Collaboration


Dive into Laura E. Thomas's collaborations.

Top Co-Authors

Benjamin Balas
North Dakota State University

Christopher Kuylen
North Dakota State University

Robert McManus
North Dakota State University

Daniel Pemstein
North Dakota State University

Hsin-Mei Sun
North Dakota State University

Stephen J. Agauas
North Dakota State University