
Publication


Featured research published by Rebecca M. Foerster.


Journal of Vision | 2011

Saccadic eye movements in a high-speed bimanual stacking task: changes of attentional control during learning and automatization.

Rebecca M. Foerster; Elena Carbone; Hendrik Koesling; Werner X. Schneider

Principles of saccadic eye movement control in the real world have been derived from the study of self-paced, well-known tasks such as sandwich or tea making. Little is known about whether these principles generalize to high-speed sensorimotor tasks and how they are affected by learning and automatization. In the present study, right-handers practiced the speed-stacking task in 14 consecutive daily training sessions while their eye movements were recorded. Speed stacking is a high-speed sensorimotor task that requires grasping, moving, rotating, and placing objects. The following main results emerged. Throughout practice, the eyes led the hands, reflected in a positive eye-hand time span. Moreover, visual information was gathered for the subsequent manual sub-action, reflected in a positive eye-hand unit span. With automatization, the eye-hand time span became shorter, yet it increased when corrected for the decreasing trial duration. In addition, fixations were mainly allocated to the goal positions of the right hand or of objects in the right hand. The number of fixations decreased while the fixation rate remained constant. Importantly, all participants fixated on the same task-relevant locations in a similar scan path across training days, revealing a long-term memory-based mode of attention control after automatization of a high-speed sensorimotor task.
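The eye-hand time span is a simple latency measure: for each sub-action, the hand's arrival time at a goal location minus the gaze's arrival time there. As a minimal sketch in Python (the event times and variable names are hypothetical, not the authors' data or analysis code):

    import numpy as np

    # Hypothetical arrival times (s) for four stacking sub-actions: when gaze
    # first landed on the sub-action's goal location, and when the hand got there.
    gaze_arrival = np.array([0.42, 1.10, 1.85, 2.60])
    hand_arrival = np.array([0.95, 1.62, 2.40, 3.05])

    # Positive values mean the eyes led the hands.
    eye_hand_time_span = hand_arrival - gaze_arrival
    print(f"mean eye-hand time span: {eye_hand_time_span.mean():.3f} s")

    # Correcting for trial duration shows how the span can shrink in absolute
    # terms yet grow relative to the much shorter automatized trials.
    trial_duration = 3.2  # hypothetical total trial time (s)
    print(f"relative span: {eye_hand_time_span.mean() / trial_duration:.3f}")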


Journal of Vision | 2012

Saccadic eye movements in the dark while performing an automatized sequential high-speed sensorimotor task

Rebecca M. Foerster; Elena Carbone; Hendrik Koesling; Werner X. Schneider

Saccades during object-related everyday tasks select visual information to guide hand movements. Nevertheless, humans can perform such a task in the dark, provided it was automatized beforehand. It is largely unknown whether and how saccades are executed in this case. Recently, a long-term memory (LTM)-based direct control mode of attention for the execution of well-learned sensorimotor tasks was proposed, which predicts task-relevant saccades in the dark (R. M. Foerster, E. Carbone, H. Koesling, & W. X. Schneider, 2011). In the present study, participants performed an automatized speed-stacking task in the dark and in the light while their eye movements were recorded. Speed stacking is a sequential high-speed sensorimotor object-manipulation task. Results demonstrated that participants indeed made systematic eye movements in the dark. Saccadic scan paths and the number of fixations were highly similar across illumination conditions, while fixation rates were lower and fixation durations were longer in the dark. Importantly, the eyes reached a location ahead of the hands even in the dark. Finally, neither eye-hand dynamics nor saccade accuracy correlated with hand movement durations in the dark. Results support the hypothesis of an LTM-based mode of attention selection during the execution of automatized sequential high-speed sensorimotor tasks.
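Scanpath similarity across conditions can be quantified in several ways; one common approach, shown here purely as an illustration (not necessarily the paper's exact measure), maps each fixation to a labeled region and computes a normalized string-edit distance between the resulting sequences:

    # Minimal Levenshtein-based scanpath comparison, assuming fixations have
    # already been mapped to region labels ('A', 'B', ...). Illustrative only.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def scanpath_similarity(a: str, b: str) -> float:
        # 1.0 = identical region sequences, 0.0 = maximally different.
        return 1.0 - edit_distance(a, b) / max(len(a), len(b))

    # Hypothetical region sequences from a light trial and a dark trial.
    print(scanpath_similarity("ABCDEFG", "ABCDEFG"))  # identical: 1.0
    print(scanpath_similarity("ABCDEFG", "ABCEDFG"))  # similar: ~0.71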


Scientific Reports | 2016

Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

Rebecca M. Foerster; Christian H. Poth; Christian Behler; Mario Botsch; Werner X. Schneider

Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, lightweight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and with a standard CRT computer screen. Our results show that Oculus Rift measures the processing components as reliably as the standard CRT. This means that Oculus Rift is suitable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.
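Such TVA-based components are typically estimated by fitting a model of report accuracy against exposure duration. The sketch below illustrates the general idea with a deliberately simplified exponential-approach model and hypothetical data; it is not the paper's actual fitting procedure:

    import numpy as np
    from scipy.optimize import curve_fit

    def mean_score(t, C, t0, K):
        # Simplified whole-report model: expected number of reported items at
        # exposure duration t (s). C = processing speed (items/s), t0 = threshold
        # of conscious perception (s), K = VWM capacity (items). This exponential
        # approximation is illustrative, not the exact TVA likelihood fit.
        t_eff = np.clip(t - t0, 0.0, None)
        return K * (1.0 - np.exp(-(C / K) * t_eff))

    # Hypothetical exposure durations (s) and mean report scores.
    durations = np.array([0.02, 0.05, 0.08, 0.15, 0.30])
    scores = np.array([0.3, 1.2, 1.9, 2.7, 3.1])

    (C, t0, K), _ = curve_fit(mean_score, durations, scores,
                              p0=[30.0, 0.01, 3.5],
                              bounds=([1, 0, 1], [200, 0.1, 6]))
    print(f"C = {C:.1f} items/s, t0 = {t0 * 1000:.0f} ms, K = {K:.2f} items")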


Annals of the New York Academy of Sciences | 2015

Expectation violations in sensorimotor sequences: shifting from LTM-based attentional selection to visual search

Rebecca M. Foerster; Werner X. Schneider

Long-term memory (LTM) delivers important control signals for attentional selection. LTM expectations play an important role in guiding the task-driven sequence of covert attention and gaze shifts, especially in well-practiced multi-step sensorimotor actions. What happens when LTM expectations are disconfirmed? Does a sensory-based visual-search mode of attentional selection replace the LTM-based mode? What happens when prior LTM expectations become valid again? We investigated these questions in a computerized version of the number-connection test. Participants clicked on spatially distributed numbered shapes in ascending order while gaze was recorded. Sixty trials were performed with a constant spatial arrangement. In 20 consecutive trials, either the numbers, the shapes, both, or neither switched position. In 20 reversion trials, participants worked on the original arrangement. Only the sequence-affecting number switches elicited slower clicking, visual search-like scanning, and lower eye-hand synchrony. The effects were limited neither to the exchanged numbers nor to the corresponding actions. Thus, expectation violations in a well-learned sensorimotor sequence cause a regression from LTM-based attentional selection to visual search beyond deviant-related actions and locations. Effects lasted for several trials and reappeared during reversion.


Frontiers in Psychology | 2016

Task-Irrelevant Expectation Violations in Sequential Manual Actions: Evidence for a “Check-after-Surprise” Mode of Visual Attention and Eye-Hand Decoupling

Rebecca M. Foerster

When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task, determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., a target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated with a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. The targets' visual features and locations stayed constant for 65 pre-change trials, allowing participants to practice the manual action sequence. Subsequently, a task-irrelevant expectation violation was introduced and persisted for 20 change trials: action target number 4 appeared in a different font. In 15 reversion trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower, and eye scan paths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor paths remained completely unaffected. Effects lasted for 2-3 change trials and did not reappear during reversion. In conclusion, an unexpected task-irrelevant change to a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a “check-after-surprise” mode of attentional selection.


Cognitive Processing | 2015

Anticipatory eye movements in sensorimotor actions: on the role of guiding fixations during learning

Rebecca M. Foerster; Werner X. Schneider

During object-based sensorimotor tasks, humans look at target locations for subsequent hand actions. These anticipatory eye movements, or guiding fixations, seem to be necessary for successful performance. By practicing such a sensorimotor task, humans become faster and perform fewer guiding fixations (Foerster and Schneider, In Prep; Foerster et al. in J Vis 11(7):9:1–16, 2011). We aimed to clarify whether this decrease in guiding fixations is the cause or the effect of faster task completion times. Participants may learn to use less visual input (fewer fixations), allowing shorter completion times. Alternatively, participants may speed up their hand movements (e.g., through more efficient motor control), leaving less time for visual intake. The latter would imply that the number of fixations is directly connected to task speed. We investigated the relationship between the number of fixations and task speed in a computerized version of the number connection task (Foerster and Schneider in Ann N Y Acad Sci 2015. doi:10.1111/nyas.12729). Eye movements were recorded while participants clicked in ascending order on nine numbered circles. In 90 learning trials, they clicked through the sequence with a constant spatial configuration as fast as possible. In the subsequent experimental phase, they performed 30 trials under high-speed instruction and 30 trials under slow-speed instruction. Under slow-speed instruction, fixation rates were lower, fixation durations were longer, and more fixations were performed than under high-speed instruction. The results suggest that the number of fixations depends on both the need for visual intake and task completion time. It seems that the decrease in anticipatory eye movements through sensorimotor learning is at once a result and a cause of faster task performance.


Frontiers in Psychology | 2014

Long-term memory-based control of attention in multi-step tasks requires working memory: evidence from domain-specific interference

Rebecca M. Foerster; Elena Carbone; Werner X. Schneider

Evidence for long-term memory (LTM)-based control of attention has been found during the execution of highly practiced multi-step tasks. However, does LTM control attention directly, or are working memory (WM) processes involved? In the present study, this question was investigated with a dual-task paradigm. Participants executed either a highly practiced visuospatial sensorimotor task (speed stacking) or a verbal task (high-speed poem reciting) while maintaining visuospatial or verbal information in WM. Results revealed unidirectional and domain-specific interference. Neither speed stacking nor high-speed poem reciting was influenced by WM retention. Stacking disrupted the retention of visuospatial locations but did not modify memory performance for verbal material (letters). Reciting reduced the retention of verbal material substantially, whereas it affected memory performance for visuospatial locations to a smaller degree. We suggest that the selection of task-relevant information from LTM for the execution of overlearned multi-step tasks recruits domain-specific WM.


Cognition | 2018

Involuntary top-down control by search-irrelevant features: Visual working memory biases attention in an object-based manner

Rebecca M. Foerster; Werner X. Schneider

Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all features of a VWM template bias attention during search or only the search-relevant ones. Bias signals might be configured exclusively to task-relevant features, so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features would bias attention in an object-based manner, with biasing effects ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went directly to the target more often and faster than without any color match (Experiment 1). When a cue-distractor color match was introduced (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in the case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors, despite color being search-irrelevant. As participants were informed about the misleading color, this result argues against a strategic and voluntary use of color. Instead, search-irrelevant features biased attention obligatorily, arguing for involuntary top-down control by object-based VWM templates.


Behavior Research Methods | 2018

Ultrahigh temporal resolution of visual presentation using gaming monitors and G-Sync

Christian H. Poth; Rebecca M. Foerster; Christian Behler; Ulrich Schwanecke; Werner X. Schneider; Mario Botsch

Vision unfolds as an intricate pattern of information processing over time. Studying vision and visual cognition therefore requires precise manipulations of the timing of visual stimulus presentation. Although standard computer display technologies offer great accuracy and precision of visual presentation, their temporal resolution is limited. This limitation stems from the fact that the presentation of rendered stimuli has to wait until the next refresh of the computer screen. We present a novel method for presenting visual stimuli with ultrahigh temporal resolution (<1 ms) on newly available gaming monitors. The method capitalizes on the G-Sync technology, which allows for presenting stimuli as soon as they have been rendered by the computer’s graphics card, without having to wait for the next screen refresh. We provide software implementations in the three programming languages C++, Python (using PsychoPy2), and Matlab (using Psychtoolbox3). For all implementations, we confirmed the ultrahigh temporal resolution of visual presentation with external measurements by using a photodiode. Moreover, a psychophysical experiment revealed that the ultrahigh temporal resolution impacts human visual performance. Specifically, observers’ object recognition performance improved over fine-grained increases of object presentation duration in a theoretically predicted way. Taken together, the present study shows that the G-Sync-based presentation method enables researchers to investigate visual processes whose data patterns were concealed by the low temporal resolution of previous technologies. Therefore, this new presentation method may be a valuable tool for experimental psychologists and neuroscientists studying vision and its temporal characteristics.
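The published implementations cover C++, Python (PsychoPy2), and Matlab (Psychtoolbox3). The snippet below is not that code but a minimal PsychoPy-style sketch of the core idea: on a G-Sync (variable-refresh) monitor, disabling the wait for the vertical blank lets each flip send a frame as soon as it is rendered, so exposure durations are no longer quantized to the screen's refresh interval:

    from psychopy import core, visual

    # Assumes a G-Sync-capable monitor with variable refresh enabled in the
    # graphics driver. waitBlanking=False makes win.flip() return without
    # waiting for the next fixed refresh.
    win = visual.Window(size=(1920, 1080), fullscr=True, units="pix",
                        waitBlanking=False)
    stim = visual.TextStim(win, text="T", height=40)

    stim.draw()
    t_on = core.getTime()
    win.flip()        # stimulus frame is handed to the display immediately
    core.wait(0.004)  # hypothetical 4 ms target exposure
    win.flip()        # back buffer was cleared, so this blanks the stimulus
    t_off = core.getTime()
    print(f"approx. exposure: {(t_off - t_on) * 1000:.2f} ms")

    win.close()
    core.quit()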


Vision Research | 2018

“Looking-at-nothing” during sequential sensorimotor actions: Long-term memory-based eye scanning of remembered target locations

Rebecca M. Foerster

Before acting, humans saccade to a target object to extract relevant visual information. Even when acting on remembered objects, locations previously occupied by relevant objects are fixated during imagery and memory tasks, a phenomenon called “looking-at-nothing”. While looking-at-nothing has robustly been found in tasks encouraging declarative memory build-up, results are mixed for procedural sensorimotor tasks. Eye guidance to manual targets in complete darkness was observed in a task practiced for days beforehand, while investigations using only a single session did not find fixations on remembered action targets. Here, it is asked whether looking-at-nothing can be found in a single sensorimotor session, and thus independent of sleep consolidation, and how it progresses when visual information is repeatedly unavailable. Eye movements were investigated in a computerized version of the trail making test. Participants clicked on numbered circles in ascending sequence. Fifty trials were performed with the same spatial arrangement of nine visual targets to enable long-term memory consolidation. During 50 consecutive trials, participants had to click the remembered target sequence on an empty screen. Participants sequentially scanned the visual targets and also the empty target locations with their eyes, although the latter less precisely than the former. Over the course of the memory trials, manual and oculomotor sequential target scanning became more similar to the visual trials. The results argue for robust looking-at-nothing during procedural sensorimotor tasks, provided that long-term memory information is sufficient.

Collaboration


An overview of Rebecca M. Foerster's collaborations.

Top Co-Authors

Ulrich Schwanecke

RheinMain University of Applied Sciences
