

Publications

Featured research published by Daniel A. Gajewski.


Psychonomic Bulletin & Review | 2006

Feature bindings endure without attention: Evidence from an explicit recall task

Daniel A. Gajewski; James R. Brockmole

Are integrated objects the unit of capacity of visual working memory, or is continued attention needed to maintain bindings between independently stored features? In a delayed recall task, participants reported the color and shape of a probed item from a memory array. During the delay, attention was manipulated with an exogenous cue. Recall was elevated at validly cued positions, indicating that the cue affected item memory. On invalid trials, participants most frequently recalled either both features (perfect object memory) or neither of the two features (no object memory); the frequency with which only one feature was recalled was significantly lower than predicted by feature independence as determined in a single-feature recall task. These data do not support the view that features are remembered independently when attention is withdrawn. Instead, integrated objects are stored in visual working memory without need for continued attention.


Visual Cognition | 2005

Minimal use of working memory in a scene comparison task

Daniel A. Gajewski; John M. Henderson

Eye movement behaviour in hand-eye tasks suggests a preference for a “just in time” processing strategy that minimizes the use of working memory. In the present study, a scene comparison task was introduced to determine whether the preference holds when the task is primarily visual and when more complex naturalistic scenes are used as stimuli. In two experiments, participants made same or different judgements in response to simultaneously presented pairs of scenes that were identical or differed by one object. The number of fixations per scene glance and the number of fixations intervening between glances to corresponding objects suggest that frequently one object at a time is encoded and maintained in visual working memory. The same pattern of results was observed in a third experiment using word and object arrays. Overall, the results suggest a strong general bias toward minimal use of visual working memory in complex visual tasks.


PLOS ONE | 2014

Medial Temporal Lobe Roles in Human Path Integration

Naohide Yamamoto; John W. Philbeck; Adam J. Woods; Daniel A. Gajewski; Joeanna C. Arthur; Samuel J. Potolicchio; Lucien M. Levy; Anthony J. Caputy

Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures in human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.


Journal of Vision | 2014

Gaze behavior and the perception of egocentric distance

Daniel A. Gajewski; Courtney P. Wallin; John W. Philbeck

The ground plane is thought to be an important reference for localizing objects, particularly when angular declination is informative, as it is for objects seen resting at floor level. A potential role for eye movements has been implicated by the idea that information about the nearby ground is required to localize objects more distant, and by the fact that the time course for the extraction of distance extends beyond the duration of a typical eye fixation. To test this potential role, eye movements were monitored when participants previewed targets. Distance estimates were provided by walking without vision to the remembered target location (blind walking) or by verbal report. We found that a strategy of holding the gaze steady on the object was as frequent as one where the region between the observer and object was fixated. There was no performance advantage associated with making eye movements in an observational study (Experiment 1) or when an eye-movement strategy was manipulated experimentally (Experiment 2). Observers were extracting useful information covertly, however. In Experiments 3 through 5, obscuring the nearby ground plane had a modest impact on performance; obscuring the walls and ceiling was more detrimental. The results suggest that these alternate surfaces provide useful information when judging the distance to objects within indoor environments. Critically, they constrain the role for the nearby ground plane in theories of egocentric distance perception.


Psychological Science | 2010

From the Most Fleeting of Glimpses: On the Time Course for the Extraction of Distance Information

Daniel A. Gajewski; John W. Philbeck; Stephen G. Pothier; David Chichka

An observer’s visual perception of the absolute distance between his or her position and an object is based on multiple sources of information that must be extracted during scene viewing. Research has not yet discovered the viewing duration observers need to fully extract distance information, particularly in navigable real-world environments. In a visually directed walking task, participants showed a sensitive response to distance when they were given 9-ms glimpses of floor- and eye-level targets. However, sensitivity to distance decreased markedly when targets were presented at eye level and angular size was rendered uninformative. Performance after brief viewing durations was characterized by underestimation of distance, unless the brief-viewing trials were preceded by a block of extended-viewing trials. The results indicate that experience plays a role in the extraction of information during brief glimpses. Even without prior experience, the extraction of useful information is virtually immediate when the cues of angular size or angular declination are informative for the observer.


Vision Research | 2008

Differential detection of global luminance and contrast changes across saccades and flickers during active scene perception

John M. Henderson; James R. Brockmole; Daniel A. Gajewski

How sensitive are viewers to changes in global image properties across saccades during active real-world scene perception? This question was investigated by globally increasing and/or decreasing luminance or contrast in photographs of real-world scenes across saccadic eye movements or during matched brief interruptions in a flicker paradigm. The results from two experiments demonstrated very poor sensitivity to global image changes in both the saccade-contingent and flicker paradigms, suggesting that the specific values of basic sensory properties do not contribute to the perception of stability across saccades during complex scene perception. In addition, overall sensitivity was significantly worse in the saccade-contingent change paradigm than the flicker paradigm, suggesting that the flicker paradigm is an imperfect simulation of transsaccadic vision.


Journal of Experimental Psychology: Human Perception and Performance | 2014

Angular declination and the dynamic perception of egocentric distance.

Daniel A. Gajewski; John W. Philbeck; Philip W. Wirtz; David Chichka

The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited-viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.


Behavior Research Methods | 2009

Tachistoscopic exposure and masking of real three-dimensional scenes

Stephen G. Pothier; John W. Philbeck; David Chichka; Daniel A. Gajewski

Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking.


Journal of Experimental Psychology: Human Perception and Performance | 2005

The Role of Saccade Targeting in the Transsaccadic Integration of Object Types and Tokens.

Daniel A. Gajewski; John M. Henderson

The presence of location-dependent and location-independent benefits on object identification in an eye movement contingent preview paradigm has been taken as support for the transsaccadic integration of object types and object tokens (J. M. Henderson, 1994). A recent study, however, suggests a critical role for saccade targeting in the generation of the 2 preview effects (F. Germeys, De Graef, & Verfaillie, 2002). In the present study, eye movements were monitored in a preview paradigm, and both location-independent and location-dependent preview benefits were observed regardless of the saccade target status of the preview object. The findings support the view that type and token representational systems contribute independently to the integration of object information across eye movements.


Visual Cognition | 2015

The effects of age and set size on the fast extraction of egocentric distance

Daniel A. Gajewski; Courtney P. Wallin; John W. Philbeck

Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size, and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization, but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers.

Collaboration

An overview of Daniel A. Gajewski's collaborations.

Top Co-Authors

John W. Philbeck (George Washington University)
Courtney P. Wallin (George Washington University)
David Chichka (George Washington University)
Sandra Mihelič (George Washington University)
Stephen G. Pothier (George Washington University)
Anthony J. Caputy (George Washington University)
Joeanna C. Arthur (George Washington University)
Lucien M. Levy (Washington University in St. Louis)