
Publications


Featured research published by George L. Malcolm.


Psychonomic Bulletin & Review | 2009

Searching in the dark: Cognitive relevance drives attention in real-world scenes

John M. Henderson; George L. Malcolm; Charles Schandl

We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.
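
The region-of-interest comparison described above can be illustrated with a short sketch: the proportion of first fixations landing in the target region versus the most salient region. The fixation records, ROI rectangles, and function names below are hypothetical, for illustration only, and are not taken from the paper.

```python
# Hypothetical sketch: first-fixation hit rates for target vs. salient regions.
# Data structures are illustrative assumptions, not the authors' pipeline.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def first_fixation_rates(trials):
    """trials: iterable of ((x, y), target_roi, salient_roi) per trial."""
    target_hits = salient_hits = n = 0
    for (fx, fy), target_roi, salient_roi in trials:
        n += 1
        target_hits += target_roi.contains(fx, fy)
        salient_hits += salient_roi.contains(fx, fy)
    return target_hits / n, salient_hits / n

# One illustrative trial: the first fixation lands inside the target region.
trials = [((410.0, 260.0), Rect(380, 240, 80, 60), Rect(100, 90, 70, 70))]
print(first_fixation_rates(trials))  # -> (1.0, 0.0)
```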


Journal of Vision | 2009

The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements

George L. Malcolm; John M. Henderson

We can locate an object more quickly in a real-world scene when a specific target template is held in visual working memory, but it is not known exactly how a target template's specificity affects real-world search. In the present study, we compared word and picture cues in real-world scene search. Using an eye-tracker, we segmented search time into three behaviorally defined epochs: search initiation time, scanning time, and verification time. Results from three experiments indicated that target template specificity affects scanning and verification time. Within the scanning epoch, target template specificity affected the number of scene regions visited and the mean fixation duration. Changes to the stimulus onset asynchrony (SOA) did not affect this pattern of results. Similarly, the pattern of results did not change when participants were familiarized with target images prior to testing, suggesting that an immediately preceding picture provides a more useful search template than one stored in long-term memory. The results suggest that the specificity of the target cue affects both the activation map representing potential target locations and the process that matches a fixated object to an internal representation of the target.
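
The three-epoch decomposition can be illustrated with a minimal sketch, assuming a simplified fixation record of (onset, duration, on-target) tuples and a manual response time; this format and the function below are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch: split one search trial's timeline into the three
# epochs named above. Fixation records are illustrative assumptions.

def segment_epochs(fixations, response_ms):
    """Return (initiation, scanning, verification) durations in ms.
    fixations: list of (onset_ms, duration_ms, on_target) tuples."""
    # Search initiation time: trial onset until the eyes first leave the
    # initial fixation (i.e., the onset of the first saccade).
    first_onset, first_dur, _ = fixations[0]
    initiation = first_onset + first_dur

    # Scanning time: from the end of the first fixation until the target
    # object is first fixated.
    target_onset = next(on for on, _, hit in fixations if hit)
    scanning = target_onset - initiation

    # Verification time: from the first target fixation to the response.
    verification = response_ms - target_onset
    return initiation, scanning, verification

fixes = [(0, 250, False), (280, 200, False), (520, 220, True)]
print(segment_epochs(fixes, response_ms=900))  # -> (250, 270, 380)
```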


Journal of Vision | 2010

Combining top-down processes to guide eye movements during real-world scene search

George L. Malcolm; John M. Henderson

Eye movements can be guided by various types of information in real-world scenes. Here we investigated how the visual system combines multiple types of top-down information to facilitate search. We manipulated independently the specificity of the search target template and the usefulness of contextual constraint in an object search task. An eye tracker was used to segment search time into three behaviorally defined epochs so that influences on specific search processes could be identified. The results support previous studies indicating that the availability of either a specific target template or scene context facilitates search. The results also show that target template and contextual constraints combine additively in facilitating search. The results extend recent eye guidance models by suggesting the manner in which our visual system utilizes multiple types of top-down information.


Psychological Science | 2009

Eye Movements and Visual Encoding During Scene Perception

Keith Rayner; Tim J. Smith; George L. Malcolm; John M. Henderson

The amount of time viewers could process a scene during each eye fixation was varied by means of a mask that appeared at a set point in each fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that, to process a scene normally, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need to view the words in a text for only 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways.
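
The core of this gaze-contingent paradigm is that the scene is visible only for the first part of each fixation and masked thereafter. A minimal sketch of the effective-exposure arithmetic follows; the fixation durations are made-up example values, not data from the study.

```python
# Illustrative sketch of the gaze-contingent mask: within each fixation the
# scene is visible for a fixed window, then masked until the next saccade.
# Here we compute the effective scene-viewing time per fixation.

def scene_exposure(fix_durations_ms, window_ms=150):
    """Visible scene time per fixation under a gaze-contingent mask."""
    return [min(d, window_ms) for d in fix_durations_ms]

fixes = [120, 260, 310, 180]       # hypothetical fixation durations (ms)
print(scene_exposure(fixes))       # -> [120, 150, 150, 150]
print(sum(scene_exposure(fixes)))  # total effective viewing time: 570 ms
```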


Perception | 2004

Regional variation in the inversion effect for faces: differential effects for feature shape, feature configuration, and external contour

George L. Malcolm; Connie Leung; Jason J. S. Barton

Faces are perceived via an orientation-dependent expert mechanism. We previously showed that inversion impaired perception of the spatial relations of features more in the lower face than in the (more salient) upper face, suggesting a failure to rapidly process this type of structural data from the entire face. In this study we wished to determine whether this interaction between inversion and regional salience, which we consider a marker for efficient whole-face processing, was specific to second-order (coordinate) spatial relations or also affected other types of structural information in faces. We used an oddity paradigm to test the ability of seventeen subjects to discriminate changes in feature size, feature spatial relations, and external contour in both the upper and lower face. We also tested fourteen subjects on perception of two different types of spatial relations: second-order changes that create plausible alternative faces, and illegal spatial changes that transgress the normal rules of facial geometry. In both experiments we tested for asymmetries between upper-face and lower-face perceptual accuracy with brief stimulus presentations. While all structural changes were less easily discerned in inverted faces, only changes to spatial relations showed a marked asymmetry between the upper and lower face, with far worse performance in the mouth region. Furthermore, this asymmetry was found only for second-order spatial relations and not for illegal spatial changes. These results suggest that the orientation-dependent face mechanism has a rapid whole-face processing capacity specific to the internal second-order (coordinate) spatial relations of facial features.


Journal of Vision | 2008

Scan patterns during the processing of facial expression versus identity: an exploration of task-driven and stimulus-driven effects.

George L. Malcolm; Linda J. Lanyon; Andrew J. B. Fugard; Jason J. S. Barton

Perceptual studies suggest that processing facial identity emphasizes upper-face information, whereas processing expressions of anger or happiness emphasizes the lower face. The two goals of the present study were to determine (a) whether the distributions of eye fixations reflect these upper/lower-face biases, and (b) whether this bias is task- or stimulus-driven. We presented a target face followed by a probe pair of morphed faces, neither of which was identical to the target. Subjects judged which of the pair was more similar to the target face while eye movements were recorded. In Experiment 1 the probe pair always differed from each other in both identity and expression on each trial. In one block subjects judged which probe face was more similar to the target face in identity, and in a second block subjects judged which probe face was more similar to the target face in expression. In Experiment 2 the two probe faces differed in either expression or identity, but not both. Subjects were not informed which dimension differed, but simply asked to judge which probe face was more similar to the target face. In Experiment 1 (task-driven effects) we found that subjects scanned the upper face more than the lower face during the identity task, but the lower face more than the upper face during the expression task, with significantly less variation in bias in Experiment 2 (stimulus-driven effects). We conclude that fixations correlate with regional variations of diagnostic information in different processing tasks, but that these reflect top-down, task-driven guidance of information acquisition more than stimulus-driven effects.


Neuropsychologia | 2012

ERP correlates of spatially incongruent object identification during scene viewing: Contextual expectancy versus simultaneous processing

Şükrü Barış Demiral; George L. Malcolm; John M. Henderson

Object processing is affected by the gist of the scene within which it is embedded. Previous ERP research has suggested that manipulating the semantic congruency between an object and the surrounding scene affects the high-level (semantic) representation of that object emerging after the presentation of the scene (Ganis & Kutas, 2003). In two ERP experiments, we investigated whether there would be a similar electrophysiological response when the spatial congruency of an object in a scene was manipulated while the semantic congruency remained the same. Apart from the location of the object, all other object features were congruent with the scene (e.g., in a bedroom scene, either a painting or a cat appeared on the wall). In the first experiment, participants were shown a location cue and then a scene image for 300 ms, after which an object image appeared at the cued location for 300 ms. Spatially incongruent objects elicited a stronger centro-frontal N300-N400 effect in the 275-500 ms window relative to spatially congruent objects. We also found early ERP effects, dominant on the left-hemisphere electrodes. Strikingly, LORETA analysis revealed that these activations were mainly located in the superior and middle temporal gyrus of the right hemisphere. In the second experiment, we used a paradigm similar to that of Mudrik, Lamy, and Deouell (2010): the scene and the object were presented together for 300 ms after the location cue. This time, we observed neither the early effects nor the pronounced N300-N400 effect. In contrast to Experiment 1, LORETA analysis on the N400 time window revealed that the generators of these weak ERP effects were mainly located in the temporal lobe of the left hemisphere. Our results suggest that, when the scene is presented before the object, top-down spatial encoding processes are initiated and the right superior temporal gyrus is activated, as previously suggested (Ellison, Schindler, Pattison, & Milner, 2004). A mismatch between the actual object features and the spatially driven top-down structural and functional features could lead to the early effect, and then to the N300-N400 effect. In contrast, when the scene is not presented before the object, spatial encoding cannot occur early or strongly enough to initiate spatial object-integration effects. Our results indicate that spatial information is an early and essential part of scene-object integration, priming structural as well as semantic features of an object.
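
The kind of window-averaged congruency effect reported here can be illustrated with a minimal NumPy sketch; the array shapes, sampling rate, epoch timing, and electrode indices below are assumptions for illustration only, not the authors' recording parameters.

```python
# Hypothetical sketch: mean incongruent-minus-congruent ERP amplitude in the
# 275-500 ms window over a centro-frontal electrode set. Shapes, sampling
# rate, and channel indices are illustrative assumptions.
import numpy as np

FS = 500                     # sampling rate in Hz (assumed)
EPOCH_START_MS = -100        # epoch begins 100 ms before object onset (assumed)
CENTRO_FRONTAL = [3, 4, 5]   # placeholder channel indices

def n300_n400_effect(congruent, incongruent, lo_ms=275, hi_ms=500):
    """congruent/incongruent: arrays of shape (trials, channels, samples)."""
    lo = int((lo_ms - EPOCH_START_MS) * FS / 1000)
    hi = int((hi_ms - EPOCH_START_MS) * FS / 1000)
    # Average over trials, then over the selected channels and time window.
    cong = congruent.mean(axis=0)[CENTRO_FRONTAL, lo:hi].mean()
    incong = incongruent.mean(axis=0)[CENTRO_FRONTAL, lo:hi].mean()
    return incong - cong  # more negative => stronger N300-N400 effect

rng = np.random.default_rng(0)
cong = rng.normal(size=(40, 32, 400))          # 40 trials, 32 channels, 800 ms
incong = rng.normal(size=(40, 32, 400)) - 0.5  # simulated congruency shift
print(n300_n400_effect(cong, incong))
```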


Trends in Cognitive Sciences | 2016

Making Sense of Real-World Scenes

George L. Malcolm; Iris I. A. Groen; Chris I. Baker

To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.


Quarterly Journal of Experimental Psychology | 2014

The interplay of bottom-up and top-down mechanisms in visual guidance during object naming

Moreno I. Coco; George L. Malcolm; Frank Keller

An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence to this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones, whereas saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence require longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and it raises new questions for existing theories of visual attention.


Journal of Vision | 2014

How context information and target information guide the eyes from the first epoch of search in real-world scenes

Sara Spotorno; George L. Malcolm; Benjamin W. Tatler

This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (a picture or the name of the target) and the plausibility of the target's position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading, and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of the target template and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

Collaboration


Dive into George L. Malcolm's collaborations.

Top Co-Authors

Sarah Shomstein (George Washington University)
Jason J. S. Barton (University of British Columbia)
Frank Keller (University of Edinburgh)
Chris I. Baker (National Institutes of Health)
Edward Silson (National Institutes of Health)
Jennifer Henry (National Institutes of Health)
Summer Sheremata (Florida Atlantic University)