Publication


Featured research published by Liam J. Norman.


Psychological Science | 2013

Object-Based Attention Without Awareness

Liam J. Norman; Charles A. Heywood; Robert W. Kentridge

Attention and awareness are often considered to be related. Some forms of attention can, however, facilitate the processing of stimuli that remain unseen. It is unclear whether this dissociation extends beyond selection on the basis of primitive properties, such as spatial location, to situations in which there are more complex bases for attentional selection. The experiment described here shows that attentional selection at the level of objects can take place without giving rise to awareness of those objects. Pairs of objects were continually masked, which rendered them invisible to participants performing a cued-target-discrimination task. When the cue and target appeared within the same object, discrimination was faster than when they appeared in different objects at the same spatial separation. Participants reported no awareness of the objects and were unable to detect them in a signal-detection task. Object-based attention, therefore, is not sufficient for object awareness.
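The awareness check in this study hinges on signal detection: if participants cannot detect the masked objects, their sensitivity index d' should be near zero. As an illustrative sketch only (not the authors' analysis code), d' can be computed from raw response counts with a standard log-linear correction:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from raw counts, with a log-linear
    correction so perfect rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A d' near zero indicates chance-level detection, i.e. no
# awareness of the masked objects; the counts here are made up.
print(d_prime(52, 48, 50, 50))
```

A hit rate barely above the false-alarm rate, as in this hypothetical example, yields a d' close to zero, which is the pattern the abstract reports for object detection.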


Journal of Vision | 2015

Direct encoding of orientation variance in the visual system

Liam J. Norman; Charles A. Heywood; Robert W. Kentridge

Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first-moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first-moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but intact early visual areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system, possibly at an early cortical stage.
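The two texture statistics contrasted here, mean orientation (first moment) and orientation variance, are circular quantities: orientations repeat every 180°, so angles must be doubled before averaging. A minimal sketch of how these statistics can be computed for a texture patch (an assumption-laden illustration, not the study's stimulus code):

```python
import numpy as np

def orientation_stats(orientations_deg):
    """Mean orientation and circular variance for axial data.
    Orientations repeat every 180 degrees, so angles are doubled
    before averaging and the mean is halved afterwards."""
    doubled = np.deg2rad(np.asarray(orientations_deg, dtype=float) * 2.0)
    resultant = np.mean(np.exp(1j * doubled))
    mean_orientation = (np.rad2deg(np.angle(resultant)) / 2.0) % 180.0
    # Circular variance: 0 for identical orientations, 1 for maximal spread.
    variance = 1.0 - np.abs(resultant)
    return mean_orientation, variance

# A low-variance patch around 45 degrees vs. a maximally varied patch.
print(orientation_stats([44, 45, 46]))
print(orientation_stats([0, 45, 90, 135]))
```

Changing the mean while holding the circular variance fixed, as in the adaptation experiments, amounts to rotating every element by a constant, which leaves the variance term untouched.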


Consciousness and Cognition | 2015

Exogenous attention to unseen objects

Liam J. Norman; Charles A. Heywood; Robert W. Kentridge

Attention and awareness are closely related phenomena, but recent evidence has shown that not all attended stimuli give rise to awareness. Controversy still remains over whether, and the extent to which, a dissociation between attention and awareness encompasses all forms of attention. For example, it has been suggested that attention without awareness is more readily demonstrated for voluntary, endogenous attention than for its reflexive, exogenous counterpart. Here we examine whether exogenous attentional cueing can have selective behavioural effects on stimuli that nevertheless remain unseen. Using a task in which object-based attention has been shown in the absence of awareness, we remove all possible contingencies between cues and target stimuli to ensure that any cueing effects must be under purely exogenous control, and find evidence of exogenous object-based attention without awareness. In a second experiment we address whether this dissociation crucially depends on the method used to establish that the objects indeed remain unseen. Specifically, to confirm that objects are unseen we adopt appropriate signal detection task procedures, including those that retain parity with the primary attentional task (by requiring participants to discriminate the two types of trial that are used to measure an effect of attention). We show that a significant object-based attention effect is apparent under conditions where the selected object indeed remains undetectable.


Vision Research | 2011

Contrasting the processes of texture segmentation and discrimination with static and phase-reversing stimuli

Liam J. Norman; Charles A. Heywood; Robert W. Kentridge

Regions of visual texture can be automatically segregated from one another when they abut but also discriminated from one another if they are separated in space or time. A difference in mean orientation between two textures serves to facilitate their segmentation, whereas a difference in orientation variance does not. The present study further supports this notion, by replicating the findings of Wolfson and Landy (1998) in showing that judgments (odd-one-out) made for textures that differ in mean orientation were more accurate (and more rapid) when the textures were abutting than when separated, whereas judgments of variance were made no more accurately for abutting relative to separated textures. Interestingly, however, responses were overall faster for textures differing in variance when they were separated compared to when they were abutting. This is perhaps due to the clear separation boundary, which serves to delineate the regions on which to perform some regional estimation of orientation variance. A second experiment highlights the phase-insensitivity of texture segmentation, in that locating a texture edge (defined by a difference in mean orientation) in high frequency orientation-reversing stimuli can be performed at much higher frequencies than the discrimination of the same regions but with the texture contour masked. Textures that differed in variance did not exhibit this effect. A final experiment demonstrates that the phase-insensitive perception of texture borders improves with eccentric viewing relative to the fovea, whereas perception of the texture regions does not. Together, these experiments show dissociations between edge- and region-based texture analysis mechanisms and suggest a fast, sign-invariant contour extraction system mediating texture segmentation, which may be closely linked to the magnocellular subdivision of visual processing.


Learning & Memory | 2017

Incidental context information increases recollection

Kamar E. Ameen-Ali; Liam J. Norman; Madeline J. Eacott; Alexander Easton

The current study describes a receiver-operating characteristic (ROC) task for human participants based on the spontaneous recognition memory paradigms typically used with rodents. Recollection was significantly higher when an object was in the same location and background as at encoding, a combination used to assess episodic-like memory in animals, but not when only one of these task-irrelevant cues was present. The results show that incidentally encoded cue information can determine the degree of recollection, and open up the possibility of assessing recollection across species in a single experimental paradigm, allowing better understanding of the cognitive and biological mechanisms at play.
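An ROC in recognition memory is traced by sweeping a confidence criterion from strict to lax and plotting cumulative hit rates against false-alarm rates; in dual-process models, recollection appears as an above-zero y-intercept of the fitted curve. A hedged sketch of how the ROC points can be derived from confidence ratings (illustrative only, with an assumed 6-point rating scale, not the study's actual analysis):

```python
import numpy as np

def roc_points(old_conf, new_conf, n_levels=6):
    """Cumulative hit and false-alarm rates from confidence ratings
    (1 = sure new ... n_levels = sure old), strictest criterion first."""
    old_conf = np.asarray(old_conf)
    new_conf = np.asarray(new_conf)
    hits, fas = [], []
    for c in range(n_levels, 1, -1):  # sweep criterion from strict to lax
        hits.append(np.mean(old_conf >= c))
        fas.append(np.mean(new_conf >= c))
    return np.array(fas), np.array(hits)

# Hypothetical ratings for studied ("old") and unstudied ("new") items.
fas, hits = roc_points([6, 6, 5, 4, 3, 2], [1, 2, 2, 3, 1, 4])
print(list(zip(fas, hits)))
```

Both rate vectors are non-decreasing by construction, which is what makes the resulting points a valid ROC curve.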


Perception | 2014

Spatial attention does not modulate holistic face processing, even when multiple faces are present.

Liam J. Norman; Alexander Tokarev

The perception of faces is often considered to be unique in comparison with that of other objects in the world. The fact that faces are processed not by their constituent components but by the spatial configuration between those components (holistic face processing—HFP) is often used to support this. Despite two decades of research, however, there is no consensus as to whether or not HFP is a process that is subject to attentional modulation. Here, in two experiments, we used a method to direct spatial attention not previously used in studies of HFP—an exogenous spatial cue—as it offers a robust, rapid, and involuntary method of directing attention. In one experiment we demonstrate that the degree of HFP afforded to a face is not reduced when attention is directed away from that face. In a second experiment we replicate this finding even when the face is simultaneously flanked by other faces—a condition under which a face-specific processing module would, hypothetically, be more sensitive to attentional guidance. These results add to the argument that HFP is carried out independently of attention.


I-perception | 2018

Human Echolocation for Target Detection Is More Accurate With Emissions Containing Higher Spectral Frequencies, and This Is Explained by Echo Intensity

Liam J. Norman; Lore Thaler

Humans can learn to use acoustic echoes to detect and classify objects. Echolocators typically use tongue clicks to induce these echoes, and there is some evidence that higher spectral frequency content of an echolocator’s tongue click is associated with better echolocation performance. This may be explained by the intensity of the echoes. The current study tested experimentally (a) if emissions with higher spectral frequencies lead to better performance for target detection, and (b) if this is mediated by echo intensity. Participants listened to sound recordings that contained an emission and sometimes an echo from an object. The peak spectral frequency of the emission was varied between 3.5 and 4.5 kHz. Participants judged whether they heard the object in these recordings and did the same under conditions in which the intensity of the echoes had been digitally equated. Participants performed better using emissions with higher spectral frequencies, but this advantage was eliminated when the intensity of the echoes was equated. These results demonstrate that emissions with higher spectral frequencies can benefit echolocation performance in conditions where they lead to an increase in echo intensity. The findings suggest that people who train to echolocate should be instructed to make emissions (e.g. mouth clicks) with higher spectral frequency content.
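The key manipulated variable here, the peak spectral frequency of an emission, can be read off a recording's magnitude spectrum. A minimal sketch under assumed parameters (a synthetic 4 kHz damped tone burst standing in for a mouth click; not the study's stimuli or analysis):

```python
import numpy as np

def peak_spectral_frequency(signal, sample_rate):
    """Frequency (Hz) of the largest magnitude in the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic click: a 4 kHz tone burst with an exponential decay,
# loosely mimicking the emissions used in the study.
fs = 44100
t = np.arange(0, 0.01, 1 / fs)
click = np.sin(2 * np.pi * 4000 * t) * np.exp(-t / 0.002)
print(peak_spectral_frequency(click, fs))
```

Frequency resolution is the sample rate divided by the signal length, so the estimate lands on the spectral bin nearest the burst's oscillation frequency.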


Visual Cognition | 2017

Texture segmentation without human V4

Liam J. Norman; Charles A. Heywood; Robert W. Kentridge

Texture segmentation, or second-order segmentation, is a rapid perceptual process, allowing object and surface boundaries to be effortlessly detected. It is currently unclear whether this is achieved in early cortical areas or whether it necessitates the region referred to as human V4. The present report describes a single case study of patient MS, whose bilateral occipitotemporal damage includes the putative human V4 area, yet whose early visual cortex is spared. As shown in these experiments, MS can accurately locate a target defined by an orientation contrast to its background, even with considerable orientation noise. Importantly, his performance was significantly reduced when the texture edges were masked by black borders (thus preventing edge-based segmentation), indicating that he retains a functional edge-based texture segmentation process. Additionally, when the sign of the orientation contrast was reversed at a temporal frequency of 12.5 Hz, MS could nonetheless detect the contours defined by the orientation contrast despite being unable to judge whether the surfaces on either side of the contrast were the same or not. This reveals that MS's early visual cortex is sufficient for the intact phase-insensitive component of texture segmentation. Human area V4, therefore, is not necessary for texture segmentation.


Current Biology | 2014

Color Constancy for an Unseen Surface

Liam J. Norman; Kathleen Akins; Charles A. Heywood; Robert W. Kentridge


Sex Roles | 2012

Gender-Based Navigation Stereotype Improves Men’s Search for a Hidden Goal

Harriet E. S. Rosenthal; Liam J. Norman; Shamus P. Smith; Anthony McGregor
