
Publication


Featured research published by Ute Leonards.


Addiction | 2011

Plain packaging increases visual attention to health warnings on cigarette packs in non-smokers and weekly smokers but not daily smokers.

Marcus R. Munafò; Nicole Roberts; Linda Bauld; Ute Leonards

AIMS: To assess the impact of plain packaging on visual attention towards health warning information on cigarette packs.
DESIGN: Mixed-model experimental design, comprising smoking status as a between-subjects factor and package type (branded versus plain) as a within-subjects factor.
SETTING: University laboratory.
PARTICIPANTS: Convenience sample of young adults, comprising non-smokers (n = 15), weekly smokers (n = 14) and daily smokers (n = 14).
MEASUREMENTS: Number of saccades (eye movements) towards health warnings on cigarette packs, to directly index visual attention.
FINDINGS: Analysis of variance indicated more eye movements (i.e. greater visual attention) towards health warnings compared to brand information on plain packs versus branded packs. This effect was observed among non-smokers and weekly smokers, but not daily smokers.
CONCLUSION: Among non-smokers and non-daily cigarette smokers, plain packaging appears to increase visual attention towards health warning information and away from brand information.


Vision Research | 2005

Idiosyncratic initiation of saccadic face exploration in humans

Ute Leonards; Nicholas E. Scott-Samuel

Visual processing and subsequent action are limited by the effectiveness of eye movement control: where the eyes fixate determines what part of the visual environment is seen in detail. Visual exploration consists of stereotypical sequences of saccadic eye movements which are known to depend upon both external factors, such as visual stimulus features, and internal cognition-related factors, such as attention and memory. However, how these two factors are balanced is unknown. One determinant might be the familiarity or ecological importance of the visual stimulus being explored. Recordings of saccades for human face stimuli revealed that their exploration was subject to strong individual biases for the initial saccade direction: subjects tended to look first to one particular side. We attribute this to internal factors. In contrast, exploration of landscapes, fractals or inverted faces showed no significant direction bias for initial saccades, suggesting more externally driven exploration patterns. Thus the balance between external and internal factors in scene exploration depends on stimulus type. An analysis of saccade latencies suggested that this individual preference for first saccade direction during face exploration leads to higher effectiveness through automation. The findings have implications for the understanding of both normal and abnormal eye movements.


Visual Cognition | 2009

Do we look at lights? Using mixture modelling to distinguish between low- and high-level factors in natural image viewing

Benjamin T. Vincent; Roland Baddeley; Alessia Correani; Tom Troscianko; Ute Leonards

The allocation of overt visual attention while viewing photographs of natural scenes is commonly thought to involve both bottom-up feature cues, such as luminance contrast, and top-down factors such as behavioural relevance and scene understanding. Profiting from the fact that light sources are highly visible but uninformative in visual scenes, we develop a mixture model approach that estimates the relative contribution of various low- and high-level factors to patterns of eye movements whilst viewing natural scenes containing light sources. Low-level salience accounts predicted fixations at luminance contrast and at lights, whereas these factors played only a minor role in the observed human fixations. Conversely, human data were mostly explicable in terms of a central bias and a foreground preference. Moreover, observers were more likely to look near lights rather than directly at them, an effect that cannot be explained by low-level stimulus factors such as luminance or contrast. These and other results support the idea that the visual system neglects highly visible cues in favour of less visible object information. Mixture modelling might be a good way forward in understanding visual scene exploration, since it makes it possible to measure the extent to which low- or high-level cues act as drivers of eye movements.
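The mixture-modelling idea admits a compact sketch: given several candidate fixation "maps" (say, a salience map and a central-bias map, each a probability distribution over locations), expectation-maximisation can estimate the weight each map contributes to the observed fixations. The function, maps and locations below are illustrative assumptions, not the paper's actual model:

```python
def em_mixture_weights(maps, fixations, iters=100):
    """Estimate mixture weights for candidate fixation maps via EM.

    maps: list of dicts mapping location -> probability (each sums to 1)
    fixations: list of observed fixation locations
    Returns a list of weights summing to 1.
    """
    k = len(maps)
    w = [1.0 / k] * k
    for _ in range(iters):
        counts = [0.0] * k
        for loc in fixations:
            # E-step: responsibility of each map for this fixation
            p = [w[i] * maps[i].get(loc, 1e-12) for i in range(k)]
            s = sum(p)
            for i in range(k):
                counts[i] += p[i] / s
        # M-step: weights are the average responsibilities
        w = [c / len(fixations) for c in counts]
    return w
```

With toy data that matches one map far better than the other, the estimated weight for that map approaches 1, mirroring how the paper apportions fixations between competing explanatory factors.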


Addiction | 2013

Visual attention to health warnings on plain tobacco packaging in adolescent smokers and non-smokers.

Olivia M. Maynard; Marcus R. Munafò; Ute Leonards

AIMS: Previous research with adults indicates that plain packaging increases visual attention to health warnings in adult non-smokers and weekly smokers, but not daily smokers. The present research extends this study to adolescents aged 14-19 years.
DESIGN: Mixed-model experimental design, with smoking status as a between-subjects factor and pack type (branded or plain pack) and eye gaze location (health warning or branding) as within-subjects factors.
SETTING: Three secondary schools in Bristol, UK.
PARTICIPANTS: A convenience sample of adolescents comprising never-smokers (n = 26), experimenters (n = 34), weekly smokers (n = 13) and daily smokers (n = 14).
MEASUREMENTS: Number of eye movements to health warnings and branding on plain and branded packs.
FINDINGS: Irrespective of smoking status, analysis of variance revealed more eye movements to health warnings than branding on plain packs, but an equal number of eye movements to both regions on branded packs (P = 0.033). This was observed among experimenters (P < 0.001) and weekly smokers (P = 0.047), but not among never-smokers or daily smokers.
CONCLUSION: Among experimenters and weekly smokers, plain packaging increases visual attention to health warnings and away from branding. Daily smokers, even relatively early in their smoking careers, seem to avoid the health warnings on cigarette packs. Adolescent never-smokers preferentially attend to the health warnings on both types of packs, a finding which may reflect their decision not to smoke.


Journal of Vision | 2010

What makes cast shadows hard to see?

Gillian Porter; Andrea Tales; Ute Leonards

Visual search is slowed for cast shadows lit from above, as compared to the same search items inverted and so not interpreted as shadows (R. A. Rensink & P. Cavanagh, 2004). The underlying mechanisms for such impaired shadow processing are still not understood. Here we investigated the processing levels at which this shadow-related slowing might operate, by examining its interaction with a range of different phenomena including eye movements, perceptual learning, and stimulus presentation context. The data demonstrated that the shadow mechanism affects the number of saccades during the search rather than the duration until first saccade onset and can be overridden by prolonged training, which then transfers from one type of shadow stimulus to another. Shadow-related slowing did not differ for peripheral and central search items but was reduced when participants searched unilateral displays as compared to bilateral ones. Together our findings suggest that difficulties with perceiving shadows are due to visual processes linked to object recognition, rather than to shadow-specific identification and suppression mechanisms in low-level sensory visual areas. Findings are discussed in the context of the need for the visual system to distinguish between illumination and material.


Drug and Alcohol Dependence | 2014

Avoidance of cigarette pack health warnings among regular cigarette smokers

Olivia M. Maynard; Angela S. Attwood; Laura O’Brien; Sabrina Brooks; Craig Hedge; Ute Leonards; Marcus R. Munafò

BACKGROUND: Previous research with adults and adolescents indicates that plain cigarette packs increase visual attention to health warnings among non-smokers and non-regular smokers, but not among regular smokers. This may be because regular smokers: (1) are familiar with the health warnings, (2) preferentially attend to branding, or (3) actively avoid health warnings. We sought to distinguish between these explanations using eye-tracking technology.
METHOD: A convenience sample of 30 adult dependent smokers participated in an eye-tracking study. Participants viewed branded, plain and blank packs of cigarettes with familiar and unfamiliar health warnings. The number of fixations to health warnings and branding on the different pack types was recorded.
RESULTS: Analysis of variance indicated that regular smokers were biased towards fixating the branding rather than the health warning on all three pack types. This bias was smaller, but still evident, for blank packs, where smokers preferentially attended the blank region over the health warnings. Time-course analysis showed that for branded and plain packs, attention was preferentially directed to the branding location for the entire 10 s of the stimulus presentation, while for blank packs this occurred for the last 8 s of the stimulus presentation. Familiarity with health warnings had no effect on eye gaze location.
CONCLUSION: Smokers actively avoid cigarette pack health warnings, and this remains the case even in the absence of salient branding information. Smokers may have learned to divert their attention away from cigarette pack health warnings. These findings have implications for cigarette packaging and health warning policy.


Experimental Brain Research | 2002

The role of stimulus type in age-related changes of visual working memory.

Ute Leonards; V. Ibanez; P. Giannakopoulos

Aging is accompanied by increasing difficulty in working memory, which is associated with the temporary storage and processing of goal-relevant information. Face recognition plays a preponderant role in human behavior, and one might therefore suggest that working memory for faces is spared from age-related decline compared to socially less important visual stimulus material. To test this hypothesis, we performed working memory (n-back) tasks with two different visual stimulus types, namely faces and doors, and compared them to tasks with primarily verbal material, namely letters. Age-related reaction time slowing was comparable for all three stimulus types, supporting hypotheses on general cognitive and motor slowing. In contrast, performance substantially declined with age for faces and doors, but little for letters. Working memory for faces resulted in significantly better performance than that for doors and was more sensitive to errors of on-line manipulation, such as errors of temporal order. Altogether, our results show that even though face perception might play a specific role in visual processing, visual working memory for faces undergoes the same age-related decline as it does for socially less relevant visual material. Moreover, these results suggest that working memory decline cannot be explained solely by increasing vulnerability in the prefrontal cortex related to executive functioning, but indicate an age-related decrease in a visual short-term buffer, possibly located in the temporal cortex.
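For readers unfamiliar with the n-back paradigm used here: participants watch a stream of items and respond whenever the current item matches the one presented n positions earlier. A minimal sketch of target identification and scoring (the stimuli and helper names are illustrative, not the study's materials):

```python
def nback_targets(sequence, n):
    """Indices whose item matches the item n positions back."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

def nback_score(sequence, n, responses):
    """Hits, false alarms and misses for a set of response indices."""
    targets = set(nback_targets(sequence, n))
    responses = set(responses)
    return {'hits': len(responses & targets),
            'false_alarms': len(responses - targets),
            'misses': len(targets - responses)}
```

For the stream `ABABCAC` with n = 2, the targets fall at indices 2, 3 and 6; a participant who responds at indices 2 and 4 scores one hit, one false alarm and two misses.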


Intelligent Robots and Systems | 2013

Joint action understanding improves robot-to-human object handover

Elena Corina Grigore; Kerstin Eder; Anthony G. Pipe; Chris Melhuish; Ute Leonards

The development of trustworthy human-assistive robots is a challenge that goes beyond the traditional boundaries of engineering. Essential components of trustworthiness are safety, predictability and usefulness. In this paper we demonstrate that integrating joint action understanding from human-human interaction into the human-robot context can significantly improve the success rate of robot-to-human object handover tasks. We take a two-layer approach. The first layer handles the physical aspects of the handover: the robot's decision to release the object is informed by a Hidden Markov Model that estimates the state of the handover. Inspired by observations of human-human handovers, we then introduce a higher-level cognitive layer that models behaviour characteristic of a human user in a handover situation. In particular, we focus on the inclusion of eye gaze/head orientation into the robot's decision making. Our results demonstrate that by integrating these non-verbal cues the success rate of robot-to-human handovers can be significantly improved, resulting in a more robust and therefore safer system.
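The state-estimation component can be sketched as a discrete hidden Markov model with forward filtering. The states, observation symbols and all probabilities below are invented for illustration; they are not the paper's trained model:

```python
def hmm_filter(obs_seq, states, start, trans, emit):
    """Forward filtering: P(state_t | obs_1..t) for a discrete HMM."""
    belief = dict(start)
    for obs in obs_seq:
        # predict: propagate the belief through the transition model
        pred = {s: sum(belief[p] * trans[p][s] for p in states)
                for s in states}
        # update: weight each state by the observation likelihood
        upd = {s: pred[s] * emit[s][obs] for s in states}
        z = sum(upd.values())
        belief = {s: upd[s] / z for s in states}
    return belief

# Illustrative handover model (all numbers invented):
STATES = ['hold', 'transfer', 'release']
START = {'hold': 1.0, 'transfer': 0.0, 'release': 0.0}
TRANS = {'hold':     {'hold': 0.8, 'transfer': 0.2, 'release': 0.0},
         'transfer': {'hold': 0.0, 'transfer': 0.6, 'release': 0.4},
         'release':  {'hold': 0.0, 'transfer': 0.0, 'release': 1.0}}
# Discretised grip-force observations: high / shared / low
EMIT = {'hold':     {'high': 0.8,  'shared': 0.15, 'low': 0.05},
        'transfer': {'high': 0.2,  'shared': 0.6,  'low': 0.2},
        'release':  {'high': 0.05, 'shared': 0.15, 'low': 0.8}}
```

In such a scheme the robot would release the object once the filtered belief in a release-appropriate state crosses a threshold; the paper's cognitive layer then gates this decision on the human's gaze/head orientation.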


Cortex | 2010

New insights into feature and conjunction search: I. Evidence from pupil size, eye movements and ageing

Gillian Porter; Andrea Tales; Tom Troscianko; Gordon K. Wilcock; Judy Haworth; Ute Leonards

Differences in the processing mechanisms underlying visual feature and conjunction search are still under debate, one problem being a common emphasis on performance measures (speed and accuracy) which do not necessarily provide insights to the underlying processing principles. Here, eye movements and pupil dilation were used to investigate sampling strategy and processing load during performance of a conjunction and two feature-search tasks, with younger (18-27 years) and healthy older (61-83 years) age groups compared for evidence of differential age-related changes. The tasks involved equivalent processing time per item, were controlled in terms of target-distractor similarity, and did not allow perceptual grouping. Close matching of the key tasks was confirmed by patterns of fixation duration and an equal number of saccades required to find a target. Moreover, moment-to-moment pupillary dilation was indistinguishable across the tasks for both age groups, suggesting that all required the same total amount of effort or resources. Despite matching, subtle differences in eye movement patterns occurred between tasks: the conjunction task required more saccades to reach a target-absent decision and involved shorter saccade amplitudes than the feature tasks. General age-related changes were manifested in an increased number of saccades and longer fixation durations in older than younger participants. In addition, older people showed disproportionately longer and more variable fixation durations for the conjunction task specifically. These results suggest a fundamental difference between conjunction and feature search: accurate target identification in the conjunction context requires more conservative eye movement patterns, with these further adjusted in healthy ageing. The data also highlight the independence of eye movement and pupillometry measures and stress the importance of saccades and strategy for understanding the processing mechanisms driving different types of visual search.


NeuroImage | 2010

The neural mechanisms of learning from competitors

Paul A Howard-Jones; Rafal Bogacz; Jee H. Yoo; Ute Leonards; Skevi Demetriou

Learning from competitors poses a challenge for existing theories of reward-based learning, which assume that rewarded actions are more likely to be executed in the future. Such a learning mechanism would disadvantage a player in a competitive situation because, since the competitor's loss is the player's gain, reward might become associated with an action the player should themselves avoid. Using fMRI, we investigated the neural activity of humans competing with a computer in a foraging task. We observed neural activity that represented the variables required for learning from competitors: the actions of the competitor (in the player's motor and premotor cortex) and the reward prediction error arising from the competitor's feedback. In particular, regions positively correlated with the unexpected loss of the competitor (which was beneficial to the player) included the striatum and those regions previously implicated in response inhibition. Our results suggest that learning in such contexts may involve the competitor's unexpected losses activating regions of the player's brain that subserve response inhibition, as the player learns to avoid the actions that produced them.
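The prediction-error variable at the core of this account has the standard delta-rule form. A minimal sketch, assuming a simple sign-flip to capture the competitive framing (the function names and learning rate are illustrative, not the paper's fitted model):

```python
def td_update(value, reward, alpha=0.1):
    """Delta-rule update: move the value estimate toward the reward.

    Returns the updated value and the reward prediction error (delta).
    """
    delta = reward - value      # reward prediction error
    return value + alpha * delta, delta

def player_signal(competitor_reward):
    """Competitive framing: the competitor's loss is the player's gain."""
    return -competitor_reward
```

Under this framing, a competitor's unexpected loss (negative competitor reward) produces a positive prediction error for the player, which is exactly the signal the study links to striatal and response-inhibition regions.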
