Publications


Featured research published by Bruce Keefe.


Experimental Brain Research | 2009

The role of binocular vision in grasping: a small stimulus-set distorts results

Bruce Keefe; Simon J. Watt

The role of binocular vision in grasping has frequently been assessed by measuring the effects on grasp kinematics of covering one eye. These studies have typically used three or fewer objects presented at three or fewer distances, raising the possibility that participants learn the properties of the stimulus set. If so, even relatively poor visual information may be sufficient to identify which object/distance configuration is presented on a given trial, in effect providing an additional source of depth information. Here we show that the availability of this uncontrolled cue leads to an underestimate of the effects of removing binocular information, and therefore to an overestimate of the effectiveness of the remaining cues. We measured the effects of removing binocular cues on visually open-loop grasps using (1) a conventional small stimulus-set, and (2) a large, pseudo-randomised stimulus set, which could not be learned. Removing binocular cues resulted in a significant change in grip aperture scaling in both conditions: peak grip apertures were larger (when reaching to small objects), and scaled less with increases in object size. However, this effect was significantly larger with the randomised stimulus set. These results confirm that binocular information makes a significant contribution to grasp planning. Moreover, they suggest that learned stimulus information can contribute to grasping in typical experiments, and so the contribution of information from binocular vision (and from other depth cues) may not have been measured accurately.
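The key dependent measure in this study, grip aperture scaling, is the slope of peak grip aperture regressed on object size. A minimal sketch of that computation, using hypothetical numbers (not data from the paper) in which removing binocular cues raises apertures overall and flattens the slope:

```python
# Grip aperture scaling: the slope of a linear fit of peak grip
# aperture (mm) against physical object size (mm).
# All values below are hypothetical, for illustration only.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

object_sizes = [30, 40, 50, 60, 70]   # mm
binocular    = [55, 63, 71, 79, 87]   # peak grip apertures, mm
monocular    = [68, 73, 78, 83, 88]   # larger overall, shallower scaling

print(slope(object_sizes, binocular))   # -> 0.8
print(slope(object_sizes, monocular))   # -> 0.5
```

A smaller slope with larger overall apertures is the signature pattern the abstract describes for monocular viewing.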


Frontiers in Psychology | 2013

Adaptation improves face trustworthiness discrimination

Bruce Keefe; Milena Dzhelyova; David I. Perrett; Nick E. Barraclough

Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces, respectively. In the first experiment, just-noticeable differences (JNDs) were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination of an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice (2-IFC) adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
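Deriving a JND from a fitted psychometric function can be sketched in a few lines: fit the spread of a cumulative Gaussian to proportion-correct data from a 2-IFC task, then read off the stimulus difference yielding 75% correct. This is a generic illustration with made-up numbers, not the paper's fitting procedure:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_sigma(deltas, p_correct):
    """Grid-search the sigma of p(d) = Phi(d / sigma) that best fits
    proportion-correct 2-IFC data (chance = 0.5 at d = 0)."""
    grid = [s / 100.0 for s in range(1, 2001)]  # sigma from 0.01 to 20.0
    def sq_err(s):
        return sum((phi(d / s) - p) ** 2 for d, p in zip(deltas, p_correct))
    return min(grid, key=sq_err)

# Hypothetical data: trust-morph differences (arbitrary morph units)
# and the proportion of correct "more untrustworthy" judgments.
deltas    = [1, 2, 4, 8, 16]
p_correct = [0.53, 0.57, 0.63, 0.74, 0.91]

sigma = fit_sigma(deltas, p_correct)
jnd = sigma * 0.6745  # Phi(0.6745) is approximately 0.75, i.e. 75% correct
print(round(jnd, 2))
```

The constant 0.6745 is the standard-normal quantile at 0.75, so `jnd` is the morph difference an observer discriminates correctly on 75% of trials under this model.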


I-perception | 2016

An Orientation Dependent Size Illusion Is Underpinned by Processing in the Extrastriate Visual Area, LO1

Kyriaki Mikellidou; Andre Gouws; Hannah Clawson; Peter Thompson; Antony B. Morland; Bruce Keefe

We use the simple but prominent Helmholtz squares illusion, in which a vertically striped square appears wider than a horizontally striped square of identical physical dimensions, to determine whether functional magnetic resonance imaging (fMRI) BOLD responses in V1 underpin illusions of size. We report that these simple stimuli, which differ in only one parameter, orientation, to which V1 neurons are highly selective, elicited activity in V1 that followed their physical, not perceived, size. To further probe the role of V1 in the illusion and investigate plausible extrastriate visual areas responsible for eliciting the Helmholtz squares illusion, we performed a follow-up transcranial magnetic stimulation (TMS) experiment in which we compared perceptual judgments about the aspect ratio of perceptually identical Helmholtz squares when no TMS was applied against selective stimulation of V1, LO1, or LO2. In agreement with the fMRI results, we report that TMS of area V1 does not compromise the strength of the illusion. Only stimulation of area LO1, and not LO2, significantly compromised the strength of the illusion, consistent with previous research showing that LO1 plays a role in the processing of orientation information. These results demonstrate the involvement of a specific extrastriate area in an illusory percept of size.


Experimental Brain Research | 2017

Viewing geometry determines the contribution of binocular vision to the online control of grasping

Bruce Keefe; Simon J. Watt

Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the ‘viewing geometry’ typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being ‘hard-wired’. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, ‘architectural’ property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.


Behavior Research Methods | 2014

A database of whole-body action videos for the study of action, emotion, and untrustworthiness

Bruce Keefe; Matthias Villing; Chris Racey; Samantha L. Strong; Joanna Wincenciak; Nick E. Barraclough

We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions—walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting—while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits with different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each of the actions corresponding to the trait that the actor portrayed in the two-dimensional videos. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.


Attention Perception & Psychophysics | 2017

Visual adaptation enhances action sound discrimination.

Nick E. Barraclough; Steve Page; Bruce Keefe

Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action “matched” the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.


Journal of Vision | 2016

Action adaptation during natural unfolding social scenes influences action recognition and inferences made about actor beliefs

Bruce Keefe; Joanna Wincenciak; Tjeerd Jellema; James Ward; Nick E. Barraclough

When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individuals in naturally unfolding social scenes at increasingly higher levels of action understanding. We presented scenes in which one actor picked up boxes (of varying number and weight), after which a second actor picked up a single box. Adaptation to the first actor's behavior systematically changed perception of the second actor. Aftereffects increased with the duration of the first actor's behavior, declined exponentially over time, and were independent of view direction. Inferences about the second actor's expectation of box weight were also distorted by adaptation to the first actor. Distortions in action recognition and actor expectations did not, however, extend across different actions, indicating that adaptation is not acting at an action-independent abstract level but rather at an action-dependent level. We conclude that although adaptation influences more complex inferences about belief states of individuals, this is likely to be a result of adaptation at an earlier action recognition stage rather than adaptation operating at a higher, more abstract level in mentalizing or simulation systems.


Human Brain Mapping | 2018

Emergence of symmetry selectivity in the visual areas of the human brain: fMRI responses to symmetry presented in both frontoparallel and slanted planes

Bruce Keefe; Andre Gouws; Aislin A. Sheldon; Richard Vernon; Samuel Lawrence; Declan J. McKeefry; Alex R. Wade; Antony B. Morland

Symmetry is effortlessly perceived by humans across changes in viewing geometry. Here, we re‐examined the network subserving symmetry processing in the context of up‐to‐date retinotopic definitions of visual areas. Responses in object selective cortex, as defined by functional localizers, were also examined. We further examined responses to both frontoparallel and slanted symmetry while manipulating attention both toward and away from symmetry. Symmetry‐specific responses first emerge in V3 and continue across all downstream areas examined. Of the retinotopic areas, ventral occipital VO1 showed the strongest symmetry response, which was similar in magnitude to the responses observed in object selective cortex. Neural responses were found to increase with both the coherence and folds of symmetry. Compared to passive viewing, drawing attention to symmetry generally increased neural responses and the correspondence of these neural responses with psychophysical performance. Examining symmetry on the slanted plane found responses to again emerge in V3, continue through downstream visual cortex, and be strongest in VO1 and LOB. Both slanted and frontoparallel symmetry evoked similar activity when participants performed a symmetry‐related task. However, when a symmetry‐unrelated task was performed, fMRI responses to slanted symmetry were reduced relative to their frontoparallel counterparts. These task‐related changes provide a neural signature that suggests slant has to be computed ahead of symmetry being appropriately extracted, known as the “normalization” account of symmetry processing. Specifically, our results suggest that normalization occurs naturally when attention is directed toward symmetry and orientation, but becomes interrupted when attention is directed away from these features.


Journal of Vision | 2016

Global shape aftereffects in composite radial frequency patterns.

Samuel Lawrence; Bruce Keefe; Richard Vernon; Alex R. Wade; Declan J. McKeefry; Antony B. Morland

Individual radial frequency (RF) patterns are generated by modulating a circle's radius as a sinusoidal function of polar angle and have been shown to tap into global shape processing mechanisms. Composite RF patterns can reproduce the complex outlines of natural shapes, and examining these stimuli may allow us to interrogate global shape mechanisms that are recruited in biologically relevant tasks. We present evidence for a global shape aftereffect in a composite RF pattern stimulus comprising two RF components. Manipulations of the shape, location, size and spatial frequency of the stimuli revealed that this aftereffect could only be explained by the attenuation of intermediate-level global shape mechanisms. The tuning of the aftereffect to test stimulus size also revealed two mechanisms underlying the aftereffect; one that was tuned to size and one that was invariant. Finally, we show that these shape mechanisms may encode some RF information. However, the RF encoding we found was not capable of explaining the full extent of the aftereffect, indicating that encoding of other shape features such as curvature are also important in global shape processing.
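The RF construction the abstract describes has a simple closed form: the contour radius is the base radius modulated sinusoidally with polar angle, and a composite pattern sums several such components. A minimal sketch with illustrative parameters (the frequencies and amplitudes below are not those used in the study):

```python
import math

def rf_radius(theta, r0=1.0, components=((3, 0.1, 0.0),)):
    """Radius of a radial frequency (RF) contour at polar angle theta.
    Each component is (frequency, amplitude, phase); a composite RF
    pattern simply sums the sinusoidal modulations of base radius r0."""
    mod = sum(a * math.sin(f * theta + p) for f, a, p in components)
    return r0 * (1.0 + mod)

# A composite pattern built from two RF components (RF3 + RF5),
# with illustrative amplitudes and phases.
composite = ((3, 0.10, 0.0), (5, 0.05, math.pi / 4))

# Sample 256 Cartesian points along the contour.
points = [(rf_radius(t, 1.0, composite) * math.cos(t),
           rf_radius(t, 1.0, composite) * math.sin(t))
          for t in (2 * math.pi * i / 256 for i in range(256))]
```

With two components of different frequency, amplitude, and phase, the summed modulation already breaks the rotational symmetry of a single RF pattern, which is what lets composites approximate the irregular outlines of natural shapes.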


Journal of Vision | 2015

A bias-free measure of the face viewpoint aftereffect from radial frequency patterns

Bruce Keefe; Samuel Lawrence; Alex R. Wade; Declan J. McKeefry; Antony B. Morland

The face viewpoint aftereffect (FVA) is an illusion that biases the perception of a frontward facing test face in the opposite direction to the adapting face (Fang & He, 2005). Changes in the outer contour of the head that highlight these changes in face viewpoint can be modelled using radial frequency (RF) patterns (Wilson et al., 2000). We used RF patterns together with a recently developed procedure to tease apart the FVA from observer response bias. The psychophysical procedure we used was necessary because traditional methods for measuring perceptual illusions confound perceptual changes with changes in an observer's criterion (Morgan, 2013; 2014). The contours describing the shape stimuli had a luminance profile that followed the fourth derivative of a Gaussian. Participants adapted to two vertically aligned, contrast-reversing (1 Hz) stimuli facing in opposite directions. Eight separate test conditions were randomly interleaved, each with two static test stimuli, vertically aligned to the same spatial position as the adaptors. Test stimuli could face in either the same or opposite directions, allowing us to make tangible predictions to distinguish adaptation from response bias. A two-alternative forced-choice adaptive staircase procedure was used in which the participants indicated which of the two test stimuli appeared most asymmetric. Results showed a large FVA from RF patterns that could not be attributed to shifts in observer response bias. By varying the size of the test stimuli we were able to rule out a purely retinotopic account of adaptation. Halving the size of the test stimuli reduced the FVA by ~50%, suggesting an extra-striate locus of adaptation. These results are consistent with the proposal by Wilson et al. (2000), which suggests an extra-striate locus for the processing of face viewpoint from RF patterns. Meeting abstract presented at VSS 2015.
