Matt Craddock
Leipzig University
Publications
Featured research published by Matt Craddock.
Attention, Perception, & Psychophysics | 2008
Matt Craddock; Rebecca Lawson
In four experiments, we examined the haptic recognition of 3-D objects. In Experiment 1, blindfolded participants named everyday objects presented haptically in two blocks. There was significant priming of naming, but no cost of an object changing orientation between blocks. However, typical orientations of objects were recognized more quickly than nonstandard orientations. In Experiment 2, participants accurately performed an unannounced test of memory for orientation. The lack of orientation-specific priming in Experiment 1, therefore, was not because participants could not remember the orientation at which they had first felt an object. In Experiment 3, we examined haptic naming of objects that were primed either haptically or visually. Haptic priming was greater than visual priming, although significant cross-modal priming was also observed. In Experiment 4, we tested recognition memory for familiar and unfamiliar objects using an old-new recognition task. Objects were recognized best when they were presented in the same orientation in both blocks, suggesting that haptic object recognition is orientation sensitive. Photographs of the unfamiliar objects may be downloaded from www.psychonomic.org/archive.
Attention, Perception, & Psychophysics | 2009
Matt Craddock; Rebecca Lawson
Two experiments examined the effects of size changes on haptic object recognition. In Experiment 1, participants named one of three exemplars (a standard-size-and-shape, different-size, or different-shape exemplar) of 36 categories of real, familiar objects. They then performed an old/new recognition task on the basis of object identity for the standard exemplars of all 36 objects. Half of the participants performed both blocks visually; the other half performed both blocks haptically. The participants were able to efficiently name unusually sized objects haptically, consistent with previous findings of good recognition of small-scale models of stimuli (Lawson, in press). However, performance was impaired for both visual and haptic old/new recognition when objects changed size or shape between blocks. In Experiment 2, participants performed a short-term haptic shape-matching task using 3-D plastic models of familiar objects, and as in Experiment 1, a cost emerged for ignoring the irrelevant size change. Like its visual counterpart, haptic object recognition incurs a significant but modest cost for generalizing across size changes.
PLOS ONE | 2009
Matt Craddock; Rebecca Lawson
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.
Perception | 2009
Matt Craddock; Rebecca Lawson
Two experiments were carried out to examine the effects of dominant right versus non-dominant left exploration hand and left versus right object orientation on haptic recognition of familiar objects. In experiment 1, participants named 48 familiar objects in two blocks. There was no dominant-hand advantage to naming objects haptically and there was no interaction between exploration hand and object orientation. Furthermore, priming of naming was not reduced by changes of either object orientation or exploration hand. To test whether these results were attributable to a failure to encode object orientation and exploration hand, experiment 2 replicated experiment 1 except that the unexpected task in the second block was to decide whether either exploration hand or object orientation had changed relative to the initial naming block. Performance on both tasks was above chance, demonstrating that this information had been encoded into long-term haptic representations following the initial block of naming. Thus when identifying familiar objects, the haptic processing system can achieve object constancy efficiently across hand changes and object-orientation changes, although this information is often stored even when it is task-irrelevant.
Frontiers in Human Neuroscience | 2012
Jasna Martinovic; Rebecca Lawson; Matt Craddock
Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually accumulates information serially from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load, or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable objects from unfamiliar, unnamable objects, both visually and haptically. Analyses focused on evoked and total fronto-central theta-band activity (5–7 Hz; a marker of working memory) and occipital upper alpha-band activity (10–12 Hz; a marker of perceptual processing), time-locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on representations shared with vision, but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.
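For readers unfamiliar with this kind of analysis, here is a minimal sketch of how band-limited EEG power can be computed, using the theta (5–7 Hz) and upper alpha (10–12 Hz) ranges named in the abstract. The sampling rate, filter settings, and data below are placeholder assumptions for illustration, not the study's actual pipeline.

```python
# A minimal sketch of band-limited EEG power via a band-pass filter plus the
# Hilbert envelope. Sampling rate, filter order, and data are assumptions;
# the 5-7 Hz and 10-12 Hz bands follow the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(eeg, fs, low, high, order=4):
    """Instantaneous power of `eeg` within the [low, high] Hz band."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    filtered = filtfilt(b, a, eeg)          # zero-phase band-pass filtering
    return np.abs(hilbert(filtered)) ** 2   # squared analytic-signal envelope

fs = 500.0                                # hypothetical sampling rate (Hz)
eeg = np.random.randn(int(fs * 2))        # placeholder single-trial channel

theta = band_power(eeg, fs, 5.0, 7.0)     # fronto-central theta band
alpha = band_power(eeg, fs, 10.0, 12.0)   # occipital upper alpha band
```

The evoked/total distinction in the abstract concerns the order of operations: evoked power is computed from the trial-averaged signal and so retains only phase-locked activity, whereas total power averages single-trial power estimates like the one above, keeping non-phase-locked activity as well.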
Perception | 2011
Matt Craddock; Jasna Martinovic; Rebecca Lawson
In aperture viewing, the field of view is restricted so that only a small part of an image is visible, enforcing serial exploration of different regions of an object in order to successfully recognise it. Previous studies have used either active control or passive observation of the viewing aperture, but have not contrasted the two modes. Active viewing has previously been shown to confer an advantage in visual object recognition. We displayed objects through a small moveable aperture and tested whether people's ability to identify the images as familiar or novel objects was influenced by how the window location was controlled. Participants recognised objects faster when they actively controlled the window using their finger on a touch-screen, as opposed to passively observing the moving window. There was no difference between passively re-viewing one's own window movements generated in a previous block of trials and viewing window movements that had been generated by other participants. These results contrast with those from comparable studies of haptic object recognition, which have found a benefit for passive over active stimulus exploration, but accord with findings of an advantage of active viewing in visual object recognition.
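As a rough illustration of the restricted-viewing manipulation, the sketch below masks an image so that only a small circular window is visible; the window centre would be updated on each frame as the participant (or a replay) moves it. Image size, window radius, and coordinates are hypothetical, since the abstract does not report display parameters.

```python
# A minimal sketch of aperture viewing: only pixels inside a circular window
# centred at (cx, cy) remain visible. All sizes are hypothetical.
import numpy as np

def aperture_view(image, cx, cy, radius):
    """Return `image` with everything outside a disc around (cx, cy) blanked."""
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

image = np.random.rand(480, 640)                        # placeholder image
frame = aperture_view(image, cx=320, cy=240, radius=40) # one display frame
```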
NeuroImage | 2015
Valeria Bekhtereva; Matt Craddock; Matthias M. Müller
Emotionally arousing stimuli are known to rapidly draw the brain's processing resources, even when they are task-irrelevant. The steady-state visual evoked potential (SSVEP) response, a neural response to a flickering stimulus which effectively allows measurement of the processing resources devoted to that stimulus, has been used to examine this process of attentional shifting. Previous studies have used a task in which participants detected periods of coherent motion in flickering random dot kinematograms (RDKs) which generate an SSVEP, and found that task-irrelevant emotional stimuli withdraw more attentional resources from the task-relevant RDKs than task-irrelevant neutral stimuli. However, it is not clear whether the emotion-related differences in the SSVEP response are conditional on higher-level extraction of emotional cues as indexed by well-known event-related potential (ERP) components (the N170 and the early posterior negativity, EPN), or if affective bias in competition for visual attention resources is a consequence of a time-invariant shifting process. In the present study, we used two different types of emotional distractors - IAPS pictures and facial expressions - for which emotional cue extraction occurs at different speeds, being typically earlier for faces (at ~170 ms, as indexed by the N170) than for IAPS images (~220-280 ms, EPN). We found that emotional modulation of attentional resources as measured by the SSVEP occurred earlier for faces (around 180 ms) than for IAPS pictures (around 550 ms), and in each case after the extraction of emotional cues as indexed by visual ERP components. This is consistent with emotion-related re-allocation of attentional resources occurring after emotional cue extraction rather than being linked to a time-invariant shifting process.
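To make the SSVEP measure concrete, here is a minimal sketch that quantifies the response as the Fourier amplitude at the stimulus flicker frequency. The 10 Hz flicker rate, sampling rate, and data are assumptions for illustration; the abstract does not state the actual driving frequency.

```python
# A hedged sketch: SSVEP amplitude as the amplitude spectrum value at the
# flicker frequency. Flicker rate and data are placeholder assumptions.
import numpy as np

def ssvep_amplitude(eeg, fs, flicker_hz):
    """Amplitude at the flicker frequency for one channel, one epoch."""
    windowed = eeg * np.hanning(eeg.size)          # taper to reduce leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - flicker_hz))    # nearest FFT bin
    return 2.0 * np.abs(spectrum[idx]) / eeg.size  # scale to signal units

fs = 500.0
eeg = np.random.randn(int(fs * 4))                 # 4 s placeholder epoch
amp = ssvep_amplitude(eeg, fs, flicker_hz=10.0)    # hypothetical RDK flicker
```

Time courses of attentional modulation like those reported here are usually obtained by sliding such an estimate (or a narrow wavelet filter centred on the flicker frequency) across the epoch rather than from a single whole-epoch spectrum.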
Attention, Perception, & Psychophysics | 2010
Matt Craddock; Rebecca Lawson
We examined the effects of interstimulus interval (ISI) and orientation changes on the haptic recognition of novel objects, using a sequential shape-matching task. The stimuli consisted of 36 wedge-shaped plastic objects that varied along two shape dimensions (hole/bump and dip/ridge). Two objects were presented at either the same orientation or a different orientation, separated by either a short (3-sec) ISI or a long (15-sec) ISI. In separate conditions, ISI was blocked or randomly intermixed. Participants ignored orientation changes and matched on shape alone. Although performance was better in the mixed condition, there were no other differences between conditions. There was no decline in performance at the long ISI. There were similar, marginally significant benefits to same-orientation matching for short and long ISIs. The results suggest that the perceptual object representations activated from haptic inputs are both stable, being maintained for at least 15 sec, and orientation sensitive.
PLOS ONE | 2013
Matt Craddock; Jasna Martinovic; Matthias M. Müller
Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.
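For illustration, the sketch below shows one simple way to produce LSF and HSF versions of a stimulus with Gaussian filtering. The cutoff (sigma) is a hypothetical value; the abstract does not specify the filtering actually applied to the stimuli.

```python
# A minimal sketch of spatial frequency filtering with a Gaussian kernel.
# The sigma value is a hypothetical cutoff, not the one used in the study.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(256, 256)      # placeholder greyscale object image
sigma = 8.0                           # larger sigma = lower cutoff frequency

lsf = gaussian_filter(image, sigma)   # low-pass: coarse structure only
hsf = image - lsf                     # high-pass: fine detail and edges
```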
BMC Neuroscience | 2015
Matt Craddock; Jasna Martinovic; Matthias M. Müller
Background: The visual system may process spatial frequency information in a low-to-high, coarse-to-fine sequence. In particular, low and high spatial frequency information may be processed via different pathways during object recognition, with low spatial frequency (LSF) information projected rapidly to frontal areas and high spatial frequency (HSF) information processed later in visual ventral areas. In an electroencephalographic study, we examined the time course of information processing for images filtered to contain different ranges of spatial frequencies. Participants viewed HSF, LSF, or unfiltered, broadband (BB) images of objects or non-object textures, classifying them as showing either man-made or natural objects, or non-objects. Event-related potentials (ERPs) and evoked and total gamma band activity (eGBA and tGBA) were compared for object and non-object images across the different spatial frequency ranges.
Results: The visual P1 showed independent modulations by objecthood and spatial frequency, while for the N1 these factors interacted. The P1 showed more positive amplitudes for objects than non-objects, and more positive amplitudes for BB than for HSF images, which in turn evoked more positive amplitudes than LSF images. The peak-to-peak N1 was much reduced for BB non-objects relative to all other images, while HSF and LSF non-objects still elicited as negative an N1 as objects. In contrast, eGBA was influenced by spatial frequency but not objecthood, while tGBA showed a stronger response to objects than non-objects.
Conclusions: Different pathways are involved in the processing of low and high spatial frequencies during object recognition, as reflected in the interaction between objecthood and spatial frequency in the visual N1 component. Total gamma band activity seems to reflect a late, probably high-level representational process.
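The peak-to-peak N1 mentioned in the Results can be illustrated with a short sketch: the N1 trough is measured relative to the preceding P1 peak, which removes overlap from the P1. The time windows and data below are hypothetical assumptions, as the abstract does not report them.

```python
# An illustrative sketch of a peak-to-peak N1 measure: N1 trough amplitude
# relative to the preceding P1 peak. Windows are hypothetical assumptions.
import numpy as np

def peak_to_peak_n1(erp, times, p1_win=(0.08, 0.13), n1_win=(0.13, 0.20)):
    """N1 minimum minus P1 maximum within the given windows (seconds)."""
    p1 = erp[(times >= p1_win[0]) & (times <= p1_win[1])].max()
    n1 = erp[(times >= n1_win[0]) & (times <= n1_win[1])].min()
    return n1 - p1   # more negative = larger peak-to-peak N1

fs = 500.0
times = np.arange(-0.1, 0.5, 1.0 / fs)       # epoch: -100 to 500 ms
erp = np.random.randn(times.size) * 0.5      # placeholder averaged ERP
n1_p2p = peak_to_peak_n1(erp, times)
```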