Alberta Ipser
City University London
Publications
Featured research published by Alberta Ipser.
Journal of Experimental Psychology: Human Perception and Performance | 2015
Jennifer Murphy; Alberta Ipser; Sebastian B. Gaigg; Richard J. Cook
Differences in the visual processing of familiar and unfamiliar faces have prompted considerable interest in face learning, the process by which unfamiliar faces become familiar. Previous work indicates that face learning is determined in part by exposure duration; unsurprisingly, viewing faces for longer affords superior performance on subsequent recognition tests. However, there has been further speculation that exemplar variation, experience of different exemplars of the same facial identity, contributes to face learning independently of viewing time. Several leading accounts of face learning, including the averaging and pictorial coding models, predict an exemplar variation advantage. Nevertheless, the exemplar variation hypothesis currently lacks empirical support. The present study therefore sought to test this prediction by comparing the effects of unique exemplar face learning—a condition rich in exemplar variation—and repeated exemplar face learning—a condition that equates viewing time, but constrains exemplar variation. Crucially, observers who received unique exemplar learning displayed better recognition of novel exemplars of the learned identities at test, than observers in the repeated exemplar condition. These results have important theoretical and substantive implications for models of face learning and for approaches to face training in applied contexts.
Cortex | 2013
Elliot Freeman; Alberta Ipser; Austra Palmbaha; Diana Paunoiu; Peter Brown; Christian Lambert; Alexander P. Leff; Jon Driver
The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream–Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing estimates in PH and healthy participants, and how they can still perceive the timing of external events correctly, on average.
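One way to write the proposed renormalisation down is sketched below; the notation is an illustrative assumption, not taken from the paper.

```latex
% Illustrative formalisation of temporal renormalisation (notation assumed).
% Let a_i be the audiovisual asynchrony registered by brain mechanism i (i = 1,...,N),
% including that mechanism's own neural delay. Perceived asynchrony in each mechanism
% is taken relative to the average across mechanisms:
\[
  \hat{a}_i = a_i - \bar{a}, \qquad \bar{a} = \frac{1}{N}\sum_{j=1}^{N} a_j .
\]
% Two properties follow: the perceived asynchronies sum to zero, so timing is veridical
% on average; and a delay that slows one mechanism raises \bar{a}, shifting the perceived
% timing of the other mechanisms in the opposite direction, which is the antagonism seen
% between TOJ and McGurk measures.
```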
Journal of Experimental Psychology: Human Perception and Performance | 2016
Alberta Ipser; Richard J. Cook
Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.
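As a rough illustration of how categorisation precision can be quantified in this kind of task, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical sincere/insincere judgements along a morph continuum; the slope of the fit (1/sigma) indexes precision. The data, function names, and parameter values are illustrative assumptions, not the authors' analysis.

```python
# Hypothetical sketch: estimate categorisation precision on a smile-sincerity
# morph continuum (illustrative only, not the authors' analysis code).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: P('sincere') as a function of morph level x."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative data: morph level (0 = insincere ... 1 = sincere) and proportion of
# 'sincere' responses, e.g. without vs. with a concurrent motor load.
morph_levels = np.linspace(0.0, 1.0, 9)
p_sincere_no_load = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.96, 0.99])
p_sincere_load    = np.array([0.10, 0.15, 0.25, 0.38, 0.52, 0.65, 0.78, 0.85, 0.90])

for label, data in [("no load", p_sincere_no_load), ("vowel production", p_sincere_load)]:
    (mu, sigma), _ = curve_fit(psychometric, morph_levels, data, p0=[0.5, 0.2])
    print(f"{label}: PSE = {mu:.2f}, precision (1/sigma) = {1 / sigma:.2f}")
# A shallower psychometric function (larger sigma, lower 1/sigma) under load would
# correspond to the reported reduction in the precision of smile categorisation.
```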
Scientific Reports | 2017
Alberta Ipser; Vlera Agolli; Anisa Bajraktari; Fatimah Al-Alawi; Nurfitriani Djaafara; Elliot Freeman
Are sight and sound out of synch? Signs that they are have been dismissed for over two centuries as an artefact of attentional and response bias, to which traditional subjective methods are prone. To avoid such biases, we measured performance on objective tasks that depend implicitly on achieving good lip-synch. We measured the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identification of degraded speech, while manipulating audiovisual asynchrony. Peak performance was found at an average auditory lag of ~100 ms, but this varied widely between individuals. Participants’ individual optimal asynchronies showed trait-like stability when the same task was re-tested one week later, but measures based on different tasks did not correlate. This discounts the possible influence of common biasing factors, suggesting instead that our different tasks probe different brain networks, each subject to their own intrinsic auditory and visual processing latencies. Our findings call for renewed interest in the biological causes and cognitive consequences of individual sensory asynchronies, leading potentially to fresh insights into the neural representation of sensory timing. A concrete implication is that speech comprehension might be enhanced by first measuring each individual’s optimal asynchrony and then applying a compensatory auditory delay.
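One way to picture the "measure, then compensate" suggestion is sketched below: fit a tuning curve to task performance across audiovisual asynchronies, take the fitted peak as that individual's optimal auditory lag, and delay the audio by roughly that amount. The data, function names, and parameters are illustrative assumptions, not the authors' procedure.

```python
# Hypothetical sketch: estimate an individual's optimal audiovisual asynchrony
# and derive a compensatory auditory delay (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(asynchrony_ms, peak_ms, width_ms, amplitude, baseline):
    """Performance as a function of auditory lag (positive = audio lags video)."""
    return baseline + amplitude * np.exp(-0.5 * ((asynchrony_ms - peak_ms) / width_ms) ** 2)

# Illustrative data: proportion correct on a degraded-speech task at each tested lag.
lags_ms = np.array([-300, -200, -100, 0, 100, 200, 300])
accuracy = np.array([0.35, 0.45, 0.60, 0.70, 0.74, 0.62, 0.48])

params, _ = curve_fit(gaussian_tuning, lags_ms, accuracy, p0=[100.0, 150.0, 0.4, 0.3])
optimal_lag_ms = params[0]
print(f"Estimated optimal auditory lag: {optimal_lag_ms:.0f} ms")

# A compensatory presentation would then delay the audio stream by optimal_lag_ms
# (or the video, if the estimate is negative) for this individual.
```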
Wireless, Mobile and Ubiquitous Technologies in Education | 2012
Maria Uther; Alberta Ipser
This paper sets out to provide preliminary guidance on developing mobile language learning applications, with consideration for using multimedia. A set of initial findings is presented from a small-scale pilot learner study, along with other considerations from findings in the literature. These preliminary guidelines could be further developed in later iterations to provide an overall framework for developing and evaluating other multimedia elements in mobile language learning applications, and possibly also other mobile learning applications that use multimedia extensively (e.g. musical learning).
Electronic Imaging | 2016
Elliot Freeman; Alberta Ipser
The senses have traditionally been studied separately, but it is now recognised that the brain is just as richly multisensory as is our natural environment. This creates fresh challenges for understanding how complex multisensory information is organised and coordinated around the brain. Take timing for example: the sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their neural signals from each modality arrive at different multisensory areas in the brain at different times. How do we nevertheless perceive the synchrony of the original events correctly? It is popularly assumed that this is achieved via some mechanism of multisensory temporal recalibration. But recent work from my lab on normal and pathological individual differences shows that sight and sound are nevertheless markedly out of synch by different amounts for each individual and even for different tasks performed by the same individual. Indeed, the same individual can perceive the same multisensory event as having an auditory lead and an auditory lag at the same time, depending on the task. This evidence of apparent temporal disunity sheds new light on the deep problem of understanding how neural timing relates to perceptual timing of multisensory events. It also leads to concrete therapeutic applications: for example, we may now be able to improve an individual’s speech comprehension by simply delaying sound or vision to compensate for their individual perceptual asynchrony.
Journal of Experimental Psychology: Human Perception and Performance | 2018
Alberta Ipser; Maayan Karlinski; Elliot Freeman
Sight and sound are out of synch in different people by different amounts for different tasks. But surprisingly, different concurrent measures of perceptual asynchrony correlate negatively (Freeman et al., 2013). Thus, if vision subjectively leads audition in one individual, the same individual might show a visual lag in other measures of audiovisual integration (e.g., McGurk illusion, Stream-Bounce illusion). This curious negative correlation was first observed between explicit temporal order judgments and implicit phoneme identification tasks, performed concurrently as a dual task, using incongruent McGurk stimuli. Here we used a new set of explicit and implicit tasks and congruent stimuli, to test whether this negative correlation persists across testing sessions, and whether it might be an artifact of using specific incongruent stimuli. None of these manipulations eliminated the negative correlation between explicit and implicit measures. This supports the generalizability and validity of the phenomenon, and offers new theoretical insights into its explanation. Our previously proposed “temporal renormalization” theory assumes that the timings of sensory events registered within the brain’s different multimodal subnetworks are each perceived relative to a representation of the typical average timing of such events across the wider network. Our new data suggest that this representation is stable and generic, rather than dependent on specific stimuli or task contexts, and that it may be acquired through experience with a variety of simultaneous stimuli. Our results also add further evidence that speech comprehension may be improved in some individuals by artificially delaying voices relative to lip-movements.
Seeing and Perceiving | 2012
Alberta Ipser; Diana Paunoiu; Elliot Freeman
It has often been claimed that there is mutual dependence between the perceived synchrony of auditory and visual sources, and the extent to which they perceptually integrate (‘unity assumption’: Vroomen and Keetels, 2010; Welch and Warren, 1980). However, subjective audiovisual synchrony can vary widely between subjects (Stone, 2001) and between paradigms (van Eijk, 2008). Do such individual differences in subjective synchrony correlate positively with individual differences in optimal timing for integration, as expected under the unity assumption? In separate experiments we measured the optimal audiovisual asynchrony for the McGurk illusion (McGurk and MacDonald, 1976), and the stream-bounce illusion (Sekuler et al., 1997). We concurrently elicited either temporal order judgements (TOJ) or simultaneity judgements (SJ), in counterbalanced sessions, from which we derived the point of subjective simultaneity (PSS). For both experiments, the asynchrony for maximum illusion showed a significant positive correlation with PSS derived from SJ, following the unity assumption. But surprisingly, the analogous correlation with PSS derived from TOJ was significantly negative. The temporal mechanisms for this pairing of tasks seem neither unitary nor fully independent, but apparently antagonistic. A tentative temporal renormalisation mechanism explains these paradoxical results as follows: (1) subjective timing in our different tasks can depend on independent mechanisms subject to their own neural delays; (2) inter-modal synchronization is achieved by first discounting the mean neural delay within each modality; and (3) apparent antagonism between estimates of subjective timing emerges as the mean is attracted towards deviants in the unimodal temporal distribution.
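A toy simulation can make the tentative renormalisation account concrete: give each simulated observer independent neural delays for the mechanisms probed by TOJ and by the integration task, express each mechanism's subjective timing relative to the mean delay, and the two resulting estimates come out negatively correlated. Everything below is an illustrative assumption, not the authors' model code.

```python
# Toy simulation of the tentative temporal renormalisation account (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_observers = 1000

# Step 1: independent neural delays (ms) for the mechanism probed by TOJ
# and the mechanism probed by the integration task (e.g. McGurk).
delay_toj = rng.normal(0, 50, n_observers)
delay_integration = rng.normal(0, 50, n_observers)

# Step 2: synchronisation discounts the mean delay across mechanisms, so each
# mechanism's subjective timing is expressed relative to that mean.
mean_delay = (delay_toj + delay_integration) / 2
pss_toj = delay_toj - mean_delay                       # lag needed for subjective simultaneity
optimal_integration = delay_integration - mean_delay   # asynchrony maximising the illusion

# Step 3: because each estimate is the other's deviation from the shared mean,
# the two measures are pushed in opposite directions across observers.
r = np.corrcoef(pss_toj, optimal_integration)[0, 1]
print(f"Simulated correlation between TOJ PSS and optimal integration asynchrony: {r:.2f}")
# With only two mechanisms the correlation is exactly -1; noisier, multi-mechanism
# versions would yield the weaker negative correlations reported empirically.
```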
Seeing and Perceiving | 2012
Elliot Freeman; Alberta Ipser
Due to physical and neural delays, the sight and sound of a person speaking cause a cacophony of asynchronous events in the brain. How can we still perceive them as simultaneous? Our converging evidence suggests that actually, we do not. Patient PH, with midbrain and auditory brainstem lesions, experiences voices leading lip-movements by approximately 200 ms. In temporal order judgements (TOJ) he experiences simultaneity only when voices physically lag lips. In contrast, he requires the opposite visual lag (again of about 200 ms) to experience the classic McGurk illusion (e.g., hearing ‘da’ when listening to /ba/ and watching lips say [ga]), consistent with pathological auditory slowing. These delays seem to be specific to speech stimuli. Is PH just an anomaly? Surprisingly, neuro-typical individual differences between temporal tuning of McGurk integration and TOJ are actually negatively correlated. Thus some people require a small auditory lead for optimal McGurk integration but an auditory lag for subjective simultaneity (like PH but not as extreme), while others show the opposite pattern. Evidently, any individual can concurrently experience the same external events as happening at different times. These dissociative patterns confirm that distinct mechanisms for audiovisual synchronization versus integration are each subject to different neural delays. To explain the apparent repulsion of their respective timings, we propose that multimodal synchronization is achieved by discounting the average neural event time within each modality. Lesions or individual differences which slow the propagation of neural signals will then attract the average, so that relatively undelayed neural signals will be experienced as occurring relatively early.
Neuropsychologia | 2016
Alberta Ipser; Melanie Ring; Jennifer Murphy; Sebastian B. Gaigg; Richard J. Cook