Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Virginie Attina is active.

Publication


Featured research published by Virginie Attina.


IEEE Transactions on Biomedical Engineering | 2009

xDAWN Algorithm to Enhance Evoked Potentials: Application to Brain–Computer Interface

Bertrand Rivet; Antoine Souloumiac; Virginie Attina; Guillaume Gibert

A brain-computer interface (BCI) is a communication system that allows a user to control a computer or any other device by means of brain activity alone. The BCI described in this paper is based on the P300 speller paradigm introduced by Farwell and Donchin. An unsupervised algorithm is proposed to enhance P300 evoked potentials by estimating spatial filters; the raw EEG signals are then projected into the estimated signal subspace. Data recorded from three subjects were used to evaluate the proposed method. The results, obtained with a Bayesian linear discriminant analysis classifier, show that the proposed method is efficient and accurate.
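At its core, xDAWN chooses spatial filters that maximize the ratio of evoked-signal power to total signal power, which reduces to a generalized eigenvalue problem. Below is a minimal NumPy/SciPy sketch of that idea on synthetic data; the array sizes, onset layout, and variable names are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the xDAWN idea on synthetic data (illustrative only;
# not the authors' reference implementation).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_times, n_chans, n_resp = 5000, 8, 100   # samples, electrodes, response length
onsets = rng.choice(n_times - n_resp, size=40, replace=False)

# Toeplitz-style design matrix D: D[t0 + k, k] = 1 for every stimulus onset t0.
D = np.zeros((n_times, n_resp))
for t0 in onsets:
    D[t0:t0 + n_resp] += np.eye(n_resp)

# Synthetic P300-like response mixed into noisy multichannel EEG.
a_true = np.outer(np.hanning(n_resp), rng.standard_normal(n_chans))
X = D @ a_true + 0.5 * rng.standard_normal((n_times, n_chans))

# Least-squares estimate of the evoked response, then spatial filters that
# maximize evoked-signal power relative to total power (generalized
# eigenvalue problem; eigh returns eigenvalues in ascending order).
A_hat = np.linalg.lstsq(D, X, rcond=None)[0]
evals, U = eigh(A_hat.T @ D.T @ D @ A_hat, X.T @ X)

# Project the raw EEG onto the strongest filters: the estimated signal subspace.
X_enhanced = X @ U[:, -2:]
```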


Applied Psycholinguistics | 2015

Universality and language-specific experience in the perception of lexical tone and pitch

Denis Burnham; Benjawan Kasisopa; Amanda Reid; Sudaporn Luksaneeyanawin; Francisco Lacerda; Virginie Attina; Nan Xu Rattanasone; Iris-Corinna Schwarz; Diane Webster

Two experiments focus on Thai tone perception by native speakers of tone languages (Thai, Cantonese, and Mandarin), of a pitch–accent language (Swedish), and of a nontonal language (English). In Experiment 1, there was better auditory-only and auditory–visual discrimination by tone and pitch–accent language speakers than by nontone language speakers. Conversely and counterintuitively, there was better visual-only discrimination by nontone language speakers than by tone and pitch–accent language speakers. Nevertheless, visual augmentation of auditory tone perception in noise was evident for all five language groups. In Experiment 2, involving discrimination in three fundamental-frequency-equivalent auditory contexts, tone and pitch–accent language participants showed equivalent discrimination for normal Thai speech, filtered speech, and violin sounds. In contrast, nontone language listeners discriminated violin sounds significantly better than filtered speech, and filtered speech in turn better than normal speech. Together, the results show that tone perception is determined by both auditory and visual information, by acoustic and linguistic contexts, and by universal and experiential factors.


Attention Perception & Psychophysics | 2015

Perceptual assimilation of lexical tone: The roles of language experience and visual information

Amanda Reid; Denis Burnham; Benjawan Kasisopa; Ronan G. Reilly; Virginie Attina; Nan Xu Rattanasone; Catherine T. Best

Using Best’s (1995) perceptual assimilation model (PAM), we investigated auditory–visual (AV), auditory-only (AO), and visual-only (VO) perception of Thai tones. Mandarin and Cantonese (tone-language) speakers were asked to categorize Thai tones according to their own native tone categories, and Australian English (non-tone-language) speakers to categorize Thai tones into their native intonation categories—for instance, question or statement. As comparisons, Thai participants completed a straightforward identification task, and another Australian English group identified the Thai tones using simple symbols. All of the groups also completed an AX discrimination task. Both the Mandarin and Cantonese groups categorized AO and AV Thai falling tones as their native level tones, and Thai rising tones as their native rising tones, although the Mandarin participants found it easier to categorize Thai level tones than did the Cantonese participants. VO information led to very poor categorization for all groups, and AO and AV information also led to very poor categorizations for the English intonation categorization group. PAM’s predictions regarding tone discriminability based on these category assimilation patterns were borne out for the Mandarin group’s AO and AV discriminations, providing support for the applicability of the PAM to lexical tones. For the Cantonese group, however, PAM was unable to account for one specific discrimination pattern—namely, their relatively good performance on the Thai high–rising contrast in the auditory conditions—and no predictions could be derived for the English groups. A full account of tone assimilation will likely need to incorporate considerations of phonetic, and even acoustic, similarity and overlap between nonnative and native tone categories.


Journal of the Acoustical Society of America | 2010

Speech articulator movements recorded from facing talkers using two electromagnetic articulometer systems simultaneously

Mark Tiede; Rikke L. Bundgaard-Nielsen; Christian Kroos; Guillaume Gibert; Virginie Attina; Benjawan Kasisopa; Eric Vatikiotis-Bateson; Catherine T. Best

Two 3-D electromagnetic articulometer systems, the Carstens AG500 and Northern Digital WAVE, have been used simultaneously without mutual interference to record the speech articulator movements of two talkers facing one another 2 m apart. A series of benchmark tests evaluating the stability of fixed distances between sensors attached to a rotating rigid body was first conducted to determine whether the two systems could operate independently, with results showing no significant effect of dual operation on either system. In the experiment proper, two native speakers of American English participated as subjects. Sensors were glued to three points on the tongue, the upper and lower incisors, the lips, and the left and right mastoid processes for each subject. Independent audio tracks were recorded using separate directional microphones and were used to align the kinematic data from both subjects during post-processing. Data collected were of two types: extended spontaneous conversation and repeated incongruent word sequences (e.g., talker one produced "cop top..."; talker two "top cop..."). Both talkers showed strong positive correlations between speech rate (in syllables/s) and head movement. The word sequences also showed error and rate effects related to mutual entrainment. [Work supported by ARC Human Communication Science Network (RN0460284), MARCS Auditory Laboratories, NIH.]
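As a concrete illustration of the audio-based alignment step, the sketch below estimates the lag between two microphone tracks by cross-correlation; the same offset (rescaled to the articulometer's sample rate) could then shift one talker's kinematic timestamps onto the other's. The sample rate and signals here are assumptions for illustration, not the authors' actual post-processing pipeline.

```python
# Hedged sketch: align two independently recorded streams by
# cross-correlating their audio tracks. All signals are synthetic.
import numpy as np

fs = 16000                                   # audio sample rate (assumed)
rng = np.random.default_rng(1)
shared = rng.standard_normal(fs * 2)         # a sound picked up by both mics

true_lag = 1234                              # samples by which mic B trails mic A
mic_a = np.concatenate([shared, np.zeros(true_lag)])
mic_b = np.concatenate([np.zeros(true_lag), shared])

# Full cross-correlation; the location of the peak gives the relative delay.
xcorr = np.correlate(mic_a, mic_b, mode="full")
lag = (len(mic_b) - 1) - np.argmax(xcorr)
print(f"estimated lag: {lag} samples ({1000 * lag / fs:.1f} ms)")
```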


2015 International Conference on Advances in Biomedical Engineering (ICABME) | 2015

Objective measures in cochlear implanted patients: A computational framework to evaluate artifact rejection methodologies

Faten Mina; Virginie Attina; Evelyne Veuillet; Eric Truy; Yvan Duroc; Hung Thai-Van

Auditory steady-state responses (ASSRs) constitute a reliable measure of auditory perception in normal-hearing subjects. Their use in cochlear-implanted patients is hindered by the widespread diffusion of the electrical cochlear stimulation artifact, which heavily contaminates EEG scalp recordings. Attenuating, or better still suppressing, this artifact before response detection is therefore crucial. Yet the denoising algorithms currently in use can have unpredictable effects on the ASSRs themselves. In this paper, we propose a computational framework for simulating the mixture of the stimulation artifact and the corresponding evoked ASSRs on EEG scalp electrodes. The utility of this relatively simple model lies in quantifying the effect of any given denoising method on the information contained in the signal of interest, since the responses are known a priori. An application to two independent component analysis algorithms (infomax and extended infomax) is presented; the model predicts better performance for infomax than for extended infomax.
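To make the framework concrete, here is a toy version of the same idea: simulate a known ASSR plus a dominant stimulation artifact, mix them linearly onto scalp channels, denoise with ICA, and score how much of the known response survives. FastICA from scikit-learn stands in for the infomax variants evaluated in the paper, and all signal parameters are illustrative assumptions.

```python
# Toy artifact-rejection benchmark: ground-truth sources are known,
# so the effect of denoising on the response can be quantified.
import numpy as np
from sklearn.decomposition import FastICA

fs, dur, n_chans = 1000, 2.0, 8
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

assr = np.sin(2 * np.pi * 40 * t)                    # 40 Hz steady-state response
artifact = 20 * np.sign(np.sin(2 * np.pi * 90 * t))  # pulsatile artifact (toy rate)

# Linear mixture on the scalp: each channel sees both sources plus sensor noise.
mixing = rng.standard_normal((n_chans, 2))
eeg = np.outer(mixing[:, 0], assr) + np.outer(mixing[:, 1], artifact)
eeg += 0.1 * rng.standard_normal(eeg.shape)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(eeg.T)                   # shape: (samples, components)

# Identify the artifact component by correlation with the known artifact
# (possible here precisely because the ground truth is simulated).
corr = [abs(np.corrcoef(s, artifact)[0, 1]) for s in sources.T]
art_idx = int(np.argmax(corr))

sources[:, art_idx] = 0                              # suppress the artifact component
cleaned = ica.inverse_transform(sources).T

# Score denoising: how well the known ASSR survives on the first channel.
print(f"correlation with true ASSR: {abs(np.corrcoef(cleaned[0], assr)[0, 1]):.2f}")
```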


Journal of Psycholinguistic Research | 2014

Cross-Language Perception of Cantonese Vowels Spoken by Native and Non-native Speakers

Connie K. So; Virginie Attina

This study examined the effect of native language background on listeners' perception of native and non-native vowels spoken by native (Hong Kong Cantonese) and non-native (Mandarin and Australian English) speakers. Listeners completed discrimination and identification tasks, with and without visual cues, in clear and noisy conditions. Results indicated that visual cues did not facilitate perception, and that performance was better in clear than in noisy conditions. More importantly, the Cantonese talker's vowels were the easiest to discriminate, and the Mandarin talker's vowels were as intelligible as the native talker's speech. These results support the interlanguage speech native intelligibility benefit patterns proposed by Hayes-Harb et al. (J Phonetics 36:664–679, 2008). The Mandarin and English listeners' identification patterns were similar to those of the Cantonese listeners, suggesting that they might have assimilated Cantonese vowels to their closest native vowels. In addition, listeners' perceptual patterns were consistent with the principles of Best's Perceptual Assimilation Model (Best in Speech perception and linguistic experience: issues in cross-language research. York Press, Timonium, 1995).


Archive | 2012

Temporal organization of Cued Speech production

Denis Beautemps; Marie-Agnès Cathiard; Virginie Attina; Christophe Savariaux

Speech communication is multi-modal by nature. It is well known that hearing people use both auditory and visual information for speech perception (Reisberg, McLean et al. 1987). For deaf people, visual speech constitutes the main speech modality. Listeners with hearing loss who have been orally educated typically rely on speech-reading based on lip and facial visual information. However, because different speech units can share very similar visual lip shapes, lip-reading alone is not sufficient: even the best speech-readers identify no more than 50 percent of phonemes in nonsense syllables (Owens and Blazek 1985) or in words or sentences (Bernstein, Demorest et al. 2000). This chapter deals with Cued Speech, a manual augmentation of the visual information available for lip-reading. Our interest in this method was motivated by its effectiveness in giving deaf people, from as early as one month of age, access to complete phonological representations of speech and to language, and eventually reading and writing performance similar to that of hearing people. Finally, given the current high level of development of cochlear implants, this method also helps facilitate access to the auditory modality. A large amount of work has been devoted to the effectiveness of Cued Speech, but none has investigated the motor organisation of Cued Speech production, i.e. the coarticulation of Cued Speech articulators. Why might the production of an artificial system devised as long ago as 1967 be of interest? Apart from the clear evidence that such a coding system helps in acquiring another artificial system such as reading, Cued Speech provides a unique opportunity to study lip-hand coordination at the syllable level. This contribution presents a study of the temporal organisation of the manual cue in relation to the movement of the lips and the acoustic indices of the corresponding speech sound, in order to characterise the nature of the syllabic structure of Cued Speech with reference to speech coarticulation.


Speech Communication | 2004

A pilot study of temporal organization in Cued Speech production of French syllables: rules for a Cued Speech synthesizer

Virginie Attina; Denis Beautemps; Marie-Agnès Cathiard; Matthias Odisio


European Signal Processing Conference | 2008

“P300 speller” Brain-Computer Interface: Enhancement of P300 evoked potential by spatial filters

Bertrand Rivet; Antoine Souloumiac; Guillaume Gibert; Virginie Attina


Proceedings of the International Conference on Audio-Visual Speech Processing (AVSP2011), Aug 31 - Sep 3, 2011, Volterra, Italy | 2011

Auditory-visual discrimination and identification of lexical tone within and across tone languages

Denis Burnham; Virginie Attina; Benjawan Kasisopa

Collaboration


Dive into Virginie Attina's collaborations.

Top Co-Authors

Benjawan Kasisopa
University of Western Sydney

Denis Beautemps
Centre national de la recherche scientifique

Catherine T. Best
University of Western Sydney

Eric Vatikiotis-Bateson
University of British Columbia