
Publication


Featured research published by Valerie Hazan.


Journal of Phonetics | 2000

The development of phonemic categorization in children aged 6–12

Valerie Hazan; Sarah Barrett

The aim of this study was to assess the development of phonemic categorization across a range of phonemic contrasts: /g/-/k/, /d/-/g/, /s/-/z/ and /s/-/∫/. More specifically, we aimed to investigate the age at which children achieve adult-like competence, both in terms of their consistency in categorizing a range of phonemic contrasts, and in terms of their ability to categorize stimuli with limited acoustic cue information. Six-stimulus synthetic continua were created in which acoustic cues signaling these contrasts were manipulated singly or in combination. Stimuli were presented to 84 normally-hearing children aged between 6;0 and 12;6 years and 13 adult controls in the form of two-alternative forced-choice identification tests using an adaptive procedure. The gradients of the identification functions increased significantly in steepness between the ages of 6 and 12 but, by 12 years, children were still not, on average, categorizing the phonemic contrasts as consistently as adults. This study therefore provides further evidence that phoneme boundary sharpening occurs well into the second decade of life. Children were also less consistent than adults in categorizing continua containing limited acoustic cue information. Children aged 6–12 therefore appear to show less flexibility in their perceptual strategies than adults.
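
The steepness measure referred to above is commonly obtained by fitting a logistic function to identification responses at each continuum step. The sketch below is only an illustration of that general idea, with invented stimulus steps and response proportions; it is not the authors' analysis.

```python
# Illustrative sketch: estimating the steepness (gradient) of a phoneme
# identification function from two-alternative forced-choice data.
# The continuum steps and response proportions below are invented,
# not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of one response category as a function of continuum step."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Six-step synthetic continuum (e.g. /g/-/k/) with hypothetical /k/-response rates.
steps = np.arange(1, 7)
prop_k = np.array([0.02, 0.05, 0.20, 0.75, 0.95, 0.98])

(boundary, slope), _ = curve_fit(logistic, steps, prop_k, p0=[3.5, 1.0])
print(f"phoneme boundary near step {boundary:.2f}, gradient {slope:.2f}")
```

A steeper fitted slope corresponds to more consistent (more adult-like) categorization of the continuum.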


Speech Communication | 1996

The SUS test: a method for the assessment of text-to-speech synthesis intelligibility using semantically unpredictable sentences

Christian Benoît; Martine Grice; Valerie Hazan

This paper describes the experimental set-up used by the SAM (ESPRIT-BRA Project no. 2589: Multilingual Speech Input/Output: Assessment, Methodology and Standardisation) group for evaluating the intelligibility of text-to-speech systems at sentence level. The SUS test measures overall intelligibility of Semantically Unpredictable Sentences which can be automatically generated using five basic syntactic structures and a number of lexicons containing the most frequently occurring monosyllabic words in each language. The sentence material has the advantage of not being fixed, as words can be extracted from the lexicons randomly to form a new set of sentences each time the test is run. Various text-to-speech systems in a number of languages have been evaluated using this test. Results have demonstrated that the SUS test is effective and that it allows for reliable comparison across synthesisers provided guidelines are followed carefully regarding the definition of the test material and actual running of the test. These recommendations are the result of experience gained during the SAM project and beyond. They are presented here so as to provide users with a standardized evaluation method which is flexible and easy to use and is applicable to a number of different languages.
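
As an illustration of the generation procedure described above, the sketch below draws words at random from small per-category word lists into a single syntactic frame. The word lists and the frame are invented placeholders, not the SAM lexicons or the five structures actually used in the SUS test.

```python
# Illustrative sketch of Semantically Unpredictable Sentence generation:
# words are drawn at random from per-category lexicons into a syntactic frame.
# The word lists and the frame below are invented placeholders, not the
# SAM lexicons or the five SUS syntactic structures.
import random

lexicons = {
    "DET":  ["the"],
    "ADJ":  ["strong", "green", "late", "plain"],
    "NOUN": ["table", "ship", "voice", "law"],
    "VERB": ["draws", "hears", "ends", "moves"],
}

frame = ["DET", "ADJ", "NOUN", "VERB", "DET", "NOUN"]  # invented example frame

def generate_sus(frame, lexicons):
    """Fill each slot of the frame with a randomly chosen word of that category."""
    words = [random.choice(lexicons[slot]) for slot in frame]
    return " ".join(words).capitalize() + "."

for _ in range(3):
    print(generate_sus(frame, lexicons))
```

Because the slots are filled independently, the resulting sentences are syntactically well-formed but semantically unpredictable, so listeners cannot rely on sentence-level context when identifying the words.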


Journal of the Acoustical Society of America | 2005

Phonetic training with acoustic cue manipulations: A comparison of methods for teaching English /r/-/l/ to Japanese adults

Paul Iverson; Valerie Hazan; Kerry Bannister

Recent work [Iverson et al. (2003) Cognition, 87, B47-57] has suggested that Japanese adults have difficulty learning English /r/ and /l/ because they are overly sensitive to acoustic cues that are not reliable for /r/-/l/ categorization (e.g., F2 frequency). This study investigated whether cue weightings are altered by auditory training, and compared the effectiveness of different training techniques. Separate groups of subjects received High Variability Phonetic Training (natural words from multiple talkers), and 3 techniques in which the natural recordings were altered via signal processing (All Enhancement, with F3 contrast maximized and closure duration lengthened; Perceptual Fading, with F3 enhancement reduced during training; and Secondary Cue Variability, with variation in F2 and durations increased during training). The results demonstrated that all of the training techniques improved /r/-/l/ identification by Japanese listeners, but there were no differences between the techniques. Training also altered the use of secondary acoustic cues; listeners became biased to identify stimuli as English /l/ when the cues made them similar to the Japanese /r/ category, and reduced their use of secondary acoustic cues for stimuli that were dissimilar to Japanese /r/. The results suggest that both category assimilation and perceptual interference affect English /r/ and /l/ acquisition.


Speech Communication | 2005

Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English

Valerie Hazan; Anke Sennema; Midori Iba; Andrew Faulkner

This study investigates whether L2 learners can be trained to make better use of phonetic information from visual cues in their perception of a novel phonemic contrast. It also evaluates the impact of audiovisual perceptual training on the learners’ pronunciation of a novel contrast. The use of visual cues for speech perception was evaluated for two English phonemic contrasts: the /v/–/b/–/p/ labial/labiodental contrast and the /l/–/r/ contrast. In the first study, 39 Japanese learners of English were tested on their perception of the /v/–/b/–/p/ distinction in audio, visual and audiovisual modalities, and then undertook ten sessions of either auditory (‘A training’) or audiovisual (‘AV training’) perceptual training before being tested again. AV training was more effective than A training in improving the perception of the labial/labiodental contrast. In a second study, 62 Japanese learners of English were tested on their perception of the /l/–/r/ contrast in audio, visual and audiovisual modalities, and then undertook ten sessions of perceptual training with either auditory stimuli (‘A training’), natural audiovisual stimuli (‘AV Natural training’) or audiovisual stimuli with a synthetic face synchronized to natural speech (‘AV Synthetic training’). Perception of the /l/–/r/ contrast improved in all groups but learners trained audiovisually did not improve more than those trained auditorily. Auditory perception improved most for ‘A training’ learners and performance in the lipreading alone condition improved most for ‘AV Natural training’ learners. The learners’ pronunciation of /l/–/r/ improved significantly following perceptual training, and a greater improvement was obtained for the ‘AV Natural training’ group. This study shows that sensitivity to visual cues for non-native phonemic contrasts can be enhanced via audiovisual perceptual training. AV training is more effective than A training when the visual cues to the phonemic contrast are sufficiently salient. Seeing the facial gestures of the talker also leads to a greater improvement in pronunciation, even for contrasts with relatively low visual salience.


Language and Speech | 1993

PERCEPTION AND PRODUCTION OF A VOICING CONTRAST BY FRENCH-ENGLISH BILINGUALS

Valerie Hazan; Georges Boulakia

The use of spectral information at vowel onset, which constitutes a stronger cue to the voicing contrast in English than in French, was investigated in French-English bilinguals in order to determine whether the primary language in terms of early experience determines acoustic cue weighting. The /pεn/-/bεn/ minimal pair, meaningful in both languages, was used as a base for identification tests, which were presented with either an English or a French precursor word before each token. Two stimulus continua, formed of digitally-edited natural speech tokens, had an identical VOT range but varied in their [εn] stem. In their production of the contrast, bilinguals showed clear evidence of code-switching but did not always produce monolingual-like VOTs in their weaker language. In perception, the code-switching effect was significant but small. The bilingual group with English as primary early language showed a greater effect of vowel onset characteristics, in conflicting-cue conditions, than the bilingual group with French as their primary early language, and, on average, cue-weighting was not affected by the language of the precursor. An effect of language dominance on cue-weighting was therefore found.


Speech Communication | 1998

The effect of cue-enhancement on the intelligibility of nonsense word and sentence materials presented in noise

Valerie Hazan; Andrew J. R. Simpson

Two sets of experiments were performed to test the perceptual benefits of enhancing consonantal regions which contain a high density of acoustic cues to phonemic contrasts. In the first set, hand-annotated consonantal regions of natural vowel–consonant–vowel (VCV) stimuli were amplified to increase their salience, and filtered to stylise the cues they contained. In the second set, corresponding regions in natural semantically-unpredictable sentence (SUS) material were annotated and enhanced in the same way. Both sets of stimuli were combined with speech-shaped noise and presented to normally-hearing listeners. The VCV experiments showed statistically significant improvements in intelligibility as a result of enhancement; significant improvements were also obtained for sentence material after some adjustments in enhancement strategies and levels. These results demonstrate the benefits gained from enhancement techniques which use knowledge of acoustic cues to phonetic contrasts to improve the intelligibility of speech in the presence of background noise.
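
A rough picture of region-based enhancement is given by the sketch below, which simply applies a gain to annotated time intervals of a waveform and rescales the result. The waveform, region boundaries and gain value are invented, and this is not the processing actually used in the study.

```python
# Illustrative sketch of cue enhancement by amplifying annotated consonantal
# regions of a waveform. The waveform, region times and gain are invented
# placeholders, not the processing used in the study.
import numpy as np

def enhance_regions(signal, sample_rate, regions, gain_db=6.0):
    """Amplify the annotated (start, end) regions, given in seconds, by gain_db."""
    gain = 10.0 ** (gain_db / 20.0)
    out = signal.astype(np.float64).copy()
    for start, end in regions:
        i0, i1 = int(start * sample_rate), int(end * sample_rate)
        out[i0:i1] *= gain
    # Rescale so the peak level matches the original signal.
    out *= np.max(np.abs(signal)) / np.max(np.abs(out))
    return out

# Hypothetical VCV token at 16 kHz with one annotated consonantal region.
sr = 16000
t = np.arange(0, 0.6, 1.0 / sr)
vcv = 0.5 * np.sin(2 * np.pi * 150 * t)  # stand-in waveform, not real speech
enhanced = enhance_regions(vcv, sr, regions=[(0.25, 0.35)], gain_db=6.0)
```

Boosting only the annotated consonantal region increases the local salience of the cue-bearing portion of the signal while keeping the overall peak level comparable to the original token.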


Journal of the Acoustical Society of America | 2006

The use of visual cues in the perception of non-native consonant contrasts

Valerie Hazan; Anke Sennema; Andrew Faulkner; Marta Ortega-Llebaria; Midori Iba; Hyunsong Chung

This study assessed the extent to which second-language learners are sensitive to phonetic information contained in visual cues when identifying a non-native phonemic contrast. In experiment 1, Spanish and Japanese learners of English were tested on their perception of a labial/labiodental consonant contrast in audio (A), visual (V), and audio-visual (AV) modalities. Spanish students showed better performance overall, and much greater sensitivity to visual cues than Japanese students. Both learner groups achieved higher scores in the AV than in the A test condition, thus showing evidence of audio-visual benefit. Experiment 2 examined the perception of the less visually-salient /l/-/r/ contrast in Japanese and Korean learners of English. Korean learners obtained much higher scores in auditory and audio-visual conditions than in the visual condition, while Japanese learners generally performed poorly in both modalities. Neither group showed evidence of audio-visual benefit. These results show the impact of the language background of the learner and visual salience of the contrast on the use of visual cues for a non-native contrast. Significant correlations between scores in the auditory and visual conditions suggest that increasing auditory proficiency in identifying a non-native contrast is linked with an increasing proficiency in using visual cues to the contrast.


Attention Perception & Psychophysics | 1991

Individual variability in the perception of cues to place contrasts in initial stops

Valerie Hazan; Stuart Rosen

Synthetic continua of two minimal pairs, BAIT-DATE and DATE-GATE, closely modeled on natural utterances by a female speaker, were presented to a group of 16 listeners for identification in full-cue and reduced-cue conditions. Grouped results showed that categorization curves for full- and reduced-cue conditions differed significantly in both contrasts. However, an averaging of results obscures marked variability in labeling behavior. Some listeners showed large changes in categorization between the full- and reduced-cue conditions, whereas others showed relatively small or no changes. In a follow-up study, perception of the BAIT-DATE contrast was compared with the perception of a highly stylized BA-DA continuum. A smaller degree of intersubject and between-condition variability was found for these less complex synthetic stimuli. The amount of variability found in the labeling of speech contrasts may be dependent on cue salience, which will be determined by the speech pattern complexity of the stimuli and by the vowel environment.


Cognitive Neuropsychology | 1995

Phonemic processing problems in developmental phonological dyslexia

Jackie Masterson; Valerie Hazan; Lilani Wijayatilake

The phonemic discrimination of subjects with developmental dyslexia was investigated in the present study. Two adult developmental phonological dyslexics are first reported. Both were good at reading real words but had difficulty reading and spelling novel stimuli. Further testing revealed a perceptual discrimination problem that was restricted to a narrow range of phonemes in both subjects. In order to test the generality of this finding, 20 further developmental dyslexics were tested on their nonword reading skill and phonemic discrimination ability. There was a significant association between the two variables: subjects poor at phonemic discrimination were also very likely to be poor at nonword reading. It is suggested that phonemic discrimination problems at an early age may disrupt the normal acquisition of alphabetic processing skills for reading and spelling. Remedial implications of the findings are discussed.


Behavior Research Methods | 2011

DiapixUK: task materials for the elicitation of multiple spontaneous speech dialogs

Rachel Baker; Valerie Hazan

The renewed focus of attention on investigating spontaneous speech samples in speech and language research has increased the need for recordings of speech in interactive settings. The DiapixUK task is a new and extended set of picture materials based on the Diapix task by Van Engen et al. (Language and Speech, 53, 510–540, 2010), where two people are recorded while conversing to solve a ‘spot the difference’ task. The new task materials allow for multiple recordings of the same speaker pairs due to a larger set of picture pairs that have a number of tested features: equal difficulty across all 12 picture pairs, no learning effect of completing more than one picture task and balanced contributions from both speakers. The new materials also provide extra flexibility, making them useful in a wide range of research projects; they are multi-layered electronic images that can be adapted to suit different research needs. This article presents details of the development of the DiapixUK materials, along with data taken from a large corpus of spontaneous speech that are used to demonstrate its new features. Current and potential applications of the task are also discussed.

Collaboration


Dive into Valerie Hazan's collaborations.

Top Co-Authors

Outi Tuomainen, University College London
Sonia Granlund, University College London
Stuart Rosen, University College London
Andrew Faulkner, University College London
Duncan Markham, University College London
Rachel Baker, University College London
Jeesun Kim, University of Western Sydney