Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Fourcin is active.

Publication


Featured research published by Adrian Fourcin.


Brain | 2010

Intonation processing in congenital amusia: discrimination, identification and imitation

Fang Liu; Aniruddh D. Patel; Adrian Fourcin; Lauren Stewart

This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on discrimination, identification and imitation of statements and questions that were characterized primarily by pitch direction differences in the final word. This intonation-processing deficit in amusia was largely associated with a psychophysical pitch direction discrimination deficit. These findings suggest that amusia impacts upon one's language abilities in subtle ways, and support previous evidence that pitch processing in language and music involves shared mechanisms.


Journal of the Acoustical Society of America | 1977

Cross‐language study of speech‐pattern learning

Claude Simon; Adrian Fourcin

In order to investigate the nature of some processes in speech acquisition, synthetic speechlike stimuli were played to groups of English and French children between two and fourteen years of age. The acoustic parameters varied were voice onset time and first-formant transition. Three stages were observed in the development of children's labeling behavior. These were called scattered labeling, progressive labeling, and categorical labeling, respectively. Individual response patterns were examined. The first stage (scattered labeling) includes mostly children of two to three years of age for the English and up to about four for the French. Children label most confidently those stimuli closest in physical terms to those of their natural speech environment. All stimuli with intermediate VOT values are labeled quasi-randomly. Progressive labeling behavior is found mostly amongst children aged three and four for the English, up to about seven for the French. Children's response curves go progressively, almost linearly, from one type of label (voiced) to the other (voiceless): response follows stimulus continuum. Categorical labeling becomes the dominant pattern only at the age of five to six for the English, one or two years later for the French. This development was found to be highly significant (p < 0.003 for both English and French, using Kendall's tau measure of correlation). English children learn to make use of the F1 transition feature around five years, whereas French children never use it as a voicing cue. French children will have fewer features than English children at their disposal: this may account for the later age at which French children, as a group, reach the various labeling behavior stages, and for labeling curves being less sharply categorical for French than for English children.
These findings indicate that categorical labeling for speech sounds is not innate but learned through a relatively slow process which is to a certain extent language specific. The implications of the results are discussed in the light of previous work in the field.
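The abstract above reports the significance of the age trend with Kendall's tau, a rank-correlation statistic. As an illustrative sketch only (not taken from the paper, which does not specify its tau variant or data coding), the basic tau-a form of the statistic can be computed on hypothetical age/labeling-stage data like so:

```python
# Illustrative sketch: Kendall's tau-a rank correlation, the statistic the
# abstract cites for the age vs. labeling-stage trend. The ages and the
# stage coding below are hypothetical, not data from the paper.

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total number of pairs.

    No tie correction is applied; this is adequate for a quick check of a
    monotonic trend between two rankable variables.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical coding: stage 1 = scattered, 2 = progressive, 3 = categorical
ages = [2, 3, 4, 5, 6, 7, 8]
stages = [1, 1, 2, 2, 3, 3, 3]
print(kendall_tau_a(ages, stages))  # positive tau: stage rises with age
```

A positive tau near 1 indicates that labeling stage increases almost monotonically with age, which is the kind of trend the study reports.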


Language and Speech | 1978

Intonation and speaker identification.

Evelyn Abberton; Adrian Fourcin

The work described investigated the ability of listeners to identify familiar speakers solely on the basis of suprasegmental laryngeal information. The results of perceptual experiments using both natural stimuli and synthetic stimuli with manipulations in the time and frequency domains show that mean fundamental frequency and fundamental-frequency contour shape provide important speaker-identifying information for an age-, sex- and accent-matched group even in the absence of all supraglottal features. The investigation is set in the context of the normalisation level of speech perception.


International Journal of Language & Communication Disorders | 1995

Laryngograph: speech pattern element tools for therapy, training and assessment

Adrian Fourcin; Evelyn Abberton; David Miller; David Howells

The clinically based, real-time analysis of speech into physically definable elements, which are of direct perceptual and productive importance, has become more readily possible in recent years as the result of microprocessor developments. The combination of the acoustic signal of speech derived from a microphone together with the accompanying Laryngograph signal provides the basis for a highly reliable set of facilities. The paper describes methods and results for the analysis of voice, frication and timbre for both quantitative analysis and teaching and therapy using interactive visual displays. A brief discussion is given of links to work in stroboscopy, electropalatography and the associated use of additional sensors. Finally, reference is made to a complete clinical work station combining these different facilities together with the quantitative analytical procedures of the speech pattern audiometer.


Journal of the Acoustical Society of America | 1992

Speech pattern hearing aids for the profoundly hearing impaired: Speech perception and auditory abilities

Andrew Faulkner; Virginia Ball; Stuart Rosen; Brian C. J. Moore; Adrian Fourcin

A family of prototype speech pattern hearing aids for the profoundly hearing impaired has been compared to amplification. These aids are designed to extract acoustic speech patterns that convey essential phonetic contrasts, and to match this information to residual receptive abilities. In the first study, the presentation of voice fundamental frequency information from a wearable SiVo (sinusoidal voice) aid was compared to amplification in 11 profoundly deafened adults. Intonation reception was often better, and never worse, with fundamental frequency information. Four subjects scored more highly in audio-visual consonant identification with fundamental frequency information, five performed better with amplified speech, and two performed similarly under these two conditions. Five of the 11 subjects continued use of the SiVo aid after the tests were complete. A second study examined a laboratory prototype compound speech pattern aid, which encoded voice fundamental frequency, amplitude envelope, and the presence of voiceless excitation. In five profoundly deafened adults, performance was better in consonant identification when additional speech patterns were present than with fundamental frequency alone; the main advantage was derived from amplitude information. In both consonant identification and connected discourse tracking, performance with appropriately matched compound speech pattern signals was better than with amplified speech in three subjects, and similar to performance with amplified speech in the other two. In nine subjects, frequency discrimination, gap detection, and frequency selectivity were measured, and were compared to speech receptive abilities with both amplification and fundamental frequency presentation. The subjects who showed the greatest advantage from fundamental frequency presentation showed the greatest average hearing losses, and the least degree of frequency selectivity. 
Compound speech pattern aids appear to be more effective for some profoundly hearing-impaired listeners than conventional amplifying aids, and may be a valuable alternative to cochlear implants.


Logopedics Phoniatrics Vocology | 2008

Hearing and phonetic criteria in voice measurement: Clinical applications

Adrian Fourcin; Evelyn Abberton

Quantitative clinical voice analysis is discussed with special reference to four factors: 1) measurement criteria that are based on well established auditory parameters; 2) voice material that is modelled on the connected speech of ordinary spoken communication rather than sustained vowels; 3) direct monitoring so as to provide both acoustic and vocal fold contact signals; and 4) phonetic structural similarities across what are ordinarily regarded as highly dissimilar languages. These factors have motivated the development and clinical application of physical analyses that provide measurements related both to vocal fold function and to the perceptual attributes of pitch, loudness, and an important aspect of voice quality.


international conference on spoken language processing | 1996

BABEL: an Eastern European multi-language database

Peter Roach; Simon Arnfield; William J. Barry; J. Baltova; Marian Boldea; Adrian Fourcin; W. Gonet; Ryszard Gubrynowicz; E. Hallum; Lori Lamel; Krzysztof Marasek; Alain Marchal; Einar Meister; Klára Vicsi

BABEL is a joint European project under the COPERNICUS scheme (Project 1304), comprising partners from five Eastern European countries and three Western ones. The project is producing a multi-language database of five of the most widely-differing Eastern European languages (Bulgarian, Estonian, Hungarian, Romanian and Polish). The collection and formatting of the data conforms to the protocols established by the ESPRIT SAM project and the resulting EUROM databases.


Hearing Research | 1987

Structural effects of short term and chronic extracochlear electrical stimulation on the guinea pig spiral organ.

H.C. Dodson; J.R. Walliker; L.H. Bannister; E.E. Douek; Adrian Fourcin

To assess the effects of extracochlear electrical stimulation on cochlear structure, guinea pigs were implanted and stimulated with single middle ear electrodes either at round window or promontory sites, and their cochleae examined by transmission electron microscopy. Implanted but unstimulated, or unimplanted control animals were examined in the same way. Alternating current stimulation at the promontory for 2 h at 150 Hz, 500 microA, caused outer hair cell efferent endings to become dense and vacuolated, but no hair cells were damaged. With direct current stimulation at 500 microA for 2 h the basal regions of the stimulated cochlea were badly damaged and many outer hair cells lysed. Long term (up to 1200 h) round window stimulation at 100 or 141 Hz, 15-91 microA rms, did not cause cell death or inner hair cell damage, but basal outer hair cells and their efferent endings were badly affected in both ipsilateral and contralateral cochleae. The compound action potential of the auditory evoked response to broad band click stimuli was not altered by chronic electrical stimulation. It is concluded that chronic stimulation with the parameters used does not threaten cochlear survival, and it is proposed that the bilateral structural changes induced by chronic stimulation are caused by excessive activation of the cochlear efferent pathways.


Annals of the New York Academy of Sciences | 1983

Speech perception with promontory stimulation.

Adrian Fourcin; E.E. Douek; Brian C. J. Moore; S. Rosen; J.R. Walliker; David M. Howard; Evelyn Abberton; S. L. Frampton

The present work of the EPI group is characterized by a combination of four features. First, our patients are essentially all adults who have had largely normal speech communication ability prior to total bilateral loss of hearing. Second, the electrical stimulation that we use to give them a new sensation of hearing is always applied externally at the round window or on the cochlear promontory by means of a single active electrode. Third, the information that is transmitted along this single channel is designed to give maximum assistance to the deaf lipreader and is organized in terms of speech pattern components rather than the whole speech signal. These speech pattern components can be used separately or in combination, and are transformed to match the patient's electrical hearing. Finally, our program of rehabilitation involves work with the patient's speaking ability as well as the new sensation of hearing, and quantitative measures of both speech and hearing are made routinely.


International Journal of Language & Communication Disorders | 1972

Laryngographic Analysis and Intonation

Evelyn Abberton; Adrian Fourcin

Intonation and voice quality are studied for a variety of reasons by workers in a wide range of often overlapping disciplines. Psychologists and psychiatrists may use these features to obtain information about the personality and psychological state of patients. See, for example, Pittenger (1957), McQuown (1957) and Hockett et al. (1960). Physiologists, neurologists and clinicians, among others, are concerned to establish the mechanisms involved in phonation and pitch change (see Harris, Sawashima). Linguists and phoneticians analyse the systematic use of intonation and voice quality in language - their grammatical, semantic and social roles - and seek to establish their perceptual and physiological correlates. Crystal (1969), Halliday (1967), Fry (1958), Chan (1971), Fourcin (1968).

Collaboration


Dive into Adrian Fourcin's collaborations.

Top Co-Authors

Evelyn Abberton, University College London
Andrew Faulkner, University College London
S. Rosen, University College London
Lori Lamel, Centre national de la recherche scientifique
Valerie Hazan, University College London