Publication


Featured research published by Satsuki Nakai.


Infant Behavior & Development | 2013

The influence of babbling patterns on the processing of speech.

Rory A. DePaolis; Marilyn Vihman; Satsuki Nakai

This study compared the preference of 27 British English- and 26 Welsh-learning infants for nonwords featuring consonants that occur with equal frequency in the input but that are produced either with equal frequency (Welsh) or with differing frequency (British English) in infant vocalizations. For the English infants a significant difference in looking times was related to the extent of production of the nonword consonants. The Welsh infants, who showed no production preference for either consonant, exhibited no such influence of production patterns on their response to the nonwords. The results are consistent with a previous study that suggested that pre-linguistic babbling helps shape the processing of input speech, serving as an articulatory filter that selectively makes production patterns more salient in the input.


Journal of Phonetics | 2009

Utterance-final lengthening and quantity in Northern Finnish

Satsuki Nakai; Sari Kunnari; Alice Turk; Kari Suomi; Riikka Ylitalo

Utterance-final lengthening in Northern Finnish was investigated using tightly controlled laboratory materials, with particular focus on its interaction with the language's single (short) vs. double (long) vowel distinction. Like many other languages, Finnish exhibited utterance-final lengthening, although estimates of the magnitude of lengthening on final vowels varied greatly depending on the treatment of the utterance-final breathy/voiceless portion of the vowel. As has also been shown for other languages, the lengthening occurred as early as the stressed, penultimate syllable of disyllabic words and was generally progressive. Crucially, vowel quantity interacted with the lengthening in a manner consistent with the hypothesis that Finnish regulates utterance-final lengthening to preserve its quantity system. Specifically, the voiced portion (the portion that is relevant to the perception of vowel quantity) of the longest single vowel (the half-long vowel) was restricted. Additionally, double vowels were lengthened less when the vowel in an adjacent syllable was also double, suggesting syntagmatic constraints. Our results support the view that utterance-final lengthening is a universal tendency but is implemented in language-specific ways and must be learned.


Journal of Phonetics | 2012

Quantity constraints on the temporal implementation of phrasal prosody in Northern Finnish

Satsuki Nakai; Alice Turk; Kari Suomi; Sonia Granlund; Riikka Ylitalo; Sari Kunnari

This study investigated interactions between vowel quantity and two types of prosodic lengthening (accentual lengthening and the combined effect of accentual and utterance-final lengthening) in disyllabic words in Northern Finnish. Two quantity-related constraints were observed. First, in both types of prosodic lengthening, vowels were lengthened less when they were next to a syllable containing a double vowel than when they were next to a syllable containing a single vowel (a quantity neighbour constraint). Second, a durational ceiling effect was observed for the phonologically single, half-long vowel under the combined effect of accentual and utterance-final lengthening. These findings can be seen to support the view that quantity languages regulate the non-phonemic use of duration because of the high functional load of duration at the phonemic level. Additionally, the combined effect of accentual and utterance-final lengthening appeared to have its own lengthening profile, distinct from the simple sum of the two lengthening effects suggested previously. Implications for speech timing research will be discussed.


Journal of Phonetics | 2013

Recording speech articulation in dialogue: Evaluating a synchronized double Electromagnetic Articulography setup

Christian Geng; Alice Turk; James M. Scobbie; Cedric Macmartin; Philip Hoole; Korin Richmond; Alan Wrench; Marianne Pouplier; Ellen Gurman Bard; Ziggy Campbell; Catherine Dickie; Eddie Dubourg; William J. Hardcastle; Evia Kainada; Simon King; Robin J. Lickley; Satsuki Nakai; Steve Renals; Kevin White; Ronny Wiegand

We demonstrate the workability of an experimental facility that is geared towards the acquisition of articulatory data from a variety of speech styles common in language use, by means of two synchronized electromagnetic articulography (EMA) devices. This approach synthesizes the advantages of real dialogue settings for speech research with a detailed description of the physiological reality of speech production. We describe the facility's method for acquiring synchronized audio streams of two speakers and the system that enables communication among control room technicians, experimenters and participants. Further, we demonstrate the feasibility of the approach by evaluating problems inherent to this specific setup: the first problem is the accuracy of temporal synchronization of the two EMA machines, the second is the severity of electromagnetic interference between the two machines. Our results suggest that the synchronization method used yields an accuracy of approximately 1 ms. Electromagnetic interference was derived from the complex-valued signal amplitudes. This dependent variable was analyzed as a function of the recording status – i.e. on/off – of the interfering machine's transmitters. The intermachine distance was varied between 1 m and 8.5 m. Results suggest that a distance of approximately 6.5 m is appropriate to achieve data quality comparable to that of single speaker recordings.


Second Language Research | 2015

A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition.

Satsuki Nakai; Shane Lindsay; Mitsuhiko Ota

When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), spoken L2 words containing a member of that contrast can spuriously activate other L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation of kettle, as L1 Dutch speakers perceptually map the vowels in the two English words to a single vowel phoneme in their L1. In an auditory word-learning experiment using Greek and Japanese speakers of English, we asked whether such cross-lexical activation in L2 spoken-word recognition necessarily involves inaccurate perception by the L2 listeners, or can also arise from interference from L1 phonology at an abstract level, independent of the listeners' phonetic processing abilities. Results suggest that spurious activation of L2 words containing L2-specific contrasts in spoken-word recognition is contingent on the L2 listeners' inadequate phonetic processing abilities.


Journal of the Acoustical Society of America | 2011

Separability of prosodic phrase boundary and phonemic information.

Satsuki Nakai; Alice Turk

It was hypothesized that the retrieval of prosodic and phonemic information from the acoustic signal is facilitated when prosodic information is encoded by co-occurring suprasegmental cues. To test the hypothesis, two-choice speeded classification experiments were conducted, which examined processing interaction between prosodic phrase-boundary vs stop-place information in speakers of Southern British English. Results confirmed that the degree of interaction between boundary and stop-place information diminished when the pre-boundary vowel was signaled by duration and F(0), compared to when it was signaled by either duration or F(0) alone. It is argued that the relative ease of retrieval of prosodic and phonemic information arose from advantages of prosodic cue integration.


Innovation in Language Learning and Teaching | 2018

Viewing speech in action: speech articulation videos in the public domain that demonstrate the sounds of the International Phonetic Alphabet (IPA)

Satsuki Nakai; David Beavan; Eleanor Lawson; Grégory Leplâtre; James M. Scobbie; Jane Stuart-Smith

In this article, we introduce recently released, publicly available resources, which allow users to watch videos of hidden articulators (e.g. the tongue) during the production of various types of sounds found in the world's languages. The articulation videos on these resources are linked to a clickable International Phonetic Alphabet chart ([International Phonetic Association. 1999. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge: Cambridge University Press]), so that the user can study the articulations of different types of speech sounds systematically. We discuss the utility of these resources for teaching the pronunciation of contrastive sounds in a foreign language that are absent in the learner's native language.


Journal of the Acoustical Society of America | 2010

The Edinburgh Speech Production Facility’s articulatory corpus of spontaneous dialogue.

Alice Turk; James M. Scobbie; Christian Geng; Cedric Macmartin; Ellen Gurman Bard; Barry Campbell; Catherine Dickie; Eddie Dubourg; Bill Hardcastle; Phil Hoole; Evia Kanaida; Robin J. Lickley; Satsuki Nakai; Marianne Pouplier; Simon King; Stephen Renals; Korin Richmond; Sonja Schaeffler; Ronnie Wiegand; Kevin White; Alan Wrench

The EPSRC‐funded Edinburgh Speech Production Facility is built around two synchronized Carstens AG500 electromagnetic articulographs (EMAs) in order to capture articulatory/acoustic data from spontaneous dialogue. An initial articulatory corpus was designed with two aims. The first was to elicit a range of speech styles/registers from speakers, and therefore provide an alternative to fully scripted corpora. The second was to extend the corpus beyond monologue, by using tasks that promote natural discourse and interaction. A subsidiary driver was to use dialects from outwith North America: dialogues paired up a Scottish English and a Southern British English speaker. Tasks. Monologue: Story reading of “Comma Gets a Cure” [Honorof et al. (2000)], lexical sets [Wells (1982)], spontaneous story telling, diadochokinetic tasks. Dialogue: Map tasks [Anderson et al. (1991)], “Spot the Difference” picture tasks [Bradlow et al. (2007)], story‐recall. Shadowing of the spontaneous story telling by the second participant. Each...


Journal of the Acoustical Society of America | 1998

The effect of vowel prototypicality and extremity on discrimination sensitivity

Satsuki Nakai

The present study examines the effect of vowel extremity and prototypicality on discrimination sensitivity. Kuhl [1991] has shown that listeners are not as capable of distinguishing other variations of /i/ from a prototypical /i/ (the stimulus with the highest goodness rating, henceforth P) as from a nonprototypical /i/ (a stimulus with a low goodness rating). However, considering evidence suggesting that discrimination sensitivity grows poorer towards the exterior of the vowel space [Schouten and van Hessen, 1992], it is unclear whether it is vowel prototypicality or extremity that correlates with low discrimination sensitivity, for P is the most extreme among the stimuli used in the discrimination task in Kuhl [1991]. In order to investigate the above issue, discrimination curves were obtained from native speakers of Japanese and Greek in the corner of the vowel space where Japanese /u/ and Greek /u/ are located. As Japanese /u/ is not cardinal, the effect of vowel prototypicality and extremity on discr...


Infant and Child Development | 2004

Distinguishing novelty and familiarity effects in infant preference procedures

Carmel Houston-Price; Satsuki Nakai

Collaboration


Satsuki Nakai's top co-authors.

Alice Turk

University of Edinburgh


Eleanor Lawson

Queen Margaret University


David Beavan

University College London


Claire Timmins

Queen Margaret University


Alan Wrench

Queen Margaret University