Takashi Otake
Dokkyo University
Publications
Featured research published by Takashi Otake.
Journal of Phonetics | 2006
Anne Cutler; Andrea Weber; Takashi Otake
The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
Journal of the Acoustical Society of America | 2004
Anne Cutler; Takashi Otake
Pseudo-homophony may result when non-native listeners cannot distinguish phonemic contrasts. Thus Dutch listeners have difficulty distinguishing the vowels of English cattle versus kettle, because this contrast is subsumed by a single Dutch vowel category; in consequence, both words may be activated whenever either is heard. A lexical decision study in English explored this phenomenon by testing for repetition priming. Among 340 items, the materials contained 18 pairs such as cattle/kettle, i.e., pairs contrasting only in those vowels, and 18 pairs contrasting only in r/l (e.g., right/light). These materials, spoken by a native American English speaker, were presented to fluent non-native speakers of English: 48 Dutch Nijmegen University students and 48 Japanese Dokkyo University students; the listeners performed lexical decision on each spoken item, and response time was measured. Dutch listeners responded significantly faster to one member of a cattle/kettle pair after having heard the other member earlier in...
Language and Speech | 2010
Natasha Warner; Takashi Otake; Takayuki Arai
While listeners are recognizing words from the connected speech stream, they are also parsing information from the intonational contour. This contour may contain cues to word boundaries, particularly if a language has boundary tones that occur at a large proportion of word onsets. We investigate how useful the pitch rise at the beginning of an accentual phrase (APR) would be as a potential word-boundary cue for Japanese listeners. A corpus study shows that it should allow listeners to locate approximately 40–60% of word onsets, while causing less than 1% false positives. We then present a word-spotting study which shows that Japanese listeners can, indeed, use accentual phrase boundary cues during segmentation. This work shows that the prosodic patterns that have been found in the production of Japanese also impact listeners’ processing.
Language and Speech | 2013
Takashi Otake; Anne Cutler
Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.
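The similarity typology described above (homophone, part–whole embedding, mutation) can be sketched as a rough classifier over pun pairs. This is an illustrative assumption rather than the study's actual procedure: it operates on simple ASCII stand-ins for phoneme strings, and the example pairs below are hypothetical.

```python
def classify_pun(source, pun):
    """Roughly classify a source-word/pun pair by the similarity
    relations discussed above. Inputs are ASCII approximations of
    phoneme sequences, not real phonemic transcriptions."""
    if source == pun:
        # identical segmental sequence, e.g. band-banned
        return "homophone"
    if source in pun or pun in source:
        # part-whole embedding, e.g. cap-capture
        return "embedding"
    if len(source) == len(pun) and sum(a != b for a, b in zip(source, pun)) == 1:
        # single-segment substitution, e.g. peas-bees
        return "single-segment mutation"
    return "larger change"
```

On this scheme, a pair like ("piz", "biz") counts as a single-segment mutation, while ("kap", "kaptur") counts as an embedding; the ordering of the checks mirrors the preference ranking the corpus analysis reports.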
Journal of the Acoustical Society of America | 2012
Anne Cutler; Takashi Otake; Laurence Bruggeman
Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
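A minimal sketch of the kind of lexico-statistical count this refers to, assuming a toy lexicon of orthographic word forms (the published analyses used full lexical databases and phonemic transcriptions):

```python
def embedding_counts(words):
    """Tally, across all carrier words, how many shorter lexicon words
    are embedded at the carrier's onset versus its offset."""
    initial = final = 0
    wordset = set(words)
    for carrier in words:
        for other in wordset:
            if other == carrier or len(other) >= len(carrier):
                continue
            if carrier.startswith(other):   # e.g. "can" in "candle"
                initial += 1
            if carrier.endswith(other):     # e.g. "and" in "band"
                final += 1
    return initial, final

# Toy lexicon; on a sample this small the two counts happen to balance,
# whereas the full-lexicon analyses show the initial-position skew.
initial, final = embedding_counts(["can", "candle", "handle", "and", "an", "ban", "band"])
```

Comparing the two tallies across whole lexicons is what reveals the asymmetry, and repeating the count per language is what shows the skew to be attenuated in Japanese.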
Journal of the Acoustical Society of America | 1998
Takashi Otake; Kiyoko Yoneyama; Hideki Maki
Human listeners may form conscious representations of potential within-word structure, in which the lexicon is represented by phonological units. An earlier study examining monolingual speakers of Japanese and English with native inputs suggested that Japanese speakers' representations may draw on richer knowledge of word-internal structure, while English speakers are sensitive to syllables [Otake et al., Proceedings of EUROSPEECH 95 3, 1703–1706 (1995)]. The present study investigated how monolingual speakers of English learning Japanese could form conscious representations of potential within-word structure in Japanese. Three groups of subjects (N: 36, 33, and 40 for three different proficiency levels) were presented with 150 Japanese spoken words and asked to mark on a written transcript of each word the second natural division point from the onset of the word. The statistical analysis showed that all groups exploited syllables to represent Japanese words irrespective of Japanese proficiency. Th...
Journal of the Acoustical Society of America | 1997
Takashi Otake; Keiko Yamamoto
Assuming that listeners may form conscious representations of potential within-word structure, an earlier study investigated which phonological units monolingual speakers of Japanese and English could exploit [Otake et al., Proc. EUROSPEECH 95, 1703–1706 (1995)]. The present study investigated how bilingual Japanese speakers of English and monolingual speakers of Japanese and English could form conscious representations of potential within-word structure in the two languages. The three groups of subjects were presented with spoken words in both languages and asked to mark on a written transcript of each word (e.g., buranko for Japanese and veranda for English) the second natural division point from the onset of the word. The statistical analysis showed that the Japanese and English monolinguals exploited morae and syllables, respectively, and that the bilinguals exploited syllables. These results suggest that the exploitation of phonological units to represent within-word structure may depend upon the d...
Journal of the Acoustical Society of America | 1997
Takashi Otake; Anne Cutler
A gating experiment addressed the question of how early in the process of recognizing spoken Japanese words pitch–accent information may be exploited. Twenty‐four pairs of Japanese words such as nimotsu/nimono, beginning with the same bimoraic CVCV sequence but with the accent pattern of this initial CVCV being HL in one word and LH in the other, were presented, in increasingly large fragments, to 36 native speakers of Tokyo Japanese. After presentation of each fragment, which was incremented in each case by one phoneme transition from the previous fragment, listeners recorded a guess regarding the word’s identity and a confidence rating for that guess. The results showed that the accent patterns of the word guesses corresponded to the accent patterns of the actually spoken words with a probability significantly above chance from the second fragment onwards, i.e., from the middle of the vowel in the first mora of the word. Accent correspondence averaged 79.6% at this point, rising to 89% by the fourth fragment (vowel of second mora). This demonstrates that Japanese listeners can exploit pitch–accent information effectively at an early stage in the presentation of a word, and use it to constrain selection of lexical candidates.
Journal of the Acoustical Society of America | 1989
Takashi Otake
The purpose of this investigation is to evaluate whether the temporal compensation effect in Japanese can be attributed to mora timing. It has been asserted that the temporal compensation effect plays an important role in regulating CV syllable duration in Japanese [Port et al., Phonetica 37, 235–252 (1980)]. In an earlier study [T. Otake, J. Acoust. Soc. Am. Suppl. 1 84, S97 (1988)], Arabic and Japanese were investigated under the same conditions with respect to the compensation effect; both languages showed the same compensation effect, which may suggest that it is a universal phenomenon [Beckman, Phonetica 39, 113–135 (1982)]. The experiment reported here used the method of Port et al. (1980) to test the hypothesis further by investigating languages classified as stress-timed (English and German) or syllable-timed (Spanish and French). In addition, Chinese, which belongs to neither timing class, was investigated. Pilot results show that the temporal compensation effect reported by Port et al. (1980) is equally observable in these languages.
Journal of the Acoustical Society of America | 2006
Takashi Otake
One complication in the debate over mora timing is that durational units are closely related to syllable structure, in which morae can be treated as subunits of syllables. As a consequence, the durational properties of the components of three syllable types, CVN, CVV, and CVQ, can equally be examined from the perspective of representations of within-word structure. Durational properties undoubtedly play an important role in Japanese speech timing, but knowledge of representations of within-word structure cannot be neglected, because it is uniquely language specific. This presentation discusses how knowledge of representations of within-word structure plays a significant role in the recognition of morae within a psycholinguistic framework, presenting various types of data collected from (1) Japanese preschool children (before and after acquiring kana orthography), (2) Japanese monolinguals and Japanese-English bilinguals, and (3) Japanese L2 learners (from beginning to advanced whose nativ...