Publications


Featured research published by Marcel R. Giezen.


Cognition | 2015

Parallel language activation and inhibitory control in bimodal bilinguals.

Marcel R. Giezen; Henrike K. Blumenfeld; Anthony Shook; Viorica Marian; Karen Emmorey

Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level.
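
As a minimal sketch of the individual-differences analysis described above, the Python snippet below computes a per-participant spatial Stroop effect (incongruent minus congruent reaction time) and correlates it with a cross-language co-activation measure. All data, variable names, and effect sizes are simulated assumptions for illustration, not the study's actual data or pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 27  # hearing ASL-English bilinguals, as in the study

# Hypothetical per-participant mean reaction times (ms) from the
# nonlinguistic spatial Stroop task.
rt_incongruent = rng.normal(620, 40, n)
rt_congruent = rng.normal(580, 40, n)

# Hypothetical co-activation index, e.g., excess looks to ASL-related
# distractors in the visual world paradigm.
coactivation = rng.normal(0.08, 0.03, n)

# Smaller Stroop effects index more efficient inhibition.
stroop_effect = rt_incongruent - rt_congruent

# Test the association reported in the study: smaller Stroop effects
# (more efficient inhibition) with reduced co-activation.
r, p = stats.pearsonr(stroop_effect, coactivation)
print(f"r = {r:.2f}, p = {p:.3f}")
```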


Bilingualism: Language and Cognition | 2016

Psycholinguistic, cognitive, and neural implications of bimodal bilingualism

Karen Emmorey; Marcel R. Giezen; Tamar H. Gollan

Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.


Bilingualism: Language and Cognition | 2016

Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference.

Marcel R. Giezen; Karen Emmorey

We used picture-word interference (PWI) to discover a) whether cross-language activation at the lexical level can yield phonological priming effects when languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English. Production of ASL signs was facilitated by words that were phonologically related through translation and by translation equivalents, indicating that cross-language activation spreads from lexical to phonological levels in production. Semantic interference effects were not observed for bimodal bilinguals, providing some support for a post-lexical locus of semantic interference, although we suggest this pattern may instead reflect time-course differences between spoken and signed production in the PWI task.
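
A minimal sketch of how PWI effects are typically quantified, under the assumption that each distractor condition is compared against the unrelated baseline: negative differences indicate facilitation, positive differences indicate interference. Condition names and latencies below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
conditions = [
    "translation_equivalent",
    "phonological_through_translation",
    "semantic",
    "unrelated",
]

# Hypothetical mean ASL sign-naming latencies (ms), 40 trials per condition.
latencies = {c: rng.normal(900, 30, 40).mean() for c in conditions}

baseline = latencies["unrelated"]
for cond in conditions[:-1]:
    diff = latencies[cond] - baseline
    label = "facilitation" if diff < 0 else "interference"
    print(f"{cond}: {diff:+.0f} ms ({label} relative to unrelated)")
```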


Journal of Deaf Studies and Deaf Education | 2014

Relationships Between Spoken Word and Sign Processing in Children With Cochlear Implants

Marcel R. Giezen; Anne Baker; Paola Escudero

The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively. Study 2 tested the effects of using sign-supported speech on spoken word processing in eight children with a CI, showing that simultaneously perceiving signs and spoken words does not negatively impact their spoken word recognition or learning. Together, these two studies suggest that sign exposure does not necessarily have a negative effect on speech processing in children with a CI.


Acta Psychologica | 2017

The relation between working memory and language comprehension in signers and speakers

Karen Emmorey; Marcel R. Giezen; Jennifer A.F. Petrich; Erin Spurgeon; Lucinda O'Grady Farnady

This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.
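
As a rough illustration of the regression logic reported above, the sketch below predicts retention of spatial narrative information from linguistic and spatial WM scores with ordinary least squares. The simulated coefficients loosely mimic the English pattern (both WM measures contribute); nothing here reproduces the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60

# Hypothetical standardized working-memory scores.
linguistic_wm = rng.normal(0, 1, n)
spatial_wm = rng.normal(0, 1, n)

# Simulated outcome: for English, both linguistic and spatial WM
# predicted retention of spatial information.
spatial_retention = 0.4 * linguistic_wm + 0.5 * spatial_wm + rng.normal(0, 1, n)

# Ordinary least squares fit with an intercept term.
X = np.column_stack([np.ones(n), linguistic_wm, spatial_wm])
beta, *_ = np.linalg.lstsq(X, spatial_retention, rcond=None)
print(f"intercept = {beta[0]:.2f}, linguistic WM = {beta[1]:.2f}, "
      f"spatial WM = {beta[2]:.2f}")
```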


Journal of Child Language | 2016

Rapid Learning of Minimally Different Words in Five- to Six-Year-Old Children: Effects of Acoustic Salience and Hearing Impairment.

Marcel R. Giezen; Paola Escudero; Anne Baker

This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs. The NH children also showed some difficulties, however, particularly in the picture-matching task. Vowel minimal pairs were learned more successfully than consonant minimal pairs, particularly in the object-matching task. These results suggest that the ability to encode phonetic detail in novel words is not fully developed at age six and is affected by task demands and acoustic salience. CI children experience persistent difficulties with accurately mapping sound contrasts to novel meanings, but seem to benefit from the relative acoustic salience of vowel sounds.
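
One way to test learning in a two-alternative matching task like those above is to compare correct choices against the 50% chance level with a binomial test; the sketch below does this for hypothetical consonant and vowel minimal-pair counts (the trial numbers are assumptions, not the study's).

```python
from scipy import stats

trials = 24  # hypothetical trials per contrast type

# Hypothetical counts reflecting the reported pattern: vowel pairs
# were learned more successfully than consonant pairs.
correct = {"vowel pairs": 18, "consonant pairs": 14}

for label, k in correct.items():
    result = stats.binomtest(k, trials, 0.5, alternative="greater")
    print(f"{label}: {k}/{trials} correct, p = {result.pvalue:.3f} vs. chance")
```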


Bilingualism: Language and Cognition | 2017

Evidence for a bimodal bilingual disadvantage in letter fluency

Marcel R. Giezen; Karen Emmorey

Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL-English bilinguals retrieved fewer words in a letter fluency task in their dominant language than monolingual English speakers matched for vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators.
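
A minimal sketch of the matched-groups comparison underlying this argument: because the groups are matched on vocabulary level, a fluency difference cannot be attributed to vocabulary size. The scores and group sizes below are simulated assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical letter fluency scores (words retrieved per letter cue)
# for vocabulary-matched groups.
bilinguals = rng.normal(13, 3, 30)
monolinguals = rng.normal(15, 3, 30)

t, p = stats.ttest_ind(bilinguals, monolinguals)
print(f"t = {t:.2f}, p = {p:.3f}")
```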


Bilingualism: Language and Cognition | 2016

Insights from bimodal bilingualism: Reply to commentaries

Karen Emmorey; Marcel R. Giezen; Tamar H. Gollan

The commentaries on our Keynote article “Psycholinguistic, cognitive, and neural implications of bimodal bilingualism” were enthusiastic about what can be learned by studying bilinguals who acquire two languages that are understood via distinct perceptual systems (vision vs. audition) and that are produced with distinct linguistic articulators (the hands vs. the vocal tract). The authors also brought out several new ideas, extensions, and issues related to bimodal bilingualism, which we discuss in this reply.


Journal of Deaf Studies and Deaf Education | 2018

Comparing Semantic Fluency in American Sign Language and English

Zed Sevcikova Sehyr; Marcel R. Giezen; Karen Emmorey

This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modalities, but fingerspelled responses should be included in ASL fluency scores.
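
The scoring choice the study highlights can be made concrete with a short sketch: compute an ASL semantic fluency score with and without fingerspelled responses, counting each unique item once. The response list below is hypothetical.

```python
# Each response is tagged as a lexical sign or a fingerspelled item.
responses = [
    ("DOG", "sign"), ("CAT", "sign"), ("HORSE", "sign"),
    ("FERRET", "fingerspelled"),  # fingerspelling was used mainly
    ("LEMUR", "fingerspelled"),   # for low-frequency items
    ("COW", "sign"),
    ("DOG", "sign"),              # repetition: counted only once
]

score_with_fingerspelling = len({w for w, _ in responses})
score_signs_only = len({w for w, kind in responses if kind == "sign"})

print(f"score including fingerspelled responses: {score_with_fingerspelling}")
print(f"score with lexical signs only:           {score_signs_only}")
```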


Journal of Speech Language and Hearing Research | 2010

Use of acoustic cues by children with cochlear implants

Marcel R. Giezen; Paola Escudero; Anne Baker

Collaboration


Dive into Marcel R. Giezen's collaborations.

Top Co-Authors

Karen Emmorey, San Diego State University
Anne Baker, University of Amsterdam
Erin Spurgeon, San Diego State University