Publications


Featured research published by Akiko Callan.


NeuroImage | 2006

Song and speech: Brain regions involved with perception and covert production

Vassiliy Tsytsarev; Takashi Hanakawa; Akiko Callan; Maya Katsuhara; Hidenao Fukuyama; Robert Turner

This 3-T fMRI study investigates brain regions similarly and differentially involved with listening to and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, to listen passively to speaking of the song lyrics, to covertly sing the visually presented song lyrics, to covertly speak the visually presented song lyrics, and to rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of lobule VI of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: orbitofrontal cortex (listening task) and subcallosal cingulate (covert production task). The results are consistent with the planum temporale mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech.
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
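The flipped-contrast laterality test described above can be sketched in a few lines. The data here are random stand-ins, and the subject count and grid size are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

# Random stand-in for subject-level contrast maps (n_subjects, x, y, z),
# assumed already normalized to a left-right symmetric template.
rng = np.random.default_rng(0)
contrasts = rng.normal(size=(22, 8, 8, 8))

# Mirror each map across the midline (x axis) so that every voxel is
# paired with its homologue in the opposite hemisphere.
flipped = contrasts[:, ::-1, :, :]

# Paired t-test per voxel across subjects: original vs. mirrored value.
# A reliably positive t at a left-hemisphere voxel indicates left
# lateralization of the contrast at that location.
t_map, p_map = stats.ttest_rel(contrasts, flipped, axis=0)
print(t_map.shape)  # one t value per voxel
```

In practice the test would be restricted to voxels active in the contrast of interest, as the abstract notes, rather than run over the whole grid.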


NeuroImage | 2004

Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models.

Jeffery A. Jones; Akiko Callan; Reiko Akahane-Yamada

This experiment investigates neural processes underlying perceptual identification of the same phonemes for native- and second-language speakers. A model is proposed implicating the use of articulatory-auditory and articulatory-orosensory mappings to facilitate perceptual identification under conditions in which the phonetic contrast is ambiguous, as in the case of second-language speakers. In contrast, native-language speakers are predicted to use auditory-based phonetic representations to a greater extent for perceptual identification than second-language speakers. The English /r-l/ phonetic contrast, although easy for native English speakers, is extremely difficult for native Japanese speakers who learned English as a second language after childhood. Twenty-two native English and twenty-two native Japanese speakers participated in this study. While undergoing event-related fMRI, subjects were aurally presented with syllables starting with /r/, /l/, or a vowel and were required to rapidly identify the phoneme perceived by pushing one of three buttons with the left thumb. Consistent with the proposed model, the results show greater activity for second- over native-language speakers during perceptual identification of /r/ and /l/ relative to vowels in brain regions implicated in instantiating forward and inverse articulatory-auditory and articulatory-orosensory models [Broca's area, anterior insula, anterior superior temporal sulcus/gyrus (STS/G), planum temporale (PT), superior temporal parietal area (Stp), supramarginal gyrus (SMG), and cerebellum]. The results further show that activity in brain regions implicated in instantiating these internal models is correlated with better /r/ and /l/ identification performance for second-language speakers.
Greater activity found for native-language speakers, especially in the anterior STG/S, for /r/ and /l/ perceptual identification is consistent with the hypothesis that native-language speakers use auditory phonetic representations more extensively than second-language speakers.


NeuroImage | 2003

Learning-induced neural plasticity associated with improved identification performance after training of a difficult second-language phonetic contrast.

Keiichi Tajima; Akiko Callan; Rieko Kubo; Shinobu Masaki; Reiko Akahane-Yamada

Adult native Japanese speakers have difficulty perceiving the English /r-l/ phonetic contrast even after years of exposure. However, after extensive perceptual identification training, long-lasting improvement in identification performance can be attained. This fMRI study investigates localized changes in brain activity associated with 1 month of extensive feedback-based perceptual identification training by native Japanese speakers learning the English /r-l/ phonetic contrast. Before and after training, separate functional brain imaging sessions were conducted for identification of the English /r-l/ contrast (difficult for Japanese speakers), the /b-g/ contrast (easy), and the /b-v/ contrast (difficult), in which signal-correlated noise served as the reference control condition. Neural plasticity, denoted by exclusive enhancement in brain activity for the /r-l/ contrast, involves not only reorganization in brain regions concerned with acoustic-phonetic processing (superior and medial temporal areas) but also the recruitment of additional bilateral cortical (supramarginal gyrus, planum temporale, Broca's area, premotor cortex, supplementary motor area) and subcortical regions (cerebellum, basal ganglia, substantia nigra) involved with auditory-articulatory (perceptual-motor) mappings related to verbal speech processing and learning. Contrary to what one may expect, brain activity for perception of a difficult contrast does not come to resemble that of an easy contrast as learning proceeds. Rather, the results support the hypothesis that improved identification performance may be due to the acquisition of auditory-articulatory mappings allowing perception to be made in reference to potential action.


Journal of Cognitive Neuroscience | 2004

Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information

Jeffery A. Jones; Kevin G. Munhall; Christian Kroos; Akiko Callan; Eric Vatikiotis-Bateson

Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be through the integration of concordant information from multiple sensory channels in a common site of convergence, multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, as well as activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information that controls for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information, a midfrequency (MF) filtered condition containing place of articulation information, and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition. 
Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG), and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity resulting from visual motion information, this study identifies potential MSI sites that we believe are involved with improved speech perception intelligibility.
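The band-splitting manipulation behind the LF/MF/UF conditions can be illustrated with a toy spatial filter. The study used a wavelet filter bank; the sketch below swaps in a difference-of-Gaussians band-pass as a simpler stand-in, and the frame size and blur scales are arbitrary assumptions:

```python
import numpy as np
from scipy import ndimage

def spatial_bandpass(frame, low_sigma, high_sigma):
    """Difference-of-Gaussians band-pass: keeps spatial frequencies
    between the two blur scales (a stand-in for the study's wavelet
    filter bank, not a reproduction of it)."""
    return (ndimage.gaussian_filter(frame, low_sigma)
            - ndimage.gaussian_filter(frame, high_sigma))

# Toy stand-in for one video frame of the talker's face.
frame = np.random.default_rng(1).normal(size=(64, 64))

lf_frame = ndimage.gaussian_filter(frame, 8.0)  # LF-like: gross motion only
mf_frame = spatial_bandpass(frame, 1.0, 8.0)    # MF-like: mid-band detail
```

Applied frame by frame, the LF-like output preserves coarse lip/jaw/head motion while removing the finer detail that carries place-of-articulation information; the MF-like band keeps that mid-frequency detail.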


Cognitive Brain Research | 2001

Multimodal contribution to speech perception revealed by independent component analysis: a single-sweep EEG case study

Akiko Callan; Christian Kroos; Eric Vatikiotis-Bateson

In this single-sweep electroencephalographic case study, independent component analysis (ICA) was used to investigate multimodal processes underlying the enhancement of speech intelligibility in noise (for monosyllabic English words) by viewing facial motion concordant with the audio speech signal. Wavelet analysis of the single-sweep IC activation waveforms revealed increased high-frequency energy for two ICs underlying the visual enhancement effect. For one IC, current source density analysis localized activity mainly to the superior temporal gyrus, consistent with principles of multimodal integration. For the other IC, activity was distributed across multiple cortical areas, perhaps reflecting global mappings underlying the visual enhancement effect.
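The core decomposition step, separating multichannel EEG into independent component activation waveforms, can be sketched with FastICA on synthetic data. The source waveforms, channel count, and noise level below are all invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two invented "cortical sources" linearly mixed into 16 electrodes.
sources = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t))]
mixing = rng.normal(size=(2, 16))
eeg = sources @ mixing + 0.05 * rng.normal(size=(2000, 16))

# Unmix: rows of eeg are time samples, columns are channels.
# fit_transform returns the estimated IC activation waveforms,
# which downstream analyses (e.g. wavelets) would then examine.
ica = FastICA(n_components=2, random_state=0)
activations = ica.fit_transform(eeg)  # shape (2000, 2)
```

ICA recovers the sources only up to permutation, sign, and scale, which is why the study then localizes and characterizes each IC separately.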


NeuroImage | 2005

When meaningless symbols become letters: neural activity change in learning new phonograms.

Akiko Callan; Shinobu Masaki

Left fusiform gyrus and left angular gyrus are considered to be involved with visual form processing and with associating visual and auditory (phonological) information in reading, respectively. However, a number of studies fail to show the contribution of these regions to these aspects of reading. Considerable differences in the types of stimuli and tasks used across studies may account for the discrepancy in results. This functional magnetic resonance imaging (fMRI) study attempts to control aspects of the experimental stimuli and tasks to specifically investigate brain regions involved with visual form processing and character-to-phonological (i.e., simple grapheme-to-phonological) conversion processing for single letters. Subjects performed a two-back identification task using known Japanese and previously unknown Korean and Thai phonograms before and after training on one of the unknown orthographies. Japanese subjects learned either five Korean or five Thai phonograms. Brain regions related to visual form processing were assessed by comparing activity for native (Japanese) phonograms with that for non-native (Korean and Thai) phonograms. There was no significant differential brain activity for visual form processing. Brain regions related to character-to-phonological conversion were assessed by comparing pre- and post-training tests of trained non-native phonograms with those of native phonograms and non-trained non-native phonograms. Significant differential activation post- relative to pre-training, exclusively for the trained non-native phonograms, was found in the left angular gyrus. In addition, psychophysiological interaction (PPI) analysis revealed greater integration of the left angular gyrus with primary visual cortex as well as with superior temporal gyrus for the trained phonograms post- relative to pre-training.
The results suggest that the left angular gyrus is involved with character-to-phonological conversion in letter perception.
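A PPI analysis like the one mentioned above boils down to testing an interaction regressor (seed timecourse x task) in a voxelwise regression. A minimal sketch with synthetic timecourses, where all signals and the coupling change are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
seed = rng.normal(size=n)              # e.g. an angular gyrus timecourse
task = np.repeat([0.0, 1.0], n // 2)   # condition regressor (pre/post or off/on)
ppi = (seed - seed.mean()) * (task - task.mean())  # mean-centered interaction

# Invented target voxel whose coupling with the seed doubles during task.
target = seed * (1.0 + task) + 0.1 * rng.normal(size=n)

# GLM with intercept, both main effects, and the PPI regressor.
X = np.column_stack([np.ones(n), seed, task, ppi])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
# A reliably positive betas[3] indicates task-dependent coupling
# between seed and target: the PPI effect.
```

Real fMRI PPI additionally deconvolves the seed signal and convolves the interaction with the hemodynamic response; the regression structure is the same.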


Cognitive Brain Research | 2000

Single-sweep EEG analysis of neural processes underlying perception and production of vowels

Akiko Callan; Kiyoshi Honda; Shinobu Masaki

This single-sweep electroencephalographic study using independent component analysis was conducted to determine the neural processes underlying both speech perception and production of vowels. The same neural processes, located in auditory and motor areas of the brain, that significantly distinguish a speech production task from a control mental rehearsal task were found for both auditory evoked responses and speech planning responses, thus identifying common task-dependent neural processes underlying speech production and perception.


Human Brain Mapping | 2009

Neural Correlates of Resolving Uncertainty in Driver's Decision Making

Akiko Callan; Rieko Osu; Yuya Yamagishi; Naomi Inoue

Neural correlates of driving and of decision making have been investigated separately, but little is known about the underlying neural mechanisms of decision making in driving. Previous research discusses two types of decision making: reward-weighted and cost-weighted. There are many neuroimaging studies of reward-weighted decision making but few of cost-weighted decision making. Considering that driving involves serious risk, it is assumed that decision making in driving is cost-weighted. Therefore, neural substrates of cost-weighted decision making can be assessed by investigating drivers' decision making. In this study, neural correlates of resolving uncertainty in drivers' decision making were investigated. Turning right in left-hand traffic at a signalized intersection was simulated by videos based on computer-graphic animation. When the driver's view was occluded by a big truck, the uncertainty of the oncoming traffic was resolved by an in-car video assist system that presented the driver's occluded view. Resolving the uncertainty reduced activity in a distributed area including the amygdala and anterior cingulate. These results implicate the amygdala and anterior cingulate as serving a role in cost-weighted decision making.


Frontiers in Psychology | 2014

Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

Jeffery A. Jones; Akiko Callan

Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios, in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli, to control for task difficulty and differences in intelligibility. The fMRI analysis for visual only and audio-visual conditions showed overlapping activity in the inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect; rather, there was greater activity for the visual only over audio-visual conditions in these areas.
The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures.


PLOS ONE | 2012

Dynamic Visuomotor Transformation Involved with Remote Flying of a Plane Utilizes the ‘Mirror Neuron’ System

Mario Gamez; Daniel B. Cassel; Cengiz Terzibas; Akiko Callan; Mitsuo Kawato; Masa-aki Sato

Brain regions involved with processing dynamic visuomotor representational transformation are investigated using fMRI. The perceptual-motor task involved flying (or observing) a plane through a simulated Red Bull Air Race course in first person and third person chase perspective. The third person perspective is akin to remote operation of a vehicle. The ability of humans to remotely operate vehicles likely has its roots in neural processes related to imitation, in which visuomotor transformation is necessary to interpret the action goals in an egocentric manner suitable for execution. In this experiment, for the 3rd person perspective the visuomotor transformation changes dynamically in accordance with the orientation of the plane. It was predicted that 3rd person remote flying, over 1st, would utilize brain regions composing the ‘Mirror Neuron’ system that is thought to be intimately involved with imitation for both execution and observation tasks. Consistent with this prediction, differential brain activity was present for 3rd person over 1st person perspectives for both execution and observation tasks in left ventral premotor cortex, right dorsal premotor cortex, and inferior parietal lobule bilaterally (Mirror Neuron System) (behaviorally: 1st > 3rd). These regions additionally showed greater activity for flying (execution) over watching (observation) conditions. Even though visual and motor aspects of the tasks were controlled for, differential activity was also found in brain regions involved with tool use, motion perception, and body perspective, including left cerebellum, temporo-occipital regions, lateral occipital cortex, medial temporal region, and the extrastriate body area. This experiment successfully demonstrates that a complex perceptual-motor real-world task can be utilized to investigate visuomotor processing.
This approach (Aviation Cerebral Experimental Sciences, ACES), focusing on direct application to lab and field, is in contrast to standard methodology in which tasks and conditions are reduced to their simplest forms, remote from daily life experience.

Collaboration


Dive into Akiko Callan's collaborations.

Top Co-Authors

Jeffery A. Jones, Wilfrid Laurier University
Rieko Kubo, Japan Advanced Institute of Science and Technology
Christian Kroos, University of Western Sydney
Hiroshi Ando, National Institute of Information and Communications Technology