
Publication


Featured research published by Atsushi Aoyama.


Spatial Cognition and Computation | 2012

Impaired Spatial-Temporal Integration of Touch in Xenomelia (Body Integrity Identity Disorder)

Atsushi Aoyama; Peter Krummenacher; Antonella Palla; Leonie Maria Hilti; Peter Brugger

Body integrity identity disorder (BIID), or xenomelia, is a failure to integrate a fully functional limb into a coherent body schema. It manifests as the desire for amputation of the particular limb below an individually stable ‘demarcation line.’ Here we show, in five individuals with xenomelia, defective temporal order judgments to two tactile stimuli, one proximal, the other distal of the demarcation line. Spatio-temporal integration, known to be mediated by the parietal lobes, was biased towards the undesired body part, apparently capturing the individuals’ attention in a pathologically exaggerated way. This finding supports the view of xenomelia as a parietal lobe syndrome.


Brain Research | 2006

Modulation of early auditory processing by visually based sound prediction.

Atsushi Aoyama; Hiroshi Endo; Satoshi Honda; Tsunehiro Takeda

Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Delta = 50 ms) and 300 ms in the other (Delta = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Delta = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.
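The frequent-congruent, rare-incongruent trial structure of such an oddball paradigm can be sketched as a simple sequence generator. The 15% deviant probability, the labels, and the function name below are illustrative assumptions, not details taken from the paper:

```python
import random

def make_oddball_sequence(n_trials, p_deviant=0.15, seed=0):
    """Generate an audio-visual oddball trial sequence: congruent pairs
    (tone matches the depressed key) are frequent standards; incongruent
    pairs are rare deviants."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        key = rng.choice(["low", "high"])            # which key is depressed
        if rng.random() < p_deviant:
            tone = "high" if key == "low" else "low"  # incongruent (deviant)
        else:
            tone = key                                # congruent (standard)
        trials.append((key, tone))
    return trials

trials = make_oddball_sequence(1000)
n_deviant = sum(1 for key, tone in trials if key != tone)
print(f"{n_deviant} deviants out of {len(trials)} trials")
```

Randomizing deviant positions in this way prevents participants from anticipating when an incongruent pair will occur.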


Neuroreport | 2011

Visual mismatch response evoked by a perceptually indistinguishable oddball.

Takayoshi Kogai; Atsushi Aoyama; Kaoru Amano; Tsunehiro Takeda

Mismatch field (MMF) is an early magnetoencephalographic response evoked by deviant stimuli within a sequence of standard stimuli. Although auditory MMF is reported to be an automatic response, the automaticity of visual MMF has not been clearly demonstrated, partly because of the difficulty in designing an ignore condition. Our modified oddball paradigm had a masking stimulus inserted between briefly presented standard and deviant stimuli (vertical gratings with different spatial frequencies). Perceptual discrimination between masked standard and deviant stimuli was difficult, but the early magnetoencephalographic response for the deviant was significantly larger than that for the standard, when the former had a higher spatial frequency than the latter. Our findings strongly support the hypothesis that visual MMF is evoked automatically.


Journal of Integrative Neuroscience | 2013

Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task

Atsushi Aoyama; Tomohiro Haruyama; Shinya Kuriki

Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
As evidenced by high tactile task performance and unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision-to-audition direction.


Journal of the Physical Society of Japan | 2016

Input response of neural network model with lognormally distributed synaptic weights

Yoshihiro Nagano; Ryo Karakida; Norifumi Watanabe; Atsushi Aoyama; Masato Okada

Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimuli, and both activities are reported to contribute to cognitive functions. We studied the external input response of a neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained under external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologic...
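A minimal rate-model sketch can illustrate the reported rank-order preservation between spontaneous and evoked activity under lognormal weights. The paper studies a spiking network; the rate dynamics, network size, and all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                    # number of neurons (illustrative size)
mu, sigma = -0.7, 1.0      # lognormal parameters (assumed, not from the paper)

# Lognormally distributed excitatory weights: many weak, few strong synapses
W = rng.lognormal(mean=mu, sigma=sigma, size=(n, n)) / n
np.fill_diagonal(W, 0.0)

def simulate(ext_input, steps=200, tau=10.0):
    """Relax a simple rate model, r' = (-r + tanh(W r + I)) / tau,
    to its steady state and return the firing-rate vector."""
    r = np.zeros(n)
    for _ in range(steps):
        drive = W @ r + ext_input
        r += (-r + np.tanh(np.maximum(drive, 0.0))) / tau
    return r

r_spont = simulate(ext_input=np.full(n, 0.1))   # weak background drive
r_evoked = simulate(ext_input=np.full(n, 0.5))  # stronger external input

# With the same lognormal weight matrix in both conditions, the
# across-neuron pattern of rates in evoked activity tracks that of
# spontaneous activity, echoing the abstract's observation.
corr = np.corrcoef(r_spont, r_evoked)[0, 1]
print(f"spontaneous/evoked rate correlation: {corr:.2f}")
```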


Neuroreport | 2007

Predictive and sensory integration begins at an early stage of visual processing.

Atsushi Aoyama; Hiroshi Endo; Satoshi Honda; Tsunehiro Takeda

Brain activity was measured by magnetoencephalography to investigate the spatiotemporal stage of visual processing at which predictive and sensory integration begins. We examined the consequences of a visual mismatch between preliminary prediction and incoming stimulus. Following auditory cues (1000- and 1250-Hz tones) for prediction, congruent and incongruent images, pictures of two musical keys, were presented to volunteers. When they predicted visual inputs on the basis of preceding auditory cues, we detected a mismatch signal for predictive–sensory incongruities in the striate and extrastriate areas for 100–200 ms after image presentation. As this signal reflects a compatibility analysis, we propose that the integration process begins in these areas approximately 100 ms after image presentation.


Neuroreport | 2017

Parasympathetic activation enhanced by slow respiration modulates early auditory sensory gating

Atsushi Aoyama; Yu Shimura; Takao Ohmuta; Yohei Nomoto; Masashi Kawasumi

Sensory gating is a preattentional mechanism to filter irrelevant information from the environment. It is typically reflected as a suppression of the event-related P50 component for successive sounds in the auditory modality. Although stress-induced sympathetic activation has been reported to disrupt P50 suppression, little is known about the modulatory effect of parasympathetic activation on early auditory sensory gating. We determined the parasympathetic effect on the magnetic P50 (P50m) suppression by controlling the respiratory rhythm and recording data simultaneously with magnetoencephalography and electrocardiography, using three successive click sounds as stimulus and ten normal individuals as study participants. The respiratory rhythm was guided by visual cues and set at 0.3, 0.25, or 0.2 Hz for distinct auditory stimulus sequence blocks. Heart rate variability analysis showed that slow respiration leads to significantly larger high-frequency power, which is known as the parasympathetic index, whereas the low-frequency/high-frequency ratio, known as the sympathetic index, did not differ with the respiratory rhythm. Although P50m suppression was observed in the left and right primary auditory areas for every respiratory condition, the left P50m intensity for the first sound was significantly decreased in the case of slow respiration, thereby indicating disruption of the left P50m suppression. Since background alpha oscillatory power, reflecting the arousal level, was similar for every respiratory rhythm, it is concluded that parasympathetic activation enhanced by slow respiration modulates P50m gating by reducing the initial neural sensitivity for an auditory input. Not only sympathetic but also parasympathetic effects should be considered in the evaluation of P50/P50m biomarkers.
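The heart rate variability indices mentioned above (high-frequency power as a parasympathetic index, the LF/HF ratio as a sympathetic index) are conventionally computed as band powers of a resampled RR-interval series, with LF at 0.04-0.15 Hz and HF at 0.15-0.40 Hz. The sketch below uses those standard bands; the function name, resampling rate, and synthetic tachogram are illustrative assumptions:

```python
import numpy as np

def hrv_band_powers(rr_ms, fs=4.0):
    """Estimate LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) power from RR
    intervals (ms) via even resampling and a plain FFT periodogram.
    Illustrative only; full HRV pipelines also detrend and window."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)     # evenly resampled grid
    rr_even = np.interp(t_even, t, rr_ms)
    rr_even -= rr_even.mean()                     # remove DC component
    spec = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf, hf

# Synthetic tachogram: a 0.2 Hz respiratory modulation (the study's
# slowest guided rhythm) riding on an 800 ms mean RR interval.
n_beats = 300
base = np.full(n_beats, 800.0)
t_beats = np.cumsum(base) / 1000.0
rr = base + 50.0 * np.sin(2 * np.pi * 0.2 * t_beats)  # HF-band modulation
lf, hf = hrv_band_powers(rr)
print(f"LF/HF ratio: {lf / hf:.3f}")
```

Because the 0.2 Hz modulation falls inside the HF band, the resulting spectrum is HF-dominated, consistent with slow paced breathing raising the parasympathetic index.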


International Conference on Artificial Neural Networks | 2014

Analysis of Neural Circuit for Visual Attention using Lognormally Distributed Input

Yoshihiro Nagano; Norifumi Watanabe; Atsushi Aoyama

Visual attention has recently been reported to modulate neural activity of narrow spiking and broad spiking neurons in V4, with increased firing rate and less inter-trial variations. We simulated these physiological phenomena using a neural network model based on spontaneous activity, assuming that the visual attention modulation could be achieved by a change in variance of input firing rate distributed with a lognormal distribution. Consistent with the physiological studies, an increase in firing rate and a decrease in inter-trial variance were simultaneously obtained in the simulation by increasing variance of input firing rate distribution. These results indicate that visual attention forms strong sparse and weak dense input, or a ‘winner-take-all’ state, to improve the signal-to-noise ratio of the target information.
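The core idea, that raising the variance of a lognormal input distribution at fixed mean concentrates drive in a few strong inputs (a 'winner-take-all'-like state), can be illustrated numerically. The top-1% share measure and all parameter values below are our assumptions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000  # number of presynaptic inputs (illustrative)

def top_share(sigma, k_frac=0.01):
    """Fraction of total input carried by the top 1% of inputs, for a
    lognormal firing-rate distribution with mean fixed at 1."""
    mu = -0.5 * sigma**2          # keeps the lognormal mean at 1 for any sigma
    rates = rng.lognormal(mean=mu, sigma=sigma, size=n)
    k = int(n * k_frac)
    return np.sort(rates)[-k:].sum() / rates.sum()

low, high = top_share(sigma=0.5), top_share(sigma=2.0)
print(f"top-1% share: sigma=0.5 -> {low:.2f}, sigma=2.0 -> {high:.2f}")
```

With the larger sigma, a small minority of inputs carries most of the total drive while the mean input is unchanged, which is the "strong sparse and weak dense" regime described above.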


Adaptive Behavior | 2014

Automated physiological recovery of avocado plants for plant-based adaptive machines

Dana D. Damian; Shuhei Miyashita; Atsushi Aoyama; Dominique Cadosch; Po Ting Huang; Michael Ammann; Rolf Pfeifer

Interfacing robots with real biological systems is a potential approach to realizing truly adaptive machines, which is a long-standing engineering challenge. Although plants are widely spread and versatile, little attention has been given to creating cybernetic systems incorporating plants. Producing such systems requires two main steps: the acquisition and interpretation of biological signals, and issuing the appropriate stimulation signals for controlling the physiological response of the biological part. We investigate an automated physiological recovery of young avocado plants by realizing a closed interaction loop between the avocado plant and a water-control device. The study considers the two aforementioned steps by reading out postural cues (leaf inclination) and electrophysiological (biopotential) signals from the plant, and controlling the water resource adaptive to the drought condition of an avocado plant. Analysis of the two signals reveals time-frequency patterns of increased power and global synchronization in the narrow bands when water is available, and local synchronization in the broad bands for water shortage. The results indicate the feasibility of interface technologies between plants and machines, and provide preliminary support for achieving adaptive plant-based ‘machines’ based on plants’ large and robust physiological spectrum and machines’ control scheme diversity. We further discuss fundamental impediments hindering the use of living organisms like plants for artificial systems.


Systems, Man and Cybernetics | 2011

Attempt on plant machine interface

Dominique Cadosch; Po Ting Huang; Dana D. Damian; Shuhei Miyashita; Atsushi Aoyama; Rolf Pfeifer

In this paper, we investigate possible means of communication between plants and machines. Plants are capable of sensing a variety of environmental information. In particular, avocado trees have an apparent response to increasing drought levels. We read out two different communication channels: morphological changes (leaf inclination) and the electric potential of the stem (biopotential) using distance sensors and biopotential electrodes, respectively. Leaf inclination reliably triggers irrigation, whereas the changes of the biopotential indicate water uptake and can be used to automatically stop the irrigation. Hence, through systematic experiments we demonstrate that morphological changes and biopotentials provide suitable control signals for interfacing plants with machines, and open the possibility of exploiting the abilities of plants in robotic systems.
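The two-channel control loop described above (leaf droop starts irrigation, a biopotential shift indicating water uptake stops it) can be sketched as a simple threshold controller. All threshold values, signal names, and units below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IrrigationController:
    """Threshold-based closed loop sketched from the paper's description:
    leaf droop opens the valve; a biopotential rise signalling water
    uptake closes it. Thresholds are assumed, not measured values."""
    droop_threshold_deg: float = 30.0   # leaf droop indicating drought
    uptake_threshold_mv: float = 5.0    # biopotential rise during uptake
    watering: bool = False

    def update(self, leaf_inclination_deg: float,
               biopotential_delta_mv: float) -> bool:
        if not self.watering and leaf_inclination_deg > self.droop_threshold_deg:
            self.watering = True        # plant signals drought: open valve
        elif self.watering and biopotential_delta_mv > self.uptake_threshold_mv:
            self.watering = False       # uptake detected: close valve
        return self.watering

ctrl = IrrigationController()
# (leaf inclination, biopotential change) samples over time
readings = [(10, 0), (35, 0), (35, 2), (20, 8), (15, 0)]
states = [ctrl.update(incl, bio) for incl, bio in readings]
print(states)  # → [False, True, True, False, False]
```

The hysteresis between the two thresholds keeps the valve from chattering: once irrigation starts, only the uptake signal, not a recovering leaf angle, switches it off.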

Collaboration


Dive into Atsushi Aoyama's collaboration.

Top Co-Authors

Hiroshi Endo
National Institute of Advanced Industrial Science and Technology

Norifumi Watanabe
Tokyo University of Technology