
Publication


Featured research published by Shiro Ikeda.


Neurocomputing | 2001

An approach to blind source separation based on temporal structure of speech signals

Noboru Murata; Shiro Ikeda; Andreas Ziehe

Abstract In this paper, we introduce a new technique for blind source separation of speech signals. We focus on the temporal structure of the signals. The idea is to apply the decorrelation method proposed by Molgedey and Schuster in the time–frequency domain. Since we apply the separation algorithm to each frequency separately, we have to resolve the amplitude and permutation ambiguities properly to reconstruct the separated signals. To resolve the amplitude ambiguity we use matrix inversion, and to resolve the permutation ambiguity we introduce a method based on the temporal structure of speech signals. We show some results of experiments with both artificially controlled data and speech data recorded in a real environment.
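The Molgedey–Schuster decorrelation at the heart of this approach reduces to an eigenvalue problem on a zero-lag and a time-lagged covariance matrix. A minimal numpy sketch on an instantaneous (single-frequency-bin) mixture — the mixing matrix `A` and the AR(1) sources are illustrative assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Two sources with distinct temporal structure: AR(1) processes with
# different coefficients, so their lagged autocorrelations differ.
s = np.zeros((2, n))
for t in range(1, n):
    s[0, t] = 0.9 * s[0, t - 1] + rng.standard_normal()
    s[1, t] = -0.5 * s[1, t - 1] + rng.standard_normal()

A = np.array([[1.0, 0.6], [0.4, 1.0]])       # hypothetical mixing matrix
x = A @ s

tau = 1
C0 = x @ x.T / n                              # zero-lag covariance
Ct = x[:, tau:] @ x[:, :-tau].T / (n - tau)   # lagged covariance
Ct = (Ct + Ct.T) / 2                          # symmetrize

# Molgedey-Schuster: eigenvectors of C0^{-1} Ct are the rows of the
# unmixing matrix, up to the scale and permutation ambiguities the
# paper addresses separately.
_, V = np.linalg.eig(np.linalg.solve(C0, Ct))
y = V.real.T @ x                              # recovered sources
```

Each recovered component should match one source up to sign and scale, which is exactly why the amplitude and permutation fixes described above are still needed afterward.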


IEEE Transactions on Speech and Audio Processing | 2003

Combined approach of array processing and independent component analysis for blind separation of acoustic signals

Futoshi Asano; Shiro Ikeda; Michiaki Ogawa; Hideki Asoh; Nobuhiko Kitawaki

Two array signal processing techniques are combined with independent component analysis (ICA) to enhance the performance of blind separation of acoustic signals in a reflective environment. The first technique is the subspace method, which reduces the effect of room reflection when the system is used in a room. Room reflection is one of the biggest problems in blind source separation (BSS) in acoustic environments. The second technique is a method for solving the permutation ambiguity. To employ the subspace method, ICA must be used in the frequency domain, and the permutation must be resolved precisely for all frequencies. In this method, a physical property of the mixing matrix, i.e., its coherency in adjacent frequencies, is utilized to solve the permutation. Experiments in a meeting room showed that the subspace method improved the rate of automatic speech recognition from 50% to 68%, and that the method for solving the permutation achieves performance that closely approaches that of the correct permutation, differing by only 4% in recognition rate.
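The permutation fix exploits the fact that the mixing matrix varies smoothly (is coherent) across adjacent frequency bins. A toy numpy sketch of that idea — the smoothly drifting mixing matrices and the greedy column matching are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
F, m, n = 64, 4, 2                          # frequency bins, sensors, sources

# Toy "true" mixing matrices that drift smoothly across frequency,
# mimicking the coherency property used in the paper.
base = rng.standard_normal((m, n))
drift = 0.02 * rng.standard_normal((m, n))
true_A = [base + f * drift for f in range(F)]

# Frequency-wise ICA returns columns in arbitrary order: simulate that
# with a random column permutation per bin.
perms = [rng.permutation(n) for _ in range(F)]
est_A = [true_A[f][:, perms[f]] for f in range(F)]

def cosine(u, v):
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Greedy alignment: for each bin, pick the column order most coherent
# with the previous (already aligned) bin.
aligned = [est_A[0]]
for f in range(1, F):
    best = max(
        (est_A[f][:, list(p)] for p in permutations(range(n))),
        key=lambda cand: sum(cosine(cand[:, k], aligned[-1][:, k])
                             for k in range(n)),
    )
    aligned.append(best)
```

After alignment, every bin's columns sit in a single consistent source order (anchored to whatever order the first bin happened to have), which is what a frequency-domain BSS system needs before inverting the mixing.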


Neural Networks | 2000

Independent component analysis for noisy data: MEG data analysis

Shiro Ikeda; Keisuke Toyama

Independent component analysis (ICA) is a new, simple, and powerful idea for analyzing multivariate data. One of its successful applications is neurobiological data analysis, such as electroencephalography (EEG), magnetic resonance imaging (MRI), and magnetoencephalography (MEG). However, many problems remain. In most cases, neurobiological data contain a lot of sensor noise, and the number of independent components is unknown. In this article, we discuss an approach to separating noise-contaminated data without knowing the number of independent components. A well-known two-stage approach to ICA is to pre-process the data by principal component analysis (PCA) and then estimate the necessary rotation matrix. Since PCA does not work well for noisy data, we use a factor analysis model for pre-processing instead. In the new pre-processing, the number of sources and the amount of sensor noise are estimated. After the pre-processing, the rotation matrix is estimated using an ICA method. Through experiments with MEG data, we show that this approach is effective.
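The pre-processing step fits a factor analysis model x = Λz + ε with a diagonal, per-sensor noise covariance, which is exactly the structure PCA cannot represent. A minimal EM sketch for that model — the dimensions, loadings, and noise levels are made-up illustrative values, and the subsequent ICA rotation stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, N = 6, 2, 20000                        # sensors, factors, samples

Lam = rng.standard_normal((d, k))            # hypothetical loading matrix
psi_true = rng.uniform(0.2, 1.0, d)          # per-sensor noise variances
z = rng.standard_normal((k, N))
x = Lam @ z + np.sqrt(psi_true)[:, None] * rng.standard_normal((d, N))

S = x @ x.T / N                              # sample covariance

# EM for the factor-analysis model x = L z + eps, eps ~ N(0, diag(psi)).
L = rng.standard_normal((d, k))
psi = np.ones(d)
for _ in range(500):
    iP = 1.0 / psi
    G = np.linalg.inv(np.eye(k) + (L * iP[:, None]).T @ L)  # posterior cov of z
    B = G @ L.T * iP                         # E[z | x] = B x
    Szz = B @ S @ B.T + G                    # (1/N) sum of E[z z^T | x]
    Sxz = S @ B.T                            # (1/N) sum of x E[z | x]^T
    L = Sxz @ np.linalg.inv(Szz)             # M-step: loadings
    psi = np.diag(S - L @ Sxz.T).copy()      # M-step: sensor-noise variances
```

Unlike PCA whitening, the fitted `psi` recovers how noisy each sensor is, and the factors `E[z | x]` can then be passed to an ICA method for the rotation stage.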


IEEE Transactions on Information Theory | 2004

Information geometry of turbo and low-density parity-check codes

Shiro Ikeda; Toshiyuki Tanaka; Shun-ichi Amari

Since the proposal of turbo codes in 1993, many studies have appeared on this simple new class of codes, which gives powerful and practical error-correction performance. Although experimental results strongly support the efficacy of turbo codes, further theoretical analysis is necessary and is not straightforward. It has been pointed out that the iterative decoding algorithm of turbo codes shares essentially similar ideas with low-density parity-check (LDPC) codes, with Pearl's belief propagation algorithm applied to a cyclic belief diagram, and with the Bethe approximation in statistical physics. Therefore, analysis of the turbo decoding algorithm will reveal the mystery of these similar iterative methods. In this paper, we recapture and extend the geometrical framework initiated by Richardson to the information-geometrical framework of dual affine connections, focusing on both the turbo and the LDPC decoding algorithms. The framework aids intuitive understanding of the algorithms and opens a new prospect for further analysis. We reveal some properties of these codes in the proposed framework, including stability and error analysis. Based on the error analysis, we finally propose a correction term for improving the approximation.


Neural Computation | 2004

Stochastic reasoning, free energy, and information geometry

Shiro Ikeda; Toshiyuki Tanaka; Shun-ichi Amari

Belief propagation (BP) is a universal method of stochastic reasoning. It gives exact inference for stochastic models with tree interactions and works surprisingly well even if the models have loopy interactions. Its performance has been analyzed separately in many fields, such as AI, statistical physics, information theory, and information geometry. This article gives a unified framework for understanding BP and related methods and summarizes the results obtained in many fields. In particular, BP and its variants, including tree reparameterization and concave-convex procedure, are reformulated with information-geometrical terms, and their relations to the free energy function are elucidated from an information-geometrical viewpoint. We then propose a family of new algorithms. The stabilities of the algorithms are analyzed, and methods to accelerate them are investigated.
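The exactness of BP on tree-structured models mentioned above is easy to verify directly. A minimal sum-product sketch on a three-variable binary chain — the pairwise potentials are arbitrary illustrative values — checked against brute-force marginalization:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
# Chain x0 - x1 - x2 of binary variables; psi[i] couples x_i and x_{i+1}.
psi = [rng.uniform(0.5, 2.0, size=(2, 2)) for _ in range(2)]

# Sum-product: forward and backward messages along the chain.
fwd = [np.ones(2)]
for p in psi:
    fwd.append(fwd[-1] @ p)            # sums out the variable behind the message
bwd = [np.ones(2)]
for p in reversed(psi):
    bwd.append(p @ bwd[-1])
bwd = bwd[::-1]

bp = [f * b for f, b in zip(fwd, bwd)]
bp = [m / m.sum() for m in bp]         # normalized single-node marginals

# Brute force for comparison: enumerate all joint states.
joint = np.zeros((2, 2, 2))
for x0, x1, x2 in product(range(2), repeat=3):
    joint[x0, x1, x2] = psi[0][x0, x1] * psi[1][x1, x2]
joint /= joint.sum()
exact = [joint.sum(axis=(1, 2)), joint.sum(axis=(0, 2)), joint.sum(axis=(0, 1))]
```

On this tree the two computations agree exactly; it is only on loopy graphs that BP becomes the approximation whose free-energy interpretation the article analyzes.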


International Conference on Acoustics, Speech, and Signal Processing | 2001

A combined approach of array processing and independent component analysis for blind separation of acoustic signals

Futoshi Asano; Shiro Ikeda; Michiaki Ogawa; Hideki Asoh; Nobuhiko Kitawaki

Two array signal processing techniques are combined with independent component analysis to enhance the performance of blind separation of acoustic signals in a reflective environment such as a room. The first technique is the subspace method, which reduces the effect of room reflections. The second is a method for solving the permutation ambiguity, in which the coherency of the mixing matrix in adjacent frequencies is utilized.


International Conference on Artificial Neural Networks | 1998

An Approach to Blind Source Separation of Speech Signals

Shiro Ikeda; Noboru Murata

In this paper we introduce a new technique for blind source separation of speech signals. We focus on the temporal structure of the signals, which is not exploited in most other major approaches. The idea is to apply the decorrelation method proposed by Molgedey and Schuster in the time–frequency domain. We show some results of experiments with artificial data and speech data recorded in a real environment. Our algorithm requires only straightforward computation and has only a few parameters to tune.


Neural Computation | 2009

Capacity of a single spiking neuron channel

Shiro Ikeda; Jonathan H. Manton

Information transfer through a single neuron is a fundamental component of information processing in the brain, and computing the information channel capacity is important for understanding this information processing. The problem is difficult since the capacity depends on the coding scheme, the characteristics of the communication channel, and an optimization over input distributions, among other issues. In this letter, we consider two models. The temporal coding model treats the neuron as a communication channel whose output is a gamma-distributed random variable corresponding to the interspike interval, that is, the time it takes for the neuron to fire once. The rate coding model is similar; the output is the actual rate of firing over a fixed period of time. Theoretical studies prove that, under a reasonable assumption, the input distribution that achieves channel capacity is a discrete distribution with finitely many mass points for both temporal and rate coding. This allows us to compute the capacity of a neuron numerically. Numerical results are in a plausible range based on biological evidence to date.
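Once the capacity-achieving input distribution is known to be discrete, the capacity itself can be found numerically. As a generic illustration of numerical capacity computation (not the paper's algorithm), a Blahut–Arimoto sketch for a discrete memoryless channel, checked here on a binary symmetric channel whose capacity is known in closed form:

```python
import numpy as np

def blahut_arimoto(P, iters=2000):
    """Capacity (bits) of a discrete memoryless channel P[x, y] = p(y | x)."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                      # input distribution, uniform start
    for _ in range(iters):
        q = r[:, None] * P                       # joint, then posterior p(x | y)
        q /= q.sum(axis=0, keepdims=True)
        r = np.exp((P * np.log(q)).sum(axis=1))  # alternating maximization step
        r /= r.sum()
    q = r[:, None] * P
    q /= q.sum(axis=0, keepdims=True)
    return float((r[:, None] * P * np.log2(q / r[:, None])).sum())

# Binary symmetric channel with crossover probability 0.1;
# its capacity is 1 - H2(0.1) bits, about 0.531.
eps = 0.1
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])
C = blahut_arimoto(bsc)
```

For the neuron models in the paper, the continuous input range would first be discretized to a grid of mass-point candidates before a scheme of this kind could be applied.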


Biological Cybernetics | 2011

An introductory review of information theory in the context of computational neuroscience

Mark D. McDonnell; Shiro Ikeda; Jonathan H. Manton

This article introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. First, simply applying formulae from information theory without understanding the assumptions behind their definitions can lead to erroneous results and conclusions. Second, this century will see a convergence of information theory and neuroscience; information theory will expand its foundations to incorporate biological processes more comprehensively, thereby helping to reveal how neuronal networks achieve their remarkable information-processing abilities.


Proceedings of the National Academy of Sciences of the United States of America | 2016

Risk assessment of radioisotope contamination for aquatic living resources in and around Japan

Hiroshi Okamura; Shiro Ikeda; Takami Morita; Shinto Eguchi

Significance: Quantification of the contamination risk caused by radioisotopes released from the Fukushima Dai-ichi nuclear power plant is useful for excluding or reducing groundless rumors about food safety. Our new statistical approach made it possible to evaluate the risk for aquatic food and showed that the present contamination levels of radiocesium are low overall. However, some freshwater species still carry relatively high risks. We also suggest the necessity of refining data-collection plans to reduce detection limits in the future, because a small number of precise measurements is more valuable than many measurements that fall below detection limits.

Food contamination caused by radioisotopes released from the Fukushima Dai-ichi nuclear power plant is of great public concern. The contamination risk for food items should be estimated depending on the characteristics and geographic environment of each item. However, evaluating current and future risk for food items is generally difficult because of small sample sizes, high detection limits, and insufficient survey periods. We evaluated the risk of aquatic food items exceeding a threshold of radioactive cesium for each species and location using a statistical model. Here we show that the overall contamination risk for aquatic food items is very low. Some freshwater biota, however, are still highly contaminated, particularly in Fukushima. Highly contaminated fish generally tend to have large body size and high trophic levels.

Collaboration


Dive into Shiro Ikeda's collaboration.

Top Co-Authors


Shun-ichi Amari

RIKEN Brain Science Institute


Mareki Honma

Graduate University for Advanced Studies
