Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kunio Tanabe is active.

Publication


Featured research published by Kunio Tanabe.


Journal of the American Society for Mass Spectrometry | 2012

Analysis of Renal Cell Carcinoma as a First Step for Developing Mass Spectrometry-Based Diagnostics

Kentaro Yoshimura; Lee Chuin Chen; Mridul Kanti Mandal; Tadao Nakazawa; Zhan Yu; Takahito Uchiyama; Hirokazu Hori; Kunio Tanabe; Takeo Kubota; Hideki Fujii; Ryohei Katoh; Kenzo Hiraoka; Sen Takeda

Immediate diagnosis of human specimens is an essential prerequisite in routine medical practice. This study aimed to establish a novel cancer diagnostic system based on probe electrospray ionization-mass spectrometry (PESI-MS) combined with statistical data processing. PESI-MS uses a very fine acupuncture needle as a probe for sampling as well as for ionization. To demonstrate the applicability of PESI-MS for cancer diagnosis, we analyzed nine cases of clear cell renal cell carcinoma (ccRCC) by PESI-MS and processed the data by principal component analysis (PCA). Our system successfully delineated the differences in lipid composition between non-cancerous and cancerous regions. In particular, triacylglycerol (TAG) was reproducibly detected in the cancerous tissue of nine different individuals, a result consistent with well-known profiles of ccRCC. Moreover, this system enabled us to detect the boundaries of cancerous regions based on the expression of TAG. These results strongly suggest that PESI-MS will be applicable to cancer diagnosis, especially as the amount of available data grows.
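As a rough, hypothetical illustration of the statistical processing step (PCA on spectra), and not the authors' actual pipeline, the sketch below projects synthetic PESI-MS-like intensity vectors from two tissue classes onto the first two principal components; the channel count, the elevated m/z window, and all variable names are assumptions.

```python
# Minimal sketch (not the authors' pipeline): PCA on synthetic PESI-MS-like
# intensity vectors to separate two tissue classes. All names and data here
# are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_channels = 500  # assume spectra binned into 500 m/z channels
noncancerous = rng.normal(1.0, 0.2, size=(9, n_channels))
cancerous = rng.normal(1.0, 0.2, size=(9, n_channels))
cancerous[:, 300:320] += 2.0  # pretend a TAG-related m/z window is elevated

X = np.vstack([noncancerous, cancerous])
labels = np.array([0] * 9 + [1] * 9)  # 0 = non-cancerous, 1 = cancerous

# Project all spectra onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

for cls in (0, 1):
    print(f"class {cls}: mean PC1 score = {scores[labels == cls, 0].mean():.2f}")
```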


Analytical Biochemistry | 2013

Real-time diagnosis of chemically induced hepatocellular carcinoma using a novel mass spectrometry-based technique

Kentaro Yoshimura; Mridul Kanti Mandal; Michio Hara; Hideki Fujii; Lee Chuin Chen; Kunio Tanabe; Kenzo Hiraoka; Sen Takeda

Real-time analyses of hepatocellular carcinoma were performed in living mice to assess the applicability of probe electrospray ionization-mass spectrometry (PESI-MS) in medical diagnosis. The number of peaks and the abundance of ions corresponding to triacylglycerols (TAGs) were higher in cancerous tissues than in noncancerous tissues. Multiple sequential scans of the specimens were performed along a predetermined line extending over the noncancerous region to detect the boundary of the cancerous region. Our system successfully discriminated the noncancerous and cancerous tissues based on the intensities of the TAG ions. These results highlight the potential application of PESI-MS for clinical cancer diagnosis.
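A minimal sketch of the boundary-detection idea, assuming each position along the scan line yields a single TAG-ion intensity; the values and the threshold are invented for illustration and are not taken from the paper.

```python
# Hypothetical line scan: mark the first position where the TAG-ion intensity
# exceeds a chosen threshold as the boundary of the cancerous region.
tag_intensity = [0.8, 0.9, 1.1, 1.0, 3.5, 4.2, 4.0]  # synthetic scan values
threshold = 2.0  # assumed cutoff separating noncancerous from cancerous

boundary = next((i for i, v in enumerate(tag_intensity) if v >= threshold), None)
print(f"estimated boundary at scan position {boundary}")
```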


International Conference on Acoustics, Speech, and Signal Processing | 2006

Isolated-Word Recognition with Penalized Logistic Regression Machines

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe

We propose a new approach to isolated-word speech recognition based on penalized logistic regression machines (PLRMs). With this approach, we combine the hidden Markov model (HMM) with multiclass logistic regression, resulting in a powerful speech recognizer which provides us with the posterior probability for each word. Experiments on the English E-set show significant improvements compared to conventional HMM-based speech recognition.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Penalized Logistic Regression With HMM Log-Likelihood Regressors for Speech Recognition

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe; Sabato Marco Siniscalchi; Tor Andre Myrvoll; Magne Hallstein Johnsen

Hidden Markov models (HMMs) are powerful generative models for sequential data that have been used in automatic speech recognition for more than two decades. Despite their popularity, HMMs make inaccurate assumptions about speech signals, thereby limiting the achievable performance of the conventional speech recognizer. Penalized logistic regression (PLR) is a well-founded discriminative classifier with long roots in the history of statistics. Its classification performance is often compared with that of the popular support vector machine (SVM). However, for speech classification, only limited success with PLR has been reported, partially due to the difficulty with sequential data. In this paper, we present an elegant way of incorporating HMMs in the PLR framework. This leads to a powerful discriminative classifier that naturally handles sequential data. In this approach, speech classification is done using affine combinations of HMM log-likelihoods. We believe that such combinations of HMMs lead to a more accurate classifier than the conventional HMM-based classifier. Unlike similar approaches, we jointly estimate the HMM parameters and the PLR parameters using a single training criterion. The extension to continuous speech recognition is done via rescoring of N-best lists or lattices.
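The following sketch covers only the classification step, with fixed, pre-computed HMM log-likelihoods standing in for the jointly trained models described in the paper; it fits an L2-penalized multinomial logistic regression on a synthetic log-likelihood matrix, and the shapes, scores, and penalty setting are all assumptions.

```python
# Sketch: multinomial logistic regression with an L2 penalty, using a vector of
# HMM log-likelihoods (one per word model) as the regressors for each utterance.
# The log-likelihoods here are synthetic; in the paper they come from HMMs that
# are trained jointly with the regression parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_words, n_utts = 5, 200

# Synthetic "log-likelihood" matrix: each utterance scored by the 5 word HMMs,
# with the true word's model scoring higher on average.
y = rng.integers(0, n_words, size=n_utts)
loglik = rng.normal(-100.0, 5.0, size=(n_utts, n_words))
loglik[np.arange(n_utts), y] += 10.0

# C is the inverse penalty weight (an assumed value).
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(loglik, y)

posteriors = clf.predict_proba(loglik[:3])  # word posteriors for 3 utterances
print(posteriors.round(3))
```

Note that this sketch does not attempt the joint estimation of HMM and regression parameters under a single training criterion that the paper describes.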


Physics in Medicine and Biology | 2006

A theoretical formulation of the electrophysiological inverse problem on the sphere

Jorge J. Riera; P. Valdés; Kunio Tanabe; Ryuta Kawashima

The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. Even now there are enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD is described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal-to-noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.
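As a generic reminder of the smoothing-splines form this kind of inverse solution takes (written without the paper's specific kernels, operators, or spaces), the estimate minimizes a penalized least-squares functional:

```latex
% Generic penalized least-squares (Tikhonov / smoothing-splines) functional.
% K: forward (lead-field) operator, v: measured EEG/MEG data, j: unknown
% primary current density, L: a smoothing operator, lambda: smoothness weight
% (selected in the paper via the Bayes information criterion).
\hat{j} \;=\; \arg\min_{j \in \mathcal{H}} \; \lVert v - K j \rVert^{2} \;+\; \lambda \, \lVert L j \rVert_{\mathcal{H}}^{2}
```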


International Conference on Acoustics, Speech, and Signal Processing | 2007

N-Best Rescoring for Speech Recognition using Penalized Logistic Regression Machines with Garbage Class

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe; Tor Andre Myrvoll

State-of-the-art pattern recognition approaches like neural networks or kernel methods have only had limited success in speech recognition. The difficulties often encountered include the varying lengths of speech signals as well as how to deal with sequences of labels (e.g., digit strings) and unknown segmentation. In this paper we present a combined hidden Markov model (HMM) and penalized logistic regression machine (PLRM) approach to continuous speech recognition that can cope with both of these difficulties. The key ingredients of our approach are N-best rescoring and PLRM with a garbage class. Experiments on the Aurora2 connected digits database show a significant increase in recognition accuracy relative to a purely HMM-based system.
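A toy sketch of the rescoring step (not the authors' system): each N-best hypothesis comes with hypothesized segments, a discriminative model assigns each segment a probability for its hypothesized label (a garbage class would absorb segments that match no label), and hypotheses are re-ranked by their total log score. The data structures, probabilities, and function names here are assumptions.

```python
# Illustrative N-best rescoring: re-rank hypotheses by the sum of segment
# log-probabilities returned by some discriminative segment classifier.
import math

def rescore(nbest, segment_prob):
    """nbest: list of (labels, segments); segment_prob(segment, label) -> prob."""
    rescored = []
    for labels, segments in nbest:
        log_score = sum(math.log(segment_prob(seg, lab))
                        for seg, lab in zip(segments, labels))
        rescored.append((log_score, labels))
    return [labels for _, labels in sorted(rescored, reverse=True)]

# Toy example with made-up segment probabilities.
toy_prob = {("seg1", "one"): 0.9, ("seg1", "four"): 0.2,
            ("seg2", "two"): 0.8, ("seg2", "five"): 0.3}
nbest = [(("one", "two"), ("seg1", "seg2")),
         (("four", "five"), ("seg1", "seg2"))]
best = rescore(nbest, lambda s, l: toy_prob[(s, l)])[0]
print("best hypothesis:", best)
```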


Nordic Signal Processing Symposium | 2006

Continuous Speech Recognition with Penalized Logistic Regression Machines

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe; Tor Andre Myrvoll

Penalized logistic regression machines (PLRMs) have recently been shown to give good performance on isolated-word speech recognition. In this paper, we extend this framework to continuous speech recognition. We present two approaches that both make use of the output from an HMM Viterbi recognizer. The first approach performs probabilistic prediction with PLRM on the segments obtained from the HMM Viterbi recognizer. The resulting subwords and subword probabilities are combined to form a sentence and a sentence probability, respectively. In the second approach, an N-best list generated by the HMM Viterbi recognizer is rescored using PLRM. Experiments on the Aurora2 connected digits database show that both approaches outperform the baseline HMM Viterbi recognizer.


Mathematics and Computers in Simulation | 2012

A note on computation of pseudospectra

Drahoslava Janovská; Vladimír Janovský; Kunio Tanabe

The aim is to contribute to pseudospectra computation via a path following technique. Given a matrix A ∈ ℂ^(n×n), we compute the branch consisting of a fixed singular value ε and the corresponding left and right singular vectors of the parameter-dependent matrix (x+iy)I − A. It is sufficient to verify at just one point of the branch that the branch corresponds to the smallest singular value, σ_min((x+iy)I − A) = ε, due to a continuity argument. We can exploit standard ready-made software.
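To illustrate the quantity being tracked, the sketch below evaluates the smallest singular value of (x+iy)I − A on a coarse grid and marks where it falls below a level ε; this is the naive grid approach, not the path-following technique of the paper, and the example matrix, grid, and ε are arbitrary.

```python
# Naive illustration: evaluate sigma_min((x+iy)I - A) on a grid. The
# epsilon-pseudospectrum is the region where this value is <= epsilon; the
# paper instead follows the level curve sigma_min = epsilon directly.
import numpy as np

A = np.array([[1.0, 10.0],
              [0.0, 2.0]])  # arbitrary non-normal example matrix
n = A.shape[0]
eps = 0.5

xs = np.linspace(-1, 4, 6)
ys = np.linspace(-2, 2, 5)
for y in ys:
    row = []
    for x in xs:
        z = x + 1j * y
        smin = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
        row.append("in " if smin <= eps else "out")
    print(" ".join(row))
```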


Journal of the Acoustical Society of America | 2010

Speech classification using penalized logistic regression with hidden Markov model log‐likelihood regressors.

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe; Magne Hallstein Johnsen

Penalized logistic regression (PLR) is a well‐founded discriminative classifier with long roots in the history of statistics. Speech classification with PLR is possible with an appropriate choice of map from the space of feature vector sequences into the Euclidean space. In this talk, one such map is presented, namely, the one that maps into vectors consisting of log‐likelihoods computed from a set of hidden Markov models (HMMs). The use of this map in PLR leads to a powerful discriminative classifier that naturally handles the sequential data arising in speech classification. In the training phase, the HMM parameters and the regression parameters are jointly estimated by maximizing a penalized likelihood. The proposed approach is shown to be a generalization of conditional maximum likelihood (CML) and maximum mutual information (MMI) estimation for speech classification, leading to more flexible decision boundaries and higher classification accuracy. The posterior probabilities resulting from classificat...


Archive | 2008

Automatic Speech Recognition via N-Best Rescoring using Logistic Regression

Øystein Birkenes; Tomoko Matsui; Kunio Tanabe; Tor Andre Myrvoll

Automatic speech recognition is often formulated as a statistical pattern classification problem. Based on the optimal Bayes rule, two general approaches to classification exist: the generative approach and the discriminative approach. For more than two decades, generative classification with hidden Markov models (HMMs) has been the dominating approach for speech recognition (Rabiner, 1989). At the same time, powerful discriminative classifiers like support vector machines (Vapnik, 1995) and artificial neural networks (Bishop, 1995) have been introduced in the statistics and machine learning literature. Despite immediate success in many pattern classification tasks, discriminative classifiers have only achieved limited success in speech recognition (Zahorian et al., 1997; Clarkson & Moreno, 1999). Two of the difficulties encountered are 1) speech signals have varying durations, whereas the majority of discriminative classifiers operate on fixed-dimensional vectors, and 2) the goal in speech recognition is to predict a sequence of labels (e.g., a digit string or a phoneme string) from a sequence of feature vectors without knowing the segment boundaries for the labels. In contrast, most discriminative classifiers are designed to predict only a single class label for a given feature. In this chapter, we present a discriminative approach to speech recognition that can cope with both of the abovementioned difficulties. Prediction of a class label from a given speech segment (speech classification) is done using logistic regression incorporating a mapping from varying-length speech segments into a vector of regressors. The mapping is general in that it can include any kind of segment-based information. In particular, mappings involving HMM log-likelihoods have been found to be powerful. Continuous speech recognition, where the goal is to predict a sequence of labels, is done with N-best rescoring as follows. For a given spoken utterance, a set of HMMs is used to generate an N-best list of competing sentence hypotheses. For each sentence hypothesis, the probability of each segment is found with logistic regression as outlined above. The segment probabilities for a sentence hypothesis are then combined along with a language model score in order to get a new score for the sentence hypothesis. Finally, the N-best list is reordered based on the new scores.
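Written generically rather than as the chapter's exact formula, the combined score used for reordering can be thought of as a sum of segment log-probabilities plus a weighted language model term; the weight α below is an assumed scalar.

```latex
% Generic rescoring score for a sentence hypothesis W = w_1 ... w_M with
% hypothesized segments s_1 ... s_M; alpha is an assumed language-model weight.
\mathrm{score}(W) \;=\; \sum_{i=1}^{M} \log P(w_i \mid s_i) \;+\; \alpha \, \log P_{\mathrm{LM}}(W)
```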

Collaboration


Dive into Kunio Tanabe's collaborations.

Top Co-Authors

Øystein Birkenes
Norwegian University of Science and Technology

Vladimír Janovský
Charles University in Prague

Sen Takeda
University of Yamanashi

Hideki Fujii
University of Yamanashi