

Publication


Featured research publications by Haruhiko Takase.


International Symposium on Neural Networks | 2009

Obstacle to training SpikeProp networks — Cause of surges in training process —

Haruhiko Takase; Masaru Fujita; Hiroharu Kawanaka; Shinji Tsuruoka; Hidehiko Kita; Terumine Hayashi

In this paper, we discuss an obstacle to training with SpikeProp [1], a supervised learning algorithm for spiking neural networks. In the original publication of SpikeProp, weights with mixed signs were suspected to cause training failures. Through experiments, we identify the cause: weights with mixed signs twist the dynamics of a unit's activity, and the twisted dynamics break the assumption on which the SpikeProp algorithm is based. This produces surges in the training process, and the surges indicate an underlying problem in training.
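The twisted dynamics can be illustrated with a small numerical sketch. The alpha-shaped spike response kernel, weights, and delays below are illustrative assumptions, not values from the paper; the point is only that mixed-sign weights make the membrane potential non-monotonic before the kernel peak, which undermines the rising-potential assumption that SpikeProp linearizes around:

```python
import numpy as np

def srm_potential(t, weights, delays, tau=2.0):
    """Membrane potential as a weighted sum of alpha-shaped spike
    response kernels, the form SpikeProp assumes.  Values illustrative."""
    u = np.zeros_like(t)
    for w, d in zip(weights, delays):
        s = t - d
        kernel = np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)
        u = u + w * kernel
    return u

t = np.linspace(0, 10, 1001)
u_pos = srm_potential(t, [1.0, 1.0], [0.0, 1.0])   # same-sign weights
u_mix = srm_potential(t, [1.5, -1.2], [0.0, 1.0])  # mixed-sign weights

# Up to the first kernel peak (t = 2) the same-sign potential rises
# monotonically, as SpikeProp's derivation assumes; the mixed-sign
# potential dips after the inhibitory input arrives, so the local
# linearization around the firing time can point the wrong way.
print(bool(np.all(np.diff(u_pos[:201]) >= 0)),
      bool(np.all(np.diff(u_mix[:201]) >= 0)))
```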


International Symposium on Neural Networks | 2001

Weight minimization approach for fault tolerant multi-layer neural networks

Haruhiko Takase; Hidehiko Kita; Terumine Hayashi

We propose a new learning algorithm to enhance the fault tolerance of multilayer neural networks (MLNs). The method is based on the idea that strong connections make an MLN sensitive to faults. To eliminate such connections, we introduce a new evaluation function for the learning algorithm, consisting not only of the output error but also of the sum of all squared weights. With this evaluation function, the algorithm minimizes the weights along with the output error. The value of the parameter balancing the two terms is adjusted adaptively during training. To show the effectiveness of the proposed method, we apply it to pattern recognition problems, where both the misrecognition rate and the activity of hidden units are improved.
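The evaluation function described above, output error plus the sum of squared weights, can be sketched on a toy linear model. The data, learning rate, and balance parameter below are illustrative assumptions; the paper adjusts the balance parameter adaptively during training, while a fixed value is used here for brevity:

```python
import numpy as np

def train_with_weight_penalty(X, y, lam=0.0, lr=0.1, epochs=500):
    """Gradient descent on E = MSE(output error) + lam * sum(w^2).

    Minimizing the squared-weight term shrinks strong connections,
    the idea behind the fault-tolerance method above."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        pred = X @ w
        grad_error = 2 * X.T @ (pred - y) / len(y)  # d(MSE)/dw
        grad_penalty = 2 * lam * w                  # d(lam * sum w^2)/dw
        w -= lr * (grad_error + grad_penalty)
    return w

# Toy data: the target depends only on the first feature.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 2.0])
w_plain = train_with_weight_penalty(X, y, lam=0.0)
w_reg = train_with_weight_penalty(X, y, lam=0.1)
# The penalty term drives the weight vector toward smaller magnitudes.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```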


Archive | 2010

Extraction Method of Retinal Border Lines in Optical Coherence Tomography Image by Using Dynamic Contour Model

Ai Yamakawa; Dai Kodama; Shinji Tsuruoka; Hiroharu Kawanaka; Haruhiko Takase; Mohd Fadzil bin Abdul Kadir; Hisashi Matsubara; Fumio Okuyama

In ophthalmology, the need for retinal diagnosis using optical coherence tomography (OCT) images has been growing, and automatic measurement of retinal thickness and its quantitative evaluation are desired for diagnosing retinal diseases. Automatic methods for measuring retinal thickness in OCT images have been reported previously. These methods can extract the retinal border lines (ILM and RPE) appropriately in most normal OCT images; however, they produce tracking errors on OCT images with heavy noise. In this paper, we propose a new automatic method for measuring retinal thickness in OCT images. The method employs ODAN (One-Directional Active Net) to extract the ILM and RPE. ODAN uses a new energy function to extract the retinal border lines precisely: all nodes of the net move in only one direction, repeatedly, to minimize the total energy. The energy function consists of (1) an image conformity energy and (2) an internal strain energy. We confirmed the usefulness of ODAN experimentally on ten OCT images with heavy noise, comparing the border-line positions found by the proposed method with positions traced manually by an ophthalmology specialist. The comparison shows that the proposed method is useful as a basis for detecting retinal diseases.
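The one-directional energy minimization can be sketched on a synthetic image. The energy weights, the greedy per-node update, and the toy image below are illustrative assumptions; the real ODAN energy terms are only those named in the abstract:

```python
import numpy as np

def odan_sketch(image, alpha=0.5, n_iter=50):
    """Toy one-directional active net: one node (row index) per column,
    each allowed to move only downward, with a move accepted when it
    lowers (image conformity energy) + alpha * (internal strain energy)."""
    h, w = image.shape
    y = np.zeros(w, dtype=int)                      # net starts at the top row

    def energy(y):
        conformity = -image[y, np.arange(w)].sum()  # prefer bright border pixels
        strain = np.abs(np.diff(y)).sum()           # prefer a smooth net
        return conformity + alpha * strain

    for _ in range(n_iter):
        moved = False
        for j in range(w):
            if y[j] + 1 < h:
                cand = y.copy()
                cand[j] += 1                        # one-directional step
                if energy(cand) < energy(y):
                    y, moved = cand, True
        if not moved:
            break
    return y

# Synthetic column profile: brightness ramps up to a "border line" at
# row 5, then drops, roughly like an intensity edge in an OCT scan.
img = np.zeros((10, 8))
for r in range(6):
    img[r, :] = r
contour = odan_sketch(img)
print(contour.tolist())  # every node settles on the brightest row
```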


Archive | 2010

A Retinal Layer Structure Analysis to Measure the Size of Disease Using Layer Boundaries Detection for Optical Coherence Tomography Images

Dai Kodama; Ai Yamakawa; Shinji Tsuruoka; Hiroharu Kawanaka; Haruhiko Takase; Mohd Fadzil bin Abdul Kadir; Hisashi Matsubara; Fumio Okuyama

In ophthalmology, optical coherence tomography (OCT) is rapidly becoming popular in clinical applications for diagnosing retinal disease. In this paper, we propose a new profile analysis that evaluates the size of a retinal disease using the number of layer boundaries. This number is obtained by a new analysis of the gray-level profile scanned in the longitudinal direction of an OCT image. We applied the proposed method to 50 OCT images of normal retinas and 50 OCT images of abnormal retinas. When the Mann-Whitney U test was applied to the standard deviation of the number of layer boundaries for the normal and abnormal image groups, the experimental results showed a significant difference at the 1% significance level. We therefore confirmed that the proposed method provides an index for evaluating the size of a retinal disease. In addition, we confirmed that the system can measure the horizontal extent of an abnormal region using the number of layer boundaries.
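The statistical comparison above can be reproduced in outline with SciPy; the per-image standard deviations below are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic stand-ins: per-image standard deviation of the number of
# detected layer boundaries.  Abnormal retinas, whose layer structure
# is disrupted, show a more variable boundary count.
normal_sd = rng.normal(loc=0.5, scale=0.1, size=50)
abnormal_sd = rng.normal(loc=1.5, scale=0.4, size=50)

# Two-sided Mann-Whitney U test, checked at the 1% significance level.
stat, p = mannwhitneyu(normal_sd, abnormal_sd, alternative="two-sided")
print(p < 0.01)  # well-separated groups give a very small p-value
```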


Asia and South Pacific Design Automation Conference | 1997

An enhanced iterative improvement method for evaluating the maximum number of simultaneous switching gates for combinational circuits

Kai Zhang; Haruhiko Takase; Terumine Hayashi; Hidehiko Kita

This paper presents an enhanced iterative improvement method with multiple pins (EIIMP) for evaluating the maximum number of simultaneously switching gates. Although the iterative improvement method is a simple algorithm, it is powerful for this purpose. Keeping this advantage, we enhance it in two ways. First, the values of multiple successive primary inputs are changed at a time. Second, primary inputs are rearranged on the basis of closeness, defined as the number of overlapping gates between fan-out regions. Experiments on the ISCAS benchmark circuits show that our method is effective.
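The first enhancement, changing multiple successive primary inputs at a time, can be sketched as hill climbing on a toy circuit; the four-gate netlist and constants below are illustrative assumptions, not an ISCAS benchmark:

```python
import numpy as np

def gate_outputs(x):
    """Toy 4-input combinational circuit; returns each gate's output.
    (A stand-in for a netlist simulator, not a real benchmark.)"""
    a, b, c, d = x
    g1 = a & b
    g2 = b | c
    g3 = g1 ^ d
    g4 = g2 & g3
    return np.array([g1, g2, g3, g4])

def switching_gates(v1, v2):
    """Number of gates whose outputs differ between two input vectors."""
    return int(np.sum(gate_outputs(v1) != gate_outputs(v2)))

def iterative_improvement(n=4, width=2, seed=0):
    """Hill climbing on an input-vector pair; following EIIMP's first
    enhancement, each move flips up to `width` successive inputs."""
    rng = np.random.default_rng(seed)
    v1, v2 = rng.integers(0, 2, n), rng.integers(0, 2, n)
    best = switching_gates(v1, v2)
    improved = True
    while improved:
        improved = False
        for vec in (v1, v2):
            for start in range(n):
                for w in range(1, width + 1):
                    if start + w > n:
                        break
                    vec[start:start + w] ^= 1        # flip successive inputs
                    score = switching_gates(v1, v2)
                    if score > best:
                        best, improved = score, True
                    else:
                        vec[start:start + w] ^= 1    # undo the move
    return best

best = iterative_improvement()
print(best)  # locally maximal count of simultaneously switching gates
```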


IEEE International Conference on Fuzzy Systems | 2016

A study on feature extraction and disease stage classification for Glioma pathology images

Kiichi Fukuma; V. B. Surya Prasath; Hiroharu Kawanaka; Bruce J. Aronow; Haruhiko Takase

Computer-aided diagnosis (CAD) systems are important for achieving precision medicine and patient-driven solutions for various diseases. One of the main brain tumors is glioblastoma multiforme (GBM), and histopathological tissue images can provide unique insights into identifying and grading disease stages. In this work, we consider feature extraction and disease stage classification for brain tumor histopathological images using automatic image analysis methods. In particular, we apply automatic nuclei segmentation and labeling to histopathology image data obtained from The Cancer Genome Atlas (TCGA) and evaluate classification accuracy using a support vector machine (SVM) and Random Forests (RF). Our results indicate classification accuracies of 98.9% and 99.6%, respectively.
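The classification step, SVM and Random Forest on extracted nuclei features, looks like this in scikit-learn; the two-dimensional features below are synthetic stand-ins for object-level measurements, not TCGA data, so the accuracies are not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for two nuclei features (e.g. area, eccentricity)
# measured on low-grade vs. high-grade tumor images.
n = 200
grade_low = rng.normal(loc=[10.0, 0.3], scale=0.8, size=(n, 2))
grade_high = rng.normal(loc=[14.0, 0.7], scale=0.8, size=(n, 2))
X = np.vstack([grade_low, grade_high])
y = np.array([0] * n + [1] * n)

# Disease-stage classification with SVM and Random Forest,
# scored by 5-fold cross-validation.
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
rf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
print(svm_acc > 0.9, rf_acc > 0.9)
```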


Procedia Computer Science | 2016

A Study on Nuclei Segmentation, Feature Extraction and Disease Stage Classification for Human Brain Histopathological Images

Kiichi Fukuma; V. B. Surya Prasath; Hiroharu Kawanaka; Bruce J. Aronow; Haruhiko Takase

Computer-aided diagnosis (CAD) systems are important for achieving precision medicine and patient-driven solutions for various diseases. One of the main brain tumors is glioblastoma multiforme (GBM), and histopathological tissue images can provide unique insights into identifying and grading disease stages. In this study, we consider nuclei segmentation, feature extraction, and disease stage classification for brain tumor histopathological images using automatic image analysis methods. In particular, we apply automatic nuclei segmentation and labeling to histopathology image data obtained from The Cancer Genome Atlas (TCGA), test the significance of feature descriptors with the Kolmogorov-Smirnov (K-S) test, and evaluate classification accuracy using a support vector machine (SVM) and Random Forests (RF). Our results indicate classification accuracies of 98.6% and 99.8% for object-level features and 82.1% and 86.1% for spatial-arrangement features, respectively.


Soft Computing | 2014

Comparative study on feature descriptors for brain image analysis

Kazuhiko Tamaki; Kiichi Fukuma; Hiroharu Kawanaka; Haruhiko Takase; Shinji Tsuruoka; Bruce J. Aronow; Shikha Chaganti

A key obstacle to developing automated histopathology assessment tools is the difficulty of defining quantifiable image features that could serve as fundamental data elements capable of distinguishing key disease types and subtypes. A variety of feature extraction and selection methods for histology images have been proposed, but comparing different feature descriptor approaches remains challenging because of the varying datasets and emphases chosen by different authors. As an example of how a shared reference atlas could accelerate efforts in this area, we constructed normal and disease sample datasets by standardizing histology images drawn from the Allen Brain Atlas. After preparing the datasets, we extracted the features reported in preceding studies to characterize normal and disease tissues. To confirm statistical significance between the normal and disease images, the Kolmogorov-Smirnov test was employed. The experimental results indicate that topological features are effective for distinguishing normal images from disease ones. This paper also details the construction of the datasets, the segmentation of nuclei, the feature descriptors, and the experimental results, and discusses the effectiveness and generalizability of the derived features.
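The significance check, a Kolmogorov-Smirnov test between normal and disease feature distributions, can be reproduced in outline with SciPy; the feature values below are synthetic stand-ins, not measurements from the Allen Brain Atlas datasets:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic stand-ins for one feature (e.g. a topological nuclei
# statistic) measured on normal vs. disease tissue images.
normal_feature = rng.normal(loc=1.0, scale=0.2, size=60)
disease_feature = rng.normal(loc=1.6, scale=0.3, size=60)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# empirical distributions differ significantly, so the feature is
# useful for telling normal and disease images apart.
stat, p = ks_2samp(normal_feature, disease_feature)
print(p < 0.01)
```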


Journal of Digital Imaging | 2012

Computerized Segmentation Method for Individual Calcifications Within Clustered Microcalcifications While Maintaining Their Shapes on Magnification Mammograms

Akiyoshi Hizukuri; Ryohei Nakayama; Nobuo Nakako; Hiroharu Kawanaka; Haruhiko Takase; Koji Yamamoto; Shinji Tsuruoka

In a computer-aided diagnosis (CADx) scheme for evaluating the likelihood of malignancy of clustered microcalcifications on mammograms, it is necessary to segment individual calcifications correctly. The purpose of this study was to develop a computerized segmentation method for individual calcifications of various sizes that maintains their shapes within CADx schemes. Our database consisted of 96 magnification mammograms with 96 clustered microcalcifications. In the proposed method, a mammogram is decomposed into horizontal, vertical, and diagonal second-difference subimages at scales 1 to 4 using a filter bank. Enhanced subimages for nodular components (NCs) and for both nodular and linear components (NLCs) are obtained by analyzing a Hessian matrix composed of the pixel values of the second-difference subimages at each scale. At each pixel, eight objective features are given by the pixel values in the NC subimages at scales 1 to 4 and the NLC subimages at scales 1 to 4. An artificial neural network with these eight features is employed to enhance calcifications on magnification mammograms. Calcifications are finally segmented by applying a gray-level thresholding technique to the enhanced image. With the proposed method, the sensitivity for calcifications within clustered microcalcifications and the number of false positives per image were 96.5% (603/625) and 1.69, respectively, and the average shape accuracy for segmented calcifications was 91.4%. The proposed method, with its high sensitivity while maintaining calcification shapes, should be useful in CADx schemes.
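The final gray-level thresholding step can be sketched in a few lines; the threshold value and the toy "enhanced" image below are illustrative assumptions standing in for the ANN output, not real mammogram data:

```python
import numpy as np

def threshold_segment(enhanced, thresh):
    """Final step of the pipeline above: a gray-level threshold turns
    the ANN-enhanced image into a binary calcification mask."""
    return enhanced >= thresh

# Toy "enhanced" image: two bright calcification-like blobs on a dim
# background.
img = np.full((8, 8), 0.1)
img[1:3, 1:3] = 0.9
img[5:7, 4:7] = 0.8
mask = threshold_segment(img, thresh=0.5)
print(int(mask.sum()))  # → 10 pixels: a 2x2 blob plus a 2x3 blob
```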


International Symposium on Neural Networks | 2003

Effect of regularization term upon fault tolerant training

Haruhiko Takase; Hidehiko Kita; Terumine Hayashi

To enhance the fault tolerance of multi-layer neural networks (MLNs), we previously proposed PAWMA (partially adaptive weight minimization approach). This method minimizes not only the output error but also the sum of squared weights (the regularization term), aiming to decrease the number of connections whose faults strongly degrade the performance of an MLN (important connections). On the other hand, weight decay, which aims to eliminate unimportant connections, is based on the same idea: it is expected to keep important connections while decaying unimportant ones. In this paper, we discuss the apparent contradiction between these two effects of the regularization term. Through experiments, we show that the difference between the two effects arises from the partial application of the regularization term.
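The "partial application" contrast can be sketched numerically: a decay gradient applied only to a masked subset of weights shrinks those weights while leaving the rest untouched. The weight values, the mask, and the constants below are illustrative assumptions, not PAWMA's actual selection rule:

```python
import numpy as np

def partial_weight_decay(w, mask, lam=0.1, lr=0.5, steps=20):
    """Apply the decay gradient 2*lam*w only where mask is True.
    A sketch of partially applying the regularization term: masked
    (here, unimportant) weights shrink, the others are left alone."""
    w = w.copy()
    for _ in range(steps):
        w[mask] -= lr * 2 * lam * w[mask]
    return w

w = np.array([2.0, -1.5, 0.3, -0.2])
mask = np.array([False, False, True, True])  # decay only the small weights
w_new = partial_weight_decay(w, mask)
# The unmasked weights are preserved exactly; the masked ones decay
# geometrically toward zero.
print(w_new[:2].tolist(), float(np.abs(w_new[2:]).max()) < 0.05)
```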

Collaboration


Haruhiko Takase's most frequent collaborators and their affiliations:


Fumio Okuyama

Suzuka University of Medical Science


Koji Yamamoto

Suzuka University of Medical Science


Bruce J. Aronow

Cincinnati Children's Hospital Medical Center
