Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hirokazu Madokoro is active.

Publication


Featured research published by Hirokazu Madokoro.


international symposium on neural networks | 2012

Classification of behavior patterns with trajectory analysis used for event site

Hirokazu Madokoro; Kenya Honma; Kazuhito Sato

This paper presents a method for classification and recognition of behavior patterns based on interest from human trajectories at an event site. Our method creates models using Hidden Markov Models (HMMs) for each human trajectory quantized using One-Dimensional Self-Organizing Maps (1D-SOMs). Subsequently, we apply Two-Dimensional SOMs (2D-SOMs) for unsupervised classification of behavior patterns from features according to the distance between models. Furthermore, we use a Unified distance Matrix (U-Matrix) for visualizing category boundaries based on the Euclidean distance between weights of 2D-SOMs. Our method extracts typical behavior patterns and specific behavior patterns based on interest as ascertained using questionnaires. Our method then visualizes the relations between these patterns. We evaluated our method based on Cross Validation (CV) using only the trajectories of typical behavior patterns. The recognition accuracy improved by 9.6% over that of earlier models. We regard our method as useful for estimating interest from behavior patterns at an event site.
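
As a rough illustration of the first stage of this pipeline, the sketch below quantizes a trajectory with a one-dimensional SOM, producing the discrete symbol sequence on which a per-trajectory HMM could then be trained. The synthetic trajectory, codebook size, and learning schedule are illustrative assumptions, not the paper's settings.

```python
# Minimal 1D-SOM trajectory quantizer (sketch; all hyperparameters assumed).
import numpy as np

def train_1d_som(points, n_units=16, epochs=50, lr0=0.5, sigma0=4.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = points[rng.choice(len(points), n_units)]  # init from data
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5) # shrinking neighborhood
        for p in points[rng.permutation(len(points))]:
            bmu = np.argmin(np.linalg.norm(weights - p, axis=1))
            # Gaussian neighborhood along the 1-D chain of units.
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (p - weights)
    return weights

def quantize(points, weights):
    # Each point becomes the index of its best-matching unit: the symbol
    # sequence that a discrete HMM would be trained on.
    return np.argmin(np.linalg.norm(points[:, None] - weights[None], axis=2), axis=1)

trajectory = np.cumsum(np.random.default_rng(1).normal(size=(200, 2)), axis=0)
print(quantize(trajectory, train_1d_som(trajectory))[:20])
```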


Journal of Computers | 2013

Hardware Implementation of Back-Propagation Neural Networks for Real-Time Video Image Learning and Processing

Hirokazu Madokoro; Kazuhito Sato

This paper presents a digital hardware Back-Propagation (BP) model for real-time learning in the field of video image processing. The model is a layer-parallel architecture using 16-bit fixed-point arithmetic specialized for video image processing. We compared our model with a standard BP model that uses double-precision floating point. Simulation results show that our model has capabilities equal to those of the standard BP model. We implemented the model on an FPGA board that we originally designed and developed for experimental use as a platform for real-time video image processing. Experimental results show that our model performed 100,000 epochs/frame learning, corresponding to 90 MCUPS, and was able to test all pixels of interlaced video images.
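
The toy simulation below captures the core trade-off examined here: running back-propagation with weights rounded to a 16-bit fixed-point format after every update, as fixed-point hardware would, versus full floating point. The Q6.10 format, network size, and XOR task are my own illustrative choices, not the paper's hardware parameters.

```python
# Sketch: BP training with 16-bit fixed-point weight quantization (assumed Q6.10).
import numpy as np

FRAC_BITS = 10  # assumed format: 1 sign + 5 integer + 10 fractional bits

def to_fixed(x):
    # Round to the nearest representable value and saturate to the int16 range.
    q = np.clip(np.round(x * (1 << FRAC_BITS)), -32768, 32767)
    return q / (1 << FRAC_BITS)

rng = np.random.default_rng(0)
W1 = to_fixed(rng.normal(0, 0.5, (2, 4)))
W2 = to_fixed(rng.normal(0, 0.5, (4, 1)))
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)        # XOR toy task

for _ in range(2000):
    h = np.tanh(X @ W1)
    out = np.tanh(h @ W2)
    d_out = (out - y) * (1 - out ** 2)           # output-layer BP delta
    d_h = (d_out @ W2.T) * (1 - h ** 2)          # hidden-layer BP delta
    # Quantize weights after every update, as fixed-point hardware would.
    W2 = to_fixed(W2 - 0.1 * h.T @ d_out)
    W1 = to_fixed(W1 - 0.1 * X.T @ d_h)
print(np.round(out.ravel(), 2))
```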


international symposium on neural networks | 2010

Unsupervised and adaptive category classification for a vision-based mobile robot

Masahiro Tsukada; Hirokazu Madokoro; Kazuhito Sato

This paper presents an unsupervised category classification method for time-series images that combines the incremental learning of Adaptive Resonance Theory-2 (ART-2) with the self-mapping characteristic of Counter Propagation Networks (CPNs). Our method comprises the following procedures: 1) generating visual words using Self-Organizing Maps (SOMs) from the 128-dimensional descriptor at each feature point of the Scale-Invariant Feature Transform (SIFT), 2) forming labels using unsupervised learning of ART-2, and 3) creating and classifying categories on a category map of CPNs for visualizing spatial relations between categories. We use a vision system on a mobile robot to take the time-series images. Experimental results show that our method can classify objects into categories according to how their appearance changes as the robot moves.
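
The following sketch illustrates the idea behind step 2, incremental labeling: each feature vector either resonates with an existing category prototype or founds a new one. It is a toy nearest-prototype variant with a cosine vigilance test, not a faithful ART-2 implementation.

```python
# ART-style incremental labeler (simplified sketch, not actual ART-2).
import numpy as np

def art_like_labels(features, vigilance=0.9):
    prototypes, labels = [], []
    for f in features:
        f = f / (np.linalg.norm(f) + 1e-12)      # unit-normalize input
        if prototypes:
            sims = [p @ f for p in prototypes]   # cosine similarity match
            best = int(np.argmax(sims))
            if sims[best] >= vigilance:          # resonance: update winner
                prototypes[best] = prototypes[best] + 0.2 * (f - prototypes[best])
                prototypes[best] /= np.linalg.norm(prototypes[best])
                labels.append(best)
                continue
        prototypes.append(f)                     # mismatch: found new category
        labels.append(len(prototypes) - 1)
    return np.array(labels)

rng = np.random.default_rng(0)
words = np.vstack([rng.normal(m, 0.05, (30, 8)) for m in (0.2, 0.8, -0.5)])
print(art_like_labels(words, vigilance=0.95))
```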


ieee international conference on automatic face & gesture recognition | 2008

Generation of emotional feature space based on topological characteristics of facial expression images

Masaki Ishii; Kazuhito Sato; Hirokazu Madokoro; Makoto Nishida

This paper proposes a method to generate a subject-specific emotional feature space using the Self-Organizing Map and the Counter Propagation Network. The feature space expresses the correspondence between changes in facial expression patterns and the strength of emotion in a two-dimensional space centered on "pleasantness" and "arousal". Experimental results suggest that our method is useful for estimating the strength and mixture level of the six basic emotions.
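
A minimal counter-propagation network sketch of the kind used here: a Kohonen layer is trained on input features, and a Grossberg layer learns, per winning unit, a two-dimensional output standing in for the pleasantness/arousal coordinates. The data, layer sizes, and learning rates are all assumptions for illustration.

```python
# Minimal CPN: Kohonen layer for clustering, Grossberg layer for the mapping.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))                 # stand-in facial feature vectors
targets = np.tanh(X[:, :2])                    # fake 2-D pleasantness/arousal

n_units = 25
K = rng.normal(size=(n_units, 16))             # Kohonen (input) weights
G = np.zeros((n_units, 2))                     # Grossberg (output) weights

for epoch in range(30):
    for x, t in zip(X, targets):
        w = np.argmin(np.linalg.norm(K - x, axis=1))  # winning unit
        K[w] += 0.1 * (x - K[w])               # move prototype toward input
        G[w] += 0.1 * (t - G[w])               # learn its output coordinates

w = np.argmin(np.linalg.norm(K - X[0], axis=1))
print("predicted coords:", G[w], "target:", targets[0])
```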


international symposium on neural networks | 2010

Segmentation of head MR images using hybrid neural networks of unsupervised learning

Toshimitsu Otani; Kazuhito Sato; Hirokazu Madokoro; Atsushi Inugami

This paper presents an unsupervised segmentation method using hybridized Self-Organizing Maps (SOMs) and Fuzzy Adaptive Resonance Theory (ART) based only on the brightness distribution and characteristics of head MR images. We specifically examine the features of mapping while maintaining topological relations of weights with SOMs and while integrating a suitable number of categories with Fuzzy ART. Our method extracts intracranial regions from head MR images using Level Set Methods (LSMs) of deformable models. For the extracted intracranial regions, our method segments brain tissues with high granularity using SOMs. Subsequently, these regions are integrated with Fuzzy ART while maintaining the relations of anatomical structures of brain tissues and the order of brightness on T2-weighted images. We applied our method to head MR images used at clinical sites and obtained effective and objective segmentation results reflecting the anatomical structural information of the brain, supporting the diagnosis of brain atrophy. Moreover, we applied our method to a head MR image database including data of 30 men and women in their 30s–70s. Results revealed a significant correlation between aging and the expansion of cerebrospinal fluid (CSF).
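
The sketch below mimics the two-stage structure on synthetic intensity data: fine quantization of the brightness distribution first, then merging of nearby fine categories into tissue classes. The batch update drops the SOM neighborhood term (so it is effectively 1-D k-means standing in for the SOM), and the distance-threshold merge only loosely imitates Fuzzy ART integration.

```python
# Two-stage sketch: fine intensity quantization, then category merging.
import numpy as np

rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(m, 5, 2000) for m in (40, 110, 180)])  # 3 tissues

# Stage 1: fine quantization of the brightness distribution.
codebook = np.linspace(image.min(), image.max(), 12)
for _ in range(20):
    bmu = np.argmin(np.abs(image[:, None] - codebook[None]), axis=1)
    for k in range(len(codebook)):
        if np.any(bmu == k):
            codebook[k] = image[bmu == k].mean()
# Keep only codebook entries that actually won data points.
codebook = np.sort(codebook[np.bincount(bmu, minlength=len(codebook)) > 0])

# Stage 2: merge neighboring entries closer than a threshold, loosely
# imitating the Fuzzy ART integration of fine categories.
merge_thresh = 25.0
labels = [0]
for a, b in zip(codebook[:-1], codebook[1:]):
    labels.append(labels[-1] + (1 if b - a > merge_thresh else 0))
print("fine units -> merged tissue labels:", labels)
```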


international conference on control, automation and systems | 2014

Visual saliency based segmentation of multiple objects using variable regions of interest

Ayaka Yamanashi; Hirokazu Madokoro; Yutaka Ishioka; Kazuhito Sato

This paper presents a segmentation method for multiple object regions based on visual saliency. Our method comprises three steps. First, attentional points are detected using saliency maps (SMs). Subsequently, regions of interest (RoIs) are extracted using the scale-invariant feature transform (SIFT). Finally, foreground regions are extracted as object regions using GrabCut. Using RoIs as teaching signals, our method achieves automatic segmentation of multiple objects without learning in advance. In experiments using the PASCAL 2011 dataset, attentional points were extracted correctly from 18 images for two objects and from 25 images for single objects. Segmentation accuracy was 64.1% precision, 62.1% recall, and 57.4% F-measure. Moreover, we applied our method to ten time-series images obtained using a mobile robot; attentional points were extracted correctly in seven images for two objects and three images for single objects. Segmentation accuracy was 58.0% precision, 63.1% recall, and 58.1% F-measure.
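
A rough end-to-end sketch of the three steps using OpenCV. It assumes opencv-contrib-python is installed (for the saliency module), uses a placeholder image path, follows the general recipe rather than the paper's parameters, and derives a single RoI from the most salient point for brevity.

```python
# Saliency -> SIFT-sized RoI -> GrabCut, as a single-object sketch.
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                  # placeholder input image

# Step 1: saliency map and its most attended point.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(img)
y, x = np.unravel_index(np.argmax(sal_map), sal_map.shape)

# Step 2: an RoI around the attentional point; the paper derives RoIs from
# SIFT keypoints, so here the nearest keypoint's scale sets the box size.
kps = cv2.SIFT_create().detect(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
nearest = min(kps, key=lambda k: (k.pt[0] - x) ** 2 + (k.pt[1] - y) ** 2)
r = int(max(nearest.size * 4, 40))
rect = (max(x - r, 0), max(y - r, 0), 2 * r, 2 * r)   # (x, y, w, h)

# Step 3: GrabCut seeded by the RoI extracts the foreground object.
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
object_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("object_mask.png", object_mask.astype(np.uint8))
```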


applied sciences on biomedical and communication technologies | 2011

Unsupervised segmentation for MR brain images

Kazuhito Sato; Sakura Kadowaki; Hirokazu Madokoro; Momoyo Ito; Atsushi Inugami

As described herein, we propose an unsupervised method for segmentation of magnetic resonance (MR) brain images that hybridizes the self-mapping characteristics of 1-D Self-Organizing Maps (SOMs) with the incremental learning functions of fuzzy Adaptive Resonance Theory (ART). Because the proposed method requires appropriate parameters to segment the tissues necessary for brain atrophy diagnosis (such as cerebrospinal fluid, gray matter, and white matter), we first derive the optimal parameter set through preliminary experiments. The main contribution of this work is to evaluate the effectiveness of the proposed method against conventional methods that are highly accurate as classification techniques: Fuzzy C-means (FCM) and Expectation Maximization Gaussian Mixture (EM-GM), which require the number of clusters to be set in advance, and Mean Shift (MS), which does not. In comparative experiments using two metrics, we confirmed that our method achieves higher accuracy than these conventional methods. Additionally, we propose a Computer-Aided Diagnosis (CAD) system for brain dock examinations based on case analyses of diagnostic reading, and we constructed a prototype system for reducing the load on diagnosticians during quantitative analysis of the degree of brain atrophy. Field tests on 193 brain dock medical examinees reveal that the system efficiently supports diagnostic work in the clinical field: the alteration of brain atrophy attributable to aging can be quantified easily, irrespective of the diagnostician.
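
For reference, below is a compact implementation of Fuzzy C-means, one of the baselines compared here, run on synthetic one-dimensional intensities standing in for MR voxel brightness. The fuzzifier m=2 and three clusters are common defaults, assumed rather than taken from the paper.

```python
# Fuzzy C-means on 1-D intensities (baseline method, standard formulation).
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # random initial fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)  # membership-weighted means
        d = np.abs(x[:, None] - centers[None]) + 1e-12
        # Standard FCM membership update from inverse relative distances.
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, u

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 6, 500) for mu in (45, 115, 185)])
centers, u = fuzzy_c_means(intensities)
print("tissue centers:", np.sort(np.round(centers, 1)))
```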


robot and human interactive communication | 2014

Adaptive Category Mapping Networks for all-mode topological feature learning used for mobile robot vision

Hirokazu Madokoro; Kazuhito Sato; Nobuhiro Shimoi

This paper presents an adaptive and incremental learning method to visualize series data on a category map. We designate this method as Adaptive Category Mapping Networks (ACMNs). The architecture of ACMNs comprises three modules: a codebook module, a labeling module, and a mapping module. The codebook module converts input features into codebooks as low-dimensional vectors using Self-Organizing Maps (SOMs). The labeling module creates labels as candidate categories based on the incremental learning of Adaptive Resonance Theory (ART). The mapping module visualizes spatial relations among categories on a category map using Counter Propagation Networks (CPNs). ACMNs realize supervised, semi-supervised, and unsupervised learning as all-mode learning by switching network structures, including connections. Experimental results obtained using two open datasets reveal that the recognition accuracy of our method is superior to that of the earlier method. Moreover, we address applications of the visualizing function using category maps.
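
The skeleton below reduces the three modules to stubs so the all-mode switching logic is visible: a teacher label is used when one arrives, otherwise an ART-style label is generated incrementally. The class and method names are hypothetical, not from the paper.

```python
# All-mode learning skeleton: teacher labels when available, ART-style otherwise.
import numpy as np

class ACMNSketch:
    def __init__(self, vigilance=0.9):
        self.prototypes = []                   # ART-style labeling module state
        self.vigilance = vigilance

    def _art_label(self, f):
        f = f / (np.linalg.norm(f) + 1e-12)
        if self.prototypes:
            sims = [p @ f for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:   # resonance with existing category
                return best
        self.prototypes.append(f)              # mismatch: new candidate category
        return len(self.prototypes) - 1

    def observe(self, feature, label=None):
        # Supervised mode when a teacher label arrives, semi-supervised when
        # labels are intermittent, unsupervised when they never arrive.
        return label if label is not None else self._art_label(feature)

net = ACMNSketch()
stream = np.random.default_rng(0).normal(size=(6, 8))
labels = [0, None, None, 1, None, None]        # intermittent supervision
print([net.observe(f, l) for f, l in zip(stream, labels)])
```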


international conference on control, automation and systems | 2014

Parallel Implementation of Saliency Maps for Real-Time Robot Vision

Keigo Shirai; Hirokazu Madokoro; Satoshi Takahashi; Kazuhito Sato

This paper presents a parallel implementation model for real-time video image processing of saliency maps (SMs), a model that predicts gaze directions based on human visual attention. SMs can extract regions of high visual saliency that differ from their surroundings in scene images, and they are used for various computer vision applications because of their sparse feature representation. As the implementation device, we use IMAPCAR2, a single instruction multiple data (SIMD) processor with 64 processing elements (PEs). The features of IMAPCAR2 are high performance, low power consumption, and easy programming using one-dimensional C (1DC), an ANSI-C-compatible language. We compared the performance of a sequential model and our parallel model for all steps: the processing speed of our model was 250 times that of the sequential model and 5.6 times that of an existing parallel model. For real-time video image processing, we implemented our model on an IMAPCAR2 evaluation board. The processing cost was 47.5 ms for video images of 640 × 240 pixel resolution.
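
The data-parallel character that a SIMD machine such as IMAPCAR2 exploits, identical arithmetic applied across all columns in lockstep, can be imitated with numpy vectorization. The center-surround contrast below is a generic saliency-map ingredient, not the paper's exact pipeline, and the frame is random stand-in data.

```python
# Data-parallel center-surround contrast on a 640x240 frame (generic sketch).
import numpy as np

def box_blur(img, k):
    # Separable box filter via cumulative sums: every column undergoes the
    # same arithmetic, the shape of computation a SIMD array exploits.
    pad = np.pad(img, k, mode="edge")
    c = pad.cumsum(0)
    rows = c[2 * k:] - c[:-2 * k]              # sliding window over rows
    c = rows.cumsum(1)
    out = c[:, 2 * k:] - c[:, :-2 * k]         # sliding window over columns
    return out / (2 * k) ** 2

frame = np.random.default_rng(0).random((240, 640))   # stand-in luminance frame
center = box_blur(frame, 2)
surround = box_blur(frame, 8)
saliency = np.abs(center - surround)           # center-surround contrast
print(saliency.shape, float(saliency.max()))
```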


Journal of Multimedia | 2012

Facial Expression Spatial Charts for Describing Dynamic Diversity of Facial Expressions

Hirokazu Madokoro; Kazuhito Sato

This paper presents a new framework to describe individual facial expression spaces, particularly addressing the dynamic diversity of facial expressions that appear as an exclamation or emotion, to create a unique space for each person. We name this framework Facial Expression Spatial Charts (FESCs). FESCs are created using Self-Organizing Maps (SOMs) and Fuzzy Adaptive Resonance Theory (ART) of unsupervised neural networks. For facial images with sparse representations emphasized using Gabor wavelet filters, SOMs extract topological information from facial expression images and classify them into categories in a fixed space decided by the number of units on the mapping layer. Subsequently, Fuzzy ART integrates the categories classified by SOMs using adaptive learning functions under fixed granularity controlled by the vigilance parameter. The categories integrated by Fuzzy ART are matched to Expression Levels (ELs) that quantify facial expression intensity based on the arrangement of facial expressions on Russell's circumplex model. We designate the category containing the neutral facial expression as the basis category. FESCs can thereby visualize and represent the dynamic diversity of facial expressions consisting of ELs extracted from facial expressions. In the experiment, we created an original facial expression dataset consisting of three facial expressions (happiness, anger, and sadness) obtained from 10 subjects during 7–20 weeks at one-week intervals. Results show that the method can adequately display the dynamic diversity of facial expressions between subjects, in addition to temporal changes in each subject. Moreover, we used stress measurement sheets to obtain temporal changes of stress for analyzing the psychological effects of the stress that subjects feel. We estimated stress levels of four grades using Support Vector Machines (SVMs). The mean estimation rates for all 10 subjects and for the 5 subjects measured over more than 10 weeks were, respectively, 68.6% and 77.4%.
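
As a small illustration of the Gabor front end mentioned above, the sketch below builds a four-orientation Gabor bank and pools the filter responses into a feature vector of the kind a SOM stage could consume. The kernel parameters and pooling are generic choices, not the paper's configuration.

```python
# Generic four-orientation Gabor bank with mean-response pooling.
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

face = np.random.default_rng(0).random((64, 64))  # stand-in face patch
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

# Feature vector: mean absolute response per orientation (circular
# convolution via FFT, which is adequate for a sketch).
feats = []
for g in bank:
    resp = np.real(np.fft.ifft2(np.fft.fft2(face) * np.fft.fft2(g, face.shape)))
    feats.append(np.abs(resp).mean())
print(np.round(feats, 4))
```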

Collaboration


Dive into Hirokazu Madokoro's collaboration network.

Top Co-Authors

Kazuhito Sato (Akita Prefectural University)
Nobuhiro Shimoi (Akita Prefectural University)
Sakura Kadowaki (Akita Prefectural University)
Kazuhisa Nakasho (Akita Prefectural University)
Momoyo Ito (University of Tokushima)
Masaki Ishii (Tokyo Institute of Technology)
Atsushi Inugami (Akita Prefectural University)
Carlos Cuadra (Akita Prefectural University)
Masahiro Tsukada (Akita Prefectural University)