
Publication


Featured research published by Susumu Maruno.


IEEE Transactions on Neural Networks | 1993

Reduction of required precision bits for back-propagation applied to pattern recognition

Shigeo Sakaue; Toshiyuki Kohda; Hiroshi Yamamoto; Susumu Maruno; Yasuharu Shimeki

The number of precision bits for operations and data is limited in hardware implementations of backpropagation (BP). Reducing the rounding error caused by this limited precision is crucial in the implementation. The new learning algorithm overestimates significant errors to alleviate underflow, and omits weight updates for correctly recognized patterns. Whereas the conventional BP algorithm minimizes the squared error between output signals and supervising data, the new algorithm minimizes a weighted error function. In a learning simulation of multifont capital-letter recognition, the algorithm converged to 100% recognition accuracy with only 8-bit precision. In addition, recognition accuracy for characters that did not appear in the training data reached 94.9%, performance equivalent to that of conventional BP with 12-bit precision. Moreover, the weighted error function performs well even with only a small number of hidden neurons. Consequently, the algorithm reduces the required amount of weight memory.
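The core idea of the weighted error function can be sketched as follows. This is a minimal illustration, not the published algorithm: the `margin` and `gain` parameters are assumptions standing in for the paper's notion of "significant error" and its overestimation.

```python
import numpy as np

def weighted_error_gradient(output, target, margin=0.1, gain=4.0):
    """Hypothetical sketch of a weighted-error-function (WEF) gradient.

    Instead of the plain squared-error gradient (output - target), errors
    larger than `margin` are amplified by `gain` so they survive rounding
    at low fixed-point precision, while errors within `margin` of the
    target (already-correct outputs) are zeroed, skipping their updates.
    """
    err = output - target
    significant = np.abs(err) > margin
    return np.where(significant, gain * err, 0.0)
```

A correctly recognized output thus contributes no update at all, while a significant error is boosted well above the underflow threshold of a low-precision multiplier.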


IEEE Journal on Selected Areas in Communications | 1994

Quantizer neuron model and neuroprocessor-named quantizer neuron chip

Susumu Maruno; Toshiyuki Kohda; Hiroyuki Nakahira; Shiro Sakiyama; Masakatsu Maruyama

A quantizer neuron model and a hardware implementation of the model are described. A multifunctional layered network (MFLN) with quantizer neurons is proposed and applied to a character recognition system. Each layer of the MFLN has a specific function defined by the quantizer input, and the weights between neurons are set dynamically according to the quantizer inputs. The learning speed of the MFLN is extremely fast compared with conventional multilayered perceptrons using backpropagation, and its structure is suitable for supplemental learning with extraneous learning data sets. We tested the learning speed against three other network models: RCE networks, LVQ3, and a multilayered neural network with backpropagation. Based on the simulations, we developed a quantizer neuron chip (QNC) using two newly developed schemes. The QNC simulates the MFLN and has 4736 neurons and 2,000,000 synaptic weights. The processing speed of the chip reaches 20.3 billion connections per second (20.3 GCPS) for recognition and 20 million connection updates per second (20 MCUPS) for learning. The QNC is implemented in a 1.2 μm double-metal CMOS sea-of-gates process and contains 27,000 gates on a 10.99 × 10.93 mm² die. The neuroboard, which consists of a main board with a QNC and a memory board for the synaptic weights, can be connected to a host personal computer and used for image or character recognition and learning. Together, the quantizer neuron model, the QNC, and the neuroboard realize adaptive learning and filtering.
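The defining trait described above, weights selected dynamically by a quantizer input, can be sketched in a few lines. This is a toy illustration under assumed conventions (quantizer input normalized to [0, 1), one weight row per quantization level), not the chip's actual design.

```python
import numpy as np

class QuantizerNeuron:
    """Toy sketch of a quantizer neuron (illustrative, not the QNC design)."""

    def __init__(self, levels, n_inputs, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.levels = levels
        # One weight row per quantization level of the quantizer input.
        self.weights = rng.standard_normal((levels, n_inputs))

    def forward(self, quantizer_input, data_inputs):
        # The quantizer input is discretized into one of `levels` bins, and
        # that bin index selects which weight row connects the data inputs
        # to the output -- i.e. the weights are set dynamically by the
        # quantizer input, as in the MFLN description above.
        idx = min(int(quantizer_input * self.levels), self.levels - 1)
        return float(self.weights[idx] @ data_inputs)
```

Because learning only touches the weight row addressed by the current quantizer input, updates are local table writes rather than global gradient steps, which is consistent with the fast learning the abstract reports.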


international symposium on neural networks | 1993

Recognition of handwritten numeric characters using neural networks designed on approximate reasoning architecture

Yoshihiro Kojima; Hiroshi Yamamoto; Toshiyuki Kohda; S. Sakaue; Susumu Maruno; Yasuharu Shimeki; K. Kawakami; M. Mizutani

We have developed a handwritten numeric character recognition system with neural networks based on an approximate reasoning architecture (NARA). Handwritten character recognition is one of the most difficult tasks in pattern recognition because of the variation of handwritten images even within the same character category. NARA, which consists of a classifier of the input data, several sub-neural networks, and an integrator of the sub-networks' outputs, realizes stable recognition over large variations of handwritten character images. It achieved a correct answer rate of 95.41%, an error rate of 0.20%, and a rejection rate of 4.38%.
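The classifier/sub-network/integrator pipeline can be sketched as below. All names and the `accept_threshold` are assumptions for illustration; the rejection branch is what produces a rejection rate alongside the correct and error rates reported above.

```python
def nara_recognize(features, classify, sub_networks, accept_threshold=0.9):
    """Hypothetical sketch of a NARA-style pipeline.

    A classifier picks a sub-network for the input, that sub-network
    scores each digit class, and the integrator accepts the top class
    only if its score clears a rejection threshold (threshold and
    interfaces are assumptions, not the published architecture).
    """
    group = classify(features)               # coarse classification of the input
    scores = sub_networks[group](features)   # per-class scores from that expert
    best = max(scores, key=scores.get)
    if scores[best] < accept_threshold:
        return None                          # reject: counted in the rejection rate
    return best
```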


international symposium on neural networks | 1993

Adaptive segmentation of quantizer neuron architecture (ASQA)

Susumu Maruno; Taro Imagawa; Toshiyuki Kohda; Yoshihiro Kojima; Hiroshi Yamamoto; Yasuharu Shimeki

The authors have previously proposed a multifunctional layered network (MFLN) employing a quantizer neuron model and shown that the learning speed of the MFLN is faster than that of RCE networks, LVQ3, and multilayered neural networks with backpropagation. They also showed that the MFLN has excellent supplemental learning performance and can realize adaptive learning and filtering. One of the biggest issues in neural networks is how to design the network structure. In this paper, the authors propose an adaptive segmentation of quantizer neuron architecture (ASQA) to address this issue and apply it to handwritten character recognition. Networks based on ASQA consist of quantizer neurons that proliferate by themselves and automatically form the optimum network structure for recognition during training. As a result, there is no need to design the network structure, and the average accuracy over the closed and open tests on 27,200 handwritten numeric characters increased to 99.6%. Tuning the segmentation threshold of the quantizer neurons yields the optimum network size for ASQA.
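The growth mechanism, neurons proliferating during training under a segmentation threshold, can be illustrated with a toy one-dimensional analogue. This is not the published ASQA algorithm, just a sketch of the threshold-driven proliferation idea.

```python
def asqa_fit(samples, threshold):
    """Toy 1-D analogue of adaptive segmentation (not the published ASQA).

    A new quantizer neuron (here, a segment center) is created whenever a
    training sample falls farther than `threshold` from every existing
    center, so the network structure grows by itself during training and
    never has to be designed by hand.
    """
    centers = []
    for x in samples:
        if not centers or min(abs(x - c) for c in centers) > threshold:
            centers.append(x)   # proliferate: add a neuron for this region
    return centers
```

The trade-off the abstract mentions is visible even here: a small threshold proliferates many centers (a larger network), a large one yields a coarser segmentation, so the threshold controls the final network size.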


international symposium on neural networks | 1993

Segmentation of handwritten Japanese character strings with Hopfield type neural networks

Hiroshi Yamamoto; S. Sakaue; Susumu Maruno; Yasuharu Shimeki

Character segmentation is an essential preprocessing step for character recognition, but it has been an extremely complicated task in Japanese document recognition. The difficulties stem from the irregular sizes and placement of Japanese characters, in addition to the existence of separated characters. We have therefore developed a new segmentation method using Hopfield-type neural networks and applied it to handwritten Japanese character strings. The general constraints for segmenting Japanese characters are expressed as energy functions in the networks, which can then segment Japanese character strings flexibly. Our experiments showed a correct segmentation rate of 82.8%, compared with 75.9% for the conventional method.
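The constraints-as-energy approach can be sketched with a generic Hopfield-style minimizer. In the segmentation network, units would represent candidate cut points and the couplings would encode size and spacing constraints; those specifics are assumptions here, and the code shows only the generic mechanism.

```python
import numpy as np

def hopfield_minimize(W, bias, state, steps=100):
    """Minimal Hopfield-style sketch (generic, not the paper's network).

    Constraints are encoded in an energy E = -0.5 s^T W s - bias^T s over
    binary unit states s; asynchronous updates set each unit to whichever
    value lowers E, and the loop stops at a fixed point.
    """
    n = len(state)
    for _ in range(steps):
        changed = False
        for i in range(n):
            activation = W[i] @ state + bias[i]
            new = 1.0 if activation > 0 else 0.0
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:      # fixed point: a (locally) minimum-energy state
            break
    return state
```

In the toy test below, a negative coupling between two units plays the role of a spacing constraint ("these two cuts cannot both be active"), and the biases play the role of per-cut evidence.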


international symposium on neural networks | 1993

Object recognition system using temporal pattern recognition networks with quantizer neuron chip

Susumu Maruno; Taro Imagawa; Toshiyuki Kohda; Yasuharu Shimeki

One of the biggest issues in object recognition is achieving rotation invariance in a fluctuating, noisy environment. We developed an object recognition system using a temporal pattern recognition network with a quantizer neuron chip (QNC) and a φ-s transformation of shapes, and applied it to object recognition. The shape of the object is converted to a series of angles as a function of position along the circumference of the shape (φ-s data) and can therefore be treated as a series of temporal patterns. The system consists of a multifunctional layered network (MFLN) with the QNC and a layer of neurons with self-feedback (the self-feedback layer). The self-feedback layer unifies the temporal recognition results of the QNC network over a period defined by the time constant of the self-feedback, realizing selective attention to certain areas of a series of temporal patterns. As a result, the system achieves rotation invariance in recognition, and we obtained 100% recognition accuracy over 50 trials with fluctuating noise captured by a CCD camera.
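A φ-s transform of a polygonal contour can be sketched as below (an illustrative version; the paper's exact parameterization is not specified here). Rotating the shape shifts every angle by a constant and rotates the starting point, which is what lets the downstream network treat rotation as a shift of a temporal pattern.

```python
import math

def phi_s(points):
    """Sketch of a phi-s transform for a closed polygonal contour.

    Walks the contour and records the tangent angle phi of each edge as a
    function of the accumulated arc length s, yielding (s, phi) pairs that
    can be fed to a temporal pattern recognizer.
    """
    series = []
    s = 0.0
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        series.append((s, math.atan2(y1 - y0, x1 - x0)))  # edge direction
        s += math.hypot(x1 - x0, y1 - y0)                 # edge length
    return series
```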


international symposium on neural networks | 1991

Reduction of necessary precision for the learning of pattern recognition

S. Sakaue; Toshiyuki Kohda; Hiroshi Yamamoto; Susumu Maruno; Yasuharu Shimeki

The authors propose a novel learning algorithm with a weighted error function (WEF). Using the WEF, they have reduced the precision necessary for learning multifont alphanumeric recognition to 10-bit fixed point. The WEF raises recognition accuracy by more than 25% when the precision of all operations (including multiplication and addition) and of all data (including weights and backpropagation signals) is limited to 10-bit fixed point. This improves the feasibility of analog implementation and lessens the data width of digital implementation. The performance of the WEF is high even with a small number of hidden neurons, which enables a reduction of weight memory. Furthermore, the WEF accelerates learning and thus refines the adaptability of backpropagation.
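The limited-precision setting the WEF targets can be made concrete with a fixed-point rounding sketch. The split between integer and fractional bits is an assumption; the point is that small gradients round to zero (underflow), which is exactly the failure mode the WEF is designed to survive.

```python
def to_fixed_point(x, frac_bits=8, total_bits=10):
    """Sketch of `total_bits` two's-complement fixed-point quantization
    with `frac_bits` fractional bits (bit split is an assumption).

    Values are scaled, rounded, and saturated; anything smaller than
    half an LSB rounds to exactly zero, i.e. underflows.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))  # round, then saturate
    return q / scale
```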


The Journal of The Institute of Image Information and Television Engineers | 1996

Solid State Imaging Techniques. Adaptive Gamma Processing of the Video Camera for the Expansion of the Dynamic Range.

Shigeo Sakaue; Akihiro Tamura; Masaaki Nakayama; Susumu Maruno

We have developed a new signal processing method for expanding the dynamic range of a video camera. A variable, nonlinear gamma characteristic is applied to the input image depending on the distribution of the luminance. For back-lit images, we set the gamma characteristic so as to amplify the luminance of the dark pixels while preserving the contrast of the bright pixels. We established the decision rule for the gamma characteristic using the learning algorithms of neural networks so that the decision rule corresponds to human vision. The implementation of the gamma decision rule consists of a cascade connection of RAMs, which decreases the required total RAM capacity to 1/100 of that of a single-RAM implementation. The new method expands the dynamic range by a factor of three.
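The adaptive-gamma idea can be illustrated with a toy rule. The thresholds and gamma values below are assumptions for illustration, standing in for the learned decision rule described above.

```python
def adaptive_gamma(pixels, dark_fraction_threshold=0.5,
                   gamma_backlit=0.5, gamma_normal=1.0):
    """Toy sketch of luminance-adaptive gamma (illustrative parameters,
    not the paper's learned decision rule).

    If most pixels are dark (a back-lit scene), apply gamma < 1, which
    amplifies dark pixels while leaving bright pixels near full scale;
    otherwise pass the image through unchanged.
    """
    dark = sum(1 for p in pixels if p < 0.25) / len(pixels)
    g = gamma_backlit if dark > dark_fraction_threshold else gamma_normal
    return [p ** g for p in pixels]   # pixels assumed normalized to [0, 1]
```

In hardware, `p ** g` for each selected characteristic would live in a lookup table; cascading small tables (scene class first, then pixel value) is what keeps the total RAM capacity far below one monolithic table indexed by both.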


Systems and Computers in Japan | 1995

Character recognition system with cooperation of pattern and symbolic processing

Hisao Niwa; Hiroshi Yamamoto; Yoshihiro Kojima; Yasuharu Shimeki; Susumu Maruno; Kazuhiro Kayashima

A newly developed character recognition method is proposed that can be applied to low-quality printed documents. The method combines pattern processing with neural networks and symbolic processing with knowledge of language. If an error occurs in one part, another part detects it and sends the error information to all parts. The process iterates until no error is detected, at which point the recognition result is obtained. A character recognition rate of 98.4 percent is achieved with this method, 2.8 percent higher than that of a conventional method with no information exchange among processing parts.
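The cooperation loop can be caricatured as below. This is a drastic simplification: a lexicon lookup stands in for "symbolic processing with knowledge of language", and best-first backtracking over per-position candidates stands in for the error-information exchange, so treat it purely as a sketch of the detect-and-retry idea.

```python
from itertools import product

def cooperative_recognize(candidates, lexicon):
    """Toy sketch of pattern/symbolic cooperation (not the paper's method).

    The pattern stage supplies ranked character candidates per position
    (best first); the symbolic stage rejects words outside the lexicon,
    and the loop backs off to lower-ranked candidates until no error is
    detected.
    """
    for word in product(*candidates):   # best-first per position
        w = "".join(word)
        if w in lexicon:                # symbolic check passes: done
            return w
    return None                         # every combination was rejected
```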


international symposium on neural networks | 1991

Multifunctional layered network with quantizer neurons

Susumu Maruno; Toshiyuki Kohda; Yoshihiro Kojima; S. Sakaue; Hiroshi Yamamoto; Yasuharu Shimeki

The authors propose a multifunctional layered network (MFLN) with a quantizer neuron model and describe the principles of the quantizer neuron and the structure of the network for a character recognition system. Each layer of the MFLN has a specific function defined by the quantizer input of the quantizer neuron, and its learning speed is extremely fast. The authors applied it to a character recognition system and tested its initial and supplemental learning performance against three other network models (RCE networks, LVQ3, and a multilayered neural network with backpropagation). For initial learning of ten fonts, the MFLN is the fastest, 40 times faster than the multilayered neural network with backpropagation. For supplemental learning with seven further fonts, the MFLN is again the fastest, 600 times faster than the multilayered neural network with backpropagation. The recognition rate for the ten initially learned fonts remains 97.4% after the MFLN learns the supplementary fonts, the lowest degradation of initially learned fonts among the models tested.
