Publication


Featured research published by Rin-ichiro Taniguchi.


advanced video and signal based surveillance | 2006

Dynamic Control of Adaptive Mixture-of-Gaussians Background Model

Atsushi Shimada; Daisaku Arita; Rin-ichiro Taniguchi

We propose a method for creating a background model of non-stationary scenes. Each pixel has a dynamic Gaussian mixture model, and our approach automatically changes the number of Gaussians at each pixel. The number of Gaussians increases when pixel values change frequently, for example because of illumination changes or object motion. Conversely, when pixel values stay constant for a while, some Gaussians are eliminated or merged, which reduces computation time. We conducted experiments to investigate the effectiveness of our approach.
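A minimal per-pixel sketch of this idea (not the authors' implementation; the learning rate, thresholds, and initial variance below are illustrative assumptions):

```python
import numpy as np

class AdaptivePixelGMM:
    """Per-pixel Gaussian mixture whose component count adapts to the scene.
    Illustrative sketch; parameter values are assumptions, not the paper's."""

    def __init__(self, max_gaussians=5, match_thresh=2.5, lr=0.05):
        self.means, self.vars, self.weights = [], [], []
        self.max_gaussians, self.match_thresh, self.lr = max_gaussians, match_thresh, lr

    def update(self, x):
        # Find a Gaussian that explains x (within match_thresh std devs).
        matched = None
        for i, (m, v) in enumerate(zip(self.means, self.vars)):
            if abs(x - m) < self.match_thresh * np.sqrt(v):
                matched = i
                break
        # Standard MoG weight update: w_i <- (1 - lr) * w_i + lr * [i == matched]
        for i in range(len(self.weights)):
            self.weights[i] = (1 - self.lr) * self.weights[i] + self.lr * (i == matched)
        if matched is not None:
            m, v = self.means[matched], self.vars[matched]
            self.means[matched] = m + self.lr * (x - m)
            self.vars[matched] = v + self.lr * ((x - m) ** 2 - v)
        elif len(self.means) < self.max_gaussians:
            # Frequently changing pixel values: a new Gaussian is added.
            self.means.append(float(x)); self.vars.append(30.0); self.weights.append(self.lr)
        # Constant pixel values: negligible Gaussians are eliminated.
        keep = [i for i, w in enumerate(self.weights) if w > 0.01]
        self.means = [self.means[i] for i in keep]
        self.vars = [self.vars[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]
        return matched is not None  # True if x is explained by the background
```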


international conference on pattern recognition | 2006

Early Recognition and Prediction of Gestures

Akihiro Mori; Seiichi Uchida; Ryo Kurazume; Rin-ichiro Taniguchi; Tsutomu Hasegawa; Hiroaki Sakoe

This paper is concerned with early recognition and prediction algorithms for gestures. Early recognition provides recognition results before an input gesture is completed; motion prediction uses early recognition to predict the performer's subsequent posture. In addition, this paper introduces a gesture network for improving the performance of these algorithms. The performance of the proposed algorithm was evaluated through experiments on real-time gesture control of a humanoid robot.
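A sketch of the early-recognition idea, assuming gestures are sequences of posture feature vectors matched against templates; the prefix distance and decision margin below are illustrative, not the paper's criterion:

```python
import numpy as np

def early_recognize(frames_so_far, templates, margin=0.2):
    """Compare the observed prefix with the same-length prefix of each template;
    commit to a result as soon as the best candidate leads by a clear margin."""
    t = len(frames_so_far)
    scores = []
    for name, tmpl in templates.items():
        if len(tmpl) < t:
            continue  # template shorter than the observation; skip it
        d = np.linalg.norm(np.asarray(frames_so_far) - np.asarray(tmpl[:t])) / t
        scores.append((d, name))
    scores.sort()
    if len(scores) >= 2 and scores[0][0] < (1 - margin) * scores[1][0]:
        return scores[0][1]  # early decision before the gesture completes
    return None  # not yet distinctive enough

def predict_next(frames_so_far, templates, name):
    # Prediction: once a template is selected, its remaining frames
    # forecast the performer's subsequent posture.
    tmpl = templates[name]
    t = len(frames_so_far)
    return tmpl[t] if t < len(tmpl) else tmpl[-1]
```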


international conference on pattern recognition | 2000

Recognition of local features for camera-based sign language recognition system

I. Imagawa; Hideaki Matsuo; Rin-ichiro Taniguchi; Daisaku Arita; Shan Lu; Seiji Igi

A sign language recognition system must use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present a local feature recognizer suitable for such a system. Our basic approach is to represent the hand images extracted from sign-language images as symbols, each corresponding to a cluster produced by a clustering technique. The clusters are created from a training set of extracted hand images so that similar appearances are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases.
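The eigenspace-plus-clustering step could look like the following sketch (the PCA dimensionality and cluster count are assumptions, and scikit-learn stands in for whatever the authors used):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def build_symbolizer(train_images, n_components=20, n_symbols=64):
    """Training: project extracted hand images onto an eigenspace and cluster
    them, so similar appearances fall into the same cluster (symbol)."""
    X = np.asarray([im.ravel() for im in train_images], dtype=np.float64)
    pca = PCA(n_components=n_components).fit(X)
    km = KMeans(n_clusters=n_symbols, n_init=10).fit(pca.transform(X))
    return pca, km

def to_symbol(pca, km, image):
    """Recognition: each incoming hand image becomes the symbol of its
    nearest cluster in the eigenspace."""
    return int(km.predict(pca.transform(image.ravel()[None, :]))[0])
```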


advanced video and signal based surveillance | 2007

A fast algorithm for adaptive background model construction using parzen density estimation

Tatsuya Tanaka; Atsushi Shimada; Daisaku Arita; Rin-ichiro Taniguchi

Non-parametric representation of the pixel intensity distribution is quite effective for constructing a proper background model and detecting foreground objects accurately. From the viewpoint of practical application, however, the computational cost of estimating the distribution should be reduced. In this paper, we present fast estimation of the probability density function (PDF) of pixel values using Parzen density estimation, together with foreground object detection based on the estimated PDF. The PDF is computed by partially updating the PDF estimated at the previous frame, which greatly reduces the cost of the estimation. The background model thus adapts quickly to changes in the scene, so foreground objects can be detected robustly. Several experiments show the effectiveness of our approach.
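A sketch of the partial-update idea for one pixel: rather than recomputing the kernel density over the whole sample window each frame, add the newest sample's kernel and subtract the oldest one's. The window size, kernel, and thresholds below are illustrative assumptions:

```python
import numpy as np
from collections import deque

class IncrementalParzen:
    """Parzen (kernel) density over the last N values of one pixel,
    maintained incrementally frame to frame. Illustrative sketch only."""

    def __init__(self, window=100, bins=256, bandwidth=3):
        self.samples = deque(maxlen=window)
        self.pdf = np.zeros(bins)
        # Discretized Gaussian kernel shared by every sample.
        xs = np.arange(-3 * bandwidth, 3 * bandwidth + 1)
        k = np.exp(-0.5 * (xs / bandwidth) ** 2)
        self.kernel, self.offsets, self.bins = k / k.sum(), xs, bins

    def _accumulate(self, value, sign):
        idx = np.clip(value + self.offsets, 0, self.bins - 1)
        np.add.at(self.pdf, idx, sign * self.kernel)

    def update(self, value):
        if len(self.samples) == self.samples.maxlen:
            self._accumulate(self.samples[0], -1.0)  # drop the oldest kernel
        self.samples.append(value)
        self._accumulate(value, +1.0)                # add the newest kernel

    def is_foreground(self, value, thresh=0.5):
        # Low estimated density at this value: not explained by the background.
        return self.pdf[value] / max(len(self.samples), 1) < thresh / self.bins
```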


computer vision and pattern recognition | 2012

Evaluation report of integrated background modeling based on spatio-temporal features

Yosuke Nonaka; Atsushi Shimada; Hajime Nagahara; Rin-ichiro Taniguchi

We report evaluation results for an integrated background modeling method based on spatio-temporal features. The method consists of three complementary approaches: pixel-level, region-level, and frame-level background modeling. The pixel-level model approximates the background with a probability density function (PDF), estimated non-parametrically using Parzen density estimation. The region-level model evaluates the local texture around each pixel while reducing the effects of variations in lighting. The frame-level model detects sudden, global changes of image brightness and estimates the current background image from the input image by referring to a background model image. Objects are then extracted by background subtraction. Fusing these approaches realizes robust object detection under varying illumination.
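One plausible way the three detectors could be fused, shown only as a sketch; the actual combination rule is not spelled out in the abstract, so this logic is an assumption:

```python
import numpy as np

def integrated_foreground(pixel_mask, region_mask, frame_changed,
                          frame_bg, image, thresh=30):
    """Fuse pixel-level, region-level, and frame-level decisions (sketch)."""
    if frame_changed:
        # Sudden global brightness change: fall back to subtraction against
        # the background image estimated by the frame-level model.
        return np.abs(image.astype(int) - frame_bg.astype(int)) > thresh
    # Otherwise require agreement between the statistical pixel-level model
    # and the illumination-robust region-level (texture) model.
    return pixel_mask & region_mask
```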


international conference on image analysis and processing | 1999

A real-time motion capture system with multiple camera fusion

Satoshi Yonemoto; Asuka Matsumoto; Daisaku Arita; Rin-ichiro Taniguchi

This paper presents a real-time motion capture system for 3D multi-part objects, whose purpose is to seamlessly map objects in the real world into virtual environments. In general, virtual environment applications such as seamless man-machine interaction require the system to estimate accurate motion parameters in real time for natural objects such as human bodies. To meet this requirement, we have been developing a vision-based motion capture system that reconstructs the time-varying motion parameters of 3D multi-part objects. The advantage of such a vision-based system is that other scene parameters, such as shape and surface properties, can be acquired at the same time, using the same equipment that measures motion. In this paper, as our first system, we implement a color-marker-based motion capture system that realizes multi-view fusion, and we demonstrate that our motion capture and reconstruction system works in real time on a PC cluster.
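The multi-view fusion step for a single color marker is, in standard form, a linear triangulation from calibrated cameras; the sketch below shows the classic DLT formulation as one way it could be done, not the paper's exact pipeline:

```python
import numpy as np

def triangulate_marker(proj_mats, points_2d):
    """DLT triangulation of one marker seen by several calibrated cameras.
    proj_mats: list of 3x4 projection matrices; points_2d: list of (u, v)."""
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        A.append(u * P[2] - P[0])   # each view contributes two linear constraints
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null-space solution
    return X[:3] / X[3]             # homogeneous -> Euclidean 3D point
```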


international parallel and distributed processing symposium | 1990

Datarol: a massively parallel architecture for functional languages

Makoto Amamiya; Rin-ichiro Taniguchi

We propose a parallel machine architecture that incorporates an ultra-multiprocessing facility for the parallel execution of functional programs. The machine performs parallel execution along a multi-thread control flow called datarol. In a datarol program, instead of using a program counter, the instructions to be executed next are explicitly specified by the preceding instructions. This explicitly specified continuation linkage enables the concurrent execution of instructions from different function instances, as well as the parallel execution of multi-thread control flow within a function instance. Based on this continuation-based execution model, the datarol processor is designed to implement the efficient parallel execution mechanism needed for ultra-multiprocessing. First, the datarol concept is discussed in comparison with the dataflow model. Next, the datarol machine architecture and processor design are described. Finally, an evaluation of the datarol architecture is presented.
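To make the continuation-linkage idea concrete, here is a toy interpreter in which each instruction names its successors explicitly and fires only when its operands have arrived; this illustrates the execution model in miniature and is in no way the datarol instruction set:

```python
from collections import deque

# Program computes (a + b) * (a - b); 'mul' fires once both operands arrive.
program = {
    "add": {"op": lambda e: e["a"] + e["b"], "next": [("mul", "x")]},
    "sub": {"op": lambda e: e["a"] - e["b"], "next": [("mul", "y")]},
    "mul": {"op": lambda e: e["x"] * e["y"], "next": [], "needs": 2},
}

def run(env):
    ready = deque(["add", "sub"])       # independent instructions start together
    arrived = {name: 0 for name in program}
    while ready:
        name = ready.popleft()
        result = program[name]["op"](env)
        for succ, slot in program[name]["next"]:
            env[slot] = result          # pass the result along the continuation link
            arrived[succ] += 1
            if arrived[succ] == program[succ].get("needs", 1):
                ready.append(succ)      # successor becomes executable
    return result

print(run({"a": 7, "b": 3}))  # (7+3)*(7-3) = 40
```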


Neurocomputing | 2016

Learning unified binary codes for cross-modal retrieval via latent semantic hashing

Xing Xu; Li He; Atsushi Shimada; Rin-ichiro Taniguchi; Huimin Lu

Nowadays the amount of multimedia data such as images and text is growing exponentially on social websites, driving the demand for effective and efficient cross-modal retrieval. Cross-modal hashing methods have attracted considerable attention recently because they can learn efficient binary codes for heterogeneous data, enabling large-scale similarity search. Generally, to construct the cross-correlation between different modalities, these methods try to find a joint abstraction space into which the heterogeneous data can be projected; a quantization rule then converts the abstraction representation to binary codes. However, these methods may not effectively bridge the semantic gap through the latent abstraction space, because they fail to capture latent information between heterogeneous data. In addition, most of them apply the simplest quantization scheme (i.e., the sign function), which may cause information loss in the abstraction representation and result in inferior binary codes. To address these challenges, we present a novel cross-modal hashing method that generates unified binary codes combining different modalities. Specifically, we first extract semantic features from the image and text modalities to capture latent information. These semantic features are then projected into a joint abstraction space. Finally, the abstraction space is rotated to produce better unified binary codes with much less quantization loss, while preserving the locality structure of the projected data. We integrate the above binary code learning procedures into an iterative algorithm for optimal solutions. Moreover, we exploit class label information to reduce the semantic gap between modalities and benefit the binary code learning. Extensive experiments on four multimedia datasets show that the proposed binary coding schemes outperform several state-of-the-art methods in cross-modal scenarios.
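The "rotate, then binarize" step the abstract describes is in the spirit of iterative quantization (ITQ); the sketch below shows that generic technique, not the paper's full objective, which also involves class labels and locality preservation:

```python
import numpy as np

def itq_rotation(V, n_iter=50):
    """Learn an orthogonal rotation R reducing quantization loss
    ||sign(VR) - VR||_F, ITQ-style. V holds the joint abstraction-space
    features of both modalities (e.g., projected image and text vectors)."""
    d = V.shape[1]
    R = np.linalg.qr(np.random.randn(d, d))[0]  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                      # current binary codes
        # Orthogonal Procrustes: the rotation best aligning V with B.
        U, _, Vt = np.linalg.svd(B.T @ V)
        R = (U @ Vt).T
    return np.sign(V @ R), R

# Usage with stand-in data: both modalities share the learned rotation R.
V = np.random.randn(1000, 32)
codes, R = itq_rotation(V)
```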


international conference on pattern recognition | 2008

Gesture recognition using sparse code of Hierarchical SOM

Atsushi Shimada; Rin-ichiro Taniguchi

We propose an approach to recognizing time-series gesture patterns with a Hierarchical Self-Organizing Map (HSOM). One of the key issues in time-series pattern recognition is to absorb temporal variation appropriately and to form clusters that contain the same gesture class. In our approach, SOMs are arranged hierarchically, and the time-series patterns are divided into units of different granularity: postures, gesture elements, and gestures. Each unit is learned in the corresponding layer of the HSOM; for example, postures are learned in the first layer, gesture elements in the second layer, and so on. Using the sparse code from the bottom layer, the SOM can perform time-invariant recognition of gesture elements and gestures.
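A minimal sketch of the bottom of such a hierarchy: a tiny SOM quantizes posture features into symbols, and a windowed symbol histogram forms the sparse code handed to the next layer. All sizes, data, and the two-layer structure are illustrative assumptions:

```python
import numpy as np

class TinySOM:
    """Minimal 1-D self-organizing map; illustrative, not the paper's HSOM."""
    def __init__(self, n_nodes, dim, lr=0.3, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_nodes, dim))
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        # Best-matching unit: the node whose weight vector is nearest to x.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=20):
        for _ in range(epochs):
            for x in data:
                b = self.bmu(x)
                h = np.exp(-0.5 * ((np.arange(len(self.w)) - b) / self.sigma) ** 2)
                self.w += self.lr * h[:, None] * (x - self.w)

# Layer 1 turns raw posture features into symbols; a short window of symbols
# becomes the sparse code fed to the layer above.
posture_som = TinySOM(n_nodes=16, dim=10)
frames = np.random.randn(200, 10)                # stand-in posture features
posture_som.train(frames)
symbols = [posture_som.bmu(f) for f in frames]
sparse_code = np.bincount(symbols[:8], minlength=16)  # input to the next layer
```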


computer vision and pattern recognition | 2013

Light Field Distortion Feature for Transparent Object Recognition

Kazuki Maeno; Hajime Nagahara; Atsushi Shimada; Rin-ichiro Taniguchi

Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), to visually learn to recognize objects. These approaches, however, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and their appearance varies dramatically with changes in the scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object, and we incorporate this feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.
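The bag-of-features stage could be wired up as below; how each LFD vector is computed from the light field is outside this sketch, so `descriptors_per_image` is assumed to be precomputed, and the codebook size and classifier choice are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bag_of_features(descriptors_per_image, labels, n_words=100):
    """descriptors_per_image: one (n_i, d) array of LFD vectors per image.
    Builds a visual codebook, histograms each image over it, trains a classifier."""
    all_desc = np.vstack(descriptors_per_image)
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    hists = np.array([
        np.bincount(codebook.predict(d), minlength=n_words) / len(d)
        for d in descriptors_per_image
    ])
    clf = LinearSVC().fit(hists, labels)  # illustrative classifier choice
    return codebook, clf
```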
