
Publication


Featured research published by Kenji Iwata.


Pattern Recognition | 2011

Object detection based on a robust and accurate statistical multi-point-pair model

Xinyue Zhao; Yutaka Satoh; Hidenori Takauji; Shun'ichi Kaneko; Kenji Iwata; Ryushi Ozaki

In this paper, we propose a robust and accurate background model, called grayscale arranging pairs (GAP). The model is based on the statistical reach feature (SRF), which is defined as a set of statistical pair-wise features. Using the GAP model, moving objects are successfully detected under a variety of complex environmental conditions. The main concept of the proposed method is the use of multiple point pairs that exhibit a stable statistical intensity relationship as a background model. The intensity difference between pixels of the pair is much more stable than the intensity of a single pixel, especially in varying environments. Our proposed method focuses more on the history of global spatial correlations between pixels than on the history of any given pixel or local spatial correlations. Furthermore, we clarify how to reduce the GAP modeling time and present experimental results comparing GAP with existing object detection methods, demonstrating that superior object detection with higher precision and recall rates is achieved by GAP.
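
As a rough illustration of the multi-point-pair idea (not the authors' implementation; the synthetic scene, the 0.95 consistency cutoff, and the 0.3 violation threshold below are invented for the example), one can keep, for a target pixel, the reference pixels whose intensity difference to it held a constant sign during training, and declare foreground when those sign relations break:

```python
import numpy as np

# Toy sketch of a pair-wise background model in the spirit of GAP.
# All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

# Synthetic training video: 50 frames with global illumination noise,
# the situation where single-pixel models struggle.
frames = 100 + rng.normal(0, 2, (50, 16, 16)) + rng.normal(0, 10, (50, 1, 1))
frames[:, :8, :] += 40          # a brighter upper region

target = (4, 4)
# Per-frame intensity differences between the target pixel and every other pixel;
# the shared illumination term cancels in the difference.
diffs = frames[:, target[0], target[1]][:, None, None] - frames

# Keep pairs whose difference sign stayed stable over almost all frames.
sign = np.sign(diffs.mean(axis=0))
stable = (np.sign(diffs) == sign).mean(axis=0) > 0.95
stable[target] = False          # exclude the trivial self-pair

def is_foreground(frame, thresh=0.3):
    """Flag the target pixel when too many stable sign relations break."""
    d = frame[target] - frame
    return (np.sign(d)[stable] != sign[stable]).mean() > thresh

bg = frames[0]                  # a background frame: relations hold
fg = bg.copy()
fg[target] -= 80                # a dark object covers the target pixel
print(is_foreground(bg), is_foreground(fg))   # -> False True
```

Because only the relative intensity of the pair matters, the global illumination swings in the training frames do not destabilize the model, which is the property the abstract emphasizes.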


Pattern Recognition | 2015

Co-occurrence probability-based pixel pairs background model for robust object detection in dynamic scenes

Dong Liang; Shun'ichi Kaneko; Manabu Hashimoto; Kenji Iwata; Xinyue Zhao

An illumination-invariant background model for detecting objects in dynamic scenes is proposed. It is robust to sudden illumination fluctuation as well as burst motion. Unlike previous works, it uses the co-occurrence differential increments of multiple pixel pairs to distinguish objects from a non-stationary background. We use a two-stage training framework to model the background. First, joint histograms of co-occurrence probability are employed to screen supporting pixels with high normalized correlation coefficient values; then, K-means clustering-based spatial sampling optimizes the spatial distribution of the supporting pixels; finally, the background model maintains a sensitive criterion with few parameters to detect foreground elements. Experiments using several challenging datasets (PETS-2001, AIST-INDOOR, Wallflower, and a real surveillance application) demonstrate robust and competitive object detection performance in various indoor and outdoor environments.

Highlights:
- We present a co-occurrence pixel-pairs background model.
- It is robust to sudden illumination fluctuation and burst background motion.
- Spatio-temporal statistical analyses are employed to screen supporting pixels.
- Our method shows competitive performance in extreme environments.
- It does not artificially predefine any local operator, subspace, or block.
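
The first training stage, screening supporting pixels by normalized correlation, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the scene layout, frame count, and top-8 choice are assumptions, and the K-means spatial-sampling stage is omitted.

```python
import numpy as np

# Screen "supporting pixels" whose temporal intensity profile correlates
# strongly with the target pixel, so the pair's co-occurrence survives
# global illumination changes.
rng = np.random.default_rng(1)
T, H, W = 60, 8, 8
illum = rng.normal(0, 15, T)                      # shared illumination swings
base = rng.uniform(50, 200, (H, W))
video = base[None] + illum[:, None, None] + rng.normal(0, 3, (T, H, W))
video[:, :, :2] += rng.normal(0, 40, (T, H, 2))   # dynamic region (e.g. waving trees)

target = video[:, 3, 3]                           # a pixel in the static region
flat = video.reshape(T, -1)

# Normalized correlation coefficient of every pixel's time series with the target.
tc = target - target.mean()
fc = flat - flat.mean(axis=0)
ncc = (fc * tc[:, None]).sum(0) / (np.linalg.norm(fc, axis=0) * np.linalg.norm(tc))

# The best-correlated pixels become supporters (index 0 of the descending
# sort is the target itself, so it is skipped); they all come from the
# static region, never from the two dynamic columns.
supporting = np.argsort(ncc)[::-1][1:9]
print(sorted(int(i) % W for i in supporting))
```

The dynamic columns correlate poorly with the target despite sharing the illumination term, so they are screened out, which is the intended effect of this stage.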


Asian Conference on Computer Vision | 2014

Extended Co-occurrence HOG with Dense Trajectories for Fine-Grained Activity Recognition

Hirokatsu Kataoka; Kiyoshi Hashimoto; Kenji Iwata; Yutaka Satoh; Nassir Navab; Slobodan Ilic; Yoshimitsu Aoki

In this paper we propose a novel feature descriptor, Extended Co-occurrence HOG (ECoHOG), and integrate it with dense point trajectories, demonstrating its usefulness in fine-grained activity recognition. The feature is inspired by the original Co-occurrence HOG (CoHOG), which is based on histograms of co-occurring pairs of image gradients. Instead of relying only on pure occurrence counts, we accumulate the sum of the gradient magnitudes of co-occurring pairs of image gradients. This gives more weight to object boundaries and strengthens the distinction between the moving foreground and the static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that the combination is extremely well suited for fine-grained activity recognition. Using our feature, we outperform state-of-the-art methods on this task and provide an extensive quantitative evaluation.
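
The magnitude-weighted co-occurrence histogram can be sketched as below. This is a simplified single-offset version: the bin count, the offset, and the normalization are illustrative choices, and a full CoHOG/ECoHOG descriptor aggregates many offsets over sub-regions.

```python
import numpy as np

def ecohog(img, n_bins=8, offset=(0, 1)):
    """Co-occurrence histogram of quantized gradient orientations,
    accumulating the summed gradient magnitudes of each pair instead of
    a plain count (the ECoHOG idea described in the abstract)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = (np.arctan2(gy, gx) % np.pi) / np.pi * n_bins   # unsigned orientation
    ori = np.clip(ori.astype(int), 0, n_bins - 1)
    dy, dx = offset
    H, W = img.shape
    a_ori, b_ori = ori[:H - dy, :W - dx], ori[dy:, dx:]   # pixel and its neighbor
    a_mag, b_mag = mag[:H - dy, :W - dx], mag[dy:, dx:]
    h = np.zeros((n_bins, n_bins))
    np.add.at(h, (a_ori, b_ori), a_mag + b_mag)           # magnitude-weighted vote
    return h / (h.sum() + 1e-9)                           # normalized descriptor

img = np.zeros((16, 16))
img[:, 8:] = 255                                          # a strong vertical edge
desc = ecohog(img)
print(desc.shape)                                         # -> (8, 8)
```

Weighting each co-occurrence vote by the pair's summed magnitude means a strong boundary dominates the histogram, whereas in plain CoHOG a faint texture with the same orientation pattern would count equally.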


British Machine Vision Conference | 2016

Recognition of Transitional Action for Short-Term Action Prediction using Discriminative Temporal CNN Feature

Hirokatsu Kataoka; Yudai Miyashita; Masaki Hayashi; Kenji Iwata; Yutaka Satoh

Herein, we address the transitional action class, a class between actions. Transitional actions are useful for producing short-term action predictions while an action is still in transition. However, transitional action recognition is difficult because actions and transitional actions partially overlap each other. To deal with this issue, we propose a subtle motion descriptor (SMD) that identifies the subtle differences between actions and transitional actions. The two primary contributions of this paper are: (i) defining transitional actions for short-term action prediction, which permits earlier predictions than early action recognition, and (ii) utilizing a convolutional neural network (CNN) based SMD to present a clear distinction between actions and transitional actions. Using three different datasets, we show that our proposed approach produces better results than other state-of-the-art models. The experimental results clearly show the recognition performance of our proposed model, as well as its ability to capture temporal motion in transitional actions.


International Journal of Optomechatronics | 2014

Robust Object Detection in Severe Imaging Conditions using Co-Occurrence Background Model

Dong Liang; Shun'ichi Kaneko; Manabu Hashimoto; Kenji Iwata; Xinyue Zhao; Yutaka Satoh

In this study, a spatially dependent background model is used to detect objects in severe imaging conditions. It is robust to sudden illumination fluctuation and burst background motion. More importantly, it remains sensitive under underexposure, low illumination, and narrow dynamic range, all of which are very common phenomena with surveillance cameras. The background model maintains statistical models in the form of multiple pixel pairs with few parameters. Experiments using several challenging datasets (Heavy Fog, PETS-2001, AIST-INDOOR, and a real surveillance application) confirm its robust performance in various imaging conditions.


Pacific-Rim Symposium on Image and Video Technology | 2006

Hybrid camera surveillance system by using stereo omni-directional system and robust human detection

Kenji Iwata; Yutaka Satoh; Ikushi Yoda; Katsuhiko Sakaue

We propose a novel surveillance system that uses a hybrid camera network. The system combines a Stereo Omni-directional System (SOS) and PTZ cameras to capture wide-range images and face images of sufficient resolution for identification. The SOS can capture both omni-directional color images and range data simultaneously in real time. The robust human detection methods include a robust background subtraction method (RRC), skin color segmentation, and face tracking. First, the system detects persons in an omni-directional image; detailed face images are then obtained with the PTZ cameras, which track faces using four directional features and relaxation matching. In addition, the system calibrates camera positions automatically, so the user can operate it without any troublesome setup.


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2003

Robust Facial Parts Detection by Using Four Directional Features and Relaxation Matching

Kenji Iwata; Hitoshi Hongo; Kazuhiko Yamamoto; Yoshinori Niwa

A facial parts detection technique that is robust to face pose is described. An initial probability is obtained by template matching of four directional features. The positions of facial parts are then detected by relaxation matching using spring connections between the facial parts. Relaxation matching has the drawback of requiring considerable computation time, so we propose a high-speed relaxation technique for facial parts detection. Experiments show that facial parts are detected across a multi-directional face image database.


Journal of Neurosurgical Anesthesiology | 2013

Nicorandil protects pial arterioles from endothelial dysfunction induced by smoking in rats.

Kenji Iwata; Hiroki Iida; Mami Iida; Motoyasu Takenaka; Kumiko Tanabe; Naokazu Fukuoka; Masayoshi Uchida

Background: Our aim was to investigate the effect of nicorandil, which is used for angina prevention and treatment, on the endothelial dysfunction induced by acute smoking, and to clarify the underlying mechanism. Materials and Methods: A closed cranial window preparation was used to measure changes in pial vessel diameters in Sprague-Dawley rats. The responses of arterioles to the endothelium-dependent vasodilator acetylcholine (ACh) were examined before smoking. After intravenous nicorandil (200 µg/kg bolus infusion followed by 60 µg/kg/min continuous infusion; n=6) or saline (control; n=6) pretreatment, the pial vasodilator response to topical 10^-5 M ACh infusion was reexamined both before and 1 hour after 1 minute of cigarette smoking. Thereafter, either glibenclamide or N-ω-nitro-L-arginine methyl ester (L-NAME) was infused 20 minutes before the nicorandil infusion. In the glibenclamide (n=6) and L-NAME (n=6) pretreatment groups, the pial vasodilator response to topical ACh was examined before and after smoking. Percentage changes in pial vessel diameters were used for the statistical analysis. Results: Cerebral arterioles dilated during topical ACh infusion. After smoking, 10^-5 M ACh constricted cerebral arterioles (−7.7±1.8%). After smoking in the nicorandil-pretreatment group, 10^-5 M ACh dilated cerebral pial arterioles by 10.5±3.0%. When given before the nicorandil infusion, glibenclamide, but not L-NAME, abolished the preventive effect of nicorandil against smoking-induced endothelial dysfunction in pial vessels. Conclusions: Acute cigarette smoking causes dysfunction of endothelium-dependent pial vasodilatation, and nicorandil prevents this effect. The mechanism underlying this protection may depend mainly on adenosine triphosphate–sensitive potassium channel activation.


IEEE Region 10 Conference | 2009

Statistical reach feature method and its application to robust image registration

Ryushi Ozaki; Yutaka Satoh; Kenji Iwata; Katsuhiko Sakaue

In this paper, a novel image registration method is proposed that affords robust results under various real-world disturbances, including local and/or global variations of illumination, occlusions, and noise. The registration process is based on a set of selected point pairs with binary-coded signs of differences, constructed from the given template image. The selection of the point pairs is defined from a statistical viewpoint. The authors developed a mathematical model of the inverting ratio, a quantity strongly related to the similarity index, under Gaussian disturbances, and verified the model by numerical experiments. The mathematical model, supported by the numerical results, gives a theoretical backbone for the robustness of the proposed method.
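
A minimal sketch of the sign-of-difference matching described above follows; pair selection here is random for illustration, whereas the paper selects pairs statistically, and the pair count is an arbitrary choice.

```python
import numpy as np

# Encode a template as the signs of intensity differences over selected
# point pairs, then score a candidate window by the fraction of preserved
# signs. Monotone illumination changes preserve every sign.
rng = np.random.default_rng(3)
template = rng.uniform(0, 255, (12, 12))

npairs = 64
pts = rng.integers(0, 12, (npairs, 2, 2))         # (pair, endpoint, yx)
signs = np.sign(template[pts[:, 0, 0], pts[:, 0, 1]]
                - template[pts[:, 1, 0], pts[:, 1, 1]])

def similarity(window):
    """Fraction of point pairs whose difference sign matches the template."""
    s = np.sign(window[pts[:, 0, 0], pts[:, 0, 1]]
                - window[pts[:, 1, 0], pts[:, 1, 1]])
    return (s == signs).mean()

bright = template * 0.5 + 60                      # global illumination change
noise = rng.uniform(0, 255, (12, 12))             # unrelated window
print(similarity(bright), similarity(noise))
```

The illumination-shifted window scores a perfect 1.0 because an increasing affine transform cannot invert any pair, while an unrelated window agrees on only about half the pairs by chance; the inverting ratio studied in the paper models how disturbances erode that agreement.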


2008 Bio-inspired, Learning and Intelligent Systems for Security | 2008

Development of Software for Real-Time Unusual Motions Detection by Using CHLAC

Kenji Iwata; Yutaka Satoh; Katsuhiko Sakaue; Takumi Kobayashi; Nobuyuki Otsu

We developed software that automatically detects unusual motions in a video sequence in real time, using an efficient implementation of CHLAC and a visual software framework called Lavatube. Video monitoring systems that can automatically detect unusual motions are currently in great demand, and CHLAC has been shown to work effectively for this task. A newly developed parallel processing technique increases the speed of CHLAC by using SIMD instructions. The platform software Lavatube supports easy construction of video processing systems by connecting icons as a graph on a GUI. We demonstrated the effectiveness of the proposed system through experiments in indoor and outdoor environments.
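
The CHLAC feature referenced above can be caricatured as follows: higher-order local auto-correlations of a binarized frame-difference volume. Only the 0th-order mask and three of the 1st-order offsets are shown here (the full feature uses a much larger mask set), and the threshold and toy video are invented for the example.

```python
import numpy as np

def chlac_partial(volume, thresh=20):
    """A few CHLAC-style auto-correlation features of a motion volume.

    volume: (T, H, W) grayscale video. Motion is binarized by frame
    differencing; features correlate each interior voxel with shifted
    copies of itself.
    """
    motion = (np.abs(np.diff(volume.astype(float), axis=0)) > thresh).astype(float)
    f = motion[1:-1, 1:-1, 1:-1]                  # interior voxels
    feats = [f.sum()]                             # 0th order: total motion
    for dt, dy, dx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:   # three 1st-order offsets
        g = motion[1 + dt:motion.shape[0] - 1 + dt,
                   1 + dy:motion.shape[1] - 1 + dy,
                   1 + dx:motion.shape[2] - 1 + dx]
        feats.append((f * g).sum())               # correlation with shifted voxel
    return np.array(feats)

rng = np.random.default_rng(2)
vid = rng.integers(0, 10, (8, 12, 12)).astype(float)
vid[4:, 5:8, 5:8] += 200                          # a blob suddenly appears
print(chlac_partial(vid))
```

Because every feature is a sum of local products over the whole volume, the computation is position-invariant and maps naturally onto the SIMD parallelization mentioned in the abstract.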

Collaboration


Dive into Kenji Iwata's collaborations.

Top Co-Authors

Yutaka Satoh
National Institute of Advanced Industrial Science and Technology

Katsuhiko Sakaue
National Institute of Advanced Industrial Science and Technology

Ikushi Yoda
National Institute of Advanced Industrial Science and Technology