
Publication


Featured research published by Hitoshi Habe.


Asian Conference on Pattern Recognition | 2015

Appearance-based multiple fish tracking for collective motion analysis

Kei Terayama; Koki Hongo; Hitoshi Habe; Masa-aki Sakagami

We propose a visual tracking method for dense fish schools in which occlusions occur frequently. Although much progress has been made in tracking multiple objects in video, it remains challenging to track individuals in highly dense groups. For occluded fish, estimating positions and directions is difficult. However, if we know the number of fish in a local area, we can accurately estimate their states by matching all combinations of possible parameters on the basis of our appearance model. We apply this idea to track multiple fish in a school. Experimental results show that our method tracks multiple fish more reliably than a well-known tracking method, and the average difference is less than 4% of the mean body length of the school.
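The core idea of the abstract — when the number of fish in an occluded region is known, score every combination of candidate states and keep the best joint hypothesis — can be sketched as an exhaustive search. This is a toy illustration, not the authors' implementation; the candidate states and the `appearance_cost` function are hypothetical stand-ins for the paper's appearance model.

```python
from itertools import product

def best_joint_state(candidates, appearance_cost):
    """Given per-fish lists of candidate states (e.g. position/direction
    hypotheses), enumerate every combination and keep the one with the
    lowest total appearance cost against the observed image region."""
    best, best_cost = None, float("inf")
    for combo in product(*candidates):
        cost = appearance_cost(combo)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

# Toy example: two occluded fish, each with two candidate headings (deg).
candidates = [[(0.0,), (90.0,)], [(45.0,), (180.0,)]]
# Stand-in cost: prefer headings near 0 and 45 degrees.
cost = lambda combo: abs(combo[0][0] - 0.0) + abs(combo[1][0] - 45.0)
state, c = best_joint_state(candidates, cost)
```

The search is exponential in the number of occluded fish, which is why the paper restricts it to small local areas.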


IPSJ Transactions on Computer Vision and Applications | 2016

Multiple fish tracking with an NACA airfoil model for collective behavior analysis

Kei Terayama; Hitoshi Habe; Masa-aki Sakagami

We propose a visual tracking method with an NACA airfoil model for dense fish schools in which occlusions occur frequently. Although much progress has been made in tracking multiple objects, tracking individuals remains challenging due to factors such as occlusion and variation in target appearance. In this paper, we first introduce an NACA airfoil model as a deformable appearance model of fish. For occluded fish, we estimate their positions, angles, and postures with template matching and simulated annealing to optimize their parameters effectively. To improve tracking performance, we repeatedly run the parameter estimation algorithm forwards and backwards. We prepared two real fish scenes in which the average number of fish per frame exceeds 25 and fish overlap more than 50 times. Experimental results on these scenes show that our method tracks fish more reliably than a tracking method based on a mixture particle filter. Over 75% of the fish in each scene were tracked throughout, and the average difference is less than 4% of the mean body length of the school.


Thirteenth International Conference on Quality Control by Artificial Vision 2017 | 2017

Spatial and temporal segmented dense trajectories for gesture recognition

Kaho Yamada; Takeshi Yoshida; Kazuhiko Sumi; Hitoshi Habe; Ikuhisa Mitsugami

Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition and have demonstrated state-of-the-art results on a variety of datasets. However, when these trajectories are applied to gesture recognition, recognizing similar, fine-grained motions is problematic. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]; temporal segmentation is performed over a fixed number of video frames. The proposed method removes background video noise and can recognize similar, fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method on it. The experimental results show that the proposed method outperforms the original dense trajectories.
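The temporal segmentation step — splitting the video into consecutive fixed-length frame windows before computing trajectories — can be sketched in a few lines. This is a minimal illustration under the assumption that the last, shorter remainder window is kept; the spatial segmentation by body-part detection is omitted.

```python
def temporal_segments(n_frames, seg_len):
    """Split a video of n_frames into consecutive fixed-length
    (start, end) frame windows; the final shorter remainder is kept."""
    return [(s, min(s + seg_len, n_frames))
            for s in range(0, n_frames, seg_len)]

# A 10-frame clip with 4-frame segments yields three windows.
windows = temporal_segments(10, 4)
```

Dense trajectories would then be computed independently inside each (body-part region, time window) cell.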


Archive | 2017

Behavior Understanding Based on Intention-Gait Model

Yasushi Yagi; Ikuhisa Mitsugami; Satoshi Shioiri; Hitoshi Habe

Gait is a well-known biometric, and there have been many studies on gait authentication. Those studies implicitly assume that the gait of a given person is always constant. In reality, however, this is untrue; a person usually walks differently depending on their mood and physical/mental conditions, which we call "internal states." Motivated by this fact, we organized the research project "Behavior Understanding based on Intention-Gait Model", supported by JST-CREST from 2010 to 2017. The goal of this project was to map "gait", in the broad sense of the term, to internal states such as attention, social factors, and cognitive ability. In this chapter, we provide an overview of the three kinds of estimation technologies considered in this project: attention, social factors, and cognitive ability.


International Conference on Signal Processing Systems | 2016

A Video Scene Detection of the Instantaneous Motion by Farmed Fry

Koji Abe; Ryota Shimizu; Hitoshi Habe; Yoshiaki Taniguchi; Nobukazu Iguchi

As a method for supporting fish farming, this paper presents the detection of video scenes in which farmed fry startle instantaneously in a tank due to environmental stimuli. Although environmental stimuli such as noise or lighting startle the fry and trigger this instantaneous response, the actual situations around the tanks in which the stimuli occur are unclear in detail. Because the fry often die from crashing into the tank wall or into each other during the response, a system that monitors the fry and the situation around the tank could identify the causes of the stimuli and thereby reduce the number of fry deaths. In this research, fry swimming in a tank are monitored by a video camera, and scenes containing the response are detected by an SVM using a feature value that represents the fry's acceleration over sequential frames of the video. We prepared videos that include scenes of the response by fish in a tank and examined the performance of the proposed method. Experimental results show recall above 80% on average and precision of 100% under normal illuminance (108.5 lux on average).
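The acceleration feature the abstract describes can be approximated by finite differences over a tracked centroid sequence. The sketch below is one plausible formulation, not the paper's exact feature: it takes per-frame (x, y) centroids, differentiates twice, and returns the peak acceleration magnitude as a scalar that an SVM could classify.

```python
import math

def acceleration_feature(centroids, fps=30.0):
    """Peak acceleration magnitude from a sequence of (x, y) centroids
    sampled at fps frames per second, via finite differences."""
    dt = 1.0 / fps
    vel = [((x2 - x1) / dt, (y2 - y1) / dt)
           for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])]
    acc = [math.hypot((vx2 - vx1) / dt, (vy2 - vy1) / dt)
           for (vx1, vy1), (vx2, vy2) in zip(vel, vel[1:])]
    return max(acc) if acc else 0.0
```

A startle response would show up as a sharp spike in this value, while steady swimming keeps it near zero.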


Complex, Intelligent and Software Intensive Systems | 2016

Flexible Screen Sharing System between PC and Tablet for Collaborative Activities

Hiroyuki Masaki; Hitoshi Habe; Nobukazu Iguchi

We have developed a screen sharing system that shares content between two people and is applicable to one-to-one remote teaching. The screen of a PC on one side is shared with the screen of a tablet on the other side over the network to convey instructions from an operator to a collaborator. The system allows the operator to arbitrarily select part of the PC screen; the selected region is presented on the collaborator's tablet. The collaborator can adjust the scale of the content and capture the screen. By analyzing these operations, the system can understand and record which parts of the content the collaborator paid attention to. In addition, the tablet's camera can be used as a simple scanner to digitize paper documents easily. Further, characters and symbols drawn with a finger or pen on the tablet screen can be presented on the PC on the other side.


Asian Conference on Pattern Recognition | 2013

Group Leadership Estimation Based on Influence of Pointing Actions

Hitoshi Habe; Kohei Kajiwara; Ikuhisa Mitsugami; Yasushi Yagi

When we act in a group with family members, friends, or colleagues, each group member often plays a respective role toward a goal that all members share. This paper focuses on leadership among the various roles observed in a social group and proposes a method to estimate the leader based on interaction analysis. To estimate the leader of a group, we extract each person's pointing actions and measure how the other people change their actions in response, i.e., how much influence the pointing actions have. When one specific person tends to make pointing actions and those actions strongly influence the other members, that person is very likely the leader of the group. The proposed method is based on this intuition and measures the influence of pointing actions using motion trajectories. We demonstrate the method's potential for estimating leadership by comparing the computed influence measures with subjective evaluations on actual videos recorded in a science museum.
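One simple way to quantify "how much another person's action changes after a pointing event", as the abstract describes, is to compare motion direction before and after the event. The sketch below is a toy formulation of this intuition, not the paper's actual measure; the window size and angle-based score are illustrative assumptions.

```python
import math

def influence_score(trajectory, t_point, window=5):
    """Toy influence measure: angular change (radians, in [0, pi])
    between a person's mean motion direction in the `window` frames
    before and after a pointing action at frame t_point.
    trajectory is a list of (x, y) positions, one per frame."""
    def mean_dir(pts):
        dx = pts[-1][0] - pts[0][0]
        dy = pts[-1][1] - pts[0][1]
        return math.atan2(dy, dx)
    before = trajectory[max(0, t_point - window):t_point + 1]
    after = trajectory[t_point:t_point + window + 1]
    d = abs(mean_dir(after) - mean_dir(before))
    return min(d, 2 * math.pi - d)

# Walking east, then turning north right after the pointing action at
# frame 2 gives a 90-degree (pi/2) direction change.
score = influence_score([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], 2, window=2)
```

Aggregating such scores over all (pointer, observer) pairs would indicate who consistently redirects the group.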


International Conference on Pattern Recognition | 2012

Dynamic scene reconstruction using asynchronous multiple Kinects

Mitsuru Nakazawa; Ikuhisa Mitsugami; Yasushi Makihara; Hozuma Nakajima; Hitoshi Habe; Hirotake Yamazoe; Yasushi Yagi


International Conference on Pattern Recognition | 2012

Easy depth sensor calibration

Hirotake Yamazoe; Hitoshi Habe; Ikuhisa Mitsugami; Yasushi Yagi


International Conference on Pattern Recognition | 2012

Point cloud transport

Hozuma Nakajima; Yasushi Makihara; Hsu Hsu; Ikuhisa Mitsugami; Mitsuru Nakazawa; Hirotake Yamazoe; Hitoshi Habe; Yasushi Yagi

Collaboration

Top Co-Author: Kazuhiko Sumi (Aoyama Gakuin University)