Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hisato Fukuda is active.

Publication


Featured research published by Hisato Fukuda.


Asian Conference on Computer Vision | 2010

Video deblurring and super-resolution technique for multiple moving objects

Takuma Yamaguchi; Hisato Fukuda; Ryo Furukawa; Hiroshi Kawasaki; Peter F. Sturm

Video cameras are now in common use, and the demand for capturing a single frame from a video sequence is increasing. Since the resolution of a video camera is usually lower than that of a still camera, and video data usually contains considerable motion blur across the sequence, simple frame capture can produce only a low-quality image; an image restoration technique is therefore required. In this paper, we propose a method to restore a sharp, high-resolution image from a video sequence by applying motion deblurring to each frame, followed by a super-resolution technique. Since the frame rate of a video camera is high, and the variation in feature appearance and the motion of feature points between successive frames are usually small, we can still estimate scene geometry from blurred video data. Using this geometric information, we first apply motion deblurring to each frame and then super-resolve the images from the deblurred image set. For better results, we also propose an adaptive super-resolution technique that accounts for depth-dependent defocus blur. Experimental results demonstrate the strength of our method.
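The deblurring step in such a pipeline amounts to deconvolution with an estimated motion kernel. As a minimal, hypothetical 1D sketch (the kernel, signal, and regularization value are invented for illustration, and this is not the paper's implementation), Wiener deconvolution with a known blur kernel looks like:

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-3):
    """Recover a signal blurred by a known kernel via Wiener deconvolution."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)                     # transfer function of the blur
    B = np.fft.fft(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(W * B))

# Toy example: a step edge blurred by a 5-tap box (motion) kernel.
sharp = np.zeros(64)
sharp[20:44] = 1.0
kernel = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel, 64)))
restored = wiener_deblur(blurred, kernel)
```

The `noise_power` term trades ringing against residual blur: larger values suppress noise amplification at frequencies where the blur kernel's response is weak, at the cost of a slightly softer result.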


International Conference on Intelligent Computing | 2017

Detecting Inner Emotions from Video Based Heart Rate Sensing

Keya Das; Sarwar Ali; Koyo Otsu; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno

Recognizing human emotion by computer vision is an interesting and challenging problem. In particular, the reading of inner emotions has received limited attention. In this paper, we use a remote video-based heart rate sensing technique to obtain physiological data that provides an indication of a person's inner emotions. This method allows for contactless estimates of heart rate while the subject is watching emotionally stimulating video clips. We also compare against a wearable heart rate sensor to validate the usefulness of the proposed remote heart rate reading framework. We found that reading a subject's heart rate effectively detects inner emotional reactions while the subject watches funny and horror videos, despite little to no facial expression at times. These findings were validated by reading the heart rates of 40 subjects with our vision-based method and comparing against conventional wearable sensors. We also find that the change in heart rate with emotionally stimulating content is statistically significant, and that our remote sensor is well correlated with the wearable contact sensor.
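The remote sensing step can be illustrated in outline: average a skin-region colour channel per frame, then pick the dominant frequency in the physiologically plausible pulse band. This is a simplified sketch on synthetic data, not the paper's implementation; the 1.2 Hz pulse, frame rate, and band limits are assumed values.

```python
import numpy as np

def estimate_bpm(trace, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from a per-frame mean skin-colour trace by
    picking the dominant frequency in the plausible pulse band (42-240 BPM)."""
    x = trace - np.mean(trace)                       # remove DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 20 s trace at 30 fps: 1.2 Hz pulse + slow drift + sensor noise.
fps, secs = 30, 20
t = np.arange(fps * secs) / fps
rng = np.random.default_rng(0)
trace = (0.02 * np.sin(2 * np.pi * 1.2 * t)       # pulse component
         + 0.1 * np.sin(2 * np.pi * 0.1 * t)      # illumination drift
         + 0.005 * rng.standard_normal(t.size))   # noise
bpm = estimate_bpm(trace, fps)                    # close to 72 BPM
```

Restricting the search to the pulse band is what makes the estimate robust to slow illumination drift, which otherwise dominates the spectrum.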


Conference of the Industrial Electronics Society | 2016

Remote monitoring and communication system with a doll-like robot for the elderly

Kouyou Otsu; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno

In this paper, we propose a remote monitoring and communication system for elderly care. We aim to detect emergencies concerning elderly people living alone and send alerts to their families from a remote location. To this end, the system has two modules: a monitoring module and a video chat module. When the monitoring module detects any unusual event concerning the elderly person, the system sends an alert message to their family's smartphone. The family can then monitor the current situation via the video chat module. The elderly person and their family members can also easily use the video chat function for day-to-day communication, without any complex actions required of the users. For the video chat display on the elderly person's side, we use a television set, which most elderly people can be expected to have in their rooms. This gives our system a familiar appearance, so the elderly can use it without feeling aversion towards it. The monitoring device for detecting emergencies is a doll-like robot. Besides monitoring, the robot can also initiate simple conversations autonomously in normal situations, which can improve the elderly person's sense of familiarity with the system.


International Symposium on Visual Computing | 2013

Object Recognition for Service Robots through Verbal Interaction Based on Ontology

Hisato Fukuda; Satoshi Mori; Yoshinori Kobayashi; Yoshinori Kuno; Daisuke Kachi

We are developing a helper robot able to fetch objects requested by users. This robot tries to recognize objects through verbal interaction with the user concerning objects that it cannot detect autonomously. Since the robot recognizes objects based on verbal interaction with the user, such a robot must by necessity understand human descriptions of said objects. However, humans describe objects in various ways: they may describe attributes of whole objects, those of parts, or those viewable from a certain direction. Moreover, they may use the same descriptions to describe a range of different objects. In this paper, we propose an ontological framework for interactive object recognition to deal with such varied human descriptions. In particular, we consider human descriptions about object attributes, and develop an interactive object recognition system based on this ontology.
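The ontology-driven lookup can be pictured as intersecting the object sets implied by each stated attribute of the user's description. The miniature ontology below is entirely hypothetical and only illustrates the idea:

```python
# Hypothetical miniature ontology: attribute -> value -> matching objects.
ONTOLOGY = {
    "color": {"red": {"apple", "ketchup"}, "yellow": {"banana"}},
    "shape": {"round": {"apple", "ball"}, "elongated": {"banana"}},
}

def candidates(description):
    """Intersect the object sets implied by each (attribute, value) pair.

    Unknown attributes or values contribute nothing; an empty description
    yields no candidates."""
    sets = [ONTOLOGY[a][v] for a, v in description if v in ONTOLOGY.get(a, {})]
    return set.intersection(*sets) if sets else set()

candidates([("color", "red"), ("shape", "round")])   # narrows to the apple
```

Each additional attribute the user supplies shrinks the candidate set, which is why the interactive dialogue converges on the intended object.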


International Symposium on Visual Computing | 2011

Material information acquisition using a ToF range sensor for interactive object recognition

Md. Abdul Mannan; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno

This paper proposes a noncontact active vision technique that analyzes the reflection pattern of infrared light to estimate an object's material according to its degree of surface smoothness (or roughness). To obtain the surface microstructural details and the surface orientation of a free-form 3D object, the system employs only a time-of-flight range camera. It measures reflection intensity patterns with respect to surface orientation for objects of various materials, and then classifies these patterns with a Random Forest (RF) classifier to identify candidate materials for the reflecting surface. We demonstrate the efficiency of the method through experiments using several household objects under normal illumination conditions. Our main objective is to introduce material information, in addition to color, shape, and other attributes, to recognize target objects more robustly in the interactive object recognition framework.
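The paper classifies intensity-vs-orientation patterns with a Random Forest. As a self-contained stand-in that avoids external ML dependencies, the sketch below substitutes a nearest-template classifier over synthetic reflection curves; the two materials, curve shapes, and noise level are all invented to illustrate the feature design, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(0, 80, 17)            # surface orientation in degrees

def reflection_curve(material):
    """Synthetic IR intensity vs. surface orientation: a shiny surface
    shows a narrow specular lobe near 0 degrees, a matte one a broad
    Lambertian-like cosine falloff."""
    if material == "shiny":
        base = 0.2 + 0.8 * np.exp(-(angles / 15.0) ** 2)
    else:  # matte
        base = np.cos(np.deg2rad(angles))
    return base + 0.02 * rng.standard_normal(angles.size)

# Build a per-material template from a few "training" measurements.
templates = {m: np.mean([reflection_curve(m) for _ in range(10)], axis=0)
             for m in ("shiny", "matte")}

def classify(curve):
    """Assign the material whose template is closest in L2 distance."""
    return min(templates, key=lambda m: np.linalg.norm(curve - templates[m]))

pred = classify(reflection_curve("shiny"))
```

The point of the sketch is the feature, not the classifier: once reflection intensity is tabulated against surface orientation, shiny and matte surfaces separate cleanly at mid-range angles, which is what the Random Forest in the paper exploits.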


International Conference on Intelligent Computing | 2018

Classification of Emotions from Video Based Cardiac Pulse Estimation

Keya Das; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno

Recognizing emotion from video is an active research theme with many applications such as human-computer interaction and affective computing. Classifying emotions from facial expressions is a common approach, but it is sometimes difficult to differentiate genuine emotions from faked ones. In this paper, we use a remote video-based cardiac activity sensing technique to obtain physiological data for identifying emotional states. We show that emotional states can be differentiated from the remotely sensed cardiac pulse patterns alone. Specifically, we conducted an experimental study on recognizing the emotions of people watching video clips. We recorded 26 subjects who all watched the same comedy and horror video clips, and then estimated their cardiac pulse signals from the video footage. From the cardiac pulse signal alone, we were able to classify whether the subjects were watching the comedy or the horror clip. We also compare against classification using facial action units for the same task and discuss how the two modalities compare.


International Conference on Intelligent Computing | 2018

Natural Calling Gesture Recognition in Crowded Environments

Aye Su Phyo; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno

Most existing gesture recognition algorithms assume fixed postures in simple environments. In natural calling, the user may perform gestures in various positions, and the environment may be occupied by many people making many hand motions. This paper presents an algorithm for natural calling gesture recognition that detects gaze, the position of the hand-wrist, and the fingertips. A key challenge is making natural calling gesture recognition work in crowded environments with randomly moving objects. The approach taken here is to obtain the keypoints of individual people from camera images with a real-time detector, and to detect gaze and hand-wrist positions. Then, zooming into the hand-wrist region and obtaining the fingertip keypoints, we calculate the positions of the fingertips to recognize calling gestures. We tested the proposed approach on video under different conditions: from one person to more than four people sitting and walking around. We obtained an average recognition accuracy of 87.12%, showing the effectiveness of our approach.
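The geometric test at the core of such a recognizer can be sketched as a heuristic over 2D pose keypoints. The keypoint names, tolerance, and sample poses below are assumptions for illustration, not the paper's actual decision rule:

```python
def is_calling_gesture(keypoints, gaze_at_camera, img_h):
    """Heuristic calling-gesture check on 2D pose keypoints: the wrist must
    be raised to roughly shoulder height or above while the person looks
    toward the camera. Image y coordinates grow downward."""
    wx, wy = keypoints["right_wrist"]
    sx, sy = keypoints["right_shoulder"]
    raised = wy < sy + 0.05 * img_h      # small tolerance below the shoulder
    return gaze_at_camera and raised

# Hypothetical poses in a 640x480 image.
waving = {"right_wrist": (110, 80), "right_shoulder": (100, 160)}
resting = {"right_wrist": (105, 300), "right_shoulder": (100, 160)}
```

Gating on gaze is what suppresses false positives in a crowd: a raised hand only counts as a call when it co-occurs with attention directed at the camera (or robot).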


International Conference on Intelligent Computing | 2018

Smart Robotic Wheelchair for Bus Boarding Using CNN Combined with Hough Transforms

Sarwar Ali; Shamim Al Mamun; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno

In recent times, several smart robotic wheelchair studies have been conducted to provide a safe and comfortable ride for the user with real-time autonomous operations such as object recognition. Further reliability support is essential for such wheelchairs to perform common real-time actions like boarding buses or trains. In this paper, we propose a smart wheelchair that can detect buses and precisely recognize bus doors, including whether they are open or closed, for automated boarding. We use a modified simple CNN (i.e., modified Tiny-YOLO) as a base network on the CPU for fast detection of buses and bus doors. We then feed the detected information to our Hough line transform-based method to accurately localize open bus doors. This information is indispensable for our bus-boarding robotic wheelchair. To evaluate the performance of our proposed method, we compare the accuracy of our modified Tiny-YOLO and our combined detection method against the ground truth.
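The Hough stage votes edge pixels into a (rho, theta) accumulator so that straight door edges emerge as peaks. A tiny self-contained version on a synthetic edge map is shown below; a real system would typically use OpenCV's `HoughLines` on the CNN-cropped region instead:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Tiny Hough transform: vote each edge pixel into (rho, theta) bins
    and return the accumulator plus the bin axes."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta) - 90)      # -90..89 degrees
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), offset into non-negative indices
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[r, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Synthetic edge map with one vertical "door edge" at x = 12.
edges = np.zeros((40, 30), dtype=bool)
edges[:, 12] = True
acc, rhos, thetas = hough_lines(edges)
r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
```

A vertical door edge accumulates all of its votes in the theta = 0 column at rho equal to its x position, which is why door frames produce sharp, easily localized peaks.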


Human-Robot Interaction | 2018

Teleoperation of a Robot through Audio-Visual Signal via Video Chat

Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno

Telepresence robots have the potential for improving human-to-human communication when a person cannot be physically present at a given location. One way to achieve this is to construct a system that consists of a robot and video conferencing setup. However, a conventional implementation would involve building a separate server or control path for teleoperation of the robot in addition to the video conferencing system. In this paper, we propose an approach to robot teleoperation via a video call that does not require the use of an additional server or control path. Instead, we propose directly teleoperating the robot via the audio and video signals of the video call itself. We experiment on which signals are most suitable for this task and present our findings.
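One way to carry commands over the call's audio channel, in the spirit of the approach above, is to assign each command a tone and decode the dominant frequency at the receiver. This is a hypothetical sketch; the frequency table, command names, and tone duration are all made up and are not the paper's protocol:

```python
import numpy as np

FS = 8000                                  # assumed audio sample rate
COMMANDS = {1000: "forward", 1500: "left", 2000: "right", 2500: "stop"}

def encode(command, dur=0.2):
    """Emit a sine tone at the frequency assigned to the command."""
    f = next(freq for freq, c in COMMANDS.items() if c == command)
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * f * t)

def decode(audio):
    """Recover the command by picking the dominant frequency and
    snapping it to the nearest known command tone."""
    freqs = np.fft.rfftfreq(len(audio), 1.0 / FS)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(audio)))]
    f = min(COMMANDS, key=lambda k: abs(k - peak))
    return COMMANDS[f]

# Decode survives additive channel noise from the video call.
noise = 0.1 * np.random.default_rng(1).standard_normal(1600)
cmd = decode(encode("left") + noise)
```

Snapping to the nearest table entry gives the decoder tolerance to the resampling and compression a video-chat audio path applies.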


International Conference on Applied Human Factors and Ergonomics | 2018

Robotic Shopping Trolley for Supporting the Elderly

Yoshinori Kobayashi; Seiji Yamazaki; Hidekazu Takahashi; Hisato Fukuda; Yoshinori Kuno

With the aging of Japanese society and the shortage of caregivers, elderly care has become a crucial social problem. To cope with this problem, we focus on shopping. Shopping is an important daily activity and is expected to be effective for elderly rehabilitation: it feels easier than walking rehabilitation, and memorizing and checking off items to buy can have a positive effect on cognitive function. Current shopping rehabilitation is carried out with a caregiver accompanying each elderly person one-on-one to guide them inside the store, carry the shopping basket, monitor them, and so on. Consequently, the caregivers' load is very high. In this paper, we propose a robotic shopping trolley that can reduce the caregivers' load in shopping rehabilitation. We evaluate its effectiveness through experiments at an actual supermarket.

Collaboration


Dive into Hisato Fukuda's collaboration.

Top Co-Authors


Akiko Yamazaki

Tokyo University of Technology
