Publication


Featured research published by Shunsuke Ota.


Robotics, Automation and Mechatronics | 2015

Handshake request motion model with an approaching human for a handshake robot system

Mitsuru Jindai; Shunsuke Ota; Yusuke Ikemoto; Tohru Sasaki

Humans shake hands as a sign of greeting when they first meet, to express a feeling of rapport. A handshake is an embodied interaction that involves physical contact. In interactions between a human and a robot, if the robot generates a handshake motion that is emotionally acceptable to humans, it will lessen any feeling of aversion the human has when initiating an interaction with the robot. Thus, in our previous study, a handshake request motion model was proposed, in which the robot generates a handshake approaching motion before actually shaking hands with the human. In this model, the robot stretches its hand out to a human to request a handshake. The effectiveness of the model was demonstrated by experiments using a handshake robot system in which the human and robot remained stationary. However, in handshakes between two humans, one usually requests to shake hands while the other approaches, in order to promote an embodied interaction. Therefore, in this paper, a handshake request motion model with an approaching human is proposed, based on an analysis of human handshake motions. In this model, the robot generates the request motion for a handshake when a human approaches it. Furthermore, a switching handshake control is developed in which the robot generates either the request motion or the response motion for a handshake, according to the motion of the approaching human. The effectiveness of the model was demonstrated by a sensory evaluation.
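As a rough illustration of the switching control described above, the sketch below chooses between the request motion and the response motion from the observed state of the approaching human. The state fields, thresholds, and motion names are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the switching handshake control: the robot picks a
# "request" motion (it extends its hand first) or a "response" motion (it
# answers the human's extended hand) from the approaching human's state.
# All thresholds and sensor fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HumanState:
    distance_m: float          # distance between human and robot
    approach_speed_mps: float  # closing speed toward the robot
    hand_raised: bool          # whether the human has begun extending a hand

def select_handshake_motion(state: HumanState,
                            request_distance_m: float = 1.2) -> str:
    """Return which motion the robot should generate for the approaching human."""
    if state.hand_raised:
        # The human is already requesting a handshake: answer it.
        return "response_motion"
    if state.distance_m <= request_distance_m and state.approach_speed_mps > 0:
        # The human is close and still approaching: the robot requests first.
        return "request_motion"
    return "wait"

# Example: a human 1.0 m away, walking toward the robot, hand still down.
print(select_handshake_motion(HumanState(1.0, 0.5, False)))  # -> request_motion
```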


Industrial Robot: An International Journal | 2018

Road area detection method based on DBNN for robot navigation using single camera in outdoor environments

K.M. Ibrahim Khalilullah; Shunsuke Ota; Toshiyuki Yasuda; Mitsuru Jindai

Purpose: The purpose of this study is to develop a cost-effective autonomous wheelchair robot navigation method that assists the aging population.

Design/methodology/approach: Navigation in outdoor environments is still a challenging task for an autonomous mobile robot because of the highly unstructured and varied characteristics of outdoor environments. This study examines a complete vision-guided real-time approach for robot navigation on urban roads based on drivable road area detection using deep learning. During navigation, the camera takes a snapshot of the road, and the captured image is converted into an illuminant-invariant image. A deep belief neural network then takes this image as input and extracts additional discriminative abstract features, using a general-purpose learning procedure, for detection. During obstacle avoidance, the robot measures the distance to the obstacle using the estimated parameters of the calibrated camera, and it navigates while avoiding obstacles.

Findings: The developed method is implemented on a wheelchair robot and verified by navigating the robot on different types of urban curved roads. Navigation in real environments indicates that the wheelchair robot can move safely from one place to another. The navigation performance of the developed method, and a comparison with laser range finder (LRF)-based methods, were demonstrated through experiments.

Originality/value: This study develops a cost-effective navigation method using a single camera. It also exploits the advantages of deep learning techniques for robust classification of the drivable road area, and it performs better than LRF-based methods in LRF-denied environments.
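The abstract outlines a pipeline of illuminant-invariant conversion followed by patch-wise road classification. The sketch below assumes the common log-chromaticity form of illuminant invariance and a stand-in patch classifier in place of the deep belief neural network; the authors' exact transform, network, and parameters are not given in the abstract.

```python
# A minimal sketch of the navigation pipeline, under the assumptions above.

import numpy as np

def illuminant_invariant(rgb: np.ndarray, alpha: float = 0.45) -> np.ndarray:
    """Map an HxWx3 RGB image to a 1-channel illuminant-invariant image."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    # Project log-chromaticities along a direction (alpha) chosen so that
    # illumination/shadow changes largely cancel out (an assumed transform).
    return np.log(g) - alpha * np.log(r) - (1.0 - alpha) * np.log(b)

def detect_drivable_area(invariant: np.ndarray, classify_patch) -> np.ndarray:
    """Label each 16x16 patch road/non-road with a patch classifier
    (a stand-in for the deep belief neural network in the paper)."""
    h, w = invariant.shape
    mask = np.zeros((h // 16, w // 16), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            patch = invariant[i * 16:(i + 1) * 16, j * 16:(j + 1) * 16]
            mask[i, j] = classify_patch(patch)
    return mask

# Example with a toy stand-in classifier: "bright invariant patches are road".
img = np.random.rand(128, 128, 3)
mask = detect_drivable_area(illuminant_invariant(img),
                            lambda p: p.mean() > 0.0)
```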


Society of Instrument and Control Engineers of Japan | 2017

Development of robot navigation method based on single camera vision using deep learning

K.M. Ibrahim Khalilullah; Shunsuke Ota; Toshiyuki Yasuda; Mitsuru Jindai

This paper presents a complete vision-guided real-time approach to robot navigation on narrow urban roads based on drivable road area detection using deep learning. In this approach, an illuminant-invariant road database is created from captured images and used to train a deep belief neural network (DBNN) for road detection. During navigation, a camera takes a snapshot of the road, the captured image is converted into an illuminant-invariant image, and the DBNN takes this image as input, extracting the road features layer by layer for detection. The experimental wheelchair robot follows the detected road boundary for navigation. The performance of the developed algorithm is demonstrated by experiments. In addition, the approach enables autonomous robot navigation over large areas with a single camera.
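For the boundary-following step, one simple possibility is to steer toward the centroid of the detected drivable area nearest the robot. The control law below is only an illustrative assumption; the paper does not specify its controller.

```python
# A toy boundary-following controller, assuming a boolean road mask like
# the one produced by the detection sketch above.

import numpy as np

def steering_from_mask(road_mask: np.ndarray, gain: float = 1.0) -> float:
    """Return a steering command in [-1, 1] from a boolean road mask
    (negative = steer left, positive = steer right)."""
    h, w = road_mask.shape
    lower = road_mask[h // 2:, :]          # look at the road nearest the robot
    cols = np.nonzero(lower)[1]
    if cols.size == 0:
        return 0.0                         # no road detected: hold course/stop
    # Normalized lateral offset of the road centroid from the image center.
    offset = (cols.mean() - (w - 1) / 2) / (w / 2)
    return float(np.clip(gain * offset, -1.0, 1.0))

print(steering_from_mask(np.ones((8, 8), dtype=bool)))  # centered road -> 0.0
```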


Society of Instrument and Control Engineers of Japan | 2017

Development of nodding detection using neural network based on communication characteristics

Shunsuke Ota; Mitsuru Jindai; Toshiyuki Yasuda; Yoshihiro Sejima

For smooth communication, humans use not only verbal information but also non-verbal information such as nodding. Nodding can be defined as the action of rhythmically moving the head vertically up and down. It plays an important role as a form of non-verbal information for showing approval, agreement, or understanding during communication. However, a system that detects nodding by integrating head motion and voice rhythm has not been proposed. Therefore, in this paper, we develop nodding detection using a neural network (NN) based on the communication characteristics of head motion and voice rhythm. First, the voice data of the speaker and the head motion of the listener are measured. Nodding is then detected from the measured data using the NN.
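A minimal sketch of the input representation implied above: a fixed window of the listener's head motion and the speaker's voice rhythm, fed to a small feed-forward network. The window length, features, and architecture are assumptions for illustration, not the paper's design.

```python
# Hypothetical feature construction and a tiny NN forward pass for
# nod detection; signals, sizes, and weights are placeholders.

import numpy as np

def make_feature(head_pitch: np.ndarray, voice: np.ndarray,
                 window: int = 30) -> np.ndarray:
    """Build one feature vector from the last `window` samples of the
    listener's head pitch and the speaker's voice signal."""
    pitch_vel = np.diff(head_pitch[-window - 1:])  # head motion rhythm
    energy = voice[-window:] ** 2                  # voice rhythm proxy
    return np.concatenate([pitch_vel, energy])

def tiny_nn(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> float:
    """One-hidden-layer network: probability that the window is a nod."""
    h = np.tanh(x @ w1)
    return float(1.0 / (1.0 + np.exp(-(h @ w2))))

rng = np.random.default_rng(0)
x = make_feature(rng.standard_normal(64), rng.standard_normal(64))
p_nod = tiny_nn(x, rng.standard_normal((x.size, 8)), rng.standard_normal(8))
```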


Society of Instrument and Control Engineers of Japan | 2017

Development of hand-up response motion model in with and without voice greeting environment

Nuradam Bin Azmi; Shunsuke Ota; Toshiyuki Yasuda; Mitsuru Jindai

Greeting interactions with physical contact, such as robot handshake motions, have been discussed in previous papers. In these papers, researchers discussed the robot's response to a human's handshake request in environments with and without a voice greeting. A previous study showed that, during a handshake greeting between a human and a robot, starting the voice greeting before the hand motion elicits favorable feedback from humans [1]. On the other hand, greeting interactions without physical contact, such as robot hand-up motions, have not been discussed. Therefore, in this paper, a hand-up response motion model is proposed for generating hand-up greetings with a human in environments with and without a voice greeting. First, mutual hand-up greetings between humans in both environments are analyzed. Based on the differences in human motion characteristics during mutual hand-up greetings in the two environments, a hand-up response motion model is proposed in which a robot responds to a human's request for a hand-up greeting. Using the developed hand-up robot motion system, the effectiveness of the motion model is demonstrated through sensory evaluation experiments. Subsequently, the differences in the motion characteristics of the robot's response to hand-up greeting interactions preferred by humans, with and without a voice greeting, are discussed.


International Conference on Ultra Modern Telecommunications | 2015

Handshake response motion model with approaching of human based on an analysis of human handshake motions

Shunsuke Ota; Mitsuru Jindai; Tohru Sasaki; Yusuke Ikemoto

Humans often greet one another with a handshake, a common gesture of friendship. In interactions between humans and robots, the robot can smoothly begin to coexist and communicate with humans, and allay feelings of aversion in them, by generating a handshake motion that is emotionally acceptable to humans. Thus, in our previous study, a handshake response motion model was proposed, in which the robot generates a handshake approaching motion before actually shaking hands with the human. In this model, the robot responds with a handshake when the human requests one. The effectiveness of the model was demonstrated by experiments using a handshake robot system in which the human and robot remained stationary. However, in interactions between two humans, one usually requests a handshake while approaching the other. Therefore, in this paper, a handshake response motion model with an approaching human is proposed, based on an analysis of human handshake motions. In this model, the robot generates the response motion of a handshake when a human requests one while approaching the robot. Furthermore, the effectiveness of the proposed model was demonstrated by sensory evaluation using a robot system that employs the model.
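The abstract does not state the trajectory model used for the robot's reaching motion; a minimum-jerk profile is one standard choice for generating smooth, human-acceptable hand motions, sketched below under that assumption.

```python
# A minimum-jerk reaching sketch for the handshake response motion: the hand
# moves from a rest pose to an assumed meeting point with the human's hand.
# The poses and duration are illustrative, not the authors' parameters.

import numpy as np

def minimum_jerk(x0: np.ndarray, x1: np.ndarray,
                 duration_s: float, t: float) -> np.ndarray:
    """Hand position at time t for a minimum-jerk move from x0 to x1."""
    s = np.clip(t / duration_s, 0.0, 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # smooth 0 -> 1 profile
    return x0 + (x1 - x0) * blend

rest = np.array([0.0, -0.2, 0.8])   # robot hand at its side (x, y, z in m)
meet = np.array([0.4, 0.0, 1.0])    # expected meeting point with the human
for t in (0.0, 0.4, 0.8):
    print(minimum_jerk(rest, meet, duration_s=0.8, t=t))
```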


IEEE/SICE International Symposium on System Integration | 2014

A handshake response motion model during active approach to a human

Shunsuke Ota; Mitsuru Jindai; Tadao Fukuta; Tomio Watanabe

A handshake is an embodied interaction for displaying closeness through physical contact. In this study, we develop a handshake response motion model for a handshake during active approach to a human, on the basis of an analysis of handshake motions between humans, and we build a handshake robot system that uses the proposed model. A sensory evaluation is performed to analyze the time lag preferred by humans between the approaching motion and the hand motion generated by the robot system. Another sensory evaluation determines the preferred timing of a handshake motion accompanied by a voice greeting. The evaluation results demonstrate that the proposed model can generate a handshake response motion during active approach that is preferred by humans, confirming the model's effectiveness.
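To make the timing question concrete, the sketch below schedules the robot's hand motion and voice greeting relative to the start of its approach. The lag values are placeholders, not the preferences measured in the paper; `hand_lag_s` and `voice_lag_s` are hypothetical parameters.

```python
# Illustrative-only timing schedule for the approach / hand motion / voice
# greeting sequence studied above.

from dataclasses import dataclass

@dataclass
class GreetingSchedule:
    approach_start_s: float
    hand_lag_s: float = 0.3   # hand motion starts this long after approach
    voice_lag_s: float = 0.1  # voice greeting offset relative to hand motion

    def hand_start(self) -> float:
        return self.approach_start_s + self.hand_lag_s

    def voice_start(self) -> float:
        # Related handshake studies found a voice greeting slightly before
        # the hand motion preferable; treat the offset as tunable.
        return self.hand_start() - self.voice_lag_s

s = GreetingSchedule(approach_start_s=0.0)
print(s.hand_start(), s.voice_start())  # hand at 0.3 s, voice just before it
```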


IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems | 2012

A small-size handshake robot system for a generation of handshake approaching motion

Mitsuru Jindai; Shunsuke Ota; Hitoshi Yamauchi; Tomio Watanabe

Humans often greet one another with a handshake, a common gesture of friendship. In the case of a human and a robot, the robot can smoothly begin to communicate and coexist with humans, without arousing feelings of aversion, if it generates a handshake motion that is emotionally acceptable to humans. Thus, we have proposed a handshake request motion model and a handshake response motion model for generating a handshake approaching motion prior to actually shaking hands with the human. In the handshake request motion model, the robot stretches its hand out to a human to request a handshake. In the handshake response motion model, the robot responds with a handshake when the human requests one. The effectiveness of these models was demonstrated by experiments using a handshake robot system fabricated based on the average size of a human arm. In this paper, we develop a small-size handshake robot system that uses the handshake request motion model and the handshake response motion model to generate a handshake approaching motion with a human. The effectiveness of the robot system is demonstrated by sensory evaluation. Furthermore, a handshake motion between robots is realized by using two small-size handshake robot systems that adopt these models.


MATEC Web of Conferences | 2016

A Hug Behavior Generation Model Based on Analyses of Human Behaviors for Hug Robot System

Mitsuru Jindai; Shunsuke Ota; Tohru Sasaki


Transactions of the JSME (in Japanese) | 2015

A mobile handshake robot system for generation of handshake request motion during active approach to a human

Shunsuke Ota; Mitsuru Jindai; Hitoshi Yamauchi; Tomio Watanabe

Collaboration


Dive into Shunsuke Ota's collaboration.

Top Co-Authors

Mitsuru Jindai
Okayama Prefectural University

Tomio Watanabe
Okayama Prefectural University

Hitoshi Yamauchi
Okayama Prefectural University

Yoshihiro Sejima
Okayama Prefectural University

Hironori Takimoto
Okayama Prefectural University