
Publication


Featured research published by Junichi Sugiyama.


World Haptics Conference | 2011

Belt tactile interface for communication with mobile robot allowing intelligent obstacle detection

Dzmitry Tsetserukou; Junichi Sugiyama; Jun Miura

This paper presents a novel belt tactile interface and telepresence system for mobile robot control. The robotic system consists of a mobile robot and a wearable master robot. The elaborated algorithms allow the robot to precisely recognize the shape, boundaries, movement direction, speed, and distance of an obstacle by means of laser range finders. The tactile belt interface receives the detected information and maps it to vibrotactile patterns. We designed the patterns to convey the obstacle parameters in an intuitive, robust, and unobtrusive manner. The robot's movement direction and speed are governed by the tilt of the user's torso; sensors embedded in the belt interface measure the user's orientation and gestures precisely. Such an interface deeply engages the user in the teleoperation process while delivering tactile perception of the remote environment. Crucially, the user's arms, hands, and fingers remain free to operate the robotic manipulators and other devices installed on the mobile robot platform. A user study confirmed the effectiveness of the designed vibration patterns for presenting obstacle parameters: participants detected moving objects with 100% accuracy. We believe the developed robotic system has significant potential for facilitating mobile robot navigation while providing a high degree of immersion in the remote space.
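A minimal sketch of the kind of mapping such a belt could use, assuming eight tactors evenly spaced around the waist; the tactor layout, intensity law, and all names here are illustrative assumptions, not the authors' implementation:

```python
import math

NUM_TACTORS = 8  # tactors evenly spaced around the belt (assumed layout)

def obstacle_to_pattern(bearing_rad, distance_m, max_range_m=4.0):
    """Map an obstacle's bearing and distance to per-tactor vibration
    intensities in [0, 1]: the tactor facing the obstacle vibrates,
    and intensity grows linearly as the obstacle approaches."""
    intensities = [0.0] * NUM_TACTORS
    sector = 2.0 * math.pi / NUM_TACTORS
    # Pick the tactor whose direction is closest to the obstacle bearing.
    idx = int(round(bearing_rad / sector)) % NUM_TACTORS
    intensities[idx] = max(0.0, 1.0 - distance_m / max_range_m)
    return intensities
```

An obstacle dead ahead at half the maximum range would then drive the front tactor at half intensity, which keeps the cue both localized and graded.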


International Conference on Computer Graphics and Interactive Techniques | 2011

NAVIgoid: robot navigation with haptic vision

Junichi Sugiyama; Dzmitry Tsetserukou; Jun Miura

A telepresence robotic system allows a person to feel as if they were present at a place other than their true location. The sense of telexistence is provided through stimuli such as vision, hearing, and touch [1]. The user of such a system can act on the remote location, so the user's position and actions must be sensed and transmitted to the remote robot (teleoperation).


Robotics and Biomimetics | 2011

A wearable robot control interface based on measurement of human body motion using a camera and inertial sensors

Junichi Sugiyama; Jun Miura

This paper describes a wearable robot control interface based on measurement of human body motion, which allows a user to control a robot intuitively. The wearable interface is composed of a camera and inertial sensors and estimates the user's body motion: movement and arm motion. The estimated motion is then converted into movement or arm-motion commands for a humanoid robot. Body motion estimation combines monocular SLAM with EKF-based motion estimation using the inertial sensors. Arm motion is estimated with a Wii Remote carrying a visual marker, using the same motion model and sensor-data integration. We implemented efficient algorithms, especially for image processing, to achieve real-time motion estimation. Motion estimation and robot control experiments show the effectiveness of the proposed method.
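The vision-inertial fusion idea can be illustrated with a much-simplified filter: a 1-D constant-velocity Kalman filter whose prediction is driven by an accelerometer reading and whose correction comes from a visual position fix. This is a sketch of the general technique only; the state, noise values, and function names are assumptions, not the paper's EKF:

```python
import numpy as np

def kf_predict(x, P, accel, dt, q=0.05):
    """Predict step of a 1-D constant-velocity filter driven by an
    accelerometer reading. State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    B = np.array([0.5 * dt**2, dt])          # acceleration input model
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)          # inflate covariance by process noise
    return x, P

def kf_update(x, P, z_pos, r=0.1):
    """Correct the state with a position fix from the vision (SLAM) side."""
    H = np.array([[1.0, 0.0]])               # we observe position only
    y = z_pos - H @ x                        # innovation
    S = H @ P @ H.T + r                      # innovation covariance
    K = P @ H.T / S                          # Kalman gain (2x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding repeated visual fixes at the same position drives the estimate toward that position while the covariance settles, which is the behavior the fused estimator relies on.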


Robot and Human Interactive Communication | 2009

Development of a vision-based interface for instructing robot motion

Junichi Sugiyama; Jun Miura

This paper describes a vision-based interface for instructing robot motion easily. An interface that makes the robot move in the same way as the user's motion enables intuitive motion instruction. Such an interface can be realized by estimating the pose (position and orientation) of the interface and executing move commands that make the robot take the same pose. We estimate the pose of the interface with a monocular SLAM method based on visual features and the extended Kalman filter. Additionally using an orientation sensor and an accelerometer improves the reliability and accuracy of the pose estimation. From the estimated pose, target values for the robot position and head orientation are set, and the robot moves to achieve them. We implemented an experimental system that runs in real time (30 Hz) and successfully applied it to controlling a humanoid robot.
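The "set a target pose and move to achieve it" step is commonly realized with a simple proportional controller. A minimal sketch under that assumption (the gains, limits, and names are illustrative, not taken from the paper):

```python
import math

def clamp(v, lim):
    """Limit a command to [-lim, lim]."""
    return max(-lim, min(lim, v))

def follow_pose(target_xy, robot_xy, target_yaw, robot_yaw,
                kp_lin=0.8, kp_ang=1.5, v_max=0.5, w_max=1.0):
    """Proportional tracking of an estimated target pose: returns a
    (linear, angular) velocity command that shrinks the pose error."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    v = clamp(kp_lin * math.hypot(dx, dy), v_max)
    # Wrap the heading error to [-pi, pi] before applying the gain.
    dyaw = math.atan2(math.sin(target_yaw - robot_yaw),
                      math.cos(target_yaw - robot_yaw))
    w = clamp(kp_ang * dyaw, w_max)
    return v, w
```

Clamping keeps the commands within the robot's velocity limits even when the estimated pose jumps, which matters when the pose source is a noisy visual estimate.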


Human-Robot Interaction | 2013

A wearable visuo-inertial interface for humanoid robot control

Junichi Sugiyama; Jun Miura

This paper describes a wearable visuo-inertial interface for humanoid robot control, which allows a user to control the motion of a humanoid robot intuitively. The interface is composed of a camera and inertial sensors and estimates the user's body motion: movement (walking), hand motion, and grasping gestures. Body motion (walking) estimation combines monocular SLAM with vision-inertial fusion using an extended Kalman filter. Hand motion is estimated with the same motion model and sensor fusion as the body motion estimation. The estimated motion was used to drive the movement and arm motion of the humanoid robot. We conducted robot operation experiments; the results showed that the user could control the robot intuitively and that the robot responded correctly to the operator's commands.


IAS | 2016

Human-Robot Collaborative Remote Object Search

Jun Miura; Shin Kadekawa; Kota Chikaarashi; Junichi Sugiyama

Object search is a typical task for remotely controlled service robots. Although object recognition technologies are well developed, an efficient search strategy (or viewpoint planning method) remains an open issue. This paper describes a new approach to human-robot collaborative remote object search. An analogy for our approach is riding on someone's shoulders: a user controls a fish-eye camera on a remote robot to change views and search for a target object independently of the robot. Combined with a certain level of automatic search capability on the robot's side, this collaboration realizes an efficient target object search. We developed an experimental system to show the feasibility of the approach.
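One simple way to combine an automatic search policy with a human hint is to rank candidate viewpoints by expected new coverage and boost any viewpoint that covers a cell the user has pointed out. This is a hypothetical sketch of that combination, not the system described in the paper; the data layout and scoring rule are assumptions:

```python
def rank_viewpoints(coverage, seen, user_hint=None, hint_weight=3.0):
    """Rank candidate viewpoints for an object search, best first.

    coverage: dict mapping viewpoint -> set of grid cells it observes.
    seen: set of cells already observed.
    user_hint: an optional cell the human operator flagged as promising;
    viewpoints covering it receive a score bonus."""
    def score(v):
        new_cells = coverage[v] - seen          # information gain proxy
        bonus = hint_weight if user_hint in coverage[v] else 0.0
        return len(new_cells) + bonus
    return sorted(coverage, key=score, reverse=True)
```

With no hint, the robot greedily maximizes new coverage; a hint from the operator reorders the plan toward the flagged region without taking control away from the automatic search.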


Archive | 2011

Rice-processed material and method for producing the same

Junichi Sugiyama; Mizuki Tsuta; Mario Shibata; Kaori Tomita


Journal of Cereal Science | 2018

Effect of Retort treatment on physicochemical properties of high-amylose rice gel made by high-speed shear treatment

Keisuke Isaka; Mario Shibata; Marin Osawa; Junichi Sugiyama; Tomoaki Hagiwara


KAGAKU TO SEIBUTSU | 2015

Fluorescence Fingerprint for Food Evaluation and Its Applications

Mizuki Tsuta; Junichi Sugiyama


The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) | 2014

1A1-N05 Human-Robot Collaborative Remote Object Search (Robots for Home/Office Application)

Shin Kadekawa; Kouta Chikaarashi; Junichi Sugiyama; Jun Miura

Collaboration


Dive into Junichi Sugiyama's collaborations.

Top Co-Authors

- Jun Miura (Toyohashi University of Technology)
- Mario Shibata (National Agriculture and Food Research Organization)
- Mizuki Tsuta (National Agriculture and Food Research Organization)
- Dzmitry Tsetserukou (Toyohashi University of Technology)
- Shin Kadekawa (Toyohashi University of Technology)
- Atsushi Shigemura (Toyohashi University of Technology)
- Hiraki Goto (Toyohashi University of Technology)
- Hiroaki Masuzawa (Toyohashi University of Technology)
- Junji Satake (Toyohashi University of Technology)