Publication


Featured research published by Tsuyoshi Tasaki.


Intelligent Robots and Systems | 2005

Spatially mapping of friendliness for human-robot interaction

Tsuyoshi Tasaki; Kazunori Komatani; Tetsuya Ogata; Hiroshi G. Okuno

It is important that robots interact with multiple people. However, most research has dealt only with interaction between one robot and one person and has assumed that the distance between them does not change. This paper focuses on the spatial relationships between a robot and multiple people during interaction. Based on the distance between them, our robot selects appropriate functions to use. It does this using a method we developed for spatially mapping the friendliness of each space around the robot. The robot selectively interacts with the spaces (people) that have the highest friendliness, thereby enabling interaction between the robot and multiple people. Our humanoid robot SIG2, in which the proposed method was implemented, interacted with about 30 visitors at the Kyoto University Museum. Questionnaire results gathered after the interaction showed that the actions of SIG2 were easy to understand even when it interacted with multiple people at the same time, and that SIG2 behaved in a friendly manner.
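The mapping idea lends itself to a compact sketch. Below is a minimal, hypothetical illustration of the approach described above: the space around the robot is divided into angular sectors, each sector accumulates a friendliness score from interaction stimuli, and the robot turns toward the highest-scoring sector. The class, sector layout, decay rule, and all names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of friendliness mapping: discrete spaces around the
# robot accumulate friendliness from interaction stimuli, and the robot
# selectively interacts with the highest-friendliness space.
import math

class FriendlinessMap:
    def __init__(self, n_sectors=8, decay=0.95):
        self.scores = [0.0] * n_sectors   # one friendliness score per angular sector
        self.n_sectors = n_sectors
        self.decay = decay                # old interactions gradually lose weight

    def sector_of(self, angle_rad):
        """Map a direction (radians, robot frame) to a sector index."""
        return int((angle_rad % (2 * math.pi)) / (2 * math.pi) * self.n_sectors)

    def add_stimulus(self, angle_rad, strength):
        """Raise friendliness where a person spoke to or touched the robot."""
        self.scores[self.sector_of(angle_rad)] += strength

    def step(self):
        """Apply per-step decay so friendliness reflects recent interaction."""
        self.scores = [s * self.decay for s in self.scores]

    def most_friendly_sector(self):
        """The robot interacts with the highest-friendliness space."""
        return max(range(self.n_sectors), key=lambda i: self.scores[i])

fmap = FriendlinessMap()
fmap.add_stimulus(angle_rad=0.3, strength=1.0)   # e.g. a greeting from the front-left
fmap.step()
print(fmap.most_friendly_sector())
```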


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2005

Distance-based dynamic interaction of humanoid robot with multiple people

Tsuyoshi Tasaki; Shohei Matsumoto; Hayato Ohba; Mitsuhiko Toda; Kazunori Komatani; Tetsuya Ogata; Hiroshi G. Okuno

Research on human-robot interaction is attracting an increasing amount of attention. Because almost all the research has dealt with communication between one robot and one person, very little is known about communication between a robot and multiple people. We developed a method that enables robots to communicate with multiple people by selecting an interactive partner using criteria based on the concept of proxemics. In this method, a robot changes its active sensory-motor modalities based on the interaction distance between itself and a person. Our method was implemented in a humanoid robot, SIG2, using a subsumption architecture. SIG2 has various sensory-motor modalities for interacting with humans. A demonstration of SIG2 showed that the proposed method works well during interaction with multiple people. This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research No.15200015 and No.1601625, and the COE Program of the Informatics Research Center for Development of Knowledge Society Infrastructure.
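As a rough illustration of the distance-based switching described above, the sketch below maps an interaction distance to a set of active modalities. The distance bands (loosely following proxemics) and the modality names are assumptions, not the thresholds used in the paper.

```python
# Illustrative sketch of proxemics-inspired modality selection: which
# sensory-motor modalities stay active depends on the partner's distance.
def select_modalities(distance_m: float) -> list[str]:
    """Choose active modalities for a partner at the given distance."""
    if distance_m < 0.5:        # roughly "intimate" distance
        return ["touch_sensors", "speech", "face_tracking"]
    elif distance_m < 1.2:      # roughly "personal" distance
        return ["speech", "face_tracking", "gesture"]
    elif distance_m < 3.5:      # roughly "social" distance
        return ["face_tracking", "gesture"]
    else:                       # "public" distance: only coarse behaviors
        return ["body_orientation"]

for d in (0.3, 1.0, 2.0, 5.0):
    print(d, select_modalities(d))
```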


IEEE/SICE International Symposium on System Integration | 2013

Spatio-temporal bird's-eye view images using multiple fish-eye cameras

Taka-Aki Sato; Alessandro Moro; Atsushi Sugahara; Tsuyoshi Tasaki; Atsushi Yamashita; Hajime Asama

In camera images for teleoperation of urban search and rescue (USAR) robots, it is important to reduce blind spots and capture as much of the surroundings as possible to meet safety requirements. We propose a method to create synthesized bird's-eye view images from multiple fish-eye cameras as spatio-temporal data, which can reduce blind spots. In practical use, it is very important to obtain images robustly even when problems such as camera failure or network disturbances occur. Our method keeps showing bird's-eye view images robustly even if some images are not acquired, by compensating for the missing images with previously stored spatio-temporal data. The effectiveness of the proposed method is verified through experiments.
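The compensation idea can be sketched compactly: keep the most recent bird's-eye patch per camera, and fall back to it when a fresh frame does not arrive. The warp_to_birds_eye placeholder and the per-camera cache below are hypothetical, a minimal sketch rather than the paper's pipeline.

```python
# Sketch of temporal compensation for a multi-camera bird's-eye view:
# reuse the last successfully warped patch when a camera frame is lost.
import numpy as np

def warp_to_birds_eye(frame):
    """Placeholder for the fish-eye-to-ground-plane projection."""
    return frame  # a real system would undistort and homography-warp here

last_good_patch = {}  # camera_id -> most recently warped bird's-eye patch

def birds_eye_patch(camera_id, frame):
    """Return a fresh patch when the frame arrived, else the stored one."""
    if frame is not None:
        last_good_patch[camera_id] = warp_to_birds_eye(frame)
        return last_good_patch[camera_id]
    # Frame lost (camera failure, network dropout): reuse stored data.
    return last_good_patch.get(camera_id)

print(birds_eye_patch("front", np.zeros((4, 4))))  # fresh frame
print(birds_eye_patch("front", None))              # dropout: stored patch
```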


Intelligent Robots and Systems | 2010

Mobile robot self-localization based on tracked scale and rotation invariant feature points by using an omnidirectional camera

Tsuyoshi Tasaki; Seiji Tokura; Takafumi Sonoura; Fumio Ozaki; Nobuto Matsuhira

Self-localization is important for mobile robots to move accurately, and many works use an omnidirectional camera for self-localization. However, it is difficult to realize fast and accurate self-localization using only one omnidirectional camera without any calibration. To realize this, we use "tracked scale and rotation invariant feature points" as landmarks. These landmarks can be tracked and do not change for a "long" time. In a landmark selection phase, robots detect the feature points using both a fast tracking method and a slow "Speeded-Up Robust Features (SURF)" method. After detection, robots select landmarks from among the detected feature points using a Support Vector Machine (SVM) trained on feature vectors based on observation positions. In a self-localization phase, robots detect landmarks while switching detection methods dynamically based on a tracking error criterion that is easily calculated even in the uncalibrated omnidirectional image. We performed experiments in an approximately 10 [m] × 10 [m] mock supermarket using a navigation robot, ApriTau™, that had an omnidirectional camera on its top. The results showed that ApriTau™ could localize 2.9 times faster and 4.2 times more accurately with the developed method than with the SURF method alone. The results also showed that ApriTau™ could arrive at a goal within a 3 [cm] error from various initial positions in the mock supermarket.
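A minimal sketch of the dynamic switching, assuming a scalar tracking-error criterion: the fast tracker runs every frame, and the slow SURF-style detector runs only when the error or landmark count degrades. The helper functions and both thresholds are illustrative stand-ins, not the paper's implementation.

```python
# Sketch of switching between a fast tracker and a slow, robust detector
# based on a tracking-error criterion.
ERROR_THRESHOLD = 5.0   # assumed pixel-error threshold
MIN_LANDMARKS = 10      # assumed minimum landmark count

def track_points(image, landmarks):
    """Stand-in for a fast frame-to-frame tracker: returns the tracked
    points and a scalar tracking-error estimate."""
    return landmarks, 0.0

def detect_landmarks(image):
    """Stand-in for the slow, robust detector (SURF-like)."""
    return [(0.0, 0.0)] * MIN_LANDMARKS

def localize_step(image, landmarks):
    tracked, error = track_points(image, landmarks)     # fast path
    if error > ERROR_THRESHOLD or len(tracked) < MIN_LANDMARKS:
        tracked = detect_landmarks(image)               # slow re-detection
    return tracked  # pose estimation from the landmarks would follow here
```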


Robot and Human Interactive Communication | 2009

Robotic transportation system for shopping support services

Seiji Tokura; Takafumi Sonoura; Tsuyoshi Tasaki; Nobuto Matsuhira; Masahito Sano; Kiyoshi Komoriya

A robotic transportation system for shopping support services has been developed. The system is required to act according to the customer's request and position in order to provide services. In collaboration with mobile robots, the system provides shopping support services.


Advanced Robotics | 2014

The interaction between a robot and multiple people based on spatially mapping of friendliness and motion parameters

Tsuyoshi Tasaki; Tetsuya Ogata; Hiroshi G. Okuno

We aim to achieve interaction between a robot and multiple people. For this, robots should localize people, select an interaction partner, and act appropriately for him/her. It is difficult to deal with all these problems using only the sensors installed in the robots. We focus on the fact that people keep rough interaction distances from one another. We divide this interaction area into different spaces based on both the interaction distances and the sensing abilities of robots. Our robots localize people roughly within these divided spaces. To select an interaction partner, they map friendliness, which holds the interaction history, onto the divided spaces and integrate the sensor information. Furthermore, we developed a method for appropriately changing the motions and their sizes and speeds based on the distance. Our robots regard the divided spaces as Q-learning states and learn the motion parameters. Our robot interacted with 27 visitors. It localized a partner with an F-value of 0.76 through integration, which is higher than that of any single sensor. A factor analysis was performed on the questionnaire results. "Exciting" and "Friendly" were representative of the first and second factors, respectively. For both factors, a motion with friendliness yielded higher impression scores than one without friendliness.
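As a sketch of the learning setup, the snippet below treats the divided spaces as Q-learning states and discretized motion sizes and speeds as actions. The state and action sets, reward signal, and hyperparameters are assumptions for illustration, not the paper's.

```python
# Minimal Q-learning sketch: divided spaces as states, motion parameters
# (size, speed) as actions.
import random
from collections import defaultdict

STATES = ["intimate", "personal", "social", "public"]     # divided spaces
ACTIONS = [(size, speed) for size in ("small", "large")
                         for speed in ("slow", "fast")]   # motion parameters

Q = defaultdict(float)   # (state, action) -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def choose_action(state):
    if random.random() < EPSILON:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

update("personal", ("small", "slow"), reward=1.0, next_state="personal")
print(choose_action("personal"))
```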


International Symposium on Mixed and Augmented Reality | 2012

Depth perception control by hiding displayed images based on car vibration for monocular head-up display

Tsuyoshi Tasaki; Akihisa Moriya; Aira Hotta; Takashi Sasaki; Haruhiko Okumura

We have developed a novel depth perception control method for a monocular head-up display (HUD) in a car. However, it is difficult to achieve accurate depth perception in the real world because of car vibration. To resolve this problem, we focus on the property that people perceptually complete hidden images from previously observed continuous images. We hide the image on the HUD while the car is vibrating, aiming to indicate the correct depth position by having users complete the hidden image positions from the continuous images shown before the vibration. We built a test car equipped with our monocular HUD that detects large vibrations with an acceleration sensor. Our method indicated the depth position within a 3.4 [m] error, which was two times more accurate than the previous method.
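The hiding rule itself is simple enough to sketch: blank the HUD image whenever the measured acceleration exceeds a threshold, and rely on the viewer to perceptually complete the briefly hidden image. The threshold value and sensor interface below are assumed, not taken from the paper.

```python
# Sketch of vibration-gated HUD display: hide the image during large
# vibrations, show it otherwise.
VIBRATION_THRESHOLD = 2.0  # assumed vertical-acceleration threshold [m/s^2]

def hud_visible(vertical_accel_mps2: float) -> bool:
    """Hide the displayed image during large vibrations."""
    return abs(vertical_accel_mps2) < VIBRATION_THRESHOLD

for a in (0.1, 1.5, 3.2):
    print(a, "show" if hud_visible(a) else "hide")
```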


Intelligent Robots and Systems | 2009

Obstacle classification and location by using a mobile omnidirectional camera based on tracked floor boundary points

Tsuyoshi Tasaki; Fumio Ozaki

Locating all obstacles around a moving robot and classifying them as stable or not with a sensor such as an omnidirectional camera is essential for the robot's smooth movement, and using a single camera avoids the problems of calibrating many cameras. However, there are few works on locating and classifying all obstacles around a robot while it is moving, using only one omnidirectional camera. To locate obstacles, we regard floor boundary points, whose distance from the robot can be measured in a single omnidirectional image, as obstacles. By tracking them, we can classify obstacles by comparing the movement of each tracked point with odometry data. Moreover, our method changes the threshold used to detect the points based on the comparison results in order to improve classification. The classification ratio of our method is 85.0%, which is four times higher than that of a method that does not change the detection parameter.
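A minimal sketch of the classification rule, assuming points are expressed in the robot frame: a tracked floor-boundary point whose observed displacement matches the displacement predicted from odometry is a static obstacle, while a large mismatch marks a moving one. The predict_from_odometry helper and the threshold are illustrative assumptions.

```python
# Sketch of classifying a tracked floor-boundary point by comparing its
# observed motion with the motion predicted from robot odometry.
import math

MATCH_THRESHOLD = 0.15  # assumed mismatch threshold [m]

def predict_from_odometry(point_xy, dx, dy, dtheta):
    """Where a static point should appear after the robot moves (dx, dy, dtheta)."""
    x, y = point_xy[0] - dx, point_xy[1] - dy
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * x - s * y, s * x + c * y)

def classify(point_before, point_after, dx, dy, dtheta):
    px, py = predict_from_odometry(point_before, dx, dy, dtheta)
    mismatch = math.hypot(point_after[0] - px, point_after[1] - py)
    return "static" if mismatch < MATCH_THRESHOLD else "moving"

print(classify((2.0, 0.0), (1.9, 0.0), dx=0.1, dy=0.0, dtheta=0.0))  # static
```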


Journal of Robotics | 2011

People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot

Tsuyoshi Tasaki; Fumio Ozaki; Nobuto Matsuhira; Tetsuya Ogata; Hiroshi G. Okuno

Navigation robots must single out partners requiring navigation and move in cluttered environments where people walk around. Developing such robots requires two different kinds of people detection: detecting partners and detecting all moving people around the robot. For detecting partners, we design divided spaces based on spatial relationships and sensing ranges. By mapping the friendliness of each divided space based on stimuli from multiple sensors, robots detect people who actively call to them and select the partner in the space with the highest friendliness. For detecting moving people, we regard objects' floor boundary points in an omnidirectional image as obstacles. We classify obstacles as moving people by comparing the movement of each point with the robot's movement using odometry data, dynamically changing the detection thresholds. Our robot detected 95.0% of partners while standing by and interacting with people, and 85.0% of moving people while moving, which is four times higher than previous methods achieved.
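The two detection modes described above could be dispatched as in the sketch below: friendliness-based partner detection while standing by, floor-boundary classification while moving. Both detector functions and the sensor dictionary are hypothetical stand-ins for the methods sketched earlier.

```python
# Sketch of dispatching between the two people detectors by robot state.
def detect_partner_by_friendliness(sensor_data):
    """Stand-in: pick the partner in the highest-friendliness space."""
    return sensor_data.get("partner_candidates", [])[:1]

def detect_moving_people_by_boundaries(sensor_data):
    """Stand-in: classify tracked floor-boundary points against odometry."""
    return sensor_data.get("moving_points", [])

def detect_people(robot_is_moving, sensor_data):
    if robot_is_moving:
        return detect_moving_people_by_boundaries(sensor_data)
    return detect_partner_by_friendliness(sensor_data)

print(detect_people(False, {"partner_candidates": ["visitor_A", "visitor_B"]}))
```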


Archive | 2010

In-vehicle display device and display method

Takashi Sasaki; Aira Hotta; Akihisa Moriya; Tsuyoshi Tasaki; Haruhiko Okumura

Collaboration


Dive into Tsuyoshi Tasaki's collaborations.

Top Co-Authors


Nobuto Matsuhira

Shibaura Institute of Technology
