Kazuto Nakashima
Kyushu University
Publications
Featured research published by Kazuto Nakashima.
Advanced Robotics | 2018
Kazuto Nakashima; Hojung Jung; Yuki Oto; Yumi Iwashita; Ryo Kurazume; Óscar Martínez Mozos
Semantic place categorization, one of the essential tasks for autonomous robots and vehicles, enables them to make decisions and navigate on their own in unfamiliar environments. Outdoor places in particular are more difficult targets than indoor ones due to perceptual variations, such as illumination that changes over the course of a day and occlusions caused by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs) that take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs and labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and demonstrate the effectiveness of using both the depth and reflectance modalities. To analyze the trained networks, we visualize the learned features.
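The paper does not include code, but the projection step it describes can be illustrated. Below is a minimal sketch, assuming a NumPy point cloud, of how a 3D LiDAR scan might be converted into the kind of two-channel panoramic depth/reflectance image a CNN could consume; the image size and vertical field of view are illustrative assumptions, not values from the paper.

# Minimal sketch: project a 3D LiDAR point cloud onto a panoramic
# depth/reflectance image via spherical projection. Image size and
# vertical field of view are illustrative assumptions.
import numpy as np

def pointcloud_to_panorama(points, reflectance, h=64, w=1024,
                           fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)          # range to each point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))  # elevation angle

    # Normalize angles to pixel coordinates.
    u = ((1.0 - (yaw / np.pi)) * 0.5 * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)

    depth_img = np.zeros((h, w), dtype=np.float32)
    refl_img = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return per pixel: write far points first.
    order = np.argsort(-depth)
    depth_img[v[order], u[order]] = depth[order]
    refl_img[v[order], u[order]] = reflectance[order]
    # Stack as a 2-channel input for a CNN.
    return np.stack([depth_img, refl_img], axis=0)

The resulting (2, h, w) array can then be fed to a standard 2D CNN, which is what lets image-based architectures operate on LiDAR data.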
International Conference on Robotics and Automation | 2017
Ryo Kurazume; Yoonseok Pyo; Kazuto Nakashima; Akihiro Kawamura; Tokuo Tsuji
This paper proposes new software and hardware platforms, named ROS-TMS and Big Sensor Box respectively, for an informationally structured environment. We started developing a management system for an informationally structured environment, named Town Management System (TMS), in the Robot Town Project in 2005, and have since continued to improve its performance and extend its functions. Recently, we launched a new version of TMS named ROS-TMS, which resolves some critical problems in TMS by adopting the Robot Operating System (ROS) and exploiting its high scalability and rich ecosystem of resources. In this paper, we first discuss the structure of a software platform for the informationally structured environment and describe our latest system, ROS-TMS version 4.0, in detail. Next, we introduce a hardware platform for the informationally structured environment named Big Sensor Box, in which a variety of sensors are embedded and service robots operate according to the structured information under the management of ROS-TMS. Robot service experiments, including a fetch-and-give task and autonomous control of a wheelchair robot, are also conducted in Big Sensor Box.
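As a rough illustration of the pattern ROS-TMS builds on, here is a minimal sketch of a ROS (rospy) node that subscribes to sensed object poses and caches them in an environment database. The topic name, message type, and in-memory store are assumptions for illustration; the actual ROS-TMS defines its own message types and uses a persistent database.

# Minimal sketch of the ROS-TMS pattern: a node that subscribes to
# sensed object poses and caches them in an environment database.
import rospy
from geometry_msgs.msg import PoseStamped

environment_db = {}  # object id -> latest sensed pose

def on_object_pose(msg):
    # frame_id is (ab)used here as an object identifier for brevity.
    environment_db[msg.header.frame_id] = msg.pose
    rospy.loginfo("updated %s", msg.header.frame_id)

if __name__ == "__main__":
    rospy.init_node("tms_db_writer")
    rospy.Subscriber("/tms/sensed_object_pose", PoseStamped, on_object_pose)
    rospy.spin()  # service robots could query environment_db via a ROS service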
Intelligent Robots and Systems | 2017
Yuta Horikawa; Asuka Egashira; Kazuto Nakashima; Akihiro Kawamura; Ryo Kurazume
This paper presents a near-future perception system named “Previewed Reality”. The system consists of an informationally structured environment (ISE), an immersive VR display, a stereo camera, an optical tracking system, and a dynamic simulator. In the ISE, a number of sensors are embedded, and information such as the positions of furniture, objects, humans, and robots is sensed and stored in a database. The position and orientation of the immersive VR display are also tracked by the optical tracking system. We can therefore forecast possible next events using the dynamic simulator and synthesize virtual images of what the user will see in the near future from their own viewpoint. The synthesized images are overlaid on the real scene using augmented reality technology and presented to the user. The proposed system allows a human and a robot to coexist more safely by intuitively showing the human potentially hazardous situations in advance.
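To make the preview idea concrete, here is a minimal sketch in which a simple constant-velocity model stands in for the paper's dynamic simulator, extrapolating tracked object states a short horizon into the future so they could be rendered from the user's tracked viewpoint. The horizon and time step are illustrative assumptions.

# Minimal sketch of the "preview" step: extrapolate tracked object
# states into the near future. A constant-velocity model stands in
# for the paper's dynamic simulator.
import numpy as np

def preview_positions(positions, velocities, horizon=1.0, dt=0.05):
    """Yield predicted object positions every dt seconds up to `horizon`."""
    t = 0.0
    pos = np.asarray(positions, dtype=float)
    vel = np.asarray(velocities, dtype=float)
    while t < horizon:
        t += dt
        yield t, pos + vel * t  # where each object will be t seconds ahead

# Each predicted frame would then be rendered from the pose of the
# immersive display and overlaid on the live stereo-camera view.
for t, pred in preview_positions([[1.0, 0.0, 0.0]], [[0.2, 0.0, 0.0]]):
    pass  # feed `pred` to the renderer / collision checker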
IEEE/SICE International Symposium on System Integration | 2016
Kazuto Nakashima; Julien Girard; Yumi Iwashita; Ryo Kurazume
For a service robot to provide appropriate daily-life assistance, managing information about the housewares in a room or house is an indispensable function. In particular, knowing what objects are in the environment and where they are is fundamental and critical knowledge. Housewares can be tracked with high reliability by attaching markers such as RFID tags to them; however, a markerless housewares management system is still useful because it is easy to use and low cost. In this work, we present an object management system that uses egocentric vision and a region-based convolutional neural network (R-CNN) to automatically detect and register housewares. The proposed system consists of smart glasses equipped with a wearable camera, a cloud database that manages object information, and a processing server that detects housewares and registers them in the cloud database. All components are implemented as ROS packages. We perform two experiments. First, we train the R-CNN on a newly constructed dataset to detect various housewares, yielding a houseware-specific detector. Second, we conduct experiments on automatic housewares registration using the proposed system. We demonstrate that the proposed system can detect, recognize, and register housewares in approximately real time.
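The detect-and-register loop can be sketched as follows, with an off-the-shelf torchvision Faster R-CNN standing in for the paper's houseware-specific R-CNN and a plain list standing in for the cloud database; the model, score threshold, and record schema are all assumptions for illustration.

# Minimal sketch of the detect-and-register loop. A torchvision
# Faster R-CNN stands in for the houseware-specific detector, and a
# list stands in for the cloud database.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cloud_db = []  # stand-in for the object-management database

def register_housewares(frame, score_thresh=0.8):
    """frame: float tensor (3, H, W) in [0, 1] from the wearable camera."""
    with torch.no_grad():
        det = model([frame])[0]
    for label, score, box in zip(det["labels"], det["scores"], det["boxes"]):
        if score >= score_thresh:
            cloud_db.append({
                "label": int(label),   # detected houseware class
                "box": box.tolist(),   # image-plane location
                "time": time.time(),   # when it was observed
            })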
The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) | 2015
Kazuto Nakashima; Yumi Iwashita; Yoonseok Pyo; Asamichi Takamine; Ryo Kurazume
This paper proposes a new concept of “fourth-person sensing” for service robots. The proposed concept combines wearable cameras (the first-person viewpoint), sensors mounted on robots (the second-person viewpoint), and sensors embedded in the informationally structured environment (the third-person viewpoint). Each sensing viewpoint has its own advantages and disadvantages, and the proposed concept compensates for the disadvantages by combining the advantages of all the sensors. It can be used to understand a user's intention and the context of the scene with high accuracy, thus enabling service robots to provide proactive services. As one application of the proposed concept, we developed an HCI system that combines first-person and third-person sensing. We show the effectiveness of the proposed concept through experiments.
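As a toy illustration of the fusion idea (not the authors' method), the sketch below combines activity estimates from the three constituent viewpoints by confidence-weighted voting; the label set and weighting scheme are assumptions.

# Toy sketch: fuse activity estimates from first-, second-, and
# third-person sensors by confidence-weighted voting.
from collections import defaultdict

def fuse_observations(observations):
    """observations: list of (viewpoint, activity_label, confidence)."""
    scores = defaultdict(float)
    for viewpoint, label, conf in observations:
        scores[label] += conf  # each viewpoint votes with its confidence
    return max(scores, key=scores.get)

obs = [
    ("first_person",  "picking_up_cup", 0.7),  # wearable camera sees hands
    ("second_person", "picking_up_cup", 0.5),  # robot-mounted sensor
    ("third_person",  "standing",       0.4),  # embedded room sensor
]
print(fuse_observations(obs))  # -> "picking_up_cup"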
International Conference on Emerging Security Technologies | 2014
Yumi Iwashita; Kazuto Nakashima; Yoonseok Pyo; Ryo Kurazume
Service robots, which coexist with humans to provide various services, obtain information from sensors placed in the environment and/or sensors mounted on the robots. In this paper, we propose the new concept of fourth-person sensing, which combines wearable cameras (first-person sensing), sensors mounted on robots (second-person sensing), and sensors distributed in the environment (third-person sensing). The proposed concept takes advantage of all three sensing systems while removing the disadvantages of each. First-person sensing can analyze what the camera wearer is doing and capture details of their surroundings; thus, by estimating the wearer's intention and behavior, fourth-person sensing can provide proactive services triggered by predicted human intention, which is difficult with second- and third-person systems alone. We introduce an example scenario using fourth-person sensing and show the effectiveness of the proposed concept through experiments.
World Automation Congress | 2018
Yumi Iwashita; Adrian Stoica; Kazuto Nakashima; Ryo Kurazume; Jim Torresen
IEEE/SICE International Symposium on System Integration | 2017
Kazuto Nakashima; Seungwoo Nham; Hojung Jung; Yumi Iwashita; Ryo Kurazume; Óscar Martínez Mozos
The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) | 2017
Yuta Horikawa; Kazuto Nakashima; Akihiro Kawamura; Ryo Kurazume
The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) | 2016
Kazuto Nakashima; Yumi Iwashita; Asamichi Takamine; Ryo Kurazume