
Publication


Featured research published by Sunao Hashimoto.


Human Factors in Computing Systems | 2011

An actuated physical puppet as an input device for controlling a digital manikin

Wataru Yoshizaki; Yuta Sugiura; Albert C. Chiou; Sunao Hashimoto; Masahiko Inami; Takeo Igarashi; Yoshiaki Akazawa; Katsuaki Kawachi; Satoshi Kagami; Masaaki Mochimaru

We present an actuated handheld puppet system for controlling the posture of a virtual character. Physical puppet devices have been used in the past to intuitively control character posture. In our research, an actuator is added to each joint of such an input device to provide physical feedback to the user. This enhancement offers many benefits. First, the user can upload pre-defined postures to the device to save time. Second, the system is capable of dynamically adjusting joint stiffness to counteract gravity, while allowing control to be maintained with relatively little force. Third, the system supports natural human body behaviors, such as whole-body reaching and joint coupling. This paper describes the user interface and implementation of the proposed technique and reports the results of an expert evaluation. We also conducted two user studies to evaluate the effectiveness of our method.
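The gravity-counteracting behavior can be sketched in a few lines. The following Python snippet is a minimal illustration, not the authors' implementation: for a hypothetical planar serial-link puppet, it computes the per-joint torque that cancels gravity, which the servos would hold so the pose stays put while yielding to light user force. All masses, lengths, and angles below are made up for the example.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def gravity_torques(joint_angles, link_masses, link_lengths):
    """Return the holding torque (N*m) about each joint of a planar serial chain.

    joint_angles are absolute link angles measured from vertical, in radians;
    each link's center of mass is assumed to sit at its midpoint.
    """
    n = len(joint_angles)
    torques = []
    for i in range(n):
        tau = 0.0
        base = 0.0  # horizontal offset accumulated from joint i to joint j
        for j in range(i, n):
            # Horizontal distance from joint i to link j's center of mass.
            com_x = base + 0.5 * link_lengths[j] * math.sin(joint_angles[j])
            tau += link_masses[j] * G * com_x
            base += link_lengths[j] * math.sin(joint_angles[j])
        torques.append(tau)
    return torques

# Example: a 3-link puppet arm leaning forward (all values hypothetical).
angles = [0.3, 0.6, 0.9]      # rad from vertical
masses = [0.05, 0.04, 0.03]   # kg
lengths = [0.08, 0.07, 0.06]  # m
print(gravity_torques(angles, masses, lengths))
```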


Journal of Robotics and Mechatronics | 2013

TouchMe: An augmented reality interface for remote robot control

Sunao Hashimoto; Akihiko Ishida; Masahiko Inami; Takeo Igarashi

Author affiliations: Sunao Hashimoto, Meiji University; Akihiko Ishida, Tokyo University of Science; Masahiko Inami, Keio University; Takeo Igarashi, The University of Tokyo; all authors, JST ERATO Igarashi Design Interface Project.


Robot and Human Interactive Communication | 2011

An augmented reality system for teaching sequential tasks to a household robot

Richard Fung; Sunao Hashimoto; Masahiko Inami; Takeo Igarashi

We present a method of teaching a sequential task to a household robot using a hand-held augmented reality device. The user decomposes a high-level goal such as “prepare a drink” into steps such as delivering a mug under a kettle and pouring hot water into the mug. The user takes a photograph of each step using the device and annotates it with the necessary information via touch operation. The resulting sequence of annotated photographs serves as a reference for review and reuse at a later time. We created a working prototype system that operates with various types of robots and appliances.
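As a concrete illustration of the data such an interface produces, here is a minimal sketch of how a sequence of annotated photographs might be represented. The class and field names are hypothetical, not the paper's format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Annotation:
    kind: str                           # e.g. "target_object", "target_position"
    region: Tuple[int, int, int, int]   # (x, y, w, h) touch-drawn box in the photo

@dataclass
class TaskStep:
    photo_path: str                     # photo taken with the handheld AR device
    annotations: List[Annotation] = field(default_factory=list)

@dataclass
class SequentialTask:
    goal: str                           # high-level goal, e.g. "prepare a drink"
    steps: List[TaskStep] = field(default_factory=list)

# Example task with two annotated steps (paths and boxes are made up).
task = SequentialTask(goal="prepare a drink", steps=[
    TaskStep("step1.jpg", [Annotation("target_object", (120, 80, 60, 60))]),
    TaskStep("step2.jpg", [Annotation("target_position", (200, 150, 40, 40))]),
])
for i, step in enumerate(task.steps, 1):
    print(i, step.photo_path, [a.kind for a in step.annotations])
```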


The Visual Computer | 2014

Design and enhancement of painting interface for room lights

Seung Tak Noh; Sunao Hashimoto; Daiki Yamanaka; Youichi Kamiyama; Masahiko Inami; Takeo Igarashi

We propose a novel painting interface that enables users to design an illumination distribution for a real room using an array of computer-controlled lights. Users specify which area of the room is to be well-lit and which is to be dark by painting a target illumination distribution on a tablet device displaying the image obtained by a camera mounted in the room. The painting result is overlaid on the camera image as contour lines of the target illumination intensity. The system then runs an optimization to calculate light parameters to deliver the requested illumination condition. We implemented a GPU-based parallel search to achieve real-time processing. In our system, we used actuated lights that can change the lighting direction to generate the requested illumination condition more faithfully than static lights. We built a miniature-scale experimental environment and ran a user study to compare our method with a standard direct manipulation method using sliders. The results showed that the users preferred our method for informal light control.
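To make the optimization step concrete, here is a small sketch under two stated assumptions: illumination composes linearly, so a room image is a weighted sum of per-light basis images, and the GPU-based parallel search can be mimicked by evaluating a large batch of candidate parameter vectors at once. This is an illustration, not the authors' code; all array sizes and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_lights = 1000, 8
basis = rng.random((n_pixels, n_lights))   # per-light illumination pattern at full power
target = rng.random(n_pixels)              # painted target illumination distribution

def batched_search(n_candidates=4096):
    """Evaluate many candidate light-parameter vectors in one vectorized
    batch (standing in for the GPU parallel search) and keep the best."""
    candidates = rng.random((n_candidates, n_lights))   # dimming levels in [0, 1)
    rendered = candidates @ basis.T                     # (n_candidates, n_pixels)
    errors = ((rendered - target) ** 2).sum(axis=1)     # fit to painted target
    best = int(np.argmin(errors))
    return candidates[best], float(errors[best])

params, err = batched_search()
print("best squared error:", err)
```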


International Symposium on Mixed and Augmented Reality | 2012

Lighty: A painting interface for room illumination by robotic light array

Seung Tak Noh; Sunao Hashimoto; Daiki Yamanaka; Youichi Kamiyama; Masahiko Inami; Takeo Igarashi

We propose an AR-based painting interface that enables users to design an illumination distribution for a real room using an array of computer-controlled lights. Users specify an illumination distribution of the room by painting on the image obtained by a camera mounted in the room. The painting result is overlaid on the camera image as contour lines of the target illumination intensity. The system runs an optimization interactively to calculate light parameters to deliver the requested illumination condition. In this implementation, we used actuated lights that can change the lighting direction to generate the requested illumination condition more accurately and efficiently than static lights. We built a miniature-scale experimental environment and ran a user study to compare our method with a standard direct manipulation method using widgets. The results showed that the users preferred our method for informal light control.


Human-Robot Interaction | 2010

Photograph-based interaction for teaching object delivery tasks to robots

Sunao Hashimoto; Andrei Ostanin; Masahiko Inami; Takeo Igarashi

Personal photographs are important media for communication in our daily lives. People take photos to remember things about themselves and show them to others to share the experience. We expect that a photograph can be a useful tool for teaching a task to a robot. We propose a novel human-robot interaction using photographs. The user takes a photo to remember the target in a real-world situation involving a task and later shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish arrangement on a table and showed it to the system later to have a small robot deliver and arrange the dishes in the same way.
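One step such a system needs, which the abstract does not spell out, is matching each detected dish to a target position recovered from the photo. Below is one plausible sketch using optimal assignment; the coordinates are made up and this is not necessarily the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Current dish positions detected on the table and target positions
# recovered from the photo (all coordinates hypothetical, in meters).
detected = np.array([[0.10, 0.20], [0.50, 0.60], [0.90, 0.10]])
targets  = np.array([[0.15, 0.65], [0.55, 0.15], [0.85, 0.55]])

# Pairwise Euclidean distances, then the assignment minimizing total travel.
cost = np.linalg.norm(detected[:, None, :] - targets[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"deliver dish {r} to target {c} ({cost[r, c]:.2f} m)")
```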


Advances in Computer Entertainment Technology | 2012

GENIE: photo-based interface for many heterogeneous LED lamps

Jordan Tewell; Sunao Hashimoto; Masahiko Inami; Takeo Igarashi

We present an interface that allows easy selection and creative control of color-changing lamp fixtures in the home, using the analogy of taking a snapshot to select them. The user is presented with a GUI on their mobile phone to control light attributes such as color, brightness, and scheduling, and is given a means to specify a group of lights to be controlled at once. Selection is achieved using an IR filter switcher on the phone to capture IR blobs pulsating from inside the lamps, with a central server communicating between the phone and the lamps. The system can operate under normal indoor lighting conditions, and the identification hardware is concealed inside the lamps, with no need to place fiducials or other obscuring markers in the environment.
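The abstract says the lamps identify themselves by pulsating IR blobs but does not give the coding scheme, so the following decoder is purely hypothetical: assume each lamp repeats a fixed-length binary ID, one bit per camera frame, shown as blob-on or blob-off.

```python
def decode_lamp_id(brightness_per_frame, threshold=128, bits=4):
    """Decode one repetition of a `bits`-bit lamp ID from per-frame
    IR blob brightness values (hypothetical encoding, not the paper's)."""
    lamp_id = 0
    for b in brightness_per_frame[:bits]:
        # Bright frame -> bit 1, dark frame -> bit 0, most significant first.
        lamp_id = (lamp_id << 1) | (1 if b > threshold else 0)
    return lamp_id

# Example: bright/dark/bright/dark frames encode 0b1010, i.e. lamp 10.
print(decode_lamp_id([200, 40, 210, 35]))  # -> 10
```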


Human-Robot Interaction | 2011

Snappy: snapshot-based robot interaction for arranging objects

Sunao Hashimoto; Andrei Ostanin; Masahiko Inami; Takeo Igarashi

A photograph is a very useful tool for describing configurations of real-world objects to others. People immediately understand various pieces of information, such as “what is the target object” and “where is the target position”, by looking at a photograph, even without verbal descriptions. Our goal was to leverage these features of photographs to enrich human-robot interactions. We propose using photographs as a front-end between a human and a home robot system. We named this method “Snappy”. The user takes a photo to remember the target in a real-world situation involving a task and shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish layout on a table and showed it to the system later to have robots deliver and arrange the dishes in the same way (Figures 1 and 2).


Intelligent Robots and Systems | 2010

Neural network estimation of LAL/VPC regions of silkmoth using Genetic Algorithm

Ryosuke Chiba; Sunao Hashimoto; Ryohei Kanzaki; Jun Ota

When a male silkmoth senses the sex pheromone of a female with its antennae, it repeats a characteristic series of walking patterns and eventually reaches its partner. This walking pattern is generated in the lateral accessory lobe (LAL) and the ventral protocerebrum (VPC), brain regions that control locomotion. In this study, we elucidate the process underlying this behavior by constructing a neural network model of the LAL region. Concretely, we build a model that treats each group of neurons as a single representative neuron and estimate the strength of every connection among the 10 representative neurons with a genetic algorithm. The estimated network is then verified and examined from both engineering and biological perspectives.
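To make the estimation step concrete, here is a generic genetic-algorithm sketch: it evolves a 10x10 matrix of connection strengths under a placeholder fitness function. In the paper, the fitness would score how well the network reproduces the silkmoth's recorded walking pattern; the stand-in below, the population size, and the mutation-only variation scheme are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                          # representative neurons
POP, GENS, MUT = 40, 200, 0.1   # population size, generations, mutation scale

def fitness(w):
    # Placeholder objective: the paper's fitness would compare the model's
    # output with the observed zigzag walking pattern.
    return -abs(float(w.sum()) - 1.0)

def evolve():
    pop = rng.uniform(-1.0, 1.0, (POP, N, N))   # candidate connection matrices
    for _ in range(GENS):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[::-1][: POP // 2]]      # selection
        children = elite + MUT * rng.normal(size=elite.shape)  # mutation
        pop = np.concatenate([elite, children])
    scores = np.array([fitness(w) for w in pop])
    return pop[int(np.argmax(scores))]

best = evolve()
print("best fitness:", fitness(best))
```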


Human Factors in Computing Systems | 2013

LightCloth: senseable illuminating optical fiber cloth for creating interactive surfaces

Sunao Hashimoto; Ryohei Suzuki; Youichi Kamiyama; Masahiko Inami; Takeo Igarashi

Collaboration


Dive into Sunao Hashimoto's collaborations.

Top Co-Authors

Takeo Igarashi
The University of Tokyo

Katsuaki Kawachi
National Institute of Advanced Industrial Science and Technology

Masaaki Mochimaru
National Institute of Advanced Industrial Science and Technology

Satoshi Kagami
National Institute of Advanced Industrial Science and Technology

Wataru Yoshizaki
National Institute of Advanced Industrial Science and Technology