
Publication


Featured research published by Nobuchika Sakata.


international symposium on wearable computers | 2003

WACL: supporting telecommunications using wearable active camera with laser pointer

Nobuchika Sakata; Takeshi Kurata; Takekazu Kato; Masakatsu Kourogi; Hideaki Kuzuoka

We propose a wearable active camera with laser pointer (WACL) as a human interface device for use in telecommunications. The WACL laser pointer is attached to the active camera-head and it can point a laser spot while elevating and panning the camera-head. In our system, a remote instructor can observe around the worker, who is wearing the WACL, independently of the worker's motion, and can clearly and naturally instruct the worker in tasks by pointing the laser spot at real objects. This paper describes the outline of a telecommunication support system using the WACL and a method to stabilize the camera and laser pointer independently of the wearer's motion. We have implemented and demonstrated an example application using our system.
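
The stabilization idea lends itself to a simple illustration: measure the wearer's body rotation and counter-rotate the pan/tilt camera-head so the camera and laser keep pointing at the same spot in the world. The sketch below is a minimal illustration of that idea in Python, assuming a hypothetical gyro that reports yaw/pitch rates and a camera-head that accepts absolute pan/tilt angles; it is not the authors' actual method.

```python
# A minimal sketch, not the authors' method: counter-rotate the camera-head
# by the wearer's measured body rotation so the camera (and the attached
# laser pointer) keeps pointing in the same world direction. The gyro
# readings and the absolute pan/tilt interface are assumptions.

from dataclasses import dataclass

@dataclass
class PanTiltState:
    pan: float   # degrees, camera-head yaw relative to the wearer's body
    tilt: float  # degrees, camera-head pitch relative to the wearer's body

def stabilize_step(state: PanTiltState,
                   body_yaw_rate: float,
                   body_pitch_rate: float,
                   dt: float) -> PanTiltState:
    """One control cycle: subtract the body's rotation over dt seconds
    (rates in deg/s) from the commanded pan/tilt angles."""
    return PanTiltState(
        pan=state.pan - body_yaw_rate * dt,
        tilt=state.tilt - body_pitch_rate * dt,
    )

# Example: the wearer turns left at 30 deg/s for 0.1 s, so the camera-head
# pans about 3 degrees to the right to hold its world-frame heading.
s = PanTiltState(pan=0.0, tilt=0.0)
s = stabilize_step(s, body_yaw_rate=-30.0, body_pitch_rate=0.0, dt=0.1)
print(s)
```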


international symposium on wearable computers | 2004

Remote collaboration using a shoulder-worn active camera/laser

Takeshi Kurata; Nobuchika Sakata; Masakatsu Kourogi; Hideaki Kuzuoka; Mark Billinghurst

The wearable active camera/laser (WACL) allows remote collaborators not only to independently set their viewpoints into the wearer's workplace but also to point to real objects directly with the laser spot. In this paper, we report a user test examining the advantages and limitations of the WACL interface in remote collaboration, comparing it with a headset interface composed of a head-mounted display and a head-mounted camera. Results show that the WACL is more comfortable to wear, is more eye-friendly, and causes less fatigue to the wearer, although there is no significant difference in task completion time. We first review related work and user studies with wearable collaborative systems, and then describe the details of the user test.


international symposium on mixed and augmented reality | 2013

Comparing pointing and drawing for remote collaboration

Seungwon Kim; Gun A. Lee; Nobuchika Sakata

In this research, we explore using pointing and drawing in a remote collaboration system. Our application allows a local user with a tablet to communicate with a remote expert on a desktop computer. We compared performance in four conditions: (1) Pointers on Still Image, (2) Pointers on Live Video, (3) Annotation on Still Image, and (4) Annotation on Live Video. We found that drawing annotations required fewer inputs on the expert's side and imposed less cognitive load on the local worker's side. In a follow-on study we compared conditions (2) and (4) using a more complicated task. We found that pointing input requires good verbal communication to be effective, and that drawing annotations need to be erased after each step of a task is completed.


international conference on control, automation and systems | 2007

A pilot user study on 3-D museum guide with route recommendation using a sustainable positioning system

Takashi Okuma; Masakatsu Kourogi; Nobuchika Sakata; Takeshi Kurata

We describe our 3-D museum guide system and the pilot study we conducted to evaluate it. The prototype system used in the pilot study relies on a human positioning system based on dead reckoning, active RFID tags, and map matching. The system manages content consisting of 3-D maps, recommended routes, and Flash files to provide appropriate navigation information based on the estimated user position, azimuth, and history. The pilot study was conducted at a science museum, where we received useful feedback from participants about the 3-D maps, recommended routes, and other features.
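
As a rough illustration of how such a positioning pipeline can fit together, the sketch below blends a dead-reckoned step with a detected RFID tag's known location and then clamps the result to a walkable area. The tag positions, corridor model, and blending weight are all hypothetical; the paper's actual algorithm is not reproduced here.

```python
# A minimal sketch of the positioning pipeline named in the abstract:
# dead reckoning corrected by active RFID detections, then map matching.
# Tag positions, the corridor model, and the blend weight are placeholders.

# Known positions (x, y in meters) of active RFID tags in the museum.
RFID_TAGS = {"tag_entrance": (0.0, 0.0), "tag_hall_a": (12.0, 3.0)}

def map_match(x: float, y: float) -> tuple[float, float]:
    """A crude 'map': clamp the estimate onto the walkable band 0 <= y <= 4."""
    return x, min(max(y, 0.0), 4.0)

def update_position(pos, step_vec, rfid_hit=None, blend=0.7):
    """One estimation cycle: add the dead-reckoned step, pull the estimate
    toward a detected tag's known location, then map-match the result."""
    x, y = pos[0] + step_vec[0], pos[1] + step_vec[1]
    if rfid_hit in RFID_TAGS:
        tx, ty = RFID_TAGS[rfid_hit]
        x, y = (1 - blend) * x + blend * tx, (1 - blend) * y + blend * ty
    return map_match(x, y)

pos = (0.0, 0.0)
pos = update_position(pos, (0.7, 0.1))                      # one walking step
pos = update_position(pos, (0.7, 0.2), rfid_hit="tag_hall_a")  # tag detected
print(pos)
```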


international symposium on wearable computers | 2012

Toe Input Using a Mobile Projector and Kinect Sensor

Daiki Matsuda; Keiji Uemura; Nobuchika Sakata; Shogo Nishida

Due to the prevalence of cell phones, many people view information on small handheld LCD screens. However, these mobile devices require the use of one hand, force the user to keep a close watch on a small display, and have to be retrieved from a pocket or a bag. To overcome these problems, we focus on wearable projection systems that enable hands-free viewing via large projected screens, eliminating the need to retrieve and hold devices. In this paper, we present a toe input system that realizes haptic interaction, direct manipulation, and floor projection using a wearable projection system with a large projection surface. It is composed of a mobile projector, a Kinect depth camera, and a gyro sensor, is attached to the user's chest, and can detect when the user's foot touches or rises from the floor. To evaluate the system we conducted experiments investigating object selection by foot motion.
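
The core of the toe input detection can be illustrated with a small state machine: compare the toe's height above the floor plane (as a depth camera would report it) against touch/lift thresholds, with hysteresis so noise does not produce spurious events. The thresholds and the height source below are assumptions for illustration, not the authors' parameters.

```python
# A minimal sketch of touch/lift detection: compare the sensed height of
# the toe above the floor plane against two thresholds, with hysteresis to
# avoid chattering. Threshold values and the height source (e.g., a Kinect
# depth camera) are assumptions, not the paper's settings.

class ToeTouchDetector:
    def __init__(self, touch_mm: float = 15.0, lift_mm: float = 30.0):
        self.touch_mm = touch_mm   # toe counts as "down" below this height
        self.lift_mm = lift_mm     # toe counts as "up" above this height
        self.down = False

    def update(self, toe_height_mm: float) -> str | None:
        """Feed one height sample; return 'touch'/'lift' on transitions."""
        if not self.down and toe_height_mm < self.touch_mm:
            self.down = True
            return "touch"
        if self.down and toe_height_mm > self.lift_mm:
            self.down = False
            return "lift"
        return None

det = ToeTouchDetector()
for h in [80, 40, 10, 12, 50, 90]:   # toe height samples in millimeters
    event = det.update(h)
    if event:
        print(event)   # prints "touch" then "lift"
```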


human factors in computing systems | 2015

Automatically Freezing Live Video for Annotation during Remote Collaboration

Seungwon Kim; Gun A. Lee; Sangtae Ha; Nobuchika Sakata; Mark Billinghurst

Drawing annotations on shared live video has been investigated as a tool for remote collaboration. However, if a local user changes the viewpoint of a shared live video while a remote user is drawing an annotation, the annotation is projected and drawn in the wrong place. Prior work suggested manually freezing the video while annotating to solve this issue, but that requires additional user input. We introduce a solution that automatically freezes the video, and present the results of a user study comparing it with manual-freeze and no-freeze conditions. Auto-freeze was most preferred by both remote and local participants, who felt it best solved the issue of annotations appearing in the wrong place. With auto-freeze, remote users were able to draw annotations more quickly, while local users were able to understand the annotations more clearly.
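
The auto-freeze behavior can be sketched as a small piece of state: while the remote user is drawing, the shared view is pinned to the last live frame; when drawing ends, live video resumes. The event names and frame handling below are illustrative, not the authors' implementation.

```python
# A minimal sketch of the auto-freeze concept: hold the last live frame
# while the remote user is drawing so the annotation stays registered to
# the view they see, then resume live video when drawing ends.

class AutoFreezeVideo:
    def __init__(self):
        self.frozen_frame = None

    def on_remote_draw_start(self, current_frame):
        self.frozen_frame = current_frame        # hold this frame

    def on_remote_draw_end(self):
        self.frozen_frame = None                 # resume live video

    def frame_to_share(self, live_frame):
        """Return the frame both sides should see right now."""
        return self.frozen_frame if self.frozen_frame is not None else live_frame

video = AutoFreezeVideo()
print(video.frame_to_share("live-1"))       # live-1
video.on_remote_draw_start("live-1")
print(video.frame_to_share("live-2"))       # live-1 (frozen while drawing)
video.on_remote_draw_end()
print(video.frame_to_share("live-3"))       # live-3
```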


international conference on human computer interaction | 2011

Activity recognition for risk management with installed sensor in smart and cell phone

Daisuke Honda; Nobuchika Sakata; Shogo Nishida

Smartphones and cell phones equipped with self-contained sensors such as accelerometers, gyroscopes, and digital magnetic compasses have become popular. By combining these sensors with appropriate algorithms, a phone can estimate the user's activity, situation, and even absolute position. However, this estimation becomes difficult once the sensor's posture and position change from their original placement as the user moves, and the posture and position of a phone that is stored, worn, or handheld change often. We therefore exclude position estimation and focus only on estimating the user's activity and situation for risk management. We design a special classifier for detecting the user's unusual behavior, combining a wavelet transform and an SVM, and apply position data of other users obtained from the Internet to the results detected by the classifier. We assume that a user's unusual activity and situation can be detected by smartphones and cell phones with high accuracy.
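
As an illustration of the named pipeline (wavelet-transform features fed to an SVM), the sketch below extracts per-level wavelet energies from accelerometer windows and trains a binary classifier on synthetic "usual" versus "unusual" data. It assumes the PyWavelets and scikit-learn libraries; the windows, labels, and parameters are placeholders, not the paper's data or settings.

```python
# A minimal sketch of a wavelet + SVM classifier for unusual-behavior
# detection. The synthetic accelerometer windows and labels below are
# placeholders, not the paper's data; wavelet choice and SVM kernel are
# assumptions for illustration.

import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(window: np.ndarray) -> np.ndarray:
    """Energy of each wavelet decomposition level of one sensor window."""
    coeffs = pywt.wavedec(window, "db4", level=3)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
# Hypothetical training data: 'usual' walking windows are low-amplitude,
# 'unusual' windows (e.g., a fall) contain a large transient.
usual = [rng.normal(0, 0.3, 128) for _ in range(40)]
unusual = [np.concatenate([rng.normal(0, 0.3, 64), rng.normal(0, 3.0, 64)])
           for _ in range(40)]

X = np.array([wavelet_features(w) for w in usual + unusual])
y = np.array([0] * 40 + [1] * 40)   # 0 = usual, 1 = unusual

clf = SVC(kernel="rbf").fit(X, y)
test = np.concatenate([rng.normal(0, 0.3, 64), rng.normal(0, 3.0, 64)])
print("unusual" if clf.predict([wavelet_features(test)])[0] else "usual")
```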


systems, man and cybernetics | 2010

Remote collaboration using real-world projection interface

Keisuke Tajimi; Nobuchika Sakata; Keiji Uemura; Shogo Nishida

In remote collaboration involving multiple fieldworkers, when only one wearable interface (a headset composed of an HMD and an HMC) is available in the field, discrepancies can arise in the transmission of instructions between the field workers and the remote expert, because a worker without the interface has to receive instructions from the remote expert indirectly, via the worker wearing it. We expect that a real-world projection device such as the WACL absorbs these discrepancies, because the projected information can be seen by everyone around the device. We therefore conducted a user study comparing the headset interface and the WACL interface, measuring the burden of transmitting instructions in remote collaboration with multiple field workers. The results show that the HMD, being a single-user display, imposes a greater burden on the interface wearer when forwarding remote instructions to the worker without the interface, whereas the real-world projection interface reduces this burden. We also found that direct instructions given with the WACL laser spot and indirect instructions relayed by the interface wearer's hand via the HMD yield the same task performance in a simple pick-up-and-put task. Furthermore, owing to the view controllability and information browsability of the WACL, workers without the interface can receive instructions from the remote expert directly and carry them out even while the interface wearer is working.


international symposium on mixed and augmented reality | 2013

Study of augmented gesture communication cues and view sharing in remote collaboration

Seungwon Kim; Gun A. Lee; Nobuchika Sakata; Andreas Dünser; Elina Vartiainen; Mark Billinghurst

In this research, we explore how different types of augmented gesture communication cues can be used under different view sharing techniques in a remote collaboration system. In a pilot study, we compared four conditions: (1) Pointers on Still Image, (2) Pointers on Live Video, (3) Annotation on Still Image, and (4) Annotation on Live Video. Through this study, we found three results. First, users collaborate more efficiently using annotation cues than pointer cues for communicating object position and orientation information. Second, live video becomes more important when quick feedback is needed. Third, the type of gesture cue has more influence on performance and user preference than the type of view sharing method.


international conference on artificial reality and telexistence | 2013

Wearable input/output interface for floor projection using hands and a toe

Daiki Matsuda; Nobuchika Sakata; Shogo Nishida

Wearable projection systems enable hands-free viewing via large projected screens and eliminate the need to hold devices. These systems need a surface for projection and interaction. However, if a wall is chosen as the projection surface, users have to position themselves in front of the wall whenever they wish to access information. In this paper, we propose a wearable input/output system composed of a mobile projector, a depth sensor, and a gyro sensor. It allows the user to perform "select" and "drag" operations with the toe and fingertips in the image projected on the floor, and it supports more efficient GUI operations on the floor by combining hand and toe input. To confirm the advantages and limitations of the system, we conducted a user study. Finally, we suggest guidelines, identify some problems in designing an interface for hand/toe interaction with the floor, and propose an input method that uses both a finger and a toe.
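
The combined hand/toe interaction can be sketched as a controller in which the fingertip position moves a cursor on the floor projection while toe touch/lift events act as the button that starts and ends a select-or-drag gesture. The event names and coordinates below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of combined hand/toe input: the fingertip (from the
# depth sensor) moves a cursor in floor coordinates, while toe touch/lift
# events begin and complete a drag. Names and values are hypothetical.

class HandToeDragController:
    def __init__(self):
        self.cursor = (0.0, 0.0)   # fingertip position in floor coordinates
        self.dragging = False
        self.drag_start = None

    def on_fingertip(self, x: float, y: float):
        self.cursor = (x, y)

    def on_toe(self, event: str):
        """'touch' begins a drag at the cursor; 'lift' completes it."""
        if event == "touch":
            self.dragging, self.drag_start = True, self.cursor
        elif event == "lift" and self.dragging:
            print(f"drag from {self.drag_start} to {self.cursor}")
            self.dragging = False

ctl = HandToeDragController()
ctl.on_fingertip(0.2, 0.5)
ctl.on_toe("touch")            # select at (0.2, 0.5)
ctl.on_fingertip(0.8, 0.5)
ctl.on_toe("lift")             # drag from (0.2, 0.5) to (0.8, 0.5)
```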

Collaboration


Dive into Nobuchika Sakata's collaborations.

Top Co-Authors

Takeshi Kurata
National Institute of Advanced Industrial Science and Technology

Masakatsu Kourogi
National Institute of Advanced Industrial Science and Technology

Takashi Okuma
National Institute of Advanced Industrial Science and Technology

Mark Billinghurst
University of South Australia