Keita Higuchi
University of Tokyo
Publications
Featured research published by Keita Higuchi.
ubiquitous intelligence and computing | 2013
Katsuya Fujii; Keita Higuchi; Jun Rekimoto
Surveillance and monitoring are indispensable over large areas for purposes such as home security, road patrols, livestock monitoring, wildfire mapping, and ubiquitous sensing. Computer-controlled micro unmanned aerial vehicles (UAVs) have the potential to perform such missions because they can move autonomously over a surveillance area without being constrained by ground obstacles. However, flight duration is a serious limitation of UAVs. A typical UAV can fly for only about 10 min on currently available Li-Po batteries, which makes it difficult to conduct tasks such as aerial surveillance that clearly require longer flight periods. In this study, we developed an automatic battery replacement mechanism that allows UAVs to fly continuously without manual battery replacement, and we suggest scalable and robust ways of using the system. We conducted an initial experiment with this system and successfully assessed the possibility of continuous surveillance in both indoor and outdoor environments.
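A minimal sketch of how such continuous operation might be supervised in software, assuming a hypothetical UAV/station API, an illustrative battery threshold, and a fixed swap-station location; none of these details come from the paper.

```python
# Minimal sketch (not from the paper) of a supervisory loop for continuous
# surveillance with automatic battery swapping. The battery threshold, station
# location, and the uav/station method names are illustrative assumptions.

import time

LOW_BATTERY = 0.25          # fraction of capacity at which a swap is triggered
SWAP_STATION = (0.0, 0.0)   # ground coordinates of the battery-swap station

def patrol_with_auto_swap(uav, station, waypoints):
    """Cycle through surveillance waypoints; detour for a battery swap when low."""
    while True:
        for wp in waypoints:
            if uav.battery_level() < LOW_BATTERY:
                uav.goto(*SWAP_STATION)
                uav.land()
                station.swap_battery(uav)   # mechanical replacement, no human
                uav.takeoff()
            uav.goto(*wp)
            uav.capture_image()
        time.sleep(1.0)
```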
international conference on artificial reality and telexistence | 2013
Keita Higuchi; Katsuya Fujii; Jun Rekimoto
Flying Head is a telepresence system that remotely connects humans and unmanned aerial vehicles (UAVs). UAVs are teleoperated robots used in various situations, including disaster-area inspection and movie content creation. This study aimed to integrate humans and machines with different abilities (i.e., flying) to virtually augment human abilities. Precise manipulation of UAVs normally involves simultaneous control of several motion parameters and requires the skill of a trained operator. This paper proposes a new method that directly connects the user's body and head motion to that of the UAV. The user's natural movements can be synchronized with UAV motions such as rotation and horizontal and vertical movement. Users can control the UAV more intuitively because such manipulations are more in accordance with their kinesthetic imagery; in other words, a user can feel as if he or she had become a flying machine.
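A minimal sketch of the head-to-UAV mapping described above; the pose fields, command format, and gains are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of the Flying Head idea:
# the tracked head/body pose is mapped directly to UAV motion commands.
# Gains and the pose/command interfaces are illustrative assumptions.

K_XY, K_Z, K_YAW = 1.0, 1.0, 1.0   # motion-scaling gains (user metre -> UAV metre)

def head_pose_to_uav_command(head, origin):
    """Convert a tracked head pose into a UAV velocity/attitude command.

    head:   dict with x, y, z (metres) and yaw (radians) from a motion tracker
    origin: pose captured at session start, defining the neutral point
    """
    return {
        "vx": K_XY * (head["x"] - origin["x"]),        # walk forward  -> fly forward
        "vy": K_XY * (head["y"] - origin["y"]),        # step sideways -> fly sideways
        "vz": K_Z  * (head["z"] - origin["z"]),        # stand/crouch  -> climb/descend
        "yaw": K_YAW * (head["yaw"] - origin["yaw"]),  # turn head     -> rotate UAV
    }
```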
augmented human international conference | 2012
Shingo Yamano; Takamitsu Hamajo; Shunsuke Takahashi; Keita Higuchi
In this paper, we propose a mobile navigation system that uses only auditory information, i.e., music, to guide the user. The sophistication of mobile devices has introduced the use of contextual information in mobile navigation, such as the location and direction of motion of a pedestrian. Typically in such systems, a map on the screen of the mobile device is required to show the current position and the destination. However, this restricts the movements of the pedestrian, because users must hold the device to observe the screen. We have therefore implemented a mobile navigation system that guides the pedestrian in a non-restricting manner by adding direction information to music. After measuring the directional resolution that users can perceive, we change the phase of the musical sound to guide the pedestrian. Experiments with this system verified the effectiveness of the proposed navigation approach.
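A minimal sketch of how a direction cue could be embedded in music by delaying one stereo channel according to the bearing toward the destination; the sample rate, maximum delay, and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (an assumption, not the paper's implementation) of guiding a
# pedestrian by shifting the phase of music between the left and right channels
# according to the bearing toward the destination.

import math
import numpy as np

SAMPLE_RATE = 44100
MAX_ITD = 0.0007  # ~0.7 ms maximum interaural time difference (assumed)

def directional_music(mono, bearing_deg, heading_deg):
    """Return a stereo signal whose inter-channel delay encodes the turn direction.

    mono: 1-D numpy array holding one block of the music signal.
    """
    # Signed angle from the user's heading to the destination, in [-180, 180).
    rel = (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    itd = MAX_ITD * math.sin(math.radians(rel))      # delay between the two ears
    shift = int(round(abs(itd) * SAMPLE_RATE))       # delay in samples
    left, right = mono.copy(), mono.copy()
    if itd > 0:    # destination to the right: delay the left channel
        left = np.concatenate([np.zeros(shift), mono])[: len(mono)]
    elif itd < 0:  # destination to the left: delay the right channel
        right = np.concatenate([np.zeros(shift), mono])[: len(mono)]
    return np.stack([left, right], axis=1)
```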
human factors in computing systems | 2017
Keita Higuchi; Ryo Yonetani; Yoichi Sato
This work presents EgoScanning, a novel video fast-forwarding interface that helps users find important events in lengthy first-person videos continuously recorded with wearable cameras. The interface features an elastic timeline that adaptively changes playback speeds and emphasizes egocentric cues specific to first-person videos, such as hand manipulations, moving, and conversations with people, on the basis of computer-vision techniques. The interface also allows users to specify which of these cues are relevant to the events of their interest. Through our user study, we confirm that users can quickly find events of interest in first-person videos thanks to the following benefits of the EgoScanning interface: 1) adaptive changes of playback speed allow users to watch fast-forwarded videos more easily; 2) emphasized parts of videos act as candidates for events that are actually significant to users; 3) users are able to select relevant egocentric cues depending on the events of their interest.
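A minimal sketch of the elastic-timeline idea, assuming per-frame cue scores have already been produced by a vision pipeline; the cue names, threshold, and speed values are illustrative, not EgoScanning's actual parameters.

```python
# Minimal sketch (assumed, simplified from the abstract) of an elastic timeline:
# frames whose selected egocentric-cue scores are high play at normal speed,
# while the rest are fast-forwarded.

import numpy as np

SLOW_SPEED = 1.0    # playback speed for emphasized (cue-rich) frames
FAST_SPEED = 16.0   # playback speed for de-emphasized frames

def playback_speeds(cue_scores, selected_cues, threshold=0.5):
    """Per-frame playback speed derived from per-frame cue scores.

    cue_scores:    dict mapping cue name -> np.ndarray of per-frame scores in [0, 1]
    selected_cues: cues the user marked as relevant (e.g. {"hand", "conversation"})
    """
    n_frames = len(next(iter(cue_scores.values())))
    relevance = np.zeros(n_frames)
    for cue in selected_cues:
        relevance = np.maximum(relevance, cue_scores[cue])
    # Frames above the threshold play at normal speed; all others are fast-forwarded.
    return np.where(relevance >= threshold, SLOW_SPEED, FAST_SPEED)
```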
computer vision and pattern recognition | 2016
Hiroshi Kera; Ryo Yonetani; Keita Higuchi; Yoichi Sato
The goal of this work is to discover objects of joint attention, i.e., objects being viewed by multiple people wearing head-mounted cameras and eye trackers. Such objects of joint attention are expected to act as an important cue for understanding social interactions in everyday scenes. To this end, we develop a commonality-clustering method tailored to first-person videos combined with points-of-gaze sources. The proposed method uses multiscale spatiotemporal tubes around points of gaze as object candidates, making it possible to deal with the various sizes of objects observed in first-person videos. We also introduce a new dataset of multiple pairs of first-person videos and points-of-gaze data. Our experimental results show that our approach outperforms several state-of-the-art commonality-clustering methods.
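A minimal, simplified sketch of the candidate-generation and matching idea (multiscale regions around each point of gaze, compared across wearers); the scales, feature vectors, and similarity threshold are assumptions and do not reproduce the proposed clustering method.

```python
# Minimal sketch (a simplification, not the authors' method): crop multiscale
# regions around each person's point of gaze and treat highly similar crops
# across wearers as candidate objects of joint attention.

import numpy as np

SCALES = (64, 128, 256)   # candidate region sizes in pixels (assumed)

def gaze_crops(frame, gaze_xy, scales=SCALES):
    """Return square crops of several sizes centred on the point of gaze."""
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    h, w = frame.shape[:2]
    crops = []
    for s in scales:
        x0, y0 = max(0, x - s // 2), max(0, y - s // 2)
        crops.append(frame[y0 : min(h, y0 + s), x0 : min(w, x0 + s)])
    return crops

def is_joint_attention(feat_a, feat_b, threshold=0.8):
    """Cosine similarity between crop features from two wearers."""
    sim = float(np.dot(feat_a, feat_b) /
                (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-8))
    return sim >= threshold
```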
intelligent user interfaces | 2018
Keita Higuchi; Soichiro Matsuda; Rie Kamikubo; Takuya Enomoto; Yusuke Sugano; Jun-ichi Yamamoto; Yoichi Sato
This paper presents a novel interface to support video coding of social attention in the assessment of children with autism spectrum disorder. Video-based evaluations of social attention during therapeutic activities allow observers to find target behaviors while handling the ambiguity of attention. Despite the recent advances in computer vision-based gaze estimation methods, fully automatic recognition of social attention under diverse environments is still challenging. The goal of this work is to investigate an approach that uses automatic video analysis in a supportive manner for guiding human judgment. The proposed interface displays visualizations of gaze estimation results on videos and provides GUI support that helps observers reach agreement by defining social attention labels on the video timeline. Through user studies and expert reviews, we show how the interface helps observers perform video coding of social attention and how human judgment compensates for technical limitations of the automatic gaze analysis.
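A minimal sketch of the visualization side of such an interface, assuming gaze estimates are already computed per frame; the person identifiers, colors, and drawing style are illustrative, not the system's actual design.

```python
# Minimal sketch (assumed): overlay automatically estimated gaze points on
# video frames so an observer can judge and label social attention on the
# timeline. Person ids and colors are illustrative.

import cv2  # OpenCV

def overlay_gaze(frame, gaze_points):
    """Draw one marker per person's estimated gaze point on a BGR frame.

    gaze_points: dict mapping person id -> (x, y) pixel coordinates, or None
                 when the estimate is unavailable for that frame.
    """
    colors = {"child": (0, 0, 255), "therapist": (255, 0, 0)}  # illustrative ids
    for person, point in gaze_points.items():
        if point is None:
            continue  # leave frames without an estimate to human judgment
        color = colors.get(person, (0, 255, 0))
        cv2.circle(frame, (int(point[0]), int(point[1])), 12, color, 2)
        cv2.putText(frame, person, (int(point[0]) + 14, int(point[1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return frame
```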
human factors in computing systems | 2018
Seita Kayukawa; Keita Higuchi; Ryo Yonetani; Masanori Nakamura; Yoichi Sato; Shigeo Morishima
This work presents Dynamic Object Scanning (DO-Scanning), a novel interface that helps users quickly browse long, untrimmed first-person videos. The interface offers users a small set of object cues that are generated automatically and tailored to the context of a given video. Users choose which cues to highlight, and the interface in turn adaptively fast-forwards the video while playing scenes that contain the highlighted cues at the original speed. Our experimental results show that DO-Scanning dynamically arranges an efficient and compact set of cues and that this set of cues is useful for browsing a diverse set of first-person videos.
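A minimal sketch of cue-driven fast-forwarding, assuming per-frame object detections are available from an off-the-shelf detector; the skip rate and data layout are illustrative assumptions, not DO-Scanning's implementation.

```python
# Minimal sketch (assumed, simplified from the abstract): frames containing the
# user-highlighted object cue are kept at the original speed, and the rest are
# skipped at a fixed fast-forward rate.

FAST_SKIP = 10   # show every 10th frame when the highlighted cue is absent

def frames_to_show(detections, highlighted_cue):
    """Return indices of frames to display.

    detections: list where detections[i] is the set of object labels in frame i
                (e.g. produced by an off-the-shelf object detector)
    """
    shown, skip_counter = [], 0
    for i, labels in enumerate(detections):
        if highlighted_cue in labels:
            shown.append(i)          # play scenes with the cue at original speed
            skip_counter = 0
        else:
            if skip_counter % FAST_SKIP == 0:
                shown.append(i)      # sparse sampling elsewhere = fast-forward
            skip_counter += 1
    return shown
```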
Proceedings of the Internet of Accessible Things | 2018
Rie Kamikubo; Keita Higuchi; Ryo Yonetani; Hideki Koike; Yoichi Sato
Despite the emphasis on involving users with disabilities in the development of accessible interfaces, user trials come with high costs and effort. Particularly given the diverse range of abilities, as in the case of low vision, simulating the effect of an impairment on interaction with an interface has been proposed as an alternative. As a starting point for assessing the role of simulation in the design cycle, this research focuses on allowing sighted individuals to experience an interface under tunnel vision using gaze-contingent simulation. We investigated the reliability of this approach through empirical tests of prototypes, comparing participants under simulation with the intended user groups. We found that the simulation-based approach can enable developers not only to examine problems in interfaces but also to obtain user feedback from simulated user trials with the necessary evaluation measures. We discuss how the approach can complement the accessibility qualities associated with user involvement at different development phases.
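A minimal sketch of a gaze-contingent tunnel-vision mask, assuming a tracked gaze point is available for each displayed frame; the visible radius is an illustrative parameter, not the configuration used in the study.

```python
# Minimal sketch (assumed) of a gaze-contingent tunnel-vision simulation:
# everything outside a circular window around the tracked gaze point is
# blacked out, so a sighted developer experiences a restricted visual field.

import numpy as np

TUNNEL_RADIUS = 80   # visible radius in pixels (depends on the field loss simulated)

def simulate_tunnel_vision(frame, gaze_xy, radius=TUNNEL_RADIUS):
    """Mask a screen capture or camera frame outside the gaze-centred window."""
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    dist2 = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2
    mask = dist2 <= radius ** 2
    out = np.zeros_like(frame)
    out[mask] = frame[mask]   # keep only the region the "tunnel" can see
    return out
```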
international conference on computer graphics and interactive techniques | 2017
Keita Higuchi; Ryo Yonetani; Yoichi Sato
This work presents EgoScanning, a video fast-forwarding interface that helps users find important events in lengthy first-person videos continuously recorded with wearable cameras. This interface features an elastic timeline that adaptively changes playback speeds and emphasizes egocentric cues specific to first-person videos, such as hand manipulations, moving, and conversations with people, on the basis of computer-vision techniques. The interface also allows users to input which of such cues are relevant to events of their interest. Through our user study, we confirm that users can find events of interest quickly from first-person videos thanks to the benefits of using the EgoScanning interface.
conference on computers and accessibility | 2017
Rie Kamikubo; Keita Higuchi; Ryo Yonetani; Hideki Koike; Yoichi Sato
Active involvement of users with disabilities is difficult during the iterative stages of the design process due to the high costs and effort associated with user studies. This research proposes a user-centered design (UCD) strategy that incorporates gaze-contingent tunnel vision simulation with sighted individuals to facilitate rapid prototyping of accessible interfaces. Through three types of validation studies, we examined how our simulation techniques can provide opportunities for continued evaluation and refinement of a design. Our simulation approach was effective in emulating the scanning behaviors caused by tunnel vision and in capturing user feedback to identify user interface and usability criteria early in the design cycle.