Keiichi Ochiai
NTT DoCoMo
Publication
Featured research published by Keiichi Ochiai.
international conference on mobile computing and ubiquitous networking | 2015
Wataru Yamada; Daisuke Torii; Haruka Kikuchi; Hiroshi Inamura; Keiichi Ochiai; Ken Ohta
This paper describes a method to extract local event information from the micro-blog service Twitter. Twitter holds innumerable user-posted short messages called tweets that cover various topics, including local events. Our proposal is composed of three steps: 1) extract tweets related to local events from local tweets with a Support Vector Machine (SVM), 2) identify and extract the venues, names, and times of local events mentioned in the tweets by applying Conditional Random Fields (CRF), 3) use the venues and the similarity of names to aggregate duplicate local event information. We implement the proposed method and confirm that it extracts local event information with higher precision than conventional methods.
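Step 3 of the pipeline above, aggregating duplicates by venue and name similarity, can be sketched in pure Python. The record fields and the 0.8 similarity threshold are illustrative assumptions, not parameters from the paper.

```python
from difflib import SequenceMatcher

def dedup_events(events, threshold=0.8):
    """Aggregate duplicate local-event records.

    Two records are merged when they share a venue and their
    names are similar (SequenceMatcher ratio >= threshold).
    """
    merged = []
    for ev in events:
        for kept in merged:
            same_venue = ev["venue"] == kept["venue"]
            sim = SequenceMatcher(None, ev["name"], kept["name"]).ratio()
            if same_venue and sim >= threshold:
                kept["sources"].append(ev["name"])  # fold near-duplicate in
                break
        else:  # no existing record matched: keep as a new event
            merged.append({**ev, "sources": [ev["name"]]})
    return merged

events = [
    {"name": "Summer Music Festival", "venue": "Central Park"},
    {"name": "Summer Music Festiva", "venue": "Central Park"},  # near-duplicate
    {"name": "Food Truck Fair", "venue": "Harbor Square"},
]
print(len(dedup_events(events)))  # 2 unique events remain
```

The venue check keeps the name comparison cheap: only events at the same place are ever compared for string similarity.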
international symposium on wearable computers | 2017
Wataru Yamada; Hiroyuki Manabe; Hiroyuki Hakoda; Keiichi Ochiai; Daizo Ikeda
We present CamTrackPoint, a new input interface controlled by index-finger gestures captured by the rear camera of a mobile device. CamTrackPoint mounts a 3D-printed ring on the rear camera's bezel and senses the movement of the user's finger by tracking the light that passes through the finger. The method gives mobile devices a new low-cost input interface that offers physical force feedback like a pointing stick, as it needs only a simple ring-shaped part on the camera bezel. Moreover, when no finger gesture is being made, the camera retains an unobscured view. We prototype the proposed method and evaluate its precision. This paper discusses the unique characteristics and possible applications of the proposed method.
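The light-through-finger tracking can be illustrated with a toy brightness-centroid tracker: as the finger tilts, the bright region seen by the camera shifts, and the centroid's motion becomes a 2D pointer delta. The frame format and threshold below are assumptions for illustration, not CamTrackPoint's actual algorithm.

```python
def bright_centroid(frame, threshold=128):
    """Return the (row, col) centroid of pixels brighter than threshold.

    `frame` is a 2D list of grayscale values (0-255). Tracking how this
    centroid moves between frames yields a pointing-stick-like 2D input.
    Returns None when no pixel is bright enough (no finger on the ring).
    """
    count = rsum = csum = 0
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > threshold:
                count += 1
                rsum += r
                csum += c
    if count == 0:
        return None
    return (rsum / count, csum / count)

frame = [
    [0,   0,   0],
    [0, 255,   0],
    [0,   0,   0],
]
print(bright_centroid(frame))  # (1.0, 1.0)
```

A real implementation would smooth the centroid over time and map its displacement from a rest position to cursor velocity.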
international conference on mobile computing and ubiquitous networking | 2016
Wataru Yamada; Haruka Kikuchi; Keiichi Ochiai; Shu Takahashi; Yusuke Fukazawa; Hiroshi Inamura; Ken Ohta
The micro-blog service Twitter holds innumerable user-posted short messages, called tweets, that cover various topics including local events. We previously proposed a method that uses natural language processing to extract a large amount of varied local event information from Twitter. This paper describes a method to extract event information and assign category labels, such as music or culture, to it. Our proposal is composed of two steps: 1) extract local event information from event-related tweets using Support Vector Machine and Conditional Random Fields approaches; 2) assign category labels by combining the outputs of per-category classifiers. We implement the classifiers in three ways: hand-designed keyword matching, machine learning, and a hybrid of the two. We also evaluate classification performance on five typical event categories. As a result, we confirmed that the hybrid method achieves the highest average F-score among the three, 0.674.
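The hybrid combination of keyword matching and a learned classifier can be sketched as a weighted vote per category. The keyword lists, the `ml_scores` input (standing in for a trained classifier's per-category scores), and the mixing weight `alpha` are all hypothetical, invented here for illustration.

```python
# Hand-designed keyword lists per event category (illustrative only).
KEYWORDS = {
    "music": ["concert", "live", "band"],
    "culture": ["exhibition", "museum", "festival"],
}

def keyword_score(text, category):
    """Count how many of the category's keywords appear in the text."""
    return sum(kw in text.lower() for kw in KEYWORDS[category])

def hybrid_label(text, ml_scores, alpha=0.5):
    """Pick the category maximizing a blend of keyword and ML scores.

    `ml_scores` maps category -> score from a (hypothetical) trained
    per-category classifier; `alpha` weights the keyword evidence.
    """
    best, best_score = None, float("-inf")
    for cat in KEYWORDS:
        s = alpha * keyword_score(text, cat) + (1 - alpha) * ml_scores.get(cat, 0.0)
        if s > best_score:
            best, best_score = cat, s
    return best

print(hybrid_label("Jazz concert tonight", {"music": 0.9, "culture": 0.1}))
```

The appeal of the hybrid is complementary errors: keyword rules are precise on vocabulary they know, while the learned scores generalize to phrasings the rules miss.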
international conference on computer graphics and interactive techniques | 2009
Keiichi Ochiai; Norimichi Tsumura; Toshiya Nakaguchi; Yoichi Miyake
Photorealistic image synthesis is a challenging topic in computer graphics. Image-based techniques for capturing and reproducing the appearance of real scenes have received a great deal of attention. A long measurement time and a large amount of memory are required in order to acquire an image-based relightable dataset, i.e., light transport or reflectance field. Several approaches have been proposed with the goal of efficiently acquiring light transport [Sen et al. 2005; Fuchs et al. 2007]. However, since, with the exception of the recently proposed compressive sensing method [Peers et al. 2009], most previous studies have focused on scene adaptive sampling algorithms, conventional methods cannot perform efficiently in the case of a scene that has significant global illumination. In this paper, we present a non-adaptive sampling method for measuring light transport of a scene based on separation of the direct and global illumination components.
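The direct/global separation that the method builds on is, in its textbook two-measurement form (high-frequency illumination pattern plus its complement), simple to state: the brighter measurement at a pixel is roughly direct plus half the global component, the darker one roughly half the global component. The sketch below shows that closed form on flat pixel lists; it is the standard separation identity, not the paper's measurement scheme.

```python
def separate_direct_global(pattern_on, pattern_off):
    """Split per-pixel radiance into direct and global components.

    For each pixel, given its value under a high-frequency pattern and
    under the complementary pattern: max ~ direct + global/2 and
    min ~ global/2, so direct = max - min and global = 2 * min.
    """
    direct, global_ = [], []
    for a, b in zip(pattern_on, pattern_off):
        hi, lo = max(a, b), min(a, b)
        direct.append(hi - lo)
        global_.append(2 * lo)
    return direct, global_

d, g = separate_direct_global([10, 8], [2, 4])
print(d, g)  # [8, 4] [4, 8]
```

Separating the components first is what lets a non-adaptive sampling budget be spent where each component actually needs it.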
2017 Tenth International Conference on Mobile Computing and Ubiquitous Network (ICMU) | 2017
Kazuki Kiriu; Keiichi Ochiai; Akiya Inagaki; Naoki Yamamoto; Yusuke Fukazawa; Masatoshi Kimoto; Tsukasa Okimura; Yuri Terasawa; Takaki Maeda; Jun Ota
Mental health has received increasing attention in recent years, and a correlation between social interaction during eating and mental health has been noted. The purpose of this research is to determine whether wearable devices can be used to recognize if a person is eating alone or with company. Using a watch-type device and a smartphone, we collected data while participants ate in daily life. Using this data, we were able to recognize whether a person was eating alone or with company with an accuracy of 96.3%.
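A classifier of this kind could plausibly use the timing of eating gestures: conversation during a shared meal tends to lengthen and vary the gaps between bites. The feature (inter-bite gap statistics), the threshold, and the bite timestamps below are invented for illustration; the abstract does not describe the paper's actual features or classifier.

```python
from statistics import mean, pstdev

def eating_context(bite_times, threshold=8.0):
    """Toy rule: long / irregular gaps between bites -> 'with company'.

    `bite_times` are timestamps (seconds) of detected eating gestures,
    e.g. from a wrist-worn accelerometer. The 8.0s threshold is an
    arbitrary illustrative value.
    """
    gaps = [b - a for a, b in zip(bite_times, bite_times[1:])]
    score = mean(gaps) + pstdev(gaps)  # typical gap plus its spread
    return "with company" if score >= threshold else "alone"

print(eating_context([0, 3, 6, 9, 12]))    # steady, quick bites
print(eating_context([0, 10, 15, 30]))     # long, irregular pauses
```

A real system would learn such decision boundaries from labeled meals rather than hand-pick a threshold.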
international conference on computer graphics and interactive techniques | 2007
Takao Makino; Norimichi Tsumura; Koichi Takase; Ryusuke Honma; Keiichi Ochiai; Nobutoshi Ojima; Toshiya Nakaguchi; Yoichi Miyake
In the field of virtual studios in the television industry, the facial live video stream and background scenes must be composed in real time. One of the complex problems in this compositing is matching the lighting between the facial live video and the background scene. In this paper, we develop a practical lighting reproduction technique to reproduce the appearance of a face in the facial live video with rigid facial motion in real time under an arbitrary environmental lighting condition.
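Lighting reproduction of this image-based kind typically renders the relit result as a weighted sum of basis images, with the weights derived from the target environment lighting. A minimal sketch on flat pixel lists, with hypothetical basis images and weights (the paper's real-time pipeline and facial-motion handling are not reproduced):

```python
def relight(basis_images, weights):
    """Image-based relighting: out[p] = sum_i weights[i] * basis_images[i][p].

    `basis_images` are per-light-direction photographs flattened to pixel
    lists; `weights` are the target environment's intensities for those
    directions. Linearity of light transport makes the sum valid.
    """
    out = [0.0] * len(basis_images[0])
    for img, w in zip(basis_images, weights):
        for i, v in enumerate(img):
            out[i] += w * v
    return out

# Two single-pixel-lit basis images combined under weights (2, 3).
print(relight([[1, 0], [0, 1]], [2, 3]))  # [2.0, 3.0]
```

Because the operation is a fixed linear combination, it maps directly onto GPU blending, which is what makes real-time compositing in a virtual studio feasible.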
international conference on computer graphics and interactive techniques | 2006
Keiichi Ochiai; Norimichi Tsumura; Toshiya Nakaguchi; Kimiyoshi Miyata; Yoichi Miyake
Archive | 2013
Keiichi Ochiai; Daisuke Torii; Yu Kikuchi; Wataru Yamada
Archive | 2012
Keiichi Ochiai; Daisuke Torii
Journal of Imaging Science and Technology | 2011
Takao Makino; Koichi Takase; Keiichi Ochiai; Norimichi Tsumura; Toshiya Nakaguchi; Nobutoshi Ojima