Publications


Featured research published by Masa Ogata.


Augmented Human International Conference | 2015

SkinWatch: skin gesture interaction for smart watch

Masa Ogata; Michita Imai

We propose SkinWatch, a new interaction modality for wearable devices. SkinWatch provides gesture input by sensing deformation of the skin under a wearable wrist device, also known as a smart watch. Gestures are recognized as commands matched against learned data, together with two-dimensional linear input. The sensing part is small, thin, and stable, to accept accurate input via a user's skin. We also implement an anti-error mechanism to prevent unexpected input when the user moves or rotates his or her forearm. The whole sensor costs less than $1.50, and the sensor layer does not exceed a height of 3 mm in this prototype. We demonstrate sample applications with a practical task using two-finger skin gesture input.
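
As a rough illustration of the recognition idea described above (not the authors' implementation), the following Python sketch matches a window of readings from a hypothetical four-sensor photo-reflective array against recorded gesture templates; the sensor count, window length, and placeholder data are assumptions, and a nearest-template match stands in for whatever learning method the paper actually uses.

import numpy as np

N_SENSORS = 4        # assumed size of the photo-reflective array
WINDOW = 30          # assumed number of samples per gesture window

def normalize(window: np.ndarray) -> np.ndarray:
    """Remove each sensor's baseline so only skin deformation remains."""
    return window - window.mean(axis=0)

def classify(window: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the gesture whose recorded template is closest (Euclidean
    distance) to the incoming, baseline-corrected window."""
    x = normalize(window).ravel()
    return min(templates, key=lambda g: np.linalg.norm(x - normalize(templates[g]).ravel()))

# Hypothetical usage: templates would be recorded per user beforehand.
templates = {
    "pinch": np.random.rand(WINDOW, N_SENSORS),   # placeholder training data
    "slide": np.random.rand(WINDOW, N_SENSORS),
}
live_window = np.random.rand(WINDOW, N_SENSORS)   # placeholder live reading
print(classify(live_window, templates))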


Intelligent User Interfaces | 2016

Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear

Katsutoshi Masai; Yuta Sugiura; Masa Ogata; Kai Kunze; Masahiko Inami; Maki Sugimoto

This paper presents novel smart eyewear that uses embedded photo reflective sensors and machine learning to recognize a wearer's facial expressions in daily life. We leverage the skin deformation that occurs when wearers change their facial expressions. With small photo reflective sensors, we measure the proximity between the skin surface of the face and the eyewear frame, into which 17 sensors are integrated. A Support Vector Machine (SVM) algorithm is applied to the sensor readings. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. The main contributions of our work are as follows. (1) The eyewear recognizes eight facial expressions (92.8% accuracy for one-time use and 78.1% for use on three different days). (2) It is designed and implemented with social acceptability in mind: the device looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken. Our work is one of the first attempts to recognize and evaluate a variety of facial expressions with an unobtrusive wearable device.
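
The abstract states that an SVM is applied to readings from the 17 photo reflective sensors. A minimal scikit-learn sketch of that idea follows; the expression labels and training data here are placeholders, not the authors' dataset or model parameters.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_SENSORS = 17   # proximity sensors embedded in the eyewear frame (per the abstract)
EXPRESSIONS = ["neutral", "smile", "frown", "surprise"]  # placeholder subset of the 8 classes

# Placeholder training data: each row is one frame of 17 proximity readings.
rng = np.random.default_rng(0)
X_train = rng.random((200, N_SENSORS))
y_train = rng.choice(EXPRESSIONS, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

frame = rng.random((1, N_SENSORS))    # one live sensor frame
print(clf.predict(frame)[0])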


International Conference on Computer Graphics and Interactive Techniques | 2015

AffectiveWear: toward recognizing facial expression

Katsutoshi Masai; Yuta Sugiura; Masa Ogata; Katsuhiro Suzuki; Fumihiko Nakamura; Sho Shimamura; Kai Kunze; Masahiko Inami; Maki Sugimoto

Facial expressions are a powerful way for us to exchange information nonverbally; they can give us insights into how people feel and think. There are a number of works related to facial expression detection in computer vision; however, most focus on camera-based systems installed in the environment. With this method, it is difficult to track the user's face if the user moves constantly, and facial expressions can be recognized only in a limited set of places.


Augmented Human International Conference | 2014

Multi-touch steering wheel for in-car tertiary applications using infrared sensors

Shunsuke Koyama; Yuta Sugiura; Masa Ogata; Anusha Withana; Yuji Uema; Makoto Honda; Sayaka Yoshizu; Chihiro Sannomiya; Kazunari Nawa; Masahiko Inami

This paper proposes a multi-touch steering wheel for in-car tertiary applications. Existing interfaces for in-car applications such as buttons and touch displays have several operating problems. For example, drivers have to consciously move their hands to the interfaces, as the interfaces are fixed in specific positions. Therefore, we developed a steering wheel on which touch positions can correspond to different operating positions. The system can recognize hand gestures at any position on the steering wheel by utilizing 120 infrared (IR) sensors embedded in it, lined up in an array surrounding the whole wheel. A Support Vector Machine (SVM) algorithm is used to learn and recognize the different gestures from the data obtained by the sensors. The gestures recognized are flick, click, tap, stroke, and twist. Additionally, we implemented a navigation application and an audio application that utilize the torus shape of the steering wheel. We conducted an experiment to examine whether our proposed system can recognize flick gestures at three positions. Results show that an average of 92% of flick gestures could be recognized.
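
One detail the abstract highlights is that gestures are recognized at any position on a ring of 120 IR sensors. The short sketch below shows one plausible way to make a sensor-ring feature vector position-independent, by rotating it so the strongest activation sits at a fixed index; this normalization step is an illustration, not the method stated in the paper.

import numpy as np

N_SENSORS = 120  # IR sensors arranged in a ring around the wheel (per the abstract)

def center_on_peak(ring: np.ndarray) -> np.ndarray:
    """Roll the ring of sensor values so the strongest reading sits at index 0,
    making the feature vector independent of where on the wheel the hand is."""
    return np.roll(ring, -int(np.argmax(ring)))

# Hypothetical frame: a hand on the wheel produces a local bump of IR readings.
frame = np.zeros(N_SENSORS)
frame[40:48] = [0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.2, 0.1]
features = center_on_peak(frame)      # same vector regardless of touch position
print(int(np.argmax(frame)), int(np.argmax(features)))   # e.g. 43 -> 0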


Asia-Pacific Computer and Human Interaction | 2012

Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand

Masa Ogata; Yuta Sugiura; Hirotaka Osawa; Michita Imai

The human hand is an appropriate body part on which to attach an agent robot. Pygmy is an anthropomorphic device that produces a presence on a human hand by magnifying finger expressions. The device is a trial toward developing an interaction model of an agent on the hand. It is based on the concept of hand anthropomorphism and uses finger movements to create the anthropomorphic effect. Wearing the device is similar to having eyes and a mouth on the hand; the wearer's hand spontaneously expresses the agent's presence with the emotions conveyed by the eyes and mouth. Interactive manipulation by controllers and sensors makes the hand look animated. We observed that the character animated with the device elicited user collaboration and interaction as though there were a living thing on the user's hand. Further, users play with the device by treating characters animated with Pygmy as their doubles.


International Conference of Design, User Experience, and Usability | 2014

Augmenting a Wearable Display with Skin Surface as an Expanded Input Area

Masa Ogata; Yuta Sugiura; Yasutoshi Makino; Masahiko Inami; Michita Imai

Wearable devices, such as wristwatch-type smart watches, are becoming smaller and easier to implement. However, user interaction with wearable displays is limited owing to the small display area. On larger displays, such as tablet computers, the user has more space to interact with the device and provide various inputs. A wearable device has a small display area, which clearly decreases its ability to read finger gestures. We propose an augmented wearable display that expands the user input area onto the skin. A user can employ finger gestures on the skin to control the wearable display. The prototype device is implemented using techniques that sense skin deformation by measuring the distance between the skin and the wearable (wristwatch-type) device. With this sensing technique, we show three types of input functions, and create input via the skin around the wearable display and the device.
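
As a toy illustration of turning skin-to-device distance into input (the calibration constants and sensor layout below are invented, not taken from the paper), two hypothetical distance readings are mapped onto a two-dimensional pointer offset:

# Toy mapping from skin-to-device distance readings to a 2D input offset.
# The rest distance and usable range are assumptions for illustration.
REST_MM = 5.0      # assumed skin-to-sensor distance with no touch
RANGE_MM = 3.0     # assumed usable deformation range

def to_axis(distance_mm: float) -> float:
    """Map one distance reading onto [-1, 1], with 0 meaning no deformation."""
    return max(-1.0, min(1.0, (REST_MM - distance_mm) / RANGE_MM))

def pointer_offset(dist_x_mm: float, dist_y_mm: float) -> tuple[float, float]:
    """Combine two orthogonal sensor readings into a 2D offset."""
    return to_axis(dist_x_mm), to_axis(dist_y_mm)

print(pointer_offset(4.0, 6.5))   # skin pressed along +x, relaxed along y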


International Conference on Computer Graphics and Interactive Techniques | 2015

SkinWatch: adapting skin as a gesture surface

Masa Ogata; Ryosuke Totsuka; Michita Imai

Skin is a soft, durable, and sensitive surface of the human body. Skin gestures are a new interaction modality not only for wearable applications but also for on-body interfaces. SkinWatch is an interface technique and interaction modality for skin gesture input designed for the smart watch. Using deformation sensing techniques, SkinWatch adapts human skin as a deformable surface to enable skin gesture interaction. Because occlusion by the user's fingers is avoided, the view of the wearable display stays clear and it is easy to present visual feedback. The user's gesture input is extended from the touch screen to the skin area around the device, and the user obtains feedback not only from the display but also from tactile sensation. A skin gesture manifests as skin deformation that can be measured by a photo-reflective distance sensor. By placing the sensor array under the device, the embedded sensors remain invisible to the user.


Ubiquitous Computing | 2016

LumiO: a plaque-aware toothbrush

Takuma Yoshitani; Masa Ogata; Koji Yatani

Toothbrushing plays an important role in daily dental plaque removal for preventive dentistry. Prior work has investigated improvements to toothbrushing with sensing technologies, but existing toothbrushing support focuses mostly on estimating brushing coverage; users thus have only indirect information about how well their toothbrushing removes dental plaque. We present LumiO, a toothbrush that offers users continuous feedback on the amount of plaque on their teeth. LumiO uses a well-known method for plaque detection called Quantitative Light-induced Fluorescence (QLF). QLF exploits the red fluorescence that bacteria in plaque exhibit when a blue-violet ray is cast on them: blue-violet light excites this fluorescence, and a camera with an optical filter can capture plaque in pink. We incorporate this technology into an electric toothbrush to improve plaque removal in daily dental care. This paper first discusses related work on sensing for oral activities and interaction as well as technology-supported dental care. We then describe the principles of QLF, the hardware design of LumiO, and our vision-based plaque detection method. Our evaluations show that the vision-based plaque detection method with three machine learning techniques can achieve F-measures of 0.68 to 0.92 under user-dependent training. Qualitative evidence also suggests that study participants gained improved awareness of plaque and built confidence in their toothbrushing.
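
The QLF principle described above (plaque fluoresces red-pink under blue-violet light seen through an optical filter) suggests a simple first processing step: isolating reddish-pink pixels in the camera image. The OpenCV sketch below does only that; the thresholds are invented, and this is not the paper's detector, which additionally applies machine learning.

import cv2
import numpy as np

def plaque_mask(bgr_frame: np.ndarray) -> np.ndarray:
    """Return a binary mask of pixels whose hue and saturation roughly match the
    red-pink fluorescence described for QLF. Threshold values are placeholders."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red hues wrap around 0 in OpenCV's 0-179 hue range, so combine two bands.
    low = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 80, 80), (179, 255, 255))
    return cv2.bitwise_or(low, high)

# Hypothetical usage with a placeholder frame instead of the toothbrush camera feed.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 50:90] = (100, 50, 230)   # a patch of pinkish-red pixels (BGR)
mask = plaque_mask(frame)
print(f"plaque-like pixels: {int(mask.sum() / 255)}")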


International Symposium on Wearable Computers | 2015

Silhouette interactions: using the hand shadow as interaction modality

Eria Chita; Yuta Sugiura; Sunao Hashimoto; Kai Kunze; Masahiko Inami; Masa Ogata

We present the concept of Silhouette Interactions: using the shadow of the hand as an extension of the body to interact with the physical environment. We apply Silhouette Interactions to the case of home appliance control and present two user studies to identify interesting appliances, actions, and shadow gestures. Informed by the studies, we also implement an initial prototype system for further evaluation, and we discuss best practices and lessons learned for Silhouette Interactions.


International Conference on Computer Graphics and Interactive Techniques | 2015

FlashTouch: touchscreen communication combining light and touch

Masa Ogata; Yuta Sugiura; Michita Imai

