Publication


Featured research published by Yuji Uema.


Ubiquitous Computing | 2014

Smarter eyewear: using commercial EOG glasses for activity recognition

Shoya Ishimaru; Kai Kunze; Yuji Uema; Koichi Kise; Masahiko Inami; Katsuma Tanaka

Smart eyewear computing is a relatively new subcategory of ubiquitous computing research with enormous potential. In this paper, we present a first evaluation of soon-to-be commercially available electrooculography (EOG) glasses (J!NS MEME) for use in activity recognition. We discuss the potential of EOG glasses and other smart eyewear, then show a first signal-level assessment of MEME and present a classification task using the glasses. Using the sensor data (EOG and acceleration) from the glasses, we are able to distinguish 4 activities (typing, reading, eating, and talking) for 2 users, with an accuracy of 70% for 6-second windows and up to 100% for a 1-minute majority decision. The classification is user-independent. The results encourage us to further explore the EOG glasses as a platform for more complex, real-life activity recognition systems.
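
The window-and-majority-vote scheme described above can be sketched as follows. The feature choices, sampling rate, and window bookkeeping are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np
    from collections import Counter

    def window_features(eog, acc, fs=100, win_s=6):
        """Slice synchronized EOG and accelerometer streams into fixed
        windows and compute simple per-window statistics (hypothetical
        features; the paper does not specify its feature set here)."""
        win = fs * win_s
        n = min(len(eog), len(acc)) // win
        feats = []
        for i in range(n):
            e, a = eog[i*win:(i+1)*win], acc[i*win:(i+1)*win]
            feats.append([e.mean(), e.std(), np.abs(np.diff(e)).mean(),
                          a.mean(), a.std()])
        return np.array(feats)

    def majority_decision(window_labels, windows_per_minute=10):
        """Aggregate per-window predictions into 1-minute majority votes,
        mirroring the 6-second window / 1-minute decision scheme."""
        return [Counter(window_labels[i:i+windows_per_minute]).most_common(1)[0][0]
                for i in range(0, len(window_labels), windows_per_minute)]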


Augmented Human International Conference | 2010

Fur interface with bristling effect induced by vibration

Masahiro Furukawa; Yuji Uema; Maki Sugimoto; Masahiko Inami

Wearable computing technology is one way to augment the information-processing ability of humans. However, in this area a soft surface is often necessary to maximize the comfort and practicality of wearable devices. Thus, in this paper we propose a soft surface material, with an organic bristling effect achieved through mechanical vibration, as a new user interface. We use fur in order to exhibit the visually rich transformation induced by the bristling effect while also achieving the full tactile experience and benefits of soft materials. Our method needs only a layer of fur and simple vibration motors: the hairs of the fur bristle instantly under purely horizontal mechanical vibration, provided by a vibration motor embedded below the fur material. This technology has significant potential in garment textiles or as a general soft user interface.


User Interface Software and Technology | 2014

Tracs: transparency-control for see-through displays

David Lindlbauer; Toru Aoki; Robert Walter; Yuji Uema; Anita Höchtl; Michael Haller; Masahiko Inami; Jörg Müller

We present Tracs, a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration between spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but they introduce new issues of personal privacy, screen-content privacy, and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer, and a polarization-adjustment layer in between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control transparency globally or only locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch quickly and easily between personal and collaborative work, and gives them full control over the transparent regions of their display.
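
The patch grid lends itself to a simple addressing model. The sketch below is a hypothetical controller interface, assuming a small grid and a normalized 0-to-1 transparency level; it is not the Tracs hardware API.

    import numpy as np

    class TransparencyGrid:
        """Hypothetical driver for a grid of individually addressable
        transparency patches: 0.0 = opaque (private), 1.0 = see-through."""
        def __init__(self, rows=8, cols=12):
            self.state = np.zeros((rows, cols))  # start fully opaque

        def set_global(self, level):
            """Switch the whole display, e.g. personal -> collaborative."""
            self.state[:] = np.clip(level, 0.0, 1.0)

        def set_region(self, r0, r1, c0, c1, level):
            """Open or close only a local region, keeping the rest private."""
            self.state[r0:r1, c0:c1] = np.clip(level, 0.0, 1.0)

    grid = TransparencyGrid()
    grid.set_region(2, 5, 3, 9, 1.0)  # local see-through window to a co-worker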


Augmented Human International Conference | 2014

Multi-touch steering wheel for in-car tertiary applications using infrared sensors

Shunsuke Koyama; Yuta Sugiura; Masa Ogata; Anusha Withana; Yuji Uema; Makoto Honda; Sayaka Yoshizu; Chihiro Sannomiya; Kazunari Nawa; Masahiko Inami

This paper proposes a multi-touch steering wheel for in-car tertiary applications. Existing interfaces for in-car applications, such as buttons and touch displays, have several operating problems; for example, drivers have to consciously move their hands to the interfaces, as the interfaces are fixed at specific positions. We therefore developed a steering wheel on which touch positions can correspond to different operating positions. The system can recognize hand gestures at any position on the steering wheel by utilizing 120 infrared (IR) sensors embedded in it, lined up in an array surrounding the whole wheel. A Support Vector Machine (SVM) algorithm is used to learn and recognize the different gestures from the sensor data. The gestures recognized are flick, click, tap, stroke, and twist. Additionally, we implemented a navigation application and an audio application that utilize the torus shape of the steering wheel. We conducted an experiment to evaluate the ability of the proposed system to recognize flick gestures at three positions; results show that an average of 92% of flick gestures could be recognized.
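
As a rough sketch of such a pipeline, scikit-learn's SVC can stand in for the paper's SVM. The 120-sensor frame shape and gesture labels follow the abstract; the random placeholder data and all other details are assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    GESTURES = ["flick", "click", "tap", "stroke", "twist"]

    # X: one row per recorded gesture, 120 IR readings around the wheel rim.
    # y: gesture label indices. Random placeholders stand in for real data.
    rng = np.random.default_rng(0)
    X = rng.random((500, 120))
    y = rng.integers(0, len(GESTURES), 500)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)

    frame = rng.random((1, 120))              # one frame from the sensor ring
    print(GESTURES[int(clf.predict(frame)[0])])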


International Symposium on Wearable Computers | 2015

MEME: eye wear computing to explore human behavior

Kai Kunze; Katsuma Tanaka; Shoya Ishimaru; Yuji Uema; Koichi Kise; Masahiko Inami

In this demonstration, we focus on eyewear that assists people by sensing their physical, social, and mental activities. Detecting and quantifying our behavior can raise awareness of unhealthy practices. We use J!NS MEME prototypes, smart glasses with integrated electrodes that detect eye movements, in application cases ranging from reading detection and ergonomics to talking recognition for social-interaction tracking.


IEEE Virtual Reality Conference | 2015

MRI overlay system using optical see-through for marking assistance

Jun Morita; Sho Shimamura; Motoko Kanegae; Yuji Uema; Maiko Takahashi; Masahiko Inami; Tetsu Hayashida; Maki Sugimoto

In this paper, we propose an augmented-reality system that superimposes MRI data onto a patient model using a half-silvered mirror and a handheld device. By tracking the coordinates of the patient model and the handheld device with optical markers, we are able to transform the images to the correlated position. Voxel data of the MRI are generated so that the user is able to view the MRI from many different angles.
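
The coordinate chain the abstract describes can be expressed with homogeneous 4x4 transforms. The sketch below is a minimal illustration; the matrix names and identity placeholders are assumptions, not values from the system.

    import numpy as np

    # Assumed poses from the optical tracker (world <- patient, world <- device)
    # and a fixed calibration mapping MRI voxel space into the patient frame.
    T_world_patient = np.eye(4)
    T_world_device = np.eye(4)
    T_patient_mri = np.eye(4)

    def mri_point_in_device(p_mri):
        """Map an MRI-space point into the handheld device's frame so the
        correct view can be rendered behind the half-silvered mirror."""
        p = np.append(np.asarray(p_mri, float), 1.0)   # homogeneous point
        p_world = T_world_patient @ T_patient_mri @ p
        return (np.linalg.inv(T_world_device) @ p_world)[:3]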


International Conference on Computer Graphics and Interactive Techniques | 2009

Fur display

Masahiro Furukawa; Naohisa Nagaya; Takuji Tokiwa; Masahiko Inami; Atsushi Okoshi; Maki Sugimoto; Yuta Sugiura; Yuji Uema

Fur Display makes invisible information visible. It not only delivers dynamic movements of appealing, bushy fur, but it is also a feathery, visual, tactile display that invites touch and interaction. Earlier versions of this concept often used rigid surfaces like tabletops, but Fur Display presents touchable fur with surprising dynamic movement. The device is simple and small, so it can be placed on clothing, appliances, or personal belongings, where it becomes a useful, friendly interface in our daily lives.


International Symposium on Wearable Computers | 2017

Toward large scale study using smart eyewear

Yuji Uema; Kazutaka Inoue

The tracking of cognitive and physical activity using a wearable device is an emerging research field. While several studies have been performed on large-scale activity tracking using watch-type wearable devices, large-scale activity tracking using an eyewear-type wearable device remains a challenging area owing to the negative effect of such devices on a user's appearance. In this paper, we describe the initial results of a large-scale longitudinal study of users' concentration levels using an eyewear-type wearable device. Our approach uses an eyewear-type wearable device and a pre-developed mobile application to collect data on eye blinks and head posture. The concentration level of users is estimated from blink rate, blink strength, and head posture. We collected over 40,000 hours of data, and the results show how concentration changes across the week and over time.
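
A toy sketch of the per-window blink features the abstract mentions. The threshold-based detector, sampling rate, and window length are assumptions; this is not the algorithm used in the study.

    import numpy as np

    def blink_features(eog, fs=100, win_s=60, k=3.0):
        """Per-window blink rate and blink strength from a vertical EOG
        channel, using a crude threshold on the signal derivative."""
        d = np.abs(np.diff(eog))
        win = fs * win_s
        feats = []
        for i in range(0, len(d) - win + 1, win):
            seg = d[i:i + win]
            above = seg > k * seg.std()
            onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
            rate = len(onsets) * 60.0 / win_s        # blinks per minute
            strength = seg[above].mean() if above.any() else 0.0
            feats.append((rate, strength))
        return feats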


IEEE Virtual Reality Conference | 2015

Registration and projection method of tumor region projection for breast cancer surgery

Motoko Kanegae; Jun Morita; Sho Shimamura; Yuji Uema; Maiko Takahashi; Masahiko Inami; Tetsu Hayashida; Maki Sugimoto

This paper introduces a registration and projection method for directly projecting the tumor region in breast cancer surgery assistance, based on the breast procedure of our collaborating doctor. We investigated the steps of this procedure and how tumor-region projection can be applied to it. We propose a novel MRI acquisition method that allows us to correlate the MRI coordinates with the patient in the real world. By calculating the transformation matrix between the MRI coordinates and the coordinates of the markers on the patient, we are able to register the acquired MRI data to the patient. Our registration and presentation method for the tumor region was then evaluated by medical doctors.
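
Estimating a rigid transform from corresponding marker positions is a standard problem; one common solution is the Kabsch (SVD) method, sketched below. The abstract does not state which method the authors used, so this is an illustrative stand-in.

    import numpy as np

    def rigid_transform(mri_pts, patient_pts):
        """Least-squares rotation R and translation t mapping N x 3
        MRI-space marker positions onto their tracked positions on the
        patient (rows correspond), via the Kabsch / SVD method."""
        cm, cp = mri_pts.mean(axis=0), patient_pts.mean(axis=0)
        H = (mri_pts - cm).T @ (patient_pts - cp)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # guard against reflections
        t = cp - R @ cm
        return R, t                        # patient_pt ~= R @ mri_pt + t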


Virtual Reality International Conference | 2014

Virtual slicer: interactive visualizer for tomographic medical images based on position and orientation of handheld device

Sho Shimamura; Motoko Kanegae; Jun Morita; Yuji Uema; Masahiko Inami; Tetsu Hayashida; Hideo Saito; Maki Sugimoto

This paper introduces an interface that helps users understand the correspondence between the patient and medical images. Surgeons determine the extent of resection by using tomographic images such as MRI (Magnetic Resonance Imaging) data. However, understanding the relationship between the patient and tomographic images is difficult, and this study aims to visualize the correspondence more intuitively. In this paper, we propose an interactive visualizer for medical images based on the relative position and orientation of the handheld device and the patient. We conducted an experiment to compare the performance of the proposed method with several other methods; the proposed method showed the smallest error.
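
Resampling the volume along the plane implied by the device pose can be sketched with scipy's map_coordinates. The pose inputs, slice size, and spacing below are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_slice(volume, center, u, v, size=128, spacing=1.0):
        """Sample a size x size oblique slice from a 3-D volume. 'center'
        is the slice center in voxel coordinates; 'u' and 'v' are the
        orthonormal in-plane axes given by the handheld device's pose."""
        s = (np.arange(size) - size / 2) * spacing
        gu, gv = np.meshgrid(s, s)                 # in-plane pixel grid
        pts = (center[:, None, None]
               + u[:, None, None] * gu + v[:, None, None] * gv)
        return map_coordinates(volume, pts, order=1, mode="nearest")

    vol = np.random.rand(64, 64, 64)               # placeholder MRI volume
    img = extract_slice(vol, np.array([32.0, 32.0, 32.0]),
                        np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))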

Collaboration


Dive into Yuji Uema's collaboration.

Top Co-Authors

Katsuma Tanaka

Osaka Prefecture University

Koichi Kise

Osaka Prefecture University

Shoya Ishimaru

Osaka Prefecture University
