Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yasue Kishino is active.

Publication


Featured research published by Yasue Kishino.


international conference on pervasive computing | 2010

Object-based activity recognition with heterogeneous sensors on wrist

Takuya Maekawa; Yutaka Yanagisawa; Yasue Kishino; Katsuhiko Ishiguro; Koji Kamei; Yasushi Sakurai; Takeshi Okadome

This paper describes how we recognize activities of daily living (ADLs) with our designed sensor device, which is equipped with heterogeneous sensors such as a camera, a microphone, and an accelerometer and is attached to a user's wrist. Specifically, capturing the space around the user's hand with the camera on the wrist-mounted device enables us to recognize ADLs that involve the manual use of objects, such as making tea or coffee and watering plants. Existing wearable sensor devices equipped only with a microphone and an accelerometer cannot recognize these ADLs without object-embedded sensors. We also propose an ADL recognition method that takes privacy issues into account, because the camera and microphone can capture aspects of a user's private life. We confirmed experimentally that the incorporation of a camera could significantly improve the accuracy of ADL recognition.
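
As a rough illustration of this kind of multimodal fusion, the sketch below concatenates naive per-modality features from a wrist-worn camera, microphone, and accelerometer into one vector per time window and trains a generic classifier. The feature choices, window contents, and classifier are assumptions made for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch only: fuses simple features from a wrist-worn camera,
# microphone, and accelerometer into one feature vector per time window and
# trains a generic classifier. Not the authors' actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, audio, image):
    """Concatenate naive per-modality features for one time window.

    accel : (N, 3) array of accelerometer samples
    audio : (M,) array of microphone samples
    image : (H, W, 3) array, one frame from the wrist camera
    """
    accel_feats = np.concatenate([accel.mean(axis=0), accel.std(axis=0)])
    audio_feats = np.array([audio.mean(), audio.std(), np.abs(audio).max()])
    # Coarse intensity histogram as a stand-in for the space around the hand.
    hist, _ = np.histogram(image.mean(axis=2), bins=8, range=(0, 255))
    image_feats = hist / hist.sum()
    return np.concatenate([accel_feats, audio_feats, image_feats])

# Toy training data: one synthetic window per labelled ADL (hypothetical labels).
rng = np.random.default_rng(0)
labels = ["make_tea", "water_plant", "brush_teeth"]
X = np.stack([
    window_features(rng.normal(size=(100, 3)),
                    rng.normal(size=1600),
                    rng.integers(0, 256, size=(32, 32, 3)))
    for _ in labels
])
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))
```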


international symposium on mixed and augmented reality | 2008

An information layout method for an optical see-through head mounted display focusing on the viewability

Kohei Tanaka; Yasue Kishino; Masakazu Miyamae; Tsutomu Terada; Shojiro Nishio

Accessing information while on the move is a key feature of mobile computing environments, and using an optical see-through head mounted display (HMD) is one of the most suitable ways to do this. Although the HMD can display information without interfering with the user's view, when the scene behind the display is too complex or too bright, the displayed information can be very difficult to see. To solve this problem, we have created a method of laying out information for the optical see-through HMD. The ideal area for displaying information is determined by evaluating the sight image behind the HMD, captured by a pantoscopic camera mounted on it. Moreover, if there is no suitable area for displaying information, our method uses the sight images around the user to select an ideal direction and instructs the user to face that direction. Our method thus displays information in the most viewable areas.
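
A minimal sketch of the region-scoring idea follows: it splits a captured sight image into a grid, scores each cell with an assumed viewability heuristic (low edge complexity, not too bright), and falls back to "no suitable area" above a threshold. The heuristic, grid size, and threshold are illustrative assumptions rather than the paper's metric.

```python
# Illustrative sketch: score candidate regions of the sight image captured
# behind an optical see-through HMD and pick the most "viewable" one.
# The scoring heuristic and threshold are assumptions, not the paper's.
import numpy as np

def viewability_score(region):
    """Lower is better: penalise visual complexity and brightness."""
    gy, gx = np.gradient(region.astype(float))
    complexity = np.abs(gx).mean() + np.abs(gy).mean()   # edge energy
    brightness = region.mean() / 255.0                   # 0..1
    return complexity + 100.0 * brightness

def best_layout_region(sight, grid=(3, 3)):
    """Split the grayscale sight image into a grid and return the cell
    (row, col) with the best score, or None if every cell scores too badly."""
    h, w = sight.shape
    rows, cols = grid
    best, best_score = None, float("inf")
    for r in range(rows):
        for c in range(cols):
            cell = sight[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            score = viewability_score(cell)
            if score < best_score:
                best, best_score = (r, c), score
    # Arbitrary cutoff: above it, the method would instead guide the user
    # to face a better direction.
    return best if best_score < 150.0 else None

# Synthetic sight image: a left-to-right brightness ramp.
sight = np.tile(np.linspace(0, 255, 160), (120, 1)).astype(np.uint8)
print(best_layout_region(sight))   # darkest, least complex cell wins
```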


international conference on pervasive computing | 2004

Ubiquitous Chip: A Rule-Based I/O Control Device for Ubiquitous Computing

Tsutomu Terada; Masahiko Tsukamoto; Keisuke Hayakawa; Tomoki Yoshihisa; Yasue Kishino; Atsushi Kashitani; Shojiro Nishio

In this paper, we propose a new framework for ubiquitous computing based on rule-based, event-driven I/O (input/output) control devices. Our approach is flexible and autonomous because it employs a behavior-description language based on ECA (Event, Condition, Action) rules with simple I/O control functions. We have implemented a prototype ubiquitous device with connectors and several sensors to show the effectiveness of our approach.
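
A minimal sketch of ECA-style rule dispatch is shown below; the rule representation and the device-state dictionary are illustrative assumptions and do not reproduce the ubiquitous chip's behavior-description language.

```python
# Minimal sketch of ECA (Event, Condition, Action) rule processing in the
# spirit of a rule-based I/O control device. Rule format and state layout
# are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                          # event name that triggers the rule
    condition: Callable[[dict], bool]   # predicate over the device state
    action: Callable[[dict], None]      # side effect on the device state

def dispatch(event: str, state: dict, rules: list[Rule]) -> None:
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action(state)

state = {"light": False, "lux": 12}
rules = [
    Rule("motion_detected",
         condition=lambda s: s["lux"] < 50,           # only when it is dark
         action=lambda s: s.__setitem__("light", True)),
    Rule("motion_stopped",
         condition=lambda s: True,
         action=lambda s: s.__setitem__("light", False)),
]

dispatch("motion_detected", state, rules)
print(state)   # {'light': True, 'lux': 12}
```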


IEEE Pervasive Computing | 2008

Object-Blog System for Environment-Generated Content

Takuya Maekawa; Yutaka Yanagisawa; Yasue Kishino; Koji Kamei; Yasushi Sakurai; Takeshi Okadome

The object-blog service application automatically converts raw sensor data to environment-generated content (EGC), including texts, graphs, and figures. This conversion facilitates data searching and browsing. Generated content can serve several purposes, including memory aids, security, and communication media. In object-blog, personified objects automatically post entries to a Weblog about sensor data obtained from sensors attached to the objects. Feedback thus far from participants working with object-blog in an experimental environment has been positive.
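
The sketch below illustrates the general idea of environment-generated content in a few lines: a personified object renders a blog-style entry from its sensor readings. The templates, thresholds, and field names are invented for illustration and are not the system's own.

```python
# Illustrative sketch of environment-generated content: a personified object
# turns raw sensor readings into a blog-style post. Templates and thresholds
# are invented for illustration.
from datetime import datetime

def post_for(object_name: str, readings: dict) -> str:
    """Render one blog entry from a dict of sensor readings."""
    lines = [f"[{datetime.now():%Y-%m-%d %H:%M}] {object_name} says:"]
    if readings.get("accel_events", 0) > 0:
        lines.append(f"  I was moved {readings['accel_events']} times today.")
    if "temperature_c" in readings:
        lines.append(f"  It is {readings['temperature_c']:.1f} C around me.")
    if not lines[1:]:
        lines.append("  Nothing happened near me today.")
    return "\n".join(lines)

print(post_for("coffee cup", {"accel_events": 3, "temperature_c": 22.4}))
```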


acm symposium on applied computing | 2009

A destination prediction method using driving contexts and trajectory for car navigation systems

Kohei Tanaka; Yasue Kishino; Tsutomu Terada; Shojiro Nishio

Car navigation systems provide the best route to a destination quickly and effectively. However, in daily driving this information is often unnecessary, since drivers already know the route to the destination very well. In addition, it is time-consuming for drivers to input the destination. Our research group has therefore proposed a new car navigation system that provides information related to the destination by predicting the user's destination automatically. We propose a new method that predicts the destination on the basis of the driving trajectory and the contexts in which the user drives. A system that uses our method knows the destination without user interaction and provides information related to the correct destination.
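
A minimal sketch of this kind of prediction is shown below: past trips vote for their destinations, weighted by how closely their starting trajectories and driving contexts match the current trip. The history format, similarity measure, and context weighting are assumptions made for the sketch, not the proposed method itself.

```python
# Illustrative sketch of destination prediction from driving context and a
# partial trajectory: past trips vote for their destination, weighted by how
# well their prefix matches what the driver has done so far.
import math
from collections import defaultdict

def similarity(prefix, past_trajectory):
    """Mean closeness of the current prefix to the start of a past trip."""
    n = min(len(prefix), len(past_trajectory))
    if n == 0:
        return 0.0
    d = sum(math.dist(prefix[i], past_trajectory[i]) for i in range(n)) / n
    return 1.0 / (1.0 + d)

def predict_destination(prefix, context, history):
    """history: list of (context, trajectory, destination) tuples."""
    scores = defaultdict(float)
    for past_context, trajectory, destination in history:
        weight = 2.0 if past_context == context else 1.0   # crude context bonus
        scores[destination] += weight * similarity(prefix, trajectory)
    return max(scores, key=scores.get) if scores else None

history = [
    ("weekday_morning", [(0, 0), (1, 0), (2, 1)], "office"),
    ("weekend",         [(0, 0), (0, 1), (0, 2)], "supermarket"),
]
print(predict_destination([(0, 0), (1, 0)], "weekday_morning", history))
```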


mobile data management | 2006

Design of a Car Navigation System that Predicts User Destination

Tsutomu Terada; Masakazu Miyamae; Yasue Kishino; Kohei Tanaka; Shojiro Nishio; Takashi Nakagawa; Yoshihisa Yamaguchi

Because of advances in information technology, car navigation systems have come into widespread use as useful tools to guide drivers to where they want to go. Conventional car navigation systems present the most suitable route according to a destination input into the system. However, since inputting the destination requires considerable effort, users do not usually use car navigation systems for daily driving. In this paper, to make effective use of car navigation system functions, we propose a new system that automatically predicts the user's purpose and destination. The proposed car navigation system presents various kinds of information based on the predicted purpose, without requiring interaction from the user.


international conference on pervasive computing | 2011

Recognizing the use of portable electrical devices with hand-worn magnetic sensors

Takuya Maekawa; Yasue Kishino; Yasushi Sakurai; Takayuki Suyama

The new method proposed here recognizes the use of portable electrical devices such as digital cameras, cellphones, electric shavers, and video game players with hand-worn magnetic sensors by sensing the magnetic fields emitted by these devices. Because we live surrounded by large numbers of electrical devices and frequently use these devices, we can estimate high-level daily activities by recognizing the use of electrical devices. Therefore, many studies have attempted to recognize the use of electrical devices with such approaches as ubiquitous sensing and infrastructure-mediated sensing. A feature of our method is that we can recognize the use of electrical devices that are not connected to the home infrastructure without the need for any ubiquitous sensors attached to the devices. We evaluated the performance of our recognition method in real home environments, and confirmed that we could achieve highly accurate recognition with small numbers of hand-worn magnetic sensors.
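
As a rough illustration of the underlying idea, the sketch below computes simple statistics of the magnetic field magnitude over a window and matches them to per-device templates with a nearest-centroid rule; the features, template values, and device names are made up for illustration and do not reproduce the paper's models.

```python
# Illustrative sketch: distinguish electrical devices from the magnetic field
# they emit, using per-window statistics of the field magnitude and a
# nearest-centroid match. Features and templates are invented.
import numpy as np

def magnetic_features(samples):
    """samples: (N, 3) magnetometer readings in microtesla."""
    magnitude = np.linalg.norm(samples, axis=1)
    return np.array([magnitude.mean(), magnitude.std(), np.ptp(magnitude)])

# Hypothetical per-device feature templates (mean, std, peak-to-peak).
templates = {
    "idle":            np.array([50.0,  1.0,   4.0]),
    "electric_shaver": np.array([120.0, 35.0, 140.0]),
    "game_console":    np.array([70.0, 10.0,  40.0]),
}

def recognize(samples):
    feats = magnetic_features(samples)
    return min(templates, key=lambda name: np.linalg.norm(feats - templates[name]))

rng = np.random.default_rng(1)
# Simulated window: strong oscillating field on top of the earth's field.
t = np.linspace(0, 1, 200)
window = np.stack([50 + 80 * np.sin(2 * np.pi * 50 * t),
                   10 * rng.normal(size=200),
                   10 * rng.normal(size=200)], axis=1)
print(recognize(window))
```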


ubiquitous computing | 2013

Activity recognition with hand-worn magnetic sensors

Takuya Maekawa; Yasue Kishino; Yasushi Sakurai; Takayuki Suyama

Activity recognition is a key technology for realizing ambient assisted living applications such as care of the elderly and home automation. This paper proposes a new activity recognition method that employs hand-worn magnetic sensors to recognize a broad range of activities ranging from simple activities that involve hand movements such as walking and running to the use of portable electrical devices such as cell phones and cameras. We sense magnetic fields emitted by electrical devices and the earth with hand-worn sensors, and recognize what a user is doing or which electrical device the user is employing. We frequently use a large number of different electrical devices in our daily lives, and so we can estimate high-level daily activities by recognizing their use. Our approach permits us to recognize a range extending from low-level simple activities to high-level activities that relate to the hands without the need to attach any sensors to the electrical devices.
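
The sketch below illustrates only the windowing and feature side of such a pipeline: it segments a magnetometer magnitude stream into fixed windows and computes frequency-domain features, on the intuition that mains-powered devices add strong high-frequency components while body motion appears at low frequencies. The window length, band split, and simulated signal are assumptions, not the paper's settings.

```python
# Illustrative sketch: segment a hand-worn magnetometer magnitude stream into
# fixed windows and compute simple frequency-domain features. Window size and
# band boundaries are assumptions.
import numpy as np

def windows(signal, size, step):
    """Yield successive fixed-length windows over a 1-D magnitude signal."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def spectral_features(window, sample_rate):
    """Dominant frequency and low/high band energy ratio of one window."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    dominant = freqs[spectrum.argmax()]
    low = spectrum[freqs < 5.0].sum()            # body-motion band (assumed)
    high = spectrum[freqs >= 5.0].sum() + 1e-9   # device-emission band (assumed)
    return dominant, low / high

sample_rate = 256.0
t = np.arange(0, 4, 1.0 / sample_rate)
walking = 2.0 * np.sin(2 * np.pi * 1.5 * t)           # slow arm swing
device = 6.0 * np.sin(2 * np.pi * 50.0 * t[:256])     # 50 Hz emission burst
magnitude = 50.0 + walking
magnitude[256:512] += device

for i, w in enumerate(windows(magnitude, size=256, step=256)):
    print(i, spectral_features(w, sample_rate))
```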


active media technology | 2005

A rule-based RFID tag system using ubiquitous chips

Tomoki Yoshihisa; Yasue Kishino; Tsutomu Terada; Masahiko Tsukamoto; Ryohei Sagara; Teruki Sukenari; Daigo Taguchi; Shojiro Nishio

Because of the recent development of radio frequency identification (RFID) technologies, various systems using RFID tags have been proposed. Since RFID tags have only a simple function, i.e., sending data, they can be used for various purposes. Accordingly, by customizing RFID tag systems, we can expand their applications. However, previous systems have usually been proposed for one special purpose only. In this paper, we propose a rule-based RFID tag system using ubiquitous chips. Ubiquitous chips, which we proposed previously, are rule-based I/O control devices. By applying rule-based principles, we can easily customize the RFID tag system and construct flexible, scalable, and easily extensible systems.


international conference on distributed computing systems workshops | 2003

Realizing a visual marker using LEDs for wearable computing environment

Yasue Kishino; Masahiko Tsukamoto; Yutaka Sakane; Shojiro Nishio

In order to utilize the real world via computers by means of camera images, as in augmented reality systems, precise coordinate information must be identified. Image-based approaches are promising for this purpose since they provide a direct mapping from the camera images to the coordinate system for virtual objects, and therefore no camera calibration is necessary. However, conventional image-based markers are typically stickers, so the information they deliver is limited and difficult to change dynamically. In this paper, we propose a location marking method, called VCC (visual computer communication), which uses more intelligent markers. In our method, a marker consisting of 16 LEDs (light-emitting diodes) keeps blinking to provide both coordinate information and attached information such as an address or a URL.
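
To illustrate the blinking-marker idea, the sketch below encodes a byte payload as a sequence of 16-bit on/off patterns for a 4x4 LED grid (one pattern per camera frame) and decodes it back from the observed frames. The framing, with no synchronisation, coordinate encoding, or error handling, is a simplifying assumption and not the VCC protocol itself.

```python
# Illustrative sketch of the blinking-marker idea: a 4x4 LED grid transmits a
# payload over time by switching LEDs per camera frame, and the receiver
# rebuilds the bits from the observed on/off states.

def encode_frames(payload: bytes) -> list[list[int]]:
    """Turn a byte payload into per-frame 16-bit LED patterns (4x4 grid)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    bits += [0] * (-len(bits) % 16)                 # pad to whole frames
    return [bits[i:i + 16] for i in range(0, len(bits), 16)]

def decode_frames(frames: list[list[int]]) -> bytes:
    """Rebuild the payload bytes from observed LED on/off states."""
    bits = [b for frame in frames for b in frame]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        out.append(sum(bit << j for j, bit in enumerate(bits[i:i + 8])))
    return bytes(out).rstrip(b"\x00")

frames = encode_frames(b"http://a.example/x")   # hypothetical attached URL
print(len(frames), "frames")
print(decode_frames(frames))
```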

Collaboration


Dive into Yasue Kishino's collaborations.

Top Co-Authors

Yutaka Yanagisawa
Nippon Telegraph and Telephone

Koji Kamei
Nippon Telegraph and Telephone

Takeshi Okadome
Kwansei Gakuin University

Futoshi Naya
Nippon Telegraph and Telephone