
Publication


Featured research published by Yuzuko Utsumi.


ubiquitous computing | 2013

My reading life: towards utilizing eyetracking on unmodified tablets and phones

Kai Kunze; Shoya Ishimaru; Yuzuko Utsumi; Koichi Kise

As reading is an integral part of our knowledge lives, we should know more about our reading activities. This paper introduces a reading application for smartphones and tablets that aims to give users more quantified information about their reading habits. We present our work towards building an open library for eye tracking on unmodified tablets and smartphones to support some of the application's advanced functionality. We have already implemented several eye tracking algorithms from previous work; unfortunately, none seem robust enough for our application. We give an overview of our challenges and potential solutions.


augmented human international conference | 2013

Who are you?: A wearable face recognition system to support human memory

Yuzuko Utsumi; Yuya Kato; Kai Kunze; Masakazu Iwamura; Koichi Kise

Have you ever been unable to remember the name of a person you meet again? To circumvent such an awkward situation, it would be helpful to have a system that discreetly tells you the person's name. In this paper, we propose a wearable real-time face recognition system to support human memory. The contributions of our work are summarized as follows: (1) We discuss the design and implementation details of a wearable system capable of augmenting human memory by vision-based real-time face recognition. (2) We propose a two-step, coarse-to-fine recognition approach to bring the execution time under the socially acceptable limit of 900 ms. (3) In experiments, we evaluate the computational time and recognition rate. The proposed system recognized a face in 238 ms with a cumulative recognition rate at rank 10 of 93.3%. The computational time with the coarse-to-fine search was 668 ms less than without it, and the results showed that the proposed system can recognize faces in real time.
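The two-step, coarse-to-fine idea can be sketched as follows. The gallery size, feature dimensions, and the use of a truncated feature vector as the cheap coarse descriptor are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: 1000 identities with 128-d "fine" features; the first
# 16 dimensions stand in for a cheap coarse descriptor.
gallery_fine = rng.normal(size=(1000, 128))
gallery_coarse = gallery_fine[:, :16]

def coarse_to_fine_match(query_fine, top_k=50):
    """Step 1: rank every identity by the cheap coarse distance.
    Step 2: re-rank only the top_k survivors with the full feature."""
    coarse_d = np.linalg.norm(gallery_coarse - query_fine[:16], axis=1)
    candidates = np.argsort(coarse_d)[:top_k]                  # coarse pruning
    fine_d = np.linalg.norm(gallery_fine[candidates] - query_fine, axis=1)
    return candidates[np.argsort(fine_d)]                      # fine re-ranking

query = gallery_fine[42] + rng.normal(scale=0.01, size=128)    # noisy view of id 42
ranking = coarse_to_fine_match(query)
```

Only `top_k` expensive comparisons are made per query, which is how a coarse-to-fine cascade trades a small recall risk for a large reduction in execution time.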


augmented human international conference | 2014

Haven't we met before?: a realistic memory assistance system to remind you of the person in front of you

Masakazu Iwamura; Kai Kunze; Yuya Kato; Yuzuko Utsumi; Koichi Kise

This paper presents a perceived-real-time system for memory augmentation. We propose a realistic approach to a memory assistance system, focusing on retrieving the person in front of you. The proposed system is capable of fully automatic indexing and scales with database size. We use face recognition to show the user previous encounters with the person they are currently looking at. The system works fast (under 200 ms, perceived as real time) on a modest database (45 videos of 24 people). We also provide evidence, via an online questionnaire, that our proposed memory augmentation system is useful and would be worn by most participants if it could be implemented in an unobtrusive way.


ubiquitous computing | 2014

Daily activity recognition combining gaze motion and visual features

Yuki Shiga; Takumi Toyama; Yuzuko Utsumi; Koichi Kise; Andreas Dengel

Recognition of user activities is a key issue in context-aware computing. We present a method for recognizing daily user activities using gaze motion features and image-based visual features. Gaze motion features dominate when inferring the user's egocentric context, whereas image-based visual features dominate when recognizing environments and target objects. The experimental results show that fusing these different types of features improves the performance of daily activity recognition.
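A minimal sketch of early fusion of the kind described above, assuming hypothetical 8-d gaze-motion and 32-d visual feature vectors and a nearest-centroid classifier in place of whatever classifier the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training windows for 3 activities: 8-d gaze-motion features
# (e.g. saccade statistics) and 32-d image-based visual features.
classes, n_per_class = 3, 20
labels = np.repeat(np.arange(classes), n_per_class)
gaze = rng.normal(loc=labels[:, None], scale=0.3, size=(60, 8))
visual = rng.normal(loc=labels[:, None], scale=0.3, size=(60, 32))

def fuse(g, v):
    """Early fusion: z-normalise each modality, then concatenate,
    so neither modality's scale dominates the other."""
    g = (g - g.mean(axis=0)) / (g.std(axis=0) + 1e-8)
    v = (v - v.mean(axis=0)) / (v.std(axis=0) + 1e-8)
    return np.hstack([g, v])

fused = fuse(gaze, visual)                      # shape (60, 40)
centroids = np.stack([fused[labels == c].mean(axis=0) for c in range(classes)])
pred = np.argmin(np.linalg.norm(fused[:, None] - centroids, axis=2), axis=1)
accuracy = float((pred == labels).mean())
```

Per-modality normalisation before concatenation is the standard precaution in early fusion; without it, the higher-dimensional or larger-magnitude modality would dominate the distance computation.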


IPSJ Transactions on Computer Vision and Applications | 2015

Individuality-preserving Silhouette Extraction for Gait Recognition

Yasushi Makihara; Takuya Tanoue; Daigo Muramatsu; Yasushi Yagi; Syunsuke Mori; Yuzuko Utsumi; Masakazu Iwamura; Koichi Kise

Most gait recognition approaches rely on silhouette-based representations due to their high recognition accuracy and computational efficiency, and a key problem for these approaches is how to accurately extract individuality-preserving silhouettes from real scenes, where foreground colors may be similar to background colors and the background is cluttered. We therefore propose a method of individuality-preserving silhouette extraction for gait recognition that uses standard gait models (SGMs), composed of clean silhouette sequences of a variety of training subjects, as a shape prior. We first match the multiple SGMs to a background subtraction sequence of a test subject by dynamic programming and select the training subject whose SGM fits the test sequence best. We then formulate our silhouette extraction problem in a well-established graph-cut segmentation framework while balancing the observed test sequence against the matched SGM. More specifically, we define an energy function to be minimized as the sum of three terms: (1) a data term derived from the observed test sequence, (2) a smoothness term derived from spatio-temporally adjacent edges, and (3) a shape-prior term derived from the matched SGM. We demonstrate through experiments with 56 subjects that the proposed method successfully extracts individuality-preserving silhouettes and improves gait recognition accuracy.
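The three-term energy can be illustrated on a toy 1-D labelling problem. The weights and costs below are made up, and a real system would minimize this energy globally with graph cuts rather than evaluate it directly:

```python
import numpy as np

# Toy 1-D "silhouette" labelling: x[i] in {0, 1} (background/foreground).
# The energy combines (1) a data term from the observed sequence, (2) a
# smoothness term on adjacent sites, and (3) a shape prior from the matched
# SGM; the weights lambda_s and lambda_p are illustrative.
def energy(x, data_cost, sgm_prior, lambda_s=1.0, lambda_p=0.5):
    data = data_cost[np.arange(len(x)), x].sum()    # (1) data term
    smooth = lambda_s * np.abs(np.diff(x)).sum()    # (2) smoothness term
    shape = lambda_p * np.abs(x - sgm_prior).sum()  # (3) SGM shape-prior term
    return data + smooth + shape

# Per-site cost of labelling background (column 0) vs foreground (column 1).
data_cost = np.array([[0.1, 0.9], [0.9, 0.1], [0.7, 0.3], [0.2, 0.8]])
sgm_prior = np.array([0, 1, 1, 0])  # matched SGM says foreground in the middle
labels = np.array([0, 1, 1, 0])
total = energy(labels, data_cost, sgm_prior)
```

Graph cuts can minimize exactly this kind of energy globally when the pairwise (smoothness) term is submodular, which is why the formulation fits the graph-cut framework the paper uses.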


International Workshop on Face and Facial Expression Recognition from Real World Videos | 2014

Scalable Face Retrieval by Simple Classifiers and Voting Scheme

Yuzuko Utsumi; Yuji Sakano; Keisuke Maekawa; Masakazu Iwamura; Koichi Kise

In this paper, we propose a scalable face retrieval method for large data. Face recognition is one of the most effective ways to search for a particular person in videos on the Web. Needless to say, retrieving faces from videos is more challenging than from still images due to inconsistent imaging conditions such as changes in viewpoint, lighting, and resolution. However, dealing with these is not enough to realize retrieval on large data: a face recognition method for Web videos must also be highly scalable, as the number of videos on the Web is enormous, and existing face recognition methods do not scale. We therefore propose a novel face recognition method that remains scalable and accurate even on million-scale data. The proposed method uses local features for face representation and an approximate nearest neighbor (ANN) search for feature matching to reduce computational time. A voting scheme is used for recognition to compensate for the low accuracy of the ANN search. We built a 5-million-image face database and evaluated the proposed method: it ran more than a thousand times faster than a conventional sublinear method, and it recognized face images with a top-1000 cumulative accuracy of 100% in 139 ms of recognition time per query (excluding preprocessing and feature extraction for the query image).
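The local-feature matching plus voting idea can be sketched as follows. Brute-force nearest-neighbor search stands in for the ANN index, and the database dimensions are illustrative:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Hypothetical database: 50 identities, 10 local features (32-d) per face.
# A real system would index db_feats with an ANN structure; brute-force
# search stands in for it here.
n_ids, feats_per_face, dim = 50, 10, 32
db_feats = rng.normal(size=(n_ids * feats_per_face, dim))
db_labels = np.repeat(np.arange(n_ids), feats_per_face)

def retrieve(query_feats):
    """Match each local query feature to its nearest database feature and
    let the matches vote for an identity. Voting absorbs the occasional
    wrong match that an approximate search returns."""
    votes = Counter()
    for f in query_feats:
        nn = int(np.argmin(np.linalg.norm(db_feats - f, axis=1)))
        votes[int(db_labels[nn])] += 1
    return votes.most_common(1)[0][0]

query = db_feats[db_labels == 7] + rng.normal(scale=0.05, size=(feats_per_face, dim))
match = retrieve(query)
```

The key design point is that no single feature match has to be correct; as long as a plurality of the local matches hit the right identity, the vote recovers it, which is what lets the system tolerate a fast but approximate index.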


asian conference on pattern recognition | 2013

Where Are You Looking At? - Feature-Based Eye Tracking on Unmodified Tablets

Shoya Ishimaru; Kai Kunze; Yuzuko Utsumi; Masakazu Iwamura; Koichi Kise

This paper introduces our work towards implementing eye tracking on commodity devices. We describe our feature-based approach and an eye tracking system working on a commodity tablet. As a reference, we recorded data from 5 subjects following an animation on screen. Under the assumption that the positions of the device and the user's head are stable, the average distance error between the estimated and actual gaze points is around 12.23 mm using user-dependent training.
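A common feature-based baseline for this setting is a per-user polynomial regression from eye features to screen coordinates, which matches the user-dependent training mentioned above. The 2nd-order mapping and synthetic calibration data below are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(3)

def poly_feats(e):
    """2nd-order polynomial expansion of 2-d eye features."""
    x, y = e[:, 0], e[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)

# Synthetic calibration session: eye features (e.g. iris centre relative to
# the eye corners) and the on-screen targets the user was told to follow.
true_A = rng.normal(size=(2, 6))               # hidden feature-to-screen map
eye = rng.uniform(-1.0, 1.0, size=(30, 2))     # 30 calibration samples
screen = poly_feats(eye) @ true_A.T            # known target positions

# User-dependent training: fit the mapping by least squares.
A, *_ = np.linalg.lstsq(poly_feats(eye), screen, rcond=None)
pred = poly_feats(eye) @ A
err = float(np.linalg.norm(pred - screen, axis=1).mean())  # mean gaze error
```

The stable-head assumption in the abstract matters here: head or device motion changes the feature-to-screen mapping, which is why the calibration is per user and per session.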


international conference on robotics and automation | 2012

Cognitive active vision for human identification

Yuzuko Utsumi; Eric Sommerlade; Nicola Bellotto; Ian D. Reid

We describe an integrated, real-time multi-camera surveillance system that is able to find and track individuals, acquire and archive facial image sequences, and perform face recognition. The system is based around an inference engine that can extract high-level information from an observed scene and generate appropriate commands for a set of pan-tilt-zoom (PTZ) cameras. The incorporation of reliable facial recognition into the high-level feedback is a main novelty of our work, showing how high-level understanding of a scene can be used to deploy PTZ sensing resources effectively. The system comprises a distributed camera system using SQL tables as virtual communication channels; Situation Graph Trees for knowledge representation, inference, and high-level camera control; and a variety of visual processing algorithms, including on-line acquisition of facial images and on-line recognition of faces by comparing image sets using subspace distance. We provide an extensive evaluation of this method, using our system both to acquire training data and for later recognition. A set of experiments in a surveillance scenario shows the effectiveness of our approach and its potential for real applications of cognitive vision.
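Comparing image sets by subspace distance can be sketched via principal angles between the sets' PCA bases. The dimensions and data below are synthetic, and the paper's exact subspace distance may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

def subspace_basis(images, k=3):
    """Orthonormal basis of the top-k principal directions of an image set."""
    X = images - images.mean(axis=0)
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    return U[:, :k]

def subspace_distance(B1, B2):
    """sqrt of the sum of squared sines of the principal angles between
    the two subspaces; the singular values of B1^T B2 are the cosines."""
    cosines = np.clip(np.linalg.svd(B1.T @ B2, compute_uv=False), -1.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - cosines ** 2)))

# Two image sets of the same "face" share a low-dimensional factor basis;
# a third set is unrelated. 20-d vectors are stand-ins for pixel images.
base = rng.normal(size=(3, 20))
set_a = rng.normal(size=(15, 3)) @ base + 0.01 * rng.normal(size=(15, 20))
set_b = rng.normal(size=(15, 3)) @ base + 0.01 * rng.normal(size=(15, 20))
set_c = rng.normal(size=(15, 20))

d_same = subspace_distance(subspace_basis(set_a), subspace_basis(set_b))
d_diff = subspace_distance(subspace_basis(set_a), subspace_basis(set_c))
```

Comparing whole image sets rather than single images is what makes this suited to surveillance footage, where each track yields many varied face crops of the same person.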


international conference on signal and image processing applications | 2009

An efficient branch and bound method for face recognition

Yuzuko Utsumi; Yuta Matsumoto; Yoshio Iwai

Recently, researchers have proposed many face recognition methods with the aim of improving the accuracy of face recognition. However, few face recognition methods focus on computational cost. To reduce the computational cost of face recognition, we propose an efficient face recognition method using Haar wavelet features and a branch and bound method. Our proposed method extracts Haar wavelet features from a normalized face image and recognizes the face with classifiers learned by the AdaBoost M1 algorithm. To increase the efficiency of the recognition process, we select features according to classification accuracy and apply a branch and bound method to the recognition tree into which the classifiers for each individual in the face database are merged. Experimental results show that our proposed method reduces the number of classifiers evaluated in the recognition tree by 72.1% and achieves an overall reduction in computational cost.
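The branch-and-bound pruning idea can be illustrated on a tree of face templates, where internal nodes store a centroid and radius so whole branches are skipped when their best possible distance cannot beat the current best. This ball-tree-style sketch shows the pruning mechanism only; it is not the paper's AdaBoost classifier tree:

```python
import numpy as np

rng = np.random.default_rng(5)
templates = rng.normal(size=(64, 16))  # one hypothetical face template per identity

def build(idx):
    """Recursively group templates; each node keeps a centroid and radius."""
    pts = templates[idx]
    centroid = pts.mean(axis=0)
    radius = np.linalg.norm(pts - centroid, axis=1).max()
    if len(idx) == 1:
        return {"centroid": centroid, "radius": radius, "leaf": int(idx[0])}
    order = idx[np.argsort(pts[:, 0])]       # crude split on the first feature
    half = len(idx) // 2
    return {"centroid": centroid, "radius": radius,
            "children": [build(order[:half]), build(order[half:])]}

def search(node, q, best, stats):
    """Branch and bound: skip a node if even its optimistic bound loses."""
    bound = np.linalg.norm(q - node["centroid"]) - node["radius"]
    if bound >= best[0]:
        return best                           # prune this whole branch
    if "leaf" in node:
        stats[0] += 1                         # count leaves actually evaluated
        d = np.linalg.norm(q - node["centroid"])
        return (d, node["leaf"]) if d < best[0] else best
    for child in node["children"]:
        best = search(child, q, best, stats)
    return best

tree = build(np.arange(64))
q = templates[30] + 0.01 * rng.normal(size=16)   # noisy view of identity 30
stats = [0]
dist, identity = search(tree, q, (np.inf, None), stats)
```

The bound is valid by the triangle inequality, so pruning never discards the true nearest identity; `stats[0]` records how many of the 64 leaves were actually scored, which is the quantity the paper's 72.1% reduction refers to in its own tree.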


HBU'10 Proceedings of the First international conference on Human behavior understanding | 2010

Face tracking and recognition considering the camera's field of view

Yuzuko Utsumi; Yoshio Iwai; Hiroshi Ishiguro

We propose a method that tracks and recognizes faces simultaneously. In previous methods, features needed to be extracted twice for tracking and recognizing faces in image sequences, because the features used for face recognition differ from those used for face tracking. To reduce the computational cost, we propose a probabilistic model for face tracking and recognition, and a system that performs both simultaneously using the same features. The probabilistic model handles any overlap in the cameras' fields of view, something that is ignored in previous methods, and thus deals with face tracking and recognition across multiple overlapping image sequences. Experimental results show that the proposed method can track and recognize multiple faces simultaneously.

Collaboration

Top co-authors of Yuzuko Utsumi:

- Koichi Kise (Osaka Prefecture University)
- Masakazu Iwamura (Osaka Prefecture University)
- Shoya Ishimaru (Osaka Prefecture University)
- Misa Katte (Osaka Prefecture University)
- Yuki Shiga (Osaka Prefecture University)
- Yuya Kato (Osaka Prefecture University)