Tatsuyuki Kawamura
Nara Institute of Science and Technology
Publication
Featured research published by Tatsuyuki Kawamura.
international symposium on wearable computers | 2002
Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
In this paper, we discuss wearable interfaces for a computational memory aid useful in everyday life. The aim of this study is to develop a video diary system with vision interfaces to aid memory retrieval. The video diary system provides users with memory retrieval, exchange, and transportation through four types of indexes: (1) the user's location, (2) real-world object(s), (3) keyword(s), and (4) a summary or the story of the day. The authors have developed the following two systems to realize these indexes: (1) a Residual Memory system and (2) a Ubiquitous Memories system. Residual Memory automatically indexes the user's location for memory retrieval by analyzing video recorded from a wearable camera. Ubiquitous Memories provides users with the ability to associate augmented memories with real-world objects for memory exchange. We have integrated the above two systems for memory transportation. We believe that these interfaces can be integrated into the video diary system.
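As a rough illustration of the four index types described in this abstract, the sketch below shows one way such a video diary could be organized; all class and field names are hypothetical and are not taken from the paper.

```python
# A minimal sketch of the four retrieval indexes described above.
# The data model and method names are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class MemorySegment:
    """One recorded span of the wearable video diary."""
    video_path: str
    start_sec: float
    end_sec: float
    location: str = ""                              # index (1): the user's location
    objects: list = field(default_factory=list)     # index (2): real-world objects
    keywords: list = field(default_factory=list)    # index (3): keywords
    day_summary: str = ""                           # index (4): summary/story of the day


class VideoDiary:
    def __init__(self):
        self.segments = []

    def add(self, segment: MemorySegment):
        self.segments.append(segment)

    def by_location(self, location: str):
        return [s for s in self.segments if s.location == location]

    def by_object(self, name: str):
        return [s for s in self.segments if name in s.objects]

    def by_keyword(self, word: str):
        return [s for s in self.segments if word in s.keywords]

    def by_day_summary(self, phrase: str):
        return [s for s in self.segments if phrase in s.day_summary]
```

For example, `diary.by_object("keys")` would return every segment registered as showing the keys, mirroring the object-based index in the abstract.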
human-computer interaction with mobile devices and services | 2003
Takahiro Ueoka; Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
In this paper we propose a wearable vision interface system named “I’m Here!” to support a user's remembrance of object locations in everyday life. The system enables the user to retrieve information from a video database that records the latest scenes of target objects held by the user, observed from the user's viewpoint. We propose an object recognition method that associates the video database with the names of the objects observed in the video. Offline experiments demonstrate that the system recognizes the objects well enough to be useful.
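The core lookup such a system implies, keeping only the latest scene per object, could look roughly like the sketch below; the `Clip` structure and method names are hypothetical, and the object recognizer is assumed to exist elsewhere.

```python
# Sketch of a "latest scene per object" store. Whenever the recognizer
# reports a held object, the most recent clip for that object is replaced.
from collections import namedtuple

Clip = namedtuple("Clip", ["video_path", "frame_index", "timestamp"])


class ObjectLocationMemory:
    def __init__(self):
        # object name -> most recent Clip showing that object in the wearer's hands
        self._latest = {}

    def observe(self, object_name: str, clip: Clip):
        """Record that the object was just seen being held by the wearer."""
        self._latest[object_name] = clip

    def where_is(self, object_name: str):
        """Return the latest recorded scene of the object, or None if never seen."""
        return self._latest.get(object_name)
```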
pacific rim conference on multimedia | 2001
Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
Our system supports a user's location-based recollection of past events through direct input, namely continuously captured (always-gazing) video data, which allows the user to recall an event simply by looking at a viewpoint, and it provides stable online, real-time video retrieval. We propose three functional methods: image retrieval with motion information, video scene segmentation, and real-time video retrieval. Our experimental results show that these functions are effective enough for wearable information playing.
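One plausible reading of the video scene segmentation step is a simple inter-frame difference test, sketched below; the grayscale difference measure and the threshold value are assumptions for illustration, not the paper's actual method.

```python
# Rough sketch of scene segmentation by inter-frame difference.
import numpy as np


def segment_scenes(frames, threshold=30.0):
    """Split a list of grayscale frames (2-D numpy arrays) into scenes.

    A new scene starts whenever the mean absolute pixel difference between
    consecutive frames exceeds `threshold`. Returns (start, end) index pairs,
    with `end` exclusive.
    """
    if not frames:
        return []
    boundaries = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    return [(boundaries[j], boundaries[j + 1]) for j in range(len(boundaries) - 1)]
```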
international symposium on wearable computers | 2003
Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
In this paper, we propose a feasible wearable system named Nice2CU to manage a person's augmented memory. Managing such memories requires both an “easy registration” method and an “automatic update” method. We have designed a prototype system using a “Card and Mirror” interface.
robot and human interactive communication | 2004
Satoshi Murata; Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
This work proposes a new wearable system for the Ubiquitous Memories (UM) environment we have already developed, in which users can enclose their experiences in, and disclose them from, related objects. We have designed an intelligent wearable system to support on-demand experience segmentation, so that an experience can be enclosed in an object through the following two processes: 1) continuously recording the user's viewpoint images, and 2) detecting the starting point of an experience the user wants to record just when that experience has ended. We describe the design concepts of the proposed system and an experiment to confirm its ability to detect the starting point of a given experience.
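The retroactive segmentation described above could be sketched roughly as follows: frames are buffered continuously, and when the wearer signals that an experience has just ended, the system searches backwards for a likely starting point. The frame-difference detector used here is an illustrative assumption rather than the system's actual method.

```python
# Sketch of retroactive experience segmentation over a rolling frame buffer.
from collections import deque
import numpy as np


class ExperienceRecorder:
    def __init__(self, max_frames=3000, change_threshold=30.0):
        self.buffer = deque(maxlen=max_frames)   # continuously recorded viewpoint frames
        self.change_threshold = change_threshold

    def record(self, frame):
        """Append one grayscale frame (2-D numpy array) to the rolling buffer."""
        self.buffer.append(frame)

    def close_experience(self):
        """Called when the user marks the end of an experience.

        Walk backwards until a large frame-to-frame change is found, treat it
        as the starting point, and return the enclosed frames.
        """
        frames = list(self.buffer)
        start = 0
        for i in range(len(frames) - 1, 0, -1):
            diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
            if diff > self.change_threshold:
                start = i
                break
        return frames[start:]
```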
international conference on multimedia and expo | 2004
Takahiro Ueoka; Tatsuyuki Kawamura; Yasuyuki Kono; Masatsugu Kidode
People tend to forget where they placed an object needed to accomplish a certain task in their everyday circumstances. To support a user's object-finding tasks, we have proposed a wearable interface system named “I’m Here!”. The system manages Augmented Memories, a video database of the user's viewpoint labeled with previously registered objects, to display the last video of the target object held by the user. We evaluate the system's function in laboratory experiments and discuss its positive and negative effects based on the experimental results.
Archive | 2003
Yasuyuki Kono; Masatsugu Kidode; Takahiro Ueoka; Tatsuyuki Kawamura
Archive | 2003
Yasuyuki Kono; Tatsuyuki Kawamura; Takahiro Ueoka; Satoshi Murata
Journal of Machine Vision and Applications | 2002
Tatsuyuki Kawamura; Norimichi Ukita; Yasuyuki Kono; Masatsugu Kidode
Archive | 2005
Tatsuyuki Kawamura; Takahiro Ueoka; Yasuyuki Kono; Masatsugu Kidode