Publications


Featured research published by Takuichi Nishimura.


International World Wide Web Conference | 2006

POLYPHONET: an advanced social network extraction system from the web

Yutaka Matsuo; Junichiro Mori; Masahiro Hamasaki; Keisuke Ishida; Takuichi Nishimura; Hideaki Takeda; Kôiti Hasida; Mitsuru Ishizuka

Social networks play important roles in the Semantic Web: knowledge management, information retrieval, ubiquitous computing, and so on. We propose a social network extraction system called POLYPHONET, which employs several advanced techniques to extract relations of persons, detect groups of persons, and obtain keywords for a person. Search engines, especially Google, are used to measure co-occurrence of information and obtain Web documents. Several studies have used search engines to extract social networks from the Web, but our research advances the following points: First, we reduce the related methods into simple pseudocodes using Google so that we can build up integrated systems. Second, we develop several new algorithms for social network mining, such as those to classify relations into categories, to make extraction scalable, and to obtain and utilize person-to-word relations. Third, every module is implemented in POLYPHONET, which has been used at four academic conferences, each with more than 500 participants. We give an overview of that system. Finally, a novel architecture called Super Social Network Mining is proposed; it utilizes simple modules using Google and is characterized by scalability and Relate-Identify processes: identification of each entity and extraction of relations are repeated to obtain a more precise social network.
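As a rough illustration of the co-occurrence idea, the sketch below scores the tie between two people from search engine hit counts. It is a minimal, hypothetical sketch rather than the POLYPHONET implementation: hit_count is a stand-in for a real search engine API, and the overlap coefficient and the 0.4 threshold are illustrative choices.

```python
# Minimal sketch of co-occurrence-based relation extraction, in the spirit of
# POLYPHONET; hit_count() is a hypothetical stand-in for a search engine API.
from itertools import combinations

def hit_count(query: str) -> int:
    """Return the number of Web documents matching the query (stub)."""
    raise NotImplementedError("plug in a real search engine API here")

def relation_strength(a: str, b: str) -> float:
    """Overlap coefficient between two names, based on hit counts."""
    n_a, n_b = hit_count(f'"{a}"'), hit_count(f'"{b}"')
    n_ab = hit_count(f'"{a}" "{b}"')          # co-occurrence query
    if min(n_a, n_b) == 0:
        return 0.0
    return n_ab / min(n_a, n_b)

def extract_network(people: list[str], threshold: float = 0.4):
    """Keep an edge between every pair whose co-occurrence score passes the threshold."""
    return [(a, b, s) for a, b in combinations(people, 2)
            if (s := relation_strength(a, b)) >= threshold]
```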


Journal of Web Semantics | 2007

POLYPHONET: An advanced social network extraction system from the Web

Yutaka Matsuo; Junichiro Mori; Masahiro Hamasaki; Takuichi Nishimura; Hideaki Takeda; Kôiti Hasida; Mitsuru Ishizuka

Social networks play important roles in the Semantic Web: knowledge management, information retrieval, ubiquitous computing, and so on. We propose a social network extraction system called POLYPHONET, which employs several advanced techniques to extract relations of persons, to detect groups of persons, and to obtain keywords for a person. Search engines, especially Google, are used to measure co-occurrence of information and obtain Web documents. Several studies have used search engines to extract social networks from the Web, but our research advances the following points: First, we reduce the related methods into simple pseudocodes using Google so that we can build up integrated systems. Second, we develop several new algorithms for social network mining, such as those to classify relations into categories, to make extraction scalable, and to obtain and utilize person-to-word relations. Third, every module is implemented in POLYPHONET, which has been used at four academic conferences, each with more than 500 participants. We give an overview of that system. Finally, a novel architecture called Iterative Social Network Mining is proposed. It utilizes simple modules using Google and is characterized by scalability and Relate-Identify processes: identification of each entity and extraction of relations are repeated to obtain a more precise social network.
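The Relate-Identify architecture can be pictured as a loop that alternates the two steps. The sketch below is only a structural interpretation of the abstract: identify and extract_relations are hypothetical placeholders, and the round limit and convergence check are assumptions.

```python
# Structural sketch of the Relate-Identify iteration (placeholders, not the paper's code).
def mine_social_network(names, identify, extract_relations, max_rounds=5):
    """Alternate entity identification and relation extraction to refine the network."""
    entities = list(names)
    network = []
    for _ in range(max_rounds):
        # Identify: merge name variants / disambiguate entities using the current network.
        entities = identify(entities, network)
        # Relate: re-extract relations between the (possibly merged) entities.
        new_network = extract_relations(entities)
        if new_network == network:          # no change: stop iterating
            break
        network = new_network
    return entities, network
```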


International Conference on Automatic Face and Gesture Recognition | 1996

Spotting recognition of human gestures from time-varying images

Takuichi Nishimura; Ryuichi Oka

We study spotting recognition of human gestures from time-varying images. We propose a feature extraction method and a spotting method. The feature extraction reduces each frame of motion into a small frame image, such as 3×3 pixels. We show that the 3×3 feature is the best at providing robustness to changes of clothing and background. The spotting recognition rate was about 80% for 8 gesture categories. A new spotting method called non-monotonic continuous DP is proposed for spotting gestures and their variations such as reverse, partial and stop motions. We show the effectiveness of the new spotting method.
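To make the feature step concrete, the toy sketch below reduces each motion frame to a 3×3 feature image by block-averaging the absolute frame difference. Treating the 3×3 feature as block-averaged motion energy is an assumption made for illustration, not the paper's exact definition.

```python
import numpy as np

def motion_feature(prev_frame: np.ndarray, frame: np.ndarray, size: int = 3) -> np.ndarray:
    """Reduce a grayscale frame pair to a size x size motion-energy image.

    Illustrative sketch: the absolute frame difference is averaged over a
    size x size grid of blocks, giving a tiny feature image (e.g. 3 x 3).
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    hs, ws = h // size, w // size
    feature = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            feature[i, j] = diff[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].mean()
    return feature / (feature.max() + 1e-8)    # normalize to reduce sensitivity to lighting
```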


Human Factors in Computing Systems | 2009

Familial collaborations in a museum

Tom Hope; Yoshiyuki Nakamura; Toru Takahashi; Atsushi Nobayashi; Shota Fukuoka; Masahiro Hamasaki; Takuichi Nishimura

Studies of interactive systems in museums have raised important design considerations, but have so far failed to sufficiently address the particularities of family interaction and co-operation. This paper introduces qualitative video-based observations of Japanese families using an interactive portable guide system in a museum. The results show how unexpected usage can occur through particularities of interaction between family members. The paper highlights the need to consider familial relationships more fully in HCI.


Java/Jini Technologies and High-Performance Pervasive Computing | 2002

Compact battery-less information terminal (CoBIT) for location-based support systems

Takuichi Nishimura; Hideo Itoh; Yoshinobu Yamamoto; Hideyuki Nakashima

The goal of a ubiquitous computing environment is to help users obtain necessary information and services in a situation-dependent form. We therefore propose a location-based information support system using the Compact Battery-less Information Terminal (CoBIT). A CoBIT can communicate with the environmental system and with the user using only the energy supplied by the environment. It has a solar cell that receives modulated light from an environmental optical beam transmitter. The current from the solar cell is fed directly (or through a passive circuit) into an earphone, which generates sound for the user. The current can also be used to produce vibration, an LED signal, or electrical stimulation of the skin. A CoBIT is about 2 cm in diameter and 3 cm in length, and can conveniently be hung on the ear. Its cost would be only about 1 dollar if mass-produced. The CoBIT also has a sheet-type corner reflector, which reflects an optical beam back in the direction of the light source. Therefore the environmental system can easily detect the terminal's position and direction, as well as simple signs from the user, using multiple cameras with infrared LEDs. The system identifies a sign by the modulation pattern of the reflected light, which the user creates by occluding the reflector with a hand. The environmental system also recognizes other objects using other sensors and displays video information on a nearby monitor in order to realize situated support.
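As an illustration of the download path (audio carried on a modulated light beam and turned directly into earphone current), the toy simulation below amplitude-modulates a test tone onto a light intensity signal and recovers it. All carrier, responsivity, and scaling values are arbitrary illustrative assumptions, not CoBIT's actual parameters.

```python
import numpy as np

# Toy simulation of the CoBIT download path: audio is amplitude-modulated onto the
# intensity of an optical beam; the solar cell current follows that intensity and,
# fed into an earphone, reproduces the audio. All parameters are illustrative.
fs = 40_000                                    # simulation sample rate [Hz]
t = np.arange(0, 0.05, 1 / fs)                 # 50 ms of signal
audio = 0.5 * np.sin(2 * np.pi * 440 * t)      # 440 Hz test tone, |audio| <= 0.5

# Light intensity must stay non-negative: bias the beam and modulate around the bias.
light_intensity = 1.0 + audio                  # transmitter output (arbitrary units)

# The solar cell current is roughly proportional to the received light intensity.
responsivity = 0.8                             # assumed cell response (arbitrary units)
cell_current = responsivity * light_intensity

# Removing the DC bias leaves the audio waveform that drives the earphone.
recovered_audio = cell_current - cell_current.mean()
print(np.corrcoef(audio, recovered_audio)[0, 1])   # ~1.0: the tone is preserved
```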


Ubiquitous Computing | 2006

Doing community: co-construction of meaning and use with interactive information kiosks

Tom Hope; Masahiro Hamasaki; Yutaka Matsuo; Yoshiyuki Nakamura; Noriyuki Fujimura; Takuichi Nishimura

One of the challenges for ubiquitous computing is to design systems that can be both understood by their users and at the same time understand the users themselves. As information and its meaning become more associated with the communities that provide and use it, how will it be possible to build effective systems for these users? We have been examining these issues via ethnographic analysis of the information- and community-supporting system that we have developed and employed at conference events. This paper presents an initial analysis and suggests a greater focus on the interaction between members of micro-communities of users in future ubicomp research.


Intelligent Robots and Systems | 1997

Spotting recognition of gestures performed by people from a single time-varying image

Takuichi Nishimura; Toshiro Mukai; Ryuichi Oka

Our purpose is to recognize human gestures from motion images without using contact-type sensors such as data gloves or markers on the hands. This paper proposes a method that extracts good features from a small-sized image of a person and demonstrates its effectiveness using an omni-directional camera that can capture the gestures of more than one person.


New Generation Computing | 2000

A method of model improvement for spotting recognition of gestures using an image sequence

Takuichi Nishimura; Hiroaki Yabe; Ryuichi Oka

We have developed a real-time gesture recognition system whose models can be taught with only one instruction. The system can therefore adapt to a new gesture performer quickly, but it cannot raise its recognition rate even if we teach gestures many times, because it does not utilize all of the teaching data. To cope with this problem, averages of the teaching data are calculated. First, the best frame correspondence between the teaching data and the model is obtained by Continuous DP. Next, the averages and variations are calculated for each frame of the model. We show the effectiveness of our method in experiments.
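The model-improvement step can be illustrated with a small alignment-and-averaging sketch. The code below uses an ordinary DTW alignment in place of Continuous DP and computes only per-frame averages (not the per-frame variations mentioned above); it is an illustrative interpretation, not the system's code.

```python
import numpy as np

def dtw_path(model: np.ndarray, teach: np.ndarray):
    """Return the (model_frame, teach_frame) alignment path between two sequences."""
    m, n = len(model), len(teach)
    cost = np.full((m + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = np.linalg.norm(model[i - 1] - teach[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the cheapest path from the end of both sequences.
    path, i, j = [], m, n
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def update_model(model: np.ndarray, teachings: list) -> np.ndarray:
    """Average all teaching sequences frame-by-frame against the current model."""
    sums = model.astype(float)
    counts = np.ones(len(model))
    for teach in teachings:
        for mi, ti in dtw_path(model, teach):
            sums[mi] += teach[ti]
            counts[mi] += 1
    return sums / counts[:, None]
```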


International Conference on Pervasive Computing | 2004

A Compact Battery-Less Information Terminal for Real World Interaction

Takuichi Nishimura; Hideo Itoh; Yoshiyuki Nakamura; Yoshinobu Yamamoto; Hideyuki Nakashima

A ubiquitous computing environment is intended to support users in their search for necessary information and services in a situation-dependent form. This paper proposes a location-based information support system using a Compact Battery-less Information Terminal (CoBIT) to support users interactively. A CoBIT can communicate with the environmental system and with the user using only energy supplied by the environmental system and the user. The environmental system has functions to detect the terminal's position and direction to realize situational support. This paper presents detailed characteristics of information download and upload using the CoBIT system. It also describes various types of CoBITs and their usage in museums and event shows.


Asian Conference on Computer Vision | 1998

Non-monotonic Continuous Dynamic Programming for Spotting Recognition of Hesitated Gestures from Time-Varying Images

Takuichi Nishimura; Toshiharu Mukai; Ryuichi Oka

Continuous Dynamic Programming (CDP) has been proposed to recognize the meanings of human gestures from motion images, and it has been extended to Non-monotonic CDP in order to recognize hesitated gestures. In this paper, we examine the characteristics of Non-monotonic CDP in detail.
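A simplified spotting matcher may help make CDP concrete. The sketch below is a stripped-down continuous matching scheme: it lets a template match begin at any stream frame and reports frames where the normalized accumulated distance falls below a threshold. It omits the original CDP slope weights and the non-monotonic extension, and the threshold is an arbitrary parameter.

```python
import numpy as np

def cdp_spot(stream: np.ndarray, template: np.ndarray, threshold: float):
    """Spot the template (T x D frames) in an unsegmented stream (N x D frames).

    Simplified continuous-DP sketch: acc[t, tau] is the best accumulated distance
    for matching template frames 0..tau with a match ending at stream frame t,
    where the start frame is left free so a match can begin at any time.
    """
    n, t_len = len(stream), len(template)
    acc = np.full((n, t_len), np.inf)
    for t in range(n):
        d = np.linalg.norm(stream[t] - template, axis=1)   # distances to all template frames
        acc[t, 0] = d[0]                                   # a match may start at any stream frame
        for tau in range(1, t_len):
            best_prev = acc[t, tau - 1]                    # advance template only
            if t > 0:
                best_prev = min(best_prev, acc[t - 1, tau - 1], acc[t - 1, tau])
            acc[t, tau] = d[tau] + best_prev
    scores = acc[:, -1] / t_len                            # normalized accumulated distance
    hits = [t for t in range(n) if scores[t] < threshold]  # frames where a gesture ends
    return hits, scores
```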

Collaboration


Dive into Takuichi Nishimura's collaborations.

Top Co-Authors

Masahiro Hamasaki
National Institute of Advanced Industrial Science and Technology

Yoshiyuki Nakamura
National Institute of Advanced Industrial Science and Technology

Ryuichi Oka
National Institute of Advanced Industrial Science and Technology

Kentaro Watanabe
National Institute of Advanced Industrial Science and Technology

Hideaki Takeda
National Institute of Informatics

Hiroyasu Miwa
National Institute of Advanced Industrial Science and Technology

Ken Fukuda
National Institute of Advanced Industrial Science and Technology

Hideo Itoh
National Institute of Advanced Industrial Science and Technology