Publication


Featured research published by Toyoaki Nishida.


Archive | 2007

New Frontiers in Artificial Intelligence

Takao Terano; Yukio Ohsawa; Toyoaki Nishida; Akira Namatame; Syusaku Tsumoto; Takashi Washio

Neg-Raising (NR) verbs form a class of verbs with a clausal complement that show the following behavior: when a negation syntactically attaches to the matrix predicate, it can semantically attach to the embedded predicate. This paper presents an account of NR predicates within Tree Adjoining Grammar (TAG). We propose a lexical semantic interpretation that heavily relies on a Montague-like semantics for TAG and on higher-order types.


Intelligent Robots and Systems | 2009

Unsupervised simultaneous learning of gestures, actions and their associations for Human-Robot Interaction

Yasser F. O. Mohammad; Toyoaki Nishida; Shogo Okada

Human-Robot Interaction using free hand gestures is gaining importance as more untrained humans operate robots in home and office environments. To be operated by free hand gestures, the robot needs to solve three problems: gesture (command) detection, action generation (related to the domain of the task), and association between gestures and actions.


New Generation Computing | 2009

Constrained Motif Discovery in Time Series

Yasser F. O. Mohammad; Toyoaki Nishida

The goal of motif discovery algorithms is to efficiently find unknown recurring patterns. In this paper, we focus on motif discovery in time series. Most available algorithms cannot utilize domain knowledge in any way, which results in quadratic or at least super-linear time and space complexity. We define the Constrained Motif Discovery problem, which enables the utilization of domain knowledge in the motif discovery process, and provide two algorithms, MCFull and MCInc, for solving it efficiently. We also show that most unconstrained motif discovery problems can be converted into constrained ones using a change-point detection algorithm. A novel change-point detection algorithm called the Robust Singular Spectrum Transform (RSST) is then introduced and compared to the traditional Singular Spectrum Transform using synthetic and real-world data sets. The results show that RSST achieves higher specificity and is more adequate for finding constraints that convert unconstrained motif discovery problems into constrained ones solvable by MCFull and MCInc. We then compare the combination of RSST with MCFull or MCInc against two state-of-the-art motif discovery algorithms on a large set of synthetic time series. The results show that the proposed algorithms provide a four- to ten-fold increase in speed over the unconstrained algorithms studied, without any loss of accuracy. RSST+MCFull is then used in a real-world human-robot interaction experiment to enable the robot to learn free hand gestures, actions, and their associations by watching humans and other robots interact.
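The core idea of constrained discovery — restricting candidate motif locations to points flagged by a change-point score — can be sketched as follows. This is a toy illustration, not the authors' MCFull/MCInc or RSST: a simple windowed mean-difference score stands in for RSST, and a brute-force pairwise comparison over the surviving candidates stands in for the discovery algorithms.

```python
import numpy as np

def change_scores(x, w=8):
    """Toy change-point score: mean difference between adjacent windows
    (a crude stand-in for RSST, which is far more robust)."""
    s = np.zeros(len(x))
    for t in range(w, len(x) - w):
        s[t] = abs(x[t - w:t].mean() - x[t:t + w].mean())
    m = s.max()
    return s / m if m > 0 else s

def constrained_motif(x, motif_len=16, top_k=10, w=8):
    """Find the closest pair of subsequences, but only among windows
    starting near high change-score points (the constraint)."""
    s = change_scores(x, w)
    cand = [int(c) for c in np.argsort(s)[-top_k:] if c + motif_len <= len(x)]
    best, pair = np.inf, None
    for i, a in enumerate(cand):
        for b in cand[i + 1:]:
            if abs(a - b) < motif_len:      # skip trivially overlapping matches
                continue
            d = np.linalg.norm(x[a:a + motif_len] - x[b:b + motif_len])
            if d < best:
                best, pair = d, (a, b)
    return pair, best
```

Because only `top_k` candidate locations are compared instead of every subsequence, the pairwise search is decoupled from the series length — which is the source of the speedup the paper reports.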


AI & Society | 2009

From observation to simulation: generating culture-specific behavior for interactive systems

Matthias Rehm; Yukiko I. Nakano; Elisabeth André; Toyoaki Nishida; Nikolaus Bee; Birgit Endrass; Michael Wissner; Afia Akhter Lipi; Hung-Hsuan Huang

In this article we present a parameterized model for generating multimodal behavior based on cultural heuristics. To this end, a multimodal corpus analysis of human interactions in two cultures serves as the empirical basis for the modeling endeavor. Integrating the results from this empirical study with a well-established theory of cultural dimensions, it becomes feasible to generate culture-specific multimodal behavior in embodied agents by giving evidence for the cultural background of the agent. Two sample applications are presented that make use of the model and are designed to be applied in the area of coaching intercultural communication.
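The parameterization idea can be illustrated with a small sketch. All dimension names, coefficients, and parameter names below are hypothetical, not the authors' actual model: the point is only that scores on cultural dimensions can be mapped to concrete nonverbal behavior parameters for an embodied agent.

```python
def behavior_params(culture):
    """Map 0..1 cultural-dimension scores (hypothetical encoding) to
    nonverbal behavior parameters for an embodied agent."""
    return {
        # more individualistic cultures: broader gestures (assumed mapping)
        "gesture_expansiveness": 0.3 + 0.7 * culture["individualism"],
        # higher power distance: fewer gestures (assumed mapping)
        "gesture_frequency": 0.2 + 0.8 * (1.0 - culture["power_distance"]),
        # high-contact cultures stand closer, in meters (assumed mapping)
        "interpersonal_distance_m": 1.2 - 0.6 * culture["contact"],
    }
```

Supplying different dimension scores as "evidence for the cultural background of the agent" then yields different animation and proxemics settings from the same behavior pipeline.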


International Conference on Knowledge Based and Intelligent Information and Engineering Systems | 1998

An acquisition of the relation between vision and action using self-organizing map and reinforcement learning

Kazunori Terada; Hideaki Takeda; Toyoaki Nishida

An agent must acquire an internal representation appropriate for its task, environment, and sensors. Reinforcement learning is often used to acquire the relation between sensory input and action. Learning agents in the real world that use visual sensors are often confronted with the critical problem of how to build a state space that is necessary and sufficient for executing the task. We propose acquiring the relation between vision and action using a visual state-action map (VSAM), an application of a self-organizing map (SOM). Input image data is mapped onto a node of the learned VSAM, and VSAM then outputs the appropriate action for that state. We applied VSAM to a real robot; the experimental results show that the robot avoids walls while moving around its environment.
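The SOM-based quantization step can be sketched as follows. This is a minimal illustration, not the authors' VSAM implementation: grid size, learning-rate schedule, and neighborhood kernel are all assumptions, and the map is 1-D for brevity.

```python
import numpy as np

def train_som(data, grid=5, iters=500, seed=0):
    """Fit a small 1-D self-organizing map to input vectors."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(grid, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))  # best-matching unit
        lr = 0.5 * (1 - t / iters)                        # decaying learning rate
        for j in range(grid):
            h = np.exp(-0.5 * (j - bmu) ** 2)             # neighborhood kernel
            w[j] += lr * h * (x - w[j])
    return w

def map_state(w, x):
    """Quantize a sensory input to its SOM node index (the discrete state)."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))
```

The node index returned by `map_state` can then serve as the discrete state in a tabular reinforcement-learning update such as Q-learning, so the agent learns which action to output for each visual state — the role VSAM plays in the paper.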


Intelligent Virtual Agents | 2008

Culture-Specific First Meeting Encounters between Virtual Agents

Matthias Rehm; Yukiko I. Nakano; Elisabeth André; Toyoaki Nishida

We present our concept of integrating culture as a computational parameter for modeling multimodal interactions with virtual agents. Because culture is a social rather than a psychological notion, its influence becomes evident in interactions where cultural patterns of behavior and interpretation mismatch. Taken seriously, however, culture penetrates most layers of agent behavior planning and generation. In this article we concentrate on a first-meeting scenario, present our model of an interactive agent system, and identify where cultural parameters play a role. To assess the viability of our approach, we outline an evaluation study that is currently being set up.


International Conference on Computational Linguistics | 1992

Reconstructing spatial image from natural language texts

Atsushi Yamada; Tadashi Yamamoto; Hisashi Ikeda; Toyoaki Nishida; Shuji Doshita

This paper describes the process of understanding spatial descriptions in Japanese. In order to understand the described world, the authors reconstruct a geometric model of the global scene from scenic descriptions of a space. This is done by an experimental computer program, SPRINT, which takes natural language texts and produces a model of the described world. To reconstruct the model, the authors extract qualitative spatial constraints from the text and represent them as numerical constraints on the spatial attributes of the entities. This makes it possible to express the vagueness of spatial concepts and to derive the maximally plausible interpretation from a chunk of information accumulated as constraints. The interpretation reflects the temporary belief about the world.
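The qualitative-to-numerical constraint idea can be illustrated with a toy version (hypothetical; SPRINT's actual representation and solver are far richer): each qualitative relation becomes a soft penalty on object coordinates, and minimizing the total penalty yields a plausible layout even though each relation alone is vague.

```python
import numpy as np

def left_of(a, b, margin=1.0):
    """Penalty that is zero when object a lies at least `margin` left of b."""
    return lambda pos: max(0.0, pos[a] - pos[b] + margin) ** 2

def solve_layout(n_objects, constraints, steps=2000, lr=0.05, eps=1e-4):
    """Minimize the summed penalties by simple numeric gradient descent."""
    pos = np.zeros(n_objects)
    for _ in range(steps):
        grad = np.zeros(n_objects)
        for c in constraints:
            for i in range(n_objects):
                e = np.zeros(n_objects)
                e[i] = eps
                grad[i] += (c(pos + e) - c(pos - e)) / (2 * eps)
        pos -= lr * grad
    return pos
```

For example, "the chair is left of the desk; the desk is left of the window" becomes `solve_layout(3, [left_of(0, 1), left_of(1, 2)])`, which returns x-coordinates satisfying both relations — the "maximally plausible interpretation" of this tiny scene.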


Intelligent Robots and Systems | 2010

Learning interaction protocols using Augmented Bayesian Networks applied to guided navigation

Yasser F. O. Mohammad; Toyoaki Nishida

Research in robot navigation usually concentrates on navigation algorithms that allow the robot to navigate without human aid. In many real-world situations, however, it is desirable for the robot to understand natural gestures from its user or partner and use this understanding to guide its navigation. Some algorithms already exist for learning natural gestures and/or their associated actions, but most of these systems do not allow the robot to automatically generate the associated controller that lets it actually navigate in the real environment. Furthermore, a technique is needed to combine the gestures/actions learned from interacting with multiple users or partners. This paper resolves these two issues and provides a complete system that allows the robot to learn interaction protocols and act upon them using only unsupervised learning techniques, and enables it to combine the protocols learned from multiple users/partners. The proposed approach is general and can be applied to other interactive tasks as well. The paper also reports a real-world experiment involving 18 subjects and 72 sessions that supports the ability of the proposed system to learn the needed gestures and to improve its knowledge of different gestures and their associations with actions over time.


AI & Society | 2008

WOZ experiments for understanding mutual adaptation

Yong Xu; Kazuhiro Ueda; Takanori Komatsu; Takeshi Okadome; Takashi Hattori; Yasuyuki Sumi; Toyoaki Nishida

A robot that is easy to teach not only has to be able to adapt to humans but also has to be easily adaptable to. In order to develop a robot with mutual adaptation ability, we believe that it will be beneficial to first observe the mutual adaptation behaviors that occur in human–human communication. In this paper, we propose a human–human WOZ (Wizard-of-Oz) experiment setting that can help us to observe and understand how the mutual adaptation procedure occurs between human beings in nonverbal communication. By analyzing the experimental results, we obtained three important findings: alignment-based action, symbol-emergent learning, and environmental learning.


Lecture Notes in Computer Science | 2001

Social Intelligence Design - An Overview

Toyoaki Nishida

The advent of the Internet and information technology has brought about significant progress in augmenting the way people can interact with each other in a totally new fashion that was not possible in the past. Examples of new technologies include conversational agents that mediate people in getting to know and communicate with each other, a collaborative virtual environment for large-scale discussions, personalized information tools for helping cross-cultural communication, interactive community media for augmenting community awareness and memory, to name just a few.

Collaboration

Top co-authors of Toyoaki Nishida:

Yasuyuki Sumi (Future University Hakodate)
Hideaki Takeda (National Institute of Informatics)
Tomohiro Fukuhara (National Institute of Advanced Industrial Science and Technology)