
Publication


Featured research published by Takanori Komatsu.


AI & Society | 2008

WOZ experiments for understanding mutual adaptation

Yong Xu; Kazuhiro Ueda; Takanori Komatsu; Takeshi Okadome; Takashi Hattori; Yasuyuki Sumi; Toyoaki Nishida

A robot that is easy to teach not only has to be able to adapt to humans but also has to be easy for humans to adapt to. In order to develop a robot with mutual adaptation ability, we believe it is beneficial to first observe the mutual adaptation behaviors that occur in human–human communication. In this paper, we propose a human–human WOZ (Wizard-of-Oz) experimental setting that helps us observe and understand how the mutual adaptation procedure occurs between human beings in nonverbal communication. By analyzing the experimental results, we obtained three important findings: alignment-based action, symbol-emergent learning, and environmental learning.


International Journal of Social Robotics | 2012

How Does the Difference Between Users' Expectations and Perceptions About a Robotic Agent Affect Their Behavior? An Adaptation Gap Concept for Determining Whether Interactions Between Users and Agents Are Going Well or Not

Takanori Komatsu; Rie Kurosawa; Seiji Yamada

We assumed that the difference between users’ expectations regarding the functions of an agent and the functions that they actually perceived would significantly affect their behavior toward the agent. We defined this difference as the adaptation gap and experimentally investigated how the sign of the adaptation gap affected the acceptance rate, that is, how many of the agent’s suggestions the participants accepted. The results showed that participants experiencing a positive adaptation gap had a significantly higher acceptance rate than those experiencing a negative one. This led us to conclude that the sign of the adaptation gap significantly affected the participants’ behavior toward agents in the way that we expected, and that comprehending this sign is indispensable for designing interactions between users and agents.
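The adaptation-gap idea above reduces to a simple signed difference between perceived and expected capability. The sketch below is only an illustration of that definition; the function name and the 0–1 rating scale are hypothetical, not taken from the paper:

```python
def adaptation_gap_sign(expected: float, perceived: float) -> str:
    """Return the sign of the adaptation gap (perceived minus expected).

    In the study, a positive gap was associated with a higher acceptance
    rate of the agent's suggestions, and a negative gap with a lower one.
    """
    gap = perceived - expected
    if gap > 0:
        return "positive"  # agent exceeded expectations
    if gap < 0:
        return "negative"  # agent fell short of expectations
    return "zero"

# Illustrative ratings on an arbitrary 0-1 scale (hypothetical values):
print(adaptation_gap_sign(expected=0.4, perceived=0.7))  # prints "positive"
print(adaptation_gap_sign(expected=0.8, perceived=0.5))  # prints "negative"
```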


Human-Computer Interaction with Mobile Devices and Services (MobileHCI) | 2013

Voice augmented manipulation: using paralinguistic information to manipulate mobile devices

Daisuke Sakamoto; Takanori Komatsu; Takeo Igarashi

We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. This technique augments user interactions on mobile devices, such as finger gestures and button pressing, with voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation will continue until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface, and the function connected to the button is augmented by voiced sounds. Two experiments verified the effectiveness of the VAM technique and showed that the number of repeated finger gestures significantly decreased compared to current touch-input techniques, suggesting that VAM is useful in supporting user control in a mobile environment.
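The continuation behaviour described in the abstract (an operation runs while the voiced sound lasts, and a second gesture also ends it) can be modelled as a tiny state machine. This is a hedged toy sketch; the class and method names are hypothetical and not the authors' implementation:

```python
class VAMController:
    """Toy model of the VAM voice-continuation behaviour described above."""

    def __init__(self):
        self.active_operation = None  # e.g. "scroll" while an operation runs

    def on_gesture(self, operation: str):
        # A finger gesture starts an operation; a second gesture
        # while one is running stops it.
        if self.active_operation is None:
            self.active_operation = operation
        else:
            self.active_operation = None

    def on_voice(self, sounding: bool):
        # The operation continues only while the user keeps voicing a
        # sound; once the sound stops, the operation stops too.
        if not sounding:
            self.active_operation = None


vam = VAMController()
vam.on_gesture("scroll")      # gesture plus voiced sound: scrolling starts
vam.on_voice(sounding=True)
print(vam.active_operation)   # prints "scroll"
vam.on_voice(sounding=False)  # the sound stops, so the operation ends
print(vam.active_operation)   # prints "None"
```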


Advanced Visual Interfaces | 2012

Study of information clouding methods to prevent spoilers of sports match

Satoshi Nakamura; Takanori Komatsu

Seeing the final score of a sports match on the Web often spoils the pleasure of a user who is waiting to watch a recording of this match on TV. This paper proposes four information clouding methods to block spoiling information, and describes the implementation of a system using these methods as a browser extension. We then experimentally investigate the usefulness of the methods, taking into account their differences, differences in the variety of content, and differences in the users’ interest in sports.


Human-Robot Interaction | 2009

Can users react toward an on-screen agent as if they are reacting toward a robotic agent?

Takanori Komatsu; Nozomi Kuki

Our former study showed that users tended not to react to an on-screen agent’s invitation to a Shiritori game (a Japanese word-chain game), but did react to a robotic agent’s invitation. Thus, the purpose of this study was to investigate the contributing factors that could make users react toward an on-screen agent as if they were reacting toward a robotic agent. The results showed that participants who first accepted the invitation of a robotic agent assigned an attractive character reacted toward the on-screen agents as if they were reacting to the robotic one.


Affective Computing and Intelligent Interaction | 2005

Toward making humans empathize with artificial agents by means of subtle expressions

Takanori Komatsu

Can we assign attitudes to a computer based on its represented subtle expressions, such as beep sounds and simple animations? If so, which kinds of beep sounds or simple animations are perceived as specific attitudes, such as “disagreement”, “hesitation” or “agreement”? To examine this issue, I carried out two experiments to observe and clarify how participants perceive or assign an attitude to a computer according to beep sounds of different durations and F0 contour’s slopes (Experiment 1) or simple animations of different durations and objects’ velocities (Experiment 2). The results of these two experiments revealed that 1) subtle expressions with increasing intonations (Experiment 1) or velocities (Experiment 2) were perceived by participants as “disagreement”, 2) flat intonations and velocities with longer duration were interpreted as “hesitation”, and 3) decreasing intonations and velocities with shorter duration were taken as “agreement.”


Applied Intelligence | 2012

Formation conditions of mutual adaptation in human-agent collaborative interaction

Yong Xu; Yoshimasa Ohmoto; Shogo Okada; Kazuhiro Ueda; Takanori Komatsu; Takeshi Okadome; Koji Kamei; Yasuyuki Sumi; Toyoaki Nishida

When an adaptive agent works with a human user in a collaborative task, in order to enable flexible instructions to be issued by ordinary people, it is believed that a mutual adaptation phenomenon can enable the agent to handle flexible mapping relations between the human user’s instructions and the agent’s actions. To elucidate the conditions required to induce the mutual adaptation phenomenon, we designed an appropriate experimental environment called “WAITER” (Waiter Agent Interactive Training Experimental Restaurant) and conducted two experiments in this environment. The experimental results suggest that the proposed conditions can induce the mutual adaptation phenomenon.


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2013

Editing Robot Motion Using Phonemic Feature of Onomatopoeias

Junki Ito; Masayoshi Kanoh; Tsuyoshi Nakamura; Takanori Komatsu

Author affiliations:
∗1 Graduate School of Computer and Cognitive Sciences, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota, Aichi 470-0393, Japan. E-mail: [email protected]
∗2 School of Information Science and Technology, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota, Aichi 470-0393, Japan. E-mail: [email protected]
∗3 Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555, Japan. E-mail: [email protected]
∗4 Faculty of Textile Science and Technology, Shinshu University, 3-15-1 Tokida, Ueda, Nagano 386-8567, Japan. E-mail: [email protected]


Human Factors in Computing Systems (CHI) | 2012

Can users live with overconfident or unconfident systems?: a comparison of artificial subtle expressions with human-like expression

Takanori Komatsu; Kazuki Kobayashi; Seiji Yamada; Kotaro Funakoshi; Mikio Nakano

We assume that expressing the levels of confidence using human-like expressions will cause users to have a poorer impression of a system than if artificial subtle expressions (ASEs) were used when the quality of the presented information does not match the expressed level of confidence. We confirmed that this assumption was correct by conducting a psychological experiment.


Archive | 2010

Comparison of an On-screen Agent with a Robotic Agent in an Everyday Interaction Style: How to Make Users React Toward an On-screen Agent as if They Are Reacting Toward a Robotic Agent

Takanori Komatsu

Communication media terminals, such as PDAs, cell phones, and mobile PCs, are used globally and are a part of our daily lives; most people have access to such media terminals. Various interactive agents, such as robotic agents (for example, Gravot et al., 2006; Imai et al., 2003) and embodied conversational agents (ECAs) appearing on a computer display (for example, Cassell et al., 2002; Prendinger and Ishizuka, 2004), are now being developed to assist us with our daily tasks. The technologies that these interactive agents provide will soon be applied to these widespread media terminals.

Some researchers have started considering the effects that these different agents, appearing on such media terminals, have on users’ behaviours and impressions, especially in comparisons of on-screen agents appearing on a computer display with robotic agents (Shinozawa et al., 2004; Powers et al., 2007; Wainer et al., 2006). For example, Shinozawa et al. (2004) investigated the effects of a robot’s and an on-screen agent’s recommendations on human decision making. Powers et al. (2007) experimentally compared people’s responses in a health interview with a computer agent, a collocated robot, and a remotely projected robot. Wainer et al. (2006) measured task performance and participants’ impressions of a robot’s social abilities in a structured task based on the Towers of Hanoi puzzle. Most of these studies reported that users felt much more comfortable with the robotic agents and found them much more believable as interactive partners compared to on-screen agents.

In these studies, the participants were asked to directly face the agents during certain experimental tasks. However, this “face-to-face interaction” does not really represent a realistic interaction style with interactive agents in our daily lives. These interactive agents were basically developed to assist us with our daily tasks, so it can be assumed that users are engaged in a task when they need help from an agent. Thus, it is expected that these users do not look at the agents much but mainly focus on what they are doing. I called this interaction style “an everyday interaction style,” and I assumed that it is much more realistic than the “face-to-face interaction” on which most former studies focused.

Collaboration


Dive into Takanori Komatsu's collaborations.

Top Co-Authors

Seiji Yamada (National Institute of Informatics)
Natsuki Oka (Kyoto Institute of Technology)
Kiyohide Ito (Future University Hakodate)
Makoto Okamoto (Future University Hakodate)
Takeshi Okadome (Kwansei Gakuin University)