Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Naoto Yoshida is active.

Publication


Featured research published by Naoto Yoshida.


robot and human interactive communication | 2015

Crossmodal combination among verbal, facial, and flexion expression for anthropomorphic acceptability

Tomoko Yonezawa; Naoto Yoshida; Jumpei Nishinaka

This paper proposes effective communication using an agent's appearance and the reality of the acceptance level of the user's order, expressed through verbal and nonverbal channels. The crossmodal combination is expected to enable delicate expressions of the agent's internal state, especially when the agent's decision on acceptance differs from the agent's mind, so that the user becomes aware of difficult situations. We have proposed adopting the facial expression and the flexion of the word ending as parameters of the agent's internal state. The expressions are attached to the scripts for each acceptance level. The results of the evaluations showed the following: 1) the effectiveness of both the facial and flexion expressions, and 2) crossmodal combinations that convey the agent's concealed but real feelings.
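The combination described above can be sketched as a simple lookup: each acceptance level selects a facial expression and a word-ending flexion to accompany the scripted reply. This is a minimal illustrative sketch; the level names and cue tables are assumptions, not the authors' actual parameters.

```python
# Hypothetical sketch: attaching crossmodal cues (face + word-ending flexion)
# to a scripted utterance per acceptance level. Tables below are assumptions.
FACE = {0: "reluctant", 1: "neutral", 2: "smiling"}   # facial expression
FLEXION = {0: "falling", 1: "flat", 2: "rising"}      # word-ending pitch

def express(script: str, acceptance_level: int) -> dict:
    """Combine the verbal, facial, and flexion channels for one utterance."""
    return {
        "text": script,
        "face": FACE[acceptance_level],
        "word_ending_flexion": FLEXION[acceptance_level],
    }

utterance = express("I will do it", acceptance_level=0)
print(utterance)
```

A mismatch between the verbal script and the nonverbal cues (e.g. an accepting script with a reluctant face and falling flexion) is what lets the user sense the agent's concealed internal state.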


human-agent interaction | 2017

Physiological Expression of Robots Enhancing Users' Emotion in Direct and Indirect Communication

Naoto Yoshida; Tomoko Yonezawa

One of the important factors in medical and nursing care recently has been arousing the emotions of patients. Many types of communication robots and pet robots have been developed as communication partners for patients. The user's emotions are stimulated during direct communication with the robot, although there is less chance for the robot to approach the user in other situations, such as watching TV or listening to music, without disturbing him or her. The user feels that the robot is troublesome during the user's other tasks. The purpose of this research is to elevate the user's emotional experience through the physiological expressions of a partner robot in the user's daily life. Ambient but emotional expressions of physiological phenomena are perceived by touch, even when the user is concentrating on other tasks. First, we focused on breathing, heartbeat, and body temperature as the physiological phenomena. From the results of the evaluations of the robot's heartbeat and body temperature, along with our previous results for breathing, each expression maps onto the arousal and pleasure axes of the robot's situation. In this paper, we focus on the joint attention of the robot and user to an emotional photograph, and we verified whether the strength of the user's own emotional response to the content was changed by the physiological expressions of the robot while they looked at photographs together. The results suggest that the physiological expression of the robot makes the user's own emotion in the shared emotional experience more excited or more relaxed.
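The arousal/pleasure mapping the abstract mentions can be sketched as a function from the robot's emotional state to its physiological expression parameters. This is a hypothetical sketch for illustration; the parameter names, baselines, and gains are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: map a robot's (arousal, pleasure) state in [-1, 1]
# to breathing, heartbeat, and body-temperature parameters, following the
# abstract's idea that each expression has arousal and pleasure axes.
# All baselines and gains below are assumed values.

def physiological_params(arousal: float, pleasure: float) -> dict:
    assert -1.0 <= arousal <= 1.0 and -1.0 <= pleasure <= 1.0
    return {
        # Faster breathing and heartbeat with higher arousal.
        "breaths_per_min": 12 + 8 * arousal,
        "heartbeats_per_min": 70 + 40 * arousal,
        # Warmer skin with higher pleasure.
        "skin_temp_c": 33.0 + 2.0 * pleasure,
    }

params = physiological_params(arousal=0.5, pleasure=-0.5)
print(params)
```

Because the expression is tactile and ambient, a mapping like this can run continuously in the background without demanding the user's visual attention.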


international conference on entertainment computing | 2016

Accelerating the Physical Experience of Immersive and Penetrating Music Using Vibration-Motor Array in a Wearable Belt Set

Tomoko Yonezawa; Shota Yanagi; Naoto Yoshida; Yuki Ishikawa

In this research, we aim to create a heightened, physical musical experience by combining electronic sound and time-lagged multiple vibrations that surround the user's neck, chest, and back. The purpose of the research is to elevate the immersive and extended experience of music as though the sound source had a physical presence. We developed a wearable interface of a vibration-motor array with separately controlled multiple vibration motors to simulate both strong bass sounds and the movement of the physical presence of the sound source. The control of intensity and the time differences among the motors produce not only the illusion of spatial presence but also the physical penetration of the ongoing sound. The results of the evaluations showed the effects of (1) the combination of vibration and sound in the musical experience and (2) the time differences in starting timings between the front and back vibrations in creating the illusion of physical penetration as though the sound had a physical presence.
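The time-lagged control described above can be sketched as firing the motors in spatial order with small per-motor delays so a single bass onset appears to travel through the body. The motor layout, delay values, and drive function below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: time-lagged activation of a wearable vibration-motor
# array so a vibration seems to pass front-to-back through the body.
# Motor names and delays are assumed values.
import time

MOTOR_DELAYS_MS = {"front": 0, "side": 40, "back": 80}  # assumed lags

def drive_motor(name: str, intensity: float) -> None:
    """Placeholder for sending a PWM duty cycle to one motor."""
    print(f"{name}: intensity={intensity:.2f}")

def play_penetration_pulse(intensity: float = 1.0) -> list:
    """Fire motors in order of increasing delay, synchronized to one onset."""
    order = []
    start = time.monotonic()
    for name, delay_ms in sorted(MOTOR_DELAYS_MS.items(), key=lambda kv: kv[1]):
        # Busy-wait-free sleep until this motor's scheduled start time.
        time.sleep(max(0.0, start + delay_ms / 1000 - time.monotonic()))
        drive_motor(name, intensity)
        order.append(name)
    return order

order = play_penetration_pulse()
```

Varying both the intensity and the front-to-back lag is what, per the abstract, distinguishes mere spatial presence from the stronger "penetration" illusion.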


international conference on computer graphics and interactive techniques | 2016

Virtual ski jump: illusion of slide down the slope and gliding

Naoto Yoshida; Kaede Ueno; Yusuke Naka; Tomoko Yonezawa

We introduce a virtual ski jump system as indoor, experience-based entertainment. The goal of this system is an exciting, enjoyable, and safe experience for an inexperienced person. The key feature of the system is physical stimuli from below, providing a more realistic experience than other conventional systems. First, variation of the slope angle creates the illusion of sliding down an arc-shaped slope. Second, the suspension of the skis while jumping creates a sense of gliding, along with the difficulty of keeping left-right balance.


international conference on ubiquitous robots and ambient intelligence | 2015

Evaluations of involuntary cross-modal expressions on the skin of a communication robot

Xiaoshun Meng; Naoto Yoshida; Tomoko Yonezawa

In this paper, we introduce a unique combination of multiple involuntary expressions on the skin of a robot, namely goose bumps, sweat, and shivers, which represent instinctive fears of an anthropomorphic presence. Humans and other living beings express not only voluntary but also involuntary modalities, including physiological reactions. We expected these simulated involuntary expressions to give the robot a life-like or human-like presence. In this research, we focused on the unique expression of strong fear represented not only by facial expressions but also by the combination of involuntary goose bumps, sweat, and shivers. Our proposed method utilizes multiple thin rods under the robot's skin to generate the goose bumps. A vibration motor and a water tank with a balloon are also attached to the system to create combined expressions. The results of the evaluation showed the effectiveness of the combination of the three different involuntary expressions and the different nuances conveyed through a variety of unique combinations.


international conference on distributed ambient and pervasive interactions | 2015

Indirect Monitoring of Cared Person by Onomatopoeic Text of Environmental Sound and User's Physical State

Yusuke Naka; Naoto Yoshida; Tomoko Yonezawa

In this paper, we propose a nonverbal, descriptive method for creating daily-life logs, in text format, on behalf of people who require monitoring and/or assistance in taking care of themselves. The user's environmental situations are converted into and recorded as onomatopoeic texts in order to preserve their privacy. The user's ambient context is detected by the accelerometer, gyro sensor, and microphone in her/his smart device. We propose a soft monitoring system, named Soundgram, that utilizes nonverbal expressions of both onomatopoeic text logs and symbolic sound expressions. We investigated impressions regarding the monitoring of the elderly and the proposed system via a questionnaire distributed to two groups of potential users, the elderly and middle-aged people, capturing the viewpoints of both the recipient and the caregiver.
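The conversion step can be sketched as a lookup from detected ambient events to onomatopoeic words, so the log conveys activity without transcribing it literally. The event labels and onomatopoeia below are illustrative assumptions, not the authors' actual vocabulary.

```python
# Hypothetical sketch: convert detected ambient events into onomatopoeic
# text-log entries, in the spirit of the Soundgram idea. The mapping below
# is an assumed, illustrative vocabulary.
ONOMATOPOEIA = {
    "footsteps": "pata-pata",   # light walking
    "door": "batan",            # a door closing
    "snoring": "guu-guu",       # sleeping
    "dishes": "kachya-kachya",  # handling tableware
}

def to_log_entry(timestamp: str, event: str) -> str:
    """Record an event as nonverbal text; unknown events stay ambiguous,
    which preserves privacy by never logging literal activity details."""
    word = ONOMATOPOEIA.get(event, "...")
    return f"{timestamp} {word}"

entry = to_log_entry("08:15", "dishes")
print(entry)
```

A caregiver scanning such a log can infer that the person is up and active without the system ever recording speech or precise activities.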


human-agent interaction | 2015

Spatial Communication and Recognition in Human-agent Interaction using Motion-parallax-based 3DCG Virtual Agent

Naoto Yoshida; Tomoko Yonezawa

In this paper, we propose spatial communication between a virtual agent and a user through a common space spanning both the virtual world and real space. For this purpose, we propose the virtual agent system SCoViA, which renders a synchronized synthesis of the agent's appearance corresponding to the user's position relative to the monitor, based on the user's motion parallax, in order to realize human-agent communication in the real world. In this system, a real-time three-dimensional computer-generated (3DCG) agent is drawn from the changing viewpoint of the user in a virtual space corresponding to the position of the user's head as detected by face tracking. We conducted two verifications and discussed the spatial communication between a virtual agent and a user. First, we verified the effect of synchronized redrawing of the virtual agent on the accurate recognition of a particular object in the real world. Next, we verified the approachability of the agent by having it react to the user's eye contact from a diagonal angle. The results of the evaluations showed that the virtual agent's eye contact affected approachability regardless of the user's viewpoint and that our proposed system using motion parallax could significantly improve the accuracy of the agent's gazing position with each real object. Finally, we discuss the possibility of real-world human-agent interaction using the positional relationship among the agent, real objects, and the user.
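Motion-parallax rendering of this kind is commonly implemented with an off-axis projection: the virtual camera is placed at the tracked head position, and the view frustum is skewed so the screen plane acts as a window into the virtual space. The sketch below is an illustrative assumption about how such a frustum could be computed; the coordinate convention and screen dimensions are not taken from the paper.

```python
# Hypothetical sketch: compute an off-axis frustum (left, right, bottom, top
# at the screen plane) for a camera at the tracked head position, with the
# monitor centered at the origin. Units are metres; values are assumptions.

def parallax_camera(head_xyz, screen_w=0.5, screen_h=0.3):
    """head_xyz = (x, y, z), z = distance in front of the screen plane."""
    x, y, z = head_xyz
    assert z > 0, "head must be in front of the monitor"
    left = -screen_w / 2 - x
    right = screen_w / 2 - x
    bottom = -screen_h / 2 - y
    top = screen_h / 2 - y
    return (left, right, bottom, top)

# Head 60 cm in front of and 10 cm to the right of the screen centre:
frustum = parallax_camera((0.1, 0.0, 0.6))
print(frustum)
```

As the face tracker reports new head positions, recomputing this frustum each frame makes the 3DCG agent appear to occupy a stable position behind the screen, which is what allows its gaze to align with real objects.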


human-robot interaction | 2014

Involuntary expression of embodied robot adopting goose bumps

Tomoko Yonezawa; Xiaoshun Meng; Naoto Yoshida; Yukari Nakatani

In this paper, we propose an involuntary expression for embodied robots by adopting goose bumps. The goose bumps are caused not only by external stimuli, such as cold temperature, but also by the internal state of the robot, such as fear. For more natural anthropomorphism, the combination of involuntary and voluntary expressions should enable realistic animacy and life-like agency. The bumps on the robot's skin are generated by changing the lengths of thin rods protruding from holes in the skin. The lengths are controlled by a servo motor that pulls nylon strings connected to the bases of the thin rods.
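The actuation described above amounts to mapping an internal-state intensity to a servo angle and then to rod protrusion. The sketch below is a hypothetical illustration of that chain; the angle range, protrusion limit, and linear mapping are assumptions, not the authors' measured mechanism.

```python
# Hypothetical sketch: a servo pulls nylon strings that push thin rods out
# through holes in the robot's skin. The angle range and protrusion limit
# below are assumed values.

def rod_protrusion_mm(servo_angle_deg: float,
                      max_angle_deg: float = 90.0,
                      max_protrusion_mm: float = 3.0) -> float:
    """Map servo angle linearly to how far each rod protrudes from its hole."""
    angle = min(max(servo_angle_deg, 0.0), max_angle_deg)  # clamp to range
    return max_protrusion_mm * angle / max_angle_deg

def goose_bumps(fear: float) -> float:
    """Fear intensity in [0, 1] drives the servo toward a full pull."""
    return rod_protrusion_mm(servo_angle_deg=90.0 * fear)

print(goose_bumps(0.5))
```

Driving all rods from one servo via strings keeps the bumps synchronized, which matches the all-at-once character of real goose bumps.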


human-agent interaction | 2014

Personal and interactive newscaster agent based on estimation of user's understanding

Naoto Yoshida; Miyuki Yano; Tomoko Yonezawa

In this paper, we propose a virtual agent system that introduces current news from the Web depending on the user's reactions indicating her/his understanding. We focused on the user's gestures to estimate the user's state of understanding based on head motion. The goal of the research is the user's understanding of the news regardless of her/his knowledge level. The system controls 1) the level of the news content, corresponding to the user's gesture of inclining her/his head, and 2) the audibility of the agent's speech (volume and speed), according to the user's head motion of turning her/his ear toward the agent. The results of the evaluations showed the effectiveness of the interactive change of the reading behaviors in making the news understandable and audible.
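The two control rules above can be sketched as one update step: an inclined head lowers the difficulty level of the news script, and turning an ear toward the agent raises the volume and slows the speech. The thresholds, units, and step sizes below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two control rules described in the abstract.
# Thresholds and step sizes are assumed values.

def adjust_presentation(head_roll_deg: float, head_yaw_deg: float,
                        level: int, volume: float, rate: float):
    """Return (level, volume, rate) after one estimation step.
    level: 0 = simplest script; volume in [0, 1]; rate = speech speed."""
    # Rule 1: inclining the head is read as "I don't understand".
    if abs(head_roll_deg) > 15 and level > 0:
        level -= 1  # switch to an easier version of the same news item
    # Rule 2: turning an ear toward the agent is read as "I can't hear well".
    if abs(head_yaw_deg) > 30:
        volume = min(1.0, volume + 0.2)
        rate = max(0.7, rate - 0.1)
    return level, volume, rate

state = adjust_presentation(head_roll_deg=20, head_yaw_deg=40,
                            level=2, volume=0.5, rate=1.0)
print(state)
```

Because both cues are unconscious listening behaviors rather than explicit commands, the agent can adapt without interrupting the news reading.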


human-agent interaction | 2016

Investigating Breathing Expression of a Stuffed-Toy Robot Based on Body-Emotion Model

Naoto Yoshida; Tomoko Yonezawa

Collaboration


Dive into Naoto Yoshida's collaborations.

Top Co-Authors
