Publication


Featured research published by Chaoran Liu.


human-robot interaction | 2012

Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction

Chaoran Liu; Carlos Toshinori Ishi; Hiroshi Ishiguro; Norihiro Hagita

Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, “Geminoid F”, a typical humanoid robot with fewer facial degrees of freedom, “Robovie R2”, and a robot with a 3-axis rotatable neck and movable lips, “Telenoid R2”). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots that do not have a mouth in order to provide the appearance that an utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. The results verify that our generation model performs comparably to directly mapping people's original motions with gaze information in terms of perceived naturalness.
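A minimal sketch of the rule-based idea described in this abstract: select a head motion from the dialogue act of the current utterance. The act labels, rule table, and onset offsets below are hypothetical placeholders for illustration, not the authors' published rules.

```python
# Hypothetical rule-based head-motion selector; labels and timings are assumptions.
from dataclasses import dataclass

@dataclass
class HeadMotion:
    kind: str        # "nod", "tilt", or "none"
    onset_s: float   # motion onset relative to the utterance end (seconds)

# Assumed mapping from dialogue act to preferred head motion.
RULES = {
    "affirmation": HeadMotion("nod", -0.2),
    "agreement":   HeadMotion("nod", -0.2),
    "question":    HeadMotion("tilt", -0.4),
    "backchannel": HeadMotion("nod", 0.0),
}

def select_head_motion(dialogue_act: str) -> HeadMotion:
    """Return the head motion associated with a dialogue act, defaulting to none."""
    return RULES.get(dialogue_act, HeadMotion("none", 0.0))

print(select_head_motion("question"))  # HeadMotion(kind='tilt', onset_s=-0.4)
```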


human-robot interaction | 2010

Head motions during dialogue speech and nod timing control in humanoid robots

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro; Norihiro Hagita

Head motion naturally occurs in synchrony with speech and may carry paralinguistic information, such as intention, attitude and emotion, in dialogue communication. With the aim of verifying the relationship between head motion and the dialogue acts carried by speech, analyses were conducted on motion-captured data for several speakers during natural dialogues. The analysis results first confirmed the trends of our previous work, showing that regardless of the speaker, nods frequently occur during speech utterances, not only for expressing dialogue acts such as agreement and affirmation, but also on the last syllable of a phrase at strong phrase boundaries, especially when the speaker is talking confidently or expressing interest in the interlocutor's talk. Inter-speaker variability indicated that the frequency of head motion may vary according to the speaker's age or status, while intra-speaker variability indicated that the frequency of head motion also differs depending on the inter-personal relationship with the interlocutor. A simple model for generating nods, based on rules inferred from the analysis results, was proposed and evaluated in two types of humanoid robots. Subjective scores showed that the proposed model could generate head motions with naturalness comparable to the original motions.
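A minimal sketch of the nod-timing idea: place a nod so that it spans the phrase-final syllable, gated by boundary strength. The threshold and the nod duration are illustrative assumptions, not values from the paper.

```python
# Hypothetical nod timing control at strong phrase boundaries; parameters assumed.
def nod_times(phrase_end_times, boundary_strengths, threshold=0.7, nod_duration=0.3):
    """Return (start, end) intervals in seconds for nods at strong phrase boundaries."""
    nods = []
    for end_t, strength in zip(phrase_end_times, boundary_strengths):
        if strength >= threshold:
            # Center the nod on the phrase-final syllable (the phrase end time).
            nods.append((end_t - nod_duration / 2, end_t + nod_duration / 2))
    return nods

print(nod_times([1.2, 2.8, 4.1], [0.9, 0.4, 0.8]))  # nods at the 1st and 3rd boundaries
```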


intelligent robots and systems | 2012

Evaluation of formant-based lip motion generation in tele-operated humanoid robots

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro; Norihiro Hagita

Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to that of vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
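A minimal sketch of the formant-to-lip mapping idea: the first formant (F1) correlates with mouth opening (lip height) and the second formant (F2) with lip spreading (width). The formant ranges and the single speaker-calibration gain are illustrative assumptions, not the paper's actual parameters.

```python
# Hypothetical formant-to-lip mapping; ranges and calibration gain are assumptions.
import numpy as np

def lip_pose(f1_hz: float, f2_hz: float, speaker_gain: float = 1.0):
    """Map vowel formants to normalized lip height/width commands in [0, 1]."""
    # Assumed typical adult vowel ranges: F1 ~ 250-850 Hz, F2 ~ 600-2500 Hz.
    height = np.clip(speaker_gain * (f1_hz - 250.0) / (850.0 - 250.0), 0.0, 1.0)
    width = np.clip((f2_hz - 600.0) / (2500.0 - 600.0), 0.0, 1.0)
    return height, width

print(lip_pose(700.0, 1200.0))  # an open, mid vowel -> tall, mid-wide lip opening
```

The single `speaker_gain` parameter stands in for the one-parameter speaker normalization the abstract mentions; how that calibration is actually performed is not specified here.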


human-robot interaction | 2015

Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems

Chaoran Liu; Carlos Toshinori Ishi; Hiroshi Ishiguro

In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system achieved significantly higher accuracy rates for the perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
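A minimal sketch of the relocation step: rotate each separated voice's azimuth by the operator's head yaw, then render it spatially. The constant-power panning used here is a stand-in assumption; a real system would more plausibly use HRTF-based binaural rendering, and the sign convention (positive azimuth to the right) is also assumed.

```python
# Hypothetical scene relocation with amplitude panning; a real system would use HRTFs.
import numpy as np

def relocate_and_pan(sources, head_yaw_deg):
    """sources: list of (mono_signal, azimuth_deg) pairs in the robot frame,
    where mono_signal is a 1-D numpy array. Returns an (n_samples, 2) stereo mix
    in the operator's head frame."""
    n = max(len(sig) for sig, _ in sources)
    stereo = np.zeros((n, 2))
    for sig, az_deg in sources:
        rel = np.deg2rad(az_deg - head_yaw_deg)    # azimuth relative to the head
        pan = np.clip(np.sin(rel), -1.0, 1.0)      # -1 = hard left, +1 = hard right
        theta = (pan + 1.0) * np.pi / 4.0          # constant-power pan angle
        stereo[:len(sig), 0] += np.cos(theta) * sig
        stereo[:len(sig), 1] += np.sin(theta) * sig
    return stereo
```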


intelligent robots and systems | 2016

Hearing support system using environment sensor network

Carlos Toshinori Ishi; Chaoran Liu; Jani Even; Norihiro Hagita

To address the limitations of current hearing aid devices, we make use of an environment sensor network and propose a hearing support system in which individual target and anti-target sound sources in the environment can be selected, and the spatial information of the target sound sources is reconstructed. The performance of the selective sound separation module was evaluated under different noise conditions. Results showed that signal-to-noise ratios of around 15 dB could be achieved by the proposed system in a condition with 65 dB babble noise plus directional music noise. In the same noise condition, subjective intelligibility tests were conducted, and word intelligibility rates improved from 65% to 90% when the proposed hearing support system was used.
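A minimal sketch of one common building block for this kind of selective sound separation: a delay-and-sum beamformer that aligns a target's wavefronts across microphones before averaging. The abstract does not detail the actual separation method, so the geometry, sample rate, and algorithm here are illustrative assumptions.

```python
# Hypothetical delay-and-sum beamformer; the paper's actual separation method may differ.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions, target_pos, fs=16000):
    """mic_signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) and
    target_pos: (3,) in metres. Returns the beamformed mono signal."""
    dists = np.linalg.norm(mic_positions - target_pos, axis=1)
    delays = (dists - dists.min()) / SPEED_OF_SOUND  # relative arrival delays (s)
    shifts = np.round(delays * fs).astype(int)       # delays in whole samples
    n = mic_signals.shape[1] - shifts.max()
    # Advance each channel so the target's wavefronts line up, then average;
    # the coherent sum boosts the target relative to diffuse noise.
    aligned = np.stack([sig[s:s + n] for sig, s in zip(mic_signals, shifts)])
    return aligned.mean(axis=0)
```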


Archive | 2018

Generation of Head Motion During Dialogue Speech, and Evaluation in Humanoid Robots

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro

Head motion occurs naturally and in synchrony with speech during human dialogue communication and may carry paralinguistic information such as intentions, attitudes, and emotions. Therefore, natural-looking head motion by a robot is important for smooth human–robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, we proposed a model for generating nodding and head tilting and evaluated it in different types of humanoid robots. Analysis of subjective scores showed that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people’s original motions without gaze information. We also found that an upward motion of the face can be used by robots that do not have a mouth in order to provide the appearance that an utterance is taking place. Finally, we conducted an experiment in which participants acted as visitors to an information desk attended by robots. Evaluation results indicated that our model is as effective as directly mapping people’s original motions with gaze information in terms of perceived naturalness.
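A minimal sketch of the "upward face motion during utterances" idea for mouthless robots: raise the head pitch while voice activity is detected and ease back to neutral in silence. The pitch angle and smoothing constant are illustrative assumptions.

```python
# Hypothetical utterance-indicating head pitch for mouthless robots; parameters assumed.
import numpy as np

def face_pitch_trajectory(voice_activity, up_angle_deg=5.0, smoothing=0.9):
    """voice_activity: per-frame booleans. Returns per-frame pitch angles (deg)."""
    pitch, angles = 0.0, []
    for active in voice_activity:
        target = up_angle_deg if active else 0.0
        pitch = smoothing * pitch + (1.0 - smoothing) * target  # low-pass easing
        angles.append(pitch)
    return np.array(angles)
```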


Archive | 2018

Formant-Based Lip Motion Generation and Evaluation in Humanoid Robots

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro; Norihiro Hagita

Generating natural motion in robots is important for improving human–robot interaction. We have developed a teleoperation system in which the lip motion of a remote humanoid robot is automatically controlled by the operator’s voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluations indicate that the proposed audio-based method can generate lip motion with superior naturalness to vision-based and motion capture-based approaches. Partial lip width control is shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
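For completeness, a minimal sketch of one common way to obtain the formant values assumed in the earlier lip-pose sketch: LPC analysis of a speech frame, taking the angles of the LPC roots as candidate formant frequencies. The paper does not specify its formant estimator; `librosa` is used here only for brevity.

```python
# Hypothetical LPC-based formant estimation; the paper's estimator is unspecified.
import numpy as np
import librosa

def estimate_formants(frame, fs=16000, order=12):
    """Return sorted candidate formant frequencies (Hz) for one speech frame;
    the first two values approximate F1 and F2."""
    a = librosa.lpc(frame.astype(float), order=order)  # LPC polynomial coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]                  # keep upper half-plane roots
    freqs = np.angle(roots) * fs / (2.0 * np.pi)       # root angle -> frequency (Hz)
    return np.sort(freqs[freqs > 90.0])                # drop near-DC artifacts
```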


AVSP | 2011

Speech-driven lip motion generation for tele-operated humanoid robots.

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro; Norihiro Hagita


International Journal of Humanoid Robotics | 2013

Generation of nodding, head tilting and gazing for human–robot speech interaction

Chaoran Liu; Carlos Toshinori Ishi; Hiroshi Ishiguro; Norihiro Hagita


conference of the international speech communication association | 2012

Evaluation of a formant-based speech-driven lip motion generation.

Carlos Toshinori Ishi; Chaoran Liu; Hiroshi Ishiguro; Norihiro Hagita

Collaboration


Dive into Chaoran Liu's collaboration.

Top Co-Authors

Jani Even

Nara Institute of Science and Technology
