Publication


Featured research published by Lanbo She.


Human-Robot Interaction | 2014

Collaborative effort towards common ground in situated human-robot dialogue

Joyce Y. Chai; Lanbo She; Rui Fang; Spencer Ottarson; Cody Littley; Changsong Liu; Kenneth Hanson

In situated human-robot dialogue, although humans and robots are co-present in a shared environment, they have significantly mismatched capabilities in perceiving the shared environment. Their representations of the shared world are misaligned. In order for humans and robots to communicate with each other successfully using language, it is important for them to mediate such differences and to establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines the role of the robot’s collaborative effort and the performance of natural language processing modules in dialogue grounding. Our empirical results indicate that in situated human-robot dialogue, a low collaborative effort from the robot may lead its human partner to believe a common ground is established. However, such beliefs may not reflect true mutual understanding. To support truly grounded dialogues, the robot should make an extra effort by making its partner aware of its internal representation of the shared world.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue

Lanbo She; Shaohua Yang; Yu Cheng; Yunyi Jia; Joyce Y. Chai; Ning Xi

This paper describes an approach for a robotic arm to learn new actions through dialogue in a simplified blocks world. In particular, we have developed a three-tier action knowledge representation that, on one hand, supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot; and, on the other hand, supports the application of existing planning algorithms to address novel situations. Our empirical studies have shown that, based on this representation, the robot was able to learn and execute basic actions in the blocks world. When a human is engaged in a dialogue to teach the robot new actions, step-by-step instructions lead to better learning performance compared to one-shot instructions.
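The paper's exact representation is not reproduced here, but as a rough sketch of the idea, the Python below (all class names, predicates, and the "stack" example are hypothetical) shows how a three-tier entry might link a symbolic verb frame (tier 1) to a goal-state description (tier 2) while leaving the primitive operation sequence (tier 3) to be derived by a planner at run time.

```python
from dataclasses import dataclass

# Hypothetical three-tier entry (names invented): tier 1 is the symbolic
# verb frame from language, tier 2 the goal-state description, and
# tier 3, the primitive operation sequence, is not stored at all but
# produced by a planner for whatever situation the robot faces.

@dataclass(frozen=True)
class Predicate:
    name: str      # e.g. "On", "Clear"
    args: tuple    # variables like "?x" or concrete block names

@dataclass(frozen=True)
class ActionKnowledge:
    verb: str            # tier 1: the verb taught in dialogue
    params: tuple        # its arguments
    goal: frozenset      # tier 2: what must hold after execution

# "stack ?x on ?y" is defined only by its goal state; a planner turns
# it into primitives (move, open/close gripper, ...) per situation.
stack = ActionKnowledge(
    verb="stack",
    params=("?x", "?y"),
    goal=frozenset({Predicate("On", ("?x", "?y")),
                    Predicate("Clear", ("?x",))}),
)
print(stack.goal)
```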


Meeting of the Association for Computational Linguistics | 2014

Probabilistic Labeling for Efficient Referential Grounding based on Collaborative Discourse

Changsong Liu; Lanbo She; Rui Fang; Joyce Y. Chai

When humans and artificial agents (e.g., robots) have mismatched perceptions of the shared environment, referential communication between them becomes difficult. To mediate perceptual differences, this paper presents a new approach using probabilistic labeling for referential grounding. This approach aims to integrate different types of evidence from the collaborative referential discourse into a unified scheme. Its probabilistic labeling procedure can generate multiple grounding hypotheses to facilitate follow-up dialogue. Our empirical results have shown that the probabilistic labeling approach significantly outperforms a previous graph-matching approach for referential grounding.
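As a hedged illustration of the labeling idea (not the paper's actual model: the expressions, objects, scores, and relation probabilities below are invented), this sketch enumerates one-to-one assignments of referring expressions to perceived objects, scores each by combining unary perception evidence with a binary relation from the discourse, and returns a ranked list so that multiple grounding hypotheses survive for follow-up dialogue.

```python
from itertools import permutations

# Invented evidence: "unary" scores how well each perceived object
# matches each referring expression in isolation; "binary" scores a
# pairwise relation mentioned in the discourse.
unary = {
    "the red block":  {"o1": 0.7, "o2": 0.2, "o3": 0.1},
    "the small cube": {"o1": 0.1, "o2": 0.3, "o3": 0.6},
}
binary = {  # P(left_of(a, b)) for every ordered object pair
    ("left_of", "o1", "o2"): 0.4, ("left_of", "o1", "o3"): 0.8,
    ("left_of", "o2", "o1"): 0.3, ("left_of", "o2", "o3"): 0.5,
    ("left_of", "o3", "o1"): 0.1, ("left_of", "o3", "o2"): 0.2,
}

def labelings(expressions, objects, relation):
    """Score every one-to-one assignment and return them ranked, so the
    dialogue manager can keep several grounding hypotheses alive."""
    scored = []
    for assign in permutations(objects, len(expressions)):
        p = 1.0
        for expr, obj in zip(expressions, assign):
            p *= unary[expr][obj]
        p *= binary[(relation, assign[0], assign[1])]
        scored.append((p, dict(zip(expressions, assign))))
    return sorted(scored, reverse=True, key=lambda s: s[0])

# "the red block left of the small cube": top two hypotheses
for p, hyp in labelings(["the red block", "the small cube"],
                        ["o1", "o2", "o3"], "left_of")[:2]:
    print(f"{p:.3f}  {hyp}")
```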


Robot and Human Interactive Communication | 2014

Teaching Robots New Actions through Natural Language Instructions

Lanbo She; Yu Cheng; Joyce Y. Chai; Yunyi Jia; Shaohua Yang; Ning Xi

Robots often have limited knowledge and need to continuously acquire new knowledge and skills in order to collaborate with their human partners. To address this issue, this paper describes an approach that allows human partners to teach a robot (i.e., a robotic arm) new high-level actions through natural language instructions. In particular, built upon the traditional planning framework, we propose a representation of high-level actions that consists only of the desired goal states rather than step-by-step operations (although these operations may be specified by the human in their instructions). Our empirical results have shown that, given this representation, the robot can rely on automated planning and immediately apply the newly learned action knowledge to perform actions in novel situations.
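To make the goal-state idea concrete, here is a minimal sketch, not the paper's planner: a taught action stores only the desired support relations, and a plain breadth-first search over a toy blocks world (states map each block to what it sits on) recovers a primitive move sequence even from a start configuration the robot never saw during teaching.

```python
from collections import deque

def successors(state):
    """Yield (move, new_state) for every legal pick-and-place."""
    occupied = set(state.values()) - {"table"}   # blocks with something on top
    for b, under in state.items():
        if b in occupied:                        # b is covered, cannot move
            continue
        for dest in list(state) + ["table"]:
            # skip no-ops and placing onto a covered block
            if dest in (b, under) or (dest != "table" and dest in occupied):
                continue
            new = dict(state)
            new[b] = dest
            yield f"move {b} onto {dest}", new

def plan(start, goal):
    """BFS until every (block, support) pair in `goal` holds."""
    queue = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while queue:
        state, steps = queue.popleft()
        if all(state[b] == s for b, s in goal.items()):
            return steps
        for move, new in successors(state):
            key = tuple(sorted(new.items()))
            if key not in seen:
                seen.add(key)
                queue.append((new, steps + [move]))
    return None

# Novel situation: this start was never demonstrated, but the goal
# state alone ("A on C") is enough to plan.
print(plan({"A": "B", "B": "table", "C": "table"}, {"A": "C"}))
```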


Meeting of the Association for Computational Linguistics | 2016

Incremental acquisition of verb hypothesis space towards physical world interaction

Lanbo She; Joyce Y. Chai

As a new generation of cognitive robots starts to enter our lives, it is important to enable robots to follow human commands and to learn new actions from human language instructions. To address this issue, this paper presents an approach that explicitly represents verb semantics through hypothesis spaces of fluents and automatically acquires these hypothesis spaces by interacting with humans. The learned hypothesis spaces can be used to automatically plan lower-level primitive actions for physical world interaction. Our empirical results have shown that the representation of a hypothesis space of fluents, combined with the learned hypothesis selection algorithm, outperforms a previous baseline. In addition, our approach applies incremental learning, which can contribute to life-long learning from humans in the future.
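The following toy sketch (my own simplification, not the paper's acquisition algorithm) illustrates what a hypothesis space of fluents could look like: every non-empty subset of the state changes observed in a demonstration is a candidate goal for the verb, and each new demonstration incrementally prunes candidates it contradicts. The fluent strings and the "fill the cup" example are invented.

```python
from itertools import chain, combinations

def changed_fluents(before, after):
    """Fluents that became true over the demonstration."""
    return frozenset(after - before)

def hypothesis_space(before, after):
    """All non-empty subsets of the changed fluents are candidates."""
    delta = sorted(changed_fluents(before, after))
    return {frozenset(c)
            for c in chain.from_iterable(
                combinations(delta, r) for r in range(1, len(delta) + 1))}

def prune(space, before, after):
    """Incremental step: keep hypotheses the new example also satisfies."""
    return {h for h in space if h <= changed_fluents(before, after)}

# Demonstration 1 of "fill the cup": cup ends up full and still grasped.
space = hypothesis_space({"empty(cup)"}, {"full(cup)", "grasped(cup)"})
# Demonstration 2: full, but released; "grasped" hypotheses are pruned.
space = prune(space, {"empty(cup)"}, {"full(cup)"})
print(space)   # {frozenset({'full(cup)'})}
```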


International Conference on Robotics and Automation | 2014

Perceptive feedback for natural language control of robotic operations

Yunyi Jia; Ning Xi; Joyce Y. Chai; Yu Cheng; Rui Fang; Lanbo She

A new planning and control scheme for natural language control of robotic operations using perceptive feedback is presented. Unlike traditional open-loop natural language control, the scheme integrates the high-level planning and low-level control of the robotic system and turns high-level planning into a closed-loop process, so that it can handle unexpected events in the robotic system and the environment. The experimental results on a natural-language-controlled mobile manipulator clearly demonstrate the advantages of the proposed method.
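A minimal sketch of the closed-loop principle, assuming a toy one-dimensional world and invented function names (this is not the paper's control scheme): the plan is not executed blindly; after each step the perceived state is compared with the planner's expectation, and a mismatch, i.e. an unexpected event, triggers replanning from the observed state.

```python
def make_plan(state, goal):
    """Toy 1-D planner: move one cell at a time toward `goal`,
    recording the state expected after each step."""
    plan, pos = [], state
    while pos != goal:
        pos += 1 if goal > pos else -1
        plan.append(("step", pos))          # (action, expected_state)
    return plan

def closed_loop_execute(state, goal, disturbances):
    """`disturbances` maps a time step to the state perception actually
    reports there, simulating an unexpected event."""
    plan, t = make_plan(state, goal), 0
    while state != goal:
        _action, expected = plan.pop(0)
        state = disturbances.get(t, expected)  # perceive after acting
        t += 1
        if state != expected:                  # mismatch detected
            plan = make_plan(state, goal)      # close the loop: replan
    return t

# Knocked back to cell 0 at step 1, the robot replans and still
# reaches cell 3; open-loop execution would have ended in cell 1.
print(closed_loop_execute(0, 3, {1: 0}))       # takes 5 steps, not 3
```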


IFAC Proceedings Volumes | 2014

Modelling and Analysis of Natural Language Controlled Robotic Systems

Yu Cheng; Yunyi Jia; Rui Fang; Lanbo She; Ning Xi; Joyce Y. Chai

Controlling a robotic system through natural language commands can provide great convenience for human users. Many researchers have investigated high-level planning for natural language controlled robotic systems. Most of these methods design planning as an open-loop process and therefore cannot handle unexpected events well in realistic applications. In this paper, a closed-loop task-planning method is proposed to overcome unexpected events that occur during execution of assigned tasks. The designed system is modeled and theoretically proven to stabilize the robotic system under unexpected events. Experimental results demonstrate the effectiveness and advantages of the proposed method.


Conference on Automation Science and Engineering | 2016

Program robots manufacturing tasks by natural language instructions

Yunyi Jia; Lanbo She; Yu Cheng; Jiatong Bao; Joyce Y. Chai; Ning Xi

Robotic systems are traditionally programmed for manufacturing tasks through off-line coding interfaces. These programming methods are usually time-consuming and require considerable human effort. They cannot meet the emerging requirements of robotic systems in areas such as intelligent manufacturing and customized production. To address this issue, this paper develops a human-friendly approach for on-site programming of robot manufacturing tasks through natural language human-robot interaction, without the need for off-line training data. The new programming approach consists of three processes: human teaching, robot learning, and robot execution. Through them, humans are able to use natural language to program robot manufacturing tasks on-site, including operations on both single objects and groups of objects. The effectiveness of the proposed approach is illustrated through experimental results.


Meeting of the Association for Computational Linguistics | 2017

Interactive learning of grounded verb semantics towards human-robot communication

Lanbo She; Joyce Y. Chai

To enable human-robot communication and collaboration, previous work represents grounded verb semantics as the potential change of state to the physical world caused by these verbs. Grounded verb semantics are acquired mainly from parallel data pairing the use of a verb phrase with its corresponding sequence of primitive actions demonstrated by humans. The rich interaction between teachers and students that is considered important in learning new skills has not yet been explored. To address this limitation, this paper presents a new interactive learning approach that allows robots to proactively engage in interaction with human partners by asking good questions to learn models for grounded verb semantics. The proposed approach uses reinforcement learning to allow the robot to acquire an optimal policy for its question-asking behaviors by maximizing the long-term reward. Our empirical results have shown that the interactive learning approach leads to more reliable models for grounded verb semantics, especially in noisy environments full of uncertainty. Compared to previous work, the models acquired from interactive learning result in a 48% to 145% performance gain when applied in new situations.
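As a hedged sketch of the question-asking idea (the states, rewards, and success probabilities below are invented, and the paper's actual formulation is richer), a tabular Q-learner decides, given a discretized confidence in the current verb model, whether to ask the teacher another question at a small cost or to execute with a payoff that depends on model quality. With these numbers, the learned policy asks until confidence is high and only then executes.

```python
import random

random.seed(0)
STATES = ["low", "mid", "high"]            # discretized model confidence
ACTIONS = ["ask", "execute"]
P_CORRECT = {"low": 0.2, "mid": 0.6, "high": 0.9}   # invented
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: asking costs a little and raises confidence;
    executing ends the episode with a quality-dependent reward."""
    if action == "ask":
        nxt = STATES[min(STATES.index(state) + 1, 2)]
        return -1.0, nxt, False
    ok = random.random() < P_CORRECT[state]
    return (10.0 if ok else -10.0), state, True

for _ in range(5000):                      # Q-learning episodes
    s, done = "low", False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        r, s2, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

for s in STATES:                           # learned policy per confidence
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```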


AI Magazine | 2017

Collaborative Language Grounding Toward Situated Human-Robot Dialogue

Joyce Y. Chai; Rui Fang; Changsong Liu; Lanbo She

To enable situated human-robot dialogue, techniques to support grounded language communication are essential. One particular challenge is to ground human language to the robot's internal representation of the physical world. Although co-present in a shared environment, humans and robots have mismatched capabilities in reasoning, perception, and action. Their representations of the shared environment and joint tasks are significantly misaligned. Humans and robots need to make extra effort to bridge the gap and strive for a common ground of the shared world. Only then is the robot able to engage in language communication and joint tasks. Thus, computational models for language grounding need to take collaboration into consideration. A robot not only needs to incorporate collaborative effort from human partners to better connect human language to its own representation, but also needs to make extra collaborative effort to communicate its representation in language that humans can understand. To address these issues, the Language and Interaction Research (LAIR) group at Michigan State University has investigated multiple aspects of collaborative language grounding. This article gives a brief introduction to this research effort and discusses several collaborative approaches to grounding language to perception and action.

Collaboration


Dive into Lanbo She's collaborations.

Top Co-Authors

Joyce Y. Chai, Michigan State University
Rui Fang, Michigan State University
Yunyi Jia, Michigan State University
Ning Xi, University of Hong Kong
Changsong Liu, Michigan State University
Yu Cheng, Michigan State University
Shaohua Yang, Michigan State University
Qiaozi Gao, Michigan State University
Cody Littley, Michigan State University
Guangyue Xu, Michigan State University