Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chao Yu is active.

Publication


Featured research published by Chao Yu.


Australasian Joint Conference on Artificial Intelligence | 2015

Hierarchical Learning for Emergence of Social Norms in Networked Multiagent Systems

Chao Yu; Hongtao Lv; Fenghui Ren; Honglin Bao; Jianye Hao

In this paper, a hierarchical learning framework is proposed for the emergence of social norms in networked multiagent systems. The framework features a bottom level of agents and several levels of supervisors. Agents in the bottom level interact with each other using reinforcement learning methods and report their information to their supervisors after each interaction. Supervisors then aggregate the reported information and produce guide policies by exchanging information with other supervisors. The guide policies are passed down to the subordinate agents to adjust their learning behaviors heuristically. Experiments are carried out to explore the efficiency of norm emergence under the proposed framework, and the results verify that learning from local interactions, combined with hierarchical supervision, can be an effective mechanism for the emergence of social norms.
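To make the report-aggregate-guide loop concrete, here is a minimal Python sketch. It collapses the supervisor hierarchy to a single supervisor, uses a two-action coordination game as the interaction, and invents all names and parameter values (Agent, GUIDE_BIAS, ALPHA, EPSILON); it illustrates the general idea, not the paper's implementation.

```python
import random
from collections import Counter

N_AGENTS, N_ACTIONS, ROUNDS = 20, 2, 200
ALPHA, EPSILON, GUIDE_BIAS = 0.1, 0.1, 0.5

class Agent:
    def __init__(self):
        self.q = [0.0] * N_ACTIONS
        self.guide = None          # action suggested by the supervisor

    def act(self):
        if self.guide is not None and random.random() < GUIDE_BIAS:
            return self.guide      # heuristically follow the guide policy
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: self.q[a])

    def learn(self, action, reward):
        self.q[action] += ALPHA * (reward - self.q[action])

agents = [Agent() for _ in range(N_AGENTS)]
for _ in range(ROUNDS):
    random.shuffle(agents)
    reports = []
    for a, b in zip(agents[::2], agents[1::2]):   # pairwise interactions
        x, y = a.act(), b.act()
        r = 1.0 if x == y else -1.0               # coordination game payoff
        a.learn(x, r); b.learn(y, r)
        reports.extend([x, y])                    # agents report upward
    # supervisor aggregates reports and passes down a guide policy
    guide = Counter(reports).most_common(1)[0][0]
    for ag in agents:
        ag.guide = guide

norm = Counter(max(range(N_ACTIONS), key=lambda a: ag.q[a]) for ag in agents)
print("final action distribution:", dict(norm))
```

The guide policy only biases exploration rather than overriding the agents' own Q-values, which matches the abstract's description of heuristically adjusting, rather than dictating, the agents' learning behaviors.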


Scientific Reports | 2016

Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies

Chao Yu; Guozhen Tan; Hongtao Lv; Zhen Wang; Jun Meng; Jianye Hao; Fenghui Ren

Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed in which agents dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, considered to be the most successful opinion in the neighbourhood, is generated based on the principles of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can tell whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people's adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model facilitates the formation of consensus among agents, and that critical factors such as the size of the opinion space and the network topology have significant influences on opinion dynamics.
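As a rough illustration of the guiding-opinion mechanism, the hypothetical sketch below puts agents on a ring network, rewards matching opinions, derives each agent's guiding opinion from the payoff history in its neighbourhood, and has agents explore more when out of step with the guide. The network shape, reward, and adaptation rule are all assumptions for illustration; the paper's evolutionary-game-theoretic model is richer.

```python
import random
from collections import Counter, defaultdict

N, K_OPINIONS, ROUNDS = 50, 4, 500
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring network

opinions = [random.randrange(K_OPINIONS) for _ in range(N)]
payoff = [defaultdict(list) for _ in range(N)]   # per-opinion payoff history

for _ in range(ROUNDS):
    for i in range(N):
        j = random.choice(neighbors[i])
        r = 1.0 if opinions[i] == opinions[j] else 0.0   # trial-and-error interaction
        payoff[i][opinions[i]].append(r)
        # guiding opinion: most successful opinion in the neighbourhood
        scores = defaultdict(list)
        for k in neighbors[i] + [i]:
            for op, hist in payoff[k].items():
                scores[op].extend(hist)
        guide = max(scores, key=lambda op: sum(scores[op]) / len(scores[op]))
        # adapt behaviour: explore more when out of step with the guide
        explore = 0.02 if opinions[i] == guide else 0.3
        opinions[i] = guide if random.random() > explore else random.randrange(K_OPINIONS)

print("opinion distribution:", dict(Counter(opinions)))
```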


ACM Transactions on Autonomous and Adaptive Systems | 2017

Efficient and Robust Emergence of Norms through Heuristic Collective Learning

Jianye Hao; Jun Sun; Guangyong Chen; Zan Wang; Chao Yu; Zhong Ming

In multiagent systems, social norms serve as an important technique for regulating agents' behaviors to ensure effective coordination without a centralized controlling mechanism. In such a distributed environment, it is important to investigate how a desirable social norm can be synthesized in a bottom-up manner through repeated local interactions and learning. In this article, we propose two novel learning strategies under the collective learning framework, collective learning EV-l and collective learning EV-g, to efficiently facilitate the emergence of social norms. Extensive simulation results show that both strategies support the emergence of desirable social norms more efficiently than previous work and are applicable in a wider range of multiagent interaction scenarios. The influence of network topology is investigated, showing that the performance of all strategies is robust across different topologies. The influences of a number of other key factors (neighborhood size, action space, population size, fixed agents, and isolated subpopulations) on norm emergence are investigated as well.


Archive | 2017

Collective Learning and Information Diffusion for Efficient Emergence of Social Norms

Chao Yu; Zhen Wang; Hongtao Lv; Honglin Bao; Yapeng Li

Social norms are believed to be the main driver of the evolution and establishment of many complex systems in human societies, ranging from language lexicon systems to cultural codes of conduct. Revealing the mechanisms behind the emergence of social norms can not only provide a better understanding of how opinions, conventions and rules form and evolve in human societies, but, more importantly, enable us to build and control large-scale complex systems. In this paper, a theoretical framework is proposed to study the emergence of social norms based on agent collective learning and information diffusion in complex relationship networks. In this framework, agents learn collectively from local interactions with their neighbors using multiagent learning methods, and diffuse the learnt information along their underlying relationships. Extensive experiments test the proposed framework in different topological and environmental settings, and the results show that it is effective for the emergence of social norms in complex relationship networks. The framework emulates the opinion aggregation and knowledge transfer processes in humans, and the findings reveal significant insights into efficient mechanisms of norm emergence in complex relationship networks.
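Below is a minimal sketch of the two phases the abstract names, assuming a ring-shaped relationship network, Q-learning for the local interactions, and a simple blending rule for the diffusion step. The DIFFUSE coefficient and every other name are illustrative assumptions, not the paper's actual diffusion model.

```python
import random
from collections import Counter

N, N_ACTIONS, ROUNDS = 30, 2, 300
ALPHA, EPSILON, DIFFUSE = 0.1, 0.1, 0.3

nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # relationship network
q = [[0.0] * N_ACTIONS for _ in range(N)]

def act(i):
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[i][a])

for _ in range(ROUNDS):
    # phase 1: collective learning from local interactions
    for i in range(N):
        j = random.choice(nbrs[i])
        ai, aj = act(i), act(j)
        r = 1.0 if ai == aj else -1.0
        q[i][ai] += ALPHA * (r - q[i][ai])
        q[j][aj] += ALPHA * (r - q[j][aj])
    # phase 2: information diffusion along underlying relationships
    new_q = []
    for i in range(N):
        j = random.choice(nbrs[i])
        new_q.append([(1 - DIFFUSE) * q[i][a] + DIFFUSE * q[j][a]
                      for a in range(N_ACTIONS)])
    q = new_q

norm = Counter(max(range(N_ACTIONS), key=lambda a: q[i][a]) for i in range(N))
print("converged action counts:", dict(norm))
```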


IEEE International Conference on Agents (ICA) | 2017

Neural Learning for the Emergence of Social Norms in Multiagent Systems

Chao Yu; Yatong Chen; Hongtao Lv; Jiankang Ren; Hongwei Ge; Liang Sun

Social norms such as social rules and conventions play a pivotal role in sustaining system order by facilitating coordination and cooperation in multiagent systems. This paper studies the neural basis for the emergence of social norms in multiagent systems by modeling each agent as a spiking neural system that learns through reinforcement of stochastic synaptic transmission. A spiking neural learning model is proposed that encodes the interaction information in the input spike train of the neural network and decodes the agent's decisions from the output spike train. Learning takes place in the synapses by changing their firing rates, based on the presynaptic spike train, an eligibility trace that records the synaptic actions, and the reinforcement feedback from the interactions. Experimental results show that this basic neural level of learning is capable of sustaining the emergence of social norms, and that different learning parameters and encoding methods in the neural system can bring about various macro-level emergence phenomena. This paper takes an initial step towards understanding the correlation between neural synaptic activities and global social consistency, and towards revealing the neural mechanisms underlying agents' behavioral-level decision making in multiagent systems.
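The sketch below illustrates the ingredients the abstract names: stochastic synaptic transmission, an eligibility trace recording the synaptic actions, and a reward-modulated update. It reduces the agent to a single layer of stochastic synapses with a rate-coded input and a spike-count decision threshold; all constants and the toy reward are assumptions, not the paper's encoding.

```python
import random

N_SYN, T, ROUNDS = 20, 10, 2000     # synapses, spike-train length, interactions
ETA, DECAY = 0.05, 0.9

# release probability of each stochastic synapse (the learned quantity)
p = [0.5] * N_SYN

def episode(reward_fn):
    trace = [0.0] * N_SYN
    spikes_out = 0
    for _ in range(T):                      # presynaptic spike train (rate-coded)
        for s in range(N_SYN):
            if random.random() < 0.5:       # presynaptic spike arrives
                released = random.random() < p[s]   # stochastic transmission
                spikes_out += released
                # eligibility trace records the synaptic action
                trace[s] = DECAY * trace[s] + ((1.0 if released else 0.0) - p[s])
    decision = 1 if spikes_out > N_SYN * T * 0.25 else 0  # decode output spikes
    r = reward_fn(decision)
    for s in range(N_SYN):                  # reinforcement of stochastic transmission
        p[s] = min(0.99, max(0.01, p[s] + ETA * r * trace[s]))
    return decision

# toy task: reward deciding "1" (standing in for matching an interaction partner)
for _ in range(ROUNDS):
    episode(lambda d: 1.0 if d == 1 else -1.0)
print("mean release probability:", sum(p) / N_SYN)
```

Because the update correlates the reward with how much each synapse's release deviated from its expected behavior, synapses whose releases contributed to rewarded decisions become more reliable, which is the basic reinforcement-of-transmission idea.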


Pacific Rim International Conference on Artificial Intelligence | 2016

Adaptive Learning for Efficient Emergence of Social Norms in Networked Multiagent Systems

Chao Yu; Hongtao Lv; Sandip Sen; Fenghui Ren; Guozhen Tan

This paper investigates how norm emergence can be facilitated by agents' adaptive learning behaviors in networked multiagent systems. A general learning framework is proposed in which agents dynamically adapt their learning behaviors through social learning of their individual learning experience. Extensive verification of the proposed framework is conducted in a variety of situations, using the comprehensive evaluation criteria of efficiency, effectiveness and efficacy. Experimental results show that the adaptive learning framework is robust and efficient for evolving stable norms among agents.
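The abstract leaves the adaptation mechanism abstract; one hypothetical reading, sketched below, is that the adapted "learning behaviors" are each agent's own learning parameters (exploration and learning rates), which agents socially copy from better-performing neighbors. This interpretation and every constant are assumptions for illustration only.

```python
import random

N, ROUNDS = 30, 400
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

# each agent's learning behaviour is itself adaptable
eps   = [random.uniform(0.01, 0.5) for _ in range(N)]
alpha = [random.uniform(0.05, 0.5) for _ in range(N)]
q     = [[0.0, 0.0] for _ in range(N)]
score = [0.0] * N

for t in range(ROUNDS):
    for i in range(N):
        j = random.choice(nbrs[i])
        act = lambda k: (random.randrange(2) if random.random() < eps[k]
                         else max((0, 1), key=lambda a: q[k][a]))
        ai, aj = act(i), act(j)
        r = 1.0 if ai == aj else -1.0       # coordination payoff
        q[i][ai] += alpha[i] * (r - q[i][ai])
        score[i] += r
    if t % 20 == 0:                          # social learning of learning behaviour
        for i in range(N):
            j = random.choice(nbrs[i])
            if score[j] > score[i]:          # imitate a better-performing
                eps[i], alpha[i] = eps[j], alpha[j]   # neighbour's parameters

print("mean exploration rate after adaptation:", sum(eps) / N)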


Pacific Rim International Conference on Artificial Intelligence | 2018

Adaptively Shaping Reinforcement Learning Agents via Human Reward

Chao Yu; Dongxu Wang; Tianpei Yang; Wenxuan Zhu; Yuchen Li; Hongwei Ge; Jiankang Ren

The computational complexity of reinforcement learning algorithms increases exponentially with the size of the problem. An effective way to address this is to provide reinforcement learning agents with informationally rich human knowledge, so as to expedite the learning process. Various integration methods have been proposed to combine human reward with agent reward in reinforcement learning; however, the essential distinctions among these combination methods, and their respective advantages and disadvantages, remain unclear. In this paper, we propose an adaptive learning algorithm that selects the most suitable method from a portfolio of combination methods in an adaptive manner. We show empirically that our algorithm achieves better learning performance under various conditions than approaches using any one combination method alone. By analyzing different ways of integrating human knowledge into reinforcement learning, our work provides important insights into the role and impact of human factors in human-robot collaborative learning.
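The sketch below shows the adaptive-portfolio idea in miniature: several ways of combining environment reward with human reward, with an epsilon-greedy bandit picking among them based on achieved returns. The three combination functions, the bandit selector, and the toy episode are illustrative stand-ins, not the paper's actual portfolio or selection rule.

```python
import random

# candidate ways of combining environment reward r with human reward h
# (hypothetical examples, not the paper's exact portfolio)
METHODS = {
    "env_only": lambda r, h: r,
    "additive": lambda r, h: r + h,
    "weighted": lambda r, h: 0.5 * r + 0.5 * h,
}

value = {m: 0.0 for m in METHODS}        # bandit estimate per combination method
counts = {m: 0 for m in METHODS}

def run_episode(combine):
    """Toy stand-in for an RL episode that returns the achieved return."""
    ret = 0.0
    for _ in range(10):
        r = random.gauss(0.0, 1.0)       # noisy environment reward
        h = 1.0 if r > 0 else -1.0       # simulated human feedback
        ret += combine(r, h)
    return ret

for episode in range(500):
    # epsilon-greedy selection of the combination method (adaptive portfolio)
    if random.random() < 0.1:
        m = random.choice(list(METHODS))
    else:
        m = max(value, key=value.get)
    ret = run_episode(METHODS[m])
    counts[m] += 1
    value[m] += (ret - value[m]) / counts[m]   # incremental mean update

print("method usage:", counts)
```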


Pacific Rim International Conference on Artificial Intelligence | 2018

Decentralized Multiagent Reinforcement Learning for Efficient Robotic Control by Coordination Graphs

Chao Yu; Dongxu Wang; Jiankang Ren; Hongwei Ge; Liang Sun

Reinforcement learning is widely used to learn complex behaviors for robots. However, due to high-dimensional state/action spaces, reinforcement learning usually suffers from slow learning speeds in robotic control applications. A feasible solution is to exploit a structural decomposition of the control problem and resort to decentralized learning methods to expedite the overall learning process. In this paper, a multiagent reinforcement learning approach is proposed that enables decentralized learning of component behaviors for a robot decomposed according to a coordination graph. With this approach, the component behaviors are learned in parallel by individual reinforcement learning agents, and these agents coordinate their behaviors to solve the global control problem. The approach is validated and analyzed on two benchmark robotic control problems, and the experimental results provide evidence that it achieves better performance than approaches without decomposition.
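Here is a minimal sketch of the coordination-graph decomposition, assuming a toy three-joint chain where each graph edge carries a local Q-function and the joint action maximizes the sum of edge values. The joint names, the torque actions, the toy reward, and the brute-force maximization (variable elimination is the standard scalable alternative) are all assumptions, not the paper's benchmark setup.

```python
import itertools, random
from collections import defaultdict

JOINTS = ["hip", "knee", "ankle"]                 # component agents
EDGES = [("hip", "knee"), ("knee", "ankle")]      # chain coordination graph
ACTIONS = [-1, 0, +1]                             # torque deltas per joint
ALPHA, EPSILON = 0.2, 0.1

# one local Q-table per graph edge: Q_e(a_i, a_j)
q = {e: defaultdict(float) for e in EDGES}

def joint_action():
    """Pick the joint action maximizing the sum of edge-local Q values.
    Brute force here; variable elimination scales this to larger graphs."""
    if random.random() < EPSILON:
        return {j: random.choice(ACTIONS) for j in JOINTS}
    best, best_v = None, float("-inf")
    for combo in itertools.product(ACTIONS, repeat=len(JOINTS)):
        a = dict(zip(JOINTS, combo))
        v = sum(q[(i, j)][(a[i], a[j])] for i, j in EDGES)
        if v > best_v:
            best, best_v = a, v
    return best

def reward(a):
    """Toy control objective: neighbouring joints should move together."""
    return sum(1.0 if a[i] == a[j] else -1.0 for i, j in EDGES)

for _ in range(300):
    a = joint_action()
    r = reward(a)
    for i, j in EDGES:                    # decentralized update of each local Q
        key = (a[i], a[j])
        q[(i, j)][key] += ALPHA * (r - q[(i, j)][key])

EPSILON = 0.0                             # act greedily for the final readout
print("learned joint action:", joint_action())
```

Each edge agent only ever sees its own pair of actions, yet the summed maximization coordinates them into a coherent global behavior, which is the core appeal of the coordination-graph decomposition.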


Adaptive Agents and Multi-Agent Systems | 2015

Heuristic Collective Learning for Efficient and Robust Emergence of Social Norms

Jianye Hao; Jun Sun; Dongping Huang; Yi Cai; Chao Yu


European Conference on Artificial Intelligence | 2016

Accelerating Norm Emergence Through Hierarchical Heuristic Learning

Tianpei Yang; Zhaopeng Meng; Jianye Hao; Sandip Sen; Chao Yu

Collaboration


Dive into Chao Yu's collaborations.

Top Co-Authors

Hongtao Lv (Dalian University of Technology)
Fenghui Ren (University of Wollongong)
Hongwei Ge (Dalian University of Technology)
Jiankang Ren (Dalian University of Technology)
Liang Sun (Dalian University of Technology)
Dongping Huang (South China University of Technology)
Dongxu Wang (Dalian University of Technology)
Guozhen Tan (Dalian University of Technology)