
Publication


Featured research published by Jianye Hao.


Web Intelligence | 2012

ABiNeS: An Adaptive Bilateral Negotiating Strategy over Multiple Items

Jianye Hao; Ho-fung Leung

Multi-item negotiations pervade our daily life and usually involve two parties that share common or conflicting interests. Effective automated negotiation techniques should enable agents to adaptively adjust their behaviors depending on the characteristics of their negotiating partners and the negotiation scenario. This is complicated by the fact that negotiation agents are usually unwilling to reveal their information (strategies and preferences), to avoid being exploited during negotiation. In this paper, we propose an adaptive negotiation strategy, called ABiNeS, which negotiates effectively against different types of negotiating partners. The ABiNeS agent employs a non-exploitation point to adaptively decide when to stop exploiting the negotiating partner, and predicts the optimal offer for the partner using a reinforcement learning-based approach. Simulation results show that the ABiNeS agent exploits different negotiating partners more efficiently, and thus achieves higher overall utilities than state-of-the-art negotiation strategies across different negotiation scenarios.
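The non-exploitation point mentioned in the abstract can be illustrated with a small sketch: before some fraction of the deadline the agent holds out, and afterwards it concedes so a deal can still be reached. The following Python snippet is a minimal illustration only; the parameter names (lam, beta) and the concession form are assumptions, not the paper's exact formulas.

```python
# Sketch of a non-exploitation-point concession schedule (illustrative,
# not the ABiNeS formulas). Before normalized time lam the agent insists
# on its best offer; afterwards it concedes toward a reserve utility.

def target_utility(t, lam=0.85, u_max=1.0, u_min=0.4, beta=2.0):
    """Target utility at normalized time t in [0, 1]."""
    if t < lam:
        return u_max                    # exploit: hold the best offer
    frac = (t - lam) / (1.0 - lam)      # fraction of post-lam window used
    return u_max - (u_max - u_min) * frac ** (1.0 / beta)

if __name__ == "__main__":
    for t in (0.0, 0.5, 0.85, 0.95, 1.0):
        print(f"t={t:.2f}  target={target_utility(t):.3f}")
```

Moving lam later makes the agent more exploitative; a Boulware-style beta > 1 keeps concessions small until close to the deadline.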


Engineering Applications of Artificial Intelligence | 2014

An efficient and robust negotiating strategy in bilateral negotiations over multiple items

Jianye Hao; Songzheng Song; Ho-fung Leung; Zhong Ming

Multi-item negotiations pervade our daily life and usually involve two parties that share common or conflicting interests. Effective automated negotiation techniques should enable agents to adaptively adjust their behaviors depending on the characteristics of their negotiating partners and the negotiation scenario. This is complicated by the fact that negotiation agents are usually unwilling to reveal their information (strategies and preferences), to avoid being exploited during negotiation. In this paper, we propose an adaptive negotiation strategy, called ABiNeS, which negotiates effectively against different types of negotiating partners. The ABiNeS strategy employs a non-exploitation point to adaptively decide when to stop exploiting the negotiating partner, and predicts the optimal offer for the partner using a reinforcement learning-based approach. Simulation results show that the ABiNeS strategy exploits different types of negotiating partners more efficiently, and thus achieves higher overall payoffs than state-of-the-art strategies in negotiation tournaments. We also provide a detailed analysis, focused on its two major components, of why the ABiNeS strategy negotiates more efficiently than other existing state-of-the-art strategies. Lastly, we propose adopting the single-agent best-deviation principle to analyze the robustness of different negotiation strategies based on model checking techniques. Our analysis shows the ABiNeS strategy to be very robust against other state-of-the-art strategies under different negotiation contexts.
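The single-agent best-deviation idea at the end of the abstract can be sketched roughly as follows: treat each tournament matchup as a strategy profile and flag the profiles from which no single agent can profitably deviate. The strategy names and payoff table below are invented for illustration, and the paper performs this analysis with model checking rather than plain enumeration.

```python
from itertools import product

# Illustrative single-agent best-deviation check over made-up
# tournament payoffs: a profile is "stable" when no single agent can
# raise its own payoff by unilaterally switching strategy.

def stable_profiles(strategies, payoff):
    """payoff(profile) -> tuple of per-agent payoffs."""
    stable = []
    for prof in product(strategies, repeat=2):
        ok = True
        for i in range(2):
            for s in strategies:
                if s == prof[i]:
                    continue
                dev = list(prof)
                dev[i] = s
                if payoff(tuple(dev))[i] > payoff(prof)[i]:
                    ok = False
        if ok:
            stable.append(prof)
    return stable

# Made-up payoffs, not data from the paper.
TABLE = {
    ("ABiNeS", "ABiNeS"): (0.8, 0.8),
    ("ABiNeS", "Other"):  (0.9, 0.5),
    ("Other", "ABiNeS"):  (0.5, 0.9),
    ("Other", "Other"):   (0.6, 0.6),
}
print(stable_profiles(["ABiNeS", "Other"], TABLE.__getitem__))
# -> [('ABiNeS', 'ABiNeS')]: the only profile with no profitable deviation
```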


ACM Transactions on Autonomous and Adaptive Systems | 2015

Multiagent Reinforcement Social Learning toward Coordination in Cooperative Multiagent Systems

Jianye Hao; Ho-fung Leung; Zhong Ming

Most previous work on coordination in cooperative multiagent systems studies how two (or more) players can coordinate on Pareto-optimal Nash equilibria through fixed and repeated interactions in the context of cooperative games. In practical complex environments, however, interactions between agents can be sparse, and each agent's interacting partners may change frequently and randomly. To this end, we investigate multiagent coordination problems in cooperative environments under a social learning framework. We consider a large population of agents in which each agent interacts, in each round, with another agent randomly chosen from the population. Each agent learns its policy through repeated interactions with the rest of the agents via social learning. It is not clear a priori whether all agents can learn a consistent optimal coordination policy in such a situation. We distinguish two types of learners depending on the amount of information each agent can perceive: the individual action learner and the joint action learner. The learning performance of both types is evaluated under a number of challenging deterministic and stochastic cooperative games, and the influence of the degree of information sharing on learning performance is also investigated, a key difference from learning frameworks involving repeated interactions among fixed agents.
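A minimal sketch of the two learner types under random pairing is given below, assuming plain Q-learning with epsilon-greedy exploration and a toy two-action coordination game; none of the parameter choices come from the paper. The individual action learner (IAL) keeps values over its own actions only, while the joint action learner (JAL) keeps values over joint actions and must observe the partner's choice.

```python
import random

ACTIONS = [0, 1]

def reward(a, b):
    """Toy coordination game: matching actions pays 1, mismatching 0."""
    return 1.0 if a == b else 0.0

class IAL:
    """Individual action learner: Q-values over own actions only."""
    def __init__(self, alpha=0.1, eps=0.1):
        self.q = [0.0, 0.0]
        self.alpha, self.eps = alpha, eps
    def act(self):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])
    def update(self, a, _partner_a, r):
        self.q[a] += self.alpha * (r - self.q[a])

class JAL(IAL):
    """Joint action learner: Q-values over joint actions (optimistic greedy)."""
    def __init__(self, **kw):
        super().__init__(**kw)
        self.q = {(a, b): 0.0 for a in ACTIONS for b in ACTIONS}
    def act(self):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: max(self.q[(a, b)] for b in ACTIONS))
    def update(self, a, partner_a, r):
        self.q[(a, partner_a)] += self.alpha * (r - self.q[(a, partner_a)])

# Random pairing from the population each round, as in the framework.
pop = [JAL() for _ in range(20)]          # swap in IAL() to compare
for _ in range(2000):
    x, y = random.sample(pop, 2)
    a, b = x.act(), y.act()
    r = reward(a, b)
    x.update(a, b, r)
    y.update(b, a, r)
print("actions now favored:", [ag.act() for ag in pop[:10]])
```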


Engineering Applications of Artificial Intelligence | 2017

The dynamics of reinforcement social learning in networked cooperative multiagent systems

Jianye Hao; Dongping Huang; Yi Cai; Ho-fung Leung

Multiagent coordination in cooperative multiagent systems is one of the fundamental problems in multiagent systems and has been widely studied in the literature. In real environments, interactions among agents are usually sparse and regulated by their underlying network structure, which, however, has received relatively little attention in previous work. To this end, we systematically investigate multiagent coordination problems in cooperative environments under a networked social learning framework with four representative topologies. A networked social learning framework consists of a population of agents in which each agent interacts, in each round, with another agent randomly selected from its neighborhood. Each agent updates its policy through repeated interactions with its neighbors via both individual learning and social learning. It is not clear a priori whether all agents are able to learn a consistent optimal coordination policy. Two types of learners are proposed: the individual action learner and the joint action learner. We evaluate the learning performance of both learners extensively in different cooperative games (both single-stage and Markov). Furthermore, the influence of different factors (network topologies, types of games, and topology parameters) is investigated and analyzed, and new insights are obtained.
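The networked interaction constraint can be sketched compactly: each round, every agent plays with a partner sampled from its neighborhood rather than from the whole population. The snippet below assumes a ring lattice as a stand-in for the four topologies studied, and a toy imitation update in place of the paper's learning rules; both are illustrative assumptions.

```python
import random

def ring_neighbors(n, k=2):
    """Ring lattice adjacency: each node linked to k nearest on each side."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)}

def one_round(choices, nbrs, p_copy=0.3):
    """Toy update: each agent imitates a random neighbor with prob p_copy."""
    for i in list(choices):
        j = random.choice(nbrs[i])
        if random.random() < p_copy:
            choices[i] = choices[j]

n = 30
nbrs = ring_neighbors(n)
choices = {i: random.choice([0, 1]) for i in range(n)}
for _ in range(200):
    one_round(choices, nbrs)
print("fraction choosing action 1:", sum(choices.values()) / n)
```

Replacing ring_neighbors with a small-world, scale-free, or random graph is what varying the topology amounts to in this setting.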


Australasian Joint Conference on Artificial Intelligence | 2015

Hierarchical Learning for Emergence of Social Norms in Networked Multiagent Systems

Chao Yu; Hongtao Lv; Fenghui Ren; Honglin Bao; Jianye Hao

In this paper, a hierarchical learning framework is proposed for the emergence of social norms in networked multiagent systems. The framework features a bottom level of agents and several levels of supervisors. Agents in the bottom level interact with each other using reinforcement learning methods and report their information to their supervisors after each interaction. Supervisors then aggregate the reported information and produce guide policies by exchanging information with other supervisors. The guide policies are passed down to the subordinate agents in order to adjust their learning behaviors heuristically. Experiments are carried out to explore the efficiency of norm emergence under the proposed framework, and the results verify that learning from local interactions integrated with hierarchical supervision can be an effective mechanism for the emergence of social norms.
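A hedged sketch of the supervision loop described above: agents learn locally, a supervisor aggregates their reported values and broadcasts a guide action that subordinates follow heuristically. Averaging Q-values and following the guide with a fixed probability are assumptions made for the sketch, not the paper's exact rules.

```python
import random

class Agent:
    """Bottom-level learner with a heuristic bias toward the guide policy."""
    def __init__(self):
        self.q = {0: 0.0, 1: 0.0}
    def act(self, guide=None, eps=0.1, p_follow=0.5):
        if guide is not None and random.random() < p_follow:
            return guide                        # heuristically follow the guide
        if random.random() < eps:
            return random.choice([0, 1])
        return max(self.q, key=self.q.get)
    def update(self, a, r, alpha=0.1):
        self.q[a] += alpha * (r - self.q[a])

class Supervisor:
    """Aggregates reported Q-values; recommends the best mean action."""
    def guide(self, agents):
        mean = {a: sum(ag.q[a] for ag in agents) / len(agents) for a in (0, 1)}
        return max(mean, key=mean.get)

agents = [Agent() for _ in range(10)]
sup = Supervisor()
for _ in range(500):
    g = sup.guide(agents)
    a, b = random.sample(agents, 2)
    x, y = a.act(g), b.act(g)
    r = 1.0 if x == y else 0.0                  # norm = mutually consistent action
    a.update(x, r)
    b.update(y, r)
print("emerged norm:", sup.guide(agents))
```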


Scientific Reports | 2016

Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies.

Chao Yu; Guozhen Tan; Hongtao Lv; Zhen Wang; Jun Meng; Jianye Hao; Fenghui Ren

Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, considered to be the most successful opinion in the neighbourhood, is generated based on the principles of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people's adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and that critical factors such as the size of the opinion space and the network topology have significant influences on opinion dynamics.
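The guiding-opinion step can be sketched in a few lines: pick the opinion with the highest accumulated payoff in the neighbourhood and compare it with the focal agent's own opinion. The payoff bookkeeping below is an illustrative assumption, not the paper's evolutionary update.

```python
def guiding_opinion(neighborhood):
    """neighborhood: list of (opinion, accumulated_payoff) pairs.
    Returns the opinion with the highest total payoff, i.e. the most
    successful opinion in the neighbourhood (illustrative aggregation)."""
    totals = {}
    for opinion, payoff in neighborhood:
        totals[opinion] = totals.get(opinion, 0.0) + payoff
    return max(totals, key=totals.get)

nbhd = [("A", 2.0), ("B", 3.5), ("A", 1.0), ("B", 0.5)]
print(guiding_opinion(nbhd))   # -> "B": highest total payoff (4.0 vs 3.0)
```

An agent whose opinion repeatedly disagrees with the guiding opinion can infer that it is out of step with the emerging norm and adapt its behaviour accordingly.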


International Conference on Tools with Artificial Intelligence | 2011

Learning to Achieve Social Rationality Using Tag Mechanism in Repeated Interactions

Jianye Hao; Ho-fung Leung

In multi-agent systems, social rationality is a desirable goal to achieve in terms of maximizing the global efficiency of the system. Using tags to select partners in agent populations has been shown to promote social rationality among agents in the prisoner's dilemma game and the anti-coordination game, but the results are not quite satisfactory. We develop a tag-based learning framework for a population of agents in which each agent employs a reinforcement learning-based strategy, instead of the evolutionary learning used in previous works, to make its decisions. We evaluate this learning framework in different games, and simulation results show that it achieves better performance in terms of coordinating on socially rational outcomes than previous work.
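A toy sketch of the tag mechanism the framework builds on: partners are drawn preferentially from same-tag agents, which lets like-behaving clusters form. The matching probability and data layout below are invented for illustration; the paper pairs this mechanism with reinforcement learning rather than the evolutionary updates of earlier tag models.

```python
import random

def pick_partner(me, population, p_match=0.9):
    """With probability p_match, choose a same-tag agent when one exists;
    otherwise fall back to a uniformly random other agent."""
    same = [a for a in population if a is not me and a["tag"] == me["tag"]]
    if same and random.random() < p_match:
        return random.choice(same)
    return random.choice([a for a in population if a is not me])

pop = [{"id": i, "tag": random.choice("XY")} for i in range(10)]
me = pop[0]
partner = pick_partner(me, pop)
print(me["tag"], "->", partner["tag"])   # usually a matching tag
```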


ACM Transactions on Autonomous and Adaptive Systems | 2013

Achieving Socially Optimal Outcomes in Multiagent Systems with Reinforcement Social Learning

Jianye Hao; Ho-fung Leung

In multiagent systems, social optimality is a desirable goal to achieve in terms of maximizing the global efficiency of the system. We study the problem of coordinating on socially optimal outcomes among a population of agents, in which each agent randomly interacts with another agent from the population each round. Previous work [Hales and Edmonds 2003; Matlock and Sen 2007, 2009] mainly resorts to modifying the interaction protocol from random interactions to tag-based interactions, and focuses only on the case of symmetric games. Moreover, in previous work the agents' decision-making processes are usually based on evolutionary learning, which tends to incur high communication cost and high variance in the coordination rate. To address these problems, we propose an alternative social learning framework with two major contributions. First, we introduce an observation mechanism to reduce the amount of communication required among agents. Second, we propose that the agents' learning strategies be based on reinforcement learning techniques instead of evolutionary learning. Each agent explicitly keeps a record of its current state in its learning strategy and learns its optimal policy for each state independently. In this way, the learning performance is much more stable, and the approach is suitable for both symmetric and asymmetric games. The performance of this social learning framework is extensively evaluated on the testbed of two-player general-sum games in comparison with previous work [Hao and Leung 2011; Matlock and Sen 2007]. The influences of different factors on the learning performance of the social learning framework are investigated as well.
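To make the target concrete: a socially optimal outcome is a joint action maximizing the sum of the players' payoffs, which can differ from every Nash equilibrium. The payoff matrix below is a made-up prisoner's-dilemma-style example, not one of the paper's benchmark games.

```python
def socially_optimal(payoffs):
    """payoffs: dict mapping (row_action, col_action) -> (u1, u2).
    Returns the joint action maximizing the SUM of payoffs."""
    return max(payoffs, key=lambda joint: sum(payoffs[joint]))

# (C, C) is socially optimal (total 6) even though (D, D) is the
# unique Nash equilibrium of this game.
game = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
print(socially_optimal(game))   # -> ('C', 'C')
```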


Pacific Rim International Conference on Artificial Intelligence | 2012

Learning to achieve socially optimal solutions in general-sum games

Jianye Hao; Ho-fung Leung

During multi-agent interactions, robust strategies are needed to help agents coordinate their actions on efficient outcomes. A large body of previous work focuses on designing strategies toward the goal of Nash equilibrium under self-play, which can be extremely inefficient in many situations. On the other hand, apart from performing well under self-play, a good strategy should also respond well against opponents adopting different strategies. In this paper, we consider a particular class of opponents whose strategies are based on best-response policies, and we target the goal of social optimality. We propose a novel learning strategy, TaFSO, which can effectively steer an opponent's behavior toward socially optimal outcomes by exploiting the characteristics of best-response learners. Extensive simulations show that TaFSO achieves better performance than previous work both under self-play and against the class of best-response learners.
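The opponent class TaFSO targets can be sketched as follows: a best-response learner replies optimally to the empirical frequency of the other agent's past actions, so a strategy that commits to part of a socially optimal outcome can pull such a learner toward it. Everything below is illustrative; it is not TaFSO itself.

```python
from collections import Counter

def best_response(opp_history, payoff, actions):
    """Best reply to the empirical distribution of the opponent's actions.
    payoff: dict mapping (my_action, opp_action) -> my payoff."""
    freq = Counter(opp_history) or Counter(actions)  # uniform prior if empty
    total = sum(freq.values())
    def expected(a):
        return sum(payoff[(a, b)] * c / total for b, c in freq.items())
    return max(actions, key=expected)

# Column player's payoffs in a PD-like game. Against a committed "C"
# teacher, the best-responder still defects here (5 > 3); this is why a
# teaching strategy also needs incentives beyond plain commitment.
payoff_col = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(best_response(["C", "C", "C"], payoff_col, ["C", "D"]))   # -> 'D'
```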


International Conference on Software Engineering | 2012

Analyzing multi-agent systems with probabilistic model checking approach

Songzheng Song; Jianye Hao; Yang Liu; Jun Sun; Ho-fung Leung; Jin Song Dong

Multi-agent systems, which are composed of autonomous agents, have been successfully employed as a modeling paradigm in many scenarios. However, it is challenging to guarantee the correctness of their behaviors due to the complex nature of autonomous agents, especially when they have stochastic characteristics. In this work, we propose applying probabilistic model checking to analyze multi-agent systems. A modeling language called PMA is defined to specify such systems, and LTL properties and the logic of knowledge, combined with probabilistic requirements, are supported for analyzing system behaviors. An initial evaluation indicates the effectiveness of our current progress; meanwhile, some challenges and possible solutions are discussed as ongoing work.
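PMA's syntax is defined in the paper and is not reproduced here; instead, the sketch below shows the kind of computation probabilistic model checking rests on: the probability of eventually reaching a goal state in a discrete-time Markov chain, computed by value iteration over a toy chain. The chain and the query are invented for illustration.

```python
def reach_prob(transitions, goal, iters=1000):
    """transitions: {state: [(next_state, prob), ...]}.
    Returns {state: P(eventually reach goal)} via fixed-point iteration."""
    states = list(transitions)
    p = {s: 1.0 if s == goal else 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s != goal:
                p[s] = sum(pr * p[t] for t, pr in transitions[s])
    return p

# Toy chain: from s0 an agent succeeds (s1) w.p. 0.7 or stumbles (s2)
# w.p. 0.3; a stumbling agent retries (s0) w.p. 0.5 or gives up (s3).
chain = {
    "s0": [("s1", 0.7), ("s2", 0.3)],
    "s1": [("s1", 1.0)],                 # goal, absorbing
    "s2": [("s0", 0.5), ("s3", 0.5)],
    "s3": [("s3", 1.0)],                 # gave up, absorbing
}
print(round(reach_prob(chain, "s1")["s0"], 4))   # -> 0.8235
```

A probabilistic model checker answers queries of the form "is P(eventually goal) >= 0.9?" over models like this, but with symbolic state spaces and richer logics such as LTL.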

Collaboration


Jianye Hao's top co-authors and their affiliations.

Top Co-Authors

Ho-fung Leung
The Chinese University of Hong Kong

Siqi Chen
Maastricht University

Chao Yu
Dalian University of Technology

Karl Tuyls
University of Liverpool