
Publication


Featured research published by Yingke Chen.


Autonomous Agents and Multi-Agent Systems | 2017

Can bounded and self-interested agents be teammates? Application to planning in ad hoc teams

Muthukumaran Chandrasekaran; Prashant Doshi; Yifeng Zeng; Yingke Chen

Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of self-interested decision-making frameworks. Agents engaged in individual decision making in multiagent settings face the task of reasoning about other agents' actions, which may in turn involve reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level-0 models. For the purposes of this study, individual, self-interested decision making in multiagent settings is modeled using interactive dynamic influence diagrams (I-DIDs). These are graphical models with the benefit that they naturally offer a factored representation of the problem, allowing agents to ascribe dynamic models to others and reason about them. We demonstrate that an implication of bounded, finitely-nested reasoning by a self-interested agent is that optimal team solutions may not be obtained in cooperative settings when the agent is part of a team. We address this limitation by including models at level 0 whose solutions involve reinforcement learning. We show how the learning is integrated into planning in the context of I-DIDs. This facilitates optimal teammate behavior, and we demonstrate its applicability to ad hoc teamwork on several problem domains and configurations.
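The level-0 learning idea in the abstract above can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration, not the paper's I-DID implementation: a toy two-action coordination game, a biased partner, and Q-learning standing in for the level-0 learner. A level-1 agent then best-responds to the policy ascribed to the learned level-0 model.

```python
import random

# Illustrative sketch (not the authors' code): a two-action cooperative
# coordination game. Payoff is 1 when both agents pick the same action.
ACTIONS = [0, 1]

def payoff(a, b):
    return 1.0 if a == b else 0.0

def learn_level0_policy(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Level-0 model: epsilon-greedy Q-learning against an assumed biased
    partner, standing in for the learning-based level-0 models above."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        partner = 0 if rng.random() < 0.8 else 1  # partner prefers action 0
        q[a] += alpha * (payoff(a, partner) - q[a])
    return max(q, key=q.get)  # greedy action of the learned level-0 policy

def level1_best_response(level0_action):
    """Level-1 agent: best response to the ascribed level-0 model."""
    return max(ACTIONS, key=lambda a: payoff(a, level0_action))

a0 = learn_level0_policy()
a1 = level1_best_response(a0)
print(a0, a1, payoff(a0, a1))  # the pair coordinates on the same action
```

Because the level-0 model has learned the partner's bias rather than acting randomly, the level-1 best response coordinates with it, which is the teammate behavior the abstract describes.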


Web Intelligence | 2011

Approximating Model Equivalence in Interactive Dynamic Influence Diagrams Using Top K Policy Paths

Yifeng Zeng; Yingke Chen; Prashant Doshi

Interactive dynamic influence diagrams (I-DIDs) are graphical models for sequential decision making in uncertain settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of behavioral models ascribed to other agents over time. Previous approaches mainly cluster behaviorally equivalent models to reduce the complexity of I-DID solutions. In this paper, we seek to further reduce the model space by introducing an approximate measure of behavioral equivalence (BE) and using it to group models. Specifically, we focus on the K most probable paths in the solution of each model and compare these policy paths to determine approximate BE. We discuss the challenges in computing the top K policy paths and experimentally evaluate the performance of this heuristic approach in terms of the scalability and quality of the solution.


Journal of Artificial Intelligence Research | 2017

Decision-Theoretic Planning Under Anonymity in Agent Populations

Ekhlas Sonu; Yingke Chen; Prashant Doshi

We study the problem of self-interested planning under uncertainty in settings shared with more than a thousand other agents, each of which plans at its own individual level. We refer to such large numbers of agents as an agent population. The decision-theoretic formalism of the interactive partially observable Markov decision process (I-POMDP) is used to model the agents' self-interested planning. The first contribution of this article is a method for drastically scaling the finitely-nested I-POMDP to certain agent populations for the first time. Our method exploits two types of structure that are often exhibited by agent populations: anonymity and context-specific independence. We present a variant called the many-agent I-POMDP that models both these types of structure to plan efficiently under uncertainty in multiagent settings. In particular, the complexity of the belief update and solution in the many-agent I-POMDP is polynomial in the number of agents, compared with the exponential growth that challenges the original framework. While exploiting structure helps mitigate the curse of many agents, the well-known curse of history that afflicts I-POMDPs continues to challenge scalability in terms of the planning horizon. The second contribution of this article is an application of the branch-and-bound scheme to reduce the exponential growth of the search tree for lookahead. For this, we introduce new fast-computing upper and lower bounds for the exact value function of the many-agent I-POMDP. This speeds up the lookahead computations without trading off optimality, and reduces both memory and run-time complexity. The third contribution is a comprehensive empirical evaluation of the methods on three new problem domains: policing large protests, controlling traffic congestion at a busy intersection, and improving the AI for the popular Clash of Clans multiplayer game. We demonstrate the feasibility of exact self-interested planning in these large problems, and show that our methods for speeding up the planning are effective. Altogether, these contributions represent a principled and significant advance toward moving self-interested planning under uncertainty to real-world applications.


Knowledge and Information Systems | 2016

Approximating behavioral equivalence for scaling solutions of I-DIDs

Yifeng Zeng; Prashant Doshi; Yingke Chen; Yinghui Pan; Hua Mao; Muthukumaran Chandrasekaran

Interactive dynamic influence diagrams (I-DIDs) are a recognized graphical framework for sequential multiagent decision making under uncertainty. I-DIDs concisely represent the problem of how an individual agent should act in an uncertain environment shared with others of unknown types. I-DIDs face the challenge of solving a large number of models that are ascribed to other agents. A known method for solving I-DIDs is to group models of other agents that are behaviorally equivalent. Identifying model equivalence requires solving models and comparing their solutions, generally represented as policy trees. Because the trees grow exponentially with the number of decision time steps, comparing entire policy trees becomes intractable, thereby limiting the scalability of previous I-DID techniques. In this article, our specific approaches focus on utilizing partial policy trees for comparison and determining the distance between updated beliefs at the leaves of the trees. We propose a principled way to determine how much of the policy trees to consider, which trades off solution quality for efficiency. We further improve on this technique by allowing the partial policy trees to have paths of differing lengths. We evaluate these approaches in multiple problem domains and demonstrate significantly improved scalability over previous approaches.


Web Intelligence | 2015

Interactive Dynamic Influence Diagrams for Relational Agents

Yinghui Pan; Yingke Chen; Jing Tang; Yifeng Zeng

Interactive dynamic influence diagrams (I-DIDs) are a general decision-making framework for multiple agents that are either collaborative or competitive. The framework allows agents to plan individually at their own level in the context of other agents acting and observing in a partially observable environment. Most I-DID techniques focus on a simple setting of two agents in which one subject agent models the other agent. Extending the approaches to multiple (>2) agents becomes very complicated, since the subject agent needs to model all other agents, which are themselves modeled as I-DIDs. In this paper, we exploit potential relations among the modeled agents and avoid modeling the other agents individually from the perspective of the subject agent. We show preliminary results from this investigation and discuss further research on learning relations between multiple agents for new I-DID solutions.


International Conference on Agents and Artificial Intelligence | 2015

Speeding up Planning in Multiagent Settings Using CPU-GPU Architectures

Fadel Adoe; Yingke Chen; Prashant Doshi

Planning under uncertainty in multiagent settings is highly intractable because of history and plan space complexities. Probabilistic graphical models exploit the structure of the problem domain to mitigate the computational burden. In this article, we introduce the first parallelization of planning in multiagent settings on a CPU-GPU heterogeneous system. In particular, we focus on the algorithm for exactly solving interactive dynamic influence diagrams, a recognized graphical model for multiagent planning. Beyond parallelizing the standard Bayesian inference and the computation of decisions' expected utilities, we also solve the other agents' behavioral models in a parallel manner. The GPU-based approach provides significant speedup on two benchmark problems.


Adaptive Agents and Multi-Agent Systems | 2011

Approximating behavioral equivalence of models using top-k policy paths

Yifeng Zeng; Yingke Chen; Prashant Doshi


Adaptive Agents and Multi-Agent Systems | 2015

Iterative Online Planning in Multiagent Settings with Limited Model Spaces and PAC Guarantees

Yingke Chen; Prashant Doshi; Yifeng Zeng


International Conference on Automated Planning and Scheduling | 2015

Individual planning in agent populations: exploiting anonymity and frame-action hypergraphs

Ekhlas Sonu; Yingke Chen; Prashant Doshi


Autonomous Agents and Multi-Agent Systems | 2011

Approximating Behavioral Equivalence of Models Using Top-K Policy Paths (Extended Abstract)

Yifeng Zeng; Yingke Chen; Prashant Doshi
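The top-K policy-path idea named in the titles above can be sketched in a few lines. Everything in this sketch is an illustrative assumption, not the authors' implementation: binary observations, a fixed observation distribution, and policies represented as functions of observation histories. Two models are grouped as approximately behaviorally equivalent (BE) when their K most probable policy paths coincide.

```python
from itertools import product

def path_prob(seq):
    """Assumed observation model: observation 0 occurs w.p. 0.9 each step."""
    p = 1.0
    for o in seq:
        p *= 0.9 if o == 0 else 0.1
    return p

def policy_a(hist):
    """Toy policy: repeat the last observation (0 if none yet)."""
    return hist[-1] if hist else 0

def policy_b(hist):
    """Same as policy_a except after the rare history (1, 1)."""
    return 0 if hist == (1, 1) else policy_a(hist)

def top_k_paths(policy, prob, horizon, k):
    """Score each observation sequence's induced action path by its
    probability and keep the K most probable policy paths."""
    scored = []
    for seq in product([0, 1], repeat=horizon):
        actions = tuple(policy(seq[:t]) for t in range(horizon))
        scored.append((prob(seq), seq, actions))
    scored.sort(reverse=True)
    return {(seq, actions) for _, seq, actions in scored[:k]}

def approx_be(model_a, model_b, horizon=3, k=4):
    """Approximate BE: the two models' top-K policy paths must match."""
    pol_a, prob_a = model_a
    pol_b, prob_b = model_b
    return top_k_paths(pol_a, prob_a, horizon, k) == top_k_paths(pol_b, prob_b, horizon, k)

print(approx_be((policy_a, path_prob), (policy_b, path_prob)))       # True: top-4 paths coincide
print(approx_be((policy_a, path_prob), (policy_b, path_prob), k=8))  # False: a rare path differs
```

The two policies differ only on a low-probability history, so comparing all 8 paths separates them while comparing only the top 4 groups them together, which is the trade-off between model-space reduction and solution quality that these papers evaluate.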

Collaboration


Dive into Yingke Chen's collaborations.

Top Co-Authors

Yinghui Pan

Jiangxi University of Finance and Economics
