Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yifeng Zeng is active.

Publication


Featured research published by Yifeng Zeng.


Autonomous Agents and Multi-Agent Systems | 2009

Graphical models for interactive POMDPs: representations and solutions

Prashant Doshi; Yifeng Zeng; Qiongyu Chen

We develop new graphical representations for the problem of sequential decision making in partially observable multiagent environments, as formalized by interactive partially observable Markov decision processes (I-POMDPs). The graphical models called interactive influence diagrams (I-IDs) and their dynamic counterparts, interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent given its belief as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-IDs and I-DIDs may be applied and demonstrate their usefulness. We also show how the models may be solved using the standard algorithms that are applicable to DIDs. Solving I-DIDs exactly involves knowing the solutions of possible models of the other agents. The space of models grows exponentially with the number of time steps. We present a method of solving I-DIDs approximately by limiting the number of other agents’ candidate models at each time step to a constant. We do this by clustering models that are likely to be behaviorally equivalent and selecting a representative set from the clusters. We discuss the error bound of the approximation technique and demonstrate its empirical performance.
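
To make the model-clustering approximation concrete, the sketch below is an illustration under simplifying assumptions, not the authors' implementation: each candidate model of the other agent is represented by its policy, behaviorally equivalent models are grouped, and at most a fixed number of representatives is kept per time step. The policy encoding and the equality-based grouping are assumptions.

```python
# Illustrative sketch, not the authors' algorithm: keep at most `max_models`
# candidate models of the other agent per time step by grouping models whose
# policies agree on every observation history (behavioral equivalence) and
# picking one representative per group. Here a model is just its policy,
# a dict mapping observation history -> action (an assumed encoding).
from collections import defaultdict

def prune_to_representatives(models, max_models):
    """Group behaviorally equivalent models and return at most max_models reps."""
    groups = defaultdict(list)
    for m in models:
        # Models with identical policies are behaviorally equivalent.
        signature = tuple(sorted(m.items()))
        groups[signature].append(m)
    # One representative per equivalence class, truncated to the budget.
    representatives = [members[0] for members in groups.values()]
    return representatives[:max_models]

if __name__ == "__main__":
    # Three toy candidate models over the observation histories ("g",) and ("b",).
    m1 = {("g",): "open-left", ("b",): "listen"}
    m2 = {("g",): "open-left", ("b",): "listen"}   # behaviorally equivalent to m1
    m3 = {("g",): "listen", ("b",): "open-right"}
    print(prune_to_representatives([m1, m2, m3], max_models=2))
```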


IEEE Transactions on Knowledge and Data Engineering | 2014

Influence Spreading Path and Its Application to the Time Constrained Social Influence Maximization Problem and Beyond

Bo Liu; Gao Cong; Yifeng Zeng; Dong Xu; Yeow Meng Chee

Influence maximization is a fundamental research problem in social networks. Viral marketing, one of its applications, is to get a small number of users to adopt a product, which subsequently triggers a large cascade of further adoptions by utilizing the “word-of-mouth” effect in social networks. Time plays an important role in the influence spread from one user to another, and the time needed for a user to influence another varies. In this paper, we propose the time constrained influence maximization problem. We show that the problem is NP-hard, and prove the monotonicity and submodularity of the time constrained influence spread function. Based on this, we develop a greedy algorithm. To improve the algorithm scalability, we propose the concept of Influence Spreading Path in social networks and develop a set of new algorithms for the time constrained influence maximization problem. We further parallelize the algorithms for additional time savings. Additionally, we generalize the proposed algorithms for the conventional influence maximization problem without time constraints. All of the algorithms are evaluated on four publicly available datasets. The experimental results demonstrate the efficiency and effectiveness of the algorithms for both the conventional influence maximization problem and its time constrained version.
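
The greedy scheme enabled by monotonicity and submodularity can be sketched as follows, assuming a simplified time-delayed independent-cascade model with Monte Carlo spread estimation. The graph format, delay semantics, and parameters are illustrative assumptions rather than the paper's Influence Spreading Path algorithms.

```python
# Minimal sketch of greedy time-constrained influence maximization under an
# assumed time-delayed independent-cascade model; not the paper's algorithms.
import random

def simulate_spread(graph, seeds, deadline):
    """One cascade: graph[u] is a list of (v, prob, delay) edges.
    Returns the number of nodes activated before the deadline."""
    activation_time = {s: 0 for s in seeds}
    frontier = list(seeds)
    while frontier:
        u = frontier.pop()
        for v, prob, delay in graph.get(u, []):
            t = activation_time[u] + delay
            if v not in activation_time and t <= deadline and random.random() < prob:
                activation_time[v] = t
                frontier.append(v)
    return len(activation_time)

def greedy_time_constrained_im(graph, k, deadline, rounds=200):
    """Repeatedly add the node with the largest estimated marginal gain."""
    seeds = set()
    for _ in range(k):
        base = sum(simulate_spread(graph, seeds, deadline) for _ in range(rounds)) / rounds
        best_node, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            est = sum(simulate_spread(graph, seeds | {v}, deadline) for _ in range(rounds)) / rounds
            if est - base > best_gain:
                best_node, best_gain = v, est - base
        if best_node is None:
            break
        seeds.add(best_node)
    return seeds

if __name__ == "__main__":
    toy = {"a": [("b", 0.5, 1), ("c", 0.3, 2)], "b": [("d", 0.4, 1)], "c": [], "d": []}
    print(greedy_time_constrained_im(toy, k=2, deadline=2))
```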


Web Intelligence | 2010

Epsilon-Subjective Equivalence of Models for Interactive Dynamic Influence Diagrams

Prashant Doshi; Muthukumaran Chandrasekaran; Yifeng Zeng

Interactive dynamic influence diagrams (I-DID) are graphical models for sequential decision making in uncertain settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of candidate models ascribed to other agents, over time. Pruning behaviorally equivalent models is one way toward minimizing the model set. We seek to further reduce the complexity by additionally pruning models that are approximately subjectively equivalent. Toward this, we define subjective equivalence in terms of the distribution over the subject agent's future action-observation paths, and introduce the notion of epsilon-subjective equivalence. We present a new approximation technique that reduces the candidate model space by removing models that are epsilon-subjectively equivalent with representative ones.
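
As a rough illustration (not the paper's procedure), the sketch below assumes each candidate model has already been solved into a distribution over the subject agent's future action-observation paths, and prunes models whose distributions lie within epsilon of a retained representative. The total-variation distance used here is an assumption.

```python
# Illustrative epsilon-subjective-equivalence pruning, assuming each candidate
# model is summarized as a dict mapping action-observation path -> probability.
# The total-variation distance and the threshold semantics are assumptions.

def tv_distance(p, q):
    """Total variation distance between two path distributions."""
    paths = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in paths)

def prune_epsilon_equivalent(path_dists, epsilon):
    """Keep one representative for each group of epsilon-close distributions."""
    representatives = []
    for dist in path_dists:
        if all(tv_distance(dist, rep) > epsilon for rep in representatives):
            representatives.append(dist)
    return representatives

if __name__ == "__main__":
    d1 = {("listen", "growl-left"): 0.6, ("listen", "growl-right"): 0.4}
    d2 = {("listen", "growl-left"): 0.58, ("listen", "growl-right"): 0.42}
    d3 = {("open-right", "silence"): 1.0}
    print(len(prune_epsilon_equivalent([d1, d2, d3], epsilon=0.05)))  # 2
```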


Agents and Data Mining Interaction | 2009

Auto-Clustering Using Particle Swarm Optimization and Bacterial Foraging

Jakob Rutkowski Olesen; Jorge Cordero; Yifeng Zeng

This paper presents a hybrid approach for clustering based on particle swarm optimization (PSO) and bacterial foraging algorithms (BFA). The new method AutoCPB (Auto-Clustering based on particle bacterial foraging) makes use of autonomous agents whose primary objective is to cluster chunks of data by using simplistic collaboration. Inspired by the advances in clustering using particle swarm optimization, we suggest further improvements. Moreover, we gathered standard benchmark datasets and compared our new approach against the standard K-means algorithm, obtaining promising results. Our hybrid mechanism outperforms earlier PSO-based approaches by using simplistic communication between agents.
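
A minimal sketch of the PSO half of the idea is given below: each particle encodes a full set of cluster centroids, and fitness is the total squared distance of points to their nearest centroid. The bacterial-foraging steps and the agent communication of AutoCPB are omitted, and all constants are illustrative assumptions.

```python
# PSO-based clustering sketch (PSO component only; BFA and AutoCPB's agent
# collaboration are omitted). All parameter values are illustrative.
import random

def fitness(centroids, points):
    """Total squared distance of each point to its nearest centroid."""
    return sum(
        min(sum((p[d] - c[d]) ** 2 for d in range(len(p))) for c in centroids)
        for p in points)

def pso_cluster(points, k, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    # A particle is a flat list of k * dim centroid coordinates.
    particles = [[random.uniform(lo[d % dim], hi[d % dim]) for d in range(k * dim)]
                 for _ in range(n_particles)]
    velocities = [[0.0] * (k * dim) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]

    def score(flat):
        cents = [flat[i * dim:(i + 1) * dim] for i in range(k)]
        return fitness(cents, points)

    pbest_val = [score(p) for p in particles]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i, part in enumerate(particles):
            for d in range(k * dim):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - part[d])
                                    + c2 * r2 * (gbest[d] - part[d]))
                part[d] += velocities[i][d]
            val = score(part)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = part[:], val
                if val < gbest_val:
                    gbest, gbest_val = part[:], val
    return [gbest[i * dim:(i + 1) * dim] for i in range(k)]

if __name__ == "__main__":
    pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
    print(pso_cluster(pts, k=2))
```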


Applied Artificial Intelligence | 2009

Experiments with Online Reinforcement Learning in Real-Time Strategy Games

Kresten Toftgaard Andersen; Yifeng Zeng; Dennis Dahl Christensen; Dung Tran

Real-time strategy (RTS) games provide a challenging platform for implementing online reinforcement learning (RL) techniques in a real application. The computer, as one game player, monitors its opponents' (human or other computers') strategies and then updates its own policy using RL methods. In this article, we first examine the suitability of applying online RL in various computer games. The applicability of reinforcement learning depends on both the RL complexity and the game features. We then propose a multi-layer framework for implementing online RL in an RTS game. The framework significantly reduces the RL computational complexity by decomposing the state space in a hierarchical manner. We implement an RTS game, Tank General, and perform a thorough test of the proposed framework. We consider three typical profiles of RTS game players and compare two basic RL techniques applied in the game. The results show the effectiveness of our proposed framework and shed light on relevant issues in using online RL in RTS games.
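
The state-space decomposition idea can be sketched as separate tabular Q-learners, each operating on a coarse abstraction of the raw game state. The layer split, feature names, and parameter values below are assumptions for illustration and do not reproduce the Tank General implementation.

```python
# Sketch of layered online RL via state abstraction: each layer keeps its own
# small tabular Q-learner over a coarse view of the game state. Feature names
# and the strategic/tactical split are hypothetical.
import random
from collections import defaultdict

class QLayer:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# The strategic layer sees only coarse features; the tactical layer sees local combat.
strategic = QLayer(actions=["expand", "attack", "defend"])
tactical = QLayer(actions=["advance", "retreat", "hold"])

def abstract_strategic(game):   # hypothetical coarse abstraction of the game state
    return (game["own_bases"] > game["enemy_bases"], game["army_ratio"] > 1.0)

def abstract_tactical(game):    # hypothetical local abstraction of the game state
    return (game["enemy_in_range"], game["own_hp_bucket"])
```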


Adaptive Agents and Multi-Agent Systems | 2007

Graphical models for online solutions to interactive POMDPs

Prashant Doshi; Yifeng Zeng; Qiongyu Chen

We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.


Granular Computing | 2009

Classification using Markov blanket for feature selection

Yifeng Zeng; Jian Luo; Shuyuan Lin

Selecting relevant features becomes necessary when a large data set is used in a classification task. It produces a tractable number of features that are sufficient for, and may even improve, classification performance. This paper studies a statistical Markov blanket induction algorithm for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.
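
A simplified sketch of the filter-then-classify pipeline is shown below: an IAMB-style grow/shrink search for an approximate Markov blanket of the class variable on discrete data, whose output would then feed any off-the-shelf classifier. The conditional mutual information estimator and the threshold are assumptions and not necessarily the induction algorithm studied in the paper.

```python
# Sketch of Markov blanket feature selection on discrete data (IAMB-style
# grow/shrink with empirical conditional mutual information). The threshold
# and estimator are illustrative assumptions.
import math
from collections import Counter

def cond_mutual_info(rows, x, y, given):
    """Empirical I(X; Y | Z) for discrete columns (indices) of `rows`."""
    n = len(rows)
    joint = Counter((r[x], r[y], tuple(r[g] for g in given)) for r in rows)
    xz = Counter((r[x], tuple(r[g] for g in given)) for r in rows)
    yz = Counter((r[y], tuple(r[g] for g in given)) for r in rows)
    z = Counter(tuple(r[g] for g in given) for r in rows)
    mi = 0.0
    for (xv, yv, zv), c in joint.items():
        mi += (c / n) * math.log((c * z[zv]) / (xz[(xv, zv)] * yz[(yv, zv)]))
    return mi

def iamb_blanket(rows, target, features, threshold=0.01):
    blanket = []
    # Grow: add the feature most informative about the target given the blanket.
    changed = True
    while changed:
        changed = False
        remaining = [f for f in features if f not in blanket]
        if not remaining:
            break
        best = max(remaining, key=lambda f: cond_mutual_info(rows, f, target, blanket))
        if cond_mutual_info(rows, best, target, blanket) > threshold:
            blanket.append(best)
            changed = True
    # Shrink: drop features that became conditionally independent of the target.
    for f in list(blanket):
        rest = [g for g in blanket if g != f]
        if cond_mutual_info(rows, f, target, rest) <= threshold:
            blanket.remove(f)
    return blanket

if __name__ == "__main__":
    # Toy rows: columns 0-2 are features, column 3 is the class label.
    rows = [(0, 1, 0, 0), (1, 1, 1, 1), (0, 0, 0, 0), (1, 0, 1, 1)] * 5
    print(iamb_blanket(rows, target=3, features=[0, 1, 2]))
```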


Expert Systems With Applications | 2016

Maximizing influence under influence loss constraint in social networks

Yifeng Zeng; Xuefeng Chen; Gao Cong; Shengchao Qin; Jing Tang; Yanping Xiang

Highlights: we formulate a new influence maximization problem in social networks; propose a new algorithm to solve it; improve the algorithm for greater efficiency; and evaluate the methods on four real-world social networks.

Influence maximization is a fundamental research problem in social networks. Viral marketing, one of its applications, aims to select a small set of users to adopt a product, so that the word-of-mouth effect can subsequently trigger a large cascade of further adoption in social networks. The problem of influence maximization is to select a set of K nodes from a social network so that the spread of influence is maximized over the network. Previous research on mining top-K influential nodes assumes that all of the selected K nodes can propagate the influence as expected. However, some of the selected nodes may not function well in practice, which leads to influence loss of the top-K nodes. In this paper, we study an alternative influence maximization problem which is naturally motivated by the reliability constraint of nodes in social networks. We aim to find the top-K influential nodes given a threshold of influence loss due to the failure of a subset of R nodes.


Granular Computing | 2008

Refinement of Bayesian network structures upon new data

Yifeng Zeng; Yanping Xiang; Saulius Pacekajus

Refinement of Bayesian network structures using new data is becoming increasingly relevant. Some work has been done in this area; however, one problem has not yet been considered: what to do when the new data has fewer or more attributes than the existing model. In both cases the data contains important knowledge, and every effort must be made to extract it. In this paper, we propose a general merging algorithm to deal with situations where the new data has a different set of attributes. The merging algorithm updates the sufficient statistics when new data is received. It expands the flexibility of Bayesian network structure refinement methods. The new algorithm is evaluated in extensive experiments, and its applications are discussed at length.
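
As an illustration of the kind of bookkeeping involved (assumptions, not the paper's merging algorithm), the sketch below stores sufficient statistics as counts over attribute-value assignments and merges a new batch whose attribute set differs from the existing model, padding missing attributes so later refinement can marginalize over them.

```python
# Sketch of merging sufficient statistics when a new data batch adds or omits
# attributes. Missing attributes on either side are recorded as None so that
# later structure refinement can marginalize them out. Illustrative only.
from collections import Counter

def batch_counts(rows, attributes, all_attributes):
    """Count assignments, padding attributes absent from this batch with None."""
    counts = Counter()
    for row in rows:
        record = {a: v for a, v in zip(attributes, row)}
        counts[tuple(record.get(a) for a in all_attributes)] += 1
    return counts

def merge_statistics(old_counts, old_attrs, new_rows, new_attrs):
    all_attrs = sorted(set(old_attrs) | set(new_attrs))
    # Re-key the existing counts onto the union attribute set.
    merged = Counter()
    for assignment, c in old_counts.items():
        record = dict(zip(old_attrs, assignment))
        merged[tuple(record.get(a) for a in all_attrs)] += c
    merged.update(batch_counts(new_rows, new_attrs, all_attrs))
    return merged, all_attrs

if __name__ == "__main__":
    old, old_attrs = Counter({("yes", "high"): 10, ("no", "low"): 5}), ["fever", "risk"]
    new_rows, new_attrs = [("yes", "high", "cough"), ("no", "low", "none")], ["fever", "risk", "symptom"]
    print(merge_statistics(old, old_attrs, new_rows, new_attrs))
```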


Knowledge Discovery and Data Mining | 2008

A decomposition algorithm for learning Bayesian network structures from data

Yifeng Zeng; Jorge Cordero Hernandez

Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for structure construction without having to learn the complete network as a whole. The new learning algorithm first finds local components from the data and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.
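
A rough sketch of the decompose-then-join idea, under illustrative assumptions: variables are grouped into local components using thresholded pairwise mutual information, a structure would then be learned inside each component, and the component structures joined afterwards. The threshold, the MI estimator, and the omission of the within-component learner and joining step are simplifications, not the paper's algorithm.

```python
# Sketch of finding local components for decomposed Bayesian network learning:
# link variables whose empirical mutual information exceeds a threshold and
# take connected components. Illustrative assumptions throughout.
import math
from collections import Counter

def mutual_info(rows, i, j):
    """Empirical mutual information between discrete columns i and j."""
    n = len(rows)
    pij = Counter((r[i], r[j]) for r in rows)
    pi = Counter(r[i] for r in rows)
    pj = Counter(r[j] for r in rows)
    return sum((c / n) * math.log(c * n / (pi[a] * pj[b])) for (a, b), c in pij.items())

def local_components(rows, n_vars, threshold=0.05):
    """Connected components of the graph linking strongly dependent variables."""
    adj = {v: set() for v in range(n_vars)}
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if mutual_info(rows, i, j) > threshold:
                adj[i].add(j)
                adj[j].add(i)
    seen, components = set(), []
    for v in range(n_vars):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components

if __name__ == "__main__":
    # Columns 0 and 1 are perfectly dependent; column 2 is unrelated.
    data = [(x, x, 0) for x in (0, 1)] * 20
    print(local_components(data, n_vars=3))  # [{0, 1}, {2}]
```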

Collaboration


Dive into Yifeng Zeng's collaborations.

Top Co-Authors

Yinghui Pan

Jiangxi University of Finance and Economics

Yanping Xiang

National University of Singapore

Xuefeng Chen

University of Electronic Science and Technology of China

Gao Cong

Nanyang Technological University
