Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian Swenson is active.

Publication


Featured research published by Brian Swenson.


Asilomar Conference on Signals, Systems and Computers | 2012

Distributed learning in large-scale multi-agent games: A modified fictitious play approach

Brian Swenson; Soummya Kar; João M. F. Xavier

The paper concerns the development of distributed equilibrium learning strategies in large-scale multi-agent games with repeated play. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm that relies only on local neighborhood information exchange for agent policy updates. Under the assumption of identical agent utility functions that are permutation invariant, the proposed distributed algorithm leads to convergence of the network-averaged empirical play histories to a subset of the Nash equilibria, designated as the consensus equilibria. Applications of the proposed distributed framework to strategy design problems encountered in large-scale traffic networks are discussed.
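
For intuition, a minimal Python sketch of the idea follows; it is not the paper's exact algorithm. Each agent keeps a local estimate of the network-averaged empirical play, mixes it with its graph neighbors, and best responds to that estimate. The ring graph, the coordination-style utility (match the most frequent action), and all parameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): fictitious-play-style learning in
# which each agent best responds to a locally held, neighbor-averaged empirical history.
import numpy as np

n_agents, n_actions, T = 6, 3, 200

# Doubly stochastic consensus weights for a ring graph (self plus two neighbors).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1.0 / 3.0

# f[i] is agent i's local estimate of the network-averaged empirical distribution.
f = np.full((n_agents, n_actions), 1.0 / n_actions)

for t in range(1, T + 1):
    # Coordination-style best response: play the action with the largest mass in the
    # locally held estimate of the averaged empirical play.
    actions = f.argmax(axis=1)
    played = np.eye(n_actions)[actions]
    # Fictitious-play step (running average of play) followed by a consensus mixing
    # step restricted to graph neighbors.
    f = W @ (f + (played - f) / t)

print("local estimates of the averaged empirical play:\n", np.round(f, 3))
```

The 1/t step keeps each row of f a running average of play, while the mixing matrix W confines information exchange to graph neighbors.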


SIAM Journal on Control and Optimization | 2017

Robustness Properties in Fictitious-Play-Type Algorithms

Brian Swenson; Soummya Kar; João M. F. Xavier; David S. Leslie

Fictitious play (FP) is a canonical game-theoretic learning algorithm which has been deployed extensively in decentralized control scenarios. However, standard treatments of FP, and of many other game-theoretic models, assume rather idealistic conditions which rarely hold in realistic control scenarios. This paper considers a broad class of best-response learning algorithms that we refer to as FP-type algorithms. In such an algorithm, given some (possibly limited) information about the history of actions, each player forecasts future play and chooses a (myopic) best response strategy given that forecast. We provide a unified analysis of the behavior of FP-type algorithms under an important class of perturbations, thus demonstrating robustness to deviations from the idealistic operating conditions that have previously been assumed. This robustness result is then used to derive convergence results for two control-relevant relaxations of standard game-theoretic applications: distributed (network-base...
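
As a rough illustration of an FP-type update, and not of the paper's analysis, the sketch below has two players forecast each other from empirical histories, perturb the forecast with decaying noise (a stand-in for the kind of vanishing deviation a robustness analysis would allow), and play myopic best responses. The 2x2 coordination game and the noise model are assumptions.

```python
# Minimal sketch of an FP-type update with a perturbed forecast (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[2.0, 0.0],      # payoff matrix of a symmetric coordination game;
              [0.0, 1.0]])     # both players best respond through A
emp = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]   # empirical play of each player

for t in range(1, 500):
    acts = []
    for i in (0, 1):
        # Forecast = opponent's empirical frequencies plus a decaying perturbation.
        forecast = emp[1 - i] + rng.normal(scale=1.0 / t, size=2)
        forecast = np.clip(forecast, 1e-9, None)
        forecast /= forecast.sum()
        acts.append(int(np.argmax(A @ forecast)))     # myopic best response
    for i in (0, 1):
        emp[i] += (np.eye(2)[acts[i]] - emp[i]) / t   # update empirical history

print("empirical frequencies:", np.round(emp[0], 3), np.round(emp[1], 3))
```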


European Signal Processing Conference | 2015

A computationally efficient implementation of fictitious play in a distributed setting

Brian Swenson; Soummya Kar; João M. F. Xavier

The paper deals with distributed learning of Nash equilibria in games with a large number of players. The classical fictitious play (FP) algorithm is impractical in large games due to demanding communication requirements and high computational complexity. A variant of FP is presented that aims to mitigate both issues. Complexity is mitigated by use of a computationally efficient Monte-Carlo based best response rule. Demanding communication problems are mitigated by implementing the algorithm in a network-based distributed setting, in which player-to-player communication is restricted to local subsets of neighboring players as determined by a (possibly sparse, but connected) preassigned communication graph. Results are demonstrated via a simulation example.
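
The sketch below illustrates what a Monte Carlo best response can look like; it is not the rule used in the paper. Rather than enumerating the joint action space of the other players, it scores each own-action against a handful of joint actions sampled from the others' empirical frequencies. The congestion-style utility and the sample budget are illustrative assumptions.

```python
# Minimal sketch of a Monte Carlo (sampling-based) best response (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

n_players, n_actions, n_samples = 20, 4, 50
# Empirical action frequencies of the other players (each row sums to one).
emp = rng.dirichlet(np.ones(n_actions), size=n_players - 1)

def utility(my_action, others_actions):
    # Congestion-style payoff: prefer actions chosen by few other players.
    load = np.count_nonzero(others_actions == my_action)
    return 1.0 / (1.0 + load)

def monte_carlo_best_response(emp, n_samples):
    estimates = np.zeros(n_actions)
    for _ in range(n_samples):
        # One joint sample of the other players' actions.
        sample = np.array([rng.choice(n_actions, p=p) for p in emp])
        for a in range(n_actions):
            estimates[a] += utility(a, sample)
    return int(np.argmax(estimates / n_samples))

print("approximate best response:", monte_carlo_best_response(emp, n_samples))
```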


International Joint Conference on Neural Networks | 2016

On the design of phase locked loop oscillatory neural networks: Mitigation of transmission delay effects.

Rongye Shi; Thomas C. Jackson; Brian Swenson; Soummya Kar; Lawrence T. Pileggi

This paper introduces a novel design of phase locked loop (PLL) based oscillatory neural networks (ONNs) to mitigate the frequency clustering phenomenon caused by transmission delays in real systems. Theoretical analysis of the ONN reveals that transmission delays can produce frequency clustering that leads to synchronization and convergence failure. This paper describes the redesign of ONN dynamics and associated system-level architecture to achieve robustness. Specifically, we first demonstrate that using the phase information of zero-crossing points of inputs as the PLL error signal enables the ONN dynamical model to correctly synchronize under uniform transmission delays. A Type-II PLL based ONN architecture is shown via simulation to provide this property in hardware. Furthermore, to accommodate non-uniform transmission delays in hardware, a phase synchronization technique is proposed that is shown to provide the correct synchronization behavior.
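
The toy simulation below is not the paper's PLL-based ONN model; it is only a generic sketch of phase-coupled oscillators with a uniform transmission delay, meant to show where delayed phase information enters the update. The Kuramoto-style coupling and all parameters are assumptions.

```python
# Generic sketch: phase oscillators with delayed mean-field coupling (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

n, K, dt, delay_steps, T = 8, 1.5, 0.01, 20, 5000
omega = 2 * np.pi * 1.0                          # common natural frequency (rad/s)
theta = rng.uniform(0, 2 * np.pi, size=n)        # initial phases
history = [theta.copy()] * delay_steps           # buffer of past phases

for _ in range(T):
    delayed = history[0]                         # phases from delay_steps updates ago
    coupling = np.array([np.mean(np.sin(delayed - theta[i])) for i in range(n)])
    theta = theta + dt * (omega + K * coupling)  # Euler step of the phase dynamics
    history.append(theta.copy())
    history.pop(0)

# Kuramoto order parameter: magnitude near 1 indicates the phases are locked together.
order = np.abs(np.exp(1j * theta).mean())
print("order parameter:", round(float(order), 3))
```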


IEEE Transactions on Automatic Control | 2017

Single Sample Fictitious Play

Brian Swenson; Soummya Kar; João M. F. Xavier

This paper is concerned with distributed learning and optimization in large-scale settings. The well-known fictitious play (FP) algorithm has been shown to achieve Nash equilibrium learning in certain classes of multiagent games. However, FP can be computationally difficult to implement when the number of players is large. Sampled FP (SFP) is a variant of FP that mitigates the computational difficulties arising in FP by using a Monte Carlo (i.e., sampling-based) approach. Despite its computational advantages, a shortcoming of SFP is that the number of samples that must be drawn at each iteration grows without bound as the algorithm progresses. In this paper, we propose single sample FP (SSFP), a variant of SFP in which only one sample needs to be drawn in each round of the algorithm. Convergence of SSFP to the set of Nash equilibria is proven. Simulation results show that the performance of SSFP is comparable to that of SFP, despite drawing far fewer samples.
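
The following sketch, which is not the paper's exact algorithm, illustrates the single-sample idea: one joint sample is drawn per round and folded into a running (stochastic-approximation) utility estimate that players best respond to, rather than drawing a growing batch of samples each round. The anti-coordination utility and parameters are illustrative assumptions.

```python
# Minimal single-sample sketch: one Monte Carlo sample per round, averaged over time.
import numpy as np

rng = np.random.default_rng(4)

n_players, n_actions, T = 10, 3, 2000
emp = np.full((n_players, n_actions), 1.0 / n_actions)   # empirical frequencies
u_hat = np.zeros((n_players, n_actions))                  # running utility estimates

def utility(player, action, joint):
    # Anti-coordination: prefer actions that few other players chose this round.
    others = np.delete(joint, player)
    return 1.0 / (1.0 + np.count_nonzero(others == action))

for t in range(1, T + 1):
    # ONE joint sample of everyone's play, drawn from current empirical frequencies.
    sample = np.array([rng.choice(n_actions, p=emp[i]) for i in range(n_players)])
    for i in range(n_players):
        one_round = np.array([utility(i, a, sample) for a in range(n_actions)])
        u_hat[i] += (one_round - u_hat[i]) / t            # single-sample averaging
    actions = u_hat.argmax(axis=1)                         # best respond to the estimates
    emp += (np.eye(n_actions)[actions] - emp) / t          # fictitious-play update

print("final action profile:", actions)
```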


Conference on Decision and Control | 2016

Learning pure-strategy Nash equilibria in networked multi-agent systems with uncertainty

Ceyhun Eksin; Brian Swenson; Soummya Kar; Alejandro Ribeiro

A multi-agent system with uncertainty consists of a set of agents, each intent on maximizing a local utility function that depends on the actions of the other agents and on a state of the world, while holding only partial and differing information about those actions and that state. When agents must repeatedly make decisions in these settings, we propose a general class of decision-making dynamics based on the fictitious play (FP) algorithm with inertia. We show convergence of the proposed algorithm to pure Nash equilibria for the class of weakly acyclic games (a structural assumption on local utility functions that guarantees existence of pure Nash equilibria), as long as the agents' predictions of their local utilities satisfy a mild asymptotic accuracy condition. Using the results on the general dynamics, the paper proposes distributed implementations of the FP algorithm with inertia suited to networked multi-agent systems and shows their convergence to pure Nash equilibria. Numerical examples corroborate the analysis and provide insight into convergence time.
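
For a concrete picture of inertia, here is a minimal sketch that is not the paper's dynamics and omits the uncertainty model: each player occasionally repeats its previous action and otherwise best responds to the opponent's empirical frequencies. The identical-interest 2x2 game and the inertia probability are assumptions.

```python
# Minimal sketch of fictitious play with inertia in a 2x2 coordination game.
import numpy as np

rng = np.random.default_rng(5)

A = np.array([[1.0, 0.0],     # identical-interest coordination game
              [0.0, 1.0]])
inertia = 0.3
emp = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
actions = [0, 1]              # start miscoordinated

for t in range(1, 300):
    for i in (0, 1):
        if rng.random() < inertia:
            continue                                  # inertia: keep the previous action
        belief = emp[1 - i]                           # estimate of the other's play
        actions[i] = int(np.argmax(A @ belief))       # myopic best response
    for i in (0, 1):
        emp[i] += (np.eye(2)[actions[i]] - emp[i]) / t

# With inertia, play typically settles at a pure Nash equilibrium of the game.
print("final actions:", actions)
```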


Asilomar Conference on Signals, Systems and Computers | 2016

Computationally efficient learning in large-scale games: Sampled fictitious play revisited

Brian Swenson; Soummya Kar; João M. F. Xavier

Fictitious Play (FP) is a popular algorithm known to achieve Nash equilibrium learning in certain large-scale games. However, for games with many players, the computational demands of the FP algorithm can be prohibitive. Sampled FP (SFP) is a variant of FP that mitigates computational demands via a Monte Carlo approach. While SFP does mitigate the complexity of FP, it can be shown that SFP still uses information in an inefficient manner. The paper generalizes the SFP convergence result and studies a stochastic-approximation-based variant that significantly reduces the complexity of SFP.


Asilomar Conference on Signals, Systems and Computers | 2014

Game-theoretic learning in a distributed-information setting: Distributed convergence to mean-centric equilibria

Brian Swenson; Soummya Kar; João M. F. Xavier

The paper considers distributed learning in large-scale games via fictitious-play type algorithms. Given a preassigned communication graph structure for information exchange among the players, this paper studies a distributed implementation of the Empirical Centroid Fictitious Play (ECFP) algorithm that is well-suited to large-scale games in terms of complexity and memory requirements. It is shown that the distributed algorithm converges to an equilibrium set denoted as the mean-centric equilibria (MCE) for a reasonably large class of games.
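
A minimal sketch of the empirical-centroid idea follows; it is not the distributed ECFP algorithm itself. Each player best responds as if all opponents played the single centroid (average) of the empirical distributions, so memory does not grow with the number of players. The symmetric congestion-style utility is an illustrative assumption.

```python
# Minimal sketch: best responding to the centroid of the empirical distributions.
import numpy as np

n_players, n_actions, T = 50, 4, 500
emp = np.full((n_players, n_actions), 1.0 / n_actions)

def expected_utility(action, centroid):
    # Expected congestion payoff if every other player independently plays the centroid.
    expected_load = (n_players - 1) * centroid[action]
    return 1.0 / (1.0 + expected_load)

for t in range(1, T + 1):
    centroid = emp.mean(axis=0)            # one distribution, O(n_actions) memory
    payoffs = np.array([expected_utility(a, centroid) for a in range(n_actions)])
    actions = np.full(n_players, int(np.argmax(payoffs)))   # identical best response
    emp += (np.eye(n_actions)[actions] - emp) / t            # empirical-history update

print("centroid of empirical play:", np.round(emp.mean(axis=0), 3))
```

In this symmetric congestion example the centroid settles near the uniform distribution, while the per-player memory stays independent of the number of players.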


IEEE Transactions on Signal Processing | 2015

Empirical Centroid Fictitious Play: An Approach for Distributed Learning in Multi-Agent Games

Brian Swenson; Soummya Kar; João M. F. Xavier


arXiv: Optimization and Control | 2015

A Computationally Efficient Implementation of Fictitious Play for Large-Scale Games.

Brian Swenson; Soummya Kar; João M. F. Xavier

Collaboration


Dive into Brian Swenson's collaborations.

Top Co-Authors

Soummya Kar (Carnegie Mellon University)
João M. F. Xavier (Instituto Superior Técnico)
Alejandro Ribeiro (University of Pennsylvania)
Ceyhun Eksin (Georgia Institute of Technology)
Rongye Shi (Carnegie Mellon University)
Thomas C. Jackson (Carnegie Mellon University)