Sam Ganzfried
Carnegie Mellon University
Publications
Featured research published by Sam Ganzfried.
AI Magazine | 2015
Stefano V. Albrecht; J. Christopher Beck; David L. Buckeridge; Adi Botea; Cornelia Caragea; Chi-Hung Chi; Theodoros Damoulas; Bistra Dilkina; Eric Eaton; Pooyan Fazli; Sam Ganzfried; C. Lee Giles; Sébastien Guillet; Robert C. Holte; Frank Hutter; Thorsten Koch; Matteo Leonetti; Marius Lindauer; Marlos C. Machado; Yuri Malitsky; Gary F. Marcus; Sebastiaan Meijer; Francesca Rossi; Arash Shaban-Nejad; Sylvie Thiébaux; Manuela M. Veloso; Toby Walsh; Can Wang; Jie Zhang; Yu Zheng
We review the 2014 International Planning Competition (IPC-2014), the eighth in a series of competitions starting in 1998. IPC-2014 was held in three separate parts to assess the state of the art in three prominent areas of planning research: the deterministic (classical) part (IPCD), the learning part (IPCL), and the probabilistic part (IPPC). Each part evaluated planning systems in ways that pushed the edge of existing planner performance by introducing new challenges, novel tasks, or both. The competition again surpassed its predecessor in the number of competitors, highlighting the competition's central role in shaping the landscape of ongoing developments in evaluating planning systems.
Games | 2018
Sam Ganzfried; Austin Nowak; Joannier Pinales
Creating strong agents for games with more than two players is a major open problem in AI. Common approaches are based on approximating game-theoretic solution concepts such as Nash equilibrium, which have strong theoretical guarantees in two-player zero-sum games, but no guarantees in non-zero-sum games or in games with more than two players. We describe an agent that is able to defeat a variety of realistic opponents using an exact Nash equilibrium strategy in a three-player imperfect-information game. This shows that, despite a lack of theoretical guarantees, agents based on Nash equilibrium strategies can be successful in multiplayer games after all.
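For context on the underlying computation: the sketch below (an illustration for this listing, not the paper's solver) computes a Nash equilibrium of a two-player zero-sum matrix game by linear programming with SciPy, the setting in which the theoretical guarantees mentioned above actually hold.

# A minimal sketch, assuming a two-player zero-sum matrix game:
# solve for the row player's equilibrium strategy with an LP.
import numpy as np
from scipy.optimize import linprog

def zero_sum_nash(A):
    """Return (row strategy, game value) for row-player payoff matrix A."""
    m, n = A.shape
    # Variables z = [x_1..x_m, v]; maximize v, i.e. minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every opponent column j: v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy probabilities must sum to one.
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the unique equilibrium mixes uniformly, value 0.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
strategy, value = zero_sum_nash(rps)
print(strategy, value)  # ~[1/3, 1/3, 1/3], ~0.0

Beyond two players or outside zero-sum, no analogous LP guarantee exists, which is what makes the paper's empirical result notable.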
Games | 2017
Sam Ganzfried; Farzana Yusuf
Algorithms for equilibrium computation generally make no attempt to ensure that the computed strategies are understandable by humans. For instance, the strategies for the strongest poker agents are represented as massive binary files. In many situations, we would like to compute strategies that can actually be implemented by humans, who may have computational limitations and may only be able to remember a small number of features or components of the computed strategies. For example, a human poker player or military leader may not have access to large precomputed tables when making real-time strategic decisions. We study poker games where private information distributions can be arbitrary (i.e., players are dealt cards from different distributions). This captures the phenomenon in large real poker games where, at various points in a hand, players have different distributions of hand strength, obtained by applying Bayes' rule to the history of play in the hand thus far. We create a large training set of game instances and solutions by randomly selecting the information probabilities, and we present algorithms that learn from the training instances to perform well in games with unseen distributions. We are able to conclude several new fundamental rules about poker strategy that can be easily implemented by humans.
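A rough illustration of the generate-solve-learn pipeline the abstract describes. This toy uses the classic "clairvoyance" bluffing game (equilibrium bluff fraction b / (P + 2b) for pot P and bet b) and a scikit-learn decision tree; both are stand-ins chosen here, not the paper's games or learning algorithms.

# Sketch: sample random game instances, label each with its equilibrium
# solution, then learn a shallow, human-readable rule from the data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Generate random instances: pot size P and bet size b.
P = rng.uniform(1.0, 10.0, size=5000)
b = rng.uniform(0.5, 10.0, size=5000)
X = np.column_stack([P, b])

# "Solve" each instance: in the clairvoyance game the equilibrium
# fraction of bluffs among the bettor's bets is b / (P + 2b).
y = b / (P + 2.0 * b)

# Fit a shallow tree so the learned rule stays human-implementable.
model = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Evaluate on unseen instances.
P_test = rng.uniform(1.0, 10.0, size=1000)
b_test = rng.uniform(0.5, 10.0, size=1000)
pred = model.predict(np.column_stack([P_test, b_test]))
true = b_test / (P_test + 2.0 * b_test)
print("mean abs error:", np.abs(pred - true).mean())

Capping the tree depth is the point: a depth-3 tree is a rule a person can memorize, which mirrors the paper's goal of strategies implementable without precomputed tables.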
AI Magazine | 2017
Sam Ganzfried
The first human versus computer no-limit Texas hold 'em competition took place from April 24–May 8, 2015 at Rivers Casino in Pittsburgh, PA. In this article I present my thoughts on the competition design, agent architecture, and lessons learned. Several problematic hands from the competition are highlighted that reveal the most glaring weaknesses of the agent. The research underlying the agent is placed within a broader context in the AI research community, and several avenues for future study are mapped out.
Advances in Computer Games | 2011
Sam Ganzfried
We develop a new approach that computes approximate equilibrium strategies in Jotto, a popular word game. Jotto is an extremely large two-player game of imperfect information; its game tree has many orders of magnitude more states than games previously studied, including no-limit Texas Hold’em. To address the fact that the game is so large, we propose a novel strategy representation called oracular form, in which we do not explicitly represent a strategy, but rather appeal to an oracle that quickly outputs a sample move from the strategy’s distribution. Our overall approach is based on an extension of the fictitious play algorithm to this oracular setting. We demonstrate the superiority of our computed strategies over the strategies computed by a benchmark algorithm, both in terms of head-to-head and worst-case performance.
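A loose analogue of the oracular idea, for intuition only: fictitious play on a small zero-sum matrix game where the average strategy is never materialized as a probability vector; instead a simple "oracle" samples a move from the stored history of pure best responses. The real Jotto oracle is far more involved; everything below is an assumption of this sketch.

# Sketch: fictitious play with the mixed strategy kept implicit.
import numpy as np

def fictitious_play(A, iterations=10000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_hist = [0]                      # histories of pure best responses
    col_hist = [0]
    row_counts = np.zeros(m); row_counts[0] += 1
    col_counts = np.zeros(n); col_counts[0] += 1
    for t in range(1, iterations):
        # Best respond to the opponent's empirical (average) strategy.
        br_row = int(np.argmax(A @ (col_counts / t)))
        br_col = int(np.argmin((row_counts / t) @ A))
        row_hist.append(br_row); row_counts[br_row] += 1
        col_hist.append(br_col); col_counts[br_col] += 1

    def row_oracle():
        # Sample a move from the average strategy without ever
        # representing that strategy explicitly.
        return row_hist[rng.integers(len(row_hist))]

    return row_counts / iterations, row_oracle

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
avg, oracle = fictitious_play(rps)
print(avg)       # empirical averages converge toward [1/3, 1/3, 1/3]
print(oracle())  # one sampled move from the implicit average strategy

In a game the size of Jotto the explicit probability vector would be intractable to store, which is the motivation for sampling from an implicit representation.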
Adaptive Agents and Multi-Agent Systems | 2011
Sam Ganzfried; Tuomas Sandholm
Adaptive Agents and Multi-Agent Systems | 2015
Noam Brown; Sam Ganzfried; Tuomas Sandholm
Adaptive Agents and Multi-Agent Systems | 2008
Sam Ganzfried; Tuomas Sandholm
Adaptive Agents and Multi-Agent Systems | 2012
Sam Ganzfried; Tuomas Sandholm; Kevin Waugh
National Conference on Artificial Intelligence | 2014
Sam Ganzfried; Tuomas Sandholm