Amy Greenwald
Brown University
Publications
Featured research published by Amy Greenwald.
Computer Networks | 2000
Jeffrey O. Kephart; James E. Hanson; Amy Greenwald
We envision a future in which the global economy and the Internet will merge, evolving into an information economy bustling with billions of economically motivated software agents that exchange information goods and services with humans and other agents. Economic software agents will differ in important ways from their human counterparts, and these differences may have significant beneficial or harmful effects upon the global economy. It is therefore important to consider the economic incentives and behaviors of economic software agents, and to use every available means to anticipate their collective interactions. We survey research conducted by the Information Economies group at IBM Research aimed at understanding collective interactions among agents that dynamically price information goods or services. In particular, we study the potential impact of widespread shopbot usage on prices, the price dynamics that may ensue from various mixtures of automated pricing agents (or “pricebots”), the potential use of machine-learning algorithms to improve profits, and more generally the interplay among learning, optimization, and dynamics in agent-based information economies. These studies illustrate both beneficial and harmful collective behaviors that can arise in such systems, suggest possible cures for some of the undesired phenomena, and raise fundamental theoretical issues, particularly in the realms of multi-agent learning and dynamic optimization.
IEEE Internet Computing | 2001
Amy Greenwald; Peter Stone
Designing agents that can bid in online simultaneous auctions is a complex task. The authors describe task-specific details and strategies of agents in a trading agent competition. More specifically, the article describes the task-specific details of, and the general motivations behind, the four top-scoring agents. First, we discuss general strategies used by most of the participating agents. We then report on the strategies of the four top-placing agents. We conclude with suggestions for improving the design of future trading agent competitions.
Electronic Markets | 2003
Michael P. Wellman; Amy Greenwald; Peter Stone; Peter R. Wurman
The 2001 Trading Agent Competition was the second in a series of events aiming to shed light on research issues in automating trading strategies. Based on a challenging market scenario in the domain of travel shopping, the competition presents agents with difficult issues in bidding strategy, market prediction, and resource allocation. Entrants in 2001 demonstrated substantial progress over the prior year, with the overall level of competence exhibited suggesting that trading in online markets is a viable domain for highly autonomous agents.
Electronic Commerce | 1999
Amy Greenwald; Jeffrey O. Kephart; Gerald Tesauro
Shopbots are software agents that automatically query multiple sellers on the Internet to gather information about prices and other attributes of consumer goods and services. Rapidly increasing in number and sophistication, shopbots are helping more and more buyers minimize expenditure and maximize satisfaction. In response at least partly to this trend, it is anticipated that sellers will come to rely on pricebots, automated agents that employ price-setting algorithms in an attempt to maximize profits. This paper reaches toward an understanding of strategic pricebot dynamics. More specifically, this paper is a comparative study of four candidate price-setting strategies that differ in informational and computational requirements: game-theoretic pricing (GT), myoptimal pricing (MY), derivative following (DF), and Q-learning (Q). In an effort to gain insights into the tradeoffs between practicality and profitability of pricebot algorithms, the dynamic behavior that arises among homogeneous and heterogeneous collections of pricebots and shopbot-assisted buyers is analyzed and simulated. In homogeneous settings, when all pricebots use the same pricing algorithm, DFs outperform MYs and GTs. Investigation of heterogeneous collections of pricebots, however, reveals an incentive for individual DFs to deviate to MY or GT. The Q strategy exhibits superior performance to all the others since it learns to predict and account for the long-term consequences of its actions. Although the current implementation of Q is impractically expensive, techniques for achieving similar performance at greatly reduced computational cost are under investigation.
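The derivative-following (DF) strategy surveyed in this abstract can be illustrated with a minimal sketch: the pricebot keeps moving its price in the same direction as long as profit improves, and reverses direction otherwise. The step size and price floor here are illustrative choices, not values from the paper.

```python
def derivative_follower(prices, profits, step=0.05):
    """One derivative-following (DF) price update.

    prices:  (previous_price, current_price) last two posted prices.
    profits: (previous_profit, current_profit) the resulting profits.
    Continue in the direction that raised profit last period;
    otherwise reverse. Returns the next price to post.
    """
    p_prev, p_curr = prices
    r_prev, r_curr = profits
    direction = 1.0 if p_curr >= p_prev else -1.0
    if r_curr < r_prev:          # the last move hurt profit: turn around
        direction = -direction
    return max(0.0, p_curr + direction * step)

# Example: the price rose from 1.00 to 1.05 and profit fell,
# so DF lowers the price again.
next_price = derivative_follower((1.00, 1.05), (0.40, 0.35))
```

Note that DF needs no model of buyers or competitors, only its own realized profits, which is what makes it so cheap relative to GT or Q in the study's informational-requirements comparison.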
Sigecom Exchanges | 2004
Michael Benisch; Amy Greenwald; Ioanna Grypari; Roger Lederman; Victor Naroditskiy; Michael Carl Tschantz
The paper describes the design of the agent BOTTICELLI, a finalist in the 2003 Trading Agent Competition in Supply Chain Management (TAC SCM). In TAC SCM, a simulated computer manufacturing scenario, BOTTICELLI competes with other agents to win customer orders and negotiates with suppliers to procure the components necessary to complete its orders. We formalize subproblems that dictate BOTTICELLI's behavior. Stochastic programming approaches to bidding and scheduling are developed in an attempt to solve these problems optimally. In addition, we describe greedy methods that yield useful approximations. Test results compare the performance and computational efficiency of these two techniques.
electronic commerce | 2001
Amy Greenwald; Justin A. Boyan
This paper introduces RoxyBot, one of the top-scoring agents in the First International Trading Agent Competition. A TAC agent simulates one vision of future travel agents: it represents a set of clients in simultaneous auctions, trading complementary (e.g., airline tickets and hotel reservations) and substitutable (e.g., symphony and theater tickets) goods. RoxyBot faced two key technical challenges in TAC: (i) allocation---assigning purchased goods to clients at the end of a game instance so as to maximize total client utility, and (ii) completion---determining the optimal quantity of each resource to buy and sell given client preferences, current holdings, and market prices. For the dimensions of TAC, an optimal solution to the allocation problem is tractable, and RoxyBot uses a search algorithm based on A* to produce optimal allocations. An optimal solution to the completion problem is also tractable, but in the interest of minimizing bidding cycle time, RoxyBot solves the completion problem using beam search, producing approximately optimal completions. RoxyBot's completer relies on an innovative data structure called a priceline.
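Stripped to its essentials, the allocation subproblem described above is a combinatorial assignment: decide which client receives each purchased good so that total client utility is maximized. A toy brute-force sketch for the special case of one good per client follows; RoxyBot's actual A*-based search handles bundles and is far more scalable, so this is illustration only.

```python
from itertools import permutations

def best_allocation(utilities):
    """Exhaustively assign goods to clients, one good each,
    maximizing total utility.

    utilities[c][g] = utility client c derives from good g.
    Returns (best_total, assignment), where assignment[c] is the
    index of the good given to client c.
    """
    n = len(utilities)
    best_total, best_assign = float("-inf"), None
    for perm in permutations(range(n)):   # perm[c] = good for client c
        total = sum(utilities[c][perm[c]] for c in range(n))
        if total > best_total:
            best_total, best_assign = total, perm
    return best_total, best_assign

# Two clients, two goods (say, a flight and a hotel night):
total, assign = best_allocation([[5, 2], [3, 4]])
```

Enumerating all n! assignments is fine for a toy but explodes quickly; informed search such as A* prunes most of this space, which is why it suits TAC's real-time bidding cycle.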
Adaptive Agents and Multi-Agent Systems | 2004
Michael Benisch; Amy Greenwald; Ioanna Grypari; Reeva M. Lederman; Victor Naroditskiy; Michael Carl Tschantz
The paper describes the architecture of Brown University's agent, Botticelli, a finalist in the 2003 Trading Agent Competition in Supply Chain Management (TAC SCM). In TAC SCM, a simulated computer manufacturing scenario, Botticelli competes with other agents to win customer orders and negotiates with suppliers to procure the components necessary to complete its orders. In this paper, two subproblems that dictate Botticelli's behavior are formalized: bidding and scheduling. Mathematical programming approaches are applied in an attempt to solve these problems optimally. In addition, greedy methods that yield useful approximations are described. Test results compare the performance and computational efficiency of these alternative techniques.
Ai Magazine | 2003
Amy Greenwald
This article summarizes 16 agent strategies that were designed for the 2002 Trading Agent Competition. Agent architects use numerous general-purpose AI techniques, including machine learning, planning, partially observable Markov decision processes, Monte Carlo simulations, and multi-agent systems. Ultimately, the most successful agents were primarily heuristic based and domain specific.
Games and Economic Behavior | 2001
Amy Greenwald; Eric J. Friedman; Scott Shenker
This paper describes the results of simulation experiments performed on a suite of learning algorithms. We focus on games in network contexts. These are contexts in which (1) agents have very limited information about the game: players do not know their own (or any other agent's) payoff function; they merely observe the outcome of their play; and (2) play can be extremely asynchronous: players update their strategies at very different rates. There are many proposed learning algorithms in the literature. We choose a small sampling of such algorithms and use numerical simulation to explore the nature of asymptotic play. In particular, we explore the extent to which the asymptotic play depends on three factors, namely: limited information, asynchronous play, and the degree of responsiveness of the learning algorithm.
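The "limited information" setting described in this abstract means each learner sees only the payoff of the action it just played. A minimal sketch of one such payoff-only learner (epsilon-greedy averaging, chosen here for illustration rather than drawn from the paper's suite of algorithms):

```python
import random

class PayoffOnlyLearner:
    """Learns from realized payoffs alone: it never sees a payoff
    function or other agents' actions. The epsilon parameter is an
    illustrative stand-in for the responsiveness knob studied here."""

    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.totals = [0.0] * n_actions   # cumulative payoff per action
        self.counts = [0] * n_actions     # plays per action

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.totals))        # explore
        means = [t / c if c else float("inf")                  # untried first
                 for t, c in zip(self.totals, self.counts)]
        return max(range(len(means)), key=means.__getitem__)   # exploit

    def observe(self, action, payoff):
        self.totals[action] += payoff
        self.counts[action] += 1

# Asynchronous play falls out naturally: the learner updates whenever
# its own outcome arrives, on any schedule, independently of others.
learner = PayoffOnlyLearner(n_actions=2)
for _ in range(200):
    a = learner.choose()
    payoff = 1.0 if a == 1 else 0.2   # stationary environment for the demo
    learner.observe(a, payoff)
```

Against a stationary environment this learner concentrates play on the better action; the paper's question is what such dynamics converge to when every player is simultaneously adapting.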
Electronic Commerce Research | 2005
Peter Stone; Amy Greenwald
This article summarizes the bidding algorithms developed for the on-line Trading Agent Competition held in July 2000 in Boston. At its heart, the article describes 12 of the 22 agent strategies in terms of (i) bidding strategy, (ii) allocation strategy, (iii) special approaches, and (iv) team motivations. The common and distinctive features of these agent strategies are highlighted. In addition, experimental results are presented that give some insights as to why the top-scoring agents’ strategies were most effective.