Samuel Barrett
University of Texas at Austin
Publications
Featured research published by Samuel Barrett.
robot soccer world cup | 2012
Samuel Barrett; Katie Genter; Yuchen He; Todd Hester; Piyush Khandelwal; Jacob Menashe; Peter Stone
In 2012, UT Austin Villa claimed Standard Platform League championships at both the US Open and RoboCup 2012 in Mexico City. This paper describes the key contributions that led to the team’s victories. First, UT Austin Villa’s code base was developed on a solid foundation with a flexible architecture that enables easy testing and debugging of code. Next, the vision code was updated this year to take advantage of the dual cameras and better processor of the new V4 Nao robots. To improve localization, a custom localization simulator allowed us to implement and test a full team solution to the challenge of both goals being the same color. The 2012 team made use of Northern Bites’ port of B-Human’s walk engine, combined with novel kicks from the walk. Finally, new behaviors and strategies take advantage of opportunities for the robot to take time to set up for a long kick, but kick very quickly when opponent robots are nearby. The combination of these contributions led to the team’s victories in 2012.
international conference on multimedia and expo | 2009
Samuel Barrett; Ran Chang; Xiaojun Qi
We propose a fuzzy combined learning approach to construct a relevance feedback-based content-based image retrieval (CBIR) system for efficient image search. Our system uses a composite short-term and long-term learning approach to learn the semantics of an image. Specifically, the short-term learning technique applies fuzzy support vector machine (FSVM) learning on user labeled and additional chosen image blocks to learn a more accurate boundary for separating the relevant and irrelevant blocks at each feedback iteration. The long-term learning technique applies a novel semantic clustering technique to adaptively learn and update the semantic concepts at each query session. A predictive algorithm is also applied to find images most semantically related to the query based on the semantic clusters generated in the long-term learning. Our extensive experimental results demonstrate that the proposed system outperforms several state-of-the-art peer systems in terms of both retrieval precision and storage space.
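The short-term step can be illustrated with a minimal sketch, assuming scikit-learn and hypothetical toy data: fuzzy memberships are approximated as per-sample weights on a standard SVM, with lower weights for the additional blocks chosen by nearest neighbor (the paper's exact fuzzy metric is not reproduced here).

```python
import numpy as np
from sklearn.svm import SVC

# Toy feature vectors for image blocks (hypothetical data).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8],
              [0.7, 0.3], [0.3, 0.7]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = relevant block, 0 = irrelevant

# Fuzzy memberships: user-labeled blocks get full weight; additional
# blocks chosen by the nearest-neighbor mechanism get lower confidence.
fuzzy_weight = np.array([1.0, 1.0, 1.0, 1.0, 0.6, 0.6])

# An FSVM can be approximated by passing the fuzzy memberships as
# per-sample weights to an ordinary SVM.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y, sample_weight=fuzzy_weight)

print(clf.predict([[0.85, 0.15]]))  # classified as relevant (1)
```

Re-fitting with updated weights at each feedback iteration corresponds to refining the decision boundary as the user labels more blocks.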
intelligent robots and systems | 2014
Patrick MacAlpine; Katie Genter; Samuel Barrett; Peter Stone
As the prevalence of autonomous agents grows, so does the number of interactions between these agents. Therefore, it is desirable for these agents to be capable of banding together with previously unknown teammates towards a common goal: to collaborate without pre-coordination. While past research on ad hoc teamwork has focused mainly on theoretical treatments and empirical studies in relatively simple domains, the long-term vision has been to enable robots and other autonomous agents to exhibit the sort of flexibility and adaptability on complex tasks that people do, for example when they play games of “pick-up” basketball or soccer. This paper introduces a series of pick-up robot soccer experiments that were carried out in three different leagues at the international RoboCup competition in 2013. In all cases, agents from different labs were put on teams with no pre-coordination. This paper introduces the structure of these experiments, describes the strategies used by UT Austin Villa in each challenge, and analyzes the results. The paper’s main contribution is the introduction of a new large-scale ad hoc teamwork testbed that can serve as a starting point for future experimental ad hoc teamwork research.
robot soccer world cup | 2012
Aijun Bai; Xiaoping Chen; Patrick MacAlpine; Daniel Urieli; Samuel Barrett; Peter Stone
The RoboCup simulation league is traditionally the league with the largest number of teams participating, both at the international competitions and worldwide. 2011 was no exception, with a total of 39 teams entering the 2D and 3D simulation competitions. This paper presents the champions of the competitions, WrightEagle from the University of Science and Technology of China in the 2D competition, and UT Austin Villa from the University of Texas at Austin in the 3D competition.
Journal of intelligent systems | 2011
Xiaojun Qi; Samuel Barrett; Ran Chang
We propose to combine short‐term block‐based fuzzy support vector machine (FSVM) learning and long‐term dynamic semantic clustering (DSC) learning to bridge the semantic gap in content‐based image retrieval. The short‐term learning addresses the small sample problem by incorporating additional image blocks to enlarge the training set. Specifically, it applies the nearest neighbor mechanism to choose additional similar blocks. A fuzzy metric is computed to measure the fidelity of the actual class information of the additional blocks. The FSVM is finally applied on the enlarged training set to learn a more accurate decision boundary for classifying images. The long‐term learning addresses the large storage problem by building dynamic semantic clusters to remember the semantics learned during all query sessions. Specifically, it applies a cluster‐image weighting algorithm to find the images most semantically related to the query. It then applies a DSC technique to adaptively learn and update the semantic categories. Our extensive experimental results demonstrate that the proposed short‐term, long‐term, and collaborative learning methods outperform their peer methods when erroneous feedback, resulting from the inherent subjectivity of judging relevance, user laziness, or maliciousness, is involved. The collaborative learning system achieves better retrieval precision and requires significantly less storage space than its peers.
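The long-term idea, remembering which images were judged relevant together across query sessions and using that memory to rank related images, can be sketched as follows. This is a simplified stand-in for the paper's DSC and cluster-image weighting, using hypothetical image identifiers and plain co-relevance counts.

```python
from collections import defaultdict

class DynamicSemanticClusters:
    """Simplified sketch of long-term learning: remember which images
    were judged relevant together across query sessions."""

    def __init__(self):
        # Co-relevance counts between pairs of image ids.
        self.co_counts = defaultdict(int)

    def update(self, relevant_images):
        # After a session, strengthen links among co-relevant images.
        for a in relevant_images:
            for b in relevant_images:
                if a != b:
                    self.co_counts[(a, b)] += 1

    def related(self, image_id, top_k=3):
        # Rank the images most semantically related to image_id.
        scores = {b: c for (a, b), c in self.co_counts.items()
                  if a == image_id}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

dsc = DynamicSemanticClusters()
dsc.update({"img1", "img2", "img3"})
dsc.update({"img1", "img2"})
print(dsc.related("img1"))  # img2 ranks first (co-relevant twice)
```

Storing only aggregate counts rather than full session logs reflects the storage-saving motivation of the long-term component.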
Archive | 2015
Samuel Barrett
This book is devoted to the encounter and interaction of agents such as robots with other agents and describes how they cooperate with their previously unknown teammates, forming an Ad Hoc team. It presents a new algorithm, PLASTIC, that allows agents to quickly adapt to new teammates by reusing knowledge learned from previous teammates. PLASTIC is instantiated in both a model-based approach, PLASTIC-Model and a policy-based approach, PLASTIC-Policy. In addition to reusing knowledge learned from previous teammates, PLASTIC also allows users to provide expert-knowledge and can use transfer learning (such as the new Two Stage Transfer algorithm) to quickly create models of new teammates when it has some information about its new teammates. The effectiveness of the algorithm is demonstrated on three domains, ranging from multi-armed bandits to simulated robot soccer games.
robot soccer world cup | 2013
Samuel Barrett; Katie Genter; Yuchen He; Todd Hester; Piyush Khandelwal; Jacob Menashe; Peter Stone
In 2012, UT Austin Villa claimed the Standard Platform League championships at both the US Open and the 2012 RoboCup competition held in Mexico City. This paper describes the code release associated with the team and discusses the key contributions of the release. This release will enable teams entering the Standard Platform League and researchers using the Naos to have a solid foundation from which to start their work, as well as provide useful modules to existing researchers and RoboCup teams. We expect it to be of particular interest because it includes the architecture, logic modules, and debugging tools that led to the team’s success in 2012. This architecture is designed to be flexible and robust while enabling easy testing and debugging of code. The vision code was designed for easy use in creating color tables and debugging problems. A custom localization simulator that is included permits fast testing of full team scenarios. Also included is the kick engine, which runs through a number of static joint poses and adapts them to the current location of the ball. This code release will provide a solid foundation for new RoboCup teams and for researchers who use the Naos.
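The pose-adaptation idea behind the kick engine, a sequence of static joint poses shifted according to where the ball actually sits, can be sketched roughly as below. Joint names, gains, and the linear offset rule are all hypothetical; the released engine's actual adaptation is more involved.

```python
# Sketch: a kick as a sequence of static joint poses, each shifted by
# an offset derived from the ball's position relative to a nominal
# kick point. Joint names and the gain are hypothetical.

NOMINAL_BALL = (0.18, 0.05)  # meters, relative to the support foot

KICK_POSES = [
    {"hip_pitch": -0.3, "knee_pitch": 0.6, "ankle_pitch": -0.3},  # wind-up
    {"hip_pitch": -0.6, "knee_pitch": 1.0, "ankle_pitch": -0.4},  # back-swing
    {"hip_pitch": 0.2,  "knee_pitch": 0.3, "ankle_pitch": -0.1},  # strike
]

def adapt_pose(pose, ball_xy, gain=1.5):
    # Shift the pitch joints proportionally to how far the ball sits
    # from the nominal kick point along the forward (x) axis.
    dx = ball_xy[0] - NOMINAL_BALL[0]
    return {joint: angle + gain * dx if "pitch" in joint else angle
            for joint, angle in pose.items()}

# Ball is 2 cm further forward than nominal: pitch joints shift slightly.
adapted = [adapt_pose(pose, (0.20, 0.05)) for pose in KICK_POSES]
print(adapted[0]["hip_pitch"])  # approximately -0.27
```

Precomputing the static poses and applying only a small runtime correction keeps the kick fast while still tolerating imperfect ball positioning.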
Archive | 2015
Samuel Barrett
This chapter introduces the Planning and Learning to Adapt Swiftly to Teammates to Improve Cooperation (PLASTIC) algorithms that enable an ad hoc team agent to cooperate with a variety of different teammates. One might think that the most appropriate thing for an ad hoc team agent to do is to “fit in” with its team by following the same behavior as its teammates. However, if the teammates’ behaviors are suboptimal, this approach will limit how much the ad hoc agent can help its team. Therefore, in this book, we adopt the approach of learning about different teammates and deciding how to act by leveraging this knowledge. This approach allows an ad hoc agent to reason about how well its knowledge of past teammates predicts its current teammates’ actions as well as to convert this knowledge into the actions it needs to take to accomplish its goals. If the knowledge of prior teammates accurately predicts the current teammates and the ad hoc agent is given enough time to plan, this approach will lead to optimal performance of the ad hoc agent, helping its team achieve the best possible outcome. Note that this may not be the optimal performance of any team, but it is optimal for the ad hoc agent given that the behaviors of its teammates are fixed.
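The core loop described above, reasoning about how well knowledge of past teammates predicts the current ones, can be sketched as a belief update over previously learned teammate models. The model names, action probabilities, and the multiplicative reweighting below are illustrative, not the book's exact formulation.

```python
# Sketch of PLASTIC's model-selection idea: maintain a belief over
# previously learned teammate models, reweight it by how well each
# model predicts observed teammate actions, and act on the most
# likely model. All models and probabilities here are hypothetical.

class TeammateModel:
    def __init__(self, name, action_probs):
        self.name = name
        self.action_probs = action_probs  # P(action | teammate type)

    def prob(self, action):
        return self.action_probs.get(action, 1e-6)

def update_beliefs(beliefs, models, observed_action):
    # Bayesian-style update: reweight each model by its prediction of
    # the observed action, then normalize.
    new = {m.name: beliefs[m.name] * m.prob(observed_action)
           for m in models}
    total = sum(new.values())
    return {name: weight / total for name, weight in new.items()}

models = [
    TeammateModel("aggressive", {"kick": 0.8, "pass": 0.2}),
    TeammateModel("cautious",   {"kick": 0.2, "pass": 0.8}),
]
beliefs = {"aggressive": 0.5, "cautious": 0.5}

for action in ["pass", "pass", "kick", "pass"]:
    beliefs = update_beliefs(beliefs, models, action)

best = max(beliefs, key=beliefs.get)
print(best)  # "cautious" dominates after mostly "pass" observations
```

Once a model dominates the belief, the ad hoc agent plans its own actions as a best response to that model's predicted behavior rather than simply imitating its teammates.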
Archive | 2015
Samuel Barrett
Chapter 5 introduced the algorithms used in this book for solving ad hoc teamwork problems. Before moving on to the empirical analysis of these algorithms in Chap. 7, it is useful to first investigate the theoretical attributes of PLASTIC. Our analysis focuses on whether the multi-armed bandit domain described in Sect. 3.2.1 is tractable for PLASTIC–Model. We chose to analyze the bandit domain because of its simplicity, which lends itself to more complete theoretical analysis. In addition, the bandit domain is interesting due to its use of communication, which is an important aspect of ad hoc teamwork that is not explored in the other domains. Note that we do not investigate the model learning aspect of PLASTIC–Model. Instead, we analyze whether PLASTIC–Model can select from a set of known models (from \(\text {HandCodedKnowledge}\)) and plan its response to these models in polynomial time.
AI Matters | 2014
Peter Stone; Patrick MacAlpine; Katie Genter; Samuel Barrett
This article describes the first “Drop-in Challenge” games, which were held at RoboCup 2013 in Eindhoven, The Netherlands. Typically, RoboCup soccer games pit a team of robots programmed by one university against a team programmed by another. As such, the teamwork strategies can all be “programmed in.” However, as robots and their agents become more capable of long-term autonomy, there will be increasing opportunities and need for “ad hoc teamwork,” in which agents must cooperate without prior coordination. The drop-in challenge at RoboCup provides an opportunity to study ad hoc teamwork by randomly selecting different RoboCup teams to each contribute one robot to a team that plays against another such team. The robots must be programmed to work with previously unknown teammates.