Publications

Featured research published by James MacGlashan.


Intelligent User Interfaces | 2007

Interactive visual clustering

Marie desJardins; James MacGlashan; Julia Ferraioli

Interactive Visual Clustering (IVC) is a novel method that allows a user to explore relational data sets interactively, in order to produce a clustering that satisfies their objectives. IVC combines spring-embedded graph layout with user interaction and constrained clustering. Experimental results on several synthetic and real-world data sets show that IVC yields better clustering performance than alternative methods.
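The spring-embedded layout underlying IVC can be sketched as a basic force-directed iteration, in which connected nodes attract like springs and all pairs repel, so densely connected groups settle near each other on screen. This is a hypothetical minimal version for illustration, not the paper's implementation; the function name and parameters are assumptions.

```python
import math
import random

def spring_layout(n, edges, iters=300, k=1.0, lr=0.05, seed=0):
    """Minimal force-directed (spring-embedded) layout sketch.

    Connected nodes attract like springs; every pair repels, so
    densely connected groups of nodes drift together on screen.
    """
    rnd = random.Random(seed)
    pos = [[rnd.gauss(0, 1), rnd.gauss(0, 1)] for _ in range(n)]
    for _ in range(iters):
        disp = [[0.0, 0.0] for _ in range(n)]
        # Pairwise repulsion (magnitude k^2 / distance).
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) + 1e-9
                disp[i][0] += dx / d * (k * k / d)
                disp[i][1] += dy / d * (k * k / d)
        # Spring attraction along edges (magnitude distance^2 / k).
        for i, j in edges:
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy) + 1e-9
            fx, fy = dx / d * (d * d / k), dy / d * (d * d / k)
            disp[i][0] -= fx; disp[i][1] -= fy
            disp[j][0] += fx; disp[j][1] += fy
        for i in range(n):
            pos[i][0] += lr * disp[i][0]
            pos[i][1] += lr * disp[i][1]
    return pos
```

Running this on two disjoint triangles places each triangle's nodes close together and pushes the two triangles apart, which is the visual grouping a user would then refine with clustering constraints.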


Autonomous Agents and Multi-Agent Systems | 2016

Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning

Robert Tyler Loftin; Bei Peng; James MacGlashan; Michael L. Littman; Matthew E. Taylor; Jeff Huang; David L. Roberts

For real-world applications, virtual agents must be able to learn new behaviors from non-technical users. Positive and negative feedback are an intuitive way to train new behaviors, and existing work has presented algorithms for learning from such feedback. That work, however, treats feedback as numeric reward to be maximized, and assumes that all trainers provide feedback in the same way. In this work, we show that users can provide feedback in many different ways, which we describe as “training strategies.” Specifically, users may not always give explicit feedback in response to an action, and may be more likely to provide explicit reward than explicit punishment, or vice versa, such that the lack of feedback itself conveys information about the behavior. We present a probabilistic model of trainer feedback that describes how a trainer chooses to provide explicit reward and/or explicit punishment and, based on this model, develop two novel learning algorithms (SABL and I-SABL) which take trainer strategy into account, and can therefore learn from cases where no feedback is provided. Through online user studies we demonstrate that these algorithms can learn with less feedback than algorithms based on a numerical interpretation of feedback. Furthermore, we conduct an empirical analysis of the training strategies employed by users, and of factors that can affect their choice of strategy.
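The key observation, that the absence of feedback is itself evidence, can be illustrated with a single Bayesian update over whether the last action was correct. The probabilities and names below are illustrative assumptions, not the actual SABL/I-SABL formulation.

```python
def update_belief(p_correct, feedback, eps=0.1, mu_plus=0.2, mu_minus=0.6):
    """One Bayesian update on P(the last action was correct).

    feedback -- "reward", "punish", or None (trainer said nothing)
    eps      -- chance the trainer errs when giving explicit feedback
    mu_plus  -- P(silence | action correct)
    mu_minus -- P(silence | action incorrect)
    Here mu_minus > mu_plus models a trainer who tends to withhold
    punishment, so silence is weak evidence the action was wrong.
    """
    if feedback == "reward":
        like_c = (1 - mu_plus) * (1 - eps)   # correct action, rewarded
        like_w = (1 - mu_minus) * eps        # wrong action, mistakenly rewarded
    elif feedback == "punish":
        like_c = (1 - mu_plus) * eps         # correct action, mistakenly punished
        like_w = (1 - mu_minus) * (1 - eps)  # wrong action, punished
    else:                                    # no explicit feedback at all
        like_c, like_w = mu_plus, mu_minus
    num = like_c * p_correct
    return num / (num + like_w * (1 - p_correct))
```

Under these assumed strategy parameters, explicit reward pushes the belief up sharply, explicit punishment pushes it down, and silence alone nudges it downward, so a learner can still extract signal from trials where the trainer says nothing.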


Robotics: Science and Systems | 2015

Grounding English Commands to Reward Functions.

James MacGlashan; Monica Babeş-Vroman; Marie desJardins; Michael L. Littman; Smaranda Muresan; Shawn Squire; Stefanie Tellex; Dilip Arumugam; Lei Yang

As intelligent robots become more prevalent, methods to make interaction with the robots more accessible are increasingly important. Communicating the tasks that a person wants the robot to carry out via natural language, and training the robot to ground the natural language through demonstration, are especially appealing approaches for interaction, since they do not require a technical background. However, existing approaches map natural language commands to robot command languages that directly express the sequence of actions the robot should execute. This sequence is often specific to a particular situation and does not generalize to new situations. To address this problem, we present a system that grounds natural language commands into reward functions using demonstrations of different natural language commands being carried out in the environment. Because language is grounded to reward functions, rather than explicit actions that the robot can perform, commands can be high-level, carried out in novel environments autonomously, and even transferred to other robots with different action spaces. We demonstrate that our learned model can be both generalized to novel environments and transferred to a robot with a different action space than the action space used during training.
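The generalization argument can be sketched in a toy form: once a command is grounded to a reward over state features rather than a fixed action sequence, the same command can be re-planned in an environment never seen at training time. All names below are hypothetical, and breadth-first search stands in for the paper's actual planner.

```python
from collections import deque

def make_reward(goal_color):
    """Ground a command like "go to the <color> room" as a reward
    over state features rather than a fixed action sequence."""
    return lambda features: 1.0 if features["room_color"] == goal_color else 0.0

def plan_to_reward(features, neighbors, reward, start):
    """Shortest path to any rewarding state (BFS stand-in for a planner)."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if reward(features[s]) > 0:
            path = []
            while s is not None:      # walk parents back to the start
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbors[s]:
            if n not in parent:
                parent[n] = s
                frontier.append(n)
    return None
```

Because the reward only inspects state features, the same grounded command produces a sensible plan in any map that exposes a `room_color` feature, which is the sense in which reward-level grounding transfers across environments and action spaces.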


International Conference on Robotics and Automation | 2017

Reducing errors in object-fetching interactions through social feedback

David Whitney; Eric Rosen; James MacGlashan; Lawson L. S. Wong; Stefanie Tellex

Fetching items is an important problem for a social robot. It requires a robot to interpret a person's language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person's requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves the speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model's improvements, we conducted a real-world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds (25%) faster than a state-of-the-art baseline, while being 2.1% more accurate.
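The decision logic described above, maintaining a belief over items from noisy observations and asking only when uncertain, can be sketched as follows. A fixed confidence threshold stands in for the paper's approximate POMDP solver, and all names and probabilities are illustrative.

```python
def belief_update(belief, likelihoods):
    """Bayes filter over which item the person wants.

    belief:      {item: P(item)}
    likelihoods: {item: P(observation | item)} for one noisy
                 language/gesture observation.
    """
    post = {item: belief[item] * likelihoods.get(item, 0.0) for item in belief}
    z = sum(post.values())
    return {item: p / z for item, p in post.items()}

def choose_action(belief, deliver_threshold=0.9):
    """Deliver only when confident; otherwise ask a clarifying
    question about the current best guess (threshold stand-in for
    solving the POMDP)."""
    best = max(belief, key=belief.get)
    if belief[best] >= deliver_threshold:
        return ("deliver", best)
    return ("ask", best)
```

One ambiguous gesture leaves the robot below threshold, so it asks a question; the answer sharpens the belief and the robot then delivers, which is exactly the speed/accuracy trade the abstract describes.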


Proceedings of the National Academy of Sciences of the United States of America | 2017

Evolution of flexibility and rigidity in retaliatory punishment

Adam Morris; James MacGlashan; Michael L. Littman; Fiery Cushman

Significance: Two aims of behavioral science, often pursued separately, are to model the evolutionary dynamics and cognitive mechanisms governing behavior in social conflicts. Combining these approaches, we show that the dynamics of proximate mechanisms such as reward learning can play a pivotal role in determining evolutionary outcomes. We focus on a widespread feature of human social life: people engage in retributive punishment with surprising disregard for its efficacy, yet they respond to punishment with behavioral flexibility finely tuned to costs and benefits. We explain this pattern and offer a general picture of when evolution favors rigid versus flexible social behaviors.

Natural selection designs some social behaviors to depend on flexible learning processes, whereas others are relatively rigid or reflexive. What determines the balance between these two approaches? We offer a detailed case study in the context of a two-player game with antisocial behavior and retaliatory punishment. We show that each player in this game—a “thief” and a “victim”—must balance two competing strategic interests. Flexibility is valuable because it allows adaptive differentiation in the face of diverse opponents. However, it is also risky because, in competitive games, it can produce systematically suboptimal behaviors. Using a combination of evolutionary analysis, reinforcement learning simulations, and behavioral experimentation, we show that the resolution to this tension—and the adaptation of social behavior in this game—hinges on the game’s learning dynamics. Our findings clarify punishment’s adaptive basis, offer a case study of the evolution of social preferences, and highlight an important connection between natural selection and learning in the resolution of social conflicts.


National Conference on Artificial Intelligence | 2014

A strategy-aware technique for learning behaviors from discrete human feedback

Robert Tyler Loftin; James MacGlashan; Bei Peng; Matthew E. Taylor; Michael L. Littman; Jeff Huang; David L. Roberts


Adaptive Agents and Multi-Agent Systems | 2016

A Need for Speed: Adapting Agent Action Speed to Improve Task Learning from Non-Expert Humans

Bei Peng; James MacGlashan; Robert Tyler Loftin; Michael L. Littman; David L. Roberts; Matthew E. Taylor


International Conference on Artificial Intelligence | 2015

Between imitation and intention learning

James MacGlashan; Michael L. Littman


Neural Information Processing Systems | 2016

Showing versus doing: Teaching by demonstration

Mark K. Ho; Michael L. Littman; James MacGlashan; Fiery Cushman; Joseph L. Austerweil


International Conference on Automated Planning and Scheduling | 2015

Goal-based action priors

David Abel; D. Ellis Hershkowitz; Gabriel Barth-Maron; Stephen Brawner; Kevin O'Farrell; James MacGlashan; Stefanie Tellex

Collaboration

Dive into James MacGlashan's collaborations.

Top Co-Authors

Bei Peng

Washington State University


David L. Roberts

North Carolina State University


Matthew E. Taylor

Washington State University


Robert Tyler Loftin

North Carolina State University
