Publication


Featured research published by Charles Lee Isbell.


Artificial Intelligence | 2009

A novel sequence representation for unsupervised analysis of human activities

Raffay Hamid; Siddhartha Maddi; Amos Y. Johnson; Aaron F. Bobick; Irfan A. Essa; Charles Lee Isbell

Formalizing computational models for everyday human activities remains an open challenge. Many previous approaches towards this end assume prior knowledge about the structure of activities, using which explicitly defined models are learned in a completely supervised manner. For a majority of everyday environments however, the structure of the in situ activities is generally not known a priori. In this paper we investigate knowledge representations and manipulation techniques that facilitate learning of human activities in a minimally supervised manner. The key contribution of this work is the idea that global structural information of human activities can be encoded using a subset of their local event subsequences, and that this encoding is sufficient for activity-class discovery and classification. In particular, we investigate modeling activity sequences in terms of their constituent subsequences that we call event n-grams. Exploiting this representation, we propose a computational framework to automatically discover the various activity-classes taking place in an environment. We model these activity-classes as maximally similar activity-cliques in a completely connected graph of activities, and describe how to discover them efficiently. Moreover, we propose methods for finding characterizations of these discovered classes from a holistic as well as a by-parts perspective. Using such characterizations, we present a method to classify a new activity to one of the discovered activity-classes, and to automatically detect whether it is anomalous with respect to the general characteristics of its membership class. Our results show the efficacy of our approach in a variety of everyday environments.
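
As a rough illustration of the event n-gram encoding described in the abstract above (not the authors' implementation), the sketch below represents each activity as a histogram of its length-n event subsequences and compares activities by cosine similarity between histograms; the event labels, the choice of n, and the similarity measure are assumptions made for the example.

```python
# Minimal sketch of the event n-gram idea (illustrative only; not the authors' code).
# Each activity is a list of discrete event labels.
from collections import Counter
from math import sqrt

def event_ngrams(events, n=3):
    """Histogram of the activity's length-n event subsequences."""
    return Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))

def cosine_similarity(h1, h2):
    """Similarity between two n-gram histograms (assumed measure for illustration)."""
    dot = sum(h1[k] * h2[k] for k in h1.keys() & h2.keys())
    norm = sqrt(sum(v * v for v in h1.values())) * sqrt(sum(v * v for v in h2.values()))
    return dot / norm if norm else 0.0

# Hypothetical activities: two that share local event structure, one unrelated.
a = ["enter", "order", "pay", "wait", "pickup", "exit"]
b = ["enter", "order", "pay", "pickup", "exit"]
c = ["login", "browse", "logout"]
print(cosine_similarity(event_ngrams(a), event_ngrams(b)))  # > 0: shared subsequences
print(cosine_similarity(event_ngrams(a), event_ngrams(c)))  # 0.0: no shared subsequences
```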


international symposium on wearable computers | 2006

Discovering Characteristic Actions from On-Body Sensor Data

David Minnen; Thad Starner; Irfan A. Essa; Charles Lee Isbell

We present an approach to activity discovery, the unsupervised identification and modeling of human actions embedded in a larger sensor stream. Activity discovery can be seen as the inverse of the activity recognition problem. Rather than learn models from hand-labeled sequences, we attempt to discover motifs, sets of similar subsequences within the raw sensor stream, without the benefit of labels or manual segmentation. These motifs are statistically unlikely and thus typically correspond to important or characteristic actions within the activity. The problem of activity discovery differs from typical motif discovery, such as locating protein binding sites, because of the nature of time series data representing human activity. For example, in activity data, motifs will tend to be sparsely distributed, vary in length, and may only exhibit intra-motif similarity after appropriate time warping. In this paper, we motivate the activity discovery problem and present our approach for efficient discovery of meaningful actions from sensor data representing human activity. We empirically evaluate the approach on an exercise data set captured by a wrist-mounted, three-axis inertial sensor. Our algorithm successfully discovers motifs that correspond to the real exercises with a recall rate of 96.3% and overall accuracy of 86.7% over six exercises and 864 occurrences.
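
The following sketch conveys the flavor of motif discovery on a one-dimensional sensor stream. It is a deliberately simplified brute-force search for the most similar pair of non-overlapping fixed-length windows; the paper's algorithm additionally handles variable-length, sparsely distributed motifs and time warping, none of which is modeled here, and the window length and synthetic data are assumptions.

```python
# Simplified sketch of motif discovery in a 1-D sensor stream (illustrative only).
import numpy as np

def find_motif_pair(stream, window=20):
    """Return (i, j, distance) for the two most similar non-overlapping windows."""
    n = len(stream) - window + 1
    best = (None, None, np.inf)
    for i in range(n):
        a = stream[i:i + window]
        for j in range(i + window, n):                 # enforce non-overlap
            d = np.linalg.norm(a - stream[j:j + window])
            if d < best[2]:
                best = (i, j, d)
    return best

# Synthetic "exercise" pattern planted twice inside a noisy stream.
rng = np.random.default_rng(0)
pattern = np.sin(np.linspace(0, 4 * np.pi, 20))
stream = rng.normal(0, 1.0, 200)
stream[30:50] = pattern + rng.normal(0, 0.1, 20)
stream[120:140] = pattern + rng.normal(0, 0.1, 20)
print(find_motif_pair(stream))   # expected to report windows near indices 30 and 120
```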


adaptive agents and multi-agent systems | 2001

A social reinforcement learning agent

Charles Lee Isbell; Christian R. Shelton; Michael J. Kearns; Satinder P. Singh; Peter Stone

We report on our reinforcement learning work on Cobot, a software agent that resides in the well-known online chat community LambdaMOO. Our initial work on Cobot provided him with the ability to collect social statistics and report them to users in a reactive manner. Here we describe our application of reinforcement learning to allow Cobot to proactively take actions in this complex social environment, and adapt his behavior from multiple sources of human reward. After 5 months of training, Cobot received 3171 reward and punishment events from 254 different LambdaMOO users, and learned nontrivial preferences for a number of users. Cobot modifies his behavior based on his current state in an attempt to maximize reward. Here we describe LambdaMOO and the state and action spaces of Cobot, and report the statistical results of the learning experiment.
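
As a toy illustration of learning user preferences from praise and punishment (not Cobot's actual implementation, whose state, action, and reward handling were far richer), the sketch below keeps a per-user action-value estimate and selects actions with a softmax policy; the action names, user names, step size, and temperature are assumptions.

```python
# Toy sketch of reward-driven action selection in the spirit described above.
import math
import random
from collections import defaultdict

class SocialAgent:
    def __init__(self, actions, step_size=0.1, temperature=0.5):
        self.actions = actions
        self.q = defaultdict(float)        # (user, action) -> estimated value
        self.step_size = step_size
        self.temperature = temperature

    def act(self, user):
        """Softmax over the current value estimates for this user."""
        prefs = [math.exp(self.q[(user, a)] / self.temperature) for a in self.actions]
        return random.choices(self.actions, weights=prefs)[0]

    def reward(self, user, action, r):
        """Incorporate a +1 (praise) or -1 (punishment) signal from a user."""
        key = (user, action)
        self.q[key] += self.step_size * (r - self.q[key])

agent = SocialAgent(["tell_joke", "report_stats", "stay_quiet"])
agent.reward("user_a", "tell_joke", +1)
agent.reward("user_b", "tell_joke", -1)
print(agent.act("user_a"), agent.act("user_b"))  # user_a is now more likely to get a joke
```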


international conference on data mining | 2007

Detecting Subdimensional Motifs: An Efficient Algorithm for Generalized Multivariate Pattern Discovery

David Minnen; Charles Lee Isbell; Irfan A. Essa; Thad Starner

Discovering recurring patterns in time series data is a fundamental problem for temporal data mining. This paper addresses the problem of locating subdimensional motifs in real-valued, multivariate time series, which requires the simultaneous discovery of sets of recurring patterns along with the corresponding relevant dimensions. While many approaches to motif discovery have been developed, most are restricted to categorical data, univariate time series, or multivariate data in which the temporal patterns span all of the dimensions. In this paper, we present an expected linear-time algorithm that addresses a generalization of multivariate pattern discovery in which each motif may span only a subset of the dimensions. To validate our algorithm, we discuss its theoretical properties and empirically evaluate it using several data sets including synthetic data and motion capture data collected by an on-body inertial sensor.
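
A hedged, brute-force sketch of the subdimensional idea follows: for every pair of non-overlapping windows, find the subset of dimensions whose per-dimension distance falls under a threshold. This quadratic search is emphatically not the expected linear-time algorithm the paper presents; the window length, threshold, and synthetic data are assumptions.

```python
# Brute-force sketch of subdimensional motif search (illustrative only).
import numpy as np

def subdimensional_motif(series, window=15, per_dim_threshold=1.0, min_dims=2):
    """series: (T, D) array. Return (i, j, relevant_dims) for the best pair found."""
    T, D = series.shape
    best = None
    for i in range(T - window + 1):
        for j in range(i + window, T - window + 1):        # non-overlapping windows
            dists = np.linalg.norm(series[i:i + window] - series[j:j + window], axis=0)
            dims = np.flatnonzero(dists < per_dim_threshold)
            if len(dims) >= min_dims and (best is None or len(dims) > len(best[2])):
                best = (i, j, dims)
    return best

# Synthetic data: a motif planted twice, but only in dimensions 0 and 1.
rng = np.random.default_rng(1)
data = rng.normal(0, 1.0, (120, 4))
motif = np.sin(np.linspace(0, 3 * np.pi, 15))
for start in (10, 80):
    data[start:start + 15, 0] = motif + rng.normal(0, 0.1, 15)
    data[start:start + 15, 1] = -motif + rng.normal(0, 0.1, 15)
print(subdimensional_motif(data))   # expected: a pair near rows 10 and 80 spanning dims 0 and 1
```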


IEEE Computer Graphics and Applications | 2006

Declarative optimization-based drama management in interactive fiction

Mark J. Nelson; Michael Mateas; David L. Roberts; Charles Lee Isbell

Our work relates to automatically guiding experiences in large, open-world interactive dramas and story-based experiences where a player interacts with and influences a story. A drama manager (DM) is a system that watches a story as it progresses, reconfiguring the world to fulfill the author's goals. A DM might notice a player doing something that fits poorly with the current story and attempt to dissuade him or her. This is accomplished using soft actions such as having a nonplayer character start a conversation with a player to lure him or her to something else, or by more direct actions such as locking doors. We present work applying search-based drama management (SBDM) to the interactive fiction piece Anchorhead, to further investigate the algorithmic and authorship issues involved. Declarative optimization-based drama management (DODM) guides the player by projecting possible future stories and reconfiguring the story world based on those projections. This approach models stories as a set of possible plot points, and an author-specified evaluation function rates the quality of a particular plot-point sequence.
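
A toy sketch of the projection idea follows: under each candidate drama-manager action, sample possible future plot-point sequences, score them with an author-supplied evaluation function, and pick the action whose projected stories score best. The plot-point names, the evaluation function, and the simplification of a DM action to "encourage one plot point" are all assumptions for illustration, not the paper's model.

```python
# Toy sketch of projection-based drama management (illustrative only).
import random

def evaluate(sequence):
    """Hypothetical author-specified quality function: reward stories that include
    finding the amulet and that stay close to six plot points long."""
    score = 1.0 if "find_amulet" in sequence else 0.5
    return score - 0.1 * abs(len(sequence) - 6)

def project(history, dm_action, remaining, rollouts=200, horizon=4):
    """Average evaluation over sampled continuations after the DM encourages dm_action."""
    total = 0.0
    for _ in range(rollouts):
        k = random.randint(0, min(horizon, len(remaining)))
        future = [dm_action] + random.sample(remaining, k)
        total += evaluate(history + future)
    return total / rollouts

plot_points = ["find_amulet", "open_safe", "read_diary", "enter_crypt", "meet_stranger"]
history = ["enter_town", "talk_to_keeper"]
best = max(plot_points,
           key=lambda a: project(history, a, [p for p in plot_points if p != a]))
print("DM should encourage:", best)   # with this toy evaluation, find_amulet scores best
```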


Autonomous Agents and Multi-Agent Systems | 2006

Cobot in LambdaMOO: An Adaptive Social Statistics Agent

Charles Lee Isbell; Michael J. Kearns; Satinder P. Singh; Christian R. Shelton; Peter Stone; David P. Kormann

We describe our development of Cobot, a novel software agent who lives in LambdaMOO, a popular virtual world frequented by hundreds of users. Cobot’s goal was to become an actual part of that community. Here, we present a detailed discussion of the functionality that made him one of the objects most frequently interacted with in LambdaMOO, human or artificial. Cobot’s fundamental power is that he has the ability to collect social statistics summarizing the quantity and quality of interpersonal interactions. Initially, Cobot acted as little more than a reporter of this information; however, as he collected more and more data, he was able to use these statistics as models that allowed him to modify his own behavior. In particular, Cobot is able to use this data to “self-program,” learning the proper way to respond to the actions of individual users, by observing how others interact with one another. Further, Cobot uses reinforcement learning to proactively take action in this complex social environment, and adapts his behavior based on multiple sources of human reward. Cobot represents a unique experiment in building adaptive agents who must live in and navigate social spaces.
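
As a hedged illustration of the kind of social statistics described above (the event format and the summaries are assumptions, not Cobot's actual schema), the sketch below tallies who interacts with whom and through which verbs, and produces a simple per-user report of the sort an agent could relay back to the community.

```python
# Toy sketch of collecting and reporting social statistics (illustrative only).
from collections import Counter, defaultdict

class SocialStats:
    def __init__(self):
        self.pair_counts = Counter()              # (speaker, target) -> interaction count
        self.verb_counts = defaultdict(Counter)   # speaker -> verb -> count

    def observe(self, speaker, verb, target=None):
        if target is not None:
            self.pair_counts[(speaker, target)] += 1
        self.verb_counts[speaker][verb] += 1

    def report(self, user):
        partners = Counter({t: c for (s, t), c in self.pair_counts.items() if s == user})
        return {"top_partner": partners.most_common(1),
                "top_verb": self.verb_counts[user].most_common(1)}

stats = SocialStats()
stats.observe("user_a", "hug", "user_b")
stats.observe("user_a", "say")
stats.observe("user_a", "hug", "user_b")
print(stats.report("user_a"))
```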


international conference on machine learning | 2006

Looping suffix tree-based inference of partially observable hidden state

Michael P. Holmes; Charles Lee Isbell

We present a solution for inferring hidden state from sensorimotor experience when the environment takes the form of a POMDP with deterministic transition and observation functions. Such environments can appear to be arbitrarily complex and non-deterministic on the surface, but are actually deterministic with respect to the unobserved underlying state. We show that there always exists a finite history-based representation that fully captures the unobserved world state, allowing for perfect prediction of action effects. This representation takes the form of a looping prediction suffix tree (PST). We derive a sound and complete algorithm for learning a looping PST from a sufficient sample of sensorimotor experience. We also give empirical illustrations of the advantages conferred by this approach, and characterize the approximations to the looping PST that are made by existing algorithms such as Variable Length Markov Models, Utile Suffix Memory and Causal State Splitting Reconstruction.
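
The sketch below gives a simplified illustration of suffix-based prediction: it stores a flat table from history suffixes to the observations that followed them and predicts from the longest matching suffix. It omits the looping construction that is the paper's contribution, and the toy world, depth bound, and data format are assumptions.

```python
# Minimal sketch of prediction-suffix-tree-style prediction (illustrative only;
# a flat suffix table, without the looping construction derived in the paper).
from collections import Counter, defaultdict

def train_suffix_model(history, max_depth=4):
    """history: list of (action, observation) pairs. Map each recent suffix to
    counts of the observation that followed it."""
    model = defaultdict(Counter)
    for t in range(1, len(history)):
        for d in range(1, max_depth + 1):
            if t - d < 0:
                break
            suffix = tuple(history[t - d:t])
            model[suffix][history[t][1]] += 1     # observation that followed this suffix
    return model

def predict(model, recent, max_depth=4):
    """Predict the next observation from the longest matching suffix of recent history."""
    for d in range(min(max_depth, len(recent)), 0, -1):
        suffix = tuple(recent[-d:])
        if suffix in model:
            return model[suffix].most_common(1)[0][0]
    return None

# Deterministic toy world whose observation alternates between 0 and 1.
history = [("a", 0), ("b", 1), ("a", 0), ("a", 1), ("b", 0), ("a", 1)]
model = train_suffix_model(history)
print(predict(model, history[-2:]))   # prints 0: the matching suffix continues the alternation
```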


ubiquitous computing | 2004

From devices to tasks: automatic task prediction for personalized appliance control

Charles Lee Isbell; Olufisayo Omojokun; Jeffrey S. Pierce

One of the driving applications of ubiquitous computing is universal appliance interaction: the ability to use arbitrary mobile devices to interact with arbitrary appliances, such as TVs, printers, and lights. Because of limited screen real estate and the plethora of devices and commands available to the user, a central problem in achieving this vision is predicting which appliances and devices the user wishes to use next in order to make interfaces for those devices available. We believe that universal appliance interaction is best supported through the deployment of appliance user interfaces (UIs) that are personalized to a user’s habits and information needs. In this paper, we suggest that, in a truly ubiquitous computing environment, the user will not necessarily think of devices as separate entities; therefore, rather than focus on which device the user may want to use next, we present a method for automatically discovering the user’s common tasks (e.g., watching a movie, or surfing TV channels), predicting the task that the user wishes to engage in, and generating an appropriate interface that spans multiple devices. We have several results. We show that it is possible to discover and cluster collections of commands that represent tasks and to use history to predict the next task reliably. In fact, we show that moving from devices to tasks is not only a useful way of representing our core problem, but that it is, in fact, an easier problem to solve. Finally, we show that tasks can vary from user to user.
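
A small, hedged sketch of next-task prediction follows. It assumes commands have already been clustered into task labels and uses a plain first-order Markov model over the task history; this is a stand-in for, not a reproduction of, the paper's method, and the task names are invented for the example.

```python
# Toy sketch of next-task prediction from interaction history (illustrative only).
from collections import Counter, defaultdict

def train_task_model(task_history):
    """Count how often each task follows each other task."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(task_history, task_history[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next_task(transitions, current_task):
    options = transitions.get(current_task)
    return options.most_common(1)[0][0] if options else None

history = ["watch_movie", "surf_channels", "watch_movie", "surf_channels",
           "print_document", "surf_channels", "watch_movie"]
model = train_task_model(history)
print(predict_next_task(model, "watch_movie"))   # most often followed by "surf_channels"
```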


international joint conference on artificial intelligence | 2011

Automatic state abstraction from demonstration

Luis C. Cobo; Peng Zang; Charles Lee Isbell; Andrea Lockerd Thomaz

Learning from Demonstration (LfD) is a popular technique for building decision-making agents from human help. Traditional LfD methods use demonstrations as training examples for supervised learning, but complex tasks can require more examples than is practical to obtain. We present Abstraction from Demonstration (AfD), a novel form of LfD that uses demonstrations to infer state abstractions, and reinforcement learning (RL) methods in those abstract state spaces, to build a policy. Empirical results show that AfD is greater than an order of magnitude more sample-efficient than just using demonstrations as training examples, and exponentially faster than RL alone.
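
As a hedged sketch of the two-stage idea (demonstrations select relevant state features, then RL runs over the resulting abstract states), the code below infers an abstraction from demonstration pairs using a decision tree's feature importances and projects full states onto the kept features; the tree, the importance threshold, and the synthetic data are assumptions rather than the authors' procedure. A tabular RL learner would then key its value table on the abstract tuples.

```python
# Minimal sketch of the Abstraction-from-Demonstration idea (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def infer_abstraction(demo_states, demo_actions, threshold=0.05):
    """Keep only the state features the demonstrations suggest are action-relevant."""
    tree = DecisionTreeClassifier(max_depth=5).fit(demo_states, demo_actions)
    return np.flatnonzero(tree.feature_importances_ > threshold)

def abstract(state, kept_features):
    """Project a full state onto the abstract state used by the RL learner."""
    return tuple(np.asarray(state)[kept_features])

# Synthetic demonstrations: the action depends only on feature 0; 1 and 2 are distractors.
rng = np.random.default_rng(0)
demo_states = rng.integers(0, 4, size=(200, 3))
demo_actions = (demo_states[:, 0] >= 2).astype(int)
kept = infer_abstraction(demo_states, demo_actions)
print(kept)                        # expected: only feature 0 survives
print(abstract([3, 1, 2], kept))   # tabular Q-learning would key on this abstract tuple
```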


technical symposium on computer science education | 2010

(Re)defining computing curricula by (re)defining computing

Charles Lee Isbell; Lynn Andrea Stein; Robb Cutler; Jeffrey M. Forbes; Linda Fraser; John Impagliazzo; Viera K. Proulx; Steve Russ; Richard Thomas; Yan Xu

What is the core of Computing? This paper defines the discipline of computing as centered around the notion of modeling, especially those models that are automatable and automatically manipulable. We argue that this central idea crucially connects models with languages and machines rather than focusing on and around computational artifacts, and that it admits a very broad set of fields while still distinguishing the discipline from mathematics, engineering and science. The resulting computational curriculum focuses on modeling, scales and limits, simulation, abstraction, and automation as key components of a computationalist mindset.

Collaboration


Dive into Charles Lee Isbell's collaborations.

Top Co-Authors

David L. Roberts, North Carolina State University
Andrea Lockerd Thomaz, University of Texas at Austin
Peng Zang, Georgia Institute of Technology
Michael P. Holmes, Georgia Institute of Technology
David Minnen, Georgia Institute of Technology
Michael Mateas, University of California
Olufisayo Omojokun, University of North Carolina at Chapel Hill
Sooraj Bhat, Georgia Institute of Technology
Alexander G. Gray, Georgia Institute of Technology