Publication


Featured research published by Phillip Odom.


International Conference on Data Mining | 2013

Guiding Autonomous Agents to Better Behaviors through Human Advice

Gautam Kunapuli; Phillip Odom; Jude W. Shavlik; Sriraam Natarajan

Inverse Reinforcement Learning (IRL) is an approach for domain-reward discovery from demonstration, where an agent mines the reward function of a Markov decision process by observing an expert acting in the domain. In the standard setting, it is assumed that the expert acts (nearly) optimally and that a large number of trajectories (i.e., training examples) are available for reward discovery (and, consequently, for learning domain behavior). These are not practical assumptions: trajectories are often noisy, and there can be a paucity of examples. Our novel approach incorporates advice-giving into the IRL framework to address these issues. Inspired by preference elicitation, a domain expert provides advice on states and actions (features) by stating preferences over them. We evaluate our approach on several domains and show that, with small amounts of targeted preference advice, learning is possible from noisy demonstrations and requires far fewer trajectories than learning from trajectories alone.
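The core idea described in the abstract, reward weights pulled toward the expert's demonstrated feature expectations while preference advice is enforced as a soft constraint between features, can be illustrated with a toy sketch. The function name, the hinge-style penalty, and all parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def learn_reward(demo_features, advice, n_features, lr=0.05, epochs=200, margin=0.1):
    """demo_features: feature vectors of states visited by the (noisy) expert.
    advice: (i, j) pairs meaning 'states with feature i are preferred
    over states with feature j'."""
    w = np.zeros(n_features)
    mean_phi = np.mean(demo_features, axis=0)  # expert feature expectation
    for _ in range(epochs):
        grad = mean_phi - w                    # pull reward weights toward the demonstrations
        for i, j in advice:                    # soft constraint: w[i] >= w[j] + margin
            if w[i] < w[j] + margin:
                grad[i] += 1.0
                grad[j] -= 1.0
        w += lr * grad
    return w
```

With no advice, `w` simply converges to the demonstrated feature expectation; a single preference pair is enough to override a feature ordering that only arises from noise in the demonstrations.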


Artificial Intelligence in Medicine in Europe | 2015

Extracting Adverse Drug Events from Text Using Human Advice

Phillip Odom; Vishal Bangera; Tushar Khot; David C. Page; Sriraam Natarajan

Adverse drug events (ADEs) are a major concern and point of emphasis for the medical profession, government, and society in general. When methods extract ADEs from observational data, these methods must be evaluated; more precisely, it is important to know what is already known in the literature. Consequently, we employ a novel relation extraction technique based on a recently developed probabilistic logic learning algorithm that exploits human advice. We demonstrate on a standard adverse drug events database that the proposed approach can successfully extract existing adverse drug events from a limited amount of training data and compares favorably with state-of-the-art probabilistic logic learning methods.


European Conference on Machine Learning | 2016

Actively Interacting with Experts: A Probabilistic Logic Approach

Phillip Odom; Sriraam Natarajan

Machine learning approaches that utilize human experts combine domain experience with data to generate novel knowledge. Unfortunately, most methods provide only a limited form of communication with the human expert, are overly reliant on the expert to specify their knowledge upfront, or both. Thus, the expert is unable to understand what the system could learn without their involvement. Allowing the learning algorithm to query the human expert in the most useful areas of the feature space takes full advantage of the data as well as the expert. We introduce active advice-seeking for relational domains. Relational logic allows for compact but expressive interaction between the human expert and the learning algorithm. We demonstrate our algorithm empirically on several standard relational datasets.


Inductive Logic Programming | 2013

Accelerating Imitation Learning in Relational Domains via Transfer by Initialization

Sriraam Natarajan; Phillip Odom; Saket Joshi; Tushar Khot; Kristian Kersting; Prasad Tadepalli

The problem of learning to mimic a human expert/teacher from training trajectories is called imitation learning. To make the process of teaching easier in this setting, we propose to employ transfer learning (where one learns on a source problem and transfers the knowledge to potentially more complex target problems). We consider multi-relational environments such as real-time strategy games and use functional-gradient boosting to capture and transfer the models learned in these environments. Our experiments demonstrate that our learner learns a very good initial model from the simple scenario and effectively transfers the knowledge to the more complex scenario, thus achieving a jump start, a steeper learning curve, and faster convergence to higher performance.
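The transfer-by-initialization idea, starting boosting on the target task from the model learned on the source task rather than from scratch, can be sketched with one-dimensional regression stumps. The stump representation and all names below are illustrative assumptions, not the authors' relational implementation:

```python
import numpy as np

def fit_stump(x, residual):
    # Exhaustive 1-D threshold search minimizing squared error of the split.
    best = (None, 0.0, 0.0, np.inf)
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = left.var() * len(left) + right.var() * len(right)
        if err < best[3]:
            best = (t, left.mean(), right.mean(), err)
    return best[:3]

def predict(stumps, x):
    pred = np.zeros_like(x, dtype=float)
    for t, lv, rv in stumps:
        pred += np.where(x <= t, lv, rv)
    return pred

def boost(x, y, init=None, rounds=20, lr=0.5):
    # Functional-gradient boosting: each round fits a stump to the residual
    # (the negative gradient of squared loss) of the current model.
    # Passing the source model as `init` is the transfer-by-initialization step.
    stumps = [] if init is None else list(init)
    pred = predict(stumps, x)
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        stumps.append((t, lr * lv, lr * rv))
        pred = predict(stumps, x)
    return stumps
```

Warm-starting the target learner with the source model is what produces the jump start: the very first target prediction already has low error, and a few additional boosting rounds close the remaining gap.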


International Conference on Data Mining | 2015

Transfer Learning via Relational Type Matching

Raksha Kumaraswamy; Phillip Odom; Kristian Kersting; David B. Leake; Sriraam Natarajan

Transfer learning is typically performed between problem instances within the same domain. We consider the problem of transferring across domains. To this effect, we adopt a probabilistic logic approach. First, our approach automatically identifies predicates in the target domain that are similar in their relational structure to predicates in the source domain. Second, it transfers the logic rules and learns the parameters of the transferred rules using target data. Finally, it refines the rules as necessary using theory refinement. Our experimental evidence supports that this transfer method finds models as good or better than those found with state-of-the-art methods, with and without transfer, and in a fraction of the time.


Frontiers in Robotics and AI | 2018

Human-Guided Learning for Probabilistic Logic Models

Phillip Odom; Sriraam Natarajan

Advice-giving has long been explored in the artificial intelligence community as a way to build robust learning algorithms when the data is noisy, incorrect, or even insufficient. While logic-based systems were effectively used in building expert systems, the role of the human has been restricted to being a “mere labeler” in recent times. We hypothesize and demonstrate that probabilistic logic can provide an effective and natural way for the expert to specify domain advice. Specifically, we consider different types of advice-giving in relational domains where noise could arise due to systematic errors or class imbalance inherent in the domains. The advice is provided as logical statements or privileged features that are then explicitly considered by an iterative learning algorithm at every update. Our empirical evidence shows that human advice can effectively accelerate learning in noisy, structured domains where so far humans have merely been used as labelers or as designers of the (initial or final) structure of the model.
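The flavor of update described above, a standard functional gradient shifted by a term reflecting how strongly the expert's advice marks an example as preferred-positive or preferred-negative, can be sketched as follows. The function name, the scalar advice encoding, and the `lam` weight are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def advice_gradient(y, p, advice, lam=0.5):
    """y: true labels (0/1); p: current predicted probabilities.
    advice: per-example score in [-1, 1]; +1 means the expert's rules
    say 'should be positive', -1 'should be negative', 0 silent.
    The usual log-loss functional gradient (y - p) is shifted toward the
    advice, so advised examples retain a nonzero gradient even after the
    (possibly noisy) labels have been fit."""
    return (y - p) + lam * np.asarray(advice, dtype=float)
```

For example, on an example mislabeled negative (y = 0) with p = 0.5, the plain gradient is -0.5 and pushes the model further toward the noisy label; advice of +1 with lam = 0.5 cancels that push entirely.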


International Conference on Knowledge Capture | 2017

User Friendly Automatic Construction of Background Knowledge: Mode Construction from ER Diagrams

Alexander L. Hayes; Mayukh Das; Phillip Odom; Sriraam Natarajan

One of the key advantages of Inductive Logic Programming systems is the ability of domain experts to provide background knowledge as modes that allow for efficient search through the space of hypotheses. However, there is an inherent assumption that this expert must also be an ILP expert in order to provide effective modes. We relax this assumption by designing a graphical user interface that allows the domain expert to interact with the system using Entity-Relationship diagrams. These interactions are used to construct modes for the learning system. We evaluate our algorithm on a probabilistic logic learning system and demonstrate that the user is able to construct effective background knowledge on par with expert-encoded knowledge on five data sets.
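A drastically simplified version of the ER-to-modes translation might look like the sketch below. The schema, the mode syntax, and the +/- heuristic are illustrative assumptions, not the interface described in the paper:

```python
def modes_from_er(relationships, target):
    """relationships: dict mapping relation name -> list of entity types,
    i.e. a flattened ER diagram. The target relation gets '+' (bound input)
    on every argument; every other relation gets '+' on types it shares
    with the target and '-' (a new variable may be introduced) otherwise."""
    target_types = set(relationships[target])
    modes = []
    for rel, types in relationships.items():
        if rel == target:
            args = ["+" + t for t in types]
        else:
            args = [("+" if t in target_types else "-") + t for t in types]
        modes.append(f"mode: {rel}({', '.join(args)})")
    return modes
```

For a toy schema with relations advisedby(person, person), publication(person, paper), and venue(paper, conf), targeting advisedby would yield modes that bind person arguments and introduce new variables for paper and conf.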


Inductive Logic Programming | 2016

Learning Through Advice-Seeking via Transfer

Phillip Odom; Raksha Kumaraswamy; Kristian Kersting; Sriraam Natarajan

Experts possess vast knowledge that is typically ignored by standard machine learning methods. This rich, relational knowledge can be utilized to learn more robust models, especially in the presence of noisy and incomplete training data. Such experts are often domain experts but not machine learning experts; thus, deciding what knowledge to provide is a difficult problem. Our goal is to improve the human-machine interaction by providing the expert with a machine-generated bias that can be refined by the expert as necessary. To this effect, we propose using transfer learning, leveraging knowledge from alternative domains, to guide the expert toward useful advice. This knowledge is captured in the form of first-order logic Horn clauses. We demonstrate empirically the value of the transferred knowledge, as well as the contribution of the expert in providing initial knowledge and in revising and directing the use of the transferred knowledge.


National Conference on Artificial Intelligence | 2015

Knowledge-Based Probabilistic Logic Learning

Phillip Odom; Tushar Khot; Reid B. Porter; Sriraam Natarajan


National Conference on Artificial Intelligence | 2015

Active Advice Seeking for Inverse Reinforcement Learning

Phillip Odom; Sriraam Natarajan

Collaboration


Dive into Phillip Odom's collaborations.

Top Co-Authors

Sriraam Natarajan (Indiana University Bloomington)
Tushar Khot (University of Wisconsin-Madison)
Kristian Kersting (Technical University of Dortmund)
Mayukh Das (University of Texas at Dallas)
Alexander L. Hayes (University of Texas at Dallas)
David C. Page (University of Wisconsin-Madison)
Gautam Kunapuli (University of Wisconsin-Madison)