Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Tim Oates is active.

Publication


Featured research published by Tim Oates.


IEEE Transactions on Circuits and Systems II: Express Briefs | 2015

A Flexible Multichannel EEG Feature Extractor and Classifier for Seizure Detection

Adam Page; Chris Sagedy; Emily Smith; Nasrin Attaran; Tim Oates; Tinoosh Mohsenin

This brief presents a low-power, flexible, and multichannel electroencephalography (EEG) feature extractor and classifier for the purpose of personalized seizure detection. Various features and classifiers were explored with the goal of maximizing detection accuracy while minimizing power, area, and latency. Additionally, algorithmic and hardware optimizations were identified to further improve performance. The classifiers studied include k-nearest neighbor, support vector machines, naïve Bayes, and logistic regression (LR). All feature and classifier pairs were able to obtain F1 scores over 80% and onset sensitivity of 100% when tested on ten patients. A fully flexible hardware system was implemented that offers parameters for the number of EEG channels, the number of features, the classifier type, and various word width resolutions. Five seizure detection processors with different classifiers have been fully placed and routed on a Virtex-5 field-programmable gate array and compared. It was found that five features per channel with LR proved to be the best solution for the application of personalized seizure detection. LR had the best average F1 score of 91%, the smallest area and power footprint, and the lowest latency. The ASIC implementation of the same combination in 65-nm CMOS shows that the processor occupies 0.008 mm² and dissipates 19 nJ at 484 Hz.
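
As a rough illustration of the classification stage described above, the sketch below trains a logistic regression classifier on simple per-channel features (line length and variance) computed from synthetic EEG-like windows. The features, window sizes, and data are illustrative assumptions only; this is not the authors' hardware feature set or pipeline.

```python
# Minimal sketch (not the authors' hardware pipeline): per-channel EEG-style
# features feeding a logistic regression classifier. Data here is synthetic
# and the feature choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def window_features(window):
    """Simple per-channel features: line length and variance (channels x samples)."""
    line_length = np.abs(np.diff(window, axis=1)).sum(axis=1)
    variance = window.var(axis=1)
    return np.concatenate([line_length, variance])

# Synthetic dataset: 200 one-second windows, 8 channels, 256 samples each.
# "Seizure" windows get larger amplitude so the toy problem is learnable.
labels = rng.integers(0, 2, size=200)
windows = rng.standard_normal((200, 8, 256)) * (1 + 2 * labels)[:, None, None]
X = np.array([window_features(w) for w in windows])

clf = LogisticRegression(max_iter=1000).fit(X[:150], labels[:150])
pred = clf.predict(X[150:])
print("F1 on held-out windows:", round(f1_score(labels[150:], pred), 3))
```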


European Conference on Machine Learning | 2014

GrammarViz 2.0: a tool for grammar-based pattern discovery in time series

Pavel Senin; Jessica Lin; Xing Wang; Tim Oates; Sunil Gandhi; Arnold P. Boedihardjo; Crystal Chen; Susan Frankenstein; Manfred Lerner

The problem of frequent and anomalous pattern discovery in time series has received a lot of attention in the past decade. Addressing the common limitation of existing techniques, which require a pattern length to be known in advance, we recently proposed grammar-based algorithms for efficient discovery of variable length frequent and rare patterns. In this paper we present GrammarViz 2.0, an interactive tool that, based on our previous work, implements algorithms for grammar-driven mining and visualization of variable length time series patterns.
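
To make the discretize-then-discover idea concrete, here is a minimal sketch: a time series is reduced to a SAX-style symbolic string, and repeated symbolic substrings are reported as candidate frequent patterns. GrammarViz itself induces a Sequitur grammar over the symbolic string; the naive substring counting below only stands in for that step, and all parameter choices are assumptions.

```python
# Minimal sketch of discretize-then-find-repeats; GrammarViz induces a Sequitur
# grammar over the symbolic string, which is not shown here.
import numpy as np
from collections import Counter

def sax_string(series, n_frames=32, breakpoints=(-0.43, 0.43), alphabet="abc"):
    """Z-normalize, reduce to n_frames piecewise means (PAA), map means to symbols."""
    z = (series - series.mean()) / (series.std() + 1e-8)
    frames = np.array_split(z, n_frames)
    return "".join(alphabet[int(np.searchsorted(breakpoints, f.mean()))] for f in frames)

def repeated_substrings(s, length=4, min_count=2):
    """Substrings that occur at least min_count times are candidate frequent patterns."""
    counts = Counter(s[i:i + length] for i in range(len(s) - length + 1))
    return {sub: c for sub, c in counts.items() if c >= min_count}

# Toy series: a sine wave repeats, so its symbolic form contains repeated motifs.
t = np.arange(512)
series = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.default_rng(1).standard_normal(512)
symbolic = sax_string(series)
print(symbolic)
print(repeated_substrings(symbolic))
```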


International Conference on Data Mining | 2011

Using Modified Multivariate Bag-of-Words Models to Classify Physiological Data

Patricia Ordóñez; Tom Armstrong; Tim Oates; Jim Fackler



International Conference on Semantic Computing | 2007

Learning by Reading by Learning to Read

Sergei Nirenburg; Tim Oates; Jesse English

Knowledge-based natural language processing systems learn by reading, i.e., they process texts to extract knowledge. The performance of these systems crucially depends on knowledge about the domain of language itself, such as lexicons and ontologies to ground the semantics of the texts. In this paper we describe the architecture of the GIBRALTAR system, which is based on the OntoSem semantic analyzer and learns by reading by learning to read. That is, while processing texts GIBRALTAR extracts both knowledge about the topics of the texts and knowledge about language (e.g., new ontological concepts and semantic mappings from previously unknown words to ontological concepts) that enables improved text processing. We present the results of initial experiments with GIBRALTAR and directions for future research.


Information Reuse and Integration | 2012

Finding story chains in newswire articles

Xianshu Zhu; Tim Oates

Massive amounts of information about news events are published on the Internet every day in online newspapers, blogs, and social network messages. While search engines like Google help retrieve information using keywords, the large volumes of unstructured search results they return make it hard to track the evolution of an event. A story chain is composed of a set of news articles that reveal hidden relationships among different events, and traditional keyword-based search engines provide limited support for finding such chains. In this paper, we propose a random walk based algorithm to find story chains. Because many media outlets report the same event when breaking news happens, the algorithm includes two pruning mechanisms that automatically exclude redundant articles from the story chain and keep the algorithm efficient. Experimental results show that our proposed algorithm can generate coherent story chains without redundancy.
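
The sketch below illustrates the general random-walk idea on a toy article similarity graph: relevance to a seed article is scored by a random walk with restart, computed by power iteration. It is not the authors' algorithm and omits their pruning mechanisms; the graph weights and parameters are made up for illustration.

```python
# Toy random-walk-with-restart scoring over an assumed article similarity graph.
# Generic sketch only; the paper's chain construction and pruning are omitted.
import numpy as np

# Symmetric pairwise similarities between 5 hypothetical articles (made-up values).
S = np.array([
    [0.0, 0.8, 0.1, 0.0, 0.0],
    [0.8, 0.0, 0.7, 0.1, 0.0],
    [0.1, 0.7, 0.0, 0.6, 0.1],
    [0.0, 0.1, 0.6, 0.0, 0.9],
    [0.0, 0.0, 0.1, 0.9, 0.0],
])
P = S / S.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

def random_walk_with_restart(P, seed, restart=0.15, iters=200):
    """Power iteration for the walk's stationary distribution with restarts at `seed`."""
    r = np.zeros(len(P))
    r[seed] = 1.0
    scores = r.copy()
    for _ in range(iters):
        scores = (1 - restart) * (P.T @ scores) + restart * r
    return scores

print("relevance to article 0:", np.round(random_walk_with_restart(P, seed=0), 3))
```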


International Conference on Machine Learning and Applications | 2011

Classification of Patients Using Novel Multivariate Time Series Representations of Physiological Data

Patricia Ordóñez; Tom Armstrong; Tim Oates; Jim Fackler

In this paper we present two novel multivariate time series representations to classify physiological data of different lengths. The representations may be applied to any group of multivariate time series data that examine the state or health of an entity. Multivariate Bag-of-Patterns and Stacked Bags-of-Patterns improve on their univariate counterpart, inspired by the bag-of-words model, by using multiple time series and analyzing the data in a multivariate fashion. We also borrow techniques from the natural language processing domain, such as term frequency and inverse document frequency, to improve classification accuracy. We introduce a technique named inverse frequency and present experimental results on classifying patients who have experienced acute episodes of hypotension.
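
As a toy version of the bag-of-patterns idea behind this line of work (not the paper's exact Multivariate Bag-of-Patterns or Stacked Bags-of-Patterns constructions), the sketch below turns sliding windows into SAX words, counts them per recording, applies a tf-idf-style reweighting, and classifies with leave-one-out 1-nearest-neighbor on synthetic data. All parameters and the data are assumptions for illustration.

```python
# Toy bag-of-patterns pipeline: SAX words per sliding window, word counts per
# series, tf-idf-style weighting, and 1-NN classification on synthetic data.
import numpy as np
from collections import Counter

ALPHABET = "abcd"
BREAKPOINTS = (-0.67, 0.0, 0.67)  # Gaussian breakpoints for a 4-symbol alphabet

def sax_word(window, n_segments=4):
    z = (window - window.mean()) / (window.std() + 1e-8)
    return "".join(ALPHABET[int(np.searchsorted(BREAKPOINTS, seg.mean()))]
                   for seg in np.array_split(z, n_segments))

def bag_of_patterns(series, win=32, step=8):
    """Word counts for one univariate series; a multivariate variant would combine channels."""
    words = (sax_word(series[i:i + win]) for i in range(0, len(series) - win + 1, step))
    return Counter(words)

def to_matrix(bags):
    vocab = sorted(set().union(*bags))
    counts = np.array([[b.get(w, 0) for w in vocab] for b in bags], dtype=float)
    idf = np.log(len(bags) / (1 + (counts > 0).sum(axis=0)))
    return counts * idf  # tf-idf-style weighting

# Two synthetic classes: slow vs. fast oscillations.
rng = np.random.default_rng(2)
t = np.arange(256)
series = [np.sin(2 * np.pi * t / p) + 0.2 * rng.standard_normal(256)
          for p in ([64] * 10 + [16] * 10)]
y = np.array([0] * 10 + [1] * 10)

X = to_matrix([bag_of_patterns(s) for s in series])
# Leave-one-out 1-nearest-neighbor by Euclidean distance.
correct = 0
for i in range(len(series)):
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    correct += y[int(np.argmin(d))] == y[i]
print("LOO 1-NN accuracy:", correct / len(series))
```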


Applied Artificial Intelligence | 2008

Introduction: Special Issue on Applications of Grammatical Inference

Colin de la Higuera; Tim Oates; Menno van Zaanen

This special issue of Applied Artificial Intelligence contains articles on applications of grammar induction (GI)—a research area concerned, not surprisingly, with learning grammars from examples. A grammar is a rule-based, generative model of the elements in a possibly infinite set, where these elements are typically complex, structured objects like strings, trees, and graphs. The GI problem is to identify a grammar given some of the elements in the set it generates (and possibly some elements that are not in that set). In the context of GI, the most familiar grammars are those for formal languages that generate sets of strings. In the early days of the GI field, researchers focused on inferring regular grammars, those at the lowest level of the Chomsky hierarchy. Negative learnability results showed that even this problem is computationally hard (Gold, 1967; Pitt and Warmuth, 1989). However, since those early days, research in GI has produced deep theoretical insights into the learnability of grammars at all levels of the Chomsky hierarchy, resulting in powerful, efficient algorithms for inferring a wide variety of grammars, including many that are not represented in the hierarchy at all. Grammatical representations of sets of structured objects have a number of advantages, perhaps foremost among them being explicit representation
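
A minimal, self-contained illustration of the inference setting described above: given positive example strings, build a prefix tree acceptor, a DFA that accepts exactly the sample. State-merging algorithms such as RPNI generalize from this starting point; that merging step is omitted here, and the example strings are invented.

```python
# Build a prefix tree acceptor (trie-shaped DFA) from positive example strings.
# Real grammatical inference algorithms would then merge compatible states.
def prefix_tree_acceptor(positives):
    """Return (transitions, accepting) for a DFA accepting exactly the sample."""
    transitions = {0: {}}   # state -> {symbol: next_state}
    accepting = set()
    for word in positives:
        state = 0
        for symbol in word:
            if symbol not in transitions[state]:
                new_state = len(transitions)
                transitions[state][symbol] = new_state
                transitions[new_state] = {}
            state = transitions[state][symbol]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    state = 0
    for symbol in word:
        if symbol not in transitions[state]:
            return False
        state = transitions[state][symbol]
    return state in accepting

trans, acc = prefix_tree_acceptor(["ab", "abb", "ba"])
print(accepts(trans, acc, "abb"), accepts(trans, acc, "aa"))  # True False
```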


International Conference on Tools with Artificial Intelligence | 2013

From Robots to Reinforcement Learning

Tongchun Du; Michael T. Cox; Donald Perlis; Jared Shamwell; Tim Oates

In this paper, we review recent advances in Reinforcement Learning (RL) in light of potential applications to robotics, introduce the basic concepts of RL and Markov Decision Process (MDP), and compare different RL algorithms such as Q-learning, Temporal Difference learning, the Actor Critic, and the Natural Actor Critic. We conclude that policy gradient methods are more suitable for solving continuous state/action MDP problems than RL with lookup tables or general function approximators. Further, natural policy gradient methods can efficiently converge to locally optimal solutions. Some simulation results are given to support our arguments. We also present a brief overview of our approach to developing an autonomous robot agent that can perceive, learn from and interact with the environment, and reason about and handle unexpected problems using its knowledge base.
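
Since the review compares algorithms such as Q-learning, a small self-contained example may help: tabular Q-learning on a toy five-state chain MDP. The environment and hyperparameters are invented for illustration and are unrelated to the robot systems discussed in the paper.

```python
# Tabular Q-learning on a toy 5-state chain MDP, purely to illustrate one of the
# algorithms the review compares; the environment is an invented example.
import numpy as np

N_STATES, GOAL = 5, 4          # states 0..4, reward only on reaching state 4
ACTIONS = (-1, +1)             # move left or right
alpha, gamma, eps = 0.1, 0.95, 0.1

rng = np.random.default_rng(3)
Q = np.zeros((N_STATES, len(ACTIONS)))

for _ in range(500):
    s = 0
    while s != GOAL:
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:  # greedy action with random tie-breaking
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: bootstrap from the greedy value of the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The terminal state's row is never updated, so its entry is meaningless.
print("greedy policy (0=left, 1=right):", np.argmax(Q, axis=1))
```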


International Conference on Machine Learning and Applications | 2014

Time Warping Symbolic Aggregation Approximation with Bag-of-Patterns Representation for Time Series Classification

Zhiguang Wang; Tim Oates



Information Systems Frontiers | 2014

Finding story chains in newswire articles using random walks

Xianshu Zhu; Tim Oates


Collaboration


Dive into Tim Oates's collaborations.

Top Co-Authors

Adam Page, University of Maryland
Xianshu Zhu, University of Maryland
Jim Fackler, Johns Hopkins University
Arnold P. Boedihardjo, United States Army Corps of Engineers