Khashayar Rohanimanesh
University of Massachusetts Amherst
Publications
Featured research published by Khashayar Rohanimanesh.
International Conference on Machine Learning | 2004
Charles A. Sutton; Khashayar Rohanimanesh; Andrew McCallum
In sequence modeling, we often wish to represent complex interactions between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges---a distributed state representation as in dynamic Bayesian networks (DBNs)---and parameters are tied across slices. Since exact inference can be intractable in such models, we perform approximate inference using several schedules for belief propagation, including tree-based reparameterization (TRP). On a natural-language chunking task, we show that a DCRF performs better than a series of linear-chain CRFs, achieving comparable performance using only half the training data.
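To make the factorization concrete, here is a minimal illustrative sketch (not the paper's implementation) of scoring one joint labeling under a two-chain DCRF: each time slice holds two state variables, with within-chain transition factors, a cross-chain factor per slice, and observation factors, all tied across slices. The function name and the dict-based parameter tables are hypothetical.

```python
# Hypothetical sketch: unnormalized log-score of a joint labeling (y1, y2)
# under a two-chain DCRF. All factor tables are reused at every time slice,
# i.e. parameters are tied across slices.

def dcrf_log_score(y1, y2, x, obs1, obs2, trans1, trans2, pair):
    """obs1/obs2: (label, feature) -> weight  (observation factors)
    trans1/trans2: (label, label) -> weight   (within-chain transitions)
    pair: (label1, label2) -> weight          (cross-chain edge, same slice)
    Missing entries default to weight 0."""
    score = 0.0
    for t, feat in enumerate(x):
        score += obs1.get((y1[t], feat), 0.0)
        score += obs2.get((y2[t], feat), 0.0)
        score += pair.get((y1[t], y2[t]), 0.0)          # cross-chain factor
        if t > 0:
            score += trans1.get((y1[t - 1], y1[t]), 0.0)  # chain-1 transition
            score += trans2.get((y2[t - 1], y2[t]), 0.0)  # chain-2 transition
    return score
```

Inference in the real model sums such scores over all labelings (intractable in general, hence the belief-propagation schedules); the sketch only shows how the distributed state enters one configuration's score.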
Knowledge Discovery and Data Mining | 2008
Michael L. Wick; Khashayar Rohanimanesh; Karl Schultz; Andrew McCallum
The automatic consolidation of database records from many heterogeneous sources into a single repository requires solving several information integration tasks. Although tasks such as coreference, schema matching, and canonicalization are closely related, they are most commonly studied in isolation. Systems that do tackle multiple integration problems traditionally solve each independently, allowing errors to propagate from one task to another. In this paper, we describe a discriminatively-trained model that reasons about schema matching, coreference, and canonicalization jointly. We evaluate our model on a real-world data set of people and demonstrate that simultaneously solving these tasks reduces errors over systems that either solve each task in isolation or chain them in the conventional cascade. We demonstrate nearly a 50% error reduction for coreference and a 40% error reduction for schema matching.
International Conference on Machine Learning | 2005
Khashayar Rohanimanesh; Sridhar Mahadevan
We study an approach for performing concurrent activities in Markov decision processes (MDPs) based on the coarticulation framework. We assume that the agent has multiple degrees of freedom (DOF) in the action space, which enables it to perform activities simultaneously. We demonstrate that one natural way of generating concurrency in the system is by coarticulating among the set of learned activities available to the agent. In general, due to the multiple DOF in the system, there often exists a redundant set of admissible sub-optimal policies associated with each learned activity. Such flexibility enables the agent to concurrently commit to several subgoals according to their priority levels, given a new task defined in terms of a set of prioritized subgoals. We present efficient approximate algorithms for computing such policies and for generating concurrent plans. We also evaluate our approach in a simulated domain.
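The core idea of exploiting policy redundancy can be sketched in a few lines. The following is a hypothetical illustration (not the paper's algorithm): given learned Q-values for two prioritized subgoals, take the set of actions that are near-optimal (epsilon-admissible) for the high-priority subgoal, then choose among them the action that best serves the lower-priority subgoal. The function name, dict-keyed Q-tables, and epsilon threshold are assumptions for this sketch.

```python
def coarticulate(q_primary, q_secondary, state, actions, eps=0.1):
    """Pick an action that is eps-admissible for the high-priority subgoal,
    breaking ties in favor of the lower-priority subgoal.

    q_primary/q_secondary: (state, action) -> learned Q-value."""
    best = max(q_primary[(state, a)] for a in actions)
    # Redundant set of admissible (near-optimal) actions for subgoal 1.
    admissible = [a for a in actions if q_primary[(state, a)] >= best - eps]
    # Among those, serve subgoal 2 as well as possible.
    return max(admissible, key=lambda a: q_secondary[(state, a)])
```

The agent thereby commits to both subgoals at once whenever the admissible set of the dominant subgoal leaves room to favor the subordinate one.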
Intelligent User Interfaces | 2018
Michał Dereziński; Khashayar Rohanimanesh; Aamer Hydrie
User experiences can be made more engaging by incorporating surprise. For example, online shoppers may like to view unique products. In this paper we propose an approach for detecting surprising documents, such as product titles. As the concept of surprise is subjective, there is currently no principled method for measuring the surprisingness score of a document. We present such a method: an unsupervised approach for automatically discovering surprising documents in an unlabeled corpus. Our approach is based on a probabilistic model of surprise and a construction of effective distributional word embeddings, which can be adapted to the semantic context in which the word appears. As the performance of our model does not degrade with the length of the document, it is particularly well suited for very short documents (even a single sentence). We evaluate our model both in supervised and unsupervised settings, demonstrating its state-of-the-art performance on two real-world data sets: a collection of e-commerce products from eBay, and a corpus of NSF proposals. These experiments show that our surprisingness score exhibits high correlation with human annotated labels.
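To give a feel for an embedding-based surprisingness signal, here is a deliberately simplified sketch, not the paper's probabilistic model: score a document by the mean cosine distance of its word vectors from the document centroid, so a semantically homogeneous document scores near zero while an unusual mix of words scores higher. The function name and the raw-list vector representation are assumptions.

```python
import math

def surprise_score(doc_vectors):
    """Hypothetical sketch: mean cosine distance of each word embedding
    from the document centroid. doc_vectors: non-empty list of
    equal-length word-embedding vectors (lists of floats)."""
    dim = len(doc_vectors[0])
    n = len(doc_vectors)
    centroid = [sum(v[i] for v in doc_vectors) / n for i in range(dim)]

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    # Average how far each word sits from the document's semantic center.
    return sum(1.0 - cos(v, centroid) for v in doc_vectors) / n
```

Note that, like the model described above, this per-word averaging does not degrade as documents grow, though the actual paper derives the score from a probabilistic surprise model with context-adapted embeddings rather than a plain centroid distance.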
Journal of Machine Learning Research | 2007
Charles A. Sutton; Andrew McCallum; Khashayar Rohanimanesh
International Conference on Robotics and Automation | 2001
Georgios Theocharous; Khashayar Rohanimanesh; Sridhar Mahadevan
International Conference on Machine Learning | 2011
Khashayar Rohanimanesh; Kedar Bellare; Aron Culotta; Andrew McCallum; Michael L. Wick
SIAM International Conference on Data Mining | 2009
Michael L. Wick; Aron Culotta; Khashayar Rohanimanesh; Andrew McCallum
Archive | 2003
Andrew McCallum; Khashayar Rohanimanesh; Charles Sutton
Uncertainty in Artificial Intelligence | 2001
Khashayar Rohanimanesh; Sridhar Mahadevan