
Publication


Featured research published by Tejaswini Deoskar.


International Conference on Computational Linguistics | 2008

Re-estimation of Lexical Parameters for Treebank PCFGs

Tejaswini Deoskar

We present procedures which pool lexical information estimated from unlabeled data via the Inside-Outside algorithm, with lexical information from a treebank PCFG. The procedures produce substantial improvements (up to 31.6% error reduction) on the task of determining subcategorization frames of novel verbs, relative to a smoothed Penn Treebank-trained PCFG. Even with relatively small quantities of unlabeled training data, the re-estimated models show promising improvements in labeled bracketing f-scores on Wall Street Journal parsing, and substantial benefit in acquiring the subcategorization preferences of low-frequency verbs.
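The pooling idea can be illustrated with a minimal sketch (the function, counts, and weighting below are hypothetical, not the paper's actual procedure): expected counts for lexical rules, gathered from unlabeled data via Inside-Outside, are added to treebank counts before renormalizing the PCFG's lexical distributions.

```python
# A minimal sketch of pooling lexical counts from unlabeled data with
# treebank counts. Rules are (tag, word) pairs; "weight" controls how
# strongly the unlabeled estimates influence the pooled model.

def pool_lexical_counts(treebank_counts, unlabeled_counts, weight=1.0):
    """Merge two {(tag, word): count} maps and renormalize per tag."""
    pooled = dict(treebank_counts)
    for rule, c in unlabeled_counts.items():
        pooled[rule] = pooled.get(rule, 0.0) + weight * c
    # Renormalize so P(word | tag) sums to 1 for each tag.
    totals = {}
    for (tag, _), c in pooled.items():
        totals[tag] = totals.get(tag, 0.0) + c
    return {(tag, word): c / totals[tag] for (tag, word), c in pooled.items()}

probs = pool_lexical_counts(
    {("VB", "eat"): 3.0, ("VB", "sleep"): 1.0},   # treebank counts
    {("VB", "blog"): 2.0},                        # verb unseen in the treebank
)
```

Pooling in this way gives the unseen verb "blog" a nonzero lexical probability under the VB tag, which is the effect the paper exploits for novel verbs.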


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Improving Dependency Parsers using Combinatory Categorial Grammar

Bharat Ram Ambati; Tejaswini Deoskar; Mark Steedman

Subcategorization information is a useful feature in dependency parsing. In this paper, we explore a method of incorporating this information via Combinatory Categorial Grammar (CCG) categories from a supertagger. We experiment with two popular dependency parsers (Malt and MST) for two languages: English and Hindi. For both languages, CCG categories improve the overall accuracy of both parsers by around 0.3-0.5% in all experiments. For both parsers, we see larger improvements specifically on dependencies at which they are known to be weak: long distance dependencies for Malt, and verbal arguments for MST. The result is particularly interesting in the case of the fast greedy parser (Malt), since improving its accuracy without significantly compromising speed is relevant for large scale applications such as parsing the web.
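At a high level, the incorporation step amounts to attaching each token's supertagger-predicted CCG category as one more feature alongside the usual word and POS features. The sketch below is illustrative only; it is not the actual Malt or MST feature templates.

```python
# A hedged sketch: enrich (word, POS) tokens with CCG categories from a
# supertagger, so a downstream dependency parser can condition on them.

def add_ccg_feature(tokens, supertags):
    """Attach a predicted CCG category to each (word, pos) token."""
    return [
        {"word": w, "pos": p, "ccg": tag}
        for (w, p), tag in zip(tokens, supertags)
    ]

enriched = add_ccg_feature(
    [("John", "NNP"), ("sleeps", "VBZ")],
    ["NP", "S[dcl]\\NP"],  # categories a supertagger might assign
)
```

The category `S[dcl]\NP` encodes that "sleeps" takes a single NP argument, which is exactly the subcategorization signal the paper feeds to the parsers.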


North American Chapter of the Association for Computational Linguistics | 2015

An Incremental Algorithm for Transition-based CCG Parsing

Bharat Ram Ambati; Tejaswini Deoskar; Mark Johnson; Mark Steedman

Incremental parsers have potential advantages for applications like language modeling for machine translation and speech recognition. We describe a new algorithm for incremental transition-based Combinatory Categorial Grammar parsing. As English CCGbank derivations are mostly right branching and non-incremental, we design our algorithm based on the dependencies resolved rather than the derivation. We introduce two new actions in the shift-reduce paradigm based on the idea of ‘revealing’ (Pareschi and Steedman, 1987) the required information during parsing. On the standard CCGbank test data, our algorithm achieved improvements of 0.88% in labeled and 2.0% in unlabeled F-score over a greedy non-incremental shift-reduce parser.
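To illustrate the shift-reduce paradigm the paper builds on, here is a toy oracle-driven transition loop. The paper's CCG algorithm extends this action set with the 'revealing'-based actions; those are not modeled in this sketch.

```python
# A toy shift-reduce loop: SHIFT moves the next word from the buffer to
# the stack; REDUCE combines the top two stack items into one constituent.

def run_transitions(words, actions):
    """Apply a sequence of SHIFT/REDUCE actions to an input sentence."""
    stack, buffer = [], list(words)
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "REDUCE":
            right = stack.pop()
            left = stack.pop()
            stack.append((left, right))  # combine adjacent constituents
    return stack

result = run_transitions(["John", "sleeps"], ["SHIFT", "SHIFT", "REDUCE"])
```

An incremental parser constrains which action sequences are allowed so that a connected analysis is maintained as each word is shifted, rather than deferring all combination to the end.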


North American Chapter of the Association for Computational Linguistics | 2016

Shift-Reduce CCG Parsing using Neural Network Models

Bharat Ram Ambati; Tejaswini Deoskar; Mark Steedman

We present a neural network based shift-reduce CCG parser, the first neural-network based parser for CCG. We also study the impact of neural network based tagging models, and greedy versus beam-search parsing, by using a structured neural network model. Our greedy parser obtains a labeled F-score of 83.27%, the best reported result for greedy CCG parsing in the literature (an improvement of 2.5% over a perceptron based greedy parser) and is more than three times faster. With a beam, our structured neural network model gives a labeled F-score of 85.57% which is 0.6% better than the perceptron based counterpart.
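The greedy-versus-beam contrast can be sketched generically: greedy decoding commits to the single best action at each step, while beam search keeps the few best partial action sequences. The scorer below is a toy placeholder, not the paper's neural model.

```python
# A generic beam-search sketch over action sequences. score_fn(seq, a)
# returns the score of taking action a after partial sequence seq; a real
# parser would use the neural model's action scores here.

def beam_decode(score_fn, actions, steps, beam_size):
    """Keep the beam_size best partial action sequences at each step."""
    beam = [((), 0.0)]
    for _ in range(steps):
        candidates = [
            (seq + (a,), s + score_fn(seq, a))
            for seq, s in beam
            for a in actions
        ]
        candidates.sort(key=lambda x: -x[1])
        beam = candidates[:beam_size]
    return beam[0][0]  # best complete sequence

# Toy scorer that prefers SHIFT for the first two steps, then REDUCE.
toy_score = lambda seq, a: 1.0 if (a == "SHIFT") == (len(seq) < 2) else 0.0
best = beam_decode(toy_score, ["SHIFT", "REDUCE"], steps=3, beam_size=2)
```

Setting `beam_size=1` recovers greedy decoding, which is why a beam parser can trade speed for accuracy by widening the beam.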


Journal of Logic and Computation | 2014

Learning Structural Dependencies of Words in the Zipfian Tail

Tejaswini Deoskar; Markos Mylonakis; Khalil Sima'an

Using semi-supervised EM, we learn fine-grained but sparse lexical parameters of a generative parsing model (a PCFG) initially estimated over the Penn Treebank. Our lexical parameters employ supertags, which encode complex structural information at the pre-terminal level, and are particularly sparse in labeled data; our goal is to learn these for words that are unseen or rare in the labeled data. In order to guide estimation from unlabeled data, we incorporate both structural and lexical priors from the labeled data. We get a large error reduction in parsing ambiguous structures associated with unseen verbs, the most important case of learning lexico-structural dependencies. We also obtain a statistically significant improvement in labeled bracketing score of the treebank PCFG, the first successful improvement via semi-supervised EM of a generative structured model already trained over large labeled data.
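One standard way to realize "priors from the labeled data" in an EM setting is a Dirichlet-style pseudo-count prior, combined with the expected counts from the unlabeled corpus at each M-step. The sketch below uses that formulation with made-up supertag counts; the paper's exact priors differ.

```python
# A minimal MAP-style M-step sketch: expected counts from unlabeled data
# are combined with scaled prior counts from labeled data, then
# normalized into a probability distribution.

def map_estimate(expected_counts, prior_counts, prior_strength=1.0):
    """Normalize (expected + prior_strength * prior) counts."""
    keys = set(expected_counts) | set(prior_counts)
    raw = {
        k: expected_counts.get(k, 0.0)
        + prior_strength * prior_counts.get(k, 0.0)
        for k in keys
    }
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

theta = map_estimate(
    {"NP": 4.0, "S/NP": 0.0},   # expected supertag counts (unlabeled data)
    {"NP": 1.0, "S/NP": 1.0},   # prior counts (labeled data)
)
```

The prior keeps rare parameters from collapsing to zero when the unlabeled corpus happens not to exhibit them, which matters precisely for the Zipfian tail of rare words.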


Meeting of the Association for Computational Linguistics | 2013

Using CCG categories to improve Hindi dependency parsing

Bharat Ram Ambati; Tejaswini Deoskar; Mark Steedman


Language Resources and Evaluation | 2008

Induction of Treebank-Aligned Lexical Resources

Tejaswini Deoskar; Mats Rooth


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Generalizing a Strongly Lexicalized Parser using Unlabeled Data

Tejaswini Deoskar; Christos Christodoulopoulos; Alexandra Birch; Mark Steedman


International Workshop/Conference on Parsing Technologies | 2011

Learning Structural Dependencies of Words in the Zipfian Tail

Tejaswini Deoskar; Markos Mylonakis; Khalil Sima'an


International Workshop/Conference on Parsing Technologies | 2011

Simple Semi-Supervised Learning for Prepositional Phrase Attachment

Gregory F. Coppola; Alexandra Birch; Tejaswini Deoskar; Mark Steedman
