Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haitao Mi is active.

Publication


Featured research published by Haitao Mi.


Empirical Methods in Natural Language Processing | 2008

Forest-based Translation Rule Extraction

Haitao Mi; Liang Huang

Translation rule extraction is a fundamental problem in machine translation, especially for linguistically syntax-based systems that need parse trees from either or both sides of the bitext. The current dominant practice uses only 1-best trees, which adversely affects rule-set quality due to parsing errors. We therefore propose a novel approach that extracts rules from a packed forest, which compactly encodes exponentially many parses. Experiments show that this method improves translation quality by over 1 BLEU point on a state-of-the-art tree-to-string system, and is 0.5 points better than (and twice as fast as) extracting from 30-best parses. When combined with our previous work on forest-based decoding, it achieves a 2.5 BLEU point improvement over the baseline, and even outperforms the hierarchical system Hiero by 0.7 points.


International Joint Conference on Natural Language Processing | 2009

Joint Decoding with Multiple Translation Models

Yang Liu; Haitao Mi; Yang Feng; Qun Liu

Current SMT systems usually decode with a single translation model and cannot benefit from the strengths of other models during decoding. We instead propose joint decoding, a method that combines multiple translation models in one decoder. Our joint decoder draws connections among multiple models by integrating the translation hypergraphs they produce individually, so one model can share translations and even derivations with other models. Comparable to the state-of-the-art system combination technique, joint decoding achieves an absolute improvement of 1.5 BLEU points over individual decoding.


Empirical Methods in Natural Language Processing | 2009

Lattice-based System Combination for Statistical Machine Translation

Yang Feng; Yang Liu; Haitao Mi; Qun Liu; Yajuan Lü

Current system combination methods usually use confusion networks to find consensus translations among different systems. Because they require one-to-one mappings between the words in candidate translations, confusion networks have difficulty handling more general situations in which several words are aligned to several other words. Instead, we propose a lattice-based system combination model that allows for such phrase alignments and uses lattices to encode all candidate translations. Experiments show that our approach achieves significant improvements over the state-of-the-art baseline system on Chinese-to-English translation test sets.


North American Chapter of the Association for Computational Linguistics | 2015

Shift-Reduce Constituency Parsing with Dynamic Programming and POS Tag Lattice

Haitao Mi; Liang Huang

We present the first dynamic programming (DP) algorithm for shift-reduce constituency parsing, which extends the DP idea of Huang and Sagae (2010) to context-free grammars. To alleviate the propagation of errors from part-of-speech tagging, we also extend the parser to take a tag lattice instead of a fixed tag sequence. Experiments on both English and Chinese treebanks show that our DP parser significantly improves parsing quality over non-DP baselines, and achieves the best accuracies among empirical linear-time parsers.
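As background for the shift-reduce mechanism this abstract builds on, here is a minimal greedy shift-reduce recognizer over a toy binary grammar; the rules and the greedy (non-DP, beam-free) policy are illustrative assumptions only, not the paper's algorithm:

```python
# Greedy shift-reduce over a toy binary CFG: reduce the top two stack
# symbols whenever a rule matches, otherwise shift the next POS tag.
RULES = {("DT", "NN"): "NP", ("VB", "NP"): "VP", ("NP", "VP"): "S"}

def shift_reduce(tags):
    stack, buf = [], list(tags)
    while buf or len(stack) > 1:
        if len(stack) >= 2 and (stack[-2], stack[-1]) in RULES:
            # reduce: replace the top two symbols with their parent label
            stack[-2:] = [RULES[(stack[-2], stack[-1])]]
        elif buf:
            stack.append(buf.pop(0))  # shift the next POS tag
        else:
            break  # no action applies: parse failure
    return stack
```

The paper's contributions are precisely what this sketch lacks: dynamic programming to merge equivalent stack states, and a lattice of candidate POS tags in place of a fixed input tag sequence.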


Meeting of the Association for Computational Linguistics | 2009

Sub-Sentence Division for Tree-Based Machine Translation

Hao Xiong; Wenwen Xu; Haitao Mi; Yang Liu; Qun Liu

Tree-based statistical machine translation models have made significant progress in recent years, especially when replacing 1-best trees with packed forests. However, as parsing accuracy usually drops dramatically as sentence length increases, translating long sentences often takes a long time and only produces degenerate translations. We propose a new method, sub-sentence division, that reduces decoding time and improves translation quality for tree-based translation. Our approach divides long sentences into several sub-sentences by exploiting tree structures. Large-scale experiments on the NIST 2008 Chinese-to-English test set show that our approach achieves an absolute improvement of 1.1 BLEU points over the baseline system in 50% less time.


Conference on Computational Natural Language Learning | 2016

Semi-supervised Clustering for Short Text via Deep Representation Learning

Zhiguo Wang; Haitao Mi; Abraham Ittycheriah

In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering. We design a novel objective to combine the representation learning process and the k-means clustering process together, and optimize the objective with both labeled data and unlabeled data iteratively until convergence through three steps: (1) assign each short text to its nearest centroid based on its representation from the current neural networks; (2) re-estimate the cluster centroids based on cluster assignments from step (1); (3) update neural networks according to the objective by keeping centroids and cluster assignments fixed. Experimental results on four datasets show that our method works significantly better than several other text clustering methods.
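The three alternating steps in the abstract can be sketched as a k-means-style loop. As an illustration only, the neural encoder is reduced to fixed input vectors and step (3) is a placeholder; the function name and interface are assumptions, not the paper's code:

```python
import numpy as np

def semi_supervised_kmeans(X, k, n_iters=10, seed=0):
    """Alternating assignment / centroid / (stub) encoder updates.

    X: (n, d) array of text representations. In the paper these come
    from a neural encoder that is itself updated in step (3); here
    that step is left as a no-op placeholder.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # (1) assign each text to its nearest centroid
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        assign = dists.argmin(axis=1)
        # (2) re-estimate centroids from the current assignments
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
        # (3) update the encoder with centroids/assignments fixed
        #     (omitted: a gradient step on the joint objective,
        #     using the labeled data to guide the representation)
    return assign, centroids
```

In the full method, step (3) is what distinguishes the approach from plain k-means: the representations themselves move toward their assigned centroids while respecting the labeled examples.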


Meeting of the Association for Computational Linguistics | 2014

Hierarchical MT Training using Max-Violation Perceptron

Kai Zhao; Liang Huang; Haitao Mi; Abraham Ittycheriah

Large-scale discriminative training has become promising for statistical machine translation by leveraging huge training corpora; for example, the recent effort in phrase-based MT (Yu et al., 2013) significantly outperforms mainstream methods that train only on small tuning sets. However, phrase-based MT suffers from limited reorderings, so its training can utilize only a small portion of the bitext due to the distortion limit. To address this problem, we extend Yu et al. (2013) to syntax-based MT by generalizing their latent-variable “violation-fixing” perceptron from graphs to hypergraphs. Experiments confirm that our method yields up to a +1.2 BLEU improvement over mainstream methods such as MERT and PRO.
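The "violation-fixing" perceptron referenced above updates the model at the search step where the incorrect derivation most outscores the gold one. A minimal structured-perceptron-style sketch of that single update (flat feature vectors stand in for the paper's hypergraph search; all names are hypothetical):

```python
import numpy as np

def max_violation_update(weights, gold_prefixes, pred_prefixes, lr=1.0):
    """One max-violation perceptron step.

    gold_prefixes / pred_prefixes: lists of feature vectors, one per
    decoding step, for the gold derivation prefix and the best-in-beam
    prefix at that step. The update is taken at the step where the
    model most prefers the wrong prefix (the maximum violation).
    """
    violations = [weights @ (p - g)
                  for g, p in zip(gold_prefixes, pred_prefixes)]
    i = int(np.argmax(violations))
    if violations[i] > 0:  # only update on an actual violation
        weights = weights + lr * (gold_prefixes[i] - pred_prefixes[i])
    return weights
```

Updating at the maximum violation, rather than only at the end of search, keeps the update valid even when the gold derivation falls off the beam early, which is what makes the method usable with inexact (beam or hypergraph) search.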


Meeting of the Association for Computational Linguistics | 2008

Forest-Based Translation

Haitao Mi; Liang Huang; Qun Liu


Empirical Methods in Natural Language Processing | 2011

A Novel Dependency-to-String Model for Statistical Machine Translation

Jun Xie; Haitao Mi; Qun Liu


International Conference on Computational Linguistics | 2008

Word Lattice Reranking for Chinese Word Segmentation and Part-of-Speech Tagging

Wenbin Jiang; Haitao Mi; Qun Liu

Collaboration


Dive into Haitao Mi's collaborations.

Top Co-Authors

Qun Liu (Dublin City University)
Liang Huang (City University of New York)
Yang Feng (Chinese Academy of Sciences)
Hao Xiong (Chinese Academy of Sciences)
Yajuan Lü (Chinese Academy of Sciences)
Jun Xie (Chinese Academy of Sciences)
Wenbin Jiang (Chinese Academy of Sciences)