Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lemao Liu is active.

Publication


Featured research published by Lemao Liu.


Meeting of the Association for Computational Linguistics | 2015

Neural Network Transduction Models in Transliteration Generation

Andrew M. Finch; Lemao Liu; Xiaolin Wang; Eiichiro Sumita

In this paper we examine the effectiveness of neural network sequence-to-sequence transduction for the task of transliteration generation. In this year's shared evaluation we submitted two systems to all tasks. The primary system was based on the system used for the NEWS 2012 workshop, augmented with an additional feature: the generation probability from a neural network. The secondary system was the neural network model on its own, together with a simple beam search algorithm. Our results show that adding the neural network score as a feature to the phrase-based statistical machine transliteration system increased its performance. In addition, although the neural network alone was not able to match the performance of our primary system (which exploits it), it delivered respectable performance on most language pairs, which is very promising considering how recent the technique is.
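
To make the primary system's design concrete, here is a minimal sketch of how a neural generation probability can enter a phrase-based log-linear scorer as one extra weighted feature. All names and numbers are illustrative stand-ins, not code or values from the paper.

```python
# Sketch: neural log-probability as an extra feature in a log-linear
# transliteration scorer. Everything here is a toy stand-in.

def phrase_features(source, candidate):
    # A real system would read these from the phrase table and length model.
    return {"phrase_logprob": -2.3, "length_penalty": -float(len(candidate))}

class ToyNeuralModel:
    def logprob(self, source, candidate):
        # A trained seq2seq model would return log P(candidate | source).
        return -1.7

def score(source, candidate, weights, nn_model):
    feats = phrase_features(source, candidate)
    feats["nn_logprob"] = nn_model.logprob(source, candidate)  # the added feature
    return sum(weights[name] * value for name, value in feats.items())

weights = {"phrase_logprob": 1.0, "length_penalty": 0.2, "nn_logprob": 0.5}
print(score("ハロー", "hello", weights, ToyNeuralModel()))
```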


Proceedings of the Sixth Named Entity Workshop | 2016

Target-Bidirectional Neural Models for Machine Transliteration

Andrew M. Finch; Lemao Liu; Xiaolin Wang; Eiichiro Sumita

Our purely neural network-based system represents a paradigm shift away from the phrase-based statistical machine translation techniques we have used in the past. The approach exploits the agreement between a pair of target-bidirectional LSTMs in order to generate balanced targets with both good suffixes and good prefixes. The evaluation results show that the method matches and even surpasses the current state of the art on most language pairs, but they also expose weaknesses on some tasks, motivating further study. The Janus toolkit used to build the systems in the evaluation is publicly available at https://github.com/lemaoliu/Agtarbidir.
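
A minimal sketch of the agreement idea, assuming two trained directional scorers (the toy lambdas below are hypothetical stand-ins): each candidate is rescored by both the left-to-right model and a right-to-left model reading the reversed target, so neither a weak prefix nor a weak suffix can hide.

```python
# Sketch: n-best reranking with target-bidirectional agreement.

def agreement_score(source, candidate, l2r_logprob, r2l_logprob):
    # Sum of the two directional log-probabilities; the right-to-left
    # model scores the character-reversed candidate.
    return l2r_logprob(source, candidate) + r2l_logprob(source, candidate[::-1])

def rerank(source, nbest, l2r_logprob, r2l_logprob):
    return max(nbest, key=lambda c: agreement_score(source, c, l2r_logprob, r2l_logprob))

# Toy scorers standing in for two trained LSTMs.
l2r = lambda s, t: -0.1 * len(t)
r2l = lambda s, t: -0.1 * len(t)
print(rerank("source", ["guradusu", "gurado"], l2r, r2l))
```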


North American Chapter of the Association for Computational Linguistics | 2016

Agreement on Target-bidirectional Neural Machine Translation

Lemao Liu; Masao Utiyama; Andrew M. Finch; Eiichiro Sumita

Neural machine translation (NMT) with recurrent neural networks has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from unbalanced outputs, an issue that arises both from the nature of recurrent neural networks themselves and from the challenges inherent in machine translation. To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU points over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.
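
Written as a formula (my notation, not taken from the paper), a plausible form of the joint agreement objective over the two directional parameter sets is:

```latex
\mathcal{L}(\theta_{\rightarrow}, \theta_{\leftarrow})
  = \sum_{(x,\,y)} \Big[ \log p(y \mid x;\, \theta_{\rightarrow})
                       + \log p(\bar{y} \mid x;\, \theta_{\leftarrow}) \Big]
```

where \bar{y} denotes the reversal of y. Decoding then searches for a target that both directions assign high probability, which directly penalizes the unbalanced outputs (strong prefix, weak suffix) the abstract describes.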


ACM Transactions on Asian Language Information Processing | 2014

Discriminative Training for Log-Linear Based SMT: Global or Local Methods

Lemao Liu; Tiejun Zhao; Taro Watanabe; Hailong Cao; Conghui Zhu

In statistical machine translation, standard methods such as MERT tune a single weight vector on a given development set. However, these methods suffer from two problems caused by the diversity and uneven distribution of source sentences. First, their performance depends heavily on the choice of development set, which may lead to unstable performance at test time. Second, sentence-level translation quality is not assured, since tuning is performed at the document level rather than at the sentence level. In contrast to standard global training, in which a single weight vector is learned, we propose novel local training methods to address these two problems. We perform training and testing in one step by locally learning a sentence-wise weight vector for each input sentence. Since the time of each tuning step is non-negligible, and learning sentence-wise weights for the entire test set means many passes of tuning, efficiency is a great challenge for local training. We propose an efficient two-phase method that puts local training into practice by employing an ultraconservative update. On NIST Chinese-to-English translation tasks with both medium and large training data, our local training methods significantly outperform standard methods, with maximal improvements of up to 2.0 BLEU points, while their efficiency remains comparable to that of the standard methods.
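
One plausible reading of the ultraconservative update is a MIRA-style step that starts from the globally tuned weight and moves it as little as possible to fit the current sentence. The sketch below is illustrative, with made-up feature vectors, not code from the paper.

```python
import numpy as np

def ultraconservative_update(w, feat_better, feat_worse, loss, C=1.0):
    """Move w minimally so the higher-BLEU candidate outscores the
    lower-BLEU one by at least `loss` (step capped by aggressiveness C)."""
    delta = feat_better - feat_worse
    margin = float(w @ delta)
    if margin >= loss:
        return w                                   # constraint already satisfied
    tau = min(C, (loss - margin) / float(delta @ delta))
    return w + tau * delta

# Phase 1 tunes a global weight offline; phase 2 adapts it per test sentence.
w_global = np.array([1.0, 0.5, -0.2])
feat_better = np.array([0.9, 0.1, 0.0])   # features of a high-BLEU candidate
feat_worse = np.array([0.2, 0.8, 0.5])    # features of a low-BLEU candidate
print(ultraconservative_update(w_global, feat_better, feat_worse, loss=0.6))
```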


International Conference on Natural Computation | 2012

Softmax-margin training for statistical machine translation

Wenwen Zhang; Lemao Liu; Hailong Cao; Tiejun Zhao

The training procedure is very important in statistical machine translation (SMT); it has a great influence on the final performance of a translation system. The most widely used method in SMT is minimum error rate training (MERT), which is effective for estimating the feature weights. However, MERT does not use regularization and has been observed to overfit. In this paper, we describe a method named softmax-margin, a modification of max-margin training. The approach is simple, efficient, and easy to implement. We conduct our experiments using data sets from the WMT shared tasks. Results on a small-scale French-English translation task show competitive performance compared to MERT.
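
A minimal sketch of the softmax-margin objective on an n-best list: the partition function is computed over cost-augmented scores, so candidates with high error (for example, 1 minus sentence-level BLEU) are pushed further below the reference. The numbers are made up for illustration.

```python
import numpy as np

def softmax_margin_loss(scores, costs, gold_idx):
    """Log-loss whose normalizer adds each candidate's cost to its score."""
    augmented = scores + costs               # cost-augmented scores
    log_z = np.logaddexp.reduce(augmented)   # numerically stable log-sum-exp
    return -scores[gold_idx] + log_z

scores = np.array([2.0, 1.5, 0.3])   # model scores for three candidates
costs = np.array([0.0, 0.4, 0.9])    # the reference candidate has zero cost
print(softmax_margin_loss(scores, costs, gold_idx=0))
```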


Meeting of the Association for Computational Linguistics | 2013

Additive Neural Networks for Statistical Machine Translation

Lemao Liu; Taro Watanabe; Eiichiro Sumita; Tiejun Zhao


Empirical Methods in Natural Language Processing | 2012

Locally Training the Log-Linear Model for SMT

Lemao Liu; Hailong Cao; Taro Watanabe; Tiejun Zhao; Mo Yu; Conghui Zhu


International Conference on Computational Linguistics | 2016

Neural Machine Translation with Supervised Attention

Lemao Liu; Masao Utiyama; Andrew M. Finch; Eiichiro Sumita


National Conference on Artificial Intelligence | 2017

Translation Prediction with Source Dependency-Based Context Representation

Kehai Chen; Tiejun Zhao; Muyun Yang; Lemao Liu


Empirical Methods in Natural Language Processing | 2017

Instance Weighting for Neural Machine Translation Domain Adaptation

Rui Wang; Masao Utiyama; Lemao Liu; Kehai Chen; Eiichiro Sumita

Collaboration


Dive into Lemao Liu's collaborations.

Top Co-Authors

Tiejun Zhao
Harbin Institute of Technology

Eiichiro Sumita
National Institute of Information and Communications Technology

Hailong Cao
Harbin Institute of Technology

Conghui Zhu
Harbin Institute of Technology

Andrew M. Finch
National Institute of Information and Communications Technology

Masao Utiyama
National Institute of Information and Communications Technology

Kehai Chen
Harbin Institute of Technology

Taro Watanabe
National Institute of Information and Communications Technology

Rui Wang
Shanghai Jiao Tong University