Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yingce Xia is active.

Publication


Featured research published by Yingce Xia.


International Joint Conference on Artificial Intelligence | 2017

Dual Inference for Machine Learning

Yingce Xia; Jiang Bian; Tao Qin; Nenghai Yu; Tie-Yan Liu

Recent years have witnessed the rapid development of machine learning in solving artificial intelligence (AI) tasks in many domains, including translation, speech, and image processing. Within these domains, AI tasks are usually not independent. As a specific type of relationship, structural duality exists between many pairs of AI tasks, such as translation from one language to another vs. the opposite direction, speech recognition vs. speech synthesis, and image classification vs. image generation. The importance of such duality has been highlighted by recent studies, which revealed that it can boost the learning of the two tasks in the dual form. However, there has been little investigation into how to leverage this invaluable relationship in the inference stage of AI tasks. In this paper, we propose a general framework of dual inference which can take advantage of the existing models of two dual tasks, without retraining, to conduct inference for one individual task. Empirical studies on three pairs of specific dual tasks, covering machine translation, sentiment analysis, and image processing, have illustrated that dual inference can significantly improve the performance of each individual task.
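The framework described above combines the primal model p(y|x) with the dual model p(x|y) and a marginal p(y) via Bayes' rule to score candidates at inference time, without retraining either model. A minimal reranking sketch, where the function names, the interpolation weight `alpha`, and the log-probability interfaces are illustrative assumptions rather than the paper's exact formulation:

```python
def dual_inference_score(cand, x, primal_logp, dual_logp, marginal_logp, alpha=0.5):
    """Score a candidate output y for input x by interpolating the primal
    model log p(y|x) with the dual-model reconstruction via Bayes' rule:
    log p(y|x) is proportional to log p(x|y) + log p(y)."""
    return (alpha * primal_logp(cand, x)
            + (1 - alpha) * (dual_logp(x, cand) + marginal_logp(cand)))

def dual_rerank(x, candidates, primal_logp, dual_logp, marginal_logp, alpha=0.5):
    """Rerank candidate outputs (e.g. from beam search) with the dual score,
    using both pretrained models as-is."""
    return max(candidates,
               key=lambda y: dual_inference_score(y, x, primal_logp, dual_logp,
                                                  marginal_logp, alpha))
```

With `alpha=1.0` this reduces to ordinary primal decoding, so the dual model's influence can be tuned without touching either model's parameters.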


Neurocomputing | 2017

Finite budget analysis of multi-armed bandit problems

Yingce Xia; Tao Qin; Wenkui Ding; Haifang Li; Xudong Zhang; Nenghai Yu; Tie-Yan Liu

In the budgeted multi-armed bandit (MAB) problem, a player receives a random reward and needs to pay a cost after pulling an arm, and cannot pull any more arms after running out of budget. In this paper, we give an extensive study of upper-confidence-bound-based algorithms and a greedy algorithm for budgeted MABs. We perform theoretical analysis on the proposed algorithms and show that they all enjoy sublinear regret bounds with respect to the budget B. Furthermore, by carefully choosing the parameters, they can even achieve log-linear regret bounds. We also prove that the asymptotic lower bound for budgeted Bernoulli bandits is Ω(ln B). Our proof technique can be used to improve the theoretical results for fractional KUBE [26] and Budgeted Thompson Sampling [30].
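The setting above replaces a fixed horizon with a budget that each pull consumes. A minimal sketch of a UCB-style index policy for this setting: the index is the empirical reward/cost ratio plus an exploration bonus. The specific bonus term and the arm interface here are illustrative choices, not the paper's exact algorithms:

```python
import math

def budgeted_ucb(arms, budget):
    """Toy UCB-style policy for budgeted bandits. Each arm is a pair
    (reward_fn, cost_fn) of callables returning values in (0, 1].
    Play continues until the budget is exhausted."""
    k = len(arms)
    pulls = [0] * k
    rew_sum = [0.0] * k
    cost_sum = [0.0] * k
    total_reward = 0.0
    t = 0

    def pull(i):
        nonlocal total_reward, budget, t
        r, c = arms[i][0](), arms[i][1]()
        pulls[i] += 1
        rew_sum[i] += r
        cost_sum[i] += c
        total_reward += r
        budget -= c
        t += 1

    for i in range(k):                      # initialisation: pull each arm once
        if budget <= 0:
            return total_reward
        pull(i)
    while budget > 0:
        # index: empirical reward/cost ratio plus a bonus shrinking in pulls[i]
        idx = [rew_sum[i] / cost_sum[i] + math.sqrt(2.0 * math.log(t) / pulls[i])
               for i in range(k)]
        pull(max(range(k), key=lambda i: idx[i]))
    return total_reward
```

The reward/cost ratio is the natural analogue of the mean-reward index: spending budget on the arm with the best ratio maximizes reward earned per unit of budget.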


European Conference on Machine Learning | 2017

Sequence Generation with Target Attention

Yingce Xia; Fei Tian; Tao Qin; Nenghai Yu; Tie-Yan Liu

The source-target attention mechanism (briefly, source attention) has become one of the key components in a wide range of sequence generation tasks, such as neural machine translation, image captioning, and open-domain dialogue generation. In these tasks, the attention mechanism, which typically controls the information flow from the encoder to the decoder, enables every component of the target sequence to be generated by relying on different source components. While source attention has attracted much research interest, little work has examined whether the generation of the target sequence can additionally benefit from attending back to itself, which is intuitively motivated by the nature of attention. To investigate this question, in this paper we propose a new target-target attention mechanism (briefly, target attention). While generating the target sequence, the target attention mechanism takes into account the relationship between the component to be generated and its preceding context within the target sequence, so that it can better maintain coherence and improve the readability of the generated sequence. Furthermore, it complements the information from source attention so as to further enhance semantic adequacy. After designing an effective approach to incorporating target attention into the encoder-decoder framework, we conduct extensive experiments on both neural machine translation and image captioning. Experimental results clearly demonstrate the effectiveness of our design integrating both source and target attention for sequence generation tasks.
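A conceptual sketch of one decoding step that combines source attention (over encoder states) with target attention (over previously generated target states). The plain dot-product scoring and the concatenation of the two contexts are illustrative simplifications; the paper's model would use learned projections for each attention:

```python
import numpy as np

def attend(query, keys):
    """Dot-product attention: softmax over key scores, weighted sum of keys."""
    scores = keys @ query
    w = np.exp(scores - scores.max())       # numerically stable softmax
    w /= w.sum()
    return w @ keys

def decode_step(state, enc_states, prev_tgt_states):
    """One decoding step: the decoder state attends both to the encoder
    states (source attention) and to its own previously generated states
    (target attention); the two contexts are combined for the output layer."""
    src_ctx = attend(state, enc_states)
    tgt_ctx = (attend(state, prev_tgt_states) if len(prev_tgt_states)
               else np.zeros_like(state))    # first step: nothing to attend to
    return np.concatenate([src_ctx, tgt_ctx])
```

At the first step the target context is empty, which matches the intuition in the abstract: target attention only begins to contribute once a preceding context exists.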


International Joint Conference on Artificial Intelligence | 2018

Finite Sample Analysis of LSTD with Random Projections and Eligibility Traces

Haifang Li; Yingce Xia; Wensheng Zhang

Policy evaluation with linear function approximation is an important problem in reinforcement learning. When facing high-dimensional feature spaces, such a problem becomes extremely hard in terms of both computational efficiency and approximation quality. We propose a new algorithm, LSTD(λ)-RP, which leverages random projection techniques and takes eligibility traces into consideration to tackle the above two challenges. We carry out theoretical analysis of LSTD(λ)-RP, and provide meaningful upper bounds of the estimation error, approximation error and total generalization error. These results demonstrate that LSTD(λ)-RP can benefit from random projection and eligibility traces strategies.
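The entry above combines two standard ingredients: compress high-dimensional features with a random projection, then run LSTD(λ) with eligibility traces in the low-dimensional space. A minimal sketch under those assumptions; the trajectory interface and the small ridge term added for numerical stability are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def lstd_lambda_rp(trajectory, dim_low, gamma=0.99, lam=0.8, seed=0):
    """Sketch of LSTD(lambda) with random projections.
    `trajectory` is a list of (phi, reward, phi_next) tuples where phi is
    a 1-D feature vector. Features are projected to `dim_low` dimensions
    with a random Gaussian matrix before the LSTD(lambda) updates."""
    d = trajectory[0][0].shape[0]
    rng = np.random.default_rng(seed)
    # Gaussian entries with variance 1/dim_low approximately preserve inner products
    P = rng.normal(0.0, 1.0 / np.sqrt(dim_low), size=(dim_low, d))
    A = np.zeros((dim_low, dim_low))
    b = np.zeros(dim_low)
    z = np.zeros(dim_low)                    # eligibility trace
    for phi, r, phi_next in trajectory:
        x, x_next = P @ phi, P @ phi_next
        z = gamma * lam * z + x              # decay and accumulate the trace
        A += np.outer(z, x - gamma * x_next)
        b += z * r
    theta = np.linalg.solve(A + 1e-6 * np.eye(dim_low), b)  # ridge for stability
    return theta, P                          # V(s) is approximated by theta @ (P @ phi(s))
```

Solving the low-dimensional system costs O(dim_low³) instead of O(d³), which is the computational payoff the abstract refers to; the error bounds quantify what the projection costs in approximation quality.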


Neural Information Processing Systems | 2016

Dual Learning for Machine Translation

Di He; Yingce Xia; Tao Qin; Liwei Wang; Nenghai Yu; Tie-Yan Liu; Wei-Ying Ma



International Conference on Artificial Intelligence | 2015

Thompson sampling for budgeted multi-armed bandits

Yingce Xia; Haifang Li; Tao Qin; Nenghai Yu; Tie-Yan Liu
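The entry above studies Thompson sampling under a budget constraint. A minimal sketch of the Bernoulli-reward, Bernoulli-cost variant: keep Beta posteriors for each arm's reward and cost probabilities, sample from both, and pull the arm with the best sampled reward/cost ratio. The interface and update details are illustrative assumptions, not the paper's pseudocode:

```python
import random

def budgeted_thompson_sampling(arms, budget, seed=0):
    """Thompson sampling sketch for budgeted bandits with 0/1 rewards and
    costs. Each arm is a pair (reward_fn, cost_fn) returning 0 or 1; every
    arm is assumed to have positive expected cost so the loop terminates."""
    rng = random.Random(seed)
    k = len(arms)
    r_post = [[1, 1] for _ in range(k)]     # Beta(alpha, beta) for reward prob
    c_post = [[1, 1] for _ in range(k)]     # Beta(alpha, beta) for cost prob
    total_reward = 0
    while budget > 0:
        # sample a plausible reward/cost ratio for each arm from the posteriors
        scores = [rng.betavariate(*r_post[i]) / max(rng.betavariate(*c_post[i]), 1e-9)
                  for i in range(k)]
        i = max(range(k), key=lambda j: scores[j])
        r, c = arms[i][0](), arms[i][1]()
        r_post[i][0 if r else 1] += 1       # Bayesian update of reward posterior
        c_post[i][0 if c else 1] += 1       # Bayesian update of cost posterior
        total_reward += r
        budget -= c
    return total_reward
```

As the posteriors concentrate, arms with poor reward/cost ratios are sampled less and less often, so exploration fades out automatically without any tuned exploration bonus.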



arXiv: Computation and Language | 2018

Achieving Human Parity on Automatic Chinese to English News Translation

Hany Hassan; Anthony Aue; Chang Chen; Vishal Chowdhary; Jonathan Clark; Christian Federmann; Xuedong Huang; Marcin Junczys-Dowmunt; William D. Lewis; Mu Li; Shujie Liu; Tie-Yan Liu; Renqian Luo; Arul Menezes; Tao Qin; Frank Seide; Xu Tan; Fei Tian; Lijun Wu; Shuangzhi Wu; Yingce Xia; Dongdong Zhang; Zhirui Zhang; Ming Zhou



Asian Conference on Machine Learning | 2015

Budgeted Bandit Problems with Continuous Random Costs

Yingce Xia; Wenkui Ding; Xudong Zhang; Nenghai Yu; Tao Qin



International Conference on Machine Learning | 2017

Dual Supervised Learning

Yingce Xia; Tao Qin; Wei Chen; Jiang Bian; Nenghai Yu; Tie-Yan Liu



International Joint Conference on Artificial Intelligence | 2016

Budgeted multi-armed bandits with multiple plays

Yingce Xia; Tao Qin; Weidong Ma; Nenghai Yu; Tie-Yan Liu


Collaboration


Dive into Yingce Xia's collaborations.

Top Co-Authors

Nenghai Yu

University of Science and Technology of China


Fei Tian

University of Science and Technology of China


Haifang Li

Chinese Academy of Sciences


Lijun Wu

Sun Yat-sen University


Jiang Bian

Georgia Institute of Technology


Jianxin Lin

University of Science and Technology of China
