
Publication


Featured research published by Zhongqiang Huang.


Machine Learning for Multimodal Interaction | 2005

VACE multimodal meeting corpus

Lei Chen; R. Travis Rose; Ying Qiao; Irene Kimbara; Fey Parrill; Haleema Welji; Tony X. Han; Jilin Tu; Zhongqiang Huang; Mary P. Harper; Francis K. H. Quek; Yingen Xiong; David McNeill; Ronald F. Tuttle; Thomas S. Huang

In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.


Empirical Methods in Natural Language Processing | 2009

Self-Training PCFG Grammars with Latent Annotations Across Languages

Zhongqiang Huang; Mary P. Harper

We investigate the effectiveness of self-training PCFG grammars with latent annotations (PCFG-LA) for parsing languages with different amounts of labeled training data. Compared to Charniak's lexicalized parser, the PCFG-LA parser was more effectively adapted to a language for which parsing has been less well developed (i.e., Chinese) and benefited more from self-training. We show for the first time that self-training is able to significantly improve the performance of the PCFG-LA parser, a single generative parser, on both small and large amounts of labeled training data. Our approach achieves state-of-the-art parsing accuracies for a single parser on both English (91.5%) and Chinese (85.2%).
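
The self-training recipe the abstract describes is simple enough to sketch. Below is a minimal Python sketch of the loop: train a seed PCFG-LA parser on gold trees, parse unlabeled text with it, and retrain on the union. The `train` and `parse` callables are hypothetical stand-ins for a real parser toolkit, not the paper's actual interface.

```python
# Minimal sketch of the self-training loop described above. The `train` and
# `parse` callables are hypothetical stand-ins for a PCFG-LA toolkit's
# training and decoding routines, not the paper's actual interface.

def self_train(train, parse, gold_trees, unlabeled_sents):
    """Train a seed parser, label unlabeled text with it, then retrain."""
    # Step 1: train a seed grammar on the manually annotated treebank.
    seed_parser = train(gold_trees)

    # Step 2: parse the unlabeled sentences; the 1-best trees become
    # automatically labeled training data.
    auto_trees = [parse(seed_parser, sent) for sent in unlabeled_sents]

    # Step 3: retrain a single generative parser on gold plus self-labeled
    # trees, which is what "self-training" means here.
    return train(gold_trees + auto_trees)
```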


Machine Learning for Multimodal Interaction | 2006

A multimodal analysis of floor control in meetings

Lei Chen; Mary P. Harper; Amy Franklin; R. Travis Rose; Irene Kimbara; Zhongqiang Huang; Francis K. H. Quek

The participant in a human-to-human communication who controls the floor bears the burden of moving the communication process along. Change in control of the floor can happen through a number of mechanisms, including interruptions, delegation of the floor, and so on. This paper investigates floor control in multiparty meetings that are both audio- and video-taped; hence, we are able to analyze patterns not only of speech (e.g., discourse markers) but also of visual cues (e.g., eye gaze exchanges) that are commonly involved in floor control changes. Identifying who has control of the floor provides an important focus for information retrieval and summarization of meetings. Additionally, without understanding who has control of the floor, it is impossible to identify important events such as challenges for the floor. In this paper, we analyze multimodal cues related to floor control in two different meetings involving five participants each.


North American Chapter of the Association for Computational Linguistics | 2009

Improving A Simple Bigram HMM Part-of-Speech Tagger by Latent Annotation and Self-Training

Zhongqiang Huang; Vladimir Eidelman; Mary P. Harper

In this paper, we describe and evaluate a bigram part-of-speech (POS) tagger that uses latent annotations and then investigate using additional genre-matched unlabeled data for self-training the tagger. The use of latent annotations substantially improves the performance of a baseline HMM bigram tagger, outperforming a trigram HMM tagger with sophisticated smoothing. The performance of the latent tagger is further enhanced by self-training with a large set of unlabeled data, even in situations where standard bigram or trigram taggers do not benefit from self-training when trained on greater amounts of labeled training data. Our best model obtains a state-of-the-art Chinese tagging accuracy of 94.78% when evaluated on a representative test set of the Penn Chinese Treebank 6.0.
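
To make the latent-annotation idea concrete, here is a small, self-contained Viterbi decoder for a bigram HMM whose tag set has been split into subtags; decoding runs over the split tags and the best path is projected back to coarse tags. The dict-based probability tables and the `TAG-k` naming convention are illustrative assumptions, not the paper's implementation.

```python
import math

# Minimal sketch of Viterbi decoding for a bigram HMM with latent annotations:
# each coarse tag is split into subtags (e.g., NN -> NN-0, NN-1), decoding runs
# over the split tag set, and the best path is projected back to coarse tags.
# The dict-based tables and "TAG-k" naming are illustrative assumptions.

def viterbi_latent(words, subtags, start, trans, emit):
    """start[t], trans[(s, t)], and emit[(t, w)] are log-probabilities."""
    best = {t: start.get(t, -math.inf) + emit.get((t, words[0]), -math.inf)
            for t in subtags}
    backptrs = []
    for w in words[1:]:
        prev, best, ptr = best, {}, {}
        for t in subtags:
            # Best predecessor subtag for t under the bigram transition model.
            score, s = max((p + trans.get((s, t), -math.inf), s)
                           for s, p in prev.items())
            best[t] = score + emit.get((t, w), -math.inf)
            ptr[t] = s
        backptrs.append(ptr)
    # Recover the best subtag sequence, then strip the latent annotation.
    tag = max(best, key=best.get)
    path = [tag]
    for ptr in reversed(backptrs):
        tag = ptr[tag]
        path.append(tag)
    return [t.rsplit("-", 1)[0] for t in reversed(path)]  # NN-1 -> NN
```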


Spoken Language Technology Workshop | 2006

Impact of Automatic Comma Prediction on POS/Name Tagging of Speech

Dustin Hillard; Zhongqiang Huang; Heng Ji; Ralph Grishman; Dilek Hakkani-Tür; Mary P. Harper; Mari Ostendorf; Wen Wang

This work looks at the impact of automatically predicted commas on part-of-speech (POS) and name tagging of speech recognition transcripts of Mandarin broadcast news. There is a significant gain in both POS and name tagging accuracy due to using automatically predicted commas over sentence boundary prediction alone. One difference between Mandarin and English is that there are two types of commas, and experiments here show that, while they can be reliably distinguished in automatic prediction, the distinction does not give a clear benefit for POS or name tagging.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Semi-Supervised Learning for Part-of-Speech Tagging of Mandarin Transcribed Speech

Wen Wang; Zhongqiang Huang; Mary P. Harper

In this paper, we investigate bootstrapping part-of-speech (POS) taggers for Mandarin broadcast news (BN) transcripts using co-training, by iteratively retraining two competitive POS taggers from a small set of labeled training data and a large set of unlabeled data. We compare co-training with self-training, and our results show that the performance using co-training is significantly better than that from self-training and that these semi-supervised learning methods significantly improve tagging accuracy over training only on the small labeled seed corpus. We also investigate a variety of example selection approaches for co-training and find that the computationally expensive, agreement-based selection approach and a more efficient selection approach based on maximizing training utility produce comparable tagging performance from the resulting POS taggers. By applying co-training, we are able to build effective POS taggers for Mandarin transcribed speech with tagging accuracy comparable to that obtained on newswire text.
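
As a rough illustration, a co-training loop with agreement-based selection might look like the sketch below. The tagger objects and their `train()`/`tag()` methods are hypothetical placeholders, not the taggers used in the paper, and a real system would also cap pool growth and handle ties.

```python
# Rough sketch of co-training two POS taggers with agreement-based example
# selection. The tagger objects and their train()/tag() methods are
# hypothetical placeholders, not the taggers used in the paper.

def co_train(tagger_a, tagger_b, labeled, unlabeled, rounds=10, batch=100):
    pool_a, pool_b = list(labeled), list(labeled)
    for _ in range(rounds):
        tagger_a.train(pool_a)
        tagger_b.train(pool_b)
        # Agreement-based selection: prefer sentences the two taggers tag
        # alike, using agreement as a proxy for labeling quality.
        scored = []
        for sent in unlabeled:
            tags_a, tags_b = tagger_a.tag(sent), tagger_b.tag(sent)
            agree = sum(a == b for a, b in zip(tags_a, tags_b)) / len(sent)
            scored.append((agree, sent, tags_a, tags_b))
        scored.sort(key=lambda x: x[0], reverse=True)
        picked, unlabeled = scored[:batch], [s for _, s, _, _ in scored[batch:]]
        # Each tagger is retrained on examples labeled by the *other* tagger.
        pool_a += [(sent, tags_b) for _, sent, _, tags_b in picked]
        pool_b += [(sent, tags_a) for _, sent, tags_a, _ in picked]
    return tagger_a, tagger_b
```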


International Conference on Multimodal Interfaces | 2006

Using maximum entropy (ME) model to incorporate gesture cues for SU detection

Lei Chen; Mary P. Harper; Zhongqiang Huang

Accurate identification of sentence units (SUs) in spontaneous speech has been found to improve the accuracy of speech recognition, as well as downstream applications such as parsing. In recent multimodal investigations, gestural features were utilized, in addition to lexical and prosodic cues from the speech channel, for detecting SUs in conversational interactions using a hidden Markov model (HMM) approach. Although this approach is computationally efficient and provides a convenient way to modularize the knowledge sources, it has two drawbacks for our SU task. First, standard HMM training methods maximize the joint probability of observations and hidden events, as opposed to the posterior probability of a hidden event given observations, a criterion more closely related to SU classification error. A second challenge for integrating gestural features is that their absence sanctions neither SU events nor non-events; it is only the co-timing of gestures with the speech channel that should impact our model. To address these problems, a Maximum Entropy (ME) model is used to combine multimodal cues for SU estimation. Experiments carried out on VACE multi-party meetings confirm that the ME modeling approach provides a solid framework for multimodal integration.
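
Since an ME model of this kind is multinomial logistic regression over arbitrary feature functions, a toy version is easy to sketch with scikit-learn; the feature names below are illustrative assumptions, not the paper's feature set. Note how a gesture feature is simply left unset at boundaries with no co-timed gesture, so its absence contributes nothing to the decision, which is the second drawback discussed above.

```python
# Toy sketch of an ME (maximum-entropy) classifier for SU detection, using
# scikit-learn's logistic regression as the ME model. The feature names are
# illustrative placeholders, not the feature set used in the paper.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One feature dict per word boundary, mixing lexical, prosodic, and gesture
# cues. A gesture feature is included only when a gesture co-occurs with the
# boundary, so its absence has no effect on the model's decision.
train_X = [
    {"word": "okay", "pause_sec": 0.8, "gesture_hold": True},
    {"word": "and", "pause_sec": 0.1},  # no co-timed gesture at this boundary
]
train_y = ["SU", "non-SU"]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_X, train_y)
print(model.predict([{"word": "right", "pause_sec": 0.6, "gesture_hold": True}]))
```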


Empirical Methods in Natural Language Processing | 2007

Mandarin Part-of-Speech Tagging and Discriminative Reranking

Zhongqiang Huang; Mary P. Harper; Wen Wang


Empirical Methods in Natural Language Processing | 2010

Self-Training with Products of Latent Variable Grammars

Zhongqiang Huang; Mary P. Harper; Slav Petrov


Empirical Methods in Natural Language Processing | 2010

Soft Syntactic Constraints for Hierarchical Phrase-Based Translation Using Latent Syntactic Distributions

Zhongqiang Huang; Martin Cmejrek; Bowen Zhou

Collaboration


Dive into Zhongqiang Huang's collaboration.

Top Co-Authors

Amy Franklin
University of Texas Health Science Center at Houston

Dustin Hillard
University of Washington