Publications


Featured research published by Peter A. Chew.


Proceedings of the Workshop on Multilingual Language Resources and Interoperability | 2006

Evaluation of the Bible as a Resource for Cross-Language Information Retrieval

Peter A. Chew; Steve Verzi; Travis L. Bauer; Jonathan T. McClain

An area of recent interest in cross-language information retrieval (CLIR) is the question of which parallel corpora might be best suited to tasks in CLIR, or even to what extent parallel corpora can be obtained or are necessary. One proposal, which in our opinion has been somewhat overlooked, is that the Bible holds a unique value as a multilingual corpus, being (among other things) widely available in a broad range of languages and having high coverage of modern-day vocabulary. In this paper, we test empirically, through a series of validation tests on various information retrieval tasks, whether this claim is justified. Our results indicate that our methodology may significantly outperform others recently proposed.
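As a toy illustration of why the Bible is attractive here: verse references give a free document-level alignment across translations, so each verse can serve as one multilingual "document" in a term-by-document matrix for CLIR. The sketch below uses two made-up verse entries as stand-in data; it is not the paper's actual pipeline.

```python
from collections import Counter

# Toy stand-in data: verse IDs align translations across languages.
verses = {
    "GEN 1:1": {
        "en": "in the beginning god created the heaven and the earth",
        "es": "en el principio creo dios los cielos y la tierra",
    },
    "GEN 1:2": {
        "en": "and the earth was without form and void",
        "es": "y la tierra estaba desordenada y vacia",
    },
}

def parallel_bags(verses):
    # One bag of words per verse, pooling all languages; prefixing
    # tokens with a language tag keeps the vocabularies distinct.
    for ref, texts in verses.items():
        bag = Counter()
        for lang, text in texts.items():
            bag.update(f"{lang}:{tok}" for tok in text.split())
        yield ref, bag

# Each (verse, bag) pair becomes one column of a multilingual
# term-by-document matrix for downstream methods such as LSA.
for ref, bag in parallel_bags(verses):
    print(ref, dict(bag))
```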


International Conference on Computational Linguistics | 2008

Enhancing Multilingual Latent Semantic Analysis with Term Alignment Information

Brett W. Bader; Peter A. Chew

Latent Semantic Analysis (LSA) is based on the Singular Value Decomposition (SVD) of a term-by-document matrix for identifying relationships among terms and documents from co-occurrence patterns. Among the multiple ways of computing the SVD of a rectangular matrix X, one approach is to compute the eigenvalue decomposition (EVD) of a square 2 × 2 composite matrix consisting of four blocks, with X and Xᵀ in the off-diagonal blocks and zero matrices in the diagonal blocks. We point out that significant value can be added to LSA by filling in some of the values in the diagonal blocks (corresponding to explicit term-to-term or document-to-document associations) and computing a term-by-concept matrix from the EVD. For the case of multilingual LSA, we incorporate information on cross-language term alignments of the same sort used in Statistical Machine Translation (SMT). Since all elements of the proposed EVD-based approach can rely entirely on lexical statistics, hardly any price is paid for the improved empirical results. In particular, the approach, like LSA or SMT, can still be generalized to virtually any language(s); computation of the EVD takes similar resources to that of the SVD, since all the blocks are sparse; and the results of the EVD are just as economical as those of the SVD.
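A minimal NumPy sketch of the composite-matrix construction described above, with random toy data standing in for real lexical statistics; the term-term block A is a hypothetical stand-in for alignment-derived associations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_docs, k = 8, 5, 3
X = rng.random((n_terms, n_docs))   # term-by-document matrix (toy data)
A = rng.random((n_terms, n_terms))  # term-term associations (toy data)
A = 0.5 * (A + A.T)                 # association block must be symmetric

# Composite matrix: X and X.T in the off-diagonal blocks; the
# term-term block A fills one diagonal block (plain LSA leaves both
# diagonal blocks zero).
B = np.block([[A, X],
              [X.T, np.zeros((n_docs, n_docs))]])

# B is symmetric, so eigh applies; with A = 0 this EVD recovers the
# SVD of X (eigenvalues come in plus/minus singular-value pairs).
evals, evecs = np.linalg.eigh(B)
top = np.argsort(-np.abs(evals))[:k]      # k strongest concepts
term_by_concept = evecs[:n_terms, top]    # term rows of the eigenvectors
print(term_by_concept.shape)              # (8, 3)
```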


Natural Language Engineering | 2011

An information-theoretic, vector-space-model approach to cross-language information retrieval

Peter A. Chew; Brett W. Bader; Stephen Helmreich; Ahmed Abdelali; Stephen J. Verzi

In this article, we demonstrate several novel ways in which insights from information theory (IT) and computational linguistics (CL) can be woven into a vector-space-model (VSM) approach to information retrieval (IR). Our proposals focus, essentially, on three areas: pre-processing (morphological analysis), term weighting, and alternative geometrical models to the widely used term-by-document matrix. The latter include (1) PARAFAC2 decomposition of a term-by-document-by-language tensor, and (2) eigenvalue decomposition of a term-by-term matrix (inspired by Statistical Machine Translation). We evaluate all proposals, comparing them to a 'standard' approach based on Latent Semantic Analysis, on a multilingual document clustering task. The evidence suggests that proper consideration of IT within IR is indeed called for: in all cases, our best results are achieved using the information-theoretic variations upon the standard approach. Furthermore, we show that different information-theoretic options can be combined for still better results. A key function of language is to encode and convey information, and contributions of IT to the field of CL can be traced back a number of decades. We think that our proposals help bring IR and CL more into line with one another. In our conclusion, we suggest that the fact that our proposals yield empirical improvements is not coincidental given that they increase the theoretical transparency of VSM approaches to IR; on the contrary, they help shed light on why aspects of these approaches work as they do.
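One concrete instance of the information-theoretic term weighting this article motivates is the log-entropy scheme familiar from the LSA literature; the article's exact weighting may differ, so the sketch below is illustrative only (it assumes a raw count matrix with more than one document).

```python
import numpy as np

def log_entropy(X):
    """X: term-by-document raw count matrix (2-D NumPy array)."""
    row_sums = X.sum(axis=1, keepdims=True)
    p = np.divide(X, row_sums, out=np.zeros_like(X, dtype=float),
                  where=row_sums > 0)
    # Global weight: 1 + (sum_j p_ij log p_ij) / log(n_docs).
    # Near 1 for terms concentrated in few documents (informative),
    # near 0 for terms spread evenly across the collection.
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(X.shape[1])
    return np.log1p(X) * g[:, None]   # local log weight times global entropy

# Toy demo: the second term is spread uniformly and gets weight ~0.
X = np.array([[2, 0, 1],
              [1, 1, 1]])
print(np.round(log_entropy(X), 3))
```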


International Conference on Computational Linguistics | 2008

Latent Morpho-Semantic Analysis: Multilingual Information Retrieval with Character N-Grams and Mutual Information

Peter A. Chew; Brett W. Bader; Ahmed Abdelali

We describe an entirely statistics-based, unsupervised, and language-independent approach to multilingual information retrieval, which we call Latent Morpho-Semantic Analysis (LMSA). LMSA overcomes some of the shortcomings of related previous approaches such as Latent Semantic Analysis (LSA). LMSA has an important theoretical advantage over LSA: it combines well-known techniques in a novel way to break the terms of LSA down into units which correspond more closely to morphemes. Thus, it has a particular appeal for use with morphologically complex languages such as Arabic. We show through empirical results that the theoretical advantages of LMSA can translate into significant gains in precision in multilingual information retrieval tests. These gains are not matched either when a standard stemmer is used with LSA, or when terms are indiscriminately broken down into n-grams.
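The abstract does not spell out the segmentation procedure, so the following is only an illustrative stand-in: a classic unsupervised heuristic that splits words where the pointwise mutual information between adjacent characters is low, yielding morpheme-like units of the kind LMSA indexes in place of whole terms. Corpus and threshold below are toy assumptions.

```python
from collections import Counter
from math import log

def pmi_splits(words, threshold=0.0):
    # Character unigram and bigram statistics over the word list.
    uni = Counter(c for w in words for c in w)
    bi = Counter(w[i:i + 2] for w in words for i in range(len(w) - 1))
    n_uni = sum(uni.values())
    n_bi = sum(bi.values())

    def pmi(a, b):
        # Pointwise mutual information of the adjacent pair (a, b).
        return log((bi[a + b] / n_bi) / ((uni[a] / n_uni) * (uni[b] / n_uni)))

    segmented = {}
    for w in words:
        parts, start = [], 0
        for i in range(len(w) - 1):
            if pmi(w[i], w[i + 1]) < threshold:   # weak glue: split here
                parts.append(w[start:i + 1])
                start = i + 1
        parts.append(w[start:])
        segmented[w] = parts
    return segmented

# Toy demo: shared stems and suffixes tend to survive as units.
print(pmi_splits(["walking", "walked", "talking", "talked"]))
```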


Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics | 2009

Using DEDICOM for Completely Unsupervised Part-of-Speech Tagging

Peter A. Chew; Brett W. Bader; Alla Rozovskaya

A standard and widespread approach to part-of-speech tagging is based on Hidden Markov Models (HMMs). An alternative approach, pioneered by Schütze (1993), induces parts of speech from scratch using singular value decomposition (SVD). We introduce DEDICOM as an alternative to SVD for part-of-speech induction. DEDICOM retains the advantages of SVD in that it is completely unsupervised: no prior knowledge is required to induce either the tagset or the associations of types with tags. However, unlike SVD, it is also fully compatible with the HMM framework, in that it can be used to estimate the emission- and transition-probability matrices which can then be used as input to an HMM. We apply the DEDICOM method to the CoNLL corpus (CoNLL 2000), compare the output of DEDICOM to the part-of-speech tags given in the corpus, and find that the correlation (almost 0.5) is quite high. Using DEDICOM, we also estimate part-of-speech ambiguity for each type, and find that these estimates correlate highly with part-of-speech ambiguity as measured in the original corpus (around 0.88). Finally, we show how the output of DEDICOM can be evaluated and compared against the more familiar output of supervised HMM-based tagging.
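For orientation, two-way DEDICOM factors a square matrix X as X ≈ A R Aᵀ, with the rows of A loading types onto induced tags and the asymmetric R acting like a tag-to-tag transition matrix. Below is a minimal, hedged alternating-least-squares sketch on toy bigram counts; the paper's actual fitting algorithm may differ.

```python
import numpy as np

def dedicom(X, k, n_iter=200, seed=0):
    # Fit X ~ A R A.T by a crude alternating loop. For POS
    # induction, X would be a type-by-type bigram co-occurrence
    # matrix ("type j follows type i").
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], k))
    for _ in range(n_iter):
        Ap = np.linalg.pinv(A)
        R = Ap @ X @ Ap.T                 # exact least-squares update for R
        # Heuristic update for A: average the two one-sided fits of
        # X ~ A (R A.T) and X.T ~ A (R.T A.T).
        A = 0.5 * (X @ np.linalg.pinv(R @ A.T)
                   + X.T @ np.linalg.pinv(R.T @ A.T))
    return A, R

# Toy asymmetric bigram matrix.
X = np.array([[0., 4., 1.],
              [3., 0., 5.],
              [2., 6., 0.]])
A, R = dedicom(X, k=2)
print(np.round(A @ R @ A.T, 2))           # compare the reconstruction with X
```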


Knowledge Discovery and Data Mining | 2007

Cross-language information retrieval using PARAFAC2

Peter A. Chew; Brett W. Bader; Tamara G. Kolda; Ahmed Abdelali


Meeting of the Association for Computational Linguistics | 2007

Benefits of the 'Massively Parallel Rosetta Stone': Cross-Language Information Retrieval with over 30 Languages

Peter A. Chew; Ahmed Abdelali


Archive | 2009

Technique for Information Retrieval Using Enhanced Latent Semantic Analysis

Peter A. Chew; Brett W. Bader


International Joint Conference on Natural Language Processing | 2008

The Effects of Language Relatedness on Multilingual Information Retrieval: A Case Study With Indo-European and Semitic Languages

Peter A. Chew; Ahmed Abdelali


International Conference on Data Mining | 2011

Multilingual Sentiment Analysis Using Latent Semantic Indexing and Machine Learning

Brett W. Bader; W. Philip Kegelmeyer; Peter A. Chew

Collaboration


Dive into Peter A. Chew's collaborations.

Top Co-Authors

Brett W. Bader, Sandia National Laboratories
Ahmed Abdelali, New Mexico State University
Stephen Helmreich, New Mexico State University
Stephen J. Verzi, Sandia National Laboratories
Jonathan T. McClain, Sandia National Laboratories
Steve Verzi, Sandia National Laboratories
Tamara G. Kolda, Sandia National Laboratories
Travis L. Bauer, Sandia National Laboratories
W. Philip Kegelmeyer, Sandia National Laboratories