

Publication


Featured research published by Peixian Chen.


European Conference on Machine Learning | 2014

Hierarchical latent tree analysis for topic detection

Tengfei Liu; Nevin Lianwen Zhang; Peixian Chen

In the LDA approach to topic detection, a topic is determined by identifying the words that are used with high frequency when writing about the topic. However, high-frequency words in one topic may also be used with high frequency in other topics, and thus may not be the best words to characterize the topic. In this paper, we propose a new method for topic detection, where a topic is determined by identifying words that appear with high frequency in the topic and low frequency in other topics. We model patterns of word co-occurrence and co-occurrences of those patterns using a hierarchy of discrete latent variables. The states of the latent variables represent clusters of documents and are interpreted as topics. The words that best distinguish a cluster from the other clusters are selected to characterize the topic. Empirical results show that the new method yields topics with clearer thematic characterizations than the alternative approaches.
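As an illustrative sketch only (not the paper's latent tree algorithm), the word-selection idea, high frequency inside a topic and low frequency elsewhere, can be mimicked with a simple frequency-contrast score over document clusters. All names below are hypothetical:

```python
from collections import Counter

def characterizing_words(clusters, top_k=2):
    """For each document cluster, rank words by how much more often they
    occur inside the cluster than in the other clusters combined.
    `clusters` is a list of clusters; each cluster is a list of documents;
    each document is a list of word tokens."""
    totals = Counter()
    per_cluster = []
    for docs in clusters:
        counts = Counter(w for doc in docs for w in doc)
        per_cluster.append(counts)
        totals.update(counts)
    grand_total = sum(totals.values())
    result = []
    for counts in per_cluster:
        n_in = sum(counts.values())
        n_out = max(grand_total - n_in, 1)
        def score(w):
            freq_in = counts[w] / n_in                  # frequency inside the cluster
            freq_out = (totals[w] - counts[w]) / n_out  # frequency in the other clusters
            return freq_in - freq_out                   # high inside, low outside
        result.append(sorted(counts, key=score, reverse=True)[:top_k])
    return result
```

Words that are common everywhere (stopword-like) score near zero and drop out, which is the contrast the abstract draws with plain high-frequency word lists.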


Machine Learning | 2015

Greedy learning of latent tree models for multidimensional clustering

Tengfei Liu; Nevin Lianwen Zhang; Peixian Chen; April Hua Liu; Leonard K. M. Poon; Yi Wang

Real-world data are often multifaceted and can be meaningfully clustered in more than one way. There is a growing interest in obtaining multiple partitions of data. In previous work, we learned from data a latent tree model (LTM) that contains multiple latent variables (Chen et al. 2012). Each latent variable represents a soft partition of the data, and hence multiple partitions are obtained. The LTM approach can, through model selection, automatically determine how many partitions there should be, what attributes define each partition, and how many clusters there should be for each partition. It has been shown to yield rich and meaningful clustering results. Our previous algorithm, EAST, for learning LTMs is only efficient enough to handle data sets with dozens of attributes. This paper proposes an algorithm called BI that can deal with data sets with hundreds of attributes. We empirically compare BI with EAST and other more efficient LTM learning algorithms, and show that BI outperforms its competitors on data sets with hundreds of attributes. In terms of clustering results, BI compares favorably with alternative methods that are not based on LTMs.
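A toy sketch of the "multiple partitions" idea: if each latent variable is associated with a subset of attributes (a facet), the same rows can be grouped once per facet. In the actual BI algorithm both the facets and the clusters are learned from data; here they are given, and grouping is by exact attribute values rather than soft probabilistic assignment:

```python
from collections import defaultdict

def facet_partitions(rows, facets):
    """Partition the same data set once per facet (attribute subset),
    mimicking how each latent variable in an LTM induces its own clustering.
    `rows` is a list of dicts; `facets` is a list of attribute-name tuples.
    Returns one partition (list of row-index groups) per facet."""
    partitions = []
    for facet in facets:
        groups = defaultdict(list)
        for i, row in enumerate(rows):
            key = tuple(row[a] for a in facet)  # rows agreeing on the facet cluster together
            groups[key].append(i)
        partitions.append(list(groups.values()))
    return partitions
```

The point of the example is that the two partitions below are genuinely different clusterings of the same three rows, one per facet.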


Artificial Intelligence | 2017

Latent tree models for hierarchical topic detection

Peixian Chen; Nevin Lianwen Zhang; Tengfei Liu; Leonard K. M. Poon; Zhourong Chen; Farhan Khawar

We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies.
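A much-simplified sketch of the two-level structure described above. Real HLTMs use probabilistic latent variables whose structure and parameters are learned from data; here the word groups (patterns) and pattern groups (themes) are hard-coded boolean indicators purely for illustration:

```python
def pattern_activations(docs, word_groups, theme_groups):
    """Toy two-level hierarchy over binary word-presence variables:
    a low-level 'pattern' fires when all of its words are present in a
    document; a higher-level 'theme' fires when any of its patterns fires.
    `theme_groups` holds indices into `word_groups`."""
    out = []
    for doc in docs:
        words = set(doc)  # observed binary variables: word present / absent
        patterns = [all(w in words for w in group) for group in word_groups]
        themes = [any(patterns[i] for i in tg) for tg in theme_groups]
        out.append((patterns, themes))
    return out
```

This mirrors the abstract's layering: bottom-level variables track word presence, the next level tracks word co-occurrence patterns, and higher levels track co-occurrence of the patterns below.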


Computer Vision and Pattern Recognition | 2015

Bayesian adaptive matrix factorization with automatic model selection

Peixian Chen; Naiyan Wang; Nevin Lianwen Zhang; Dit Yan Yeung

Low-rank matrix factorization has long been recognized as a fundamental problem in many computer vision applications. Nevertheless, the reliability of existing matrix factorization methods is often hard to guarantee due to challenges brought by such model selection issues as selecting the noise model and determining the model capacity. We address these two issues simultaneously in this paper by proposing a robust non-parametric Bayesian adaptive matrix factorization (AMF) model. AMF introduces a new noise model built on the Dirichlet process Gaussian mixture model (DP-GMM), taking advantage of its high flexibility in component number selection and its capability of fitting a wide range of unknown noise. AMF also imposes an automatic relevance determination (ARD) prior on the low-rank factor matrices so that the rank can be determined automatically without the need for enforcing any hard constraint. An efficient variational method is then devised for model inference. We compare AMF with state-of-the-art matrix factorization methods on data sets ranging from synthetic data to real-world application data. The results show that AMF consistently achieves better or comparable performance.
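The ARD prior lets the effective rank be determined by the data rather than fixed in advance. A crude deterministic stand-in for that behavior (not AMF's variational inference) is to factorize and keep only the components whose singular values are non-negligible:

```python
import numpy as np

def adaptive_rank_factorization(X, tol=1e-8):
    """Illustrative stand-in for ARD-style rank selection: factor X into
    low-rank factors, keeping only components whose singular value exceeds
    a small fraction of the largest one, so the rank is data-driven."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))      # effective rank chosen from the spectrum
    return U[:, :r] * s[:r], Vt[:r], r   # left factor, right factor, selected rank
```

In AMF the pruning is done softly by the ARD prior during inference, and the noise model is the DP-GMM rather than the implicit Gaussian assumption behind plain SVD; this sketch only shows the "rank chosen automatically" aspect.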


Probabilistic Graphical Models | 2014

A Study of Recently Discovered Equalities about Latent Tree Models Using Inverse Edges

Nevin Lianwen Zhang; Xiaofei Wang; Peixian Chen

Interesting equalities have recently been discovered about latent tree models. They relate distributions of two or three observed variables with joint distributions of four or more observed variables, and with model parameters that depend on latent variables. The equations are derived by using matrix and tensor decompositions. This paper sheds new light on the equalities by offering an alternative derivation in terms of variable elimination and structure manipulations. The key technique is the introduction of inverse edges.


Web-Age Information Management | 2017

Topic Browsing System for Research Papers Based on Hierarchical Latent Tree Analysis

Leonard K. M. Poon; Chun Fai Leung; Peixian Chen; Nevin Lianwen Zhang

New academic papers appear rapidly in the literature nowadays. This poses a challenge for researchers who are trying to keep up with a given field, especially those who are new to a field and may not know where to start. To address this problem, we have developed a topic browsing system for research papers where the papers have been automatically categorized by a probabilistic topic model. Rather than using Latent Dirichlet Allocation (LDA) for topic modeling, we use a recently proposed method called hierarchical latent tree analysis, which has been shown to perform better than some state-of-the-art LDA-based methods. The resulting topic model contains a hierarchy of topics so that users can browse topics at different levels. The topic model contains a manageable number of general topics at the top level and allows thousands of fine-grained topics at the bottom level.


National Conference on Artificial Intelligence | 2016

Progressive EM for latent tree models and hierarchical topic detection

Peixian Chen; Nevin Lianwen Zhang; Leonard K. M. Poon; Zhourong Chen


National Conference on Artificial Intelligence | 2016

Sparse Boltzmann Machines with Structure Learning as Applied to Text Analysis

Zhourong Chen; Nevin Lianwen Zhang; Dit Yan Yeung; Peixian Chen


Chinese Medicine | 2016

Identification of Chinese medicine syndromes in persistent insomnia associated with major depressive disorder: a latent tree analysis

Wing-Fai Yeung; Ka-Fai Chung; Nevin Lianwen Zhang; Shi-Ping Zhang; Kam-Ping Yung; Peixian Chen; Yan-Yee Ho


Archive | 2014

An Evidence-Based Approach to Patient Classification in Traditional Chinese Medicine based on Latent Tree Analysis

Nevin Lianwen Zhang; Chen Fu; Tengfei Liu; Kin Man Poon; Peixian Chen; Bao Xin Chen; Yunling Zhang

Collaboration


Dive into Peixian Chen's collaborations.

Top Co-Authors

Nevin Lianwen Zhang
Hong Kong University of Science and Technology

Tengfei Liu
Hong Kong University of Science and Technology

Zhourong Chen
Hong Kong University of Science and Technology

Dit Yan Yeung
Hong Kong University of Science and Technology

April Hua Liu
Hong Kong University of Science and Technology

Chun Fai Leung
Hong Kong University of Science and Technology

Farhan Khawar
Hong Kong University of Science and Technology

Ka-Fai Chung
University of Hong Kong