Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Zhiting Hu is active.

Publication


Featured research published by Zhiting Hu.


ACM Transactions on Knowledge Discovery from Data | 2015

Modeling Location-Based User Rating Profiles for Personalized Recommendation

Hongzhi Yin; Bin Cui; Ling Chen; Zhiting Hu; Chengqi Zhang

This article proposes LA-LDA, a location-aware probabilistic generative model that exploits location-based ratings to model user profiles and produce recommendations. Most existing recommendation models do not consider the spatial information of users or items; LA-LDA, however, supports three classes of location-based ratings, namely spatial user ratings for nonspatial items, nonspatial user ratings for spatial items, and spatial user ratings for spatial items. LA-LDA consists of two components, ULA-LDA and ILA-LDA, which are designed to take into account user and item location information, respectively. The component ULA-LDA explicitly incorporates and quantifies the influence of local public preferences to produce recommendations by considering user home locations, whereas the component ILA-LDA recommends items that are closer in both taste and travel distance to the querying users by capturing item co-occurrence patterns, as well as item location co-occurrence patterns. The two components of LA-LDA can be applied either separately or together, depending on the available types of location-based ratings. To demonstrate the applicability and flexibility of the LA-LDA model, we deploy it in both top-k recommendation and cold start recommendation scenarios. Experimental evidence on large-scale real-world data, including data from Gowalla (a location-based social network), DoubanEvent (an event-based social network), and MovieLens (a movie recommendation system), reveals that LA-LDA models user profiles more accurately, outperforming existing recommendation models for top-k recommendation and the cold start problem.
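As a rough illustration of the ULA-LDA component described above (the data structures and function name here are hypothetical, not the paper's notation), the probability that a user adopts an item can be viewed as marginalizing topic-item distributions under the topic mixture of the user's home region:

```python
def rating_probability(user_region, item, region_topic, topic_item):
    # ULA-LDA-style sketch: the score for a user adopting an item marginalizes
    # over latent topics whose mixture weights come from the local public
    # preference of the user's home region.
    weights = region_topic[user_region]             # {topic: weight} for the region
    return sum(w * topic_item[t].get(item, 0.0)     # topic-item distributions
               for t, w in weights.items())

# Toy example: a region whose public preference leans toward "food" topics
region_topic = {"region_1": {"food": 0.7, "sports": 0.3}}
topic_item = {"food": {"restaurant_A": 0.5}, "sports": {"gym_B": 0.4}}
score = rating_probability("region_1", "restaurant_A", region_topic, topic_item)
```

In the full model these distributions would be inferred jointly from the location-based ratings rather than supplied by hand.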


Meeting of the Association for Computational Linguistics | 2016

Harnessing Deep Neural Networks with Logic Rules

Zhiting Hu; Xuezhe Ma; Zhengzhong Liu; Eduard H. Hovy; Eric P. Xing

Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce the uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve results that are state-of-the-art or comparable to previous best-performing systems.
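The distillation idea above can be sketched concretely. In this minimal, hypothetical sketch (names such as `rule_teacher` and the regularization strength `C` are illustrative, not the paper's API), a teacher distribution is built by projecting the student's predictions toward rule-consistent labels, and the student's loss mixes the ground truth with imitation of the teacher:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def rule_teacher(student_probs, rule_scores, C=1.0):
    # Project the student distribution toward the rule-consistent region:
    # q(y) ∝ p(y) * exp(C * rule_score(y)), a closed-form projection sketch.
    unnorm = [p * math.exp(C * r) for p, r in zip(student_probs, rule_scores)]
    s = sum(unnorm)
    return [u / s for u in unnorm]

def distill_loss(student_probs, true_label, teacher_probs, pi=0.5):
    # Imitation weight pi balances the ground-truth cross-entropy term
    # against cross-entropy with the rule-projected teacher.
    ce_true = -math.log(student_probs[true_label])
    ce_teacher = -sum(q * math.log(p)
                      for q, p in zip(teacher_probs, student_probs))
    return (1 - pi) * ce_true + pi * ce_teacher
```

For example, with an undecided student `[0.5, 0.5]` and a rule favoring label 0, the teacher shifts probability mass toward label 0, and training against `distill_loss` pulls the student the same way at each iteration.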


International Conference on Management of Data | 2015

Community Level Diffusion Extraction

Zhiting Hu; Junjie Yao; Bin Cui; Eric P. Xing

How does online content propagate on social networks? Billions of users generate, consume, and spread vast amounts of information every day. Dynamics at this unprecedented scale are invaluable for reflecting our zeitgeist. However, most existing diffusion-extraction work has only addressed the individual user level and cannot obtain comprehensive clues. This paper introduces a new approach, COmmunity Level Diffusion (COLD), to uncover and explore temporal diffusion. We model topics and communities in a unified latent framework and extract inter-community influence dynamics. With a well-designed multi-component model structure and a parallel inference implementation on GraphLab, the COLD method is expressive while remaining efficient. The extracted community-level patterns enable diffusion exploration from a new perspective. We leverage the compact yet robust representations to develop new prediction and analysis applications. Extensive experiments on large social datasets show significant improvements in prediction accuracy. We also find that communities play very different roles in diffusion processes depending on their interests. Our method guarantees high scalability with increasing data size.


International Joint Conference on Natural Language Processing | 2015

Entity Hierarchy Embedding

Zhiting Hu; Po-Yao Huang; Yuntian Deng; Yingkai Gao; Eric P. Xing

Existing distributed representations are limited in utilizing structured knowledge to improve semantic relatedness modeling. We propose a principled framework for embedding entities that integrates hierarchical information from large-scale knowledge bases. The novel embedding model associates each category node of the hierarchy with a distance metric. To capture structured semantics, the entity similarity in context prediction is measured under the aggregated metrics of relevant categories along all inter-entity paths. We show that both the entity vectors and the category distance metrics encode meaningful semantics. Experiments in entity linking and entity search show the superiority of the proposed method.
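As a toy illustration of measuring entity distance under aggregated category metrics (a sketch assuming diagonal per-category metrics; this is not the paper's exact formulation), the metrics of all categories along the inter-entity path can be summed and used as a Mahalanobis-style weighting:

```python
def category_distance(u, v, path_metrics):
    # Sum the diagonal metrics of the categories along the inter-entity path,
    # then measure a weighted Euclidean distance under the aggregated metric.
    dim = len(u)
    agg = [0.0] * dim
    for m in path_metrics:          # one diagonal metric per category node
        for i in range(dim):
            agg[i] += m[i]
    return sum(agg[i] * (u[i] - v[i]) ** 2 for i in range(dim)) ** 0.5
```

With identity metrics on a two-category path, this reduces to Euclidean distance scaled by the square root of the path length; learned metrics would instead stretch or shrink individual embedding dimensions per category.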


Empirical Methods in Natural Language Processing | 2016

Deep Neural Networks with Massive Learned Knowledge

Zhiting Hu; Zichao Yang; Ruslan Salakhutdinov; Eric P. Xing

Regulating deep neural networks (DNNs) with human-structured knowledge has been shown to be of great benefit for improved accuracy and interpretability. We develop a general framework that enables learning knowledge and its confidence jointly with the DNNs, so that vast amounts of fuzzy knowledge can be incorporated and automatically optimized with little manual effort. We apply the framework to sentence sentiment analysis, augmenting a DNN with massive linguistic constraints on discourse and polarity structures. Our model substantially enhances performance using less training data and shows improved interpretability. The principled framework can also be applied to posterior regularization for regulating other statistical models.


Meeting of the Association for Computational Linguistics | 2017

Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification

Lianhui Qin; Zhisong Zhang; Hai Zhao; Zhiting Hu; Eric P. Xing

Implicit discourse relation classification is highly challenging due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve recognition. We propose a feature imitation framework in which an implicit relation network is driven to learn from another neural network with access to connectives, and is thus encouraged to extract similarly salient features for accurate classification. We develop an adversarial model to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator. Our method effectively transfers the discriminability of connectives to the implicit features and achieves state-of-the-art performance on the PDTB benchmark.


Meeting of the Association for Computational Linguistics | 2016

Learning Concept Taxonomies from Multi-modal Data

Hao Zhang; Zhiting Hu; Yuntian Deng; Mrinmaya Sachan; Zhicheng Yan; Eric P. Xing

We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous work in taxonomy induction generally ignores the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction that jointly leverages text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large margin.


Knowledge Discovery and Data Mining | 2017

Efficient Correlated Topic Modeling with Topic Embedding

Junxian He; Zhiting Hu; Taylor Berg-Kirkpatrick; Ying Huang; Eric P. Xing

Correlated topic models have been limited to small model and problem sizes due to their high computational cost and poor scaling. In this paper, we propose a new model that learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing the previous cubic or quadratic time complexity to linear with respect to the number of topics. We further speed up variational inference with a fast sampler that exploits the sparsity of topic occurrence. Extensive experiments show that our approach can handle model and data scales several orders of magnitude larger than in existing correlated topic modeling results, without sacrificing modeling quality, providing competitive or superior performance in document classification and retrieval.
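To illustrate how topic correlations can be read off from embeddings, here is a hedged sketch (the normalization choice is illustrative, not the paper's exact parameterization): correlation strength decays with the squared Euclidean distance between topic vectors and is normalized per topic. The sketch materializes the full K×K matrix for clarity, whereas the paper's contribution is precisely that inference avoids this quadratic cost:

```python
import math

def topic_correlations(topic_vecs):
    # Pairwise topic "closeness" from embeddings: a softmax over negative
    # squared Euclidean distances, so nearby topic vectors get high weight.
    K = len(topic_vecs)
    corr = []
    for i in range(K):
        scores = []
        for j in range(K):
            d2 = sum((a - b) ** 2 for a, b in zip(topic_vecs[i], topic_vecs[j]))
            scores.append(-d2)
        m = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        corr.append([e / z for e in exps])
    return corr
```

For instance, with topic vectors `[[0, 0], [0.1, 0], [5, 5]]`, the first two topics end up strongly correlated while the distant third one contributes almost nothing.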


Knowledge Discovery and Data Mining | 2013

LCARS: a location-content-aware recommender system

Hongzhi Yin; Yizhou Sun; Bin Cui; Zhiting Hu; Ling Chen


International Conference on Management of Data | 2014

A temporal context-aware model for user behavior modeling in social media systems

Hongzhi Yin; Bin Cui; Ling Chen; Zhiting Hu; Zi Huang

Collaboration


Dive into Zhiting Hu's collaborations.

Top Co-Authors

Eric P. Xing | Carnegie Mellon University
Xiaodan Liang | Carnegie Mellon University
Zichao Yang | Carnegie Mellon University
Hongzhi Yin | University of Queensland
Hao Zhang | Shanghai Jiao Tong University
Yuntian Deng | Carnegie Mellon University
Xuezhe Ma | Carnegie Mellon University