
Publication


Featured research published by Fei Hu.


Database Systems for Advanced Applications | 2017

Memory-Enhanced Latent Semantic Model: Short Text Understanding for Sentiment Analysis

Fei Hu; Xiaofei Xu; Jingyuan Wang; Zhanbo Yang; Li Li

Short texts, such as tweets and reviews, are difficult to process with conventional methods because of their short length, irregular syntax, and lack of statistical signals. Term dependencies can alleviate the problem and help mine the latent semantics hidden in short texts, and Long Short-Term Memory networks (LSTMs) can capture and remember term dependencies over long distances; accordingly, LSTMs have been widely used to mine the semantics of short texts. At the same time, by analyzing texts we find that a number of key words contribute greatly to the semantics of a text. In this paper, we propose an LSTM-based model (MLSM) that enhances the memory of the key words in a short text. The proposed model is evaluated on two datasets, IMDB and SemEval-2016. Experimental results demonstrate that the proposed method is effective, with significant performance improvements over the baseline LSTM and several other latent semantic models.
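To make the memory mechanism concrete, here is a minimal, self-contained sketch of one LSTM step with scalar toy weights. The weight names and values are illustrative only, not the authors' MLSM; a real model learns per-dimension weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step on scalars: the gates decide what the cell state
    remembers (c) and what the hidden state exposes (h), which is how
    term dependencies over long distances are preserved."""
    # Each gate sees the current input and the previous hidden state.
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate memory
    c = f * c_prev + i * g   # cell state: long-term memory
    h = o * math.tanh(c)     # hidden state: per-step output
    return h, c

# Toy constant weights (hypothetical); a trained model would learn these.
w = {k: 0.5 for k in ["wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.8]:   # a "sentence" of scalar word features
    h, c = lstm_step(x, h, c, w)
```

Because `h` is a tanh output scaled by a sigmoid gate, it always stays in (-1, 1) regardless of the input sequence length.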


Web-Age Information Management | 2017

Efficient Stance Detection with Latent Feature

Xiaofei Xu; Fei Hu; Peiwen Du; Jingyuan Wang; Li Li

Social platforms such as Twitter are becoming more and more popular, but it is hard to identify the stance of opinions expressed on such media. In this paper, an approach is proposed to identify the stance of an opinion. Digging out the latent factors of the given, roughly processed information is essential, because they can reveal different aspects of the known information, which eventually contributes to the advancement of stance analysis. We take a very large number of articles from the Chinese Wikipedia as the corpus and generate latent feature vectors with word2vec. The HowNet sentiment dictionary (with positive and negative words) is applied to divide the items in the corpus into two parts, and the two parts with sentiment polarity are used as the training set for an SVM model. Experiments on the NLPCC 2016 Stance Detection dataset demonstrate that the proposed approach outperforms the baselines by about 10% in terms of precision.
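The dictionary-based labeling step can be sketched as follows. The word lists here are invented stand-ins for HowNet entries, and a real pipeline would feed the resulting labels, together with word2vec features, to an SVM.

```python
# Invented stand-ins for HowNet positive/negative entries.
POSITIVE = {"good", "great", "happy"}
NEGATIVE = {"bad", "awful", "sad"}

def polarity_label(tokens):
    """Label a document by comparing hits against the two word lists;
    ties (including no hits) are treated as neutral and discarded."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return +1
    if neg > pos:
        return -1
    return 0

corpus = [["great", "movie"], ["awful", "plot"], ["the", "end"]]
labeled = [(doc, polarity_label(doc)) for doc in corpus]
```

The non-neutral pairs in `labeled` would then become the SVM training set, with each document replaced by its averaged word2vec vector.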


International Conference on Neural Information Processing | 2017

Deep Bi-directional Long Short-Term Memory Model for Short-Term Traffic Flow Prediction

Jingyuan Wang; Fei Hu; Li Li

Short-term traffic flow prediction plays an important role in intelligent transportation systems and has attracted much attention over the past decades. However, the performance of traditional traffic flow prediction methods is not satisfactory, because those methods cannot precisely describe the complicated nonlinearity and uncertainty of traffic flow. Neural networks have been used to address these issues, but most of them fail to capture the deep features of traffic flow and are not sensitive enough to time-aware traffic flow data. In this paper, we propose a deep bi-directional long short-term memory (DBL) model that combines long short-term memory (LSTM) recurrent neural networks, residual connections, deeply hierarchical networks, and bi-directional traffic flow. The proposed model captures the deep features of traffic flow and takes full advantage of time-aware traffic flow data. Additionally, we combine the DBL model, a regression layer, and dropout training into a traffic flow prediction architecture, which we evaluate on a dataset from the Caltrans Performance Measurement System (PeMS). The experimental results demonstrate that the proposed model obtains high accuracy for short-term traffic flow prediction and generalizes well compared with other models.
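The bi-directional idea can be illustrated with a toy recurrent pass over a traffic series. Exponential smoothing stands in for the LSTM here, and all names and numbers are hypothetical.

```python
def smooth(seq):
    """Stand-in recurrent pass: exponential smoothing as a toy 'RNN'."""
    out, state = [], 0.0
    for x in seq:
        state = 0.5 * state + 0.5 * x
        out.append(state)
    return out

def bidirectional(seq):
    """Run the pass forward and backward, then pair the states so each
    position sees context from both directions, as in the DBL model."""
    fwd = smooth(seq)
    bwd = list(reversed(smooth(list(reversed(seq)))))
    return list(zip(fwd, bwd))

flow = [10.0, 12.0, 9.0, 11.0]   # toy traffic counts per interval
states = bidirectional(flow)
```

The first position's pair shows the asymmetry: its forward state has seen only one reading, while its backward state has absorbed the whole series.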


Mathematical Problems in Engineering | 2017

Batch Image Encryption Using Generated Deep Features Based on Stacked Autoencoder Network

Fei Hu; Jingyuan Wang; Xiaofei Xu; Changjiu Pu; Tao Peng

Chaos-based algorithms have been widely adopted to encrypt images, but previous chaos-based encryption schemes are not secure enough for batch image encryption, because the images are usually encrypted with a single sequence: once one encrypted image is cracked, all the others become vulnerable. In this paper, we propose a batch image encryption scheme in which a stacked autoencoder (SAE) network generates two chaotic matrices; one is used to produce a total shuffling matrix that shuffles the pixel positions of each plain image, and the other produces a series of independent sequences, each of which confuses the relationship between a permuted image and its encrypted image. The scheme is efficient because SAE supports parallel computing, which leads to a significant reduction in run-time complexity; in addition, the hybrid application of shuffling and confusing enhances the encryption effect. To evaluate its efficiency, we compared our scheme with the prevalent logistic map and achieved a better running time. The experimental results and analysis show that our scheme has a good encryption effect and is able to resist brute-force, statistical, and differential attacks.
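A minimal sketch of the shuffle-then-confuse structure follows, with a logistic map standing in for the SAE-generated chaotic matrices; keys, sizes, and function names are toy values for illustration.

```python
def logistic_sequence(x0, n, r=3.99):
    """Chaotic logistic-map sequence; stands in here for the paper's
    SAE-generated matrices."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def encrypt(pixels, key=0.37):
    n = len(pixels)
    # Shuffle: reorder pixel positions by ranking chaotic values.
    chaos = logistic_sequence(key, n)
    perm = sorted(range(n), key=lambda i: chaos[i])
    shuffled = [pixels[i] for i in perm]
    # Confuse: XOR each permuted pixel with a chaotic byte.
    stream = logistic_sequence(key / 2, n)
    return [p ^ int(s * 256) for p, s in zip(shuffled, stream)], perm

def decrypt(cipher, perm, key=0.37):
    n = len(cipher)
    stream = logistic_sequence(key / 2, n)
    shuffled = [c ^ int(s * 256) for c, s in zip(cipher, stream)]
    plain = [0] * n
    for j, i in enumerate(perm):   # invert the permutation
        plain[i] = shuffled[j]
    return plain

img = [12, 200, 7, 99, 150, 33]    # a toy "image" of byte values
cipher, perm = encrypt(img)
restored = decrypt(cipher, perm)
```

Decryption reverses the two stages in opposite order: undo the XOR stream first, then invert the permutation.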


Journal of Computer Science and Technology | 2017

Emphasizing Essential Words for Sentiment Classification Based on Recurrent Neural Networks

Fei Hu; Li Li; Zili Zhang; Jingyuan Wang; Xiaofei Xu

With the explosion of online communication and publication, texts are readily available from forums, chat messages, blogs, book reviews, and movie reviews. These texts are usually short and noisy, without sufficient statistical signals or enough information for a good semantic analysis. Traditional natural language processing methods, such as Bag-of-Words (BOW) based probabilistic latent semantic models, fail to achieve high performance in this short-text setting. Recent research has focused on the correlations between words, i.e., term dependencies, which can help mine the latent semantics hidden in short texts and help people understand them. A long short-term memory (LSTM) network can capture term dependencies and remember information for long periods of time, and LSTMs have been widely used with promising results on many variants of the problem of understanding the latent semantics of texts. At the same time, by analyzing texts we find that a number of keywords contribute greatly to their semantics. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary, so that the keywords leverage the semantics of the full document. The proposed model is evaluated on a short-text sentiment analysis task with two datasets, IMDB and SemEval-2016. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%-2% in terms of accuracy and yields significant performance improvements over several non-recurrent neural network latent semantic models, especially on short texts. We also incorporate the idea into a variant of LSTM, the gated recurrent unit (GRU), and achieve good performance, which shows that our method is general enough to improve different deep learning models.
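The keyword-emphasis idea can be sketched as a simple position weighting; the vocabulary entries and the boost factor below are invented for illustration and are not the paper's learned mechanism.

```python
KEYWORDS = {"excellent", "terrible", "boring"}   # invented vocabulary entries

def emphasize(tokens, base=1.0, boost=3.0):
    """Toy version of the idea: positions holding vocabulary keywords
    get a larger weight, so they contribute more to the document
    representation than filler words do."""
    weights = [boost if t in KEYWORDS else base for t in tokens]
    total = sum(weights)
    return [w / total for w in weights]   # normalize to sum to 1

weights = emphasize(["an", "excellent", "film"])
```

With a boost of 3, the keyword position receives three times the weight of each ordinary word, so `weights` comes out as `[0.2, 0.6, 0.2]`.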


Neural Computing and Applications | 2018

Opinion extraction by distinguishing term dependencies and digging deep text features

Fei Hu; Li Li; Xiaofei Xu; Jingyuan Wang; Jinjing Zhang

Opinion extraction from user reviews plays an important role in academia and industry, and much progress has been made with recurrent neural networks (RNNs). Compared with conventional bag-of-words models, RNNs can capture dependencies among words and remember contextual information for long periods of time. However, RNNs assign a uniform weight to the dependency between every pair of words, which contradicts the fact that people pay attention to different words to varying degrees when reading a text. In this paper, we develop a deeply hierarchical bi-directional keyword emphasis model (DHBK) that combines term dependencies, a human-like distinguishing memory mechanism, residual connections, deeply hierarchical networks, and bi-directional information flow. The model captures term dependencies weighted per word and mines deep text features, and thus better extracts user opinions. Furthermore, we apply DHBK and dropout training to an opinion extraction task and propose two novel frameworks: DHBK based on LSTM (DKL) and DHBK based on GRU (DKG). We evaluate the frameworks on two real-world datasets, IMDB and SemEval-2016 Task 4 Subtask A. Experimental results demonstrate that the improvements are effective, with significant performance enhancements.


Neural Computing and Applications | 2018

An adaptive mechanism to achieve learning rate dynamically

Jinjing Zhang; Fei Hu; Li Li; Xiaofei Xu; Zhanbo Yang; Yanbin Chen

Gradient descent is prevalent in large-scale optimization problems in machine learning; in particular, it now plays a major role in computing and correcting the connection strengths of neural networks in deep learning. However, many gradient-based optimization methods contain sensitive hyper-parameters that require endless configuration. In this paper, we present a novel adaptive mechanism called the adaptive exponential decay rate (AEDR). AEDR uses an adaptive exponential decay rate rather than a fixed, preconfigured one, which allows us to eliminate an otherwise sensitive hyper-parameter. The exponential decay rate is calculated adaptively from the moving averages of both the gradients and the squared gradients over time. The mechanism is applied to Adadelta and Adam, reducing their hyper-parameters to a single one to be tuned. We use long short-term memory networks and LeNet to demonstrate how the learning rate adapts dynamically, and show promising results compared with other state-of-the-art methods on four datasets: IMDB (movie reviews), SemEval-2016 (sentiment analysis in Twitter), CIFAR-10, and Pascal VOC-2012.
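For context, the Adam update that AEDR modifies maintains exactly the two moving averages the abstract mentions. Below is a minimal sketch of that baseline; the fixed decay rates `beta1` and `beta2` are the hyper-parameters AEDR would instead compute adaptively (the AEDR formula itself is not reproduced here).

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. AEDR's contribution (not shown) is to derive the
    decay rates adaptively from these same moving averages instead of
    fixing beta1/beta2 in advance."""
    m = beta1 * m + (1 - beta1) * grad        # moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) from x = 3 as a toy problem.
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 1001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Note that the effective step size is roughly `lr` whenever gradients are consistent, since `m_hat / sqrt(v_hat)` is then close to 1 in magnitude; this is the per-parameter normalization that makes the decay rates matter.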


Neural Computing and Applications | 2018

A fast pseudo-stochastic sequential cipher generator based on RBMs

Fei Hu; Xiaofei Xu; Tao Peng; Changjiu Pu; Li Li

Based on restricted Boltzmann machines (RBMs), an improved pseudo-stochastic sequential cipher generator is proposed. It is effective and efficient because of two advantages: the generator includes a stochastic neural network that performs its calculations in parallel, i.e., all elements are computed simultaneously; and an unlimited number of sequential ciphers can be generated simultaneously for multiple encryption schemas. The periodicity and correlation of the output sequential ciphers meet the requirements for encrypting sequential data. In the experiments, the generated sequential cipher is used to encrypt images, and good performance is achieved in terms of key space analysis, correlation analysis, sensitivity analysis, and resistance to differential attack. To evaluate the efficiency of our method, a comparative study is performed with the prevalent logistic map, and our approach achieves a better running time. The experimental results are promising, as the proposed method could promote the development of image protection in computer security.


Web Information Systems Engineering | 2017

Modeling Complementary Relationships of Cross-Category Products for Personal Ranking

Wenli Yu; Li Li; Fei Hu; Fan Li; Jinjing Zhang

The category of a product acts as its label and also exemplifies users' various needs and tastes. Existing recommender systems focus on recommending similar products, with little or no attention to cross-category recommendation or the complementary relationships between categories and products. In this paper, a novel method based on Bayesian Personalized Ranking (BPR) is proposed that integrates the complementary information between categories with the latent features of both users and items for better recommendation. By considering category dimensions explicitly, the model can alleviate the cold-start issue and make recommendations that consider not only traditional similarity measures but also complementary relationships between products. The method is evaluated comprehensively, and the experimental results illustrate that our approach significantly improves ranking performance.
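The BPR core that the method builds on can be sketched as a stochastic update on matrix-factorization scores: for a user, push an observed item's score above an unobserved one's. Everything below (names, items, dimensions) is a toy illustration of plain BPR, not the paper's category-aware extension.

```python
import math
import random

def bpr_step(user, pos, neg, U, V, lr=0.05, reg=0.01):
    """One stochastic BPR-MF update: raise sigma(u.(p - q)), i.e. the
    probability that the observed item outranks the unobserved one."""
    score = sum(u * (p - q) for u, p, q in zip(U[user], V[pos], V[neg]))
    g = 1.0 / (1.0 + math.exp(score))   # gradient scale: sigma(-score)
    for k in range(len(U[user])):
        u, p, q = U[user][k], V[pos][k], V[neg][k]
        U[user][k] += lr * (g * (p - q) - reg * u)
        V[pos][k]  += lr * (g * u - reg * p)
        V[neg][k]  += lr * (-g * u - reg * q)

random.seed(0)
dim = 4
U = {"alice": [random.gauss(0, 0.1) for _ in range(dim)]}
V = {i: [random.gauss(0, 0.1) for _ in range(dim)]
     for i in ("phone", "case", "novel")}

# Toy signal: alice bought a phone; a case complements it, a novel does not.
for _ in range(200):
    bpr_step("alice", "case", "novel", U, V)

score = lambda u, i: sum(a * b for a, b in zip(U[u], V[i]))
```

The paper's extension would additionally feed category-level complementarity into which (pos, neg) pairs are sampled and scored.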


Knowledge Science, Engineering and Management | 2017

Exploring Latent Bundles from Social Behaviors for Personalized Ranking

Wenli Yu; Li Li; Fan Li; Jinjing Zhang; Fei Hu

Users in social networks usually have different interpersonal relationships and various social roles, and a user commonly synthesizes all of his or her roles before taking any action. Understanding how products relate to each other is crucial in recommender systems (RSs), and predicting personalized sequential behaviors, which are influenced by users' various social roles and product bundle relationships, is one of the key tasks for the success of RSs. In this paper, a novel method combining social roles and sequential patterns is proposed to explore the latent bundle dimensions from the perspective of a user's sequential patterns as well as his or her social roles. The extracted vector represents the most distinctive features of the users' interpersonal relationships. The proposed method explores latent bundle relationships by learning personal dynamics as influenced by the user's social roles. The method is evaluated on Amazon datasets, and the results demonstrate that our framework outperforms the baselines on top-k recommendation.

Collaboration


Dive into Fei Hu's collaborations.

Top Co-Authors

Li Li (Southwest University)
Changjiu Pu (University of Education)
Tao Peng (University of Education)
Fan Li (Southwest University)
Wenli Yu (Southwest University)