Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jingyuan Wang is active.

Publication


Featured research published by Jingyuan Wang.


Database Systems for Advanced Applications | 2017

Memory-Enhanced Latent Semantic Model: Short Text Understanding for Sentiment Analysis

Fei Hu; Xiaofei Xu; Jingyuan Wang; Zhanbo Yang; Li Li

Short texts, such as tweets and reviews, are difficult to process with conventional methods because of their short length, irregular syntax, and lack of statistical signals. Term dependencies can be used to alleviate the problem and to mine the latent semantics hidden in short texts, and Long Short-Term Memory networks (LSTMs), which can capture and remember term dependencies over long distances, have been widely used to mine the semantics of short texts. At the same time, by analyzing texts we find that a small number of key words contribute greatly to their semantics. In this paper, we propose an LSTM-based model, the Memory-Enhanced Latent Semantic Model (MLSM), which enhances the memory of the key words in a short text. The proposed model is evaluated on two datasets, IMDB and SemEval-2016. Experimental results demonstrate that the proposed method is effective, with significant performance improvements over the baseline LSTM and several other latent semantic models.
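The abstract's core mechanism (amplifying the contribution of key words before the recurrent pass) can be sketched roughly as follows. Everything here, from the toy embeddings to the keyword set, boost factor, and the decayed-sum stand-in for an LSTM, is an illustrative assumption, not the authors' implementation:

```python
# Illustrative sketch: emphasize key words before a recurrent pass.
# KEYWORDS, BOOST, and embed() are toy assumptions, not learned values.

KEYWORDS = {"great", "terrible"}   # hypothetical sentiment-bearing vocabulary
BOOST = 2.0                        # hypothetical emphasis factor

def embed(token, dim=4):
    """Toy deterministic embedding (stands in for learned embeddings)."""
    h = sum(ord(c) for c in token) % 1000
    return [((h * (i + 1)) % 7 - 3) / 3.0 for i in range(dim)]

def emphasize(tokens):
    """Scale the embeddings of key words so they dominate the state."""
    vecs = []
    for t in tokens:
        v = embed(t)
        if t in KEYWORDS:
            v = [BOOST * x for x in v]
        vecs.append(v)
    return vecs

def recurrent_sum(vecs, decay=0.5):
    """Minimal stand-in for an LSTM: exponentially decayed running state."""
    state = [0.0] * len(vecs[0])
    for v in vecs:
        state = [decay * s + x for s, x in zip(state, v)]
    return state
```

A sentence containing a keyword ends in a different final state than the plain encoding, which is the memory-enhancement effect the paper exploits.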


Web-Age Information Management | 2017

Efficient Stance Detection with Latent Feature

Xiaofei Xu; Fei Hu; Peiwen Du; Jingyuan Wang; Li Li

Social platforms such as Twitter are becoming more and more popular, but it is hard to identify sentimental stance in social media content. In this paper, we propose an approach to identifying the stance of an opinion. Digging out the latent factors of roughly processed information is essential because it can reveal different aspects of the known information, which ultimately advances stance analysis. We take a very large collection of articles from the Chinese Wikipedia as the corpus and generate latent feature vectors with word2vec. The HowNet sentiment dictionary (with positive and negative words) is applied to divide the items in the corpus into two parts, and the two parts with sentiment polarity are used as the training set for an SVM model. Experiments on the NLPCC 2016 Stance Detection dataset demonstrate that the proposed approach outperforms the baselines by about 10% in terms of precision.
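The pipeline described above can be sketched in miniature. The lexicons and embeddings below are toy stand-ins for HowNet and word2vec, and a perceptron stands in for the SVM; none of this is the authors' code:

```python
# Sketch of the pipeline: dictionary-labeled sentence vectors feed a
# linear classifier. All lexicons, embeddings, and the classifier are
# illustrative stand-ins.

POSITIVE = {"good", "happy"}   # hypothetical positive lexicon
NEGATIVE = {"bad", "sad"}      # hypothetical negative lexicon

def vec(word, dim=3):
    """Toy deterministic word vector (stands in for word2vec output)."""
    h = sum(ord(c) for c in word)
    return [((h * (i + 2)) % 5 - 2) / 2.0 for i in range(dim)]

def sent_vec(tokens):
    """Sentence vector = mean of word vectors (common word2vec practice)."""
    vs = [vec(t) for t in tokens]
    return [sum(col) / len(vs) for col in zip(*vs)]

def label(tokens):
    """Polarity from the dictionary: +1 positive, -1 negative, None unknown."""
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return None if score == 0 else (1 if score > 0 else -1)

def train_perceptron(data, epochs=20):
    """Linear classifier on (vector, polarity) pairs; stands in for the SVM."""
    w = [0.0] * (len(data[0][0]) + 1)          # last weight is the bias
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else -1
            if pred != y:
                w = [wi + y * xi for wi, xi in zip(w, x + [1.0])]
    return w
```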


International Conference on Neural Information Processing | 2017

Deep Bi-directional Long Short-Term Memory Model for Short-Term Traffic Flow Prediction

Jingyuan Wang; Fei Hu; Li Li

Short-term traffic flow prediction plays an important role in intelligent transportation systems and has received much attention in the past decades. However, the performance of traditional prediction methods is unsatisfactory because they cannot precisely describe the complicated nonlinearity and uncertainty of traffic flow. Neural networks have been applied to the problem, but most fail to capture the deep features of traffic flow or to be sensitive enough to time-aware traffic flow data. In this paper, we propose a deep bi-directional long short-term memory (DBL) model that combines long short-term memory (LSTM) recurrent neural networks, residual connections, deeply hierarchical networks and bi-directional traffic flow. The proposed model captures the deep features of traffic flow and takes full advantage of time-aware traffic flow data. Additionally, we combine the DBL model, a regression layer and the dropout training method into a traffic flow prediction architecture, which we evaluate on data from the Caltrans Performance Measurement System (PeMS). The experimental results demonstrate that the proposed model obtains high accuracy for short-term traffic flow prediction and generalizes well compared with other models.
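The bi-directional and residual ideas can be illustrated with a minimal sketch, in which a decayed running average stands in for each LSTM direction; the combine rule and constants are assumptions for illustration only:

```python
# Minimal sketch of bi-directional processing with a residual connection.
# A decayed running average stands in for each LSTM direction.

def run_direction(xs, decay=0.7):
    """One recurrent pass over the sequence, in the order given."""
    state, out = 0.0, []
    for x in xs:
        state = decay * state + (1 - decay) * x
        out.append(state)
    return out

def bidirectional(xs):
    """Combine a forward pass and a backward pass at each time step."""
    fwd = run_direction(xs)
    bwd = run_direction(xs[::-1])[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

def residual_layer(xs):
    """Residual connection: add the layer input back onto its output."""
    return [x + h for x, h in zip(xs, bidirectional(xs))]
```

The backward pass lets each time step see future observations, which is why bi-directional models suit offline traffic-flow series.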


Mathematical Problems in Engineering | 2017

Batch Image Encryption Using Generated Deep Features Based on Stacked Autoencoder Network

Fei Hu; Jingyuan Wang; Xiaofei Xu; Changjiu Pu; Tao Peng

Chaos-based algorithms have been widely adopted to encrypt images, but previous chaos-based encryption schemes are not secure enough for batch image encryption, because the images are usually encrypted with a single sequence: once one encrypted image is cracked, all the others become vulnerable. In this paper, we propose a batch image encryption scheme in which a stacked autoencoder (SAE) network generates two chaotic matrices. One set is used to produce a total shuffling matrix that shuffles the pixel positions of each plain image, and the other produces a series of independent sequences, each of which confuses the relationship between a permutated image and its encrypted image. The scheme is efficient because the parallelism of the SAE leads to a significant reduction in run-time complexity, and the hybrid application of shuffling and confusing strengthens the encryption. To evaluate its efficiency, we compared the scheme with the prevalent logistic map and achieved better running times. The experimental results and analysis show that our scheme has a good encryption effect and is able to resist brute-force, statistical, and differential attacks.
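The shuffle-then-confuse structure described above can be sketched as follows; a seeded PRNG stands in for the SAE-generated chaotic matrices, and the key handling is purely illustrative:

```python
import random

# Sketch of the shuffle-then-confuse structure on a flat pixel list.
# A seeded PRNG stands in for the SAE-generated chaotic sequences.

def keystream(seed, n):
    """Byte sequence used to confuse pixel values."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

def permutation(seed, n):
    """Position permutation used to shuffle pixel locations."""
    rng = random.Random(seed)
    p = list(range(n))
    rng.shuffle(p)
    return p

def encrypt(pixels, shuffle_seed, confuse_seed):
    """Shuffle pixel positions, then XOR with an independent sequence."""
    p = permutation(shuffle_seed, len(pixels))
    shuffled = [pixels[i] for i in p]
    ks = keystream(confuse_seed, len(pixels))
    return [s ^ k for s, k in zip(shuffled, ks)]

def decrypt(cipher, shuffle_seed, confuse_seed):
    """Invert the XOR, then undo the position shuffle."""
    ks = keystream(confuse_seed, len(cipher))
    shuffled = [c ^ k for c, k in zip(cipher, ks)]
    p = permutation(shuffle_seed, len(cipher))
    out = [0] * len(cipher)
    for j, i in enumerate(p):
        out[i] = shuffled[j]
    return out
```

Using an independent confusion sequence per image is what prevents one cracked image from exposing the whole batch, per the abstract's motivation.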


Journal of Computer Science and Technology | 2017

Emphasizing Essential Words for Sentiment Classification Based on Recurrent Neural Networks

Fei Hu; Li Li; Zili Zhang; Jingyuan Wang; Xiaofei Xu

With the explosion of online communication and publication, texts have become available from forums, chat messages, blogs, book reviews and movie reviews. These texts are usually short and noisy, without sufficient statistical signal for good semantic analysis. Traditional natural language processing methods, such as Bag-of-Words (BOW) based probabilistic latent semantic models, fail to achieve high performance on such short texts. Recent research has focused on the correlations between words, i.e., term dependencies, which can help mine the latent semantics hidden in short texts. A long short-term memory (LSTM) network can capture term dependencies and remember information for long periods of time, and LSTMs have been widely used, with promising results, in a variety of problems involving the latent semantics of texts. At the same time, by analyzing texts we find that a small number of keywords contribute greatly to their semantics. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated on a short-text sentiment analysis task with two datasets, IMDB and SemEval-2016. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in accuracy and yields significant improvements over several non-recurrent neural network latent semantic models, especially on short texts. We also incorporate the idea into a variant of LSTM, the gated recurrent unit (GRU) model, and achieve good performance, which shows that our method is general enough to improve different deep learning models.


Pacific-Asia Conference on Knowledge Discovery and Data Mining | 2018

A Locally Adaptive Multi-Label k-Nearest Neighbor Algorithm

Dengbao Wang; Jingyuan Wang; Fei Hu; Li Li; Xiuzhen Zhang

In the field of multi-label learning, ML-kNN is the first lazy learning approach and one of the most influential. Its main idea is to adapt the k-NN method to multi-label data, using the maximum a posteriori rule to adaptively adjust the decision boundary for each unseen instance. In ML-kNN, all test instances that receive the same number of votes among their k nearest neighbors have the same probability of being assigned a label, which may cause improper decisions because it ignores local differences among samples. In real-world datasets, instances with (or without) a label l in different regions of the data may have different numbers of neighbors carrying l. In this paper, we propose a locally adaptive multi-label k-nearest neighbor method that takes this local difference into account. We show how a simple modification to the posterior probability expression used in the ML-kNN algorithm allows us to do so. Experimental results on benchmark datasets demonstrate that our approach has superior classification performance compared with other kNN-based algorithms.
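A rough sketch of a locally adaptive decision rule on a toy 1-D feature space. The midpoint threshold below is an illustrative stand-in for the paper's modified posterior probability expression, not its exact form:

```python
# Toy sketch: a per-label, data-driven vote threshold instead of a fixed
# rule. Training items are (feature, label_set) pairs in one dimension.

def neighbors(x, data, k):
    """k nearest training items by 1-D distance (toy feature space)."""
    return sorted(data, key=lambda item: abs(item[0] - x))[:k]

def vote_count(x, data, k, label):
    """How many of x's k nearest neighbors carry the label."""
    return sum(1 for _, ls in neighbors(x, data, k) if label in ls)

def threshold(label, data, k):
    """Midpoint between the mean vote count of instances that carry the
    label and of those that do not: a data-driven boundary standing in
    for the paper's locally adjusted MAP rule."""
    have = [vote_count(x, data, k, label) for x, ls in data if label in ls]
    lack = [vote_count(x, data, k, label) for x, ls in data if label not in ls]
    hi = sum(have) / len(have) if have else 0.0
    lo = sum(lack) / len(lack) if lack else 0.0
    return max((hi + lo) / 2, 0.5)   # never accept a zero-vote label

def predict(x, data, k=3):
    """Assign every label whose local vote count clears its own threshold."""
    all_labels = set().union(*(ls for _, ls in data))
    return {l for l in all_labels
            if vote_count(x, data, k, l) >= threshold(l, data, k)}
```

Because each label gets its own threshold derived from its neighborhood statistics, two test points with equal vote counts can still receive different decisions, which is the local difference the paper targets.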


Knowledge Science, Engineering and Management | 2018

P-DBL: A Deep Traffic Flow Prediction Architecture Based on Trajectory Data

Jingyuan Wang; Xiaofei Xu; Jun He; Li Li

Predicting traffic flow over large-scale transportation networks has become an important and challenging topic in recent decades, yet accurate traffic flow prediction remains hard to achieve. Weather factors, such as precipitation in residential areas and tourist destinations, affect traffic flow on the surrounding roads. In this paper, we take the impact of precipitation into account when predicting traffic flow. To realize this idea, we propose a deep traffic flow prediction architecture that combines a deep bi-directional long short-term memory model, precipitation information, residual connections, a regression layer and the dropout training method. The proposed model captures the deep features of traffic flow and takes full advantage of time-aware traffic flow data together with the additional precipitation data. We evaluate the architecture on taxi trajectory datasets from Chongqing and Beijing, with corresponding precipitation data from the China Meteorological Data Service Center (CMDC). The experimental results demonstrate that the proposed model obtains higher accuracy than other models.


Database Systems for Advanced Applications | 2018

Extracting Label Importance Information for Multi-label Classification

Dengbao Wang; Li Li; Jingyuan Wang; Fei Hu; Xiuzhen Zhang

Existing multi-label learning approaches assume that all labels in a dataset are equally important. In the real world, however, labels generally differ in importance. In this paper, we introduce multi-label importance (MLI), which measures label importance from two perspectives: label predictability and label effects. Both can be extracted from training data before building multi-label learning models, and the resulting importance information can then be used in existing approaches to improve their performance. To demonstrate this, we propose a classifier chain algorithm based on multi-label importance ranking and an improved kNN-based algorithm that takes both feature distance and label distance into consideration. Experiments on benchmark datasets show that exploiting multi-label importance yields efficient multi-label learning. It is also worth mentioning that our experiments show a strong positive correlation between label predictability and label effects.
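One way to picture an importance-ordered classifier chain: score each label, then put high-scoring labels early in the chain. The score used here (summed absolute label co-occurrence covariance) is only an illustrative proxy for the paper's predictability and effect measures:

```python
from itertools import combinations

# Sketch: order a classifier chain by a label-importance score.
# Training data is a list of label sets; the score is a stand-in proxy.

def cooccurrence(data, a, b):
    """Covariance of the indicator variables of two labels."""
    n = len(data)
    pa = sum(a in ls for ls in data) / n
    pb = sum(b in ls for ls in data) / n
    pab = sum(a in ls and b in ls for ls in data) / n
    return pab - pa * pb

def importance(data, labels):
    """Sum of absolute associations with every other label."""
    scores = {l: 0.0 for l in labels}
    for a, b in combinations(labels, 2):
        c = abs(cooccurrence(data, a, b))
        scores[a] += c
        scores[b] += c
    return scores

def chain_order(data, labels):
    """Most strongly associated labels come first in the chain."""
    s = importance(data, labels)
    return sorted(labels, key=lambda l: -s[l])
```

Labels predicted early in a chain feed their outputs to later classifiers, so placing informative labels first is the intuition behind importance-based ordering.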


Neural Computing and Applications | 2018

Opinion extraction by distinguishing term dependencies and digging deep text features

Fei Hu; Li Li; Xiaofei Xu; Jingyuan Wang; Jinjing Zhang

Opinion extraction from user reviews plays an important role in academia and industry, and much progress has been achieved with recurrent neural networks (RNNs). Compared with conventional bag-of-words models, RNNs can capture dependencies among words and remember contextual information for long periods of time. However, RNNs assign a uniform weight to each pairwise word dependency, which is at odds with the fact that people pay attention to different words in varying degrees when reading a text. In this paper, we develop a deeply hierarchical bi-directional keyword-emphasis model (DHBK) that combines term dependencies, a human-like distinguishing memory mechanism, residual connections, deeply hierarchical networks and bi-directional information flow. The model captures term dependencies weighted according to the words involved, mines deep text features, and thereby better extracts user opinions. Furthermore, we apply the DHBK and the dropout training method to an opinion extraction task and propose two novel frameworks: DHBK based on LSTM (DKL) and DHBK based on GRU (DKG). We evaluate the frameworks on two real-world datasets, IMDB and SemEval-2016 Task 4 Subtask A. Experimental results demonstrate that the improvements are effective, with a significant performance enhancement.


Web-Age Information Management | 2017

A Joint Model for Water Scarcity Evaluation

Jingyuan Wang; Li Li

To make a real difference for our thirsty planet, we establish a water demand-supply model and the Advanced Water Poverty Index (AWPI). First, we develop a dynamic demand-supply model to measure the ability of a region to satisfy its water consumption. On the demand side, we fit agricultural and industrial water needs with the Grey Verhulst prediction model, and model domestic needs with a logistic growth model of total population combined with a regression model of residential needs per capita. On the supply side, we estimate the impact of multiple factors such as utilized internal rivers and rainfall, desalinated seawater and purified sewage. In the experiments, we use data from the World Bank, and the evaluation confirms the stability of our model. Second, we analyze the types of water scarcity by extending the Water Poverty Index to the Advanced Water Poverty Index, adding population as a sixth key component. The prediction can serve as an important indicator for governments to take specific intervention measures that help alleviate severe water shortages and achieve sustainable development of water resources.
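The demand-side composition (a logistic population model times per-capita use, plus sectoral projections) can be sketched as follows; all parameters are illustrative placeholders, not the fitted values from the paper:

```python
import math

# Sketch of the demand side: logistic population growth times per-capita
# domestic use, plus sectoral totals. All constants are illustrative.

def logistic_population(t, carrying=2.0e7, p0=1.0e7, rate=0.03):
    """Logistic growth: P(t) = K / (1 + (K/P0 - 1) * exp(-r t))."""
    return carrying / (1 + (carrying / p0 - 1) * math.exp(-rate * t))

def domestic_demand(t, per_capita=60.0):
    """Domestic water need (m^3/year) = population * per-capita use."""
    return logistic_population(t) * per_capita

def total_demand(t, agricultural, industrial):
    """Total demand adds sectoral projections (fit with the Grey Verhulst
    model in the paper; plain numbers here)."""
    return domestic_demand(t) + agricultural + industrial
```

Population saturates at the carrying capacity, so domestic demand flattens over time while the sectoral terms can be projected separately.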

Collaboration


Dive into Jingyuan Wang's collaborations.

Top Co-Authors

Li Li (Southwest University)
Fei Hu (Southwest University)
Wenli Yu (Southwest University)
Jun He (Southwest University)