Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Wenjuan Gong is active.

Publication


Featured research published by Wenjuan Gong.


Software: Practice and Experience | 2017

Resource requests prediction in the cloud computing environment with a deep belief network

Weishan Zhang; Pengcheng Duan; Laurence T. Yang; Feng Xia; Zhongwei Li; Qinghua Lu; Wenjuan Gong; Su Yang

Accurate resource request prediction is essential to achieve optimal job scheduling and load balancing for cloud computing. Existing prediction approaches fall short in providing satisfactory accuracy because of the high variance of cloud metrics. We propose a deep belief network (DBN)-based approach to predict cloud resource requests. We design a set of experiments to find the most influential factors for prediction accuracy and the best DBN parameter set to achieve optimal performance. The innovative point of the proposed approach is that it introduces analysis of variance and orthogonal experimental design techniques into the parameter learning of the DBN. The proposed approach achieves high accuracy, with a mean squared error in the range [10⁻⁶, 10⁻⁵], approximately a 72% reduction compared with the traditional autoregressive integrated moving average predictor, and has better prediction accuracy than the state-of-the-art fractal modeling approach.
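For readers who want a concrete starting point, the sketch below shows the general shape of such a predictor: stacked restricted Boltzmann machines used as an unsupervised feature learner over a sliding window of past requests, with a regression head on top. It is a minimal illustration built with scikit-learn, not the paper's implementation; the window size, layer sizes, and the synthetic workload trace are all assumptions, and the paper's analysis-of-variance and orthogonal experimental design steps for parameter selection are not reproduced.

```python
# Minimal sketch (not the paper's implementation): a stacked-RBM, DBN-style
# predictor for a resource-request time series, using scikit-learn.
# Window size, layer sizes, and the synthetic workload trace are assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def make_windows(series, window=12):
    """Turn a 1-D trace into (lagged window -> next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic CPU-request trace standing in for real cloud metrics.
rng = np.random.default_rng(0)
trace = 0.5 + 0.3 * np.sin(np.arange(2000) / 20.0) + 0.05 * rng.standard_normal(2000)

X, y = make_windows(trace, window=12)
model = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("reg", LinearRegression()),  # regression head on the learned features
])
model.fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
print("MSE on held-out windows:", np.mean((pred - y[1500:]) ** 2))
```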


EURASIP Journal on Advances in Signal Processing | 2012

Human action recognition based on estimated weak poses

Wenjuan Gong; Jordi Gonzàlez; Francesc Xavier Roca

We present a novel method for human action recognition (HAR) based on poses estimated from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space that still keeps the most discriminative information for a given pose. With poses predicted from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag-of-words pipeline, building the vocabulary from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion proves more efficient and robust against the inherent challenges of action recognition. Moreover, since the ordering of poses is discriminative for action recognition, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracy. The proposed method is scene-independent and is comparable with state-of-the-art methods.
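As a rough illustration of the bag-of-poses pipeline, the sketch below projects poses into a low-dimensional "weak pose" space with PCA, builds a vocabulary, and classifies histogram representations of sequences with a linear SVM. Plain k-means stands in for the paper's own vocabulary selection criterion, the temporal grouping of consecutive poses is omitted, and all data shapes are assumptions.

```python
# Minimal bag-of-poses sketch. The paper selects representative weak poses with
# its own criterion; plain k-means is used here only for illustration. Pose
# dimensions, vocabulary size, and the random data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
train_poses = rng.standard_normal((500, 45))   # 500 frames of 15 3-D joints (flattened)
seq_ids = np.repeat(np.arange(20), 25)         # 20 sequences, 25 frames each
seq_labels = rng.integers(0, 3, size=20)       # 3 action classes

# 1. Weak poses: project full poses into a low-dimensional space.
pca = PCA(n_components=8).fit(train_poses)
weak = pca.transform(train_poses)

# 2. Vocabulary of representative weak poses.
vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(weak)

# 3. Represent each sequence as a histogram of pose-word assignments.
def bag_of_poses(weak_poses):
    words = vocab.predict(weak_poses)
    hist, _ = np.histogram(words, bins=np.arange(17))
    return hist / max(hist.sum(), 1)

X = np.array([bag_of_poses(weak[seq_ids == s]) for s in range(20)])
clf = LinearSVC().fit(X, seq_labels)
print(clf.predict(X[:3]))
```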


Iberian Conference on Pattern Recognition and Image Analysis | 2011

On Importance of Interactions and Context in Human Action Recognition

Nataliya Shapovalova; Wenjuan Gong; Marco Pedersoli; Francesc Xavier Roca; Jordi Gonzàlez

This paper focuses on the automatic recognition of human events in static images. Popular techniques use knowledge of the human pose for inferring the action, and the most recent approaches tend to combine pose information with knowledge of either the scene or the objects with which the human interacts. Our approach takes a step forward in this direction by combining the human pose with the scene in which the human is placed, together with the spatial relationships between humans and objects. Based on standard, simple descriptors like HOG and SIFT, recognition performance is enhanced when these three types of knowledge are taken into account. Results obtained on the PASCAL 2010 Action Recognition Dataset demonstrate that our technique reaches state-of-the-art results using simple descriptors and classifiers.
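The sketch below illustrates the general feature-fusion idea at a much simplified level: a HOG descriptor of the person crop is concatenated with a crude scene statistic and fed to a linear SVM. It is not the paper's pipeline; SIFT and the human-object spatial relations are omitted, and the crop coordinates, image sizes, and random stand-in images are assumptions.

```python
# Minimal feature-fusion sketch, not the paper's pipeline: HOG on a person crop
# plus a simple color histogram of the whole scene, concatenated and fed to an
# SVM. Crop coordinates, image sizes, and the random images are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def describe(image, person_box):
    """Concatenate a HOG person descriptor with a scene color histogram."""
    x0, y0, x1, y1 = person_box
    person = image[y0:y1, x0:x1].mean(axis=2)   # grayscale crop of the person
    person_feat = hog(person, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    scene_hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(4, 4, 4))
    return np.concatenate([person_feat, scene_hist.ravel() / image.size])

# Random stand-ins for annotated action images (128x128 RGB, fixed person box).
images = rng.random((40, 128, 128, 3))
labels = rng.integers(0, 4, size=40)
X = np.array([describe(img, (32, 16, 96, 112)) for img in images])
clf = LinearSVC().fit(X, labels)
print("training accuracy on random data:", clf.score(X, labels))
```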


Sensors | 2016

Human Pose Estimation from Monocular Images: A Comprehensive Survey

Wenjuan Gong; Xuena Zhang; Jordi Gonzàlez; Andrews Sobral; Thierry Bouwmans; Changhe Tu; El-hadi Zahzah

Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways: top-down versus bottom-up, and generative versus discriminative. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available datasets for validation and summarizes frequently used error measurement methods.
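As a small worked example of the kind of error measurement such surveys cover, the snippet below computes the mean per-joint position error (MPJPE), a commonly used 3D pose metric; whether the survey uses this exact formulation is not asserted here, and the joint count and random poses are assumptions.

```python
# Worked example of one commonly used error measure in this area: mean
# per-joint position error (MPJPE). Shown only as an illustration of the kind
# of error measurement methods such surveys collect; shapes are assumptions.
import numpy as np

def mpjpe(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth joints.

    pred, gt: arrays of shape (n_frames, n_joints, 3) in the same units (e.g. mm).
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

rng = np.random.default_rng(0)
gt = rng.standard_normal((100, 15, 3)) * 100       # 100 frames, 15 joints, in mm
pred = gt + rng.standard_normal(gt.shape) * 10     # predictions with ~10 mm noise
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```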


International Conference on Computer Vision | 2011

Modeling vs. learning approaches for monocular 3D human pose estimation

Wenjuan Gong; Jürgen Brauer; Michael Arens; Jordi Gonzàlez

We tackle the problem of 3D human pose estimation based on monocular images from which 2D pose estimates are available. A large number of approaches have been proposed for this task. Some avoid modeling the mapping from 2D poses to 3D poses explicitly and instead learn it from training samples. In contrast, other methods use knowledge about the connection between 2D and 3D poses to model the mapping explicitly. Surprisingly, there has so far been no experimental comparison of these two classes of approaches that uses exactly the same data sources and thereby brings out the advantages and disadvantages of each. In this paper we present such a comparison between the most commonly used learning approach for 3D pose estimation, the Gaussian process regressor, and the most commonly used modeling approach, geometric reconstruction of 3D poses. The results show that the learning-based approach outperforms the modeling approach when viewpoint and action types do not differ greatly from the training data. In contrast, modeling approaches show advantages over learning approaches when there are big differences between training and application data.
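The sketch below shows only the learning side of this comparison: a Gaussian process regressor (via scikit-learn) mapping flattened 2D joint coordinates to 3D poses. The kernel choice, joint count, and synthetic projection are assumptions, and the geometric reconstruction baseline is not reproduced.

```python
# Minimal sketch of the learning-based side of the comparison: a Gaussian
# process regressor mapping 2-D joint coordinates to 3-D poses. Kernel choice,
# joint counts, and the synthetic poses are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_joints = 15
poses_3d = rng.standard_normal((300, n_joints * 3))          # flattened 3-D training poses
proj = rng.standard_normal((n_joints * 3, n_joints * 2))     # stand-in camera projection
poses_2d = poses_3d @ proj + 0.01 * rng.standard_normal((300, n_joints * 2))

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(poses_2d[:250], poses_3d[:250])

pred_3d = gpr.predict(poses_2d[250:])
err = np.linalg.norm((pred_3d - poses_3d[250:]).reshape(-1, n_joints, 3), axis=-1).mean()
print(f"mean per-joint error on held-out poses: {err:.3f}")
```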


IEEE Transactions on Industrial Informatics | 2016

A Load-Aware Pluggable Cloud Framework for Real-Time Video Processing

Weishan Zhang; Pengcheng Duan; Wenjuan Gong; Qinghua Lu; Su Yang

A large number of video applications require real-time response. High-speed video processing therefore requires a distributed, parallelized framework that makes the best use of all available computing resources, i.e., both the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). CPU-GPU collaboration may cause resource imbalance, since GPU-based jobs consume less computing resources but occupy more memory than CPU-based jobs. In this paper, we propose a load-aware pluggable cloud framework for real-time video processing in which CPU-GPU switching based on workload status can be performed at runtime. Furthermore, we design aspect-oriented monitors to collect framework metrics and propose a distance coverage algorithm to detect performance degradation, ensuring that the framework continues to perform well when a load-aware task switch is made. We have comprehensively evaluated the framework, and the results show that it offers good performance, reusability, pluggability, and scalability.
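The sketch below illustrates the load-aware switching idea in its simplest form: inspect workload metrics at runtime and choose the CPU or GPU path for the next batch of frames. The thresholds, metric names, and processing stubs are assumptions; the paper's aspect-oriented monitors and distance coverage degradation detector are not reproduced.

```python
# Minimal sketch of runtime CPU/GPU switching driven by load metrics.
# Thresholds, metric names, and the processing stubs are assumptions.
from dataclasses import dataclass

@dataclass
class LoadSnapshot:
    cpu_util: float      # fraction of CPU in use, 0..1
    gpu_mem_free: float  # free GPU memory in GB

def choose_backend(load: LoadSnapshot, frame_batch_gb: float) -> str:
    """Pick the execution path for the next batch of video frames."""
    if load.gpu_mem_free >= frame_batch_gb and load.cpu_util > 0.8:
        return "gpu"          # CPU is saturated and the batch fits in GPU memory
    if load.gpu_mem_free < frame_batch_gb:
        return "cpu"          # batch would not fit on the GPU
    return "gpu" if load.cpu_util > 0.5 else "cpu"

def process_batch(frames, backend: str):
    # Stand-in for the real CPU- and GPU-based video processing kernels.
    return f"processed {len(frames)} frames on {backend}"

if __name__ == "__main__":
    snapshot = LoadSnapshot(cpu_util=0.9, gpu_mem_free=2.0)
    backend = choose_backend(snapshot, frame_batch_gb=1.5)
    print(process_batch(range(64), backend))
```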


International Journal of Distributed Sensor Networks | 2015

A genetic-algorithm-based approach for task migration in pervasive clouds

Weishan Zhang; Shouchao Tan; Qinghua Lu; Xin Liu; Wenjuan Gong

Pervasive computing is converging with cloud computing into an emerging paradigm, pervasive cloud computing. Through task migration, users can run their applications or tasks in a pervasive cloud environment to gain better execution efficiency and performance by leveraging its powerful computing and storage capacities. Migration decisions may need to balance a number of conflicting objectives, such as lower energy consumption and quicker response, in order to find an optimal migration path. In this paper, we propose a genetic-algorithm (GA) based approach, which is effective for such multiobjective optimization problems. Preliminary evaluations of the proposed approach using one of the classical genetic algorithms show quite promising results. We conclude that GAs can be used for migration decision making in pervasive clouds.
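The sketch below is a minimal genetic algorithm for this kind of migration decision: each gene assigns a task to a node, and a weighted fitness trades off energy consumption against response time. The node costs, weights, and operators are assumptions rather than the paper's encoding.

```python
# Minimal genetic-algorithm sketch for task-migration decisions: each gene
# assigns a task to a node; fitness trades off energy use against response
# time with fixed weights. Node data, weights, and operators are assumptions.
import random

random.seed(0)
NODES = [  # (energy cost per task, latency per task) -- assumed values
    (1.0, 0.2), (0.6, 0.8), (0.3, 1.5), (0.8, 0.4),
]
N_TASKS, POP, GENS = 10, 30, 60

def fitness(plan):
    energy = sum(NODES[n][0] for n in plan)
    latency = max(sum(NODES[n][1] for n in plan if n == node) for node in set(plan))
    return 0.5 * energy + 0.5 * latency        # lower is better

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(plan, rate=0.1):
    return [random.randrange(len(NODES)) if random.random() < rate else g for g in plan]

pop = [[random.randrange(len(NODES)) for _ in range(N_TASKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    parents = pop[:POP // 2]                   # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = min(pop, key=fitness)
print("best migration plan:", best, "fitness:", round(fitness(best), 3))
```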


Articulated Motion and Deformable Objects | 2012

A new image dataset on human interactions

Wenjuan Gong; Jordi Gonzàlez; João Manuel R. S. Tavares; F. Xavier Roca

This article describes a new still-image dataset dedicated to interactions between people. Human action recognition from still images has been a hot topic recently, but most existing datasets contain actions performed by a single person, such as running, walking, riding a bike, or phoning, with no interactions between people in one image. The dataset collected in this paper concentrates on interactions between two people, aiming to open up this topic within the research area of action recognition from still images.


International Conference on Computer Vision | 2011

On the effect of temporal information on monocular 3D human pose estimation

Jürgen Brauer; Wenjuan Gong; Jordi Gonzàlez; Michael Arens

We address the task of estimating 3D human poses from monocular camera sequences. Many works make use of multiple consecutive frames to estimate the 3D pose in a frame. Although such an approach should ease the pose estimation task substantially, since multiple consecutive frames in principle allow 2D projection ambiguities to be resolved, it has not yet been investigated systematically how much 3D pose estimates improve when multiple consecutive frames are used instead of single-frame information. In this paper we analyze the difference in quality of 3D pose estimates based on different numbers of consecutive frames from which 2D pose estimates are available. We validate the use of temporal information on the two major approaches to human pose estimation: modeling and learning. Our experiments show that both learning and modeling approaches benefit from multiple frames as opposed to single-frame input, but that the benefit is small when the 2D pose estimates are already highly precise.
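The sketch below mimics the shape of such an experiment on synthetic data: regress a frame's 3D pose from a window of k consecutive 2D poses and compare errors as k grows. Ridge regression stands in for the learning approach, and all sizes and the generated sequences are assumptions.

```python
# Minimal sketch of the kind of experiment described: predict a frame's 3-D
# pose from a window of k consecutive 2-D poses and see how the error changes
# with k. Ridge regression and synthetic sequences stand in for the paper's
# learning approach; all sizes are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, J = 1000, 15
poses_3d = np.cumsum(rng.standard_normal((T, J * 3)) * 0.1, axis=0)   # smooth motion
proj = rng.standard_normal((J * 3, J * 2))
poses_2d = poses_3d @ proj + 0.05 * rng.standard_normal((T, J * 2))

def windowed_error(k):
    """Train on windows of k consecutive 2-D poses, report held-out 3-D error."""
    X = np.array([poses_2d[t - k + 1:t + 1].ravel() for t in range(k - 1, T)])
    y = poses_3d[k - 1:]
    split = 800
    model = Ridge(alpha=1.0).fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    return np.linalg.norm((pred - y[split:]).reshape(-1, J, 3), axis=-1).mean()

for k in (1, 3, 5):
    print(f"window of {k} frame(s): mean joint error {windowed_error(k):.3f}")
```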


International Journal of Distributed Sensor Networks | 2015

Enhanced asymmetric bilinear model for face recognition

Wenjuan Gong; Weishan Zhang; Jordi Gonzàlez; Yan Ren; Zhen Li

Bilinear models have been successfully applied to separate two factors, for example pose variation and identity, in face recognition problems. The asymmetric model is a type of bilinear model that describes a system in the most concise way, yet few works have explored the application of asymmetric bilinear models to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with a nearest-neighbor method and optimize them for the test data. In addition, we update the factor model to be identified. We validate the proposed method on a designed data sample and the extended Yale B dataset. The experimental results show that the enhanced asymmetric model achieves promising recognition accuracy.
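The sketch below illustrates the asymmetric bilinear idea (identity factors shared across illumination-specific maps) together with a nearest-neighbor choice of the illumination factor for a test face, in the spirit of the paper's initialization. It is not the authors' algorithm; the SVD-based fit, all shapes, and the synthetic faces are assumptions.

```python
# Minimal sketch of the asymmetric bilinear idea (identity factors shared
# across illuminations), not the authors' exact algorithm. The nearest-neighbor
# step mirrors, in spirit, the paper's initialization for a test image.
# Training faces are synthetic; all shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, C, d, J = 4, 10, 64, 6          # illuminations, identities, pixels, factor dim

# Synthetic faces: identity content passed through illumination-specific maps.
content = rng.standard_normal((J, C))
style_maps = rng.standard_normal((S, d, J))
faces = np.array([[style_maps[s] @ content[:, c] for c in range(C)] for s in range(S)])

# Fit the asymmetric bilinear model: stack styles row-wise and truncate an SVD.
Y = faces.transpose(0, 2, 1).reshape(S * d, C)           # (S*d, C)
U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
A = (U[:, :J] * sv[:J]).reshape(S, d, J)                 # per-illumination maps
B = Vt[:J]                                               # identity factors, (J, C)

# Recognize a test face: nearest-neighbor choice of illumination, then solve
# for its identity factor and match against the learned identity factors.
test = style_maps[2] @ content[:, 7] + 0.01 * rng.standard_normal(d)
s_star = min(range(S), key=lambda s: np.linalg.norm(faces[s] - test, axis=1).min())
b, *_ = np.linalg.lstsq(A[s_star], test, rcond=None)
identity = np.argmin(np.linalg.norm(B - b[:, None], axis=0))
print("chosen illumination:", s_star, "predicted identity:", identity)
```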

Collaboration


Wenjuan Gong's top co-authors and their institutions.

Top Co-Authors

Jordi Gonzàlez, Autonomous University of Barcelona
Weishan Zhang, China University of Petroleum
F. Xavier Roca, Autonomous University of Barcelona
Francesc Xavier Roca, Autonomous University of Barcelona
Qinghua Lu, China University of Petroleum
Pengcheng Duan, China University of Petroleum
Yan Ren, China University of Petroleum
Marco Pedersoli, Katholieke Universiteit Leuven
Adela Barbulescu, Autonomous University of Barcelona