Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peiyi Shen is active.

Publication


Featured research published by Peiyi Shen.


International Journal of Communication Systems | 2014

GI/Geom/1 queue based on communication model for mesh networks

W. Wei; Qingzheng Xu; Lei Wang; Xinhong Hei; Peiyi Shen; W. Shi; L. Shan

In a mesh network architecture, mobile clients should be permitted to access the network, whereas in a mesh network environment the main traffic flows usually terminate at the conventional wired network. So-called gateway nodes link directly to traditional Ethernet, and through these mesh nodes the remaining nodes can reach data sources on the Ethernet. In wireless mesh networks (WMNs), the number of gateways is limited, and the packet-processing capability of the fixed wireless nodes is also limited. Consequently, the traffic loads on mesh nodes strongly affect network performance. In this paper, we propose a queuing system based on a traffic model for WMNs. Building on intelligent adaptiveness, the model takes the influence of interference into account. Using this intelligent model, the nodes between the gateways and the ordinary nodes are modeled as service stations with unbounded capacity according to their largest hop count from the gateways, whereas the other nodes are modeled as service stations with finite capacity. We then analyze the network throughput, mean packet loss ratio, and packet delay at each hop with the proposed adaptive model. Simulations show that the proposed intelligent, adaptive model accurately captures the characteristics of traffic loads in WMNs.
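As an informal illustration of the multi-hop, finite-capacity queuing behaviour the abstract describes, the Python sketch below runs a slotted simulation of a chain of mesh nodes with geometric service times and reports packet loss and mean delay. It is not the paper's analytical GI/Geom/1 model; the function simulate_chain, the hop count, buffer size, and arrival and service probabilities are hypothetical choices made only for illustration.

# Illustrative sketch only: a slotted simulation of a chain of mesh nodes,
# each modeled as a queue with a finite buffer, to estimate per-hop packet
# loss and delay. This is NOT the paper's GI/Geom/1 analysis; arrival rate,
# service probability, buffer size, and hop count are hypothetical values.
import random
from collections import deque

def simulate_chain(hops=4, buffer_size=20, arrival_p=0.3, service_p=0.4,
                   slots=100_000, seed=0):
    random.seed(seed)
    queues = [deque() for _ in range(hops)]
    arrived = dropped = 0
    delays = []
    for t in range(slots):
        # A new packet arrives at the edge node with probability arrival_p.
        if random.random() < arrival_p:
            arrived += 1
            if len(queues[0]) < buffer_size:
                queues[0].append(t)          # store the arrival slot to compute delay
            else:
                dropped += 1
        # Each hop forwards one head-of-line packet with probability service_p
        # (geometric service times), processed from the gateway side backwards.
        for h in reversed(range(hops)):
            if queues[h] and random.random() < service_p:
                pkt = queues[h].popleft()
                if h + 1 == hops:
                    delays.append(t - pkt)   # packet reaches the gateway
                elif len(queues[h + 1]) < buffer_size:
                    queues[h + 1].append(pkt)
                else:
                    dropped += 1
    loss_ratio = dropped / max(arrived, 1)
    mean_delay = sum(delays) / max(len(delays), 1)
    return loss_ratio, mean_delay

if __name__ == "__main__":
    print(simulate_chain())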


IEEE Transactions on Visualization and Computer Graphics | 2007

RTcams: A New Perspective on Nonphotorealistic Rendering from Photographs

Peter M. Hall; John P. Collomosse; Yi-Zhe Song; Peiyi Shen; Chuan Li

We introduce a simple but versatile camera model that we call the rational tensor camera (RTcam). RTcams are well principled mathematically and provably subsume several important contemporary camera models in both computer graphics and vision; their generality is one contribution. They can be used alone or compounded to produce more complicated visual effects. In this paper, we apply RTcams to generate synthetic artwork with novel perspective effects from real photographs. Existing nonphotorealistic rendering from photographs (NPRP) is constrained to the projection inherent in the source photograph, which is most often linear. RTcams lift this restriction and so contribute to NPRP via multiperspective projection. This paper describes RTcams, compares them to contemporary alternatives, and discusses how to control them in practice. Illustrative examples are provided throughout.
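As a rough illustration of the idea of a rational camera, the sketch below projects points through a rational quadratic camera in which each image coordinate is a quadratic form of the homogeneous world point. Whether this matches the exact tensor layout of the published RTcam formulation is an assumption, and the example matrices Q are hypothetical.

# Illustrative sketch only: a rational quadratic camera in the spirit of
# RTcams. Each output coordinate is a quadratic form of the homogeneous
# input point, and the result is dehomogenized by the last coordinate.
# The exact tensor layout of the published RTcam model may differ; the
# matrices Q below are hypothetical.
import numpy as np

def rational_quadratic_project(Q, points):
    """Q: array of shape (3, 4, 4); points: (N, 3) world points."""
    X = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous (N, 4)
    # y_i = x^T Q_i x for each output coordinate i
    y = np.einsum('nj,ijk,nk->ni', X, Q, X)                  # (N, 3)
    return y[:, :2] / y[:, 2:3]                              # dehomogenize

# A linear pinhole camera is the special case where each Q_i only has terms
# that multiply the homogeneous coordinate (the last row/column).
if __name__ == "__main__":
    f = 500.0
    Q = np.zeros((3, 4, 4))
    Q[0, 0, 3] = f          # y0 = f * x  (times w)
    Q[1, 1, 3] = f          # y1 = f * y
    Q[2, 2, 3] = 1.0        # y2 = z
    pts = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 4.0]])
    print(rational_quadratic_project(Q, pts))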


international conference on pattern recognition | 2016

Large-scale Isolated Gesture Recognition using pyramidal 3D convolutional networks

Guangming Zhu; Liang Zhang; Lin Mei; Jie Shao; Juan Song; Peiyi Shen

Human gesture recognition is one of the central research fields of computer vision, and effective gesture recognition remains challenging. In this paper, we present a pyramidal 3D convolutional network framework for large-scale isolated human gesture recognition. 3D convolutional networks are utilized to learn spatiotemporal features from gesture video files. A pyramid input is proposed to preserve the multi-scale contextual information of gestures, and each pyramid segment is uniformly sampled with temporal jitter. Pyramid fusion layers are inserted into the 3D convolutional networks to fuse the features of the pyramid input. This strategy allows the networks to recognize human gestures from entire video files rather than from independently segmented clips. We present experimental results on the 2016 ChaLearn LAP Large-scale Isolated Gesture Recognition Challenge, in which we placed third.
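A minimal sketch of the pyramid-input idea, assuming a simple scheme in which each pyramid level splits the clip into more segments and each segment is uniformly sampled with temporal jitter; the level count, frames per segment, and jitter range are hypothetical, not the paper's configuration.

# Illustrative sketch only: building a temporal pyramid of frame indices
# from a gesture clip, with uniform sampling plus temporal jitter inside
# each segment. Each segment's frames would feed a 3D-CNN branch, with
# pyramid fusion layers merging their features downstream.
import random

def pyramid_sample(num_frames, levels=3, frames_per_segment=8, jitter=True, seed=None):
    rng = random.Random(seed)
    pyramid = []
    for level in range(levels):
        segments = 2 ** level                      # 1, 2, 4, ... segments per level
        seg_len = num_frames / segments
        for s in range(segments):
            start = s * seg_len
            step = seg_len / frames_per_segment
            idx = []
            for k in range(frames_per_segment):
                pos = start + k * step
                if jitter:
                    pos += rng.uniform(0, step)    # temporal jitter within the stride
                idx.append(min(int(pos), num_frames - 1))
            pyramid.append(idx)
    return pyramid                                  # list of index lists, one per segment

if __name__ == "__main__":
    for seg in pyramid_sample(120, seed=0):
        print(seg)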


Signal Processing-image Communication | 2016

Human action recognition using multi-layer codebooks of key poses and atomic motions

Guangming Zhu; Liang Zhang; Peiyi Shen; Juan Song

Considering that a human action can be intuitively regarded as a sequence of key poses and atomic motions in a particular order, this paper proposes a human action recognition method using multi-layer codebooks of key poses and atomic motions. Inspired by the dynamics models of human joints, normalized relative orientations are computed as features for each limb of the human body. In order to extract key poses and atomic motions precisely, feature sequences are dynamically segmented into pose feature segments and motion feature segments based on the potential differences of the feature sequences. Multi-layer codebooks of each human action are constructed from the key poses extracted from the pose feature segments and the atomic motions extracted from the motion feature segments associated with each pair of key poses. The multi-layer codebooks represent the action patterns of each human action and can be used to recognize human actions with the proposed pattern-matching method. Three classification methods are employed for action recognition based on the multi-layer codebooks. Two public action datasets, the CAD-60 and MSRC-12 datasets, are used to demonstrate the advantages of the proposed method. The experimental results show that the proposed method obtains comparable or better performance than the state-of-the-art methods.

Highlights: Human actions are modeled as a sequence of key poses and atomic motions. Normalized relative orientations are computed as features for each limb. Feature sequences are dynamically segmented into pose and motion feature segments. Multi-layer codebooks are constructed from the extracted key poses and atomic motions. A pattern-matching method is proposed and integrated with traditional classifiers.
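A minimal sketch of the codebook-construction step, assuming k-means clustering over mean-pooled segment features; the segmentation by potential differences is taken as given, and the feature dimensionality and codebook sizes are hypothetical.

# Illustrative sketch only: clustering pose feature segments into a key-pose
# codebook and motion feature segments into an atomic-motion codebook with
# k-means. The segmentation step is assumed to have been done already; the
# codebook sizes are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def build_codebooks(pose_segments, motion_segments, n_poses=10, n_motions=20, seed=0):
    """pose_segments / motion_segments: lists of (length, dim) feature arrays."""
    # Summarize each segment by its mean feature vector before clustering.
    pose_vecs = np.vstack([seg.mean(axis=0) for seg in pose_segments])
    motion_vecs = np.vstack([seg.mean(axis=0) for seg in motion_segments])
    key_poses = KMeans(n_clusters=n_poses, random_state=seed, n_init=10).fit(pose_vecs)
    atomic_motions = KMeans(n_clusters=n_motions, random_state=seed, n_init=10).fit(motion_vecs)
    return key_poses, atomic_motions

def encode_action(pose_segments, key_poses):
    """Encode an action as the sequence of key-pose labels of its pose segments."""
    vecs = np.vstack([seg.mean(axis=0) for seg in pose_segments])
    return key_poses.predict(vecs)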


international conference on image processing | 2016

Depth enhancement with improved exemplar-based inpainting and joint trilateral guided filtering

Liang Zhang; Peiyi Shen; Shu'e Zhang; Juan Song; Guangming Zhu

Depth images captured by sensors such as Kinect contain heavy noise and large numbers of holes, which severely hinders the use of depth information. In this paper, a novel depth enhancement algorithm with improved exemplar-based inpainting and joint trilateral guided filtering is proposed. The improved exemplar-based inpainting method is applied to fill the holes in the depth images, with a level-set distance component introduced into the priority evaluation function. A joint trilateral guided filter is then adopted to denoise and smooth the inpainted results. Experimental results reveal that the proposed algorithm achieves better enhancement than existing methods in terms of both subjective and objective quality measurements.
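A minimal sketch of a joint trilateral filter in the spirit of the smoothing step, combining a spatial kernel with range kernels on a grayscale guidance image and on the depth values themselves; the kernel radius and sigmas are hypothetical, and the paper's guided-filter formulation may differ in detail.

# Illustrative sketch only: a naive joint trilateral filter that smooths an
# inpainted depth map using a spatial kernel, a range kernel on the color
# guidance image, and a range kernel on the depth values themselves.
import numpy as np

def joint_trilateral_filter(depth, guide, radius=3, sigma_s=3.0,
                            sigma_g=10.0, sigma_d=10.0):
    """depth: (H, W) float depth map; guide: (H, W) grayscale guidance image."""
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = radius
    d = np.pad(depth.astype(np.float64), pad, mode='edge')
    g = np.pad(guide.astype(np.float64), pad, mode='edge')
    for i in range(H):
        for j in range(W):
            dw = d[i:i + 2 * pad + 1, j:j + 2 * pad + 1]   # depth window
            gw = g[i:i + 2 * pad + 1, j:j + 2 * pad + 1]   # guidance window
            w = (spatial
                 * np.exp(-((gw - guide[i, j]) ** 2) / (2 * sigma_g ** 2))
                 * np.exp(-((dw - depth[i, j]) ** 2) / (2 * sigma_d ** 2)))
            out[i, j] = (w * dw).sum() / w.sum()
    return out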


robotics and biomimetics | 2015

Human action recognition using key poses and atomic motions

Guangming Zhu; Liang Zhang; Peiyi Shen; Juan Song; Lukui Zhi; Kang Yi

Human action recognition is a fundamental capability for personal assistive robots to observe and automatically react to humans' daily activities. Generally, a human activity can be intuitively considered as a sequence of key poses and atomic motions. Thus, a human action recognition algorithm based on key poses and atomic motions is proposed in this paper. Firstly, the normalized relative orientations of human joints are computed as skeletal features. Secondly, the skeletal feature sequences are segmented into static segments and dynamic segments based on kinetic energy. Then, the codebook of key poses is constructed from the static segments using clustering algorithms, and the codebook of atomic motions is constructed from the dynamic segments associated with each pair of key poses. Lastly, activity patterns are constructed, and the Naïve Bayes Nearest Neighbor algorithm is utilized to classify human activities by matching the training and testing activity patterns. The Cornell CAD-60 dataset is used to test the proposed algorithm. The experimental results show that the proposed algorithm obtains better performance than state-of-the-art algorithms.
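A minimal sketch of the kinetic-energy segmentation step, assuming a simple threshold on frame-to-frame feature energy to split a sequence into static (pose) and dynamic (motion) segments; the threshold is a hypothetical value.

# Illustrative sketch only: splitting a skeletal feature sequence into static
# (pose) and dynamic (motion) segments by thresholding frame-to-frame kinetic
# energy. The threshold is hypothetical.
import numpy as np

def segment_by_kinetic_energy(features, threshold=0.05):
    """features: (T, D) per-frame skeletal features. Returns a list of (kind, start, end)."""
    energy = np.sum(np.diff(features, axis=0) ** 2, axis=1)   # per-frame motion energy
    if energy.size == 0:
        return []
    is_static = energy < threshold
    segments, start = [], 0
    for t in range(1, len(is_static)):
        if is_static[t] != is_static[t - 1]:
            segments.append(("static" if is_static[start] else "dynamic", start, t))
            start = t
    segments.append(("static" if is_static[start] else "dynamic", start, len(is_static)))
    return segments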


international conference on information and automation | 2010

A license plate recognition system based on Tamura texture in complex conditions

Xiangdong Zhang; Peiyi Shen; Jinrong Gao; Dp Qi; Liang Zhang; Ax Xue; Xl Liang; X Chen

License plate location plays an important role in vehicle license plate recognition. This paper presents a real-time method for license plate location. First, run lengths are used to find candidate horizontal regions of the license plate. Second, the license plates are located exactly using Tamura texture features. In the license plate location experiments, 1400 images were taken against various backgrounds and under complex conditions. The experimental results show the robustness and efficiency of the presented method.
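A minimal sketch of the run-length stage, assuming candidate plate rows are those with many black/white transitions in a binarized edge image; the run-count threshold and band merging are hypothetical, and the Tamura texture verification step is not reproduced here.

# Illustrative sketch only: finding candidate horizontal bands for a license
# plate by counting black/white runs per row of a binarized edge image. Plate
# rows tend to have many short runs because of the character strokes.
import numpy as np

def candidate_rows(binary_image, min_runs=20):
    """binary_image: (H, W) array of 0/1 values. Returns row indices of candidate bands."""
    transitions = np.abs(np.diff(binary_image.astype(np.int32), axis=1))
    runs_per_row = transitions.sum(axis=1)            # number of 0/1 changes per row
    return np.where(runs_per_row >= min_runs)[0]

def merge_rows(rows, gap=3):
    """Group consecutive candidate rows (allowing small gaps) into bands."""
    bands, start, prev = [], None, None
    for r in rows:
        if start is None:
            start = prev = r
        elif r - prev <= gap:
            prev = r
        else:
            bands.append((start, prev))
            start = prev = r
    if start is not None:
        bands.append((start, prev))
    return bands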


IEEE Access | 2017

Multimodal Gesture Recognition Using 3-D Convolution and Convolutional LSTM

Guangming Zhu; Liang Zhang; Peiyi Shen; Juan Song

Gesture recognition aims to recognize meaningful movements of human bodies and is of utmost importance in intelligent human–computer/robot interaction. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long short-term memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through a 3-D convolutional neural network, and then learns long-term spatiotemporal features with convolutional LSTM networks built on the extracted short-term features. In addition, fine-tuning across multimodal data is evaluated, and we find that it can serve as an optional technique to prevent overfitting when no pre-trained models exist. The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method obtains state-of-the-art recognition accuracy (51.02% on the validation set of IsoGD and 98.89% on SKIG).
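A minimal sketch of the 3-D convolution plus convolutional LSTM pipeline, with a small Conv3d stem feeding a single ConvLSTM cell unrolled over time; the channel sizes, kernel sizes, depth, and the 249-class output head are illustrative choices, not the published architecture.

# Illustrative sketch only: a 3D-convolution stem (short-term spatiotemporal
# features) followed by a convolutional LSTM cell (long-term features), much
# smaller than a real model.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

class Gesture3DConvLSTM(nn.Module):
    def __init__(self, in_ch=3, feat_ch=16, hid_ch=32, num_classes=249):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.head = nn.Linear(hid_ch, num_classes)

    def forward(self, clip):                      # clip: (B, C, T, H, W)
        feat = self.conv3d(clip)                  # (B, feat_ch, T, H', W')
        B, C, T, H, W = feat.shape
        h = feat.new_zeros(B, self.convlstm.hid_ch, H, W)
        c = feat.new_zeros(B, self.convlstm.hid_ch, H, W)
        for t in range(T):                        # long-term modelling over time steps
            h, c = self.convlstm(feat[:, :, t], (h, c))
        return self.head(h.mean(dim=(2, 3)))      # spatial average, then classify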


Neurocomputing | 2012

Robust visual tracking using structural region hierarchy and graph matching

Yi-Zhe Song; Chuan Li; Liang Wang; Peter M. Hall; Peiyi Shen

Visual tracking aims to match objects of interest in consecutive video frames. This paper proposes a novel and robust algorithm to address the problem of object tracking. To this end, we investigate the fusion of state-of-the-art image segmentation hierarchies and graph matching. More specifically, (i) we represent the object to be tracked using a hierarchy of regions, each of which is described with a combined feature set of SIFT descriptors and color histograms; (ii) we formulate the tracking process as a graph matching problem, which is solved by minimizing an energy function incorporating appearance and geometry contexts; and (iii) more importantly, an effective graph updating mechanism is proposed to adapt to the object changes over time for ensuring the tracking robustness. Experiments are carried out on several challenging sequences and results show that our method performs well in terms of object tracking, even in the presence of variations of scale and illumination, moving camera, occlusion, and background clutter.
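A minimal sketch of the matching idea, assuming a flat (non-hierarchical) formulation in which an appearance cost between region histograms and a geometry cost between region centroids are combined and solved as a linear assignment; the paper minimizes a richer energy over a segmentation hierarchy and updates the graph over time, so this is a simplification.

# Illustrative sketch only: matching regions between consecutive frames by
# combining an appearance cost (histogram distance) and a geometry cost
# (centroid displacement), solved as a linear assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions(hist_prev, cent_prev, hist_curr, cent_curr, geom_weight=0.5):
    """hist_*: (N, B) color histograms; cent_*: (N, 2) region centroids."""
    # Appearance term: chi-squared-like distance between normalized histograms.
    diff = hist_prev[:, None, :] - hist_curr[None, :, :]
    summ = hist_prev[:, None, :] + hist_curr[None, :, :] + 1e-8
    appearance = 0.5 * (diff ** 2 / summ).sum(axis=2)            # (N_prev, N_curr)
    # Geometry term: how far each region would have to move.
    geometry = np.linalg.norm(cent_prev[:, None, :] - cent_curr[None, :, :], axis=2)
    cost = appearance + geom_weight * geometry / (geometry.max() + 1e-8)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))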


international conference on information and automation | 2010

License plate location using AdaBoost Algorithm

Xiangdong Zhang; Peiyi Shen; Yuli Xiao; Bo Li; Yang Hu; Dongpo Qi; Xiao Xiao; Liang Zhang

License Plate Recognition (LPR) is an important research topic in computer vision for intelligent transportation systems (ITS), and license plate location is the key step of LPR. Although numerous techniques have been developed, most approaches work only under restricted conditions such as fixed illumination, a limited set of vehicle license plates, and simple backgrounds. This paper uses the AdaBoost algorithm to build classifiers based on various features. By combining classifiers that use different features, we obtain a cascade classifier, which consists of many layers of strong classifiers and is used to locate the license plate. The training speed of the traditional AdaBoost algorithm is slow; to increase it, different features such as derivative and texture features are included. The classifiers based on the selected features decrease the complexity of the system, and an encouraging training speed is achieved in the experiments. Compared with other LPR methods, for instance color-based processing methods, our algorithm can detect license plates with accurate sizes and positions against more complex backgrounds.
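A minimal sketch of a cascade of AdaBoost stages over different feature channels, assuming a window is accepted only if every stage accepts it; the feature extractors, stage order, and thresholds are hypothetical, and a real cascade would train later stages only on the windows passed by earlier ones.

# Illustrative sketch only: a cascade of AdaBoost classifiers over different
# hypothetical feature channels (e.g. derivative and texture features). A
# window is accepted as a plate candidate only if every stage accepts it.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class CascadeDetector:
    def __init__(self, feature_extractors, n_estimators=50):
        # One AdaBoost stage per feature channel.
        self.extractors = feature_extractors
        self.stages = [AdaBoostClassifier(n_estimators=n_estimators)
                       for _ in feature_extractors]

    def fit(self, windows, labels):
        # Simplification: every stage is trained on all windows here; a real
        # cascade would train later stages on windows passed by earlier ones.
        for extract, stage in zip(self.extractors, self.stages):
            stage.fit(np.array([extract(w) for w in windows]), labels)
        return self

    def predict(self, window):
        # Early rejection: a negative at any stage stops the cascade.
        for extract, stage in zip(self.extractors, self.stages):
            if stage.predict(np.array([extract(window)]))[0] != 1:
                return 0
        return 1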

Collaboration


Dive into Peiyi Shen's collaborations.

Top Co-Authors

Wei Wei

Xi'an Jiaotong University


Mohammed Bennamoun

University of Western Australia


Syed Afaq Ali Shah

University of Western Australia


Yi-Zhe Song

Queen Mary University of London
