Ying Weng
Bangor University
Publications
Featured research published by Ying Weng.
Image and Vision Computing | 2006
Jianmin Jiang; Ying Weng; PengJie Li
Abstract As most MPEG and JPEG compression standards are DCT-based, we propose a simple, low-cost and fast algorithm to extract dominant colour features directly in the DCT domain, without full decompression to access the pixel data. As dominant colour is one of the colour features used in constructing MPEG-7 dominant colour descriptors, the proposed algorithm provides a useful technique for already compressed videos and images where MPEG-7 dominant colour descriptors are needed. While the proposed algorithm offers advantages in computing efficiency, i.e. eliminating the need for the IDCT on compressed videos and images, extensive experiments also show that it achieves competitive performance in extracting dominant colour features compared with the pixel-domain extraction described in MPEG-7.
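The key observation behind DCT-domain extraction is that the DC coefficient of each 8×8 block already encodes the block mean, so a coarse colour statistic needs no IDCT at all. The following is a minimal sketch of that idea, assuming an orthonormal 2-D DCT (DC = 8 × block mean) and a simple histogram vote; the helper name and binning are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def dominant_colour_from_dc(dc_coeffs, n_bins=8):
    """Approximate the dominant value of one colour channel from the
    DC coefficients of its 8x8 DCT blocks (hypothetical helper).

    For an orthonormal 2-D DCT, DC = 8 * block mean, so each block's
    average is recovered with one division instead of a full IDCT.
    """
    block_means = np.asarray(dc_coeffs, dtype=float) / 8.0
    hist, edges = np.histogram(block_means, bins=n_bins, range=(0, 256))
    k = int(np.argmax(hist))
    # Centre of the most populated bin stands in for the dominant colour.
    return 0.5 * (edges[k] + edges[k + 1])

# Blocks mostly averaging ~128, with two outlier blocks:
dc = [1024, 1024, 1020, 1030, 200, 1900]   # DC = 8 * block mean
print(dominant_colour_from_dc(dc))          # -> 144.0 (bin covering ~128)
```

A full dominant-colour descriptor would cluster these per-block means (e.g. over all three channels) rather than take a single histogram peak, but the cost saving over pixel-domain extraction comes from exactly this step: no IDCT is ever performed.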
IEEE Transactions on Broadcasting | 2011
Jianmin Jiang; J. Kohler; Carmen MacWilliams; J. Zaletelj; G. Guntner; H. Horstmann; Jinchang Ren; J. Loffler; Ying Weng
In this paper, we report recent research activities under the integrated project Live Staging of Media Events (LIVE), funded under the European Framework 6 Programme, and illustrate how a new LIVE TV broadcasting and content production concept can be introduced to improve existing TV broadcasting services. In comparison with existing TV content production technologies, we show that the LIVE TV broadcasting format achieves a range of significant advantages: (i) real-time interaction between the TV content production team and viewers, ensuring the best possible entertainment experience and allowing viewers not only to view but also to participate, influence, and control; (ii) the traditional role of the TV content director changes to that of a TV content conductor, where live TV broadcasting is conducted in real time and existing content materials are conducted into multiple streams of TV broadcasting, providing a variety of choices that let audiences follow their own preferences in watching TV programs; (iii) the introduction of significant intelligence, such as content analysis and behavior analysis, to further improve the quality of service based on the fundamental concept of the LIVE project. When tested in field trials by ORF during the 2008 Olympic Games, the developed LIVE system showed great potential and significant promise to improve TV broadcasting services, not only in terms of interaction formats, but also in viewing experiences and in the style of content production and preparation.
IEEE Transactions on Vehicular Technology | 2016
Gaofeng Pan; Chaoqing Tang; X. V. Zhang; Tingting Li; Ying Weng; Yunfei Chen
This paper investigates the performance of secure communications over non-small-scale fading channels. Specifically, considering three cases in which the main and eavesdropper channels experience independent lognormal fading, correlated lognormal fading, or independent composite fading, we study the average secrecy capacity and secrecy outage [including the probability of non-zero secrecy capacity (PNSC) and the secure outage probability (SOP)] under these channel conditions. Approximated closed-form expressions for the average secrecy capacity, PNSC, and SOP are derived for the three types of non-small-scale fading channels. The accuracy of our performance analysis is verified by simulation.
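For the independent-lognormal case, the PNSC is simply P(γ_M > γ_E), where γ_M and γ_E are the main and eavesdropper SNRs. A quick Monte Carlo check of that quantity can be sketched as follows; this is an illustrative simulation in the spirit of the paper's verification step, not its closed-form derivation, and the parameter values below are assumptions.

```python
import numpy as np

def pnsc_mc(mu_m, sigma_m, mu_e, sigma_e, n=200_000, seed=0):
    """Monte Carlo estimate of the probability of non-zero secrecy
    capacity, P(gamma_M > gamma_E), for independent lognormal main
    and eavesdropper SNRs."""
    rng = np.random.default_rng(seed)
    g_m = rng.lognormal(mu_m, sigma_m, n)   # main-channel SNR draws
    g_e = rng.lognormal(mu_e, sigma_e, n)   # eavesdropper SNR draws
    return float(np.mean(g_m > g_e))

# Main channel 1 neper stronger on average, equal spreads:
print(pnsc_mc(1.0, 1.0, 0.0, 1.0))
```

Because ln γ_M − ln γ_E is Gaussian here, the exact answer is Φ((μ_M − μ_E)/√(σ_M² + σ_E²)) ≈ 0.760 for these parameters, which the simulation should approach as n grows.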
Cyberworlds | 2009
Aamer Mohamed; F. Khellfi; Ying Weng; Jianmin Jiang; Stan Ipson
This paper proposes a new, simple method of discrete cosine transform (DCT) feature extraction that accelerates image retrieval and decreases the storage it requires. Image features are accessed and extracted directly from the JPEG compressed domain. The method constructs a feature vector as a quantized histogram of partial DCT coefficients, counting the number of blocks whose coefficients fall into the same quantization bin across the image. Each database image and query image is divided into non-overlapping 8×8 pixel blocks, each of which is associated with a quantized histogram feature vector derived directly from the DCT. Users can select any image as the query. The retrieved images are those in the database that bear the closest resemblance to the query image, with similarity ranked according to measures computed by the Euclidean distance. The experimental results are significant and promising, and show that our approach can easily identify main objects while, to some extent, reducing the influence of the background in the image, thereby improving retrieval performance.
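The retrieval pipeline described above has two parts: a per-image histogram of quantized DCT coefficients, and Euclidean ranking against the database. A minimal sketch, assuming a fixed coefficient range and bin count (both our assumptions, not the paper's exact quantization):

```python
import numpy as np

def dct_histogram_feature(coeffs, n_bins=16, lo=-1024, hi=1024):
    """Quantize one partial DCT coefficient taken from every 8x8 block
    of an image into a fixed histogram, normalised by block count."""
    hist, _ = np.histogram(coeffs, bins=n_bins, range=(lo, hi))
    return hist / max(len(coeffs), 1)

def rank_by_distance(query_feat, db_feats):
    """Return database indices sorted by Euclidean distance to the query."""
    d = np.linalg.norm(np.asarray(db_feats) - query_feat, axis=1)
    return list(np.argsort(d))

q = dct_histogram_feature([0, 10, -10])          # query image's coefficients
a = dct_histogram_feature([0, 10, -10])          # identical image
b = dct_histogram_feature([900, 900, 900])       # very different image
print(rank_by_distance(q, [b, a]))               # -> [1, 0]: a ranks first
```

Because the coefficients come straight from the entropy-decoded JPEG stream, no IDCT is needed anywhere in this pipeline, which is where the claimed speed and storage savings originate.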
International Conference on Intelligent and Advanced Systems | 2007
Aamer Mohamed; Ying Weng; Stan Ipson; Jianmin Jiang
Face detection is one of the challenging problems in image processing. A novel face detection system is presented in this paper. The approach relies on skin colour, with features extracted via the two-dimensional discrete cosine transform (DCT) and classified by neural networks; faces are detected using skin colour through the DCT coefficients of the Cb and Cr feature vectors. The system first locates skin colour, the main facial cue for detection, and then examines each skin-coloured face candidate with a neural network, which learns facial features in order to classify whether the original image contains a face. The processing stage is based on normalization and the DCT, and the final classification is based on the neural network. Experiments on upright frontal colour face images from the Internet show an excellent detection rate.
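The first stage, skin-colour candidate selection in the Cb/Cr planes, can be sketched as a simple threshold test. The ranges below are commonly quoted YCbCr skin-colour bounds, assumed here for illustration; the paper itself classifies DCT features of these planes with a trained neural network rather than fixed thresholds.

```python
import numpy as np

def skin_mask(cb, cr, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-coloured pixels from the Cb and Cr planes.

    A pixel is a skin candidate when both chroma components fall in
    the (assumed) skin ranges; luma is deliberately ignored so the
    test is robust to illumination changes.
    """
    cb = np.asarray(cb)
    cr = np.asarray(cr)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# One skin-like pixel (Cb=100, Cr=150) and one non-skin pixel (Cb=50):
print(skin_mask([100, 50], [150, 150]))   # -> [ True False]
```

In the full system, the regions this mask selects would then be normalized, DCT-transformed, and passed to the neural network for the face/non-face decision.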
international multi-conference on systems, signals and devices | 2008
Aamer Mohamed; Ying Weng; Jianmin Jiang; Stan Ipson
This paper proposes a robust scheme for face detection that uses a Gaussian mixture model to segment the image based on skin colour. After skin and non-skin face candidates are selected, features are extracted directly from the discrete cosine transform (DCT) coefficients computed from these candidates. Back-propagation neural networks are then used to train and classify faces based on the DCT feature coefficients in the Cb and Cr colour spaces. The scheme exploits skin colour information, the main cue for face detection: the DCT feature values representing the skin/non-skin face candidates obtained from the Gaussian mixture model are fed into the back-propagation neural networks to classify whether the original image contains a face. Experimental results show that the proposed scheme is reliable for face detection, and that pattern features are detected and classified accurately by the back-propagation neural networks.
IEEE Transactions on Circuits and Systems for Video Technology | 2004
Jianmin Jiang; Ying Weng
As existing video processing technology is primarily developed in the pixel domain, yet digital video is stored in compressed format, applying those techniques to compressed videos would require decompression. For discrete cosine transform (DCT)-based MPEG compressed videos, the computing cost of the standard row-by-row and column-by-column inverse DCT (IDCT) for a block of 8×8 elements is 4096 multiplications and 4032 additions, although practical implementations require only 1024 multiplications and 896 additions. In this paper, we propose a new algorithm to extract videos directly from the MPEG compressed (DCT) domain without a full IDCT, described in three extraction schemes: 1) video extraction in 2×2 blocks with four coefficients; 2) video extraction in 4×4 blocks with four DCT coefficients; and 3) video extraction in 4×4 blocks with nine DCT coefficients. The computing cost is only 8 additions and no multiplications for the first scheme, 2 multiplications and 28 additions for the second, and 47 additions (no multiplications) for the third. Extensive experiments were carried out, and the results reveal that: 1) the extracted video maintains competitive quality in terms of visual perception and inspection; and 2) the extracted videos preserve content well in comparison with fully decompressed ones in terms of histogram measurement. As a result, the proposed algorithm provides useful tools for bridging the gap between the pixel domain and the compressed domain, facilitating content analysis with low latency and high efficiency in applications such as surveillance video, interactive multimedia, and image processing.
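The first scheme, a 2×2 extraction from four coefficients with signed additions only, can be sketched as follows. The sign pattern reflects that the (0,1) and (1,0) DCT basis functions are positive on one half of the block and negative on the other, so signed sums of the four lowest coefficients approximate the four quadrant averages; the 1/8 normalisation (recovering orthonormal-DCT scaling) and the equal-weight treatment of F[1,1] are our assumptions, not the paper's exact derivation.

```python
import numpy as np

def extract_2x2(F):
    """Approximate a 2x2 downsampled version of an 8x8 pixel block
    from its four lowest-frequency DCT coefficients.

    Only 8 signed additions are needed, matching the spirit of the
    paper's first extraction scheme; the scaling is an assumption.
    """
    f00, f01, f10, f11 = F[0, 0], F[0, 1], F[1, 0], F[1, 1]
    out = np.array([[f00 + f01 + f10 + f11, f00 - f01 + f10 - f11],
                    [f00 + f01 - f10 - f11, f00 - f01 - f10 + f11]],
                   dtype=float)
    return out / 8.0

# Sanity check: a flat block (only the DC coefficient is non-zero)
# must extract to a flat 2x2 block at the block's mean value.
F = np.zeros((8, 8))
F[0, 0] = 800.0            # DC = 8 * mean, so mean = 100
print(extract_2x2(F))      # -> all entries 100.0
```

The 4×4 schemes follow the same pattern with more coefficients and finer sign/weight patterns, trading a few extra additions (and at most 2 multiplications) for sharper extracted frames.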
Australasian Telecommunication Networks and Applications Conference | 2014
Xv Zhang; Gaofeng Pan; Chaoqing Tang; Tingting Li; Ying Weng
In this paper, we analyze the performance of physical-layer security for the classic Wyner three-node model (including an eavesdropper) over independent/correlated lognormal fading channels. Considering the cases where the main and eavesdropper channels experience independent or correlated lognormal fading, we study the average secrecy capacity and secrecy outage (including the probability of non-zero secrecy capacity and the secure outage probability), respectively. Approximated closed-form expressions for the average secrecy capacity, the probability of non-zero secrecy capacity, and the secure outage probability are derived for independent/correlated lognormal fading channels, respectively. Finally, the accuracy of our performance analysis is verified by simulation results.
Visual Communications and Image Processing | 2013
Long Xu; Lin Ma; King Ngi Ngan; Weisi Lin; Ying Weng
Visual quality assessment (VQA) is becoming prevalent in studies of image and video coding, since it assesses the quality of an image or video more accurately, with respect to the human visual system (HVS), than mean square error (MSE). Toward perceptual video coding, this paper weights MSE spatially and temporally to simulate the HVS response to the visual signal. Firstly, image content is characterized by edge strength to give spatial weighting factors. Secondly, the motion strength calculated from the motion vector of each block gives temporal weighting factors. Thirdly, a motion-trajectory-based saliency map of the video signal is integrated as another weighting factor of MSE. The proposed visual quality metric (VQM) not only models the HVS efficiently but also relates to the quantization parameter (QP), making it capable of guiding perceptual video coding. A perceptual rate-distortion optimization (RDO) is established on the proposed VQM. The experimental results indicate that the proposed VQM is well consistent with the HVS, and that better rate-distortion efficiency and accurate bit-rate control are achieved by the proposed visual quality control algorithm.
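The spatial part of such a weighted-MSE metric can be sketched as follows: an edge-strength map of the reference frame scales the per-pixel squared error, so distortion on edges (where the HVS is more sensitive) costs more. The gradient-magnitude edge map and the `alpha` emphasis parameter are illustrative stand-ins for the paper's edge-strength weighting, not its exact formulation.

```python
import numpy as np

def edge_weighted_mse(ref, dist, alpha=1.0):
    """Spatially weighted MSE between a reference and a distorted
    frame: squared error is scaled by 1 + alpha * edge strength,
    then normalised by the total weight."""
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    gy, gx = np.gradient(ref)               # simple gradient edge map
    w = 1.0 + alpha * np.hypot(gx, gy)      # per-pixel weights
    return float(np.sum(w * (ref - dist) ** 2) / np.sum(w))

# On a flat reference the weights are uniform, so the metric
# degenerates to plain MSE:
ref = np.zeros((4, 4))
print(edge_weighted_mse(ref, ref + 1.0))    # -> 1.0
```

The temporal and saliency weights described in the abstract would multiply into `w` in the same way, and expressing the metric as a weighted MSE is what keeps it differentiable and QP-relatable for use inside RDO.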
IEEE Transactions on Consumer Electronics | 2011
Ying Weng; Jianmin Jiang
In this paper, we propose a new algorithm for fast estimation of camera motion directly in the MPEG compressed domain. It starts from an existing, commonly used camera motion model characterizing the relationship between corresponding points of neighboring frames, and then exploits MPEG motion estimation and compensation techniques to derive closed-form estimates of all three camera motion parameters: pan, tilt, and zoom. The proposed algorithm features extremely low computing cost and high processing speed, with a 50% improvement over representative existing algorithms, and is suitable for hardware implementation, such as in a pan-tilt-zoom (PTZ) video camera. Furthermore, the three separate processing units (pan, tilt, and zoom) can be integrated into one simplified fast PTZ processing unit. Comparative experiments also reveal that the proposed algorithm achieves similar estimation precision for all three camera motion parameters.
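A common simplified camera-motion model of this kind predicts each block's motion vector as vx = pan + s·x and vy = tilt + s·y, where s is the zoom factor minus one and (x, y) is the block position. Fitting that model to the MPEG motion vectors by least squares can be sketched as below; this linear formulation is an illustrative assumption, not the paper's closed-form derivation.

```python
import numpy as np

def estimate_ptz(xs, ys, vx, vy):
    """Least-squares fit of the simplified model
        vx = pan + s*x,  vy = tilt + s*y   (s = zoom - 1)
    to MPEG block motion vectors at positions (xs, ys)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    vx, vy = np.asarray(vx, float), np.asarray(vy, float)
    n = xs.size
    # Stack both components into one system A @ [pan, tilt, s] = b.
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1.0
    A[:n, 2] = xs          # rows for the vx equations
    A[n:, 1] = 1.0
    A[n:, 2] = ys          # rows for the vy equations
    b = np.concatenate([vx, vy])
    pan, tilt, s = np.linalg.lstsq(A, b, rcond=None)[0]
    return pan, tilt, 1.0 + s

# Motion vectors synthesized from pan=2, tilt=-1, zoom=1.1:
xs, ys = [0, 10, 20, 30], [0, 10, -10, 5]
vx = [2.0, 3.0, 4.0, 5.0]          # 2 + 0.1*x
vy = [-1.0, 0.0, -2.0, -0.5]       # -1 + 0.1*y
print(estimate_ptz(xs, ys, vx, vy))   # -> (~2.0, ~-1.0, ~1.1)
```

Because the motion vectors are already in the MPEG stream, the whole estimate costs one small linear solve per frame, which is consistent with the low-cost, hardware-friendly character the abstract claims.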