
Publication


Featured research published by Songyu Yu.


Pattern Recognition Letters | 2009

CamShift guided particle filter for visual tracking

Zhaowen Wang; Xiaokang Yang; Yi Xu; Songyu Yu

The particle filter and mean shift are two important methods for tracking objects in video sequences, and both have been studied extensively. Because their strengths complement each other, earlier work [1] combined the two algorithms with a focus on computational efficiency. In this paper, we extend this idea by exploring a more intrinsic relationship between mean shift and the particle filter, and propose a new algorithm, the CamShift guided particle filter (CAMSGPF). In CAMSGPF, the two basic algorithms, CamShift and the particle filter, work cooperatively and benefit from each other, so that overall performance improves and some algorithmic redundancy can be removed. Experimental results show that the proposed method tracks objects robustly in various environments and is much faster than existing methods.
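
As a rough illustration of this cooperation, the sketch below refines each propagated particle with OpenCV's CamShift before weighting and resampling. It is only a minimal sketch under assumed interfaces (a hue histogram from cv2.calcHist, top-left window coordinates as the particle state), not the authors' exact CAMSGPF algorithm.

    import cv2
    import numpy as np

    def camshift_guided_step(hsv_frame, hist, particles, win_size, sigma=8.0):
        """One tracking step: propagate, refine with CamShift, re-weight, resample.

        particles : (N, 2) float array of window top-left corners (x, y).
        hist      : hue histogram of the tracked object (from cv2.calcHist).
        win_size  : (w, h) of the tracking window (assumed fixed here).
        """
        back_proj = cv2.calcBackProject([hsv_frame], [0], hist, [0, 180], 1)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 5, 1.0)
        img_h, img_w = back_proj.shape
        w, h = win_size

        # 1. Propagate particles with a simple random-walk motion model.
        particles = particles + np.random.normal(0.0, sigma, particles.shape)

        # 2. Let CamShift refine each hypothesis locally (the "guidance" step).
        weights = np.empty(len(particles))
        for i, (x, y) in enumerate(particles):
            x = int(np.clip(x, 0, img_w - w - 1))
            y = int(np.clip(y, 0, img_h - h - 1))
            _, window = cv2.CamShift(back_proj, (x, y, w, h), term)
            particles[i] = window[:2]
            # 3. Weight by the back-projection mass inside the refined window.
            wx, wy, ww, wh = window
            weights[i] = back_proj[wy:wy + wh, wx:wx + ww].sum() + 1e-6

        weights /= weights.sum()

        # 4. Resample and report the weighted mean as the state estimate.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        estimate = (particles * weights[:, None]).sum(axis=0)
        return particles[idx], estimate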


IEEE Transactions on Broadcasting | 2005

Research on an iterative algorithm of LS channel estimation in MIMO OFDM systems

Yantao Qiao; Songyu Yu; Pengcheng Su; Lijun Zhang

An iterative least-squares (LS) channel estimation algorithm for MIMO OFDM systems is proposed. Compared with common LS channel estimation, this algorithm greatly improves estimation accuracy. Moreover, low-pass filtering in the time domain significantly reduces AWGN and ICI. The algorithm enables MIMO OFDM systems to work well in mobile scenarios. Simulation results confirm its good MSE performance.
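
The time-domain low-pass filtering step can be sketched as DFT-based denoising of the raw LS estimate, as below. This is a minimal single-antenna, single-pass illustration; the iterative part of the paper's algorithm is omitted, and the assumed channel length is not a value from the paper.

    import numpy as np

    def ls_estimate_denoised(rx_pilots, tx_pilots, n_fft=64, channel_len=16):
        """LS channel estimate followed by time-domain truncation.

        rx_pilots, tx_pilots : length-n_fft arrays of received / known pilot symbols.
        channel_len          : assumed maximum channel impulse response length (taps).
        """
        # Raw least-squares estimate per subcarrier: H = Y / X.
        h_ls = rx_pilots / tx_pilots

        # Go to the time domain; the energy of the true channel lives in the first
        # channel_len taps, so zeroing the rest suppresses AWGN and ICI leakage.
        h_time = np.fft.ifft(h_ls, n_fft)
        h_time[channel_len:] = 0.0

        # Back to the frequency domain: the smoothed channel estimate.
        return np.fft.fft(h_time, n_fft)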


Signal Processing: Image Communication | 2008

Contourlet-based image adaptive watermarking

Haohao Song; Songyu Yu; Xiaokang Yang; Li Song; Chen Wang

In the contourlet transform (CT), the Laplacian pyramid (LP) decomposes an image into a low-frequency (LF) subband and a high-frequency (HF) subband. The LF subband is created by filtering the original image with a 2-D low-pass filter, whereas the HF subband is created by subtracting the synthesized LF subband from the original image rather than by 2-D high-pass filtering. In this paper, we propose a contourlet-based image adaptive watermarking (CIAW) scheme in which the watermark is embedded into the contourlet coefficients of the largest detail subbands of the image. Because of the transform structure of the LP, the embedded watermark spreads into all subbands, including the LF subbands, when the watermarked image is reconstructed from the watermarked contourlet coefficients. Since both the LF and HF subbands then contain watermark components, the scheme is expected to be robust against both low-frequency and high-frequency image processing attacks. A corresponding detection algorithm is proposed that decides whether the watermark is present by exploiting this transform structure of the LP: using the new concept of the spread watermark, detection computes the correlation between the spread watermark and the watermarked image over all contourlet subbands. The proposed CIAW scheme is particularly superior to conventional watermarking schemes when the watermarked image is attacked by image processing operations that destroy the HF subbands, thanks to the watermark components preserved in the LF subbands. Experimental results demonstrate the validity of CIAW in terms of both watermark invisibility and robustness, and comparison experiments further confirm its efficiency.
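
The spread-watermark idea of embedding in detail coefficients and detecting by correlation can be sketched as follows. Since no contourlet implementation is assumed to be available, PyWavelets is used here as a stand-in transform, and the strength alpha and detection threshold are illustrative assumptions, not values from the paper.

    import numpy as np
    import pywt

    def embed(image, key, alpha=0.05):
        coeffs = pywt.wavedec2(image.astype(float), 'db2', level=2)
        cH, cV, cD = coeffs[-1]                      # finest detail subbands
        rng = np.random.default_rng(key)
        wm = rng.choice([-1.0, 1.0], size=cD.shape)  # pseudo-random watermark
        # Image-adaptive additive embedding: stronger where the detail is stronger.
        coeffs[-1] = (cH, cV, cD + alpha * np.abs(cD) * wm)
        return pywt.waverec2(coeffs, 'db2'), wm

    def detect(image, wm, threshold=0.01):
        coeffs = pywt.wavedec2(image.astype(float), 'db2', level=2)
        cD = coeffs[-1][2][:wm.shape[0], :wm.shape[1]]
        # Normalized correlation between the candidate subband and the watermark.
        corr = float(np.sum(cD * wm)) / (np.linalg.norm(cD) * np.linalg.norm(wm) + 1e-12)
        return corr > threshold, corr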


IEEE Transactions on Consumer Electronics | 2007

Efficient Spatio-temporal Segmentation for Extracting Moving Objects in Video Sequences

Renjie Li; Songyu Yu; Xiaokang Yang

Extraction of moving objects is an important and fundamental research topic for many digital video applications. This paper presents an efficient spatio-temporal segmentation scheme to extract moving objects from video sequences. The temporal segmentation yields a mask that indicates moving and static regions for each frame. To localize moving objects, a block-based motion detection method using a novel feature measure is proposed to detect changed regions. Because these changed regions are coarse, they require accurate spatial compensation, for which an edge-based morphological dilation method is presented to expand them anisotropically. Furthermore, to handle objects that temporarily stop moving, inertia information about the moving objects is incorporated into the temporal segmentation. Spatial segmentation based on the watershed algorithm provides homogeneous regions with closed and precise boundaries, using global information to improve boundary accuracy. To reduce over-segmentation in the watershed step, a novel mean filter is proposed to suppress spurious minima. Fusing the spatial and temporal segmentation results produces complete moving objects faithfully, and unlike the reference algorithms, the fusion threshold in our scheme remains fixed across different sequences. Experiments on typical sequences demonstrate the validity of the proposed scheme.
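
A minimal sketch of the temporal stage, block-based change detection followed by an edge-constrained dilation of the changed regions, is given below. The block size and thresholds are illustrative assumptions, and the watershed-based spatial stage and fusion are omitted.

    import cv2
    import numpy as np

    def temporal_mask(prev_gray, cur_gray, block=8, change_thr=10.0):
        diff = cv2.absdiff(cur_gray, prev_gray).astype(np.float32)
        h, w = diff.shape
        mask = np.zeros((h, w), np.uint8)

        # Block-based motion detection: mark whole blocks whose mean difference
        # exceeds the threshold as "changed".
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                if diff[y:y + block, x:x + block].mean() > change_thr:
                    mask[y:y + block, x:x + block] = 255

        # Edge-constrained dilation: grow the coarse mask, but keep only grown
        # pixels that lie on intensity edges, approximating anisotropic expansion.
        edges = cv2.Canny(cur_gray, 50, 150)
        grown = cv2.dilate(mask, np.ones((5, 5), np.uint8))
        return cv2.bitwise_or(mask, cv2.bitwise_and(grown, edges))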


IEEE Transactions on Consumer Electronics | 2007

Moving Objects Detection Method Based on Brightness Distortion and Chromaticity Distortion

Rui Zhang; Sizhu Zhang; Songyu Yu

This paper presents a new background subtraction algorithm for detecting moving objects against a static background scene, based on brightness distortion and chromaticity distortion in RGB color space. For a given video sequence, a background frame is first extracted using statistical analysis of chromaticity values, which removes the traces left by moving objects even when few frames are available and the motion is slow. A Gaussian model is then used to compute the brightness-distortion threshold for object detection, avoiding the need to set this threshold manually for each video sequence. Experimental results show that the proposed method detects moving objects accurately and effectively. It has been used in an intelligent video surveillance system and in a region-of-interest (ROI) based low-bitrate video coding system.
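
The brightness-distortion and chromaticity-distortion test can be sketched per pixel as below. The fixed thresholds here are illustrative; the paper instead derives the brightness threshold from a Gaussian model.

    import numpy as np

    def classify(frame, background, bd_low=0.6, bd_high=1.2, cd_thr=15.0):
        I = frame.astype(np.float64)          # current frame, H x W x 3 (RGB)
        E = background.astype(np.float64)     # background frame, H x W x 3

        # Brightness distortion: the scale alpha that best fits I onto the
        # background color E at each pixel (least squares along the RGB direction of E).
        alpha = (I * E).sum(axis=2) / ((E * E).sum(axis=2) + 1e-12)

        # Chromaticity distortion: distance from I to its brightness-scaled
        # background color alpha * E.
        cd = np.linalg.norm(I - alpha[..., None] * E, axis=2)

        # A pixel is foreground if its color deviates from the background
        # chromaticity, or if its brightness change is outside the allowed band.
        return (cd > cd_thr) | (alpha < bd_low) | (alpha > bd_high)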


IEEE Transactions on Broadcasting | 2005

Rate control for real-time video network transmission on end-to-end rate-distortion and application-oriented QoS

Hongkai Xiong; Jun Sun; Songyu Yu; Jun Zhou; Chuan Chen

In converged packet-switched multimedia networks, the limited or absent end-to-end Quality of Service (QoS) guarantees have driven rate control toward a joint source-channel adjustment architecture based on application-oriented QoS. Based on a statistical analysis of spatial intra and temporal inter prediction within a universal spatial-temporal coding framework, and assuming a general error-concealment strategy at the decoder, an end-to-end distortion estimation model is proposed. On the basis of this analytic model, the paper develops a picture-quality parameter selection solution with rate-distortion (R-D) Lagrangian optimization for a general coding engine, including H.264/AVC, exploiting either source-driven iterative prediction or feedback recursion. A corresponding joint source-channel rate control strategy is then proposed. For real-time variable bit-rate (VBR) video transmission under a given time-varying network condition, the strategy estimates the instantaneous available transmission rate through traffic smoothing and codec buffer control, adopts the proposed end-to-end distortion regression model together with globally optimal error-control parameter selection, and addresses consistent bit allocation at the group-of-pictures (GOP), picture, and macroblock (MB) levels. Extensive network simulation experiments show better and more consistent end-to-end picture quality than a locally optimal control strategy at the MB level.
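
At its core, such a scheme repeatedly performs a Lagrangian selection among candidate coding parameters. The sketch below shows only that generic selection step, with placeholder distortion and rate models rather than the paper's regression model.

    def select_parameter(candidates, distortion_of, rate_of, lam):
        """Pick the coding parameter minimizing the Lagrangian cost D + lambda * R.

        candidates    : iterable of coding parameters (e.g. candidate QP values).
        distortion_of : callable mapping a parameter to estimated end-to-end distortion.
        rate_of       : callable mapping a parameter to estimated bit rate.
        lam           : Lagrange multiplier tied to the available channel rate.
        """
        return min(candidates, key=lambda p: distortion_of(p) + lam * rate_of(p))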


Optical Engineering | 2007

Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient

Haohao Song; Songyu Yu; Li Song; Xiaokang Yang

Recent studies show that wavelet-based image fusion methods provide high spectral quality in fused satellite images. However, images fused by most wavelet-based methods have lower spatial resolution because of the critical downsampling in the wavelet transform. We propose a fusion method based on the contourlet transform and the local average gradient (LAG) for multispectral and panchromatic satellite images. The contourlet transform represents edges and texture better than the wavelet transform, and because edges and texture are fundamental to image representation, enhancing them is an effective way to enhance spatial resolution. Based on the LAG, the proposed method further reduces the spectral distortion of the fused image. Experimental results show that the proposed fusion method increases spatial resolution and reduces spectral distortion of the fused image at the same time.
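
A rough sketch of a LAG-weighted detail injection is given below. It operates directly in the image domain as a stand-in for the contourlet-domain rule of the paper; the window sizes and injection rule are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_average_gradient(img, win=3):
        # Gradient magnitude averaged over a local window.
        gy, gx = np.gradient(img.astype(float))
        return uniform_filter(np.hypot(gx, gy), size=win)

    def fuse_band(ms_band, pan):
        """Inject panchromatic detail into one multispectral band, weighted so the
        injection is stronger where the pan image carries more local structure."""
        pan = pan.astype(float)
        detail = pan - uniform_filter(pan, size=5)       # high-frequency detail of pan
        lag = local_average_gradient(pan)
        weight = lag / (lag.max() + 1e-12)
        return ms_band.astype(float) + weight * detail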


IEEE Transactions on Consumer Electronics | 2005

Joint source-channel decoding for H.264 coded video stream

Yue Wang; Songyu Yu

H.264 is the newest video coding standard and achieves a significant improvement in coding efficiency. Its entropy coding methods, CAVLC and CABAC, achieve high compression but are very sensitive to channel errors. This paper presents a joint source-channel MAP (maximum a posteriori probability) decoding method to deal with this sensitivity and applies it to decoding the motion vectors in an H.264 coded video stream. Although the H.264 codec already provides several error-resilience tools, this method offers additional error resilience for H.264 streams. Experiments indicate that our JSCD achieves a significant improvement over a separate decoding scheme.
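
The joint source-channel principle, combining a source prior with channel soft information in one MAP decision, can be sketched over a small candidate set as below. The paper applies this to CAVLC/CABAC-coded motion vectors with the actual H.264 code tables; here the candidates, prior, and LLRs are hypothetical placeholders.

    import numpy as np

    def map_decode(candidates, prior, bit_llrs):
        """MAP decision over equal-length candidate codewords.

        candidates : list of bit tuples (possible codewords, same length here).
        prior      : dict mapping candidate -> source probability P(c).
        bit_llrs   : per-bit channel log-likelihood ratios, log p(y|b=0)/p(y|b=1).
        """
        best, best_score = None, -np.inf
        for c in candidates:
            # Channel term: per-bit log-likelihood, up to a constant shared by all candidates.
            chan = sum((llr if b == 0 else -llr) / 2.0 for b, llr in zip(c, bit_llrs))
            score = np.log(prior[c] + 1e-12) + chan
            if score > best_score:
                best, best_score = c, score
        return best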


International Conference on Acoustics, Speech, and Signal Processing | 2008

Confidence based optical flow algorithm for high reliability

Renjie Li; Songyu Yu

Optical flow estimation is an important topic that provides motion information for motion analysis. This paper presents an effective confidence-based optical flow algorithm. It uses the bidirectional symmetry of the forward and backward flow to compute a confidence measure for each flow estimate. According to this confidence, reliable flow estimates contribute more to local averages while unreliable estimates are suppressed, so errors are not propagated. Since image-driven and flow-driven discontinuity-preserving methods have complementary advantages and limitations, we propose a region-based method combining the two to preserve motion boundaries. Experiments on typical sequences demonstrate the validity of the proposed algorithm.
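
A minimal sketch of a forward-backward confidence measure for dense flow is shown below, using OpenCV's Farneback estimator in place of the paper's own flow method; the consistency tolerance is an illustrative assumption.

    import cv2
    import numpy as np

    def flow_with_confidence(prev_gray, cur_gray, tol=1.0):
        fwd = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)
        bwd = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)

        h, w = prev_gray.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

        # Sample the backward flow at the forward-displaced positions.
        map_x, map_y = xs + fwd[..., 0], ys + fwd[..., 1]
        bwd_at_fwd = cv2.remap(bwd, map_x, map_y, cv2.INTER_LINEAR)

        # Forward-backward error: how far fwd + bwd(at the displaced point) is from zero.
        fb_err = np.linalg.norm(fwd + bwd_at_fwd, axis=2)
        confidence = np.exp(-fb_err)          # high where the two flows are consistent
        return fwd, confidence, fb_err < tol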


International Conference on Acoustics, Speech, and Signal Processing | 2003

1-D and 2-D transforms from integers to integers

Jia Wang; Jun Sun; Songyu Yu

Replacing a real-valued linear transform with an integer-to-integer mapping has become very important in many applications. This paper introduces a new kind of matrix decomposition called lifting-like factorization, which leads to a theorem: every 2^n-order real matrix whose determinant has norm 1 can be expressed as the product of one permutation matrix and at most three unit triangular matrices. The rounding error of this method is analyzed. The realization of 2-D integer transforms is also studied, and it is shown that a 2-D integer-to-integer transform cannot be realized by performing two 1-D integer transforms separately. Left and right permutation matrices are introduced to reduce rounding error, and an application of this method to the integer DCT (intDCT) is discussed.
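
As a concrete instance of factorization into unit triangular (lifting) matrices, the sketch below builds an exactly invertible integer-to-integer 2x2 rotation from three rounded lifting steps; the general 2^n-order result with permutation matrices is not reproduced here.

    import math

    def lifting_rotation(x1, x2, theta):
        # R(theta) = [[1, p], [0, 1]] [[1, 0], [s, 1]] [[1, p], [0, 1]]
        # with p = -tan(theta / 2) and s = sin(theta) (assumes sin(theta) != 0).
        p = -math.tan(theta / 2.0)   # upper-triangular lifting multiplier
        s = math.sin(theta)          # lower-triangular lifting multiplier
        x1 = x1 + round(p * x2)      # first lifting step, rounded to an integer
        x2 = x2 + round(s * x1)      # second lifting step
        x1 = x1 + round(p * x2)      # third lifting step
        return x1, x2

    def inverse_lifting_rotation(y1, y2, theta):
        # Undo the same rounded steps in reverse order: exact integer inversion.
        p = -math.tan(theta / 2.0)
        s = math.sin(theta)
        y1 = y1 - round(p * y2)
        y2 = y2 - round(s * y1)
        y1 = y1 - round(p * y2)
        return y1, y2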

Collaboration


Dive into Songyu Yu's collaborations.

Top Co-Authors

Xiaokang Yang, Shanghai Jiao Tong University
Hongkai Xiong, Shanghai Jiao Tong University
Li Song, Shanghai Jiao Tong University
Jun Sun, Shanghai Jiao Tong University
Jia Wang, Shanghai Jiao Tong University
Wenjun Zhang, Shanghai Jiao Tong University
Rui Zhang, Shanghai Jiao Tong University
Xiangwen Wang, Shanghai Jiao Tong University
Xiaofeng Lu, Shanghai Jiao Tong University
Haohao Song, Shanghai Jiao Tong University