Junyu Dong
Ocean University of China
Publication
Featured research published by Junyu Dong.
International Journal of Computer Vision | 2005
Junyu Dong; Mike J. Chantler
We present and compare five approaches for capturing, synthesising and relighting real 3D surface textures. Unlike 2D texture synthesis techniques, they allow the captured textures to be relit under illumination conditions that differ from those of the original. We adapted a texture quilting method due to Efros and combined this with five different relighting representations, comprising: a set of three photometric images; surface gradient and albedo maps; polynomial texture maps; and two eigen-based representations using 3 and 6 base images. We used twelve real textures to perform quantitative tests on the relighting methods in isolation. We developed a qualitative test for the assessment of the complete synthesis systems. Ten observers were asked to rank the images obtained from the five methods using five real textures. Statistical tests were applied to the rankings. The six-base-image eigen method produced the best quantitative relighting results and, in particular, was better able to cope with specular surfaces. However, in the qualitative tests there were no significant performance differences detected between it and the other two top performers. Our conclusion is therefore that the cheaper gradient and three-base-image eigen methods should be used in preference, especially where the surfaces are Lambertian or near Lambertian.
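The cheapest of the representations above, surface gradient and albedo maps, lends itself to a compact illustration. The sketch below relights a surface from its gradient and albedo maps under a simple Lambertian model; the toy height map, constant albedo and light direction are illustrative inputs, not data from the paper.

```python
import numpy as np

def relight_lambertian(p, q, albedo, light_dir):
    """Relight a surface from gradient maps (p = dz/dx, q = dz/dy) and an
    albedo map, assuming Lambertian reflectance and a distant point light."""
    # Surface normal from gradients: n = (-p, -q, 1) / |(-p, -q, 1)|
    norm = np.sqrt(p ** 2 + q ** 2 + 1.0)
    nx, ny, nz = -p / norm, -q / norm, 1.0 / norm
    lx, ly, lz = light_dir / np.linalg.norm(light_dir)
    # Lambertian shading; facets turned away from the light are clamped to zero
    shading = np.clip(nx * lx + ny * ly + nz * lz, 0.0, None)
    return albedo * shading

# Illustrative usage with a synthetic bumpy surface
h, w = 128, 128
y, x = np.mgrid[0:h, 0:w]
z = np.sin(x / 8.0) * np.cos(y / 8.0)                  # toy height map
p, q = np.gradient(z, axis=1), np.gradient(z, axis=0)  # surface gradients
albedo = np.full((h, w), 0.8)
relit = relight_lambertian(p, q, albedo, np.array([0.5, 0.2, 1.0]))
```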
IEEE Transactions on Systems, Man, and Cybernetics | 2015
Muwei Jian; Kin-Man Lam; Junyu Dong; Linlin Shen
The human visual system (HVS) can reliably perceive salient objects in an image, but it remains a challenge to computationally model the process of detecting salient objects without prior knowledge of the image contents. This paper proposes a visual-attention-aware model to mimic the HVS for salient-object detection. Informative and directional patches can be seen as visual stimuli and used as neuronal cues for humans to interpret and detect salient objects. To simulate this process, the two types of patches are extracted individually and in parallel from the intensity channel and the discriminant color channel, respectively, as primitives. In our algorithm, an improved wavelet-based salient-patch detector is used to extract the visually informative patches. In addition, as humans are sensitive to orientation features, and as directional patches are reliable cues, we also propose a method for extracting directional patches. These two types of patches are then combined to form the most important patches, called preferential patches, which are considered the visual stimuli applied to the HVS for salient-object detection. Experimental results on publicly available datasets show that our algorithm is reliable and effective compared with state-of-the-art methods for salient-object detection.
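As a rough, simplified illustration of the wavelet-based informative-patch idea (not the paper's detector), the sketch below accumulates wavelet detail energy into a saliency map and picks the highest-scoring patches; it assumes the PyWavelets package (pywt) is available, and the patch size and number of patches are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_saliency(img, wavelet="haar", levels=3):
    """Toy saliency map: accumulate detail-coefficient energy across scales."""
    saliency = np.zeros(img.shape, dtype=float)
    approx = img.astype(float)
    for _ in range(levels):
        approx, (ch, cv, cd) = pywt.dwt2(approx, wavelet)
        detail = np.abs(ch) + np.abs(cv) + np.abs(cd)
        # map the coarser detail energy back onto the original grid
        ys = (np.arange(img.shape[0]) * detail.shape[0] // img.shape[0]).clip(0, detail.shape[0] - 1)
        xs = (np.arange(img.shape[1]) * detail.shape[1] // img.shape[1]).clip(0, detail.shape[1] - 1)
        saliency += detail[np.ix_(ys, xs)]
    return saliency / saliency.max()

def top_patches(saliency, patch=16, k=5):
    """Return the top-k patch corners ranked by mean saliency."""
    scores = []
    for y in range(0, saliency.shape[0] - patch, patch):
        for x in range(0, saliency.shape[1] - patch, patch):
            scores.append((saliency[y:y + patch, x:x + patch].mean(), (y, x)))
    return [pos for _, pos in sorted(scores, reverse=True)[:k]]
```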
The Imaging Science Journal | 2011
Muwei Jian; Junyu Dong; Jun Ma
A content-based image retrieval system normally returns retrieval results according to the similarity between features extracted from the query image and candidate images. In certain circumstances, however, users may be more concerned about salient regions of interest in an image and only wish to retrieve images containing the relevant salient regions, ignoring irrelevant areas (such as the background or other regions and objects). Although how to represent local image properties is still one of the most active research issues, much previous work on image retrieval does not examine salient regions in an image. In this paper, we propose an improved salient point detector based on the wavelet transform, which can extract salient points in an image more accurately. The salient points are then grouped into salient regions according to their spatial distribution. Colour moments and Gabor features of these salient regions are computed and form a feature vector to index the image. We test the proposed scheme using a wide range of image samples from the Corel Image Library. The experimental results indicate that the method produces promising results.
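For concreteness, a minimal version of the colour-moment part of the region descriptor might look like the sketch below (mean, standard deviation and skewness per channel); the Gabor features and the salient-region grouping are omitted.

```python
import numpy as np

def color_moments(region):
    """First three colour moments (mean, std, skewness) per channel of an
    H x W x 3 region, concatenated into a 9-D descriptor."""
    feats = []
    for c in range(region.shape[2]):
        ch = region[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # signed cube root of 3rd moment
        feats.extend([mean, std, skew])
    return np.array(feats)
```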
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2007
Muwei Jian; Junyu Dong; Yang Zhang
Image fusion is the process of combining multiple images of the same scene into a single fused image, with the aim of preserving the full content and retaining the important features of each of the original images. In this paper, we propose a novel scheme to measure the saliency of every wavelet-decomposition coefficient of the original images. The saliency value reflects the visually meaningful content of the wavelet-decomposition coefficients and is consistent with human visual perception. The scheme aims to preserve the full content and retain visually meaningful information more faithfully than traditional methods. In addition, the proposed method can be combined with any sophisticated fusion rules and fusion operators based on wavelet decomposition. Experimental results show the effectiveness of the proposed scheme, which retains perceptually important image information.
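A heavily simplified stand-in for such a wavelet fusion rule is sketched below: it averages the approximation bands and keeps, for each detail coefficient, the one with the larger absolute value as a crude saliency proxy. The paper's saliency measure is more refined; PyWavelets (pywt) is assumed to be available, and the wavelet and decomposition level are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fuse_images(img_a, img_b, wavelet="db2", level=2):
    """Fuse two same-sized images in the wavelet domain and reconstruct."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # average the approximation band
    for (ah, av, ad), (bh, bv, bd) in zip(ca[1:], cb[1:]):
        # per-coefficient selection by absolute value (crude saliency proxy)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((ah, bh), (av, bv), (ad, bd))))
    return pywt.waverec2(fused, wavelet)
```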
Pattern Recognition | 2013
Muwei Jian; Kin-Man Lam; Junyu Dong
In this paper, an efficient mapping model based on singular value decomposition (SVD) is proposed for face hallucination. We can observe and prove that the main singular values of an image at one resolution have approximately linear relationships with their counterparts at other resolutions. This makes the estimation of the singular values of the corresponding high-resolution (HR) face images from a low-resolution (LR) face image more reliable. From the signal-processing point of view, this can effectively preserve and reconstruct the dominant information in the HR face images. Interpolating the other two matrices obtained from the SVD of the LR image does not change either the primary facial structure or the pattern of the face image. The corresponding two matrices for the HR face images can be constructed in a “coarse-to-fine” manner using global reconstruction. Our proposed method retains the holistic structure of face images, while the learned mapping matrices, which are represented as embedding coefficients of the individual mapping matrices learned from LR-HR training pairs, can be seen as holistic constraints in the reconstruction of HR images. Compared to state-of-the-art algorithms, experiments show that our proposed face-hallucination scheme is effective in terms of producing plausible HR images with both a holistic structure and high-frequency details.
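The core idea, rescaling the singular values roughly linearly and expanding the two orthogonal factors, can be sketched as below. The nearest-neighbour expansion of U and V and the single gain factor are crude stand-ins for the learned mapping matrices and the coarse-to-fine reconstruction described in the paper.

```python
import numpy as np

def hallucinate_svd(lr_img, scale=4, gain=None):
    """Rough SVD-mapping sketch: upscale the orthogonal factors of the LR
    image and rescale the singular values by an approximately linear factor.
    `gain` stands in for a factor learned from LR-HR training pairs."""
    U, s, Vt = np.linalg.svd(lr_img.astype(float), full_matrices=False)
    gain = float(scale) if gain is None else gain   # crude linear assumption
    # naive row/column replication of U and V, renormalised
    U_hr = np.repeat(U, scale, axis=0) / np.sqrt(scale)
    V_hr = np.repeat(Vt.T, scale, axis=0) / np.sqrt(scale)
    return (U_hr * (gain * s)) @ V_hr.T             # HR estimate
```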
Multimedia Tools and Applications | 2016
Shengke Wang; Long Chen; Zixi Zhou; Xin Sun; Junyu Dong
Fall incidents have been reported as the second most common cause of death, especially for elderly people. Human fall detection is therefore necessary in smart home healthcare systems. Recently, various fall detection approaches have been proposed, among which computer-vision-based approaches offer a promising and effective way. In this paper, we propose a new framework for fall detection based on automatic feature learning. First, frames containing humans, extracted from video sequences captured from different views, form the training set. A PCANet model is then trained on all samples to predict the label of every frame. Because a fall spans many consecutive frames, reliable fall detection should analyse a video sequence rather than a single frame. Based on the frame-level predictions of the trained PCANet model, an action model is further obtained by training an SVM on the predicted labels of the frames in each video sequence. Experiments show that the proposed method achieves reliable results compared with other commonly used methods on the multiple-cameras fall dataset, and even better results on our own dataset, which contains more training samples.
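A minimal sketch of the second stage, turning per-frame labels into a sequence-level classifier, is given below; it assumes scikit-learn is available, and the frame labels and sequences are made-up illustrative data rather than PCANet outputs.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn, assumed available

def sequence_feature(frame_labels, n_classes=2):
    """Summarise a sequence of per-frame labels as a normalised label histogram."""
    hist = np.bincount(frame_labels, minlength=n_classes).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical data: each row is one video's frame labels, with a sequence label
sequences = [np.array([0, 0, 1, 1, 1]), np.array([0, 0, 0, 0, 1])]
labels = [1, 0]  # 1 = fall, 0 = no fall
X = np.stack([sequence_feature(s) for s in sequences])
action_model = SVC(kernel="linear").fit(X, labels)
```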
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2007
Muwei Jian; Junyu Dong; Ruichun Tang
Content-based image retrieval (CBIR) systems normally return retrieval results according to the similarity between features extracted from the query image and candidate images. In certain circumstances, however, users are more concerned about objects of their interest and only wish to retrieve images containing relevant objects, while ignoring irrelevant image areas (such as the background). Previous work on retrieval of objects of user interest (OUI) normally requires complicated segmentation of the object from the background. In this paper, we propose a method that utilizes color, texture and shape features of a user-specified window containing the OUI to retrieve relevant images, so that complicated image segmentation is avoided. We use color moments and subband statistics of the wavelet decomposition as color and texture features, respectively. The similarity is first calculated using these features; shape features, generated by mathematical-morphology operators, are then employed to produce the final retrieval results. We use a wide range of color images for the experiments and evaluate the performance of the proposed method in different color spaces, including RGB, HSV and YCbCr. Although simple, the method produces promising results.
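A bare-bones version of the window descriptor (color moments plus wavelet subband statistics, without the morphological shape features) might look like the following; PyWavelets (pywt) is assumed, and the specific moments and statistics are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def window_features(img, window, wavelet="db1", level=2):
    """Descriptor for a user-specified window (y0, y1, x0, x1) of an H x W x 3
    image: mean and std per color channel, plus mean/std of each wavelet
    detail subband of the grey-scale window."""
    y0, y1, x0, x1 = window
    roi = img[y0:y1, x0:x1].astype(float)
    feats = []
    for c in range(roi.shape[2]):                       # color statistics
        ch = roi[..., c].ravel()
        feats += [ch.mean(), ch.std()]
    grey = roi.mean(axis=2)
    coeffs = pywt.wavedec2(grey, wavelet, level=level)  # texture statistics
    for subbands in coeffs[1:]:
        for sb in subbands:
            feats += [np.abs(sb).mean(), sb.std()]
    return np.array(feats)
```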
International Conference on Multimedia and Expo | 2007
Muwei Jian; Junyu Dong; Rong Jiang
In content-based image retrieval, the representation of local properties of an image is one of the most active research issues. This paper proposes a salient region detector based on the wavelet transform. The detector can extract visually meaningful regions in an image and reflect local characteristics. An annular segmentation algorithm based on the distribution of salient regions is designed; it takes into account not only local image features but also the spatial distribution of the salient regions. Color moments and Gabor features around the salient regions in every annular region are computed as feature vectors for indexing the image. We have tested the proposed scheme using a wide range of image samples from the Corel Image Library for content-based image retrieval. The experiments indicate that the method produces promising results.
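One simple reading of the annular segmentation step, assigning salient points to concentric rings around their centroid, is sketched below; the number of rings is an arbitrary illustrative parameter.

```python
import numpy as np

def annular_bins(points, n_rings=3):
    """Assign salient points (N x 2 array of (y, x)) to concentric annular
    regions around their centroid; returns one ring index per point."""
    centre = points.mean(axis=0)
    radii = np.linalg.norm(points - centre, axis=1)
    edges = np.linspace(0.0, radii.max() + 1e-9, n_rings + 1)
    return np.digitize(radii, edges[1:-1])  # 0 = innermost ring
```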
Information Sciences | 2014
Muwei Jian; Kin-Man Lam; Junyu Dong
An efficient hierarchical scheme, which is robust to illumination and pose variations in face images, is proposed for accurate facial-feature detection and localization. In our algorithm, having detected a face region using a face detector, a wavelet-based saliency map, which reflects the most visually meaningful regions, is computed on the detected face region. As the eye region always exhibits the most variation in a face image, the coarse eye region can be reliably located based on the saliency map and verified by means of principal component analysis. This step in the proposed hierarchical scheme narrows down the search space, thereby reducing the computational cost of the subsequent precise localization of the two eye positions based on a pose-adapted eye template. Moreover, among the facial features, the eyes play the most important role, and their positions can be used as an approximate geometric reference to localize the other facial features. Therefore, the nose and mouth can be localized by using the saliency values in the saliency map and the detected eye positions as geometric references. Our proposed algorithm is non-iterative and computationally simple. Experimental results show that our algorithm achieves superior performance compared with other state-of-the-art methods.
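As an illustration of using the eye positions as a geometric reference (with made-up fractions of the inter-ocular distance, not the paper's values), a rough prior for the nose and mouth locations could look like this:

```python
import numpy as np

def rough_nose_mouth(left_eye, right_eye):
    """Place nose and mouth search centres below the eye midpoint at distances
    proportional to the inter-ocular distance. Eye centres are (x, y) with the
    image y-axis pointing down; the fractions are illustrative only."""
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    mid = (left_eye + right_eye) / 2.0
    d = np.linalg.norm(right_eye - left_eye)
    down = np.array([0.0, 1.0])
    nose_guess = mid + 0.6 * d * down
    mouth_guess = mid + 1.1 * d * down
    return nose_guess, mouth_guess
```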
Applied Optics | 2006
Ailing Yang; Wendong Li; Guang Yuan; Junyu Dong; Jinliang Zhang
A theoretical analysis of the fringe pattern produced by a capillary-tube interferometer, which is modelled as two-beam interference, is presented, and a computer program to simulate the interference fringe pattern is developed. By comparing the simulated fringe pattern with the experimental one, the refractive index of the liquid can be determined when the two patterns coincide best. The results of this method are close to those of the Abbe refractometer.
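A toy version of this fitting procedure is sketched below: a two-beam intensity pattern I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta) is simulated for candidate refractive indices and matched to a measured pattern by least squares. The linear optical-path model and all parameter values are illustrative assumptions, not the paper's capillary geometry.

```python
import numpy as np

def two_beam_pattern(n_liquid, x, wavelength=589e-9, thickness=1e-4, n_ref=1.0):
    """Toy two-beam fringe pattern: phase difference taken to vary linearly
    across the normalised coordinate x with the optical-path difference."""
    opd = (n_liquid - n_ref) * thickness * x
    phase = 2.0 * np.pi * opd / wavelength
    return 2.0 * (1.0 + np.cos(phase))          # equal-intensity beams, I1 = I2 = 1

def best_index(measured, x, candidates):
    """Pick the refractive index whose simulated fringes match `measured` best,
    mirroring the visual-coincidence criterion with a least-squares error."""
    errors = [np.sum((two_beam_pattern(n, x) - measured) ** 2) for n in candidates]
    return candidates[int(np.argmin(errors))]

x = np.linspace(0.0, 1.0, 2000)
measured = two_beam_pattern(1.333, x)           # synthetic "experimental" pattern
print(best_index(measured, x, np.linspace(1.30, 1.40, 201)))
```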