Publication


Featured research published by Ran Ju.


International Conference on Image Processing | 2014

Depth saliency based on anisotropic center-surround difference

Ran Ju; Ling Ge; Wenjing Geng; Tongwei Ren; Gangshan Wu

Most previous works on saliency detection are dedicated to 2D images. Recently, it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it stands out from its surroundings, which takes the global depth structure into consideration. In addition, two common priors based on depth and location are used for refinement. The proposed method runs in O(N) time, and evaluation on a dataset of over 1000 stereo images shows that our method outperforms the state-of-the-art.
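
The center-surround idea described in the abstract can be illustrated with a toy sketch: score each depth pixel by how much it stands out from its surroundings along several scan directions. The ray length, the clipping, and the omission of the depth/location priors are simplifications for illustration; the paper's exact formulation differs.

```python
import numpy as np

def acsd_saliency(depth, scan_len=3):
    """Toy sketch of anisotropic center-surround difference.

    For each pixel, scan in 8 directions and average the depth
    difference between the pixel and its surroundings along each ray.
    A point that stands out from its surroundings (smaller depth,
    i.e. closer to the viewer) receives higher saliency.
    """
    h, w = depth.shape
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            diffs = []
            for dy, dx in dirs:
                ray = []
                for step in range(1, scan_len + 1):
                    ny, nx = y + dy * step, x + dx * step
                    if 0 <= ny < h and 0 <= nx < w:
                        ray.append(depth[ny, nx])
                if ray:
                    # surroundings deeper than the pixel -> pixel stands out
                    diffs.append(np.mean(ray) - depth[y, x])
            sal[y, x] = max(np.mean(diffs), 0.0)
    # normalise to [0, 1]
    return sal / sal.max() if sal.max() > 0 else sal
```

Note the relative nature of the score: a pixel is salient only with respect to the depth of its neighborhood, not its absolute depth, matching the paper's stated motivation.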


Signal Processing: Image Communication | 2015

Depth-aware salient object detection using anisotropic center-surround difference

Ran Ju; Yang Liu; Tongwei Ren; Ling Ge; Gangshan Wu

Most previous works on salient object detection concentrate on 2D images. In this paper, we propose to explore the power of the depth cue for predicting salient regions. Our basic assumption is that a salient object tends to stand out from its surroundings in 3D space. To measure the object-to-surrounding contrast, we propose a novel depth feature which works on a single depth map. In addition, we integrate a 3D spatial prior into our method for saliency refinement. By sparse sampling and representing the image using superpixels, our method runs very fast, with complexity linear in the image resolution. To segment the salient object, we also develop a saliency-based method using adaptive thresholding and GrabCut. The proposed method is evaluated on two large datasets designed for depth-aware salient object detection. The results, compared with several state-of-the-art 2D and depth-aware methods, show that our method has the most satisfactory overall performance. Highlights: (1) we propose a new depth feature for salient region detection; (2) a spatial prior is integrated for saliency refinement; (3) a saliency-based object segmentation method is presented; (4) we built the largest dataset for evaluating depth-aware salient object detection.


International Conference on Multimedia and Expo | 2014

OBSIR: Object-based stereo image retrieval

Xiangyang Xu; Wenjing Geng; Ran Ju; Yang Yang; Tongwei Ren; Gangshan Wu

In recent years, stereo images have become an emerging medium in the field of 3D technology, which leads to an urgent demand for stereo image retrieval. In this paper, we introduce a framework for object-based stereo image retrieval (OBSIR), which retrieves images containing objects similar to the one captured in the user's query image. The proposed approach consists of both online and offline procedures. In the offline procedure, we propose a salient object segmentation method that uses both color and depth to extract objects from each image. The extracted objects are then represented by multiple visual feature descriptors. To improve search efficiency, we construct an approximate nearest neighbor (ANN) index using cluster-based locality sensitive hashing (LSH). In the online stage, the user may supply the query object by selecting a region of interest (ROI) in the query image, or by clicking one of the objects recommended by the salient object detector. For retrieval evaluation we build a new dataset containing over 10K stereo images. Experiments on this dataset show that the proposed method can effectively recommend the correct object, and the final retrieval result is also better than other baseline methods.
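
The ANN indexing step can be illustrated with a minimal random-hyperplane LSH index. This is a generic LSH sketch, not the paper's cluster-based variant, and the class and parameter names are invented for illustration.

```python
import numpy as np

class LSHIndex:
    """Toy random-hyperplane LSH index for cosine similarity.

    Descriptors hashing to the same sign pattern land in the same
    bucket; a query searches only its own bucket, trading recall
    for speed.
    """

    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        # sign pattern against the random hyperplanes -> bucket key
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, vid, v):
        self.buckets.setdefault(self._key(v), []).append((vid, v))

    def query(self, v, k=1):
        # rank only the query's bucket by cosine similarity
        cands = self.buckets.get(self._key(v), [])
        scored = sorted(
            cands,
            key=lambda c: -np.dot(c[1], v)
            / (np.linalg.norm(c[1]) * np.linalg.norm(v) + 1e-9),
        )
        return [vid for vid, _ in scored[:k]]
```

In practice one would use several such hash tables to raise recall; a single table is shown here to keep the bucketing idea visible.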


Multimedia Tools and Applications | 2016

How important is location information in saliency detection of natural images

Tongwei Ren; Yan Liu; Ran Ju; Gangshan Wu

Location information, i.e., the position of content in the image plane, is considered an important supplement in saliency detection. The effect of location information is usually evaluated by integrating it with selected saliency detection methods and measuring the improvement, which is highly influenced by the choice of saliency methods. In this paper, we provide a direct and quantitative analysis of the importance of location information for saliency detection in natural images. We first analyze the relationship between content location and saliency distribution on four public image datasets, and validate the distribution by simply treating a location-based Gaussian distribution as the saliency map. To further validate the effectiveness of location information, we propose a location-based saliency detection approach, which initializes saliency maps entirely from location information and propagates saliency among patches based on color similarity, and we discuss the robustness of location information's effect. The experimental results show that location information plays a positive role in saliency detection, and the proposed method can outperform most state-of-the-art saliency detection methods and handle natural images with different object positions and multiple salient objects.
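
The location-based Gaussian baseline mentioned above can be sketched in a few lines: a center-biased Gaussian is used directly as the saliency map. The `sigma_scale` parameter is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def location_saliency(h, w, sigma_scale=0.3):
    """Center-biased Gaussian used directly as a saliency map.

    Pixels near the image center receive the highest saliency,
    falling off with an anisotropic Gaussian scaled to the image size.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_scale * h, sigma_scale * w
    sal = np.exp(-(((ys - cy) ** 2) / (2 * sy ** 2)
                   + ((xs - cx) ** 2) / (2 * sx ** 2)))
    return sal / sal.max()
```

That such a content-free map is competitive at all is the point the paper quantifies: content tends to be framed near the image center.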


Pacific Rim Conference on Multimedia | 2015

Interactive RGB-D Image Segmentation Using Hierarchical Graph Cut and Geodesic Distance

Ling Ge; Ran Ju; Tongwei Ren; Gangshan Wu

In this paper, we propose a novel interactive image segmentation method for RGB-D images using hierarchical Graph Cut. Considering the characteristics of the RGB channels and the depth channel in an RGB-D image, we use Euclidean distance in RGB space and geodesic distance in 3D space to measure how likely a pixel is to belong to the foreground or background in color and depth, respectively, and integrate the color cue and depth cue into a unified Graph Cut framework to obtain the optimal segmentation result. Moreover, to overcome the low efficiency of Graph Cut on high-resolution images, we accelerate the proposed method with a hierarchical strategy. The experimental results show that our method outperforms the state-of-the-art methods with high efficiency.


Advances in Multimedia | 2013

Stereo GrabCut: Interactive and Consistent Object Extraction for Stereo Images

Ran Ju; Xiangyang Xu; Yang Yang; Gangshan Wu

This paper presents an interactive object extraction approach for stereo images. The extraction task on stereo images differs from that on monoscopic images in two significant ways. First, the segmentation of the two images should be consistent. Second, stereo images carry implicit depth information, which supplies an important cue for object extraction. In this paper, we generate consistent segmentation by placing the correspondence relationship in a graph cut framework. In addition, we leverage depth information, obtained by stereo matching, to give a pre-estimation of the foreground and background. The pre-estimation is then used to build accurate color models for a graph cut based segmentation. To simplify the user interaction, we supply an interface similar to GrabCut, which in most cases only requires the user to drag a compact rectangle. Experiments show that our approach runs fast and produces more satisfactory results than the state-of-the-art.


International Conference on Image Processing | 2015

Saliency cuts based on adaptive triple thresholding

Shuzhen Li; Ran Ju; Tongwei Ren; Gangshan Wu

Salient object detection attracts much attention for its effectiveness in numerous applications. However, how to effectively produce a high-quality binary mask from a saliency map, a task named saliency cuts, is still an open problem. In this paper, we propose a novel saliency cuts approach using unsupervised seed generation and the GrabCut algorithm. Given an input saliency map, we produce seeds for segmentation using adaptive triple thresholding and feed the seeds to the GrabCut algorithm. Finally, a high-quality object mask is generated by iterative optimization. The experimental results show that the proposed approach handles the saliency cuts task well and outperforms the state-of-the-art methods.
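
A minimal sketch of triple-thresholding seed generation: three thresholds split the saliency map into sure/probable foreground and background seeds, which GrabCut then refines. The mean-based thresholds here are an assumption for illustration (the paper's adaptive rule may differ); the four seed labels follow GrabCut's usual convention.

```python
import numpy as np

def triple_threshold_seeds(sal):
    """Sketch of seed generation by triple thresholding.

    Thresholds are adapted from the saliency map's own statistics
    (mean-based here; illustrative only). Labels follow GrabCut's
    convention: 0 = sure background, 1 = sure foreground,
    2 = probable background, 3 = probable foreground.
    """
    m = sal.mean()
    t_low, t_mid, t_high = 0.5 * m, m, min(2.0 * m, 1.0)
    seeds = np.zeros(sal.shape, dtype=np.uint8)  # sure background
    seeds[sal >= t_low] = 2                      # probable background
    seeds[sal >= t_mid] = 3                      # probable foreground
    seeds[sal > t_high] = 1                      # sure foreground
    return seeds
```

The resulting mask could be passed, for example, to OpenCV's `cv2.grabCut` with `cv2.GC_INIT_WITH_MASK`, whose labels match the convention used above.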


International Conference on Computer Vision | 2015

StereoSnakes: Contour Based Consistent Object Extraction for Stereo Images

Ran Ju; Tongwei Ren; Gangshan Wu

Consistent object extraction plays an essential role in stereo image editing, given the growing popularity of stereoscopic 3D media. Most previous methods perform segmentation on the entire image for both views using dense stereo correspondence constraints. We find that for such methods the computation is highly redundant, since the two views are near-duplicates. Moreover, consistency may be violated due to the imperfections of current stereo matching algorithms. In this paper, we propose a contour-based method which searches for consistent object contours instead of regions. It integrates both stereo correspondence and object boundary constraints into an energy minimization framework. The proposed method has several advantages over previous works. First, the search space is restricted to object boundaries, so efficiency is significantly improved. Second, the discriminative power of object contours results in a more consistent segmentation. Furthermore, the proposed method can effortlessly extend existing single-image segmentation methods to stereo scenarios. Experiments on the Adobe benchmark show the superior extraction accuracy and significantly improved efficiency of our method compared to the state-of-the-art. We also demonstrate in a few applications how our method can serve as a basic tool for stereo image editing.


International Conference on Internet Multimedia Computing and Service | 2014

How Important is Location in Saliency Detection

Tongwei Ren; Ran Ju; Yan Liu; Gangshan Wu

Current saliency detection methods mainly explore the potential of low-level and high-level visual features, such as color, texture, and faces, but treat location information as a weak aid or ignore it completely. In this paper, we reveal the importance of location information in saliency detection. We analyze THUS10000, the largest public image dataset for saliency detection, and examine the relationship between content location and saliency distribution. To further validate the effect of location information, we propose two location-based saliency detection approaches, location-based Gaussian distribution and location-based saliency propagation, which use no or only weak assistance from image content. Experimental results show that location-based saliency detection performs much better than random selection, and even better than most state-of-the-art saliency detection methods.


Pacific Rim Conference on Multimedia | 2016

Say Cheese: Personal Photography Layout Recommendation Using 3D Aesthetics Estimation

Ben Zhang; Ran Ju; Tongwei Ren; Gangshan Wu

Many people fail to take exquisite pictures of beautiful scenery for lack of professional photography knowledge. In this paper, we focus on helping people master daily-life photography using a computational layout recommendation method. Given a selected scene, we first generate several synthetic photos with different layouts using 3D estimation. Then we employ a 3D layout aesthetics estimation model to rank the proposed photos. The results with high scores are selected as layout recommendations, which are then translated into hints about where the photographer should stand. The key to our success lies in the combination of 3D structure with aesthetic models. A subjective evaluation shows that our method is preferred over previous work. We also give a few application examples to show the power of our method in creating better daily-life photographs.

Collaboration


Dive into Ran Ju's collaborations.

Top Co-Authors

Yan Liu

Hong Kong Polytechnic University
