Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ruyi Liu is active.

Publication


Featured research published by Ruyi Liu.


Journal of Visual Communication and Image Representation | 2016

Improved road centerlines extraction in high-resolution remote sensing images using shear transform, directional morphological filtering and enhanced broken lines connection

Ruyi Liu; Qiguang Miao; Bormin Huang; Jianfeng Song; Johan Debayle

Road information plays an important role in many civilian and military applications. Road centerlines extracted from high-resolution remote sensing images can be used to update a transportation database. However, it is difficult to extract a complete road network from high-resolution images, especially when the color of the road is close to that of the background. This paper proposes an improved method for road centerline extraction based on shear transform, directional segmentation, shape feature filtering, directional morphological filtering, tensor voting, multivariate adaptive regression splines (MARS), and enhanced broken-line connection. The proposed method consists of five steps. Firstly, directional segmentation based on spectral information and shear transform is used to segment the images and obtain an initial road map; shear transform is introduced to avoid losing road segment information. Secondly, we perform hole filling to remove the holes caused by noise in some road regions. Thirdly, reliable road segments are extracted using road shape features and directional morphological filtering, which separates roads from neighboring non-road objects so that each road candidate remains independent. Fourthly, tensor voting and MARS are exploited to extract smooth road centerlines, overcoming the spurs produced by thinning algorithms. Finally, we propose an enhanced broken-line connection algorithm to generate a complete road network, in which a new measure function is constructed and spectral similarity is introduced. We evaluate the performance on high-resolution aerial and QuickBird satellite images. The results demonstrate that the proposed method is promising.
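The enhanced broken-line connection step above decides whether two broken road segments should be linked by a measure function that also considers spectral similarity. A minimal sketch of such a score is below; the specific weights, the distance threshold, and the exact form of each term are illustrative assumptions, not the paper's actual measure function.

```python
import math

def connection_score(p, q, dir_p, dir_q, mean_p, mean_q,
                     w_d=0.4, w_a=0.4, w_s=0.2, d_max=50.0):
    """Hypothetical measure function for linking two broken road-segment
    endpoints p and q. Combines endpoint distance, direction consistency
    (segment angles in degrees), and spectral similarity (mean gray
    levels in [0, 255]). All weights/thresholds are illustrative."""
    d = math.hypot(q[0] - p[0], q[1] - p[1])
    if d > d_max:
        return 0.0                        # too far apart to connect
    dist_term = 1.0 - d / d_max
    ang = abs(dir_p - dir_q) % 180.0      # fold angle gap into [0, 90] deg
    ang = min(ang, 180.0 - ang)
    angle_term = 1.0 - ang / 90.0
    spec_term = 1.0 - min(abs(mean_p - mean_q) / 255.0, 1.0)
    return w_d * dist_term + w_a * angle_term + w_s * spec_term

# Close, nearly collinear, spectrally similar endpoints score high.
score = connection_score((0.0, 0.0), (30.0, 0.0), 0.0, 5.0, 120.0, 125.0)
```

Endpoint pairs whose score exceeds a threshold would then be bridged to complete the network.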


Bio-Inspired Computing: Theories and Applications | 2015

Remote Sensing Image Fusion Based on Shearlet and Genetic Algorithm

Qiguang Miao; Ruyi Liu; Yiding Wang; Jianfeng Song; Yining Quan; Yunan Li

Image fusion is a technology that can effectively enhance the utilization of image information, the accuracy of target recognition, and the interpretability of images. However, traditional fusion methods may cause information loss and image distortion. Hence, a novel remote sensing image fusion method is proposed in this paper. As one of the multi-scale geometric analysis tools, Shearlet has been widely used in image processing, and here it is used to decompose the images. Genetic Algorithm, an intelligent optimization algorithm, is applied to optimize the weighted factors of the fusion rule in order to improve fusion quality. Experimental results demonstrate the superiority and feasibility of this method.
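The idea of letting a genetic algorithm search the fusion weights can be sketched in a few lines. The fitness function below (variance of the fused coefficients) and all GA hyperparameters are illustrative assumptions; the paper optimises its own fusion-quality criterion over shearlet subbands.

```python
import random

def fuse(a, b, w):
    """Pixel-wise weighted fusion of two coefficient lists (stand-ins
    for shearlet subband coefficients)."""
    return [w * x + (1.0 - w) * y for x, y in zip(a, b)]

def fitness(coeffs):
    """Illustrative fitness: variance of the fused coefficients."""
    m = sum(coeffs) / len(coeffs)
    return sum((v - m) ** 2 for v in coeffs) / len(coeffs)

def ga_optimise_weight(a, b, pop_size=20, generations=30, seed=1):
    """Tiny real-coded GA over the scalar weight w in [0, 1]:
    truncation selection, arithmetic crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(fuse(a, b, w)), reverse=True)
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            w1, w2 = rng.sample(elite, 2)
            child = 0.5 * (w1 + w2) + rng.gauss(0.0, 0.05)
            children.append(min(max(child, 0.0), 1.0))
        pop = elite + children
    return max(pop, key=lambda w: fitness(fuse(a, b, w)))

band_a = [0.0, 10.0, 0.0, 10.0]   # high-contrast subband
band_b = [5.0, 5.0, 5.0, 5.0]     # flat subband
best_w = ga_optimise_weight(band_a, band_b)
```

Under this fitness the GA should push the weight toward the high-contrast input, mirroring how the paper's GA favors weights that improve the fused result.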


IEEE Access | 2018

A Semi-Supervised Image Classification Model Based on Improved Ensemble Projection Algorithm

Qiguang Miao; Ruyi Liu; Peipei Zhao; Yunan Li; Erqiang Sun

Image classification has been an incredibly active research topic in recent years, with widespread applications. Researchers have put forward many remarkable techniques, and semi-supervised learning (SSL) is one of them. However, because they do not take into account the relationships among samples from different classes, previous approaches often fail to obtain a clear decision boundary. In this paper, we propose an improved classification model based on SSL. First, we adopt a deformable part-based model to capture a stable global structure and salient objects; then we find a better decision boundary with a classification algorithm based on improved ensemble projection (IEP), which exploits the weighted average method. To evaluate the effectiveness of our approach, we run experiments not only on the LandUse-21 (L-21) data set but also on an architecture style data set. Experimental results show that our approach achieves state-of-the-art performance on both data sets. For each class in the L-21 data set, when 50 images are randomly chosen as training images, the multi-class average precision increases to 97.63%. For the architecture style data set, we achieve the best result with about 80% accuracy, roughly a 10% improvement over the previous best work. Although only a small amount of labeled data is used for training, we obtain satisfactory performance.
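The weighted average method mentioned for IEP amounts to combining the projection scores of several hypothesis classifiers with normalised weights. The helper below is a hypothetical stand-in for that combiner; the names, shapes, and weighting scheme are assumptions, not the paper's API.

```python
def weighted_ensemble(score_lists, weights):
    """Weighted average of per-classifier projection scores.
    score_lists: one list of scores (one per sample) per classifier.
    weights: one non-negative weight per classifier (normalised here)."""
    total = float(sum(weights))
    fused = [0.0] * len(score_lists[0])
    for scores, w in zip(score_lists, weights):
        for i, s in enumerate(scores):
            fused[i] += (w / total) * s
    return fused

# Two classifiers scoring two samples; the first gets 3x the weight.
fused = weighted_ensemble([[1.0, 0.0], [0.0, 1.0]], [3.0, 1.0])
```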


Neurocomputing | 2016

Dynamic character grouping based on four consistency constraints in topographic maps

Pengfei Xu; Qiguang Miao; Ruyi Liu; Xiaojiang Chen; Xunli Fan

In optical character recognition, text strings must first be extracted from images. Since only complete text strings can accurately express the meanings of words, the extracted individual characters should be grouped into text strings before recognition. Topographic maps contain many text strings whose characters are multi-colored, multi-sized, and multi-oriented, and existing methods cannot group them effectively. In this paper, a dynamic character grouping method is proposed to group characters into text strings based on four consistency constraints: color, size, spacing, and direction. Characters in the same word have similar colors, sizes, and inter-character distances, and they lie on curves with limited bending, whereas characters in different words do not. Based on these features, the background pixels around the characters are expanded to link the characters into text strings. Owing to the color consistency constraint, characters with different colors can be grouped well, and the improved direction consistency constraint lets the method handle curved character strings more accurately. The final experimental results show that this method groups characters more efficiently, especially when the beginning or end characters of a word are close to the characters of another word.
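The consistency constraints above can be illustrated with a toy grouping pass. This is a greedy left-to-right sketch under assumed thresholds, checking only the color, size, and spacing constraints; the paper's direction constraint and pixel-expansion linking are omitted for brevity.

```python
import math

def consistent(c1, c2, color_tol=30.0, size_tol=0.3, gap_factor=2.0):
    """Pairwise consistency check between two characters, each given as
    (x, y, size, gray_level). All thresholds are illustrative; the
    direction constraint is omitted for brevity."""
    (x1, y1, s1, g1), (x2, y2, s2, g2) = c1, c2
    if abs(g1 - g2) > color_tol:                    # color consistency
        return False
    if abs(s1 - s2) / max(s1, s2) > size_tol:       # size consistency
        return False
    gap = math.hypot(x2 - x1, y2 - y1)
    return gap <= gap_factor * max(s1, s2)          # spacing consistency

def group_characters(chars):
    """Greedy left-to-right grouping of characters into strings."""
    strings = []
    for c in sorted(chars, key=lambda c: c[0]):
        if strings and consistent(strings[-1][-1], c):
            strings[-1].append(c)
        else:
            strings.append([c])
    return strings

# Three closely spaced same-color characters, then a distant one.
chars = [(0, 0, 10, 100), (12, 0, 10, 100), (24, 0, 10, 100),
         (200, 0, 10, 100)]
groups = group_characters(chars)
```

The distant fourth character violates the spacing constraint, so it starts a new string.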


CCF Chinese Conference on Computer Vision | 2015

A Novel Dynamic Character Grouping Approach Based on the Consistency Constraints

Pengfei Xu; Qiguang Miao; Ruyi Liu; Feng Chen; Xiaojiang Chen; Weike Nie

In optical character recognition, text strings are extracted from images so that they can be edited, formatted, indexed, searched, or translated. Characters should be grouped into text strings before recognition, but existing methods cannot group characters accurately. This paper proposes a new approach to grouping characters into text strings based on consistency constraints. According to the features of the characters in topographic maps, three kinds of consistency constraints are proposed: color, size, and direction. Owing to the color consistency constraint, characters with different colors can be grouped well, and the improved direction consistency constraint lets the method handle curved character strings more accurately. The final experimental results show that this method groups characters more accurately and lays a good foundation for text recognition.


Archive | 2018

A Review of Recent Advances in Identity Identification Technology Based on Biological Features

Jianan Tang; Pengfei Xu; Weike Nie; Yi Zhang; Ruyi Liu

With the development of social informatization, the problems of personal information security are becoming serious, and identity identification has become essential in government and business. In this paper, we summarize and analyze the principles and methods of identification based on biometrics, covering current research on fingerprint, palmprint, iris, face, vein, gait, and signature recognition, and we comparatively analyze their differences in false recognition rate, stability, acquisition difficulty, and resistance to counterfeiting. Finally, the prospects of biometric recognition technologies are discussed.


Archive | 2018

LaG-DESIQUE: A Local-and-Global Blind Image Quality Evaluator Without Training on Human Opinion Scores

Ruyi Liu; Yi Zhang; Damon M. Chandler; Qiguang Miao; Tiange Liu

This paper extends our previous DESIQUE [1] algorithm in a local-and-global way (LaG-DESIQUE) to blindly measure image quality without training on human opinion scores. The local DESIQUE extracts block-based log-derivative features and evaluates image quality by measuring the multivariate Gaussian distance between selected natural and test image patches. The global DESIQUE extracts image-based log-derivative features, and image quality is estimated with a two-stage framework trained on a set of regenerated distorted images whose quality scores were estimated by the MAD [2] algorithm. The overall quality is the weighted average of the local and global DESIQUE scores. Tests on several image databases demonstrate that LaG-DESIQUE performs competitively in predicting image quality.
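The multivariate Gaussian distance used by the local branch can be sketched as a NIQE-style distance between Gaussian fits of the two feature sets. The feature dimensions and data below are synthetic placeholders; the paper's exact log-derivative features and fitting procedure differ.

```python
import numpy as np

def mvg_quality_distance(natural_feats, test_feats):
    """Distance between multivariate Gaussian fits of natural-patch and
    test-patch feature vectors (rows = patches). Larger values suggest
    more distortion. A sketch of the idea, not the paper's exact fit."""
    mu_n = natural_feats.mean(axis=0)
    mu_t = test_feats.mean(axis=0)
    cov_n = np.cov(natural_feats, rowvar=False)
    cov_t = np.cov(test_feats, rowvar=False)
    pooled = (cov_n + cov_t) / 2.0          # average the two covariances
    diff = mu_n - mu_t
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(200, 3))    # pristine-patch features
similar = rng.normal(0.0, 1.0, size=(200, 3))    # undistorted test image
distorted = rng.normal(2.0, 1.0, size=(200, 3))  # shifted feature stats
d_similar = mvg_quality_distance(natural, similar)
d_distorted = mvg_quality_distance(natural, distorted)
```

Features drawn from the natural-image statistics score near zero, while shifted statistics score high, which is the signal the quality evaluator relies on.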


Neurocomputing | 2018

Multiscale road centerlines extraction from high-resolution aerial imagery

Ruyi Liu; Qiguang Miao; Jianfeng Song; Yining Quan; Yunan Li; Pengfei Xu; Jing Dai

Accurate road extraction from high-resolution aerial imagery has many applications, such as urban planning and vehicle navigation systems. Common road extraction methods are based on classification algorithms, which require robust handcrafted road features, and designing such features is difficult. For road centerline extraction, the existing algorithms also have limitations such as spurs and high computational cost. To address these issues to some extent, we introduce feature learning based on deep learning to extract robust features automatically, and present a method to extract road centerlines based on multiscale Gabor filters and multiple directional non-maximum suppression. The proposed algorithm consists of four steps. Firstly, the aerial imagery is classified by a pixel-wise classifier based on a convolutional neural network (CNN), which learns features, especially structural features, from the raw data automatically. Then, edge-preserving filtering is conducted on the resulting classification map, with the original imagery serving as the guidance image, to preserve the edges and details of the road. After that, post-processing based on shape features extracts more reliable roads. Finally, multiscale Gabor filters and multiple directional non-maximum suppression are integrated to obtain a complete and accurate road network. Experimental results show that the proposed method achieves comparable or higher quantitative results, as well as more satisfactory visual performance.
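The multiscale Gabor filtering in the final step evaluates oriented, scale-tuned kernels over the road map. Below is a minimal construction of one real-valued Gabor kernel; a multiscale bank would vary (sigma, wavelength) pairs and orientations. The parameter values are illustrative, not the paper's.

```python
import math

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5):
    """Real part of a 2-D Gabor kernel at orientation theta (radians):
    a Gaussian envelope modulating a cosine carrier along xr."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2)
                                / (2.0 * sigma ** 2))
            carrier = math.cos(2.0 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

kernel = gabor_kernel(7, 2.0, 0.0, 4.0)
```

Convolving the road map with such kernels at several scales and orientations gives the oriented responses on which directional non-maximum suppression then keeps only centerline-aligned maxima.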


Neurocomputing | 2017

A multi-scale fusion scheme based on haze-relevant features for single image dehazing

Yunan Li; Qiguang Miao; Ruyi Liu; Jianfeng Song; Yining Quan; Yuhui Huang

Outdoor images are often degraded by aerosols suspended in the atmosphere in bad weather conditions such as haze. To cope with this phenomenon, researchers have proposed many approaches, among which single-image techniques draw the most attention. Recently, a fusion-based strategy that derives two enhanced images from a single input and blends them to recover a haze-free image has achieved good results. However, deficiencies remain in the fusion inputs and weight maps, which make the restorations less natural. In this paper, we propose a multi-scale fusion scheme for single image dehazing. We first use adaptive color normalization to eliminate color distortion, a phenomenon common under haze. Then two enhanced images, including our newly presented local-detail-enhanced image, are derived for blending. Five haze-relevant features (dark channel, clarity, saliency, luminance, and chromaticity) are investigated as candidate weight maps for the fusion; dark channel, clarity, and saliency are finally selected for their expressive power and low interdependence. The fusion proceeds layer-by-layer with a pyramid strategy, and the multi-scale blended images are combined in a bottom-up manner. Quantitative experiments demonstrate that our approach is effective and yields better results than other methods.
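At each pyramid level, the blend is a per-pixel weighted average of the derived images under normalised weight maps. The flat, single-scale sketch below shows just that core operation on 1-D "images"; the paper's actual scheme applies it layer-by-layer on Laplacian/Gaussian pyramids with dark-channel, clarity, and saliency maps.

```python
def fuse_with_weight_maps(derived_images, weight_maps):
    """Per-pixel weighted blend of derived images under weight maps,
    normalising the weights at each pixel so they sum to one."""
    fused = []
    for i in range(len(derived_images[0])):
        wsum = sum(wm[i] for wm in weight_maps) or 1.0  # avoid /0
        fused.append(sum(img[i] * wm[i] / wsum
                         for img, wm in zip(derived_images, weight_maps)))
    return fused

# Two derived images of two pixels; each weight map prefers a different one.
fused = fuse_with_weight_maps([[10.0, 0.0], [0.0, 10.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
```

Each output pixel is taken from whichever derived image the weight maps favor there, which is what lets the fusion keep the best-restored regions of both inputs.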


International Journal of Bio-inspired Computation | 2017

Remote sensing image fusion based on shearlet and genetic algorithm

Qiguang Miao; Ruyi Liu; Yining Quan; Jianfeng Song

Image fusion combines information from two or more images of a scene into a single composite image, producing more information for visual perception or computer processing. Many algorithms have been developed in recent years, but information loss and image distortion remain. To produce a satisfactory fusion result, an image fusion algorithm based on shearlet and genetic algorithm is proposed. As one of the multi-scale geometric analysis (MGA) tools, shearlet is equipped with a rich mathematical structure associated with a multi-resolution analysis. The genetic algorithm (GA) is an optimisation algorithm, so it can be applied to image fusion wherever parameter optimisation is required. Firstly, shearlet is performed on each input image to obtain its low-pass and high-pass coefficients. Then, GA is used to optimise the weighted factors in the fusion rule. Finally, the fused image is obtained by the inverse shearlet transform. Experimental results have demonstrated that our method could ac...

Collaboration


Dive into Ruyi Liu's collaborations.

Top Co-Authors


Bormin Huang

University of Wisconsin-Madison
