
Publication


Featured research published by Ligang Zhang.


International Conference on Neural Information Processing | 2015

Class-Semantic Color-Texture Textons for Vegetation Classification

Ligang Zhang; Brijesh Verma; David R. Stockwell

This paper proposes a new color-texture texton based approach for roadside vegetation classification in natural images. Two individual sets of class-semantic textons are first generated from color and filter bank texture features for each class. The color and texture features of testing pixels are then mapped into one of the generated textons using the nearest distance, resulting in two texton occurrence matrices – one for color and one for texture. The classification is achieved by aggregating color-texture texton occurrences over all pixels in each over-segmented superpixel using a majority voting strategy. Our approach outperforms previous benchmarking approaches and achieves accuracies of 81% in classifying seven objects on a cropped-region dataset and 74.5% in classifying six objects on an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.
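The core mechanism of the abstract above — mapping each pixel's features to the nearest class-semantic texton and labelling a superpixel by majority vote — can be sketched as follows. This is an illustrative toy, not the authors' code; all function names and the tiny RGB-like features are assumptions.

```python
# Hypothetical sketch of class-semantic texton classification. Each class
# has a set of learnt textons (cluster centres in feature space); a pixel
# is assigned the class of its nearest texton, and a superpixel takes the
# majority class over its pixels.
from collections import Counter
import math

def nearest_texton_class(feature, textons_by_class):
    """Return the class whose texton is closest (Euclidean) to `feature`."""
    best_class, best_dist = None, math.inf
    for cls, textons in textons_by_class.items():
        for t in textons:
            d = math.dist(feature, t)
            if d < best_dist:
                best_class, best_dist = cls, d
    return best_class

def classify_superpixel(pixel_features, textons_by_class):
    """Majority vote over per-pixel texton classes within one superpixel."""
    votes = Counter(nearest_texton_class(f, textons_by_class)
                    for f in pixel_features)
    return votes.most_common(1)[0][0]

# Toy example: two classes with one colour texton each (RGB-like features).
textons = {"grass": [(0.1, 0.8, 0.1)], "road": [(0.5, 0.5, 0.5)]}
pixels = [(0.1, 0.7, 0.2), (0.2, 0.9, 0.1), (0.5, 0.5, 0.4)]
print(classify_superpixel(pixels, textons))  # grass
```

In the paper the vote aggregates separate color and texture texton occurrence matrices; here a single feature space stands in for both to keep the sketch short.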


Pattern Recognition | 2016

Spatial contextual superpixel model for natural roadside vegetation classification

Ligang Zhang; Brijesh Verma; David R. Stockwell

In this paper, we present a novel Spatial Contextual Superpixel Model (SCSM) for vegetation classification in natural roadside images. The SCSM accomplishes the goal by transforming the classification task from a pixel into a superpixel domain for more effective adoption of both local and global spatial contextual information between superpixels in an image. First, the image is segmented into a set of superpixels with strong homogeneous texture, from which Pixel Patch Selective (PPS) features are extracted to train class-specific binary classifiers for obtaining Contextual Superpixel Probability Maps (CSPMs) for all classes, coupled with spatial constraints. A set of superpixel candidates with the highest probabilities is then determined to represent global characteristics of a testing image. A superpixel merging strategy is further proposed to progressively merge superpixels with low probabilities into the most similar neighbors by performing a double-check on whether a superpixel and its neighbour accept each other, as well as enforcing a global contextual constraint. We demonstrate high performance by the proposed model on two challenging natural roadside image datasets from the Department of Transport and Main Roads and on the Stanford background benchmark dataset. Highlights: a novel Spatial Contextual Superpixel Model (SCSM) for natural vegetation classification; a new reverse superpixel merging strategy to progressively merge superpixels; high performance on challenging natural datasets and the Stanford background data.
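The "double-check" in the merging strategy above — a low-probability superpixel is merged into its most similar neighbour only if that neighbour also accepts it — can be illustrated as a mutual-best-match test. This is a hypothetical sketch with invented names and toy similarities, not the published implementation.

```python
# Illustrative double-check merging rule: a merge (low -> neighbour) is
# accepted only when the preference is mutual, i.e. each superpixel is
# the other's most similar neighbour.
def mutual_merge_pairs(similarity, low_prob):
    """similarity: dict mapping superpixel id -> {neighbour id: similarity}.
    low_prob: set of superpixel ids with low class probability.
    Returns (low, neighbour) pairs that pass the mutual double-check."""
    def best(sp):
        nbrs = similarity[sp]
        return max(nbrs, key=nbrs.get)  # most similar neighbour
    pairs = []
    for sp in sorted(low_prob):
        nb = best(sp)
        if best(nb) == sp:  # double-check: the neighbour accepts sp back
            pairs.append((sp, nb))
    return pairs

# Toy adjacency: "a" and "b" are each other's best match; "c" is not.
sim = {"a": {"b": 0.9, "c": 0.2},
       "b": {"a": 0.9, "c": 0.3},
       "c": {"a": 0.2, "b": 0.3}}
print(mutual_merge_pairs(sim, {"a", "c"}))  # [('a', 'b')]
```

The actual model applies this check progressively while also enforcing a global contextual constraint; the sketch isolates only the mutual-acceptance step.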


International Conference on Natural Computation | 2015

Roadside vegetation classification using color intensity and moments

Ligang Zhang; Brijesh Verma; David R. Stockwell

Roadside vegetation classification plays a significant role in many applications, such as grass fire risk assessment and vegetation growth condition monitoring. Most existing approaches focus on the use of vegetation indices from the invisible spectrum, and only limited attention has been given to using visual features, such as color and texture. This paper presents a new approach for vegetation classification using a fusion of color and texture features. The color intensity features are extracted in the opponent color space, while the texture features comprise three color moments. We demonstrate 79% accuracy of the approach on a dataset created from real world video data collected by the Department of Transport and Main Roads (DTMR), Queensland, Australia, and promising results on a set of natural images. We also highlight some typical challenges for roadside vegetation classification in natural conditions.


International Joint Conference on Neural Networks | 2016

Spatially Constrained Location Prior for scene parsing

Ligang Zhang; Brijesh Verma; David R. Stockwell; Sujan Chowdhury

Semantic context is an important and useful cue for scene parsing in complicated natural images with a substantial amount of variations in objects and the environment. This paper proposes Spatially Constrained Location Prior (SCLP) for effective modelling of global and local semantic context in the scene in terms of inter-class spatial relationships. Unlike existing studies focusing on either relative or absolute location prior of objects, the SCLP effectively incorporates both relative and absolute location priors by calculating object co-occurrence frequencies in spatially constrained image blocks. The SCLP is general and can be used in conjunction with various visual feature-based prediction models, such as Artificial Neural Networks and Support Vector Machine (SVM), to enforce spatial contextual constraints on class labels. Using SVM classifiers and a linear regression model, we demonstrate that the incorporation of SCLP achieves superior performance compared to the state-of-the-art methods on the Stanford background and SIFT Flow datasets.
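The key idea in the SCLP abstract — learning both absolute and relative location priors from class co-occurrence frequencies in spatially constrained image blocks — can be sketched on toy label grids. Function and variable names here are assumptions for illustration, not the paper's code.

```python
# Sketch of SCLP-style priors: the absolute prior counts how often each
# class occurs in each grid block; the relative prior counts class
# co-occurrences between a block and a spatially constrained neighbour
# (here: the block directly above it).
from collections import Counter, defaultdict

def location_priors(label_grids):
    """label_grids: list of 2D class-label grids (one per training image)."""
    absolute = defaultdict(Counter)   # block position -> class counts
    relative = Counter()              # (class_above, class_below) pairs
    for grid in label_grids:
        for r, row in enumerate(grid):
            for c, label in enumerate(row):
                absolute[(r, c)][label] += 1
                if r > 0:
                    relative[(grid[r - 1][c], label)] += 1
    return absolute, relative

# Toy training grids (2x2 blocks): sky tends to sit above grass.
grids = [[["sky", "sky"], ["grass", "grass"]],
         [["sky", "sky"], ["grass", "road"]]]
absolute, relative = location_priors(grids)
print(absolute[(0, 0)]["sky"])     # 2: sky appears twice in the top-left block
print(relative[("sky", "grass")])  # 3: sky-above-grass co-occurrences
```

In the paper these frequencies are then used to constrain the class labels produced by visual classifiers such as SVMs; the sketch covers only the prior-building step.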


Machine Vision and Applications | 2017

Superpixel-based class-semantic texton occurrences for natural roadside vegetation segmentation

Ligang Zhang; Brijesh Verma

Vegetation segmentation from roadside data is a field that has received relatively little attention in previous studies, but it holds great potential for a wide range of real-world applications, such as road safety assessment and vegetation condition monitoring. In this paper, we present a novel approach that generates class-semantic color–texture textons and aggregates superpixel-based texton occurrences for vegetation segmentation in natural roadside images. Pixel-level class-semantic textons are learnt by generating two individual sets of bag-of-word visual dictionaries from color and filter bank texture features separately for each object class using manually cropped training data. A testing image is first oversegmented into a set of homogeneous superpixels. The color and texture features of all pixels in each superpixel are extracted and further mapped to one of the learnt textons using the nearest distance metric, resulting in a color and a texture texton occurrence matrix. The color and texture texton occurrences are aggregated using a linear mixing method over each superpixel, and the segmentation is finally achieved using a simple yet effective majority voting strategy. Evaluations on two datasets, video data collected by the Department of Transport and Main Roads, Queensland, Australia, and a public roadside grass dataset, show high accuracy of the proposed approach. We also demonstrate the effectiveness of the approach for vegetation segmentation in real-world scenarios.


International Joint Conference on Neural Networks | 2016

Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside vegetation classification

Ligang Zhang; Brijesh Verma; David R. Stockwell; Sujan Chowdhury

Roadside vegetation classification has recently attracted increasing attention, due to its significance in applications such as vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible-feature-based classifiers or invisible-feature-based thresholds, which often suffer from a generalization problem on new data. This paper proposes an approach that aggregates pixel-level supervised classification and cluster-level texton occurrence within a voting strategy over superpixels for vegetation classification, which takes into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are further integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance using low-resolution images from the Croatia roadside grass dataset.


Digital Image Computing: Techniques and Applications | 2015

Class-Semantic Textons with Superpixel Neighborhoods for Natural Roadside Vegetation Classification

Ligang Zhang; Brijesh Verma

Accurate classification of roadside vegetation plays a significant role in many practical applications, such as vegetation growth management and fire hazard identification. However, relatively little attention has been paid to this field in previous studies, particularly for natural data. In this paper, a novel approach is proposed for natural roadside vegetation classification, which generates class-semantic color-texture textons at a pixel level and then makes a collective classification decision in a neighborhood of superpixels. It first learns two individual sets of bag-of-word visual dictionaries (i.e. class-semantic textons) from color and filter-bank texture features respectively for each object. The color and texture features of all pixels in each superpixel in a test image are mapped into one of the learnt textons using the nearest Euclidean distance, which are further aggregated into class probabilities for each superpixel. The class probabilities in each superpixel and its neighboring superpixels are combined using a linear weighted mixing, and the classification of this superpixel is finally achieved by assigning it the class with the highest class probability. Our approach shows higher accuracy than four benchmarking approaches on both a cropped-region dataset and an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.


Expert Systems With Applications | 2018

Density Weighted Connectivity of Grass Pixels in Image Frames for Biomass Estimation

Ligang Zhang; Brijesh Verma; David R. Stockwell; Sujan Chowdhury

Accurate estimation of the biomass of roadside grasses plays a significant role in applications such as fire-prone region identification. Current solutions heavily depend on field surveys, remote sensing measurements and image processing using reference markers, which often demand large investments of time, effort and cost. This paper proposes Density Weighted Connectivity of Grass Pixels (DWCGP) to automatically estimate grass biomass from roadside image data. The DWCGP calculates the length of continuously connected grass pixels along a vertical orientation in each image column, and then weights the length by the grass density in a surrounding region of the column. Grass pixels are classified using feedforward artificial neural networks and the dominant texture orientation at every pixel is computed using multi-orientation Gabor wavelet filter voting. Evaluations on a field survey dataset show that the DWCGP reduces Root-Mean-Square Error from 5.84 to 5.52 by additionally considering grass density on top of grass height. The DWCGP shows robustness to non-vertical grass stems and to changes of both Gabor filter parameters and surrounding region widths. It also has performance close to human observation and higher than eight baseline approaches, as well as promising results for classifying low vs. high fire risk and identifying fire-prone road regions.
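The DWCGP measure described above — the longest vertical run of connected grass pixels per column, weighted by grass density in a surrounding window of columns — can be sketched on a binary grass mask. This is a minimal toy under assumed names and a simplified window; it omits the neural-network classification and Gabor orientation steps.

```python
# Sketch of the DWCGP idea on a 2D 0/1 grass mask (rows x cols): for each
# column, find the longest run of vertically connected grass pixels, then
# weight it by the fraction of grass pixels in a window of nearby columns.
def dwcgp(mask, half_width=1):
    """Returns one density-weighted connected grass length per column."""
    rows, cols = len(mask), len(mask[0])
    scores = []
    for c in range(cols):
        # longest vertical run of grass pixels in this column
        run = best = 0
        for r in range(rows):
            run = run + 1 if mask[r][c] else 0
            best = max(best, run)
        # grass density in the surrounding column window
        lo, hi = max(0, c - half_width), min(cols, c + half_width + 1)
        region = [mask[r][cc] for r in range(rows) for cc in range(lo, hi)]
        density = sum(region) / len(region)
        scores.append(best * density)
    return scores

mask = [[0, 1, 1],
        [1, 1, 1],
        [1, 1, 0]]
print(dwcgp(mask))  # middle column scores highest: long run, dense window
```

In the paper the grass mask comes from neural-network pixel classification and the run direction follows the dominant Gabor orientation rather than being strictly vertical.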


ACM Computing Surveys | 2018

Facial Expression Analysis under Partial Occlusion: A Survey

Ligang Zhang; Brijesh Verma; Dian Tjondronegoro; Vinod Chandran

Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress in the past few decades, driven by its importance for applications in psychology, security, health, entertainment, and human–computer interaction. The vast majority of completed FEA studies are based on nonoccluded faces collected in a controlled laboratory environment. Automatic expression recognition tolerant to partial occlusion remains less understood, particularly in real-world scenarios. In recent years, efforts investigating techniques to handle partial occlusion for FEA have increased. The time is therefore right for a comprehensive review of these developments and of the current state of the art. This survey provides such a review of recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion critical for robust performance in FEA systems. It outlines existing challenges in overcoming partial occlusion and discusses possible opportunities in advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion and aimed at promoting better-informed and benchmarked future work.


Archive | 2017

Roadside Video Data Analysis Framework

Brijesh Verma; Ligang Zhang; David R. Stockwell

This chapter introduces a general framework for roadside video data analysis. The main processing steps in the framework are described separately.

Collaboration


Ligang Zhang's closest collaborators and their affiliations.

Top Co-Authors

Brijesh Verma
Central Queensland University

David R. Stockwell
Central Queensland University

Sujan Chowdhury
Central Queensland University

Dian Tjondronegoro
Queensland University of Technology

Vinod Chandran
Queensland University of Technology