Publication


Featured research published by Zhiwen Fang.


Computers and Electronics in Agriculture | 2015

Fine-grained maize tassel trait characterization with multi-view representations

Hao Lu; Zhiguo Cao; Yang Xiao; Zhiwen Fang; Yanjun Zhu; Ke Xian

Highlights:
- A novel pipeline is proposed for efficient tassel potential region extraction.
- We propose to characterize the maize tassel with a multi-view mechanism.
- Effective tassel detection is performed for fine-grained trait characterization.
- Time-series monitoring is executed to acquire tassel trait growth parameters.
- We have established a relatively large-scale maize tassel dataset.

The characteristics of maize tassel traits are important cues for improving farming operations to enhance production. Currently, the information obtained from the maize tassel depends mainly on human labor, which is subjective and labor-intensive. Recent studies have introduced several image-based approaches to overcome this limitation with a modest degree of success. However, due to variations in cultivar, pose, and illumination, and the cluttered background, characterizing maize tassel traits with computer vision remains a challenging problem. To this end, an automatic fine-grained machine vision system termed mTASSEL is developed in this paper. We propose to characterize the maize tassel with multi-view representations that combine multiple feature views and different channel views, which can alleviate the influence of environmental variations. In addition to the total tassel number trait, fine-grained tassel traits, including tassel color, branch number, length, width, perimeter, and diameter, are further characterized to enable time-series monitoring. To boost related research, a relatively large-scale maize tassel dataset (10 sequences with 16,031 samples) is first constructed by our team. The experimental results demonstrate that both system modules significantly outperform other state-of-the-art approaches by large margins (26.0% for detection and 7.8% for segmentation). The results of this research can serve automatic growth stage detection, accurate yield estimation, machine detasseling, and field-based phenotyping research. The dataset and source code of the system are available online.
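
A minimal sketch of the multi-view idea, not the paper's exact system: each candidate tassel region is described by several feature views (HOG and a color histogram are assumed here) computed over several channel views (grayscale and HSV hue are assumed), and the concatenated descriptor feeds one classifier. Window size, bin counts, and the SVM are illustrative choices.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def multi_view_descriptor(bgr_patch):
    """Concatenate multiple feature views over multiple channel views."""
    patch = cv2.resize(bgr_patch, (64, 64))
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    hue = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)[:, :, 0]
    views = []
    # Feature view 1: HOG, computed on two channel views (gray and hue).
    for channel in (gray, hue):
        views.append(hog(channel, orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    # Feature view 2: per-channel color histograms over the BGR planes.
    hist = [np.histogram(patch[:, :, c], bins=16, range=(0, 256))[0]
            for c in range(3)]
    views.append(np.concatenate(hist).astype(float))
    return np.concatenate(views)

# Hypothetical usage on labelled tassel / non-tassel patches:
# X = np.stack([multi_view_descriptor(p) for p in patches])
# clf = SVC().fit(X, labels)
```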


IEEE Transactions on Image Processing | 2016

Adobe Boxes: Locating Object Proposals Using Object Adobes

Zhiwen Fang; Zhiguo Cao; Yang Xiao; Lei Zhu; Junsong Yuan

Despite previous efforts on object proposals, the detection rates of existing approaches are still not satisfactory. To address this, we propose Adobe Boxes, which efficiently locates potential objects with fewer proposals by searching for object adobes, i.e., salient object parts that are easy to perceive. Because of the visual difference between an object and its surroundings, an object adobe obtained from a local region has a high probability of being part of an object and is thus capable of depicting the locative information of the proto-object. Our approach comprises three main procedures. First, coarse object proposals are acquired by employing randomly sampled windows. Then, based on local-contrast analysis, object adobes are identified within the enlarged bounding boxes that correspond to the coarse proposals. The final object proposals are obtained by converging the bounding boxes to tightly surround the object adobes. Meanwhile, object adobes can also serve as a refinement step that improves the detection rate of most state-of-the-art methods. Extensive experiments on four challenging datasets (PASCAL VOC2007, VOC2010, VOC2012, and ILSVRC2014) demonstrate that the detection rate of our approach generally outperforms the state-of-the-art methods, especially with a relatively small number of proposals. The average time consumed on one image is about 48 ms, which nearly meets the real-time requirement.
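
A minimal sketch of the three-stage pipeline described above, under simplifying assumptions: "adobes" are approximated by pixels whose color contrast against the enlarged window's mean is high, and the box is converged to the bounding rectangle of those pixels. The sampling scheme, enlargement factor, and threshold are illustrative, not the paper's exact choices.

```python
import numpy as np

def adobe_box(image, box, enlarge=1.5, contrast_thresh=40.0):
    """Stage 2-3: find adobe pixels in the enlarged window, converge the box."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    ew, eh = int(w * enlarge), int(h * enlarge)
    x0 = max(int(cx - ew / 2), 0); y0 = max(int(cy - eh / 2), 0)
    x1 = min(x0 + ew, image.shape[1]); y1 = min(y0 + eh, image.shape[0])
    region = image[y0:y1, x0:x1].astype(float)
    # Local-contrast analysis: distance of each pixel from the region's mean color.
    contrast = np.linalg.norm(region - region.mean(axis=(0, 1)), axis=2)
    ys, xs = np.nonzero(contrast > contrast_thresh)
    if len(xs) == 0:
        return box  # no salient adobe found; keep the coarse proposal
    # Converge the box to tightly surround the detected adobe pixels.
    return (x0 + xs.min(), y0 + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

def coarse_proposals(image, n=1000, rng=np.random.default_rng(0)):
    """Stage 1: coarse proposals from randomly sampled windows."""
    H, W = image.shape[:2]
    ws = rng.integers(20, W, n); hs = rng.integers(20, H, n)
    xs = rng.integers(0, W - ws + 1); ys = rng.integers(0, H - hs + 1)
    return list(zip(xs, ys, ws, hs))

# proposals = [adobe_box(img, b) for b in coarse_proposals(img)]
```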


IEEE Transactions on Automation Science and Engineering | 2018

Toward Good Practices for Fine-Grained Maize Cultivar Identification With Filter-Specific Convolutional Activations

Hao Lu; Zhiguo Cao; Yang Xiao; Zhiwen Fang; Yanjun Zhu

Crop cultivar identification is an important aspect of agricultural systems. Traditional solutions involve excessive human intervention, which is labor-intensive and time-consuming. In addition, cultivar identification is a typical task of fine-grained visual categorization (FGVC). Compared with other common topics in FGVC, studies of this problem are somewhat lagging and limited. In this paper, targeting four Chinese maize cultivars (Jundan No.20, Wuyue No.3, Nongda No.108, and Zhengdan No.958), we first consider the problem of identifying the maize cultivar from its tassel characteristics by computer vision. In particular, a novel fine-grained maize cultivar identification dataset termed HUST-FG-MCI, which contains 5,000 images, is first constructed. To better capture the textural differences in a weakly supervised manner, we propose an effective feature encoding mechanism based on a deep convolutional neural network and the Fisher vector (FV). The mechanism tends to highlight subtle object patterns via filter-specific convolutional representations and thus provides strong discrimination for cultivar identification. Experimental results demonstrate that our method outperforms other state-of-the-art approaches. We also show that FV encoding can weaken the linear dependency between convolutional activations, that redundant filters exist in the convolutional layer, and that high accuracy can be maintained with relatively low-dimensional convolutional features and one or two Gaussian components in the FV.

Note to Practitioners: In-field cultivar identification remains an open question for industrial applications in agriculture. This paper describes a practical computer vision system that explores the feasibility of automatic maize cultivar identification. Our system shows potential for deployment in an embedded system, provided the convolutional models can be compressed to address storage issues. Aside from using the fixed image acquisition device described in this paper, another promising direction is to integrate our system into an unmanned aircraft to achieve flexible field-based observations.
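
A minimal sketch of Fisher-vector encoding over convolutional activations, in the spirit of the mechanism above: each spatial position of a conv feature map is treated as a local descriptor, a small GMM is fitted, and the FV (only the gradient with respect to the Gaussian means, a common simplification) forms the image signature. The layer choice and GMM size are assumptions; note the abstract's observation that one or two Gaussian components can suffice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    """Simplified FV: gradients w.r.t. Gaussian means only, with the usual
    power and L2 normalizations."""
    q = gmm.predict_proba(descriptors)                          # (N, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]     # (N, K, D)
    diff /= np.sqrt(gmm.covariances_)[None, :, :]               # whiten (diag covariances)
    fv = (q[:, :, None] * diff).sum(axis=0)                     # (K, D)
    fv /= len(descriptors) * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                      # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                    # L2 normalization

# conv_map: (H, W, D) activations from some convolutional layer (assumed given).
# descs = conv_map.reshape(-1, conv_map.shape[-1])
# gmm = GaussianMixture(n_components=2, covariance_type='diag').fit(descs)
# signature = fisher_vector_means(descs, gmm)
```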


Chinese Automation Congress | 2015

Selective features for RGB-D saliency

Lei Zhu; Zhiguo Cao; Zhiwen Fang; Yang Xiao; Jin Wu; Huiping Deng; Jing Liu

The depth image has greatly broadened various applications of computer vision; however, it is seldom explored in the field of salient object detection. In this paper, we propose a learning-based approach for extracting saliency from RGB-D images. To best fit the contrast-based stimuli that guide the saliency search in the human visual system, a large set of visual attributes extracted from several multi-scale feature channels, such as the color channels, the texture channels, and the depth channel, is investigated to represent the contrasts between segments. In addition, discriminative features can be automatically selected by learning several decision trees on the ground truth, and those features are further utilized to locate the salient regions by voting over the predictions of the trees. We argue that introducing selective features from the depth information of a scene can benefit saliency detection and achieve better performance than using only the appearance features extracted from the color images of the same scene. The experimental results demonstrate that our method outperforms 10 other state-of-the-art approaches on a large RGB-D benchmark.
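
A minimal sketch of the learning-based scheme above, with assumed inputs: each segment is described by contrast features against its neighbors across color, texture, and depth channels, and an ensemble of decision trees (a random forest stands in here) votes on per-segment saliency. The feature definition and forest size are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_contrast_features(seg_means, adjacency):
    """seg_means: (S, C) mean color/texture/depth values per segment.
    adjacency: list of neighbor-index lists, one per segment.
    Returns (S, C) contrasts of each segment against its neighbors."""
    feats = np.zeros(seg_means.shape, dtype=float)
    for i, nbrs in enumerate(adjacency):
        if nbrs:
            feats[i] = np.abs(seg_means[i] - seg_means[nbrs].mean(axis=0))
    return feats

# Train on segments with ground-truth saliency labels, then vote at test time:
# forest = RandomForestClassifier(n_estimators=50).fit(train_feats, train_labels)
# saliency = forest.predict_proba(test_feats)[:, 1]  # fraction of tree votes
```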


IEEE Transactions on Aerospace and Electronic Systems | 2015

Recognizing the formations of CVBG based on multiviewpoint context

Chunhua Deng; Zhiguo Cao; Yang Xiao; Yin Chen; Zhiwen Fang; Ruicheng Yan

Recently, context-based recognition approaches have gained increasing attention. However, conventional algorithms usually do not perform satisfactorily when recognizing the formations of a carrier battle group (CVBG), due to the complexity of sea backgrounds and the context of ships. Detailed information about each enemy ship cannot be obtained because monitoring points are usually far from CVBG areas, and a CVBG is assumed to be formed by scattered points that cannot be described by a fixed shape. Most current algorithms, however, are designed to recognize continuous objects, such as handwritten letters, faces, etc. To recognize a CVBG formation characterized by a set of scattered points, we propose a novel approach called multiviewpoint context (MVC). Formation recognition must deal with invariance to scale and rotation because monitoring points come from random heights and directions. We address these problems by calculating contextual information in natural coordinate systems, which are established at viewpoints selected along an Archimedes spiral. We can also obtain sufficient information about a CVBG from a series of viewpoints. Because the local region of each viewpoint is not easy to define for scattered points, a probability density function (PDF) is introduced to describe the local information of viewpoints in the standard formation. The similarity between two formations is measured by combining the MVC descriptor and the PDF. We present a self-adaptive method to identify a formation by utilizing the similarities between all pairs of standard formations. The experimental results demonstrate that our algorithm outperforms the current state-of-the-art methods in formation recognition.
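
A minimal sketch of the multiviewpoint-context idea: viewpoints are sampled along an Archimedes spiral around the formation's centroid, and at each viewpoint a log-polar histogram of the scattered ship points is computed in a viewpoint-centered frame. The spiral parameters and bin counts are illustrative assumptions, and the paper's PDF-based local term is omitted.

```python
import numpy as np

def spiral_viewpoints(center, a=0.0, b=1.0, n=16, turns=2.0):
    """Sample n viewpoints on the Archimedes spiral r = a + b * theta."""
    theta = np.linspace(0, 2 * np.pi * turns, n)
    r = a + b * theta
    return center + np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def viewpoint_context(points, viewpoint, r_bins=5, t_bins=12):
    """Log-polar histogram of points, expressed relative to one viewpoint.
    Assumes points is a non-empty (N, 2) array of ship positions."""
    rel = points - viewpoint
    r = np.log1p(np.hypot(rel[:, 0], rel[:, 1]))   # log-radial distance
    t = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    hist, _, _ = np.histogram2d(r, t, bins=(r_bins, t_bins),
                                range=((0, r.max() + 1e-9), (0, 2 * np.pi)))
    return hist / len(points)                      # normalized context

# The MVC descriptor stacks the contexts from all viewpoints; formation
# similarity would compare two such stacks.
# vps = spiral_viewpoints(points.mean(axis=0))
# mvc = np.stack([viewpoint_context(points, v) for v in vps])
```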


Applied Soft Computing | 2017

Refine BING using effective cascade ranking

Zhiwen Fang; Zhiguo Cao; Yang Xiao; Hao Lu

Highlights:
- We propose cascade ranking to improve the detection rate of the proposals.
- A scale-sets histogram is proposed to predict the potential object size.
- Color-texture consistency is used to improve the ranking of the object proposals.
- Hierarchical sorting with color contrast can emphasize the true positives.

As an important pre-filtering procedure for object detection, objectness estimation now draws much attention from both the video analysis and computer vision communities. Among the existing approaches, BING [1] (binarized normed gradients) is a recently proposed method that possesses the advantages of a high object detection rate, ultra-fast computational speed (i.e., 300 fps), a small number of proposals, and good generalization ability. However, within BING's framework, only the uniform hierarchical ranking structure and gradient information are employed for objectness characterization, which leads to a high false-positive rate. To address this issue, an effective and efficient cascade ranking method is proposed in this paper to refine BING. It makes three main contributions. First, the concept of the scale-sets histogram is introduced; it helps to analyze the potential sizes of the objects. Second, more descriptive visual features (i.e., color-texture consistency) are considered simultaneously for objectness characterization. Last, we propose hierarchical sorting to further improve the ranking performance, according to the local-contrast analysis between the inside and straddling superpixels of the object proposal windows. The experimental results on four challenging datasets (PASCAL VOC2007, PASCAL VOC2010, PASCAL VOC2012, and ILSVRC2014) demonstrate that the refined BING achieves a good balance between a high detection rate (DR) (e.g., 96.7% DR with 1,000 proposal windows on VOC2007) and low time consumption (e.g., 75 ms per image on VOC2007). It is also worth noting that, with a relatively small number of proposals (e.g., fewer than 200), our approach generally outperforms the state-of-the-art methods on detection rate on all the testing datasets.
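
A minimal sketch of one stage of such a cascade: a scale-sets histogram over the sizes of the incoming BING proposals estimates which object scales are plausible in the image, and each proposal is re-ranked by mixing its original BING score with that scale prior. The binning, the mixing weight, and the assumption that scores are normalized to [0, 1] are all illustrative; the color-texture and superpixel-contrast stages are not shown.

```python
import numpy as np

def scale_sets_histogram(boxes, n_bins=8):
    """Histogram of log window areas over the incoming proposals."""
    areas = np.log([w * h for (x, y, w, h) in boxes])
    hist, edges = np.histogram(areas, bins=n_bins, density=True)
    return hist, edges

def rescore_by_scale(boxes, scores, alpha=0.5):
    """Re-rank proposals by mixing their scores with a scale plausibility prior.
    Assumes `scores` are already normalized to [0, 1]."""
    hist, edges = scale_sets_histogram(boxes)
    areas = np.log([w * h for (x, y, w, h) in boxes])
    idx = np.clip(np.digitize(areas, edges) - 1, 0, len(hist) - 1)
    scale_prior = hist[idx] / (hist.max() + 1e-12)
    new_scores = alpha * np.asarray(scores) + (1 - alpha) * scale_prior
    order = np.argsort(-new_scores)
    return [boxes[i] for i in order], new_scores[order]

# Later cascade stages would further re-rank by color-texture consistency and
# by local contrast between inside and straddling superpixels.
```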


Applied Soft Computing | 2017

Towards fine-grained maize tassel flowering status recognition

Hao Lu; Zhiguo Cao; Yang Xiao; Zhiwen Fang; Yanjun Zhu

Highlights:
- We first address the problem of flowering status recognition with computer vision.
- Densely sampled SIFT and the Fisher vector are employed for feature representation.
- An effective metric learning method is proposed to improve the performance.
- A maize tassel flowering status dataset of 3,000 images is established.

Maize is one of the three main cereal crops of the world. Accurately knowing its tassel flowering status can help to analyze the growth status and adjust farming operations accordingly. At the current stage, acquiring the tassel flowering status depends mainly on human observation, which is costly and subjective, especially for large-scale quantitative analysis under in-field conditions. To alleviate this, we propose an automatic maize tassel flowering status (i.e., non-flowering, partially flowering, and fully flowering) recognition method based on computer vision. In particular, this task is formulated as a fine-grained image categorization problem. More specifically, the scale-invariant feature transform (SIFT) is first extracted as the low-level visual descriptor to characterize the maize flower. The Fisher vector (FV) is then applied to encode the SIFT features into a more discriminative flowering status representation. To further improve the performance, a novel metric learning method termed large-margin dimensionality reduction (LMDR) is proposed. To verify the effectiveness of the proposed method, a flowering status dataset consisting of 3,000 images is built. The experimental results demonstrate that our approach goes beyond the state-of-the-art by large margins (at least 8.3%). The dataset and source code are made available online.
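
A minimal sketch of the front end of this pipeline: SIFT descriptors are sampled on a dense grid with OpenCV, then encoded per image with a Fisher vector (e.g., the `fisher_vector_means` sketch shown earlier) before a 3-class classifier. The grid step, keypoint size, GMM size, and classifier are assumptions, and the paper's LMDR metric-learning step is omitted for brevity.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def dense_sift(gray, step=8, size=8):
    """Compute SIFT descriptors at a dense grid of keypoints.
    `gray` is assumed to be a 2D uint8 image."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    _, desc = sift.compute(gray, kps)
    return desc  # (N, 128) local descriptors

# Fit a GMM on descriptors pooled from training images, encode each image as a
# Fisher vector, then train a 3-class classifier for the flowering statuses:
# gmm = GaussianMixture(n_components=16, covariance_type='diag').fit(all_descs)
# clf = LinearSVC().fit(train_fvs, train_labels)
```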


Chinese Automation Congress | 2015

Hybrid RGB-D object recognition using Convolutional Neural Network and Fisher Vector

Wei Li; Zhiguo Cao; Yang Xiao; Zhiwen Fang

With the recent emergence of low-cost depth sensors (e.g., Microsoft Kinect), RGB-D images can be captured more easily for object recognition. Compared with the existing RGB-based paradigm, the introduction of depth information indeed provides extra descriptive cues (e.g., surface geometry) for object characterization. In this paper, a novel hybrid RGB-D object categorization model is proposed. It draws simultaneously on two state-of-the-art image representation technologies: the Convolutional Neural Network (CNN) and the Fisher Vector (FV). Specifically, objects are characterized by a CNN in the RGB domain. The CNN is not applied to the depth domain, however, due to the lack of sufficient samples for training. Instead, we propose to extract the corresponding depth representation via FV encoding of densely sampled HONV descriptors. The CNN and FV descriptions are then fused to form a unified RGB-D object signature, and an SVM is employed for the final decision. The experiments on a large-scale RGB-D dataset demonstrate that our hybrid RGB-D object recognition model outperforms the state-of-the-art approaches by large margins (at least 6.3%).
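
A minimal sketch of the hybrid signature: a CNN feature from the RGB image is concatenated with a Fisher-vector encoding of depth descriptors, and a linear SVM decides. The extractors are placeholders: `cnn_feature` stands in for any pretrained CNN embedding and `honv_descriptors` for densely sampled histogram-of-oriented-normal-vectors descriptors; neither is implemented here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def hybrid_signature(rgb, depth, cnn_feature, honv_descriptors, gmm, fv_encode):
    """Fuse an RGB CNN feature with an FV-encoded depth feature.
    cnn_feature, honv_descriptors, and fv_encode are caller-supplied
    (hypothetical) extractors; gmm is fitted on training HONV descriptors."""
    f_rgb = cnn_feature(rgb)                            # CNN view of the RGB domain
    f_depth = fv_encode(honv_descriptors(depth), gmm)   # FV view of the depth domain
    # L2-normalize each view so neither dominates, then fuse by concatenation.
    f_rgb = f_rgb / (np.linalg.norm(f_rgb) + 1e-12)
    f_depth = f_depth / (np.linalg.norm(f_depth) + 1e-12)
    return np.concatenate([f_rgb, f_depth])

# X = np.stack([hybrid_signature(r, d, ...) for r, d in samples])
# clf = LinearSVC().fit(X, labels)
```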


Chinese Automation Congress | 2015

Object detection based on Multi-viewpoint histogram

Chunhua Deng; Zhiguo Cao; Yang Xiao; Zhiwen Fang

Object detection is a key issue in computer vision, and technologies based on local descriptors have become increasingly mature, especially in pedestrian detection. In this paper, we present a novel algorithm to detect targets in an image. Local descriptors have been used in object detection for quite some time and have proven to be a valuable tool. However, they usually suffer from boundary effects and poor continuity. Our algorithm addresses these problems by utilizing a global descriptor, the multi-viewpoint histogram (MVH), which is more robust to rotation and zoom. The challenge of using only a global descriptor is that it cannot provide enough global and local information. Different from previous work, this study integrates sufficient global and local image information through multiple viewpoints and a probability density function (PDF). The experimental results demonstrate that our algorithm performs well on several publicly available databases and, moreover, outperforms current state-of-the-art methods in some respects, such as matching between images of different spectra and detecting small targets.
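
A minimal sketch of detection by global-descriptor matching: a template's MVH is compared against the MVH of each candidate window, reusing the viewpoint-context sketch from the formation-recognition entry above as the descriptor. The chi-squared-style similarity and the best-window rule are illustrative assumptions.

```python
import numpy as np

def mvh_similarity(h1, h2):
    """Chi-squared-style histogram similarity; higher means a better match."""
    return -0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

def detect(template_mvh, window_mvhs, boxes):
    """Return the candidate box whose MVH best matches the template's."""
    scores = [mvh_similarity(template_mvh, h) for h in window_mvhs]
    return boxes[int(np.argmax(scores))]
```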


MIPPR 2013: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications | 2013

Ship detection from optical satellite image using optical flow and saliency

Chunhua Deng; Zhiguo Cao; Zhiwen Fang; Zhenghong Yu

This paper presents an effective method for ship detection in optical satellite images using optical flow and saliency, which can identify multiple ship targets against complex dynamic sea backgrounds and reduces the false-positive rate compared with traditional methods. In this paper, moving targets in the image are highlighted by a classical optical flow method, and dynamic waves are suppressed by combining it with a state-of-the-art saliency method. We make full use of the low-level (size, color, etc.) and high-level (adjacent-frame information, etc.) features of the image, which can adapt to different dynamic background situations. Experimental results demonstrate the robustness and high performance of the proposed method compared with existing methods.
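
A minimal sketch combining the two cues above. The paper does not specify which saliency model it uses; the classic spectral-residual saliency map is assumed here as a stand-in, paired with OpenCV's Farneback optical flow. The fusion by elementwise product and the threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral-residual saliency: smoothed log-amplitude residual + phase."""
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-12)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return sal / (sal.max() + 1e-12)

def ship_candidates(prev_gray, gray, thresh=0.3):
    """Fuse motion (optical flow magnitude) with saliency; threshold the
    product. Inputs are adjacent frames as 2D uint8 images."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.hypot(flow[..., 0], flow[..., 1])
    motion /= (motion.max() + 1e-12)
    score = motion * spectral_residual_saliency(gray)  # suppress wave clutter
    return (score > thresh).astype(np.uint8)           # binary candidate mask
```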

Collaboration


Dive into Zhiwen Fang's collaborations.

Top Co-Authors

Zhiguo Cao
Huazhong University of Science and Technology

Yang Xiao
Huazhong University of Science and Technology

Hao Lu
Huazhong University of Science and Technology

Chunhua Deng
Huazhong University of Science and Technology

Lei Zhu
Wuhan University of Science and Technology

Yanjun Zhu
Huazhong University of Science and Technology

Ruicheng Yan
Huazhong University of Science and Technology

Kaicheng Gong
Huazhong University of Science and Technology

Chang Li
Huazhong University of Science and Technology