
Publication


Featured research published by Yijun Yan.


Pattern Recognition | 2018

Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement

Yijun Yan; Jinchang Ren; Genyun Sun; Huimin Zhao; Junwei Han; Xuelong Li; Stephen Marshall; Jin Zhan

Visual attention is a fundamental cognitive capability that allows human beings to focus on regions of interest (ROIs) in complex natural environments. Which ROIs we attend to depends mainly on two distinct attentional mechanisms. The bottom-up mechanism guides our detection of salient objects and regions through externally driven factors, e.g. color and location, whilst the top-down mechanism biases our attention based on prior knowledge and cognitive strategies provided by the visual cortex. However, how to practically use and fuse both attentional mechanisms for salient object detection has not been sufficiently explored. To this end, we propose in this paper an integrated framework consisting of bottom-up and top-down attention mechanisms that enables attention to be computed at the level of salient objects and/or regions. Within our framework, the bottom-up mechanism is guided by the Gestalt laws of perception. We interpret the Gestalt laws of homogeneity, similarity, proximity, and figure-ground in terms of color and spatial contrast at the level of regions and objects to produce a feature contrast map. The top-down mechanism uses a formal computational model to describe the background connectivity of attention and produce a priority map. Integrating both mechanisms and applying them to salient object detection, our results demonstrate that the proposed method consistently outperforms a number of existing unsupervised approaches on five challenging and complicated datasets in terms of higher precision and recall rates, AP (average precision), and AUC (area under curve) values.
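The bottom-up branch scores each region by its colour contrast against other regions, with nearby regions weighted more heavily. A minimal numpy sketch of that idea follows; it is a loose reading of the similarity and proximity laws, not the paper's model, and the Gaussian weighting and `sigma` are assumptions:

```python
import numpy as np

def region_color_contrast(image, labels, sigma=0.5):
    """Toy feature-contrast map: a region is salient when its mean colour
    differs from that of other regions (similarity law), discounted by
    spatial distance (proximity law). Illustrative sketch only."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    region_ids = np.unique(labels)
    means, centers, sizes = [], [], []
    for rid in region_ids:
        mask = labels == rid
        means.append(image[mask].mean(axis=0))                      # mean colour
        centers.append([ys[mask].mean() / h, xs[mask].mean() / w])  # normalised centroid
        sizes.append(mask.sum())
    means = np.array(means)
    centers = np.array(centers)
    sizes = np.array(sizes, dtype=float)
    sal = np.zeros((h, w))
    for i, rid in enumerate(region_ids):
        colour_d = np.linalg.norm(means - means[i], axis=1)         # colour contrast
        spatial_d = np.linalg.norm(centers - centers[i], axis=1)    # proximity weight
        sal[labels == rid] = np.sum(sizes * colour_d * np.exp(-spatial_d**2 / sigma))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

A small distinct region on a large uniform background scores highest, matching the intuition that a compact, contrasting object pops out.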


Cognitive Computation | 2018

Cognitive fusion of thermal and visible imagery for effective detection and tracking of pedestrians in videos

Yijun Yan; Jinchang Ren; Huimin Zhao; Genyun Sun; Zheng Wang; Jiangbin Zheng; Stephen Marshall; John J. Soraghan

In this paper, we present an efficient framework to cognitively detect and track salient objects from videos. In general, a colored visible image in red-green-blue (RGB) offers better distinguishability in human visual perception, yet it suffers from illumination noise and shadows. In contrast, the thermal image is less sensitive to these noise effects, though its distinguishability varies with environmental settings. To this end, cognitive fusion of the two modalities provides an effective solution to this problem. First, a background model is extracted, followed by a two-stage background subtraction for foreground detection in the visible and thermal images. To deal with cases of occlusion or overlap, knowledge-based forward and backward tracking are employed to identify separate objects even when foreground detection fails. To evaluate the proposed method, the publicly available color-thermal benchmark dataset Object Tracking and Classification in and Beyond the Visible Spectrum is employed. For foreground detection, objective and subjective analysis against several state-of-the-art methods has been conducted on our manually segmented ground truth. For object tracking, comprehensive qualitative experiments have been carried out on all video sequences. Promising results show that the proposed fusion-based approach can successfully detect and track multiple human objects in most scenes regardless of light changes or occlusion.
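The core idea of fusing the two modalities can be sketched very simply: flag a pixel as foreground when either the visible or the thermal channel deviates from its background model. This is an illustrative sketch under stated assumptions (per-pixel differencing, a single threshold `tau`, and an OR-style combination), not the paper's two-stage pipeline:

```python
import numpy as np

def fuse_foreground(visible, thermal, bg_vis, bg_th, tau=0.15):
    """Toy fusion of visible and thermal background subtraction: a pixel is
    foreground if either modality moves away from its background model."""
    fg_vis = np.abs(visible - bg_vis).max(axis=-1) > tau   # any colour channel changed
    fg_th = np.abs(thermal - bg_th) > tau                  # thermal intensity changed
    return fg_vis | fg_th
```

The OR combination captures the complementary nature of the modalities: shadows confuse only the visible channel, while temperature drift confuses only the thermal one.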


Multidimensional Systems and Signal Processing | 2016

Fusion of block and keypoints based approaches for effective copy-move image forgery detection

Jiangbin Zheng; Yanan Liu; Jinchang Ren; Tingge Zhu; Yijun Yan; Heng Yang

Keypoint-based and block-based methods are the two main categories of techniques for detecting copy-move forged images, one of the most common digital image forgery schemes. In general, block-based methods suffer from high computational cost due to the large number of image blocks used, and fail to handle geometric transformations. Conversely, keypoint-based approaches overcome these two drawbacks yet struggle with smooth regions. As a result, a fusion of the two approaches is proposed for effective copy-move forgery detection. First, our scheme adaptively determines an appropriate initial region size to segment the image into non-overlapping regions. Feature points are then extracted from the image as keypoints using the scale-invariant feature transform (SIFT). The ratio between the number of keypoints and the total number of pixels in a region is used to classify it as a smooth or a keypoint (non-smooth) region. Accordingly, the block-based approach using Zernike moments and the keypoint-based approach using SIFT, along with filtering and post-processing, are applied respectively to these two kinds of regions for effective forgery detection. Experimental results show that the proposed fusion scheme outperforms the keypoint-based method in detection reliability and the block-based method in efficiency.
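The smooth versus non-smooth split that routes regions to the two branches can be sketched in a few lines; the 1% density threshold here is an illustrative assumption, not a value from the paper, and detector output is taken as given:

```python
import numpy as np

def classify_regions(labels, keypoints, ratio_thresh=0.01):
    """Sketch of the routing step: a region whose keypoints-per-pixel ratio
    falls below ratio_thresh is treated as smooth (block-based branch),
    otherwise as a keypoint region (SIFT branch)."""
    smooth, keypoint_regions = [], []
    kp = np.asarray(keypoints, dtype=int)   # (row, col) detector outputs
    for rid in np.unique(labels):
        mask = labels == rid
        n_pix = mask.sum()
        n_kp = sum(mask[r, c] for r, c in kp) if len(kp) else 0
        if n_kp / n_pix >= ratio_thresh:
            keypoint_regions.append(int(rid))
        else:
            smooth.append(int(rid))
    return smooth, keypoint_regions
```

Each branch then only has to handle the regions it is good at, which is the source of the scheme's efficiency gain over a purely block-based method.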


Multidimensional Systems and Signal Processing | 2016

Adaptive fusion of color and spatial features for noise-robust retrieval of colored logo and trademark images

Yijun Yan; Jinchang Ren; Yinsheng Li; James F. C. Windmill; Winifred Ijomah; Kuo-Ming Chao

Due to their uniqueness and high commercial value, logos and trademarks play a key role in e-business-based global marketing. However, existing trademark/logo retrieval techniques and content-based image retrieval methods are mostly designed for generic images and cannot provide effective retrieval of trademarks/logos. Although color and spatial features have been intensively investigated for logo image retrieval, in most cases they have been applied separately. When they are combined, a fixed weighting is normally used between them, which cannot reflect the significance of each feature in a given image. When image quality is degraded, for example by noise, the reliability of the color and spatial features may change in different ways, so the weights between them should adapt to such changes. In this paper, adaptive fusion of color and spatial descriptors is proposed for colored logo/trademark image retrieval. First, color quantization and k-means are combined for effective dominant color extraction. For each extracted dominant color, a component-based spatial descriptor is derived for local features. By analyzing the image histogram, an adaptive fusion of these two features is achieved for more effective logo abstraction and more accurate image retrieval. The proposed approach has been tested on a database containing over 2300 logo/trademark images. Experimental results show that the proposed methodology yields improved retrieval precision and outperforms three state-of-the-art techniques even with added Gaussian, salt-and-pepper, and speckle noise.
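The dominant-color step combines coarse color quantization with k-means. A minimal sketch of that combination follows, with quantization seeding the cluster centers; the quantization step size and iteration count are assumptions, and this covers only the dominant-color step, not the full retrieval pipeline:

```python
import numpy as np

def dominant_colors(pixels, k=2, iters=10, step=32):
    """Sketch of dominant-colour extraction: coarse colour quantisation
    seeds k-means (mirroring the 'quantisation + k-means' combination);
    returns centres and their pixel fractions, most dominant first."""
    pixels = np.asarray(pixels, dtype=float)
    quant = (pixels // step) * step + step / 2          # coarse quantisation
    uniq, counts = np.unique(quant, axis=0, return_counts=True)
    centers = uniq[np.argsort(counts)[::-1][:k]]        # seed with most frequent bins
    for _ in range(iters):                              # Lloyd iterations
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(assign == j):
                centers[j] = pixels[assign == j].mean(axis=0)
    weights = np.bincount(assign, minlength=len(centers)) / len(pixels)
    order = np.argsort(weights)[::-1]
    return centers[order], weights[order]
```

Seeding from quantization bins makes the result deterministic and avoids the degenerate initialisations that random seeding can produce on images with few distinct colors.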


IEEE International Conference on Multimedia Big Data | 2015

Fusion of Dominant Colour and Spatial Layout Features for Effective Image Retrieval of Coloured Logos and Trademarks

Yijun Yan; Jinchang Ren; Yinsheng Li; James F. C. Windmill; Winifred Ijomah

Due to their uniqueness and high commercial value, logos and trademarks play a key role in e-business based global marketing. Detecting misused and faked logos needs dedicated and accurate image processing and retrieval techniques. However, existing colour- and shape-based retrieval techniques, which are mainly designed for natural images, cannot provide effective retrieval of logo images. In this paper, an effective approach is proposed for content-based image retrieval of coloured logos and trademarks. By extracting the dominant colour through colour quantization and measuring spatial similarity, fusion of colour and spatial layout features is achieved. The proposed approach has been tested on a database containing over 250 logo images. Experimental results show that the proposed methodology yields more accurate results in retrieving relevant images than conventional approaches, even with added Gaussian and salt-and-pepper noise.


Pacific Rim Conference on Multimedia | 2016

Fusion of Thermal and Visible Imagery for Effective Detection and Tracking of Salient Objects in Videos

Yijun Yan; Jinchang Ren; Huimin Zhao; Jiangbin Zheng; Ezrinda Mohd Zaihidee; John J. Soraghan

In this paper, we present an efficient approach to detect and track salient objects in videos. In general, a colored visible image in red-green-blue (RGB) offers better distinguishability in human visual perception, yet it suffers from illumination noise and shadows. In contrast, the thermal image is less sensitive to these noise effects, though its distinguishability varies with environmental settings. To this end, fusion of the two modalities provides an effective solution. First, a background model is extracted, followed by background subtraction for foreground detection in the visible images. Meanwhile, adaptive thresholding is applied for foreground detection in the thermal domain, as human objects tend to be of higher temperature and thus brighter than the background. To deal with cases of occlusion, prediction-based forward and backward tracking are employed to identify separate objects even when foreground detection fails. The proposed method is evaluated on OTCBVS, a publicly available color-thermal benchmark dataset. Promising results show that the proposed fusion-based approach can successfully detect and track multiple human objects.
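The thermal-domain step exploits the fact that warm bodies appear brighter than the background. A toy adaptive threshold illustrating that idea (the mean-plus-k-sigma rule and the value of `k` are assumptions, not the paper's threshold):

```python
import numpy as np

def thermal_foreground(frame, k=1.5):
    """Toy adaptive threshold for a thermal frame: pixels brighter than
    mean + k * std are taken as (warm) foreground."""
    t = frame.mean() + k * frame.std()
    return frame > t
```

Because the threshold is derived from each frame's own statistics, it tracks global temperature drift without manual retuning.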


Brain Inspired Cognitive Systems | 2018

Deep Background Subtraction of Thermal and Visible Imagery for Pedestrian Detection in Videos

Yijun Yan; Huimin Zhao; Fu-Jen Kao; Valentin Masero Vargas; Sophia Zhao; Jinchang Ren

In this paper, we introduce an efficient framework to subtract the background from both visible and thermal imagery for pedestrian detection in urban scenes. We use a deep neural network (DNN) to train the background subtraction model. To train the DNN, we first generate an initial background map and then employ a random 5% of the video frames, the background map, and manually segmented ground truth. We then apply cognition-based post-processing to further smooth the foreground detection result. We evaluate our method against our previous work and 11 widely cited recent methods on three challenging video sequences selected from OTCBVS, a publicly available color-thermal benchmark dataset. Promising results show that the proposed DNN-based approach can successfully detect pedestrians with well-preserved shapes in most scenes, regardless of illumination changes and occlusion.
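The initial background map can be built from a small random sample of frames. The sketch below uses the 5% sampling fraction mentioned in the abstract, but the per-pixel median estimator itself is an assumption for illustration:

```python
import numpy as np

def initial_background(frames, sample_frac=0.05, seed=0):
    """Sketch of an initial background map: per-pixel median over a random
    sample of frames, so transient foreground pixels are voted out."""
    frames = np.asarray(frames)
    rng = np.random.default_rng(seed)
    n = max(1, int(len(frames) * sample_frac))
    idx = rng.choice(len(frames), size=n, replace=False)
    return np.median(frames[idx], axis=0)
```

The median is robust as long as any given pixel is covered by a moving object in fewer than half of the sampled frames.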


Brain Inspired Cognitive Systems | 2018

Making industrial robots smarter with adaptive reasoning and autonomous thinking for real-time tasks in dynamic environments: a case study

Jaime Zabalza; Zixiang Fei; Cuebong Wong; Yijun Yan; Carmelo Mineo; Erfu Yang; Tony Rodden; Jörn Mehnen; Quang-Cuong Pham; Jinchang Ren

In order to extend the abilities of current robots in industrial applications towards more autonomous and flexible manufacturing, this work presents an integrated system comprising real-time sensing, path planning, and control of industrial robots, providing them with adaptive reasoning, autonomous thinking, and environment interaction under dynamic and challenging conditions. The developed system consists of an intelligent motion planner for a six-degrees-of-freedom robotic manipulator, which performs pick-and-place tasks along an optimized path computed in real time while avoiding a moving obstacle in the workspace. The moving obstacle is tracked by a machine-vision-based sensing strategy working in HSV space for color detection, in order to cope with changing conditions including a non-uniform background, lighting reflections, and projected shadows. The proposed machine vision is implemented as an off-board scheme with two low-cost cameras, where the second camera solves the problem of vision obstruction when the robot enters the field of view of the main sensor. The real-time performance of the overall system has been experimentally validated using a KUKA KR90 R3100 robot.
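Working in HSV space makes the color detection robust to lighting changes, because hue is largely separated from brightness. A minimal sketch of such a hue-band mask using the standard library's `colorsys` conversion; the saturation and value floors are illustrative assumptions, not the system's tuned thresholds:

```python
import colorsys
import numpy as np

def colour_mask(image, h_lo, h_hi, s_min=0.4, v_min=0.2):
    """Sketch of HSV colour detection: keep pixels whose hue lies in
    [h_lo, h_hi] with enough saturation and value, so grey reflections
    (low saturation) and shadows (low value) are rejected.
    `image` holds RGB floats in [0, 1]; hue is in [0, 1]."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*image[r, c])
            mask[r, c] = (h_lo <= hh <= h_hi) and ss >= s_min and vv >= v_min
    return mask
```

A real implementation would vectorise the conversion, but the thresholding logic is the same.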


Brain Inspired Cognitive Systems | 2018

Unsupervised Hyperspectral Band Selection Based on Maximum Information Entropy and Determinantal Point Process

Zhijing Yang; Weizhao Chen; Yijun Yan; Faxian Cao; Nian Cai

Band selection is of great importance for hyperspectral image processing, as it can effectively reduce data redundancy and computation time. When class labels are unknown, it is very difficult to select an effective band subset. In this paper, an unsupervised band selection algorithm is proposed that preserves the original information of the hyperspectral image and selects a low-redundancy band subset. First, a search criterion is designed to find the band subset with maximum information entropy. Selecting a low-redundancy spectral band subset while maximizing this criterion is challenging, since it is an NP-hard problem. To overcome this, a double-graph model is proposed to capture the correlations between spectral bands, making full use of the spatial information. Then, an improved determinantal point process algorithm is presented as the search method to find the low-redundancy band subset from the double-graph model. Experimental results verify that our algorithm achieves better performance than other state-of-the-art methods.
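The information-entropy score that ranks individual bands is just the Shannon entropy of each band's grey-level histogram. A short sketch (the bin count is an assumption; the paper's full criterion scores subsets, not single bands):

```python
import numpy as np

def band_entropies(cube, bins=32):
    """Shannon entropy of each band's histogram in an (H, W, bands) cube;
    bands with flat, uninformative intensity distributions score near zero."""
    n_bands = cube.shape[-1]
    ent = np.empty(n_bands)
    for b in range(n_bands):
        hist, _ = np.histogram(cube[..., b], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before the log
        ent[b] = -(p * np.log2(p)).sum()
    return ent
```

A constant band scores zero while a band spreading evenly over all bins scores log2(bins), so entropy directly measures how much intensity variation a band carries.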


IET Image Processing | 2018

Dimensionality reduction based on determinantal point process and singular spectrum analysis for hyperspectral images

Weizhao Chen; Zhijing Yang; Faxian Cao; Yijun Yan; Meilin Wang; Chunmei Qing; Yongqiang Cheng

Dimensionality reduction is of high importance in hyperspectral data processing, as it can effectively reduce data redundancy and computation time for improved classification accuracy. Band selection and feature extraction are two widely used dimensionality reduction techniques. By integrating the advantages of band selection and feature extraction, the authors propose a new method for reducing the dimension of hyperspectral image data. First, a new and fast band selection algorithm is proposed for hyperspectral images based on an improved determinantal point process (DPP). To reduce the amount of calculation, the dual-DPP is used for fast sampling of representative pixels, followed by k-nearest-neighbour-based local processing to explore more spatial information. These representative pixels are used to construct multiple adjacency matrices that describe the correlation between bands based on mutual information. To further improve the classification accuracy, two-dimensional singular spectrum analysis is used for feature extraction from the selected bands. Experiments show that the proposed method can select a low-redundancy and representative band subset, reducing both data dimension and computation time. Furthermore, the proposed dimensionality reduction algorithm outperforms a number of state-of-the-art methods in terms of classification accuracy.
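The band-correlation entries behind the adjacency matrices are mutual-information values between pairs of bands, computed from their joint intensity histogram. A minimal sketch of one such entry (the bin count is an assumption, and the paper builds these from sampled representative pixels rather than all pixels):

```python
import numpy as np

def band_mutual_info(x, y, bins=16):
    """Mutual information (bits) between two bands' pixel values,
    estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of band x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of band y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

Identical bands attain the maximum (the bands' entropy), while weakly related bands score much lower, which is exactly the redundancy signal the band selection exploits.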

Collaboration


Dive into Yijun Yan's collaboration.

Top Co-Authors

Jinchang Ren
University of Strathclyde

Jiangbin Zheng
Northwestern Polytechnical University

Jaime Zabalza
University of Strathclyde

Sophia Zhao
University of Strathclyde

Winifred Ijomah
University of Strathclyde

Faxian Cao
Guangdong University of Technology

Genyun Sun
China University of Petroleum