Publications


Featured research published by Chongzhao Han.


International Journal of Approximate Reasoning | 2013

Discounted combination of unreliable evidence using degree of disagreement

Yi Yang; Deqiang Han; Chongzhao Han

When Dempster's rule is used to combine evidence, all sources are considered equally reliable. However, in many real applications, the sources of evidence may not be equally reliable. To resolve this problem, a number of methods for discounting unreliable sources of evidence have been proposed, in which the estimation of the discounting (weighting) factors is crucial, especially when prior knowledge is unavailable. In this paper, we propose a new degree of disagreement through which discounting factors can be generated for the discounted combination of unreliable evidence. The new degree of disagreement is established using the distance of evidence. Experiments verify that it describes the disagreements or differences among bodies of evidence well and that it can be used effectively in the discounted combination of unreliable evidence.
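As background for the combination machinery this paper builds on, here is a minimal Python sketch of classical Shafer discounting followed by Dempster's rule. The paper's distance-based estimation of the discounting factors is not reproduced; the frame, mass values, and reliability factor below are illustrative assumptions.

```python
# A minimal sketch (not the paper's method): Shafer discounting with an
# assumed reliability factor, then Dempster's rule of combination.

FRAME = frozenset({"a", "b", "c"})  # frame of discernment (illustrative)

def discount(m, alpha):
    """Scale every focal mass by the reliability factor alpha and move
    the remaining 1 - alpha onto the whole frame."""
    out = {A: alpha * v for A, v in m.items() if A != FRAME}
    out[FRAME] = (1.0 - alpha) + alpha * m.get(FRAME, 0.0)
    return out

def dempster(m1, m2):
    """Dempster's rule: multiply masses over intersecting focal elements
    and renormalize by 1 minus the total conflict."""
    fused, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

m1 = {frozenset({"a"}): 0.8, FRAME: 0.2}
m2 = {frozenset({"b"}): 0.7, FRAME: 0.3}
fused = dempster(m1, discount(m2, 0.5))  # treat source 2 as half reliable
```

Discounting the second source softens the conflict between the two bodies of evidence before combination, which is exactly the role the paper's disagreement-derived factors play.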


International Conference on Information Fusion | 2005

Sequential unscented Kalman filter for radar target tracking with range rate measurements

Zhansheng Duan; Xianqing Li; Chongzhao Han; Hongyan Zhu

To solve the radar target tracking problem with range rate measurements, in which the errors between range and range rate measurements are correlated, a sequential unscented Kalman filter (SUKF) is proposed in this paper. First, a pseudo measurement is constructed by block-partitioned Cholesky factorization; this keeps the range, bearing, and elevation (or two direction cosine) measurements unchanged while decorrelating the errors between the original range and range rate measurements. Then, based on the UKF, the bearing, elevation (or two direction cosine), and pseudo measurements are processed sequentially to enhance filtering precision and computational efficiency simultaneously. The validity and consistency of the proposed algorithm are verified by Monte Carlo simulation.
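The decorrelation step can be illustrated with a small numerical sketch. The noise parameters below are assumptions, and the full SUKF update is not reproduced; the sketch only shows that subtracting the range-predictable part of the range-rate error yields a diagonal (decorrelated) measurement error covariance.

```python
import numpy as np

# Assumed range / range-rate noise parameters (illustrative only).
sigma_r, sigma_rd, rho = 10.0, 2.0, 0.5
R = np.array([[sigma_r ** 2,             rho * sigma_r * sigma_rd],
              [rho * sigma_r * sigma_rd, sigma_rd ** 2           ]])

# Block (here scalar) Cholesky-style step: subtract the part of the
# range-rate error that is predictable from the range error.
g = R[1, 0] / R[0, 0]
T = np.array([[1.0, 0.0],
              [-g,  1.0]])       # z' = T z keeps the range measurement intact

R_prime = T @ R @ T.T            # transformed error covariance (diagonal)
```

Because the transformed errors are uncorrelated, the pseudo range-rate measurement can be processed after the other measurements in a sequential filter, which is the structure the paper exploits.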


Information Fusion | 2013

Poisson image fusion based on Markov random field fusion model

Jian Sun; Hongyan Zhu; Zongben Xu; Chongzhao Han

In this paper, we present a gradient domain image fusion framework based on the Markov Random Field (MRF) fusion model. In this framework, the salient structures of the input images are fused in the gradient domain, then the final fused image is reconstructed by solving a Poisson equation which forces the gradients of the fused image to be close to the fused gradients. To fuse the structures in the gradient domain, an effective MRF-based fusion model is designed based on both the per-pixel fusion rule defined by the local saliency and also the smoothness constraints over the fusion weights, which is optimized by graph cut algorithm. This MRF-based fusion model enables the accurate estimation of region-based fusion weights for the salient objects or structures. We apply this method to the applications of multi-sensor image fusion, including infrared and visible image fusion, multi-focus image fusion and medical image fusion. Extensive experiments and comparisons show that the proposed fusion model is able to better fuse the multi-sensor images and produces high-quality fusion results compared with the other state-of-the-art methods.
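The per-pixel part of this pipeline can be sketched compactly. The code below fuses gradients by local saliency and reconstructs via a Jacobi iteration on the Poisson equation; the paper's MRF smoothness term and graph-cut optimization are omitted, and the iteration count is an assumption.

```python
import numpy as np

def fuse_gradient_domain(a, b, iters=200):
    """Toy gradient-domain fusion of two same-size grayscale images:
    per-pixel max-saliency gradient selection, then a Jacobi solve of
    the Poisson equation laplacian(u) = div(fused gradients).
    (Sketch only; the paper's MRF model and graph cut are not included.)"""
    gy_a, gx_a = np.gradient(a)
    gy_b, gx_b = np.gradient(b)
    take_a = np.hypot(gx_a, gy_a) >= np.hypot(gx_b, gy_b)  # local saliency rule
    gx = np.where(take_a, gx_a, gx_b)
    gy = np.where(take_a, gy_a, gy_b)
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    u = 0.5 * (a + b)                      # initial guess; borders stay fixed
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] -
                                div[1:-1, 1:-1])
    return u
```

The MRF model in the paper replaces the naive per-pixel `take_a` decision with region-consistent fusion weights, which is what suppresses artifacts at object boundaries.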


Signal Processing | 2012

Unified cardinalized probability hypothesis density filters for extended targets and unresolved targets

Feng Lian; Chongzhao Han; Weifeng Liu; Jing Liu; Jian Sun

The unified cardinalized probability hypothesis density (CPHD) filters for extended targets and unresolved targets are proposed. The theoretically rigorous measurement-update equations for the proposed filters are derived according to the theory of random finite sets (RFS) and finite-set statistics (FISST). By assuming that the predicted distributions of the extended targets and unresolved targets and the distribution of the clutter are Poisson, the exact extended-target and unresolved-target CPHD correctors reduce to the exact extended-target and unresolved-target PHD correctors, respectively. Since the exact CPHD and PHD corrector equations involve a number of operations that grows exponentially with the number of measurements, computationally tractable approximations are presented, which can be used when the extended targets and unresolved targets are not too close together and the clutter density is not too large. Monte Carlo simulation results show that the approximate extended-target and unresolved-target CPHD filters substantially outperform the corresponding approximate PHD filters in estimating the target number and states, although the CPHD filters are computationally more expensive than the PHD filters.


IEEE Transactions on Aerospace and Electronic Systems | 2010

Multitarget State Extraction for the PHD Filter using MCMC Approach

Weifeng Liu; Chongzhao Han; Feng Lian; Hongyan Zhu

It is known that multitarget states cannot be directly derived from the particle probability hypothesis density (particle-PHD) filter, so clustering algorithms are used to extract the states from the particles; clustering the particles effectively and robustly thus becomes a crucial step in the particle-PHD filter. A novel multitarget state extraction algorithm for the particle-PHD filter is proposed, comprising two steps. First, the target number is calculated via the particle-PHD filter. Second, the distribution of the particles is fitted using finite mixture models (FMMs), whose parameters are derived using a Markov chain Monte Carlo (MCMC) sampling scheme; the states are then extracted according to the fitted mixture distribution. Simulations show that the proposed algorithm is effective for extracting the individual states even when the clutter is dense and the distribution of the particles is relatively complex.


Pattern Recognition Letters | 2011

A novel classifier based on shortest feature line segment

Deqiang Han; Chongzhao Han; Yi Yang

A new approach called the shortest feature line segment (SFLS) is proposed in this paper to implement pattern classification; it retains the ideas and advantages of the nearest feature line (NFL) method while counteracting NFL's drawbacks. Instead of the perpendicular distance defined in NFL, SFLS uses the length of the feature line segment satisfying a given geometric relation with the query point. SFLS has a clear geometric foundation and is relatively simple. Experimental results on artificial and real-world datasets are provided, together with comparisons between SFLS and other neighborhood-based classification methods, including nearest neighbor (NN), k-NN, NFL, and several refined NFL methods. It can be concluded that SFLS is a simple yet effective classification approach.
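For reference, here is a short sketch of the NFL perpendicular distance that SFLS replaces; the SFLS segment-length criterion itself is not reproduced, and the example points are illustrative.

```python
import numpy as np

def nfl_distance(q, x1, x2):
    """Perpendicular distance from a query point q to the feature line
    through two same-class prototypes x1 and x2 -- the NFL quantity that
    SFLS replaces with the length of a feature line segment."""
    d = x2 - x1
    t = np.dot(q - x1, d) / np.dot(d, d)  # projection parameter along the line
    foot = x1 + t * d                     # foot of the perpendicular
    return float(np.linalg.norm(q - foot))
```

An NFL classifier computes this distance for every pair of same-class prototypes and assigns the query to the class with the smallest value; one of NFL's known drawbacks is that the projection point `foot` may fall far outside the segment between the prototypes, which motivates segment-based criteria like SFLS.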


IEEE Transactions on Aerospace and Electronic Systems | 2010

Estimating Unknown Clutter Intensity for PHD Filter

Feng Lian; Chongzhao Han; Weifeng Liu

In most existing probability hypothesis density (PHD) filters, the clutter is modeled as a Poisson random finite set (RFS) with a known intensity. The clutter intensity is the product of the average number of clutter (false alarm) points per scan and the probability density of the clutter's spatial distribution. Here the PHD filter is generalized to multi-target tracking (MTT) in clutter with an unknown intensity. In the proposed approach, the unknown clutter intensity is first estimated for the PHD filter. This involves estimating the average clutter number per scan and the clutter density; the latter is estimated with finite mixture models (FMM) via either the expectation-maximization (EM) or a Markov chain Monte Carlo (MCMC) algorithm. The estimated intensity is then used directly in the PHD filter to perform multi-target detection and tracking. Monte Carlo (MC) simulation results show that the proposed approach significantly outperforms the naive PHD filter, which assumes a uniform clutter distribution, especially when the nominal clutter model differs markedly from the ground truth.
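The density-estimation step can be illustrated with a toy EM fit. The sketch below uses a fixed two-component 1-D Gaussian mixture; the paper's full FMM machinery, model-order selection, and the MCMC alternative are omitted, and the initialization scheme is an assumption.

```python
import numpy as np

def em_clutter_density_1d(z, iters=100):
    """Toy 1-D, two-component EM fit of a clutter spatial density
    (illustrative stand-in for the paper's FMM estimation)."""
    mu = np.percentile(z, [25.0, 75.0])          # deterministic initialization
    var = np.full(2, np.var(z) + 1e-6)
    w = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each clutter point
        pdf = w * np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w, mu = n / len(z), (r * z[:, None]).sum(axis=0) / n
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var
```

The clutter intensity then follows the factorization in the abstract: estimate the average clutter count per scan (e.g. a sample mean over scans) and multiply it by the fitted spatial density.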


Decision Support Systems | 2013

Sequential weighted combination for unreliable evidence based on evidence variance

Deqiang Han; Yong Deng; Chongzhao Han

Dempster-Shafer evidence theory is a powerful tool in uncertainty reasoning and decision making. However, counter-intuitive results can be encountered in some cases when unreliable bodies of evidence are combined using Dempster's rule of combination. In this paper, a novel sequential evidence combination approach is proposed based on the weighted modification of bodies of evidence according to our proposed variance of evidence sequences. Experimental results show that the proposed approach is rational and effective. Highlights: the proposed approach is a sequential evidence combination rule; the variance of the evidence sequence is used to modify the bodies of evidence; the approach can suppress counter-intuitive behaviors of Dempster's rule.


Science in China Series F: Information Sciences | 2012

Some notes on betting commitment distance in evidence theory

Deqiang Han; Yong Deng; Chongzhao Han; Yi Yang

The distance of evidence, which represents the degree of dissimilarity between bodies of evidence, has attracted increasing interest and found extensive use in many fields. In this paper, some notes on a widely used distance of evidence, the betting commitment distance, are provided, including arguments on the rationality of its definition, some misuses, and some counter-intuitive behaviors of the betting commitment distance. Several numerical examples are provided to support and verify our arguments.
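For readers unfamiliar with the quantity under discussion, here is a small sketch of the betting commitment distance as it is usually defined: the maximum difference in pignistic (betting) probability over subsets of the frame. The frame and mass values are illustrative assumptions, and the paper's critiques are not reproduced here.

```python
from itertools import combinations

def betp(m, frame):
    """Pignistic transform: spread each focal mass evenly over its elements."""
    p = {t: 0.0 for t in frame}
    for A, v in m.items():
        for t in A:
            p[t] += v / len(A)
    return p

def betting_commitment_distance(m1, m2, frame):
    """max over nonempty subsets A of |BetP1(A) - BetP2(A)|."""
    p1, p2 = betp(m1, frame), betp(m2, frame)
    diff = {t: p1[t] - p2[t] for t in frame}
    best = 0.0
    for k in range(1, len(frame) + 1):
        for A in combinations(frame, k):
            best = max(best, abs(sum(diff[t] for t in A)))
    return best

frame = ("a", "b", "c")
m1 = {("a",): 0.6, ("a", "b"): 0.4}
m2 = {("b",): 1.0}
d = betting_commitment_distance(m1, m2, frame)
```

Because the pignistic transform is many-to-one, distinct bodies of evidence can map to the same betting probabilities, which is one source of the counter-intuitive behaviors the paper examines.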


International Conference on Automation and Logistics | 2007

Time Series Forecasting Based on Wavelet KPCA and Support Vector Machine

Fei Chen; Chongzhao Han

Kernel principal component analysis (KPCA) has the advantage of extracting nonlinear features, while nonlinear mapping and generalization are strong capabilities of the support vector machine (SVM). By integrating the characteristics of KPCA and SVM, a chaotic time series forecasting method based on these two algorithms is presented. A wavelet kernel is used for both KPCA and the SVM, and a genetic algorithm (GA) tunes the parameters automatically. The proposed method makes two contributions: (1) it avoids blind manual choice of the parameters; (2) it achieves higher prediction precision and excellent forecasting performance.
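The KPCA feature-extraction stage can be sketched in a few lines. The sketch uses an RBF kernel as a stand-in for the paper's wavelet kernel, and the GA-based parameter tuning and downstream SVM are omitted; `gamma` and the component count are assumptions.

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.1):
    """Minimal KPCA: build a kernel matrix (RBF here; the paper uses a
    wavelet kernel), center it in feature space, eigendecompose, and
    project the training points onto the top components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]    # top eigenpairs
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                             # projected training points
```

In the paper's pipeline, these extracted features would then feed the SVM regressor, with the kernel parameters chosen by the GA rather than fixed by hand as above.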

Collaboration


Dive into Chongzhao Han's collaborations.

Top Co-Authors

Feng Lian (Xi'an Jiaotong University)
Deqiang Han (Xi'an Jiaotong University)
Yi Yang (Xi'an Jiaotong University)
Jing Liu (Xi'an Jiaotong University)
Hongyan Zhu (Xi'an Jiaotong University)
Weifeng Liu (Hangzhou Dianzi University)
Xianghui Yuan (Xi'an Jiaotong University)
Hui Chen (Xi'an Jiaotong University)
Xin Kang (Xi'an Jiaotong University)
Zengguo Sun (Xi'an Jiaotong University)