Peiliang Xu
Kyoto University
Publications
Featured research published by Peiliang Xu.
Journal of Geodesy | 2012
Peiliang Xu; Jingnan Liu; Chuang Shi
The weighted total least squares (TLS) method has been developed to deal with observation equations that are functions of both unknown parameters of interest and other measured data contaminated with random errors. Such an observation model is well known as an errors-in-variables (EIV) model and is almost always solved as a nonlinear equality-constrained adjustment problem. We reformulate it as a nonlinear adjustment model without constraints and further extend it to a partial EIV model, in which not all the elements of the design matrix are random. As a result, the total number of unknowns in the normal equations is significantly reduced. We derive a set of formulae for algorithmic implementation to numerically estimate the unknown model parameters. Since few statistical results on the TLS estimator are available in the case of finite samples, we investigate the statistical consequences of nonlinearity for the nonlinear TLS estimate, including the first-order approximation of accuracy, the nonlinear confidence region and the bias of the nonlinear TLS estimate, and use the bias-corrected residuals to estimate the variance of unit weight.
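The partial EIV algorithm itself is given by the paper's formulae; as background, the basic unweighted TLS estimate that the EIV model generalizes can be sketched with the classical SVD solution of Golub and Van Loan. The data, noise levels and parameter values below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([1.5, -0.7])
A_true = rng.normal(size=(100, 2))
A = A_true + 0.01 * rng.normal(size=A_true.shape)  # design matrix observed with error
y = A_true @ beta_true + 0.01 * rng.normal(size=100)

# Classical (unweighted) TLS via the SVD of the augmented matrix [A | y]:
# the estimate comes from the right singular vector belonging to the
# smallest singular value, scaled so its last entry is -1.
_, _, Vt = np.linalg.svd(np.column_stack([A, y]))
v = Vt[-1]
beta_tls = -v[:-1] / v[-1]
```

Unlike ordinary LS, this minimizes the perpendicular (Frobenius-norm) misfit over corrections to both A and y.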
Journal of Geodesy | 2005
Peiliang Xu
The findings of this paper are summarized as follows: (1) We propose a sign-constrained robust estimation method, which can tolerate 50% data contamination while achieving high efficiency comparable to that of least squares. Since the objective function is identical to that of least squares, the method may also be called sign-constrained robust least squares. An iterative version of the method has been implemented and shown to be capable of resisting more than 50% contamination. As a by-product, a robust estimate of the scale parameter can also be obtained. Unlike the least median of squares method and repeated medians, which use the least possible number of data points to derive the solution, the sign-constrained robust least squares method attempts to employ the maximum possible number of good data points to derive the robust solution, and thus is not affected by near multi-collinearity among part of the data or by clustering of some of the data; (2) although M-estimates have been reported to have a breakdown point of 1/(t+1), we show that the weights of observations can readily degrade such results and bring the breakdown point of M-estimates of Huber's type to zero. The same zero breakdown point of the L1-norm method is also derived, again due to the weights of observations; (3) by assuming a prior distribution for the signs of outliers, we develop the concept of a subjective breakdown point, which may be thought of as an extension of stochastic breakdown by Donoho and Huber but can be important in explaining real-life problems in the Earth sciences and image reconstruction; and finally, (4) we show that the least median of squares method can still break down with a single outlier, even if neither highly concentrated good data nor highly concentrated outliers exist.
Earth, Planets and Space | 2009
Juraj Janák; Yoichi Fukuda; Peiliang Xu
In principle, every component of a disturbing gravity gradient tensor measured in a satellite orbit can be used to obtain gravity anomalies on the Earth’s surface. Consequently, these can be used in combination with ground or marine data for further gravity field modeling or for verification of satellite data. Theoretical relations can be derived both in spectral and spatial forms. In this paper, we focus on the derivation of a spatial integral form in the geocentric spherical coordinates that seems to be the most convenient one for regional gravity field modeling. All of the second partial derivatives of the generalized Stokes’s kernel are derived, and six surface Fredholm integral equations are formulated and discretized. The inverse problems formulated for particular components clearly reveal different behaviors in terms of numerical stability of the solution. Simulated GOCE data disturbed with Gaussian noise are used to study the performance of two regularization methods: truncated singular value decomposition and ridge regression. The optimal ridge regression method shows slightly better results in terms of the root mean squared deviation of the differences from the exact solution.
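As a rough illustration of the two regularization methods compared in the paper, here is a hypothetical ill-posed discretized integral equation (a smoothing kernel standing in for the actual gradiometric kernels) solved by truncated SVD and by ridge regression; the kernel, truncation cutoff and ridge parameter are all illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Ill-conditioned discretized smoothing kernel (hypothetical stand-in
# for the gradiometric Fredholm integral equations).
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
K = np.exp(-5.0 * (s - t) ** 2) / n
x_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
y = K @ x_true + 1e-4 * rng.normal(size=n)

# Truncated SVD: keep only components whose singular value exceeds a cutoff.
U, sv, Vt = np.linalg.svd(K)
k = int(np.sum(sv > 1e-3))
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / sv[:k])

# Ridge regression (Tikhonov): damp every singular component instead.
lam = 1e-6
x_ridge = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)
```

TSVD cuts the spectrum off sharply, while ridge regression applies the smooth filter factors sv^2/(sv^2 + lam); which performs better depends on the noise level and the chosen parameters.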
Journal of Geodesy | 2014
Peiliang Xu; Jingnan Liu
Although total least squares has been substantially investigated theoretically and widely applied in practical applications, almost nothing has been done to simultaneously address the estimation of parameters and the errors-in-variables (EIV) stochastic model. We prove that the variance components of the EIV stochastic model are not estimable, if the elements of the random coefficient matrix can be classified into two or more groups of data of the same accuracy. This result of inestimability is surprising as it indicates that we have no way of gaining any knowledge on such an EIV stochastic model. We demonstrate that the linear equations for the estimation of variance components could be ill-conditioned, if the variance components are theoretically estimable. Finally, if the variance components are estimable, we derive the biases of their estimates, which could be significantly amplified due to a large condition number.
Survey Review | 2012
Peiliang Xu; Chuang Shi; Jingnan Liu
The integer least squares (ILS) problem, also known as the weighted closest point problem, is highly interdisciplinary, but no algorithm can find its globally optimal integer solution in polynomial time. We first outline two suboptimal integer solutions, which can be important either in real-time communication systems or for solving high-dimensional GPS integer ambiguity unknowns. We then focus on the most efficient algorithm to search for the exact integer solution, which is shown to be faster than LAMBDA in the sense that the ratio of integer candidates checked by the efficient algorithm to those checked by LAMBDA can be theoretically expressed as r^m, where r ≤ 1 and m is the number of integer unknowns. Finally, we further improve the searching efficiency of the most powerful combined algorithm by implementing two sorting strategies, which can be used either for finding the exact integer solution or for constructing a suboptimal integer solution. Test examples clearly demonstrate that the improved methods perform significantly better than the most powerful combined algorithm at simultaneously finding the optimal and second optimal integer solutions, if the ILS problem cannot be well reduced.
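The search algorithms discussed in the paper are far more efficient, but the ILS problem itself can be stated in a few lines. The brute-force box search and the 2-D weight matrix below are toy assumptions, chosen so that simple rounding of the float solution (a common suboptimal estimate) fails while the exact search succeeds:

```python
import numpy as np
from itertools import product

def ils_bruteforce(Q, a_float, radius=2):
    """Exact ILS min over z of (z - a_float)^T Q (z - a_float), by brute
    force in a small box around the rounded float solution. Only viable
    in low dimensions; real solvers search far more cleverly."""
    best, best_val = None, np.inf
    base = np.round(a_float).astype(int)
    m = len(a_float)
    for offset in product(range(-radius, radius + 1), repeat=m):
        z = base + np.array(offset)
        d = z - a_float
        val = d @ Q @ d
        if val < best_val:
            best, best_val = z, val
    return best, best_val

# Highly correlated weight matrix: plain rounding picks the wrong lattice point.
Q = np.array([[53.4, 38.4], [38.4, 28.0]])
a_float = np.array([1.45, 2.30])
z_round = np.round(a_float).astype(int)   # suboptimal: (1, 2)
z_opt, _ = ils_bruteforce(Q, a_float)     # exact search finds (1, 3)
```

With a nearly diagonal Q the two answers would coincide; strong correlation between the unknowns is exactly what makes reduction and sorted search strategies pay off.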
Journal of Geodesy | 2012
Peiliang Xu
The LLL reduction of lattice vectors and its variants have been widely used to solve the weighted integer least squares (ILS) problem, or equivalently, the weighted closest point problem. Instead of reducing lattice vectors, we propose a parallel Cholesky-based reduction method for positive definite quadratic forms. The new reduction method works directly on the positive definite matrix associated with the weighted ILS problem and is shown to satisfy part of the inequalities required by Minkowski's reduction of positive definite quadratic forms. The complexity of the algorithm can be fixed a priori by limiting the number of iterations. The simulations clearly show that the parallel Cholesky-based reduction method is significantly better than the LLL algorithm at reducing the condition number of the positive definite matrix and, as a result, can significantly reduce the search space for the globally optimal, weighted ILS or maximum likelihood estimate.
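For intuition only, here is a toy two-dimensional reduction in the same spirit: repeated integer Gauss transformations (a Lagrange-type reduction, not the paper's parallel Cholesky-based method) applied directly to an ill-conditioned positive definite matrix. The input matrix is an arbitrary assumption:

```python
import numpy as np

def reduce_2x2(Q):
    """Lagrange/Gauss-style reduction of a 2x2 positive definite matrix
    by unimodular integer transformations Z, so that Z^T Q Z represents
    the same quadratic form on the integers but is better conditioned."""
    Q = Q.astype(float).copy()
    Z = np.eye(2, dtype=int)
    for _ in range(20):
        mu = int(np.round(Q[0, 1] / Q[0, 0]))
        if mu == 0:
            break
        T = np.array([[1, -mu], [0, 1]])   # integer Gauss transformation
        Q = T.T @ Q @ T
        Z = Z @ T
        Q = Q[::-1, ::-1].copy()           # swap the two unknowns and repeat
        Z = Z[:, ::-1]
    return Q, Z

Q0 = np.array([[53.4, 38.4], [38.4, 28.0]])  # nearly singular, cond ~ 320
Qr, Z = reduce_2x2(Q0)                       # reduced form, cond ~ 1.7
```

Because Z is unimodular, the integer search over the reduced form is equivalent to the original one, but the candidate ellipsoid is far rounder.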
Journal of Geodesy | 2014
Peiliang Xu; Jingnan Liu; Wenxian Zeng; Yunzhong Shen
Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method for estimating the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance–covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185–195, 1972) and Davies and Hutton (Biometrika 62:383–391, 1975) are special cases of our study. The theoretical analysis of bias shows that the effect of a random design matrix on adjustment depends on the design matrix itself, the variance–covariance matrix of its elements and the model parameters. Using the derived bias formulae, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We also derive the bias of the weighted LS estimate of the variance of unit weight; the random errors of the design matrix can significantly affect this estimate. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature, and we propose bias-corrected estimates for the variance of unit weight.
Finally, we analyze two examples, on coordinate transformation and climate change, which show that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
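A toy one-parameter simulation (with assumed noise levels, not the paper's formulae) shows the kind of bias that random errors in the design matrix induce in a plain LS estimate, and how knowledge of the regressor noise variance permits a bias correction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, sig_u = 100_000, 2.0, 0.5
x = rng.normal(0.0, 1.0, n)            # error-free regressor
X = x + rng.normal(0.0, sig_u, n)      # observed design "matrix" with EIV noise
y = beta * x + rng.normal(0.0, 0.1, n)

# Ordinary LS regresses y on the noisy X and is attenuated toward zero:
# E[beta_ls] ~ beta * var(x) / (var(x) + sig_u**2) = 2 * 1 / 1.25 = 1.6.
beta_ls = (X @ y) / (X @ X)

# Correcting the normal-equation term X'X for the known EIV noise
# variance removes the attenuation bias.
beta_corr = (X @ y) / (X @ X - n * sig_u**2)
```

The same mechanism inflates the estimated variance of unit weight in ordinary adjustment, since the EIV misfit is charged to the observation residuals.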
Journal of Geodesy | 2015
Yun Shi; Peiliang Xu; Jingnan Liu; Chuang Shi
The purpose of this note is to derive alternative formulae for parameter estimation in partial errors-in-variables models to those given in Xu et al. (J Geod 86:661–675, 2012). We show that the new formulae are more compact and make the corrections to the random elements of the design matrix more straightforward to interpret. Nevertheless, each formula has its own computational advantage, depending on the relationship between the number of measurements y and the number of independent random elements of the design matrix. The original formula is computationally more efficient if the number of measurements y is significantly larger than the number of independent random elements of the design matrix; otherwise, the new alternative formula is much more efficient.
Journal of Geodesy | 2012
Yunzhong Shen; Peiliang Xu; Bofeng Li
A regularized solution is well known to be biased. Although the biases of the estimated parameters can only be computed with the true values of the parameters, we attempt to improve the regularized solution by using the regularized solution itself in place of the true (unknown) parameters to estimate the biases, and then removing the computed biases from the regularized solution. We first analyze the theoretical relationship between the regularized solutions with and without the bias correction, derive the analytical conditions under which a bias-corrected regularized solution performs better than the ordinary regularized solution in terms of mean squared error (MSE), and design the corresponding method to partially correct the biases. We then present two numerical examples to demonstrate the performance of our partially bias-corrected regularization method. The first example is mathematical, with a Fredholm integral equation of the first kind. The simulated results show that the partially bias-corrected regularized solution can improve the MSE of the ordinary regularized solution by 11%. In the second example, we recover gravity anomalies from simulated gravity gradient observations. In this case, our method produces a mean MSE of 3.71 mGal for the resolved mean gravity anomalies, which is 5% better than that of the regularized solution without bias correction. The method is also shown to successfully reduce the absolute maximum bias from 13.6 to 6.8 mGal.
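The bias-correction idea can be sketched in a few lines for Tikhonov (ridge) regularization, whose bias has the closed form -lam * (A^T A + lam*I)^{-1} x_true; in the spirit of the paper, the regularized solution itself stands in for the unknown truth. The problem size, noise level and lam below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 10
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 5.0
N = A.T @ A + lam * np.eye(n)
x_reg = np.linalg.solve(N, A.T @ y)          # ordinary ridge solution

# Ridge bias: E[x_reg] - x_true = -lam * N^{-1} x_true.  Substitute
# x_reg for the unknown x_true, then subtract the estimated bias.
bias_est = -lam * np.linalg.solve(N, x_reg)
x_corr = x_reg - bias_est
```

The correction partially undoes the shrinkage of the regularized estimate; as the paper's analysis shows, whether this lowers the MSE depends on the balance between the removed bias and the extra variance.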
Communications in Statistics - Simulation and Computation | 2000
Peiliang Xu; Seiichi Shimada
A simple multiplicative noise model with a constant signal has become a basic mathematical model in processing synthetic aperture radar images. The purpose of this paper is to examine a general multiplicative noise model with linear signals represented by a number of unknown parameters. The ordinary least squares (LS) and weighted LS methods are used to estimate the model parameters. The biases of the weighted LS estimates of the parameters are derived and then corrected to obtain a second-order unbiased estimator, which is shown to be exactly equivalent to maximum log quasi-likelihood estimation, even though the quasi-likelihood function is founded on a completely different theoretical basis and is known, at present, to be the only acceptable theory for multiplicative noise models. Synthetic simulations are carried out to confirm the theoretical results and to illustrate problems in processing data contaminated by multiplicative noise. The sensitivity of the LS and weighted LS methods to extremely noisy data is analysed through the simulated examples.
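A minimal sketch of the two estimators compared in the paper, on an assumed linear-signal multiplicative noise model y_i = (a_i' beta)(1 + eps_i); the design, parameters and noise level are illustrative assumptions, and the second-order bias correction itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
A = np.column_stack([np.ones(n), rng.uniform(1.0, 2.0, n)])
beta_true = np.array([3.0, 1.5])
signal = A @ beta_true
y = signal * (1.0 + 0.1 * rng.normal(size=n))   # multiplicative noise

# Ordinary LS ignores that the noise standard deviation scales with
# the signal, so it is inefficient (though still unbiased here).
beta_ols = np.linalg.lstsq(A, y, rcond=None)[0]

# Weighted LS: var(y_i) is proportional to signal_i^2, so weight each
# observation by 1/signal^2, iterating with the current fit standing
# in for the unknown signal.
beta_wls = beta_ols.copy()
for _ in range(5):
    w = 1.0 / (A @ beta_wls) ** 2
    Aw = A * w[:, None]
    beta_wls = np.linalg.solve(A.T @ Aw, Aw.T @ y)
```

The iteration re-estimates the weights from the current fit, which is what couples the weighted LS estimate to the signal and produces the bias analyzed in the paper.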