Ove Edlund
Luleå University of Technology
Publication
Featured research published by Ove Edlund.
Computational Optimization and Applications | 2014
Per Bergström; Ove Edlund
Registration of point sets is done by finding a rotation and translation that produce a best fit between a set of data points and a set of model points. We use robust M-estimation techniques to limit the influence of outliers; more specifically, a modified version of the iterative closest point (ICP) algorithm in which iteratively reweighted least squares incorporates the robustness. We prove convergence of this algorithm with respect to the value of the objective function. We also compare different criterion functions to assess their ability to produce appropriate point-set fits when the data points contain outliers. The robust methods prove superior to least-squares minimization in this setting.
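The IRLS scheme described in the abstract can be sketched for the simplified case where correspondences between data and model points are already known, so the closest-point search of ICP is omitted. The Huber weight function, the MAD-based scale estimate, and all parameter values below are common illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weights: 1 for small normalized residuals, k/|r| beyond k."""
    w = np.ones_like(r)
    large = r > k
    w[large] = k / r[large]
    return w

def weighted_rigid_fit(P, Q, w):
    """Rotation R and translation t minimizing sum_i w_i ||R p_i + t - q_i||^2
    (weighted Kabsch / Procrustes via SVD)."""
    w = w / w.sum()
    pbar, qbar = w @ P, w @ Q
    H = (P - pbar).T @ ((Q - qbar) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, qbar - R @ pbar

def irls_registration(P, Q, n_iter=20):
    """Robust registration of corresponding point sets P -> Q via IRLS."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        resid = np.linalg.norm(P @ R.T + t - Q, axis=1)
        scale = np.median(resid) / 0.6745 + 1e-12   # robust scale estimate
        w = huber_weights(resid / scale)
        R, t = weighted_rigid_fit(P, Q, w)
    return R, t
```

Each pass re-estimates the weights from the current residuals, so gross outliers are progressively down-weighted while the weighted least-squares fit stays a closed-form SVD problem.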
Numerical Algorithms | 2017
Per Bergström; Ove Edlund
The problem of finding a rigid-body transformation that aligns a set of data points with a given surface, using a robust M-estimation technique, is considered. A refined iterative closest point (ICP) algorithm is described in which a minimization problem of point-to-plane distances, with a proposed constraint, is solved in each iteration to find an updating transformation. The constraint is derived from a sum of weighted squared point-to-point distances and forms a natural trust region, which ensures convergence. Only a small number of additional computations is required to use it. Two alternative trust regions are introduced and analyzed. Finally, numerical results for some test problems are presented. These results show a significant advantage, with respect to convergence rate and accuracy, of the proposed trust-region approach over point-to-point distance minimization, as well as over point-to-plane distance minimization with a Newton-type update and no step-size control.
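The point-to-plane minimization inside each ICP iteration is typically linearized with a small-rotation approximation R ≈ I + [ω]×, which turns it into a 6-variable least-squares problem. The sketch below shows that linearized step only, without the paper's trust-region constraint; function and variable names are illustrative:

```python
import numpy as np

def point_to_plane_step(P, Q, N, w=None):
    """One linearized point-to-plane update: find (omega, t) minimizing
    sum_i w_i (n_i . ((I + [omega]x) p_i + t - q_i))^2,
    where N holds the unit surface normals n_i at the matched points q_i."""
    if w is None:
        w = np.ones(len(P))
    # Row i of A is [(p_i x n_i)^T, n_i^T]; the residual is A @ x - b.
    A = np.hstack([np.cross(P, N), N])
    b = -np.einsum('ij,ij->i', P - Q, N)
    sw = np.sqrt(w)[:, None]
    x, *_ = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)
    return x[:3], x[3:]   # rotation vector omega, translation t
```

Because the rotation enters linearly through ω, the update is a single weighted least-squares solve; the step returned here is exactly the kind of Newton-type update that, per the abstract, benefits from a trust-region or step-size control when the linearization is pushed too far.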
Computational Statistics & Data Analysis | 2005
Ove Edlund; Håkan Ekblom
Constrained M-estimates for regression have previously been proposed as an alternative class of robust regression estimators with high breakdown point and high asymptotic efficiency. These are closely related to S-estimates, and it is shown that in some cases they necessarily coincide. It has been difficult to use CM-estimators in practice for two reasons: adequate computational methods have been lacking, and there has been some confusion concerning the tuning parameters. Both of these problems are addressed: an updated table for choosing suitable parameter values is given, and an algorithm to compute CM-estimates for regression is presented; it can also be used to compute S-estimates. The computational problem to be solved is global optimization with an inequality constraint. The algorithm consists of two phases. The first phase finds suitable starting values for the local optimization; the second phase, the efficient computation of a local minimum, is described in more detail. MATLAB code is freely available online. A Monte Carlo simulation using this code tests the performance of both the estimator and the algorithm.
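The two-phase structure (generate candidate starting values, then refine each locally and keep the best) is common to many robust-regression solvers and can be sketched generically. The sketch below uses ordinary Huber M-estimation via IRLS as the local phase, random elemental subsets as the starting phase, and a median-absolute-residual criterion to pick the winner; it illustrates the pattern only and is not the paper's CM-estimator or its actual algorithm:

```python
import numpy as np

def huber_irls(X, y, beta0, k=1.345, n_iter=50):
    """Local phase: Huber M-estimate of regression via IRLS from a given start."""
    beta = beta0.copy()
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

def two_phase_estimate(X, y, n_starts=20, rng=None):
    """Phase 1: starting values from random elemental subsets.
    Phase 2: local IRLS refinement; keep the fit with the smallest
    median absolute residual (an LMS-style selection criterion)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    best, best_loss = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)   # elemental subset
        try:
            b0 = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue
        b = huber_irls(X, y, b0)
        loss = np.median(np.abs(y - X @ b))
        if loss < best_loss:
            best, best_loss = b, loss
    return best
```

The random restarts guard against the local phase being trapped near a fit dominated by outliers, which is the essential difficulty of the global optimization problem the abstract describes.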
BIT Numerical Mathematics | 1997
Ove Edlund
A subproblem in the trust-region algorithm for non-linear M-estimation by Ekblom and Madsen is to find the restricted step. It is found by computing the M-estimator of the linearized model, subject to an L2-norm bound on the variables. In this paper it is shown that this subproblem can be solved by applying Hebden iterations to the minimizer of the Lagrangian function. The new method is compared with an augmented Lagrangian implementation.
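For a quadratic model with positive-definite Hessian H and gradient g, the restricted step solves min ½xᵀHx + gᵀx subject to ‖x‖ ≤ Δ, and the Lagrange multiplier λ is determined by ‖x(λ)‖ = Δ with x(λ) = −(H + λI)⁻¹g. Hebden's idea is to apply Newton's method to the nearly linear function φ(λ) = 1/‖x(λ)‖ − 1/Δ rather than to ‖x(λ)‖ − Δ directly. A minimal sketch of this classical quadratic-model version follows (the paper treats the M-estimation Lagrangian, which is more involved):

```python
import numpy as np

def hebden(H, g, delta, n_iter=20):
    """Find lam >= 0 with ||x(lam)|| = delta, where x(lam) = -(H + lam I)^{-1} g,
    by Newton iteration on phi(lam) = 1/||x(lam)|| - 1/delta (Hebden iteration).
    Assumes H is symmetric positive definite."""
    x = np.linalg.solve(H, -g)
    if np.linalg.norm(x) <= delta:
        return 0.0, x                         # unconstrained minimizer is feasible
    lam = 0.0
    n = len(g)
    for _ in range(n_iter):
        x = np.linalg.solve(H + lam * np.eye(n), -g)
        nx = np.linalg.norm(x)
        # d||x||/dlam = -x^T (H + lam I)^{-1} x / ||x||
        q = np.linalg.solve(H + lam * np.eye(n), x)
        dn = -(x @ q) / nx
        phi = 1.0 / nx - 1.0 / delta
        dphi = -dn / nx**2
        lam = max(lam - phi / dphi, 0.0)      # Newton step, kept nonnegative
    x = np.linalg.solve(H + lam * np.eye(n), -g)
    return lam, x
```

Working with 1/‖x(λ)‖, which is nearly linear in λ, is what makes the iteration converge in just a few steps, whereas Newton on ‖x(λ)‖ − Δ itself can be slow when λ must grow large.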
COMPSTAT, 24–28 August 1998 | 1998
Håkan Ekblom; Ove Edlund
We consider the problem of fitting a model of the form y = f(x, β) to a set of points (x_i, y_i), i = 1, …, n. If there are measurement or observation errors in x as well as in y, we have the so-called errors-in-variables problem with model equation

y_i = f(x_i + δ_i, β) + ε_i,  i = 1, …, n,  (1)

where δ_i ∈ ℝ^m, i = 1, …, n, are the errors in x_i ∈ ℝ^m. The problem is then to find a vector of parameters β ∈ ℝ^p that minimizes the errors ε_i and δ_i in some loss function subject to (1). We present algorithms using more robust alternatives to the least-squares criterion. Figure 1 gives examples where the least squares (L2), least absolute deviation (L1) and Huber criteria are used.

Journal of Statistical Software | 2004
Ove Edlund

The International Journal of Advanced Manufacturing Technology | 2011
Per Bergström; Ove Edlund; Inge Söderkvist

Metrika | 2002
O. Arslan; Ove Edlund; Håkan Ekblom

ACM Transactions on Mathematical Software | 2002
Ove Edlund
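The three criteria compared in the errors-in-variables abstract above penalize a residual r differently: L2 grows quadratically, L1 linearly, and Huber switches from quadratic to linear at a tuning constant k. A minimal sketch of the three loss functions (k = 1.345 is a common default, not necessarily the paper's choice):

```python
import numpy as np

def l2_loss(r):
    """Least squares: quadratic everywhere, so large outliers dominate the fit."""
    return 0.5 * r**2

def l1_loss(r):
    """Least absolute deviation: linear everywhere, robust but not smooth at 0."""
    return np.abs(r)

def huber_loss(r, k=1.345):
    """Huber: quadratic for |r| <= k, linear beyond; a smooth compromise."""
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r**2, k * (a - 0.5 * k))
```

Beyond the threshold the Huber loss grows only linearly, so a gross error contributes a bounded amount per unit of residual, which is the mechanism behind the robustness of the algorithms described above.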
Computational Statistics | 1997
Ove Edlund; Håkan Ekblom; Kaj Madsen