
Publications


Featured research published by Mark A. Lukas.


Mathematics and Computers in Simulation | 2011

Original Article: Comparing parameter choice methods for regularization of ill-posed problems

Frank Bauer; Mark A. Lukas

Abstract: In the literature on regularization, many different parameter choice methods have been proposed in both deterministic and stochastic settings. However, based on the available information, it is not always easy to know how well a particular method will perform in a given situation and how it compares to other methods. This paper reviews most of the existing parameter choice methods, and evaluates and compares them in a large simulation study for spectral cut-off and Tikhonov regularization. The test cases cover a wide range of linear inverse problems with both white and colored stochastic noise. The results show some marked differences between the methods, in particular, in their stability with respect to the noise and its type. We conclude with a table of properties of the methods and a summary of the simulation results, from which we identify the best methods.
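The paper's simulation framework is not reproduced here, but the flavour of one of the simplest methods it reviews, the (Morozov) discrepancy principle for Tikhonov regularization, can be sketched on a small deconvolution problem. Everything below (the blur kernel, signal, noise level and grid of λ values) is an illustrative assumption, not a test case from the paper.

```python
import numpy as np

def tikhonov(K, y, lam):
    """Tikhonov-regularized solution: argmin_f ||K f - y||^2 + lam ||f||^2."""
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)

def mean_sq_residual(K, y, lam):
    r = y - K @ tikhonov(K, y, lam)
    return (r @ r) / len(y)

# Illustrative ill-posed problem: discrete Gaussian blur of a smooth signal
rng = np.random.default_rng(0)
n = 40
i = np.arange(n)
K = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 3.0 ** 2))  # blur kernel
f_true = np.sin(np.pi * i / (n - 1))
sigma = 0.05                                  # known noise standard deviation
y = K @ f_true + sigma * rng.standard_normal(n)

# Discrepancy principle: pick lam whose mean squared residual matches sigma^2
lams = np.logspace(-10, 2, 60)
lam_star = min(lams, key=lambda l: abs(mean_sq_residual(K, y, l) - sigma ** 2))
f_hat = tikhonov(K, y, lam_star)
rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
```

With σ² known, the discrepancy principle matches the residual to the noise level; methods such as GCV avoid needing σ but, as the abstract notes, the methods can differ markedly in stability with respect to the noise and its type.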


The International Journal of Biochemistry & Cell Biology | 2001

The interaction of acyl-CoA with acyl-CoA binding protein and carnitine palmitoyltransferase I

Khaled A.H. Abo-Hashema; Max H. Cake; Mark A. Lukas; Jens Knudsen

The affinity of recombinant rat acyl-CoA binding protein (ACBP) towards acyl-CoAs was investigated using both fluorimetric analysis and isothermal titration microcalorimetry, neither of which requires the physical separation of bound and free ligand for determining the dissociation constants (K(d)). The displacement of 11-(dansylamino)undecanoyl-CoA (DAUDA-CoA) from ACBP yielded binding parameters for the competing acyl-CoAs that compared favourably with those obtained using ultra-sensitive microcalorimetric titration. The K(d) values of ACBP for oleoyl-CoA and docosahexaenoyl-CoA are 0.014 and 0.016 microM, respectively. Under identical experimental conditions, carnitine palmitoyltransferase I (CPT I) of purified rat liver mitochondria has K(d) values of 2.4 and 22.7 microM for oleoyl-CoA and docosahexaenoyl-CoA, respectively. Given that CPT I was not only present at a much lower concentration but also has an appreciably lower affinity for acyl-CoAs than ACBP, it is proposed that CPT I is capable of interacting directly with ACBP-acyl-CoA binary complexes. This is supported by the fact that the enzyme activity correlated with the concentration of ACBP-bound acyl-CoA but not the free acyl-CoA. A transfer of acyl-CoA from ACBP-acyl-CoA binary complexes to CPT I could be a result of the enzyme inducing a conformational alteration in the ACBP leading to the release of acyl-CoA.


Computational Statistics & Data Analysis | 2002

An L1 estimation algorithm with degeneracy and linear constraints

Mingren Shi; Mark A. Lukas

An implementation of the reduced gradient algorithm is proposed to solve the linear L1 estimation problem (least absolute deviations regression) with linear equality or inequality constraints, including rank deficient and degenerate cases. Degenerate points are treated by solving a derived L1 problem to give a descent direction. The algorithm is a direct descent, active set method that is shown to be finite; it is geometrically motivated and simpler than the projected gradient algorithm (PGA) of Bartels, Conn and Sinclair, which uses a penalty function approach for the constrained case. Computational experiments indicate that the proposed algorithm compares favourably, both in reliability and efficiency, to the PGA, to the algorithms ACM551 and AFK (which use an LP formulation of the L1 problem) and to LPASL1 (which is based on the Huber approximation method of Madsen, Nielsen and Pinar). Although it is not as efficient as ACM552 (Barrodale-Roberts algorithm) on large scale unconstrained problems, it performs better on large scale problems with bounded variable constraints.
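The reduced gradient algorithm itself is not reproduced here; for orientation, the standard LP formulation of the unconstrained L1 problem (the approach behind codes such as ACM551 and AFK mentioned above) can be sketched with an off-the-shelf LP solver. The data and the outlier are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Least absolute deviations fit via the standard LP formulation:
    minimize sum(u + v) subject to X b + u - v = y, u >= 0, v >= 0, b free."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# 19 points exactly on the line y = 1 + 2x, plus one gross outlier
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
y[5] += 10.0
beta = l1_regression(X, y)   # recovers [1.0, 2.0] despite the outlier
```

The fit passes through the 19 clean points regardless of the outlier's size, the robustness property that motivates L1 (least absolute deviations) regression.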


Mathematics of Computation | 2011

Differentiation of matrix functionals using triangular factorization

F. R. de Hoog; R. S. Anderssen; Mark A. Lukas

In various applications, it is necessary to differentiate a matrix functional w(A(x)), where A(x) is a matrix depending on a parameter vector x. Usually, the functional itself can be readily computed from a triangular factorization of A(x). This paper develops several methods that also use the triangular factorization to efficiently evaluate the first and second derivatives of the functional. Both the full and sparse matrix situations are considered. There are similarities between these methods and algorithmic differentiation. However, the methodology developed here is explicit, leading to new algorithms. It is shown how the methods apply to several applications where the functional is a log determinant, including spline smoothing, covariance selection and restricted maximum likelihood.
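As a minimal dense illustration of the log-determinant case (the paper also treats sparse matrices and second derivatives), one can compute w(A) = log det A and dw/dx = tr(A⁻¹ dA/dx) from a Cholesky factorization without forming A⁻¹ explicitly. The matrix family A(x) = B + xI below is an illustrative assumption.

```python
import numpy as np

def logdet_and_grad(A, dA):
    """Return log det A and d/dx log det A(x) = tr(A^{-1} dA/dx), computed
    from a Cholesky factorization A = L L^T without forming A^{-1}."""
    L = np.linalg.cholesky(A)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    Z = np.linalg.solve(L.T, np.linalg.solve(L, dA))   # Z = A^{-1} dA
    return logdet, np.trace(Z)

# Finite-difference check on the illustrative family A(x) = B + x I
rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5))
B = G @ G.T + 5 * np.eye(5)             # symmetric positive definite
x, h = 0.5, 1e-6
A = B + x * np.eye(5)
ld, g = logdet_and_grad(A, np.eye(5))   # dA/dx = I here
fd = (np.log(np.linalg.det(B + (x + h) * np.eye(5)))
      - np.log(np.linalg.det(B + (x - h) * np.eye(5)))) / (2 * h)
```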


Computational Statistics & Data Analysis | 2004

Sensitivity analysis of constrained linear L1 regression: perturbations to response and predictor variables

Mingren Shi; Mark A. Lukas

The active set framework of the reduced gradient algorithm is used to develop a direct sensitivity analysis of linear L1 (least absolute deviations) regression with linear equality and inequality constraints on the parameters. We investigate the effect on the L1 regression estimate of a perturbation to the values of the response or predictor variables. For observations with nonzero residuals, we find intervals for the values of the variables for which the estimate is unchanged. For observations with zero residuals, we find the change in the estimate due to a small perturbation to the variable value. The results provide practical diagnostic formulae. They quantify some robustness properties of constrained L1 regression and show that it is stable, but not uniformly stable. The level of sensitivity to perturbations depends on the degree of collinearity in the model and, for predictor variables, also on how close the estimate is to being nonunique. The results are illustrated with numerical simulations on examples including curve fitting and derivative estimation using trigonometric series.
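The constrained regression setting is not reproduced here, but the interval-stability property described above has a familiar one-dimensional analogue: fitting a constant by least absolute deviations gives the sample median, which is unchanged by perturbing an observation with nonzero residual until that observation crosses the fit.

```python
import numpy as np

# Fitting a constant by least absolute deviations gives the sample median.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
est = np.median(y)          # 3.0; the size of the outlier is irrelevant

# Perturbing an observation with nonzero residual leaves the estimate
# unchanged while it stays on the same side of the fit ...
y_pert = y.copy()
y_pert[4] = 50.0            # still above the fit: median is still 3.0

# ... but once it crosses, the estimate moves.
y_cross = y.copy()
y_cross[4] = 0.0            # now below the fit: median becomes 2.0
```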


Bulletin of The Australian Mathematical Society | 1995

On the discrepancy principle and generalised maximum likelihood for regularisation

Mark A. Lukas

Let f_{nλ} be the regularised solution of a general, linear operator equation, Kf_0 = g, from discrete, noisy data y_i = g(x_i) + e_i, i = 1, …, n, where the e_i are uncorrelated random errors with variance σ². In this paper, we consider the two well-known methods – the discrepancy principle and generalised maximum likelihood (GML) – for choosing the crucial regularisation parameter λ. We investigate the asymptotic properties as n → ∞ of the "expected" estimates λ_D and λ_M corresponding to these two methods respectively. It is shown that if f_0 is sufficiently smooth, then λ_D is weakly asymptotically optimal (ao) with respect to the risk and an L² norm on the output error. However, λ_D oversmooths for all sufficiently large n and also for all sufficiently small σ². If f_0 is not too smooth relative to the regularisation space W, then λ_D can also be weakly ao with respect to a whole class of loss functions involving stronger norms on the input error. For the GML method, we show that if f_0 is smooth relative to W (for example f_0 ∈ W^{θ,2}, θ > m, if W = W^{m,2}), then λ_M is asymptotically sub-optimal and undersmoothing with respect to all of the loss functions above.
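For reference, in the abstract's notation the discrepancy principle selects λ_D as the solution of the standard equation matching the residual to the noise level (this is the usual textbook form, assuming uncorrelated errors of variance σ², not a restatement of the paper's precise formulation):

```latex
\frac{1}{n}\sum_{i=1}^{n}\bigl(K f_{n\lambda}(x_i) - y_i\bigr)^2 = \sigma^2
```

GML instead chooses λ_M to maximise the likelihood of the data under a stochastic model for f_0, which requires no knowledge of σ².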


Journal of Computational and Applied Mathematics | 2010

Efficient algorithms for robust generalized cross-validation spline smoothing

Mark A. Lukas; Frank de Hoog; R. S. Anderssen

Generalized cross-validation (GCV) is a widely used parameter selection criterion for spline smoothing, but it can give poor results if the sample size n is not sufficiently large. An effective way to overcome this is to use the more stable criterion called robust GCV (RGCV). The main computational effort for the evaluation of the GCV score is the trace of the smoothing matrix, trA, while the RGCV score requires both trA and trA^2. Since 1985, there has been an efficient O(n) algorithm to compute trA. This paper develops two pairs of new O(n) algorithms to compute trA and trA^2, which allow the RGCV score to be calculated efficiently. The algorithms involve the differentiation of certain matrix functionals using banded Cholesky decomposition.
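The O(n) algorithms depend on the paper's banded-factorization machinery; below is only a dense O(n³) sketch of how trA and trA² enter the two scores, using a discrete second-difference (Whittaker) smoother as a stand-in for spline smoothing and the robust score in the form RGCV(λ) = (γ + (1−γ)μ(λ))V(λ) with μ = trA²/n. The smoother, data and γ value are illustrative assumptions.

```python
import numpy as np

def gcv_and_rgcv(y, lam, gamma=0.3):
    """GCV and robust GCV scores for a discrete smoother f_lam = A(lam) y,
    A(lam) = (I + lam D'D)^{-1} with D the second-difference matrix.
    Dense O(n^3) evaluation; the paper's algorithms get trA, trA^2 in O(n)."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference penalty
    A = np.linalg.inv(np.eye(n) + lam * (D.T @ D))
    r = y - A @ y
    trA, trA2 = np.trace(A), np.trace(A @ A)
    V = n * (r @ r) / (n - trA) ** 2             # GCV score
    mu = trA2 / n
    return V, (gamma + (1 - gamma) * mu) * V     # (GCV, robust GCV)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(60)
lams = np.logspace(-4, 2, 30)
scores = [gcv_and_rgcv(y, l) for l in lams]
lam_rgcv = lams[np.argmin([rv for _, rv in scores])]
```

Since the eigenvalues of A lie in (0, 1], μ ≤ 1 and the robust score never exceeds the GCV score; its flatter minimum is what stabilizes the choice of λ for small n.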


Numerical Functional Analysis and Optimization | 1995

Convergence rates for moment collocation solutions of linear operator equations

Mark A. Lukas

Given a linear operator equation Kf = g with data g(x_i), i = 1, …, n, we consider the general moment collocation solution defined as the function f_n that minimizes ||Pf_n||² over a Hilbert space, subject to Kf_n(x_i) = g(x_i), i = 1, …, n. Here P is an orthogonal projection with a finite-dimensional null space. In the case P = I, the identity, it is known that if a certain kernel depending on K is continuous, then f_n → f_0, the true solution, as the maximum subinterval width → 0. Moreover, if the kernel satisfies a smoothness condition, then rates of convergence are known. In this paper we extend these results to the case of general P.
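A finite-dimensional analogue (an illustrative sketch, not the paper's Hilbert-space setting): minimizing ||Pc||² subject to collocation-type constraints Ac = b, with P an orthogonal projection having a small null space, reduces to a linear KKT system. The matrices P, A and data b below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 5, 3
# P: orthogonal projection whose (finite-dimensional) null space is span{e_1}
P = np.eye(m)
P[0, 0] = 0.0
A = rng.standard_normal((k, m))   # "collocation" constraints A c = b
b = rng.standard_normal(k)

# Stationarity P c + A^T mu = 0 together with feasibility A c = b
KKT = np.block([[P, A.T], [A, np.zeros((k, k))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(m), b]))
c, mu = sol[:m], sol[m:]
```

The solution satisfies the constraints exactly while making ||Pc|| no larger than that of any other feasible point, mirroring the minimization of ||Pf_n||² over the interpolating functions.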


Inverse Problems | 2006

Robust generalized cross-validation for choosing the regularization parameter

Mark A. Lukas


Archive | 1980

The application and numerical solution of integral equations

R. S. Anderssen; Frank de Hoog; Mark A. Lukas

Collaboration


Dive into Mark A. Lukas's collaboration.

Top Co-Authors


R. S. Anderssen

Commonwealth Scientific and Industrial Research Organisation

Frank de Hoog

Commonwealth Scientific and Industrial Research Organisation

Mingren Shi

University of Southern Queensland
