Eric de Sturler
Virginia Tech
Publications
Featured research published by Eric de Sturler.
Plant Physiology | 2007
Xin-Guang Zhu; Eric de Sturler; Stephen P. Long
The distribution of resources between enzymes of photosynthetic carbon metabolism might be assumed to have been optimized by natural selection. However, natural selection for survival and fecundity does not necessarily select for maximal photosynthetic productivity. Further, the concentration of a key substrate, atmospheric CO2, has changed more over the past 100 years than over the past 25 million years, with the likelihood that natural selection has had inadequate time to reoptimize resource partitioning for this change. Could photosynthetic rate be increased by altered partitioning of resources among the enzymes of carbon metabolism? This question is addressed using an “evolutionary” algorithm to progressively search for multiple alterations in partitioning that increase photosynthetic rate. To do this, we extended existing metabolic models of C3 photosynthesis by including the photorespiratory pathway (PCOP) and metabolism to starch and sucrose to develop a complete dynamic model of photosynthetic carbon metabolism. The model consists of linked differential equations, each representing the change of concentration of one metabolite. Initial concentrations of metabolites and maximal activities of enzymes were extracted from the literature. The dynamics of CO2 fixation and metabolite concentrations were realistically simulated by numerical integration, such that the model could mimic well-established physiological phenomena. For example, a realistic steady-state rate of CO2 uptake was attained and then reattained after perturbing O2 concentration. Using an evolutionary algorithm, partitioning of a fixed total amount of protein-nitrogen between enzymes was allowed to vary. The individual with the highest light-saturated photosynthetic rate was selected and used to seed the next generation. After 1,500 generations, photosynthesis was increased substantially. This suggests that the “typical” partitioning in C3 leaves might be suboptimal for maximizing the light-saturated rate of photosynthesis. An overinvestment in PCOP enzymes and underinvestment in Rubisco, sedoheptulose-1,7-bisphosphatase, and fructose-1,6-bisphosphate aldolase were indicated. An increase in sink capacity, such as an increase in ADP-glucose pyrophosphorylase, was also indicated to lead to an increased CO2 uptake rate. These results suggest that manipulation of partitioning could greatly increase carbon gain without any increase in the total protein-nitrogen investment in the apparatus for photosynthetic carbon metabolism.
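As an illustration of the search strategy this abstract describes, the Python sketch below evolves a fixed nitrogen budget partitioned among a handful of enzymes: partitions are mutated, renormalized so the total stays fixed, and the fittest individual seeds the next generation. The objective function, enzyme weights, and mutation settings are invented stand-ins for the paper's kinetic model, not its actual equations.

    import numpy as np

    rng = np.random.default_rng(0)

    def photosynthetic_rate(fractions):
        # Toy stand-in for the dynamic metabolism model: a concave reward
        # for investing a unit nitrogen budget in enzymes of unequal,
        # assumed importance (weights chosen only for illustration).
        weights = np.array([4.0, 2.5, 1.5, 1.0, 0.5])
        return float(np.sum(weights * np.sqrt(fractions)))

    n_enzymes, pop_size, generations = 5, 20, 1500
    # Start from a uniform ("typical") partitioning of protein-nitrogen.
    population = np.full((pop_size, n_enzymes), 1.0 / n_enzymes)

    for _ in range(generations):
        # Mutate each individual, then renormalize: total nitrogen is fixed.
        mutants = np.abs(population + rng.normal(0.0, 0.02, population.shape))
        mutants /= mutants.sum(axis=1, keepdims=True)
        fitness = np.array([photosynthetic_rate(m) for m in mutants])
        # The fittest individual seeds the entire next generation.
        population = np.tile(mutants[fitness.argmax()], (pop_size, 1))

    print("optimized partitioning:", population[0].round(3))

Under this toy objective the search reallocates nitrogen toward the high-weight enzymes, mirroring the paper's finding that shifting investment between enzymes can raise the modeled rate without increasing total protein-nitrogen.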
SIAM Journal on Scientific Computing | 2006
Michael L. Parks; Eric de Sturler; Greg Mackey; Duane D. Johnson; Spandan Maiti
Many problems in science and engineering require the solution of a long sequence of slowly changing linear systems. We propose and analyze two methods that significantly reduce the total number of matrix-vector products required to solve all systems. We consider the general case where both the matrix and right-hand side change, and we make no assumptions regarding the change in the right-hand sides. Furthermore, we consider general nonsingular matrices, and we do not assume that all matrices are pairwise close or that the sequence of matrices converges to a particular matrix. Our methods work well under these general assumptions, and hence represent a significant advance over related work in this area. We can reduce the cost of solving subsequent systems in the sequence by recycling selected subspaces generated for previous systems. We consider two approaches that allow for the continuous improvement of the recycled subspace at low cost. We consider both Hermitian and non-Hermitian problems, and we analyze our algorithms both theoretically and numerically to illustrate the effects of subspace recycling. We also demonstrate the effectiveness of our algorithms for a range of applications from computational mechanics, materials science, and computational physics.
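The Python sketch below shows the basic mechanics of subspace recycling in the GCRO style underlying this work: directions collected in U from earlier systems are fitted directly by a small least-squares solve, and GMRES only iterates on the remainder, orthogonally to A times range(U). The test matrix and the choice of U are illustrative assumptions, not the full algorithms developed in the paper.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def solve_with_recycled_subspace(A, b, U):
        # One GCRO-style solve with a recycled subspace U.
        C, R = np.linalg.qr(A @ U)             # orthonormalize A @ U
        Uh = U @ np.linalg.inv(R)              # so that A @ Uh = C, C^T C = I
        x0 = Uh @ (C.T @ b)                    # minimal-residual part in range(U)
        r0 = b - A @ x0                        # equals (I - C C^T) b
        n = len(b)
        def projected_matvec(v):
            w = A @ v
            return w - C @ (C.T @ w)           # inner iteration sees (I - C C^T) A
        z, info = gmres(LinearOperator((n, n), matvec=projected_matvec), r0)
        return x0 + z - Uh @ (C.T @ (A @ z))   # recombine outer and inner parts

    rng = np.random.default_rng(1)
    n = 300
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    U = np.eye(n)[:, :5]                       # stand-in for a recycled subspace
    x = solve_with_recycled_subspace(A, b, U)
    print("residual:", np.linalg.norm(b - A @ x))

In the papers' setting, U would be updated cheaply after each solve (for example from approximate invariant subspaces), so later systems in the sequence start with the useful directions already resolved.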
SIAM Journal on Numerical Analysis | 1999
Eric de Sturler
Optimal Krylov subspace methods like GMRES and GCR must compute an orthogonal basis for the entire Krylov subspace to obtain the minimal residual approximation to the solution. Therefore, when the number of iterations becomes large, the amount of work and the storage requirements become excessive. In practice one has to limit the resources. The most obvious ways to do this are to restart GMRES after some number of iterations and to keep only some number of the most recent vectors in GCR. This may lead to very poor convergence and even stagnation. Therefore, we describe a method that reveals which subspaces of the Krylov space were important for convergence thus far and exactly how important they are. This information is then used to select which subspace to keep for orthogonalizing future search directions. Numerical results indicate this to be a very effective strategy.
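A small SciPy experiment makes the trade-off concrete: short restarts bound storage and orthogonalization work per cycle, but can greatly increase the number of matrix-vector products. The test matrix (a 1D Laplacian) and the restart lengths are arbitrary choices for illustration, not from the paper.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 400
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    def run(restart):
        count = [0]
        def mv(v):
            count[0] += 1                      # count matrix-vector products
            return T @ v
        x, info = gmres(LinearOperator((n, n), matvec=mv), b,
                        restart=restart, maxiter=5000)
        return count[0], np.linalg.norm(b - T @ x)

    for m in (5, 20, n):                       # restart=n behaves like full GMRES
        matvecs, res = run(m)
        print(f"restart={m:3d}: {matvecs:5d} matvecs, residual {res:.1e}")

The paper's contribution is a smarter middle ground: instead of discarding everything at a restart, keep the subspace that mattered most for convergence so far.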
SIAM Journal on Scientific Computing | 2005
Eric de Sturler; Jörg Liesen
We study block-diagonal preconditioners and an efficient variant of constraint preconditioners for general two-by-two block linear systems with zero (2,2)-block. We derive block-diagonal preconditioners from a splitting of the (1,1)-block of the matrix. From the resulting preconditioned system we derive a smaller, so-called related system that yields the solution of the original problem. Solving the related system corresponds to an efficient implementation of constraint preconditioning. We analyze the properties of both classes of preconditioned matrices, in particular their spectra. Using analytical results, we show that the related system matrix has the more favorable spectrum, which in many applications translates into faster convergence for Krylov subspace methods. We show that fast convergence depends mainly on the quality of the splitting, a topic for which a substantial body of theory exists. Our analysis also provides a number of new relations between block-diagonal preconditioners and constraint preconditioners. For constrained problems, solving the related system produces iterates that satisfy the constraints exactly, just as for systems with a constraint preconditioner. Finally, for the Lagrange multiplier formulation of a constrained optimization problem we show how scaling nonlinear constraints can dramatically improve the convergence for linear systems in a Newton iteration. Our theoretical results are confirmed by numerical experiments on a constrained optimization problem. We consider the general, nonsymmetric, nonsingular case. Our only additional requirement is the nonsingularity of the Schur-complement-type matrix derived from the splitting that defines the preconditioners. In particular, the (1,2)-block need not equal the transposed (2,1)-block, and the (1,1)-block might be indefinite or even singular. This is the first paper in a two-part sequence. In the second paper we will study the use of our preconditioners in a variety of applications.
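A minimal sketch of the block-diagonal construction, assuming a diagonal splitting of the (1,1)-block; the matrices are random stand-ins chosen so that the Schur-complement-type block is nonsingular, and the code illustrates the structure of the preconditioner rather than the paper's analysis.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(2)
    n, m = 80, 20
    A = np.diag(rng.uniform(2.0, 5.0, n)) + 0.2 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((m, n))            # (1,2)-block need not equal (2,1)^T
    K = np.block([[A, B], [C, np.zeros((m, m))]])
    rhs = rng.standard_normal(n + m)

    M = np.diag(np.diag(A))                    # splitting of the (1,1)-block
    S = C @ np.linalg.solve(M, B)              # Schur-complement-type matrix
    def apply_prec(v):
        # Block-diagonal preconditioner diag(M, S) applied as an inverse.
        return np.concatenate([np.linalg.solve(M, v[:n]),
                               np.linalg.solve(S, v[n:])])
    P = LinearOperator((n + m, n + m), matvec=apply_prec)
    x, info = gmres(K, rhs, M=P)
    print("converged:", info == 0, "residual:", np.linalg.norm(rhs - K @ x))

As the abstract notes, the quality of the splitting M of the (1,1)-block largely determines how fast the preconditioned Krylov iteration converges.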
ACM Transactions on Graphics | 2002
Alla Sheffer; Eric de Sturler
Texture is an essential component of computer generated models. For a texture mapping procedure to be effective it has to generate continuous textures and cause only small mapping distortion. The Angle Based Flattening (ABF) parameterization method is guaranteed to provide a continuous (no foldovers) mapping. It also minimizes the angular distortion of the parameterization, including locating the optimal planar domain boundary. However, since it concentrates on minimizing the angular distortion of the mapping, it can introduce relatively large linear distortion.

In this paper we introduce a new procedure for reducing length distortion of an existing parameterization and apply it to ABF results. The linear distortion reduction is added as a second step in a texture mapping computation. The new method is based on computing a mapping from the plane to itself which has length distortion very similar to that of the ABF parameterization. By applying the inverse mapping to the result of the initial parameterization, we obtain a new parameterization with low length distortion. We notice that the procedure for computing the inverse mapping can be applied to any other (convenient) mapping from the three-dimensional surface to the plane in order to improve it.

The mapping in the plane is computed by applying weighted Laplacian smoothing to a Cartesian grid covering the planar domain of the initial mapping. Both the mapping and its inverse are provably continuous. Since angle preserving (conformal) mappings, such as ABF, locally preserve distances as well, the planar mapping has small local deformation. As a result, the inverse mapping does not significantly increase the angular distortion.

The combined texture mapping procedure provides a mapping with low distance and angular distortion, which is guaranteed to be continuous.
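The planar smoothing step can be sketched in a few lines of Python: a Cartesian grid relaxes under weighted Laplacian smoothing, with the weights standing in, by assumption, for the measured length distortion of the initial parameterization (the weight field below is invented for illustration).

    import numpy as np

    m = 32
    xs, ys = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m))
    grid = np.stack([xs, ys], axis=-1)          # (m, m, 2) node positions
    w = 1.0 + 3.0 * xs                          # assumed distortion weights

    # Jacobi-style weighted Laplacian smoothing: every interior node moves
    # to the weighted average of its four neighbors; the boundary is fixed.
    for _ in range(200):
        nbr = np.zeros_like(grid)
        wsum = np.zeros((m, m, 1))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ws = np.roll(w, (dx, dy), axis=(0, 1))[..., None]
            nbr += ws * np.roll(grid, (dx, dy), axis=(0, 1))
            wsum += ws
        new = nbr / wsum
        grid[1:-1, 1:-1] = new[1:-1, 1:-1]

    widths = np.diff(grid[m // 2, :, 0])
    print("cell widths now range from", widths.min(), "to", widths.max())

The relaxed grid has smaller cells where the weights are large; composing the initial parameterization with the inverse of such a grid map is the mechanism the paper uses to trade angular preservation for reduced length distortion.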
SIAM Journal on Scientific Computing | 2005
Misha E. Kilmer; Eric de Sturler
We discuss the efficient solution of a long sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. The reconstruction of three-dimensional absorption and scattering information by matching computed solutions from a parameterized model to measured data leads to a nonlinear least squares problem that we solve using the Gauss-Newton method with a line search. This algorithm requires the solution of a long sequence of linear systems. Each choice of parameters in the nonlinear least squares algorithm results in a different matrix describing the optical properties of the medium. These matrices change slowly from one step to the next, but may change significantly over many steps. For each matrix we must solve a set of linear systems with multiple shifts and multiple right-hand sides. For this problem, we derive strategies for recycling Krylov subspace information that exploit properties of the application and the nonlinear optimization algorithm to significantly reduce the total number of iterations over all linear systems. Furthermore, we introduce variants of GCRO that exploit symmetry and that allow simultaneous solution of multiple shifted systems using a single Krylov subspace in combination with recycling. Although we focus on a particular application and optimization algorithm, our approach is applicable generally to problems where sequences of linear systems must be solved. This may guide other researchers to exploit the opportunities of tunable solvers. We provide results for two sets of numerical experiments to demonstrate the effectiveness of the resulting method.
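One property such solvers exploit is that shifted matrices share Krylov subspaces: K_m(A, b) = K_m(A + sigma*I, b). The Python sketch below builds one Arnoldi basis and reuses it to solve several shifted systems at once; the matrix, shifts, and basis size are illustrative assumptions, not the paper's GCRO variants.

    import numpy as np

    def arnoldi(A, b, m):
        # Plain Arnoldi: A @ Q[:, :m] = Q @ H with H of shape (m+1, m).
        n = len(b)
        Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
        return Q, H

    rng = np.random.default_rng(3)
    n, m = 200, 60
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    Q, H = arnoldi(A, b, m)                     # built once, reused for all shifts
    beta = np.linalg.norm(b)
    E = np.vstack([np.eye(m), np.zeros((1, m))])
    for sigma in (0.0, 0.5, 2.0):
        # GMRES on the shifted system reduces to a small least-squares solve.
        e1 = np.zeros(m + 1); e1[0] = beta
        y, *_ = np.linalg.lstsq(H + sigma * E, e1, rcond=None)
        x = Q[:, :m] @ y
        r = np.linalg.norm(b - (A + sigma * np.eye(n)) @ x)
        print(f"shift {sigma}: residual {r:.2e}")

The expensive work, building the orthonormal basis Q, is paid once; each additional shift costs only a small dense least-squares problem.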
International Conference on Computational Science | 2001
Milind A. Bhandarkar; Laxmikant V. Kalé; Eric de Sturler; Jay Hoeflinger
Parallel Computational Science and Engineering (CSE) applications often exhibit irregular structure and dynamic load patterns. Many such applications have been developed using MPI. Incorporating dynamic load balancing techniques at the application level involves significant changes to the design and structure of applications. On the other hand, traditional run-time systems for MPI do not support dynamic load balancing. Object-based parallel programming languages, such as Charm++, support efficient dynamic load balancing using object migration. However, converting legacy MPI applications to such object-based paradigms is cumbersome. This paper describes an implementation of MPI, called Adaptive MPI (AMPI), that supports dynamic load balancing for MPI applications. Conversion from MPI to this platform is straightforward even for large legacy codes. We describe our positive experience in converting the component codes ROCFLO and ROCSOLID of a Rocket Simulation application to AMPI.
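For context, here is a minimal MPI program of the style AMPI targets, sketched in Python with mpi4py (an assumed stand-in for illustration; the paper's component codes are large Fortran/C MPI applications). The point is that such code needs little or no change to gain AMPI's migration-based load balancing, since each MPI rank becomes a migratable user-level thread under the Charm++ runtime.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # Each rank computes a local partial result; under AMPI the runtime
    # may migrate overloaded ranks between processors transparently.
    local = sum(i * i for i in range(rank * 1000, (rank + 1) * 1000))
    total = comm.allreduce(local, op=MPI.SUM)
    if rank == 0:
        print("sum of squares over", size, "ranks:", total)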
SIAM Journal on Numerical Analysis | 2006
Chris Siefert; Eric de Sturler
We propose and examine block-diagonal preconditioners and variants of indefinite preconditioners for block two-by-two generalized saddle-point problems. That is, we consider the nonsymmetric, nonsingular case where the (2,2) block is small in norm, and we are particularly concerned with the case where the (1,2) block is different from the transposed (2,1) block. We provide theoretical and experimental analyses of the convergence and eigenvalue distributions of the preconditioned matrices. We also extend the results of [de Sturler and Liesen, SIAM J. Sci. Comput., 26 (2005), pp. 1598-1619] to matrices with nonzero (2,2) block and to the use of approximate Schur complements. To demonstrate the effectiveness of these preconditioners we show convergence results, spectra, and eigenvalue bounds for two model Navier-Stokes problems.
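The kind of eigenvalue analysis reported here can be explored numerically. The sketch below uses random stand-in matrices, a diagonal splitting, and an approximate Schur complement (all assumptions, not the paper's model problems) to print the spectrum of one block-diagonally preconditioned generalized saddle-point matrix.

    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 60, 20
    A = np.diag(rng.uniform(1.0, 4.0, n)) + 0.1 * rng.standard_normal((n, n))
    B1 = rng.standard_normal((m, n))
    B2 = rng.standard_normal((m, n))            # (1,2)-block != transposed (2,1)-block
    C = 1e-3 * np.eye(m)                        # (2,2)-block small in norm
    K = np.block([[A, B1.T], [B2, -C]])

    D = np.diag(np.diag(A))                     # splitting of the (1,1)-block
    S = B2 @ np.linalg.solve(D, B1.T) + C       # approximate Schur complement
    P = np.block([[D, np.zeros((n, m))], [np.zeros((m, n)), S]])
    eig = np.linalg.eigvals(np.linalg.solve(P, K))
    print("preconditioned spectrum: real part in "
          f"[{eig.real.min():.2f}, {eig.real.max():.2f}], "
          f"max |imag| = {abs(eig.imag).max():.2f}")

A tightly clustered spectrum of the preconditioned matrix is what signals fast Krylov convergence; the paper derives bounds on exactly these eigenvalues.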
SIAM Journal on Scientific Computing | 2012
Kapil Ahuja; Eric de Sturler; Serkan Gugercin; Eun R. Chang
Science and engineering problems frequently require solving a sequence of dual linear systems. Besides having to store only a few Lanczos vectors, using the biconjugate gradient method (BiCG) to solve dual linear systems has advantages for specific applications. For example, using BiCG to solve the dual linear systems arising in interpolatory model reduction provides a backward error formulation in the model reduction framework. Using BiCG to evaluate bilinear forms, for example in quantum Monte Carlo (QMC) methods for electronic structure calculations, leads to a quadratic error bound. Since our focus is on sequences of dual linear systems, we introduce recycling BiCG, a BiCG method that recycles two Krylov subspaces from one pair of dual linear systems to the next pair. The derivation of recycling BiCG also builds the foundation for developing recycling variants of other bi-Lanczos based methods, such as CGS, BiCGSTAB, QMR, and TFQMR. We develop an augmented bi-Lanczos algorithm and a modified two-term recurrence.
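A bare-bones sketch of the dual-system property that recycling BiCG builds on: one BiCG iteration stream, seeded with the dual right-hand side as the shadow residual, yields iterates for both A x = b and A^T xt = bt simultaneously. This is plain BiCG in real arithmetic on invented test data, not the paper's recycling variant.

    import numpy as np

    def dual_bicg(A, b, bt, tol=1e-8, maxiter=500):
        # Two coupled recurrences: (r, p) drive A x = b, while the shadow
        # quantities (rt, pt) simultaneously drive A^T xt = bt.
        x, xt = np.zeros_like(b), np.zeros_like(bt)
        r, rt = b.copy(), bt.copy()
        p, pt = r.copy(), rt.copy()
        rho = rt @ r
        for _ in range(maxiter):
            q, qt = A @ p, A.T @ pt
            alpha = rho / (pt @ q)
            x += alpha * p;  xt += alpha * pt
            r -= alpha * q;  rt -= alpha * qt
            if max(np.linalg.norm(r), np.linalg.norm(rt)) < tol:
                break
            rho, rho_old = rt @ r, rho
            beta = rho / rho_old
            p = r + beta * p;  pt = rt + beta * pt
        return x, xt

    rng = np.random.default_rng(5)
    n = 100
    A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
    b, bt = rng.standard_normal(n), rng.standard_normal(n)
    x, xt = dual_bicg(A, b, bt)
    print(np.linalg.norm(b - A @ x), np.linalg.norm(bt - A.T @ xt))

Only a few vectors are stored per iteration, which is the memory advantage the abstract contrasts with long-recurrence methods like GMRES.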
IEEE International Conference on High Performance Computing, Data, and Analytics | 1994
Eric de Sturler; Henk A. van der Vorst
On large distributed memory parallel computers the global communication cost of inner products seriously limits the performance of Krylov subspace methods [3]. We consider improved algorithms that reduce this communication overhead, and we analyze their performance through experiments on a 400-processor parallel computer and with a simple performance model.
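The central remedy in this line of work can be illustrated with mpi4py (an assumed stand-in for the paper's setting): several inner products are combined into a single global reduction, rather than paying one latency-bound synchronization per inner product.

    import numpy as np
    from mpi4py import MPI

    # Each Allreduce costs a full network round-trip, so computing local
    # partial sums first and reducing them together removes two of the
    # three global synchronization points.
    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.Get_rank())
    v, w, z = (rng.standard_normal(10000) for _ in range(3))

    local = np.array([v @ w, v @ z, w @ z])     # three local partial sums
    result = np.empty(3)
    comm.Allreduce(local, result, op=MPI.SUM)   # one reduction instead of three

Restructuring a Krylov method so that its inner products can be batched this way is exactly the kind of algorithmic change the paper evaluates on a 400-processor machine.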