Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where K. J. Daun is active.

Publication


Featured research published by K. J. Daun.


Applied Optics | 2006

Deconvolution of axisymmetric flame properties using Tikhonov regularization

K. J. Daun; Kevin A. Thomson; Fengshan Liu; Greg Smallwood

We present a method based on Tikhonov regularization for solving one-dimensional inverse tomography problems that arise in combustion applications. In this technique, Tikhonov regularization transforms the ill-conditioned set of equations generated by onion-peeling deconvolution into a well-conditioned set that is less susceptible to measurement errors that arise in experimental settings. The performance of this method is compared to that of onion-peeling and Abel three-point deconvolution by solving for a known field variable distribution from projected data contaminated with an artificially generated error. The results show that Tikhonov deconvolution provides a more accurate field distribution than onion-peeling and Abel three-point deconvolution and is more stable than the other two methods as the distance between projected data points decreases.
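In outline, the method replaces the direct onion-peeling inversion with a regularized least-squares solve. The following is a minimal numpy sketch under simplifying assumptions: a uniform ring discretization, a first-difference smoothing operator, and a hand-picked regularization parameter, none of which are necessarily the paper's exact choices.

```python
import numpy as np

def onion_peeling_matrix(n, dr=1.0):
    """Chord length of line of sight i (rows) through annular
    ring j (columns) for an axisymmetric field."""
    A = np.zeros((n, n))
    r = np.arange(n + 1) * dr      # ring boundaries
    y = np.arange(n) * dr          # line-of-sight offsets
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                             - np.sqrt(r[j]**2 - y[i]**2))
    return A

def tikhonov_deconvolve(A, p, lam):
    """Solve min ||A f - p||^2 + lam^2 ||L f||^2, where L is a
    first-difference operator that penalizes rough solutions."""
    n = A.shape[1]
    L = np.eye(n) - np.eye(n, k=1)
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ p)
```

With noisy projected data, increasing `lam` trades fidelity to the data for smoothness of the recovered field distribution.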


Journal of Heat Transfer: Transactions of the ASME | 2006

Comparison of Methods for Inverse Design of Radiant Enclosures

K. J. Daun; Francis Henrique Ramos França; Marvin E. Larsen; Guillaume Leduc; John R. Howell

A particular inverse design problem is proposed as a benchmark for comparison of five solution techniques used in design of enclosures with radiating sources. The enclosure is three-dimensional and includes some surfaces that are diffuse and others that are specular-diffuse. Two aspect ratios are treated. The problem is completely described, and solutions are presented as obtained by the Tikhonov method, truncated singular value decomposition, conjugate gradient regularization, quasi-Newton minimization, and simulated annealing. All of the solutions use a common set of exchange factors computed by Monte Carlo, and smoothed by a constrained maximum likelihood estimation technique that imposes conservation, reciprocity, and non-negativity. Solutions obtained by the various methods are presented and compared, and the relative advantages and disadvantages of these methods are summarized.
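Of the five techniques, truncated singular value decomposition is the most compact to illustrate. A minimal numpy sketch on a generic ill-conditioned system (the test matrix and truncation level below are illustrative, not the benchmark enclosure problem):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular
    values, discarding the small ones that amplify noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

On a badly conditioned system with slightly noisy data, the truncated solution is far closer to the true one than a direct solve, at the cost of losing the components in the discarded directions.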


Journal of Heat Transfer: Transactions of the ASME | 2003

Geometric optimization of radiant enclosures containing specular surfaces

K. J. Daun; David P. Morton; John R. Howell

This paper presents an optimization methodology for designing radiant enclosures containing specularly-reflecting surfaces. The optimization process works by making intelligent perturbations to the enclosure geometry at each design iteration using specialized numerical algorithms. This procedure requires far less time than the forward "trial-and-error" design methodology, and the final solution is near optimal. The radiant enclosure is analyzed using a Monte Carlo technique based on exchange factors, and the design is optimized using the Kiefer-Wolfowitz method. The optimization design methodology is demonstrated by solving two industrially-relevant design problems involving two-dimensional enclosures that contain specular surfaces. DOI: 10.1115/1.1599369
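The Kiefer-Wolfowitz method needs only objective evaluations: it estimates the gradient by central finite differences of the (possibly noisy) objective and descends with decaying gain and difference-width sequences. A one-parameter sketch, in which the quadratic objective and the gain sequences are illustrative stand-ins for the paper's Monte Carlo-evaluated enclosure objective:

```python
def kiefer_wolfowitz(f, x0, iters=100):
    """Kiefer-Wolfowitz stochastic approximation: descend along a
    central-finite-difference gradient estimate of f, with decaying
    gain a_k and difference width c_k."""
    x = x0
    for k in range(iters):
        a_k = 0.5 / (k + 2)            # gain sequence
        c_k = 0.1 / (k + 1) ** (1 / 3)  # difference width
        grad = (f(x + c_k) - f(x - c_k)) / (2 * c_k)
        x -= a_k * grad
    return x

# Illustrative objective: a single geometric parameter with optimum at 3.
x_opt = kiefer_wolfowitz(lambda u: (u - 3.0) ** 2 + 1.0, 8.0)
```

The decaying sequences are what let the iteration average out noise in the objective evaluations when, as in the paper, each evaluation comes from a Monte Carlo simulation.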


Applied Optics | 2008

Parameter selection methods for axisymmetric flame tomography through Tikhonov regularization

Emil O. Akesson; K. J. Daun

Deconvolution of optically collected axisymmetric flame data is equivalent to solving an ill-posed problem subject to severe error amplification. Tikhonov regularization has recently been shown to be well suited for stabilizing this deconvolution, although the success of this method hinges on choosing a suitable regularization parameter. Incorporating a parameter selection scheme transforms this technique into a reliable automatic algorithm that outperforms unregularized deconvolution of a smoothed data set, which is currently the most popular way to analyze axisymmetric data. We review the discrepancy principle, L-curve curvature, and generalized cross-validation parameter selection schemes and conclude that the L-curve curvature algorithm is best suited to this problem.
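The L-curve approach plots solution norm against residual norm over a sweep of regularization parameters and selects the "corner", the point of maximum curvature in log-log coordinates. A minimal numpy sketch (identity-operator Tikhonov on a generic ill-conditioned system, with the curvature evaluated numerically; the paper's formulation may differ in these details):

```python
import numpy as np

def l_curve_select(A, b, lams):
    """Sweep regularization parameters, record residual norm and
    solution norm, and return the parameter at the point of maximum
    curvature of the L-curve in log-log coordinates."""
    n = A.shape[1]
    rhos, etas = [], []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        rhos.append(np.linalg.norm(A @ x - b))
        etas.append(np.linalg.norm(x))
    lr, le, t = np.log(rhos), np.log(etas), np.log(lams)
    # numerical curvature of the parametric curve (lr(t), le(t))
    dr, de = np.gradient(lr, t), np.gradient(le, t)
    ddr, dde = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * dde - de * ddr) / (dr**2 + de**2) ** 1.5
    return lams[np.argmax(kappa)], np.array(rhos), np.array(etas)
```

As the parameter grows, the residual norm rises monotonically while the solution norm falls; the corner marks the transition between under- and over-regularization.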


Journal of Heat Transfer: Transactions of the ASME | 2008

Investigation of Thermal Accommodation Coefficients in Time-Resolved Laser-Induced Incandescence

K. J. Daun; Gregory J. Smallwood; F. Liu

Accurate particle sizing through time-resolved laser-induced incandescence (TR-LII) requires knowledge of the thermal accommodation coefficient, but the underlying physics of this parameter is poorly understood. If the particle size is known a priori, however, TR-LII data can instead be used to infer the thermal accommodation coefficient. Thermal accommodation coefficients measured between soot and different monatomic and polyatomic gases show that the accommodation coefficient increases with molecular mass for monatomic gases and is lower for polyatomic gases. This latter result indicates that surface energy is accommodated preferentially into translational modes over internal modes for these gases.


Applied Optics | 2012

Laser-absorption tomography beam arrangement optimization using resolution matrices

Matthew G. Twynstra; K. J. Daun

Laser-absorption tomography experiments infer the concentration distribution of a gas species from the attenuation of lasers transecting the flow field. Although reconstruction accuracy strongly depends on the layout of optical components, to date experimentalists have had no way to predict the performance of a given beam arrangement. This paper shows how the mathematical properties of the coefficient matrix are related to the information content of the attenuation data, which, in turn, forms a basis for a beam-arrangement design algorithm that minimizes the reliance on additional assumed information about the concentration distribution. When applied to a simulated laser-absorption tomography experiment, optimized beam arrangements are shown to produce more accurate reconstructions compared to other beam arrangements presented in the literature.
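The idea can be illustrated with the resolution matrix of a Tikhonov-regularized reconstruction, R = (AᵀA + λ²I)⁻¹AᵀA: since the reconstruction satisfies x̂ ≈ R·x, the closer R is to the identity, the less the result leans on assumed prior information. A toy numpy sketch comparing two axis-aligned beam arrangements (the grid size, regularization, and beam layouts are illustrative; the paper treats general beam arrangements):

```python
import numpy as np

def resolution_matrix(A, lam):
    """Tikhonov resolution matrix R = (A^T A + lam^2 I)^-1 A^T A.
    Its deviation from the identity measures how much of the
    reconstruction is supplied by the prior rather than the data."""
    n = A.shape[1]
    M = A.T @ A
    return np.linalg.solve(M + lam**2 * np.eye(n), M)

def beam_matrix(n, horizontal=True, vertical=False):
    """Path-length matrix for axis-aligned beams crossing an
    n x n pixel grid with unit pixel size."""
    rows = []
    if horizontal:
        for i in range(n):
            a = np.zeros((n, n)); a[i, :] = 1.0
            rows.append(a.ravel())
    if vertical:
        for j in range(n):
            a = np.zeros((n, n)); a[:, j] = 1.0
            rows.append(a.ravel())
    return np.array(rows)
```

Scoring arrangements by ||R − I||_F ranks them without ever needing a phantom distribution: adding the second fan of beams measurably pulls R toward the identity.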


Combustion Theory and Modelling | 2012

Application of the conditional source-term estimation model for turbulence–chemistry interactions in a premixed flame

M. Mahdi Salehi; W. K. Bushe; K. J. Daun

Conditional Source-term Estimation (CSE) is a closure model for turbulence–chemistry interactions. The model uses the first-order CMC hypothesis to close the chemical reaction source terms, and the conditional scalar field is estimated by solving an integral equation using inverse methods. CSE was originally developed, and has been used extensively, for non-premixed combustion; this work is its first application to a premixed flame. CSE is coupled with a Trajectory Generated Low-Dimensional Manifold (TGLDM) model for chemistry, and the combined CSE-TGLDM model is used in a RANS code to simulate a turbulent premixed Bunsen burner. A similar model that relies on the flamelet assumption is also run for comparison, and the two approaches' predictions of the velocity field, temperature, and species mass fractions are compared. Although the flamelet model is less computationally expensive, the CSE model is more general and does not carry the limiting assumption underlying the flamelet model.


Metallurgical and Materials Transactions B: Process Metallurgy and Materials Processing Science | 2013

Experimental Characterization of Heat Transfer Coefficients During Hot Forming Die Quenching of Boron Steel

Etienne Caron; K. J. Daun; Mary A. Wells

The heat transfer coefficient (HTC) between the sheet metal and the cold tool is required to predict the final microstructure and mechanical properties of parts manufactured via hot forming die quenching. Temperature data obtained from hot stamping experiments conducted on boron steel blanks were processed using an inverse heat conduction algorithm to calculate heat fluxes and temperatures at the blank/die interface. The effect of the thermocouple response time on the calculated heat flux was compensated for by minimizing the heat imbalance between the blank and the die. Peak HTCs obtained at the end of the stamping phase match steady-state model predictions. At higher blank temperatures, the time-dependent deformation of contact asperities is associated with a transient regime in which the calculated HTCs are a function of the initial stamping temperature.
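The essence of the inverse calculation can be shown with a much simpler lumped-capacitance analogue, in which the HTC follows directly from the measured cooling rate of a thermally thin blank. This is an illustrative sketch, not the paper's inverse heat conduction algorithm, and the property values in the usage below are placeholders:

```python
import numpy as np

def htc_from_cooling_curve(t, T, T_die, rho_cp_thickness):
    """Infer the interfacial heat transfer coefficient from a blank
    cooling curve, assuming a lumped (thermally thin) blank:
        rho*cp*L * dT/dt = -h * (T - T_die)
        =>  h = -rho*cp*L * (dT/dt) / (T - T_die)
    """
    dTdt = np.gradient(T, t)            # numerical cooling rate
    return -rho_cp_thickness * dTdt / (T - T_die)
```

For real stamping data the lumped assumption fails and a full inverse heat conduction solve is needed, but the sketch shows why the recovered HTC is so sensitive to how the temperature derivative (and hence the thermocouple response) is handled.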


Numerical Heat Transfer, Part A: Applications | 2004

Optimization of transient heater settings to provide spatially uniform heating in manufacturing processes involving radiant heating

K. J. Daun; John R. Howell; David P. Morton

This article presents an optimization methodology for finding the heater settings that provide spatially uniform transient heating in manufacturing processes involving radiant heating. Equations governing the transient temperature and temperature sensitivity distributions over the product are first derived using an infinitesimal-area technique and then solved numerically to calculate the objective function and gradient vector. Minimization is done using a quasi-Newton algorithm that incorporates an active set method to enforce design constraints. This methodology is demonstrated by finding the optimal transient heater settings of a two-dimensional annealing furnace.
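The structure of the problem, a constrained least-squares fit of heater settings to a target heating distribution, can be sketched as follows. Projected gradient descent stands in for the paper's quasi-Newton/active-set scheme, and the linear heater-to-surface map Q is an illustrative placeholder for the infinitesimal-area radiant exchange model:

```python
import numpy as np

def optimize_heaters(Q, t_target, u_max, iters=500, step=None):
    """Find heater settings u in [0, u_max] minimizing the
    nonuniformity ||Q u - t_target||^2, where Q maps heater powers
    to heating at the product surface. The projection (clip)
    enforces the design constraints at every iteration."""
    if step is None:
        step = 0.5 / np.linalg.norm(Q, 2) ** 2  # safe for this quadratic
    u = np.zeros(Q.shape[1])
    for _ in range(iters):
        grad = 2.0 * Q.T @ (Q @ u - t_target)
        u = np.clip(u - step * grad, 0.0, u_max)
    return u
```

In the transient problem Q and the target vary in time, so a constrained solve like this is repeated, or coupled, across the heating schedule.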


Applied Optics | 2011

Infrared species tomography of a transient flow field using Kalman filtering

K. J. Daun; Steven Lake Waslander; Brandon B. Tulloch

In infrared species tomography, the unknown concentration distribution of a species is inferred from the attenuation of multiple collimated light beams shone through the measurement field. The resulting set of linear equations is rank-deficient, so prior assumptions about the smoothness and nonnegativity of the distribution must be imposed to recover a solution. This paper describes how the Kalman filter can be used to incorporate additional information about the time evolution of the distribution into the reconstruction. Results show that, although performing a series of static reconstructions is more accurate at low levels of measurement noise, the Kalman filter becomes advantageous when the measurements are corrupted with high levels of noise. The Kalman filter also enables signal multiplexing, which can help achieve the high sampling rates needed to resolve turbulent flow phenomena.
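A minimal predict/update cycle for this setting, with a random-walk process model standing in for a flow-evolution model (the matrices below are illustrative, and the nonnegativity prior mentioned above is not enforced in this sketch):

```python
import numpy as np

def kalman_step(x, P, b, A, Q, R):
    """One predict/update cycle for tomographic reconstruction.
    Process model: x_k = x_{k-1} + w, with covariance Q.
    Measurement model: b_k = A x_k + v, with covariance R."""
    P = P + Q                           # predict (state unchanged)
    S = A @ P @ A.T + R                 # innovation covariance
    K = P @ A.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (b - A @ x)             # measurement update
    P = (np.eye(len(x)) - K @ A) @ P
    return x, P
```

Even though A is rank-deficient, the filter steadily pulls the estimate toward consistency with the beam attenuation data while the covariance carries information forward between frames.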

Collaboration


Dive into K. J. Daun's collaborations.

Top Co-Authors

John R. Howell

University of Texas at Austin

F. Liu

National Research Council

Christof Schulz

University of Duisburg-Essen

Thomas Dreier

University of Duisburg-Essen
