Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where C. F. Jeff Wu is active.

Publication


Featured research published by C. F. Jeff Wu.


Technometrics | 2008

Bayesian Hierarchical Modeling for Integrating Low-Accuracy and High-Accuracy Experiments

Peter Z. G. Qian; C. F. Jeff Wu

Standard practice when analyzing data from different types of experiments is to treat each type separately. By borrowing strength across multiple sources, an integrated analysis can produce better results, but careful adjustments must be made to incorporate the systematic differences among experiments. Toward this end, some Bayesian hierarchical Gaussian process models are proposed. The heterogeneity among different sources is accounted for by performing flexible location and scale adjustments. The approach tends to produce predictions closer to those from the high-accuracy experiment. The Bayesian computations are aided by the use of Markov chain Monte Carlo and sample average approximation algorithms. The proposed method is illustrated with two examples: one with detailed and approximate finite element simulations for mechanical material design, and the other with physical and computer experiments for modeling a food processor.
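The location-scale adjustment idea can be illustrated with a toy sketch (a deliberately simplified, non-Bayesian version, not the paper's hierarchical Gaussian process model): regress the high-accuracy responses on the low-accuracy responses at common inputs, leaving a residual discrepancy that would, in the paper's framework, itself be modeled with a Gaussian process.

```python
import numpy as np

def location_scale_adjust(y_low, y_high):
    """Fit y_high ~ a + b * y_low at common inputs by least squares.
    Returns (a, b, residuals); the residuals are the remaining
    discrepancy, which a fuller treatment would model with a GP."""
    A = np.column_stack([np.ones_like(y_low), y_low])
    coef = np.linalg.lstsq(A, y_high, rcond=None)[0]
    return coef[0], coef[1], y_high - A @ coef
```

Here `a` plays the role of a location adjustment and `b` a scale adjustment; the actual paper lets both vary flexibly rather than fixing them as constants.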


Technometrics | 1997

Columnwise-pairwise algorithms with applications to the construction of supersaturated designs

William Li; C. F. Jeff Wu

Motivated by the construction of supersaturated designs, we develop a class of algorithms called columnwise-pairwise exchange algorithms. They differ from the k-exchange algorithms in two respects: (1) They exchange columns instead of rows of the design matrix, and (2) they employ a pairwise adjustment in the search for a “better” column. The proposed algorithms perform very well in the construction of supersaturated designs both for a single criterion and for multiple criteria. They are also applicable to the construction of designs that are not supersaturated.
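The columnwise search idea can be sketched as follows (a minimal illustration of exchanging entries within columns to improve the E(s²) criterion, not the authors' exact algorithm):

```python
import numpy as np

def esq(D):
    # E(s^2) criterion: mean squared inner product over distinct column pairs
    k = D.shape[1]
    S = D.T @ D
    return np.mean(S[np.triu_indices(k, 1)] ** 2)

def cp_search(D, max_passes=50):
    # Column-by-column search: within each column, try swapping a (+1, -1)
    # entry pair (which preserves column balance) and keep any swap that
    # lowers E(s^2); stop when a full pass yields no improvement.
    D = D.copy()
    best = esq(D)
    for _ in range(max_passes):
        improved = False
        for j in range(D.shape[1]):
            col = D[:, j]
            pairs = [(p, m) for p in np.flatnonzero(col == 1)
                            for m in np.flatnonzero(col == -1)]
            for p, m in pairs:
                D[p, j], D[m, j] = -1, 1
                val = esq(D)
                if val < best - 1e-12:
                    best, improved = val, True
                    break          # column changed; continue with fresh pairs
                D[p, j], D[m, j] = 1, -1
        if not improved:
            break
    return D, best
```

Because only entries within a column are exchanged, every column stays balanced, which is why columnwise moves suit supersaturated designs better than row exchanges.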


ACS Nano | 2009

Optimizing and Improving the Growth Quality of ZnO Nanowire Arrays Guided by Statistical Design of Experiments

Sheng Xu; Nagesh Adiga; Shan Ba; Tirthankar Dasgupta; C. F. Jeff Wu; Zhong Lin Wang

Controlling the morphology of as-synthesized nanostructures is usually challenging, and general theoretical guidance for the experimental approach is lacking. In this study, a novel way of optimizing the aspect ratio of hydrothermally grown ZnO nanowire (NW) arrays is presented, utilizing a systematic statistical design and analysis method. We use the pick-the-winner rule and one-pair-at-a-time main effect analysis to sequentially design the experiments and identify optimal reaction settings. By controlling the hydrothermal reaction parameters (reaction temperature, time, precursor concentration, and capping agent), we improved the aspect ratio of ZnO NWs from around 10 to nearly 23. The effect of noise on the experimental results was identified and successfully reduced, and the statistical design and analysis methods were very effective in reducing the number of experiments performed and in identifying the optimal experimental settings. In addition, the antireflection spectrum of the as-synthesized ZnO NWs clearly shows that a higher aspect ratio of the ZnO NW arrays leads to about 30% stronger suppression of emission in the UV-vis range. This demonstrates great potential for application as antireflective coating layers in photovoltaic devices.


Technometrics | 2008

Gaussian Process Models for Computer Experiments With Qualitative and Quantitative Factors

Peter Z. G. Qian; Huaiqing Wu; C. F. Jeff Wu

Modeling experiments with qualitative and quantitative factors is an important issue in computer modeling. We propose a framework for building Gaussian process models that incorporate both types of factors. The key to the development of these new models is an approach for constructing correlation functions with qualitative and quantitative factors. An iterative estimation procedure is developed for the proposed models. Modern optimization techniques are used in the estimation to ensure the validity of the constructed correlation functions. The proposed method is illustrated with an example involving a known function and a real example for modeling the thermal distribution of a data center.
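The flavor of such a correlation function can be sketched as follows (a simplified illustration, not the paper's exact construction): multiply a Gaussian correlation in the quantitative inputs by a valid cross-correlation matrix over the levels of the qualitative factor. The exchangeable cross-correlation matrix and the parameter values below are illustrative assumptions.

```python
import numpy as np

def cross_corr(L, rho=0.5):
    """Exchangeable cross-correlation over L qualitative levels:
    1 on the diagonal, rho (in (-1/(L-1), 1)) off the diagonal."""
    return rho * np.ones((L, L)) + (1.0 - rho) * np.eye(L)

def corr_matrix(X, z, theta, T):
    """Correlation matrix for points with quantitative inputs X (n x d)
    and qualitative levels z (n,), using the product correlation
    exp(-sum_j theta_j (x_ij - x_kj)^2) * T[z_i, z_k]."""
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences
    quant = np.exp(-np.einsum('j,ikj->ik', theta, diff ** 2))
    return quant * T[np.ix_(z, z)]
```

Since the entrywise product of two positive semidefinite matrices is positive semidefinite, any valid `T` keeps the combined correlation matrix valid, which is the essential constraint the paper's estimation procedure must enforce.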


Technometrics | 1997

Optimal Blocking Schemes for 2^n and 2^(n−p) Designs

Don X. Sun; C. F. Jeff Wu; Youyi Chen

Systematic sources of variations in factorial experiments can be effectively reduced without biasing the estimates of the treatment effects by grouping the runs into blocks. For full factorial designs, optimal blocking schemes are obtained by applying the minimum aberration criterion to the block defining contrast subgroup. A related concept of order of estimability is proposed. For fractional factorial designs, because of the intrinsic difference between treatment factors and block variables, the minimum aberration approach has to be modified. A concept of admissible blocking schemes is proposed for selecting block designs based on multiple criteria. The resulting 2^n and 2^(n−p) designs are shown to have better overall properties for practical experiments than those in the literature.


Annals of Statistics | 2009

Construction of nested space-filling designs

Peter Z. G. Qian; Mingyao Ai; C. F. Jeff Wu

New types of designs called nested space-filling designs have been proposed for conducting multiple computer experiments with different levels of accuracy. In this article, we develop several approaches to constructing such designs. The development of these methods also leads to the introduction of several new discrete mathematics concepts, including nested orthogonal arrays and nested difference matrices.


Journal of Building Performance Simulation | 2014

Uncertainty quantification of microclimate variables in building energy models

Yuming Sun; Yeonsook Heo; Matthias H. Y. Tan; Huizhi Xie; C. F. Jeff Wu; Godfried Augenbroe

The last decade has seen a surge in the need for uncertainty analysis (UA) for building energy assessment. The rigorous determination of uncertainty in model parameters is a vital but often overlooked part of UA. To undertake this, one has to turn one's attention to a thriving area in engineering statistics known as uncertainty quantification (UQ). This paper applies dedicated methods and theories that are emerging in this area of statistics to the field of building energy models, and specifically to the microclimate variables embedded in them. We argue that knowing the uncertainty in these variables is a vital prerequisite for any ensuing UA of whole-building behaviour. Indeed, significant discrepancies have been observed between the predicted and measured state variables of building microclimates. This paper uses a set of approaches from the growing UQ arsenal, mostly regression-based methods, to develop statistical models that quantify the uncertainties in the most significant microclimate variables: local temperature, wind speed, wind pressure, and solar irradiation. These are the microclimate variables used by building energy models to define boundary conditions that encapsulate the interaction of the building with the surrounding physical environment. Although our analysis is generically applicable to any of the current energy models, we base our UQ examples on the energy model used in EnergyPlus.


Technometrics | 2008

The Future of Industrial Statistics: A Panel Discussion

David M. Steinberg; Søren Bisgaard; Necip Doganaksoy; N. I. Fisher; Bert Gunter; Gerald J. Hahn; Sallie Keller-McNulty; Jon R. Kettenring; William Q. Meeker; Douglas C. Montgomery; C. F. Jeff Wu

Technometrics was founded in 1959 as a forum for publishing statistical methods and applications in engineering and the physical and chemical sciences. The expanding role of statistics in industry was a major stimulus, and, throughout the years, many articles in the journal have been motivated by industrial problems. In this panel discussion we look ahead to the future of industrial statistics. Ten experts, encompassing a range of backgrounds, experience, and expertise, answered my request to share with us their thoughts on what lies ahead in industrial statistics. Short biographical sketches of the panelists are provided at the end of the discussion. The panelists wrote independent essays, which I have combined into an integrated discussion. Most of the essays were written as responses to a list of 10 questions that I provided to help the participants direct their thoughts. I have organized the discussion in the same fashion, stating the questions and then providing the related responses. Several discussants added remarks on the role of statistics journals, particularly of Technometrics, and I have added that as a final question. We see this article not as the end of the story, but rather as the takeoff point for further discussion. To that end, we are initiating an open discussion forum; to participate, go to http://www.asq.org/pub/techno/ and click on Networking and Events. The American Society for Quality will host the forum, and Bert Gunter has graciously agreed to serve as moderator.


Technometrics | 2013

Sequential Design and Analysis of High-Accuracy and Low-Accuracy Computer Codes

Shifeng Xiong; Peter Z. G. Qian; C. F. Jeff Wu

A growing trend in engineering and science is to use multiple computer codes with different levels of accuracy to study the same complex system. We propose a framework for sequential design and analysis of a pair of high-accuracy and low-accuracy computer codes. It first runs the two codes with a pair of nested Latin hypercube designs (NLHDs). Data from the initial experiment are used to fit a prediction model. If the accuracy of the fitted model is less than a prespecified threshold, the two codes are evaluated again with input values chosen in an elaborate fashion so that their expanded scenario sets still form a pair of NLHDs. The nested relationship between the two scenario sets makes it easier to model and calibrate the difference between the two sources. If necessary, this augmentation process can be repeated a number of times until the prediction model based on all available data has reasonable accuracy. The effectiveness of the proposed method is illustrated with several examples. Matlab codes are provided in the online supplement to this article.
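The nested Latin hypercube idea behind this framework can be sketched as follows (a simplified one-shot construction assuming n_large = c * n_small, not the paper's full sequential augmentation): the small design's points are a subset of the large design's, the large design is a Latin hypercube on the fine grid, and the small design projects to a Latin hypercube on the coarse grid.

```python
import numpy as np

def nested_lhd(n_small, c, d, rng):
    """Return (D_large, D_small) in [0,1]^d where D_small is the first
    n_small rows of D_large, D_large is a Latin hypercube with
    n_small * c levels per dimension, and D_small occupies each of the
    n_small coarse bins exactly once per dimension."""
    n_large = n_small * c
    levels = np.empty((n_large, d), dtype=int)
    for j in range(d):
        coarse = rng.permutation(n_small)              # coarse-grid LHD levels
        # place each small point in one fine sub-level of its coarse bin
        small = coarse * c + rng.integers(0, c, size=n_small)
        rest = np.setdiff1d(np.arange(n_large), small)  # unused fine levels
        levels[:n_small, j] = small
        levels[n_small:, j] = rng.permutation(rest)
    D_large = (levels + 0.5) / n_large                  # midpoints of fine cells
    return D_large, D_large[:n_small]
```

The nesting is what lets the difference between the two code outputs be modeled at the shared points, as the abstract describes.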


Journal of the American Statistical Association | 2008

Statistical Modeling and Analysis for Robust Synthesis of Nanostructures

Tirthankar Dasgupta; Christopher Ma; V. Roshan Joseph; Zhong Lin Wang; C. F. Jeff Wu

We systematically investigate the best process conditions that ensure synthesis of different types of one-dimensional cadmium selenide nanostructures with high yield and reproducibility. Through a designed experiment and rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial generalized linear model is proposed and used. The optimum process conditions, which maximize the preceding probabilities and make the synthesis process robust (i.e., less sensitive) to variations in process variables around set values, are derived from the fitted models using Monte Carlo simulations. Cadmium selenide has been found to exhibit one-dimensional morphologies of nanowires, nanobelts, and nanosaws, often with the three morphologies intimately intermingled within the as-deposited material. A slight change in growth conditions can result in a totally different morphology. To identify the optimal process conditions, a large number of trials were conducted under varying process conditions. Here, the response is a vector whose elements correspond to the number of appearances of different types of nanostructures. The fitted statistical models enable nanomanufacturers to identify the probability of transition from one nanostructure to another when changes, even tiny ones, are made in one or more process variables. Inferential methods associated with the modeling procedure help in judging the relative impact of the process variables and their interactions on the growth of different nanostructures. Owing to the presence of internal noise, that is, variation around the set value, each predictor variable is a random variable. Using Monte Carlo simulations, the mean and variance of the transformed probabilities are expressed as functions of the set points of the predictor variables. The mean is then maximized to find the optimum nominal values of the process variables, subject to the constraint that the variance is kept under control.
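The Monte Carlo step can be illustrated with a toy version (the logistic model, noise level, and grid below are hypothetical stand-ins, not the paper's fitted multinomial model): for each candidate set point, simulate internal noise around it, estimate the mean and variance of the predicted probability, and pick the set point with the highest mean subject to a variance cap.

```python
import numpy as np

def prob(x):
    # hypothetical fitted model: probability of the target morphology
    b0, b1, b2 = -1.0, 2.0, -1.5
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x[..., 0] + b2 * x[..., 1])))

def mc_mean_var(setpoint, sigma, n=5000, rng=None):
    # internal noise: actual process variables scatter around the set point
    rng = rng or np.random.default_rng(0)
    x = setpoint + sigma * rng.standard_normal((n, 2))
    p = prob(x)
    return p.mean(), p.var()

def robust_setpoint(grid, sigma, var_cap):
    # maximize the Monte Carlo mean subject to the variance constraint
    best, best_mean = None, -np.inf
    for s in grid:
        m, v = mc_mean_var(np.asarray(s, dtype=float), sigma)
        if v <= var_cap and m > best_mean:
            best, best_mean = s, m
    return best, best_mean
```

Flat regions of the probability surface naturally have low variance under noise, which is exactly why this mean-maximization-with-variance-control formulation yields robust settings.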

Collaboration

Dive into C. F. Jeff Wu's collaborations.

Top Co-Authors

V. Roshan Joseph (Georgia Institute of Technology)
Peter Z. G. Qian (University of Wisconsin-Madison)
Simon Mak (Georgia Institute of Technology)
Rui Tuo (Chinese Academy of Sciences)
Godfried Augenbroe (Georgia Institute of Technology)
Matthias H. Y. Tan (City University of Hong Kong)
Chih-Li Sung (Georgia Institute of Technology)
Shiang-Ting Yeh (Georgia Institute of Technology)
Vigor Yang (Georgia Institute of Technology)