
Publication


Featured research published by Vijayan N. Nair.


Technometrics | 1988

Estimation of reliability in field-performance studies

J. D. Kalbfleisch; J. F. Lawless; Vijayan N. Nair; Jeffrey A. Robinson

Likelihood-based methods are developed for the analysis of field-performance studies with particular attention centered on the estimation of regression coefficients in parametric models. Failure-record data are those in which the time to failure and the regressor variables are observed only for those items that fail in some prespecified follow-up or warranty period (0, T]. It is noted that for satisfactory inference about baseline failure rates or regression effects it is usually necessary to supplement the failure-record data either by incorporating specific prior information about x or by taking a supplementary sample of items that survive T. General methods are outlined and specific formulas for various likelihood-based methods are obtained when the failure-time model is exponential or Weibull. In these models the methods are compared with respect to asymptotic efficiency of estimation. Several extensions to more complicated sampling plans are considered.
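In the simplest setting the abstract mentions, exponential failure times with failure-record data supplemented by a count of survivors, the likelihood-based estimate has a closed form. The sketch below is only an illustration of that special case, not the paper's general formulation; the true rate, sample size, and warranty period T are invented for the simulation.

```python
# Censored-data MLE for an exponential rate: with n_f observed failure times
# t_i in (0, T] and n_s items known to survive past T, the MLE is
# n_f / (sum(t_i) + n_s * T). All numbers below are illustrative.
import random

def exponential_rate_mle(failure_times, n_survivors, T):
    return len(failure_times) / (sum(failure_times) + n_survivors * T)

# Simulated field study: true rate 0.5, warranty period T = 1.
rng = random.Random(11)
T, rate = 1.0, 0.5
times = [rng.expovariate(rate) for _ in range(50000)]
failures = [t for t in times if t <= T]              # failure-record data
n_survivors = sum(t > T for t in times)              # supplementary sample
rate_hat = exponential_rate_mle(failures, n_survivors, T)
```

With 50,000 simulated items the estimate recovers the true rate closely; using only the failure times and ignoring the survivors would badly overestimate it.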


IEEE Transactions on Semiconductor Manufacturing | 1997

Model-free estimation of defect clustering in integrated circuit fabrication

David J. Friedman; Mark Hansen; Vijayan N. Nair; David A. James

This paper describes a model-free method for estimating some yield metrics that are used to track integrated circuit fabrication processes. Our method uses binary probe test data at the wafer level to estimate the size, shape and location of large-area defects or clusters of defective chips. Unlike previous methods in the yield modeling literature, our approach makes extensive use of the location of failing chips to directly identify clusters. An important by-product of this analysis is a decomposition of wafer yield that attributes defective chips to either large- or small-area defects. Simulation studies show that our procedure is superior to the time-honored windowing technique for achieving a similar breakdown. In addition, by directly estimating defect clusters, we can provide engineers with a greater understanding of the manufacturing process. It has been our experience that routine identification of the spatial signatures of clustered defects and associated root-cause analysis is a cost-effective approach to yield and process improvement.


Technometrics | 2004

Selective Assembly in Manufacturing: Statistical Issues and Optimal Binning Strategies

David Mease; Vijayan N. Nair; Agus Sudjianto

Selective assembly is a cost-effective approach for reducing the overall variation and thus improving the quality of an assembled product. In this process, components of a mating pair are measured and grouped into several classes (bins) as they are manufactured. The final product is assembled by selecting the components of each pair from appropriate bins to meet the required specifications as closely as possible. This approach is often less costly than tolerance design using tighter specifications on individual components. It leads to high-quality assembly using relatively inexpensive components. In this article we describe the statistical formulation of the problem and develop optimal binning strategies under several loss functions and distributional assumptions. Optimal schemes under absolute and squared error loss are studied in detail. The results are compared with two commonly used heuristic schemes. We consider situations in which only one component of the mating pair is binned, as well as cases in which both components are binned.
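As a rough illustration of why binning reduces variation, the sketch below compares random assembly against a simple equal-probability binning of two standard-normal mating components under squared error loss on the clearance X - Y. The bin count, distributions, and within-bin pairing rule are illustrative assumptions, not the article's optimal scheme.

```python
# Toy selective-assembly simulation: sort each component sample into k
# equal-probability bins and pair components only within matching bins.
# Loss per assembly is the squared clearance (x - y)**2.
import random

def assembly_loss(k_bins, n=20000, seed=7):
    rng = random.Random(seed)
    xs = sorted(rng.gauss(0, 1) for _ in range(n))
    ys = sorted(rng.gauss(0, 1) for _ in range(n))
    size = n // k_bins
    loss = 0.0
    for b in range(k_bins):
        xb = xs[b * size:(b + 1) * size]
        yb = ys[b * size:(b + 1) * size]
        rng.shuffle(yb)          # random pairing *within* each bin
        loss += sum((x - y) ** 2 for x, y in zip(xb, yb))
    return loss / (size * k_bins)

random_assembly = assembly_loss(1)   # one bin = no selection
selective = assembly_loss(6)         # six equal-probability bins
```

For two independent standard-normal components, random assembly gives an expected squared clearance of 2; binning shrinks it by roughly an order of magnitude because pairs can only differ by the within-bin spread.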


Archive | 2006

Network tomography: A review and recent developments

Earl Lawrence; George Michailidis; Vijayan N. Nair; Bowei Xi

The modeling and analysis of computer communications networks give rise to a variety of interesting statistical problems. This paper focuses on network tomography, a term used to characterize two classes of large-scale inverse problems. The first deals with passive tomography where aggregate data are collected at the individual router/node level and the goal is to recover path-level information. The main problem of interest here is the estimation of the origin-destination traffic matrix. The second, referred to as active tomography, deals with reconstructing link-level information from end-to-end path-level measurements obtained by actively probing the network. The primary application in this case is estimation of quality-of-service parameters such as loss rates and delay distributions. The paper provides a review of the statistical issues and developments in network tomography with an emphasis on active tomography. An application to Internet telephony is used to illustrate the results.


Technometrics | 1997

Monitoring wafer map data from integrated circuit fabrication processes for spatially clustered defects

Mark Hansen; Vijayan N. Nair; David J. Friedman

Quality control in integrated circuit (IC) fabrication has traditionally been based on overall summary data such as lot or wafer yield. These measures are adequate if the defective ICs are distributed randomly both within and across wafers in a lot. In practice, however, the defects often occur in clusters or display other systematic patterns. In general, these spatially clustered defects have assignable causes that can be traced to individual machines or to a series of process steps that did not meet specified requirements. In this article, we develop methods for routinely monitoring probe test data at the wafer map level to detect the presence of spatial clustering. The statistical properties of a family of monitoring statistics are developed under various null and alternative situations of interest, and the resulting methodology is applied to manufacturing data.
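The core idea, that two wafers with identical yield can carry very different spatial evidence, can be sketched with a join-count style statistic on a binary wafer map. The statistic, grid size, and example maps below are illustrative, not the family of monitoring statistics developed in the article.

```python
# Join-count style clustering statistic: the number of horizontally or
# vertically adjacent pairs of defective (1) chips on a binary wafer map.
def adjacent_defect_pairs(wafer):
    """Count adjacent defective pairs; higher values suggest clustering."""
    rows, cols = len(wafer), len(wafer[0])
    count = 0
    for i in range(rows):
        for j in range(cols):
            if wafer[i][j] == 1:
                if j + 1 < cols and wafer[i][j + 1] == 1:
                    count += 1
                if i + 1 < rows and wafer[i + 1][j] == 1:
                    count += 1
    return count

# Same yield (4 defective chips out of 16), very different spatial pattern:
scattered = [[1, 0, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 0],
             [1, 0, 0, 1]]
clustered = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]
```

Overall yield is blind to the difference, while the join-count statistic separates the two maps (0 adjacent pairs for the scattered map versus 4 for the clustered one).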


Journal of Quality Technology | 2002

Analysis of functional responses from robust design studies

Vijayan N. Nair; Winson Taam; Kenny Ye

Robust design studies with functional responses are becoming increasingly common. The goal in these studies is to analyze location and dispersion effects and optimize performance over a range of input-output values. Taguchi and others have proposed the so-called signal-to-noise ratio analysis for robust design with dynamic characteristics. We consider more general and flexible methods for analyzing location and dispersion effects from such studies and use three real applications to illustrate the methods. Two applications demonstrate the usefulness of functional regression techniques for location and dispersion analysis while the third illustrates a parametric analysis with two-stage modeling. Both a mean-variance analysis for random selection of noise settings and a control-by-noise interaction analysis for explicitly controlled noise factors are considered.


Journal of Quality Technology | 1997

Graphical methods for robust design with dynamic characteristics

Mahesh Lunani; Vijayan N. Nair; Gary S. Wasserman

There has been considerable interest recently in the application of parameter design methodology to make a system's performance robust over a wide range of input conditions. This has been referred to as robust design with dynamic characteristics. In thi…


Technometrics | 1998

On the efficiency and robustness of discrete proportional-integral control schemes

Fugee Tsung; Huaiqing Wu; Vijayan N. Nair

Feedback control schemes have been widely used in process industries for many years. They are also increasingly being used in the discrete-parts manufacturing industry in recent years. Proportional-integral (PI) schemes are especially popular, primarily because of their simple structure and ease of implementation. This article studies the efficiency and robustness properties of discrete PI schemes under some commonly encountered situations. For process disturbance, we consider the stationary ARMA (1, 1) model and the nonstationary ARIMA (1, 1, 1) model. Process dynamics is studied under a first-order dynamic model, including the special case of pure gain. The efficiency of PI schemes is compared with that of minimum mean squared error (MMSE) schemes under these models. The PI schemes are seen to be quite efficient over a broad range of the parameter space. Furthermore, the PI schemes are much more robust than MMSE schemes to model misspecifications, especially the presence of first-order nonstationarity. Th…
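The basic discrete PI adjustment rule can be sketched for the pure-gain special case the abstract mentions, here under a sustained step disturbance. The gains and disturbance sequence are illustrative choices, not settings from the article.

```python
# Discrete PI rule for a pure-gain process: after each observation, the
# applied adjustment changes by -(kP * current deviation + kI * accumulated
# deviation). Gains kP and kI below are illustrative.
def pi_control(disturbances, kP=0.4, kI=0.1):
    adjustment = 0.0   # total control action currently applied
    integral = 0.0     # running sum of observed deviations
    deviations = []
    for d in disturbances:
        y = d + adjustment          # observed deviation from target 0
        deviations.append(y)
        integral += y
        adjustment -= kP * y + kI * integral
    return deviations

# A sustained step disturbance of size 5: with no control the deviation
# stays at 5 forever, while the integral term drives it back toward zero.
errors = pi_control([5.0] * 40)
```

The first observation shows the full step of 5; the closed loop then oscillates with decaying amplitude, and the deviation is essentially removed well before the 40th step.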


Journal of the American Statistical Association | 2006

Estimating network loss rates using active tomography

Bowei Xi; George Michailidis; Vijayan N. Nair

Active network tomography refers to an interesting class of large-scale inverse problems that arise in estimating the quality of service parameters of computer and communications networks. This article focuses on estimation of loss rates of the internal links of a network using end-to-end measurements of nodes located on the periphery. A class of flexible experiments for actively probing the network is introduced, and conditions under which all of the link-level information is estimable are obtained. Maximum likelihood estimation using the EM algorithm, the structure of the algorithm, and the properties of the maximum likelihood estimators are investigated. This includes simulation studies using the ns (network simulator) to obtain realistic network traffic. The optimal design of probing experiments is also studied. Finally, application of the results to network monitoring is briefly illustrated.
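For the smallest nontrivial topology, a root feeding two leaf receivers through one shared link, the link-level pass rates are identifiable from end-to-end probe outcomes via simple moment identities. This is only a sketch of the identifiability idea; the article's treatment of general trees uses maximum likelihood with the EM algorithm, and the pass rates and probe count below are invented for the simulation.

```python
# Two-leaf tree: probes cross a shared link (pass rate a0), then branch to
# leaf 1 (a1) and leaf 2 (a2). With p1 = a0*a1, p2 = a0*a2, p12 = a0*a1*a2,
# the rates are recovered as a0 = p1*p2/p12, a1 = p12/p2, a2 = p12/p1.
import random

def estimate_two_leaf(alpha0, alpha1, alpha2, n=200000, seed=3):
    rng = random.Random(seed)
    n1 = n2 = n12 = 0
    for _ in range(n):
        shared = rng.random() < alpha0          # probe crosses shared link
        r1 = shared and rng.random() < alpha1   # probe reaches leaf 1
        r2 = shared and rng.random() < alpha2   # probe reaches leaf 2
        n1 += r1
        n2 += r2
        n12 += r1 and r2
    p1, p2, p12 = n1 / n, n2 / n, n12 / n
    return p1 * p2 / p12, p12 / p2, p12 / p1

a0_hat, a1_hat, a2_hat = estimate_two_leaf(0.95, 0.9, 0.8)
```

Note that the shared link's loss rate is never observed directly; it is inferred from the correlation between the two end-to-end outcomes, which is the essence of active tomography.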


Technometrics | 2001

Methods for identifying dispersion effects in unreplicated factorial experiments: A critical analysis and proposed strategies

William A. Brenneman; Vijayan N. Nair

There has been considerable interest recently in the use of statistically designed experiments to identify both location and dispersion effects for quality improvement. Analysis of dispersion effects usually requires replications that can be expensive or time consuming. Several recent articles have considered identification of both location and dispersion effects from unreplicated fractional factorial experiments. In this article, we provide a systematic study of various methods that are commonly used or have been proposed recently. Both theoretical and simulation results are used to characterize their properties. Although all methods suffer from some degree of bias, for some the bias remains large even as the design run size increases to infinity. Based on these analyses, we propose some iterative strategies for model selection and estimation of the dispersion effects. Both a real example and simulations are used to illustrate the results.

Collaboration


Dive into Vijayan N. Nair's collaboration.

Top Co-Authors

Earl Lawrence
Los Alamos National Laboratory

David Mease
San Jose State University

Cheryl Wiese
Group Health Cooperative