
Publication


Featured research published by Jagdish S. Rustagi.


Optimization Techniques in Statistics | 1994

Linear Programming Techniques

Jagdish S. Rustagi

The optimization of objective functions under general inequality constraints is needed in many applications, especially in business and industry. In statistics, optimization under inequality constraints occurs in several situations; many problems in regression analysis, for example, involve constrained optimization with inequality constraints. Such problems of optimization under constraints are usually studied in mathematical programming. Solutions to optimization problems with inequality constraints usually do not have closed forms; hence, numerical procedures are devised to solve them. Mathematical programming techniques are also applied to many other areas, such as economics, operations research, engineering, and industry. The area of optimization in operations research is understood to comprise all mathematical programming techniques. This chapter discusses the optimization techniques of stochastic approximation, dynamic programming, variational methods, and simulation methods in addition to those of mathematical programming.
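As a small illustration of the kind of inequality-constrained problem this chapter treats, the sketch below solves a linear program with scipy.optimize.linprog. The objective and constraints are invented for illustration, not taken from the chapter.

```python
# A minimal linear-programming sketch, assuming an invented problem:
# maximize 2*x1 + 3*x2 subject to x1 + x2 <= 10, -x1 + 2*x2 <= 4,
# and x1, x2 >= 0.  linprog minimizes, so the objective is negated.
import numpy as np
from scipy.optimize import linprog

c = np.array([-2.0, -3.0])                 # negated objective coefficients
A_ub = np.array([[1.0, 1.0],               # x1 + x2 <= 10
                 [-1.0, 2.0]])             # -x1 + 2*x2 <= 4
b_ub = np.array([10.0, 4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                     # optimal point and maximum value
```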


Optimization Techniques in Statistics | 1994

Nonlinear Programming Methods

Jagdish S. Rustagi

This chapter discusses some nonlinear programming methods. Many applications in practice require a nonlinear function to be optimized under linear and nonlinear constraints; such optimization problems belong to the area of nonlinear programming. A special case of nonlinear programming is linear programming, where the objective function is linear and the constraints are given in terms of linear inequalities. The field of mathematical programming is concerned with optimizing an objective function subject to inequality constraints; when the inequalities are replaced by equations, this reduces to the classical optimization problem of constrained minima. Such problems with equality constraints are solved by the method of Lagrange multipliers. In special cases of nonlinear programming problems, such as quadratic or convex programming, the uniqueness of the solution is established under an appropriate formulation. When there are no constraints, nonlinear optimization is accomplished through numerical or classical methods.
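To make the formulation concrete, here is a hedged sketch of a nonlinear program solved with SciPy's SLSQP method; the objective and constraint are invented for illustration.

```python
# A minimal nonlinear-programming sketch, assuming an invented problem:
# find the point on the disk x1^2 + x2^2 <= 4 closest to (1, 2.5).
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# SLSQP takes inequality constraints in the form g(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(objective, x0=np.zeros(2), method="SLSQP",
               constraints=constraints)
print(res.x)   # the constrained minimizer, on the boundary of the disk
```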


Optimization Techniques in Statistics | 1994

Optimization in Simulation

Jagdish S. Rustagi

Mathematical models play an important role in the description and analysis of data. In the study of biological, physical, and social systems, complex mathematical models arise, and an understanding of these models is often possible only through computer simulation. The development of a model for a given system requires the available knowledge of the system and a specification of the objectives of the study; the system is usually modeled in mathematical language. The performance of the proposed model is studied through simulations, and the actual information available from the system under study is used to validate it. Optimization techniques are required throughout: in specifying the model, in estimating its parameters, and then in validating it. Simulation is mathematical experimentation on the computer, so the usual rules of good experimental design apply to the design of simulation experiments. Optimization is needed not only to minimize computer time but also to design optimal experiments.
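The sketch below illustrates one such use of optimization in simulation: fitting a model parameter by matching simulated output to observed data, a crude simulated method of moments. The model, data, and loss are invented for illustration.

```python
# A simulation-fitting sketch, assuming an invented exponential model:
# pick the scale parameter whose simulated mean best matches the data.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
observed = rng.exponential(scale=2.0, size=500)  # stand-in "field" data

# Common random numbers: fixing the uniforms makes the loss smooth in the
# parameter, a standard trick in simulation optimization.
u = rng.random(500)

def loss(scale):
    simulated = -scale * np.log(u)               # exponential via inversion
    return (simulated.mean() - observed.mean()) ** 2

res = minimize_scalar(loss, bounds=(0.1, 10.0), method="bounded")
print(res.x)                                     # should land near 2
```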


Optimization Techniques in Statistics | 1994

Numerical Methods of Optimization

Jagdish S. Rustagi

This chapter discusses the numerical methods of optimization. Solutions to many optimization problems are not obtainable in closed form, and numerical solutions are needed. In statistical applications, such as censored data analysis of reliability and survival models, numerical procedures become a necessity. Easy access to numerical optimization procedures through electronic computers has recently become available, and such procedures are now extensively used in business, industry, and government. Statistical procedures using multivariate methods require involved computations and are especially suited to numerical treatment. Important areas of statistics, such as robust procedures, jackknifing methods, and bootstrap techniques, depend heavily on numerical analysis. The chapter discusses the Newton–Raphson, Gauss–Newton, and gradient methods, and presents some direct search techniques. It also focuses on the alternating conditional expectation algorithm, which has been developed to fit general regression models using transformations.
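As a concrete instance of the Newton–Raphson method the chapter covers, the sketch below solves a likelihood score equation that has no closed-form root; the Cauchy example and data are invented for illustration.

```python
# Newton-Raphson for the MLE of a Cauchy location parameter, assuming
# invented data.  The score equation has no closed-form solution.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_cauchy(200) + 3.0         # sample with true location 3

def score(theta):                          # derivative of the log-likelihood
    r = x - theta
    return np.sum(2.0 * r / (1.0 + r ** 2))

def score_prime(theta):                    # derivative of the score
    r = x - theta
    return np.sum(2.0 * (r ** 2 - 1.0) / (1.0 + r ** 2) ** 2)

theta = np.median(x)                       # robust starting value
for _ in range(50):                        # Newton-Raphson iterations
    step = score(theta) / score_prime(theta)
    theta -= step
    if abs(step) < 1e-10:
        break
print(theta)                               # should be close to 3
```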


Computational Statistics & Data Analysis | 1991

Trimmed jackknife kernel estimate for the probability density function

Jagdish S. Rustagi; Walfredo R. Javier; Jose S. Victoria

Jackknife procedures have recently been used in estimating the probability density function. To robustify these procedures, a trimmed jackknife procedure, using the trimmed average of the pseudovalues, has been proposed for kernel density estimates. The proposed procedure not only provides estimates with reduced bias but also leads to consistent and asymptotically normal estimates. The technique is applied to data on breast cancer tumor volumes obtained from the Cancer Registry of The Ohio State University Hospital.
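A hedged sketch of the construction described in the abstract: leave-one-out pseudovalues of a Gaussian kernel estimate, averaged with trimming. The standard jackknife pseudovalue formula, the bandwidth, and the trimming fraction are assumptions here, not taken from the paper.

```python
# Trimmed jackknife kernel density estimate, assuming the standard jackknife
# pseudovalue n*f_n(x) - (n-1)*f_{n-1,-i}(x) and invented data/bandwidth.
import numpy as np
from scipy.stats import norm, trim_mean

def kde(x, data, h):
    """Gaussian kernel density estimate at the point x."""
    return norm.pdf((x - data) / h).mean() / h

def trimmed_jackknife_kde(x, data, h, trim=0.1):
    n = len(data)
    full = kde(x, data, h)
    pseudo = np.array([n * full - (n - 1) * kde(x, np.delete(data, i), h)
                       for i in range(n)])   # leave-one-out pseudovalues
    return trim_mean(pseudo, trim)           # trimmed average of pseudovalues

rng = np.random.default_rng(2)
sample = rng.normal(size=100)
print(trimmed_jackknife_kde(0.0, sample, h=0.4))  # near phi(0) = 0.399
```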


Communications in Statistics - Simulation and Computation | 1978

Optimization in statistics: an overview

Jagdish S. Rustagi

This note presents a brief overview of the applications of optimization techniques in statistics. Optimization techniques are broadly classified as classical, variational, numerical, and mathematical programming. A brief discussion of their applications to statistics is given, and a few references are provided for illustrative purposes.


Optimization Techniques in Statistics | 1994

Dynamic Programming Methods

Jagdish S. Rustagi

Dynamic programming deals with a procedure of optimization that solves problems concerned with a sequence of interrelated decisions. Ordinarily, in a dynamic programming problem, one optimizes an objective function that simplifies to the sum of objective functions depending on the individual decisions and situations in the sequence of decisions. The essence of the dynamic programming technique is that the current situation is transformed into a new situation through a decision, leading to a recursive relation. The technique was developed by Richard Bellman in the early 1950s; the method of backward induction, as used in solving statistical problems in sequential analysis, may be regarded as a precursor of dynamic programming. The technique has been found useful in solving problems in control theory, sequential decision theory, and the theory of adaptive processes. Optimization problems in various other applications in business also reduce to those of dynamic programming, and numerical answers to difficult optimization problems are sometimes possible through its use.
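The recursive relation at the heart of the technique is easiest to see in a small backward-induction example; the optimal stopping problem below, with N iid Uniform(0,1) offers, is invented for illustration.

```python
# Backward induction for an invented optimal stopping problem: at each of N
# stages, accept the current Uniform(0,1) offer or continue.  The recursion
# V_k = E[max(X, V_{k+1})] = (1 + V_{k+1}^2) / 2 links successive stages.
N = 10
v = 0.5                       # one offer left: must accept it, E[X] = 1/2
values = [v]
for _ in range(N - 1):        # work backwards through the stages
    v = (1.0 + v ** 2) / 2.0  # E[max(X, v)] for X ~ Uniform(0, 1)
    values.append(v)
print(v)                      # expected payoff with all N offers remaining
# Optimal rule: at each stage, accept the offer if and only if it exceeds
# the value of continuing, i.e. the previous entry in `values`.
```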


Communications in Statistics - Simulation and Computation | 1987

A Monte Carlo study of the effect of grouping on Cox's regression model

Chinying J. Wang; Jagdish S. Rustagi

Cox's discrete logistic model was extended to the study of the life table by Thompson (1977) to handle grouped survival data. Inferences about the effect of grouping are studied by Monte Carlo methods. The results show that the effect of grouping is not substantial. The approach is applied to grouped data on liver cancer. The computer program developed for grouped censored data with continuous and indicator covariates is of practical importance and is available from The Ohio State University.
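A rough Monte Carlo sketch in the spirit of the study: simulate survival times whose hazard depends on a binary covariate, estimate the log hazard ratio from exact and from interval-grouped times, and compare. The paper fits Cox's discrete logistic model; the crude two-sample exponential estimator below is only a stand-in, and all parameters are invented.

```python
# Monte Carlo comparison of exact vs. grouped survival data, assuming an
# exponential model with hazard exp(beta*z) and a crude two-sample MLE as
# a stand-in for the paper's discrete logistic fit.
import numpy as np

rng = np.random.default_rng(3)
beta_true, n, width, reps = 0.7, 200, 0.25, 500

def log_hazard_ratio(times, z):
    # Two-sample exponential MLE: rate_j = events_j / total time in group j.
    r1 = (z == 1).sum() / times[z == 1].sum()
    r0 = (z == 0).sum() / times[z == 0].sum()
    return np.log(r1 / r0)

exact_est, grouped_est = [], []
for _ in range(reps):
    z = rng.integers(0, 2, size=n)                     # binary covariate
    t = rng.exponential(scale=np.exp(-beta_true * z))  # hazard exp(beta*z)
    tg = width * (np.ceil(t / width) - 0.5)            # interval midpoints
    exact_est.append(log_hazard_ratio(t, z))
    grouped_est.append(log_hazard_ratio(tg, z))

# Both averages should sit near 0.7, with only mild attenuation from
# grouping, consistent with the paper's conclusion.
print(np.mean(exact_est), np.mean(grouped_est))
```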


Optimization Techniques in Statistics | 1994

Optimization and Inequalities

Jagdish S. Rustagi

This chapter explains how inequalities play an important role in statistics. Many results, such as the Cramér–Rao inequality, which provides the lower bound for the variance of any unbiased estimate of a parameter, are extremely important in theoretical statistics. Some of these inequalities are derived from classical inequalities in mathematics, and inequalities can often be derived from the application of optimization methods: the most universal weapon for the discovery and proof of inequalities is the general theory of maxima and minima of functions of any number of variables. In multivariate statistical problems and many other similar situations, matrices are involved, for example, through quadratic forms.
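As a numerical illustration of the Cramér–Rao inequality mentioned above: for a Normal(mu, sigma^2) sample with sigma known, the bound on the variance of an unbiased estimator of mu is sigma^2 / n, and the sample mean attains it. The parameters below are invented.

```python
# Empirical check of the Cramer-Rao lower bound for estimating a normal
# mean with known variance, assuming invented parameters.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, reps = 1.0, 2.0, 50, 20000

means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(means.var())       # empirical variance of the sample mean
print(sigma ** 2 / n)    # Cramer-Rao bound: 1/(n*I(mu)), I(mu) = 1/sigma^2
```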


Optimization Techniques in Statistics | 1994

Optimization in Function Spaces

Jagdish S. Rustagi

This chapter discusses the basic concepts of functional analysis. The general theory of optimization forms an important area of functional analysis, and many results in statistical theory require justification for the existence of optima in fairly general abstract spaces. For example, the method of obtaining maximum likelihood estimates for probability density functions needs optimization over a space of functions that forms a subset of a Hilbert space; similarly, the solution of moment problems may need optimization over the class of cumulative distribution functions, which may form a subset of a Banach space. Many problems in statistics need optimization of special types of functionals. For example, problems of finding optima of integrals involving unknown functions have been studied through the calculus of variations. The natural generalization of Euclidean space is to metric spaces, where the concept of distance is generalized to that of a metric; further generalization is made to topological spaces, where the concept of neighborhood is defined in terms of open sets.
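A finite-dimensional sketch of optimization over a function space, under invented choices: minimize the integral over [0,1] of u'(x)^2/2 - u(x), with u(0) = u(1) = 0, by discretizing u on a grid. The Euler-Lagrange equation gives -u'' = 1, so the exact minimizer is u(x) = x(1-x)/2.

```python
# Discretized calculus-of-variations sketch: minimize a functional over
# grid values of u, with boundary conditions u(0) = u(1) = 0.
import numpy as np
from scipy.optimize import minimize

m = 49                                    # number of interior grid points
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)

def J(u):
    full = np.concatenate(([0.0], u, [0.0]))   # impose boundary conditions
    du = np.diff(full) / h                     # forward-difference u'(x)
    return h * np.sum(du ** 2 / 2.0) - h * np.sum(u)

res = minimize(J, np.zeros(m), method="L-BFGS-B")
print(np.max(np.abs(res.x - x * (1.0 - x) / 2.0)))  # small numerical error
```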

Collaboration


Dive into Jagdish S. Rustagi's collaborations.

Top Co-Authors


Jose S. Victoria

University of the Philippines


Walfredo R. Javier

University of the Philippines


Abhaya Indrayan

University College of Medical Sciences
