Rajan Srinivasan
University of Twente
Publications
Featured research published by Rajan Srinivasan.
Signal Processing | 1998
Rajan Srinivasan
The estimation of rare event probabilities using importance sampling (IS) is studied in this paper. A new method is suggested in the context of i.i.d. sums that makes full use of the form of the component density function of the sum. It is proved that this results in an estimation technique that enhances the power of IS biasing methods which do not explicitly use this knowledge. The method is suitable for finite sums. Optimization of the scheme and its performance are illustrated through examples and asymptotic analysis, and new asymptotic expansions for tail probabilities are given. This technique facilitates solution of the inverse IS problem, which is one of finding a number or threshold that is exceeded by a random variable or sum with a given probability. It turns out that with a suitably decreasing threshold the simulation gain becomes asymptotically constant. These methods are then applied to CFAR detection algorithms, and new results for a censored ordered-statistic cell-averaging CFAR detector are obtained.
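To make the setting concrete, the following is a minimal Python sketch of the standard exponential-tilting baseline that methods of this kind build on; it is not the specific scheme of the paper. It assumes i.i.d. Exp(1) components and tilts the component density so that the mean of the biased sum sits at the threshold; the exact Gamma tail is printed for comparison.

# Minimal sketch (not the scheme of this paper): exponential tilting for
# estimating P(S > T), where S = X_1 + ... + X_n with X_i ~ Exp(1).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
n, T, runs = 10, 40.0, 20_000

theta = 1.0 - n / T                       # tilt chosen so E[S] under the biased density equals T
rate = 1.0 - theta                        # tilted components are Exp(rate)
x = rng.exponential(scale=1.0 / rate, size=(runs, n))
s = x.sum(axis=1)
w = (1.0 - theta) ** (-n) * np.exp(-theta * s)   # likelihood ratio of the whole sum
est = np.mean((s > T) * w)

print(est, gamma.sf(T, a=n))              # IS estimate vs exact tail probability, S ~ Gamma(n, 1)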
Signal Processing | 1998
Rajan Srinivasan
Sums of random variables appear frequently in several areas of the pure and applied sciences. When the variables are independent, the sum density is the convolution of the individual density functions, and convolution is almost always computationally intensive. We examine here the point estimation of i.i.d. sum densities and introduce the idea of an importance sampling convolver. This motivates an approximate analytical representation of the sum density that is easily computed. The representation involves a single convolution and is applicable to individual densities whose moment generating functions exist. Convergence to normality of the asymptotic form of the approximate density is established. The corresponding distribution approximations in the finite and asymptotic cases are also given. One well-known application of practical value is considered in detail to demonstrate use of the approximation and establish its closeness to optimized simulation results. The key finding in this paper is that importance sampling can, in certain situations, lead to approximate formulae.
ieee international radar conference | 2006
Laura Anitori; Rajan Srinivasan; Muralidhar Rangaswamy
Importance sampling (IS) techniques are applied to space-time adaptive processing (STAP) radar detection algorithms for performance characterization via fast estimation of false alarm probabilities (FAPs). The work here builds on and extends the initial thrust in this area provided in a recent paper. The theory of the normalized matched filter (NMF) and normalized adaptive MF (NAMF) detectors is briefly discussed and a new variant, the envelope-law NAMF detector, is presented. New IS biasing techniques, using rotation of data vectors and two-dimensional biasing, are proposed and used for fast simulation. The envelope (or E-NAMF) detector is also analyzed and its detection probability is compared with that of the NAMF detector for fluctuating and non-fluctuating targets in homogeneous and nonhomogeneous backgrounds.
1998 International Conference on Applications of Photonic Technology III: Closing the Gap between Theory, Development, and Applications | 1998
David Remondo; Rajan Srinivasan; Victor F. Nicola; Wim van Etten; H.E.P. Tattje
In this paper we investigate the performance degradation in a wavelength division multiplexing network due to crosstalk in optical cross-connects. Worst-case analysis is carried out, including in-band crosstalk components. In contrast with the approximate methods in the literature, all beat-noise terms are included, and both input signal hypotheses are considered. The results are obtained by using appropriate importance sampling strategies. The optimization of importance sampling parameters is carried out with new adaptive techniques based on stochastic Newton recursions, combined with a novel technique called the g-method. Accurate performance measures for practical system parameter values are obtained in short simulation run-times. Infinite and finite extinction ratios are considered. The results indicate that the detection threshold has a strong impact on the system performance. The importance sampling techniques are also useful for the optimization of this system parameter.
Archive | 2002
Rajan Srinivasan
We describe here a generic approach to CFAR processing, introduced in [74], which results in detection algorithms that tend to be robust to inhomogeneities in radar clutter. Termed ensemble (or E-CFAR) detection, it combines members of the family of known CFAR processors. It is simple and easy to implement. While finding the most robust algorithms is still an open problem, the concept allows the synthesis of a large number of candidate algorithms that can be tested for their properties.
Archive | 2002
Rajan Srinivasan
The accurate estimation of probabilities of rare events through fast simulation is a primary concern of importance sampling. Rare events are almost always defined on the tails of probability density functions. They have small probabilities and occur infrequently in real applications or in a simulation. This makes it difficult to generate them in sufficiently large numbers that statistically significant conclusions may be drawn. However, these events can be made to occur more often by deliberately introducing changes in the probability distributions that govern their behavior. Results obtained from such simulations are then altered to compensate for or undo the effects of these changes. In this chapter the concept of IS is motivated by examining the estimation of tail probabilities. It is a problem frequently encountered in applications and forms a good starting point for the study of IS theory.
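The core idea described here can be illustrated with a minimal Python sketch; it is not taken from the book and assumes the simplest possible setting, a standard Gaussian tail probability estimated with a mean-shifted biasing density, where the likelihood-ratio weight undoes the deliberate change of distribution.

# Minimal sketch (illustrative only): IS estimate of P(X > t), X ~ N(0, 1),
# drawing from the biasing density N(t, 1) and reweighting by p(x)/p*(x).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def is_tail_prob(t, n=10_000, shift=None):
    shift = t if shift is None else shift          # centre the biased density at the threshold
    x = rng.normal(loc=shift, scale=1.0, size=n)   # draw from the biasing density p*
    w = norm.pdf(x) / norm.pdf(x, loc=shift)       # likelihood ratio p(x)/p*(x)
    return np.mean((x > t) * w)                    # unbiased IS estimate

t = 4.0
print(is_tail_prob(t), norm.sf(t))                 # IS estimate vs exact tail probability

With a well-chosen shift, a few thousand biased samples give an accurate estimate of a probability that ordinary Monte Carlo would almost never observe at this sample size.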
Archive | 2002
Rajan Srinivasan
Several practical applications involve functions of many random variables. In systems driven or perturbed by stochastic inputs, sums of random variables frequently appear as quantities of importance. For example, they play a central role in most estimation operations in signal processing applications. In this chapter we apply IS concepts to sums of i.i.d. random variables. Apart from the usual biasing techniques, a method referred to as the g-method (Srinivasan [70]) will be described. While not a form of biasing, it exploits knowledge of the common distribution function of the single variable to enhance the performance of any biasing technique. The g-method has a powerful feature, namely differentiability of the IS estimate, which permits solution of the inverse IS problem. This problem is one of finding, through simulation, a number that is exceeded by a sum of random variables with a specified (tail) probability. It is of great importance in applications, for example, in the determination of thresholds for radar and sonar detectors, and parameter optimization in communication systems. All these systems are designed to operate with specific performance probabilities. A solution to the inverse IS problem is obtained by minimizing a suitable objective function.
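A crude version of the inverse IS problem can be sketched in Python as follows; this is not the g-method (which yields a differentiable estimate), only an assumed toy setting with Gaussian components in which one set of biased samples is reused across thresholds and the monotone IS estimate is bisected to hit the target probability.

# Minimal sketch (assumptions: i.i.d. N(0, 1) components; not the g-method):
# find t such that P(S > t) = alpha for S = X_1 + ... + X_n.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, alpha, runs = 16, 1e-4, 50_000

shift = norm.isf(alpha) / np.sqrt(n)                # rough guess: shift each component mean
x = rng.normal(loc=shift, scale=1.0, size=(runs, n))
s = x.sum(axis=1)                                   # biased sums
logw = norm.logpdf(x).sum(1) - norm.logpdf(x, loc=shift).sum(1)
w = np.exp(logw)                                    # likelihood ratios

def tail_est(t):
    return np.mean((s > t) * w)                     # IS estimate of P(S > t), reusing the samples

lo, hi = 0.0, 10 * np.sqrt(n)
for _ in range(60):                                 # bisection on the monotone estimate
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tail_est(mid) > alpha else (lo, mid)

print(0.5 * (lo + hi), np.sqrt(n) * norm.isf(alpha))   # estimated vs exact threshold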
Archive | 2002
Rajan Srinivasan
A large part of IS research has been concerned with the search for good simulation densities, or those that approach the optimal. Most of the suggested biasing schemes in use are motivated by the requirement that for tail probability estimation, the biasing density should effect an increase in the event probability as compared to the original density. In the previous chapter we introduced the problem of estimating the tail probability of a random variable with given density function. In most cases this probability can be either found analytically or evaluated accurately using numerical integration. The real power of IS lies in its ability to precisely estimate rare event probabilities involving a random variable that is a function of several other random variables. Such situations frequently arise in applications, and examples of functions include i.i.d. and non-i.i.d. sums, and other transformations encountered in communications and nonlinear signal processing. The usual approach to finding good biasing densities involves the selection of a family (or class) of density functions indexed by one or more parameters. The form of the representative density is chosen based on its ability to effect an increase in the event probability for an appropriate choice of the indexing parameters. Thus, once this choice is made, the rest of the IS problem is concerned with determining optimal parameter values. Biasing density families can be obtained directly as a result of transformations imposed on the original random variables or on their density functions. This is the method most often used in practice. Alternatively, densities can be chosen that are not apparently related to the original but which have the desired properties. The latter approach has not received much attention and we shall not deal with it here. Another approach to IS is concerned directly with the search for the optimal biasing density. This search is carried out adaptively and it has been studied in some detail, mainly in the application area of reliability. In this chapter we describe some of the available biasing methods, including those that are commonly used in applications. The single random variable case is treated here. The development is carried out by means of several illustrative examples.
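One common parametric family, mean-translation biasing, can be sketched in Python as follows; the example and the helper names are illustrative assumptions, not material from the book. The translation parameter is tuned by minimizing an estimate of the IS estimator's second moment, computed by reweighting a single fixed pilot sample.

# Minimal sketch (illustrative only): mean-translation family q_mu = N(mu, 1)
# for P(X > t), X ~ N(0, 1). The second moment m2(mu) = E[1{X>t} p(X)^2 / q_mu(X)]
# is estimated by reweighting one pilot sample drawn from r = N(t, 1).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
t, n_pilot, n_final = 5.0, 20_000, 100_000

x_pilot = rng.normal(loc=t, scale=1.0, size=n_pilot)          # fixed pilot sample from r

def second_moment(mu):
    num = norm.pdf(x_pilot) ** 2                              # p(x)^2
    den = norm.pdf(x_pilot, loc=mu) * norm.pdf(x_pilot, loc=t)  # q_mu(x) * r(x)
    return np.mean((x_pilot > t) * num / den)

res = minimize_scalar(second_moment, bounds=(0.0, 2 * t), method="bounded")
mu_opt = res.x                                                # theory suggests mu close to t

x = rng.normal(loc=mu_opt, scale=1.0, size=n_final)           # final run with the tuned density
w = norm.pdf(x) / norm.pdf(x, loc=mu_opt)
print(mu_opt, np.mean((x > t) * w), norm.sf(t))               # tuned mu, IS estimate, exact value

Keeping the pilot sample fixed makes the second-moment estimate a deterministic function of the parameter, so an off-the-shelf scalar minimizer can be used for the search.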
global communications conference | 1998
David Remondo; Rajan Srinivasan; Victor F. Nicola; van Wim Etten; H.E.P. Tattje
In this paper new adaptive importance sampling techniques are applied to the performance evaluation and parameter optimization of a wavelength division multiplexing (WDM) network impaired by crosstalk in an optical cross-connect. Worst-case analysis is carried out including all the beat-noise terms originating from in-band crosstalk. Both input signal hypotheses are considered. The accurate bit-error-rate estimates, which are obtained in short run-times, indicate that the influence of crosstalk is much lower than that predicted by previous analyses. This finding has a strong impact on the design of WDM networks. In addition, a method is used to optimize the detection threshold, which significantly improves system performance. The presented techniques also allow us to determine the power penalty due to the introduction of additional WDM channels.
IEEE Transactions on Communications | 2004
Rajan Srinivasan; George Tiba