Vijaykumar A. Topkar
George Mason University
Publications
Featured research published by Vijaykumar A. Topkar.
International Symposium on Microarchitecture | 1992
Ophir Frieder; Vijaykumar A. Topkar; Ramesh K. Karne; Arun K. Sood
Using Intel's iPSC/2 hypercube, the authors measured the relationship between packet size, method of clustering messages, and internode traffic on the total sustained communication bandwidth. Having measured the costs associated with internode communication, they then analyzed duplicate removal algorithms. They also studied the effects of nonuniformly distributed attribute values and tuples across processors on three proposed duplicate removal algorithms. They chose algorithms to represent the several available in the literature and then evaluated the output collection time. The authors present a brief overview of the iPSC/2's hypercube message-passing system and discuss the results of their experimentation and analysis.
Parallel Computing | 1991
Vijaykumar A. Topkar; Ophir Frieder; Arun K. Sood
The Duplicate Removal Problem (DRP) appears in a number of applications such as protocol verification, database operations and image processing. Although numerous parallel sorting algorithms have been proposed, DRP has received relatively little attention. In this paper we propose and study three parallel duplicate removal algorithms. The algorithms are implemented and evaluated on an Intel iPSC/2 hypercube. We assume that all the data are resident in the main memory and do not consider the I/O access times. The results indicate that the performance of a parallel duplicate removal algorithm is a function of the system and data conditions, viz. the number of nodes, the number of data values, the uniqueness factors, and the processing and data transfer speeds. The results suggest a method of selecting an optimum algorithm based on the data and system conditions. To interpret and scale the results of the experiments, we developed analytical models of the algorithms. Those models compare favorably to the results obtained experimentally. Finally, the average computational complexity of each of the three algorithms is presented.
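As an illustrative sketch only (not the authors' iPSC/2 algorithms), hash-partitioned duplicate removal shows the general idea behind parallel DRP: partitioning guarantees that all copies of a value land on the same node, so local removal suffices and the collected partitions are disjoint. The function name and node count here are hypothetical.

```python
def remove_duplicates_parallel(tuples, num_nodes=4):
    # Partition: hashing sends identical values to the same node,
    # so no duplicate can span two partitions.
    partitions = [[] for _ in range(num_nodes)]
    for t in tuples:
        partitions[hash(t) % num_nodes].append(t)
    # Local duplicate removal on each "node" (done sequentially here).
    deduped = [set(p) for p in partitions]
    # Output collection: partitions are disjoint, so a simple union suffices.
    result = []
    for p in deduped:
        result.extend(p)
    return sorted(result)
```

The skew of the hash distribution and the number of nodes mirror the system and data conditions the paper identifies as determining which algorithm performs best.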
Signal Processing | 1991
Vijaykumar A. Topkar; S. K. Mullick; Edward L. Titlebaum
Linear and non-linear coordinate transformations of the t-ω plane may be used to shift around the areas of high energy concentration of a Wigner Distribution (WD). However, it is important that such transformations are invariant with respect to the WD, i.e., that the transformed WDs are WD-realizable. This paper derives the conditions under which such transformations are invariant. Both linear and nonlinear transformations are considered, and special cases of rational spectra are discussed. The results form an analytical basis for such transformations.
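For context, the standard definition of the Wigner Distribution of a signal x(t) is (normalization conventions vary by author):

```latex
W_x(t,\omega) = \int_{-\infty}^{\infty}
  x\!\left(t+\tfrac{\tau}{2}\right)\,
  x^{*}\!\left(t-\tfrac{\tau}{2}\right)\,
  e^{-j\omega\tau}\, d\tau
```

A well-known instance of the linear case is that an area-preserving (unimodular) linear map of the t-ω plane sends realizable WDs to realizable WDs:

```latex
\begin{pmatrix} t' \\ \omega' \end{pmatrix}
= \begin{pmatrix} a & b \\ c & d \end{pmatrix}
  \begin{pmatrix} t \\ \omega \end{pmatrix},
\qquad ad - bc = 1 .
```

This unimodularity condition is only the classical linear special case; the paper's derivations cover more general (including nonlinear) transformations.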
Signal Processing | 1992
Vijaykumar A. Topkar; Arun K. Sood
Noise and clutter cause major problems in scale-space methods because they involve taking higher derivatives. Statistical analysis of the noise as it is reflected in the scale-space representation, and of its effect on any algorithm based on this representation, is an interesting topic of research. Scale-space representation involves convolving the input with a smoothing filter (typically Gaussian) at different resolutions and detecting the primitives (typically the zero crossings of the second derivative) to form the scale-space representation. Statistical analysis of the scale-space is nontrivial for two reasons: (i) it involves a nonlinear operation, namely the detection of zero crossings, and (ii) the noise at different scales is correlated. In this paper we prove theorems which give the probabilities of zero crossings in the output in the presence of noise. The theorems are then applied to the case of Gaussian smoothing. These probabilities can be used for a number of applications such as performance analysis.
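A minimal 1-D sketch of the representation being analyzed (Gaussian smoothing followed by zero-crossing detection of the second derivative); the kernel radius and the discrete second difference are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Sampled Gaussian smoothing kernel, normalized to unit sum.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def zero_crossings_of_second_derivative(signal, sigma):
    # Smooth the input, take a discrete second derivative,
    # and mark the sign changes (the scale-space primitives).
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    d2 = np.diff(smoothed, 2)
    return np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
```

Repeating this for a range of sigma values yields the scale-space representation; adding noise to `signal` perturbs and creates zero crossings, which is exactly the effect the paper's theorems quantify.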
Archive | 1992
Vijaykumar A. Topkar; Bradley Pryor Kjell; Arun K. Sood
Scale-space representation is a topic of active research in computer vision. Several researchers have studied the behavior of signals in the scale-space domain and how a signal can be reconstructed from its scale-space. However, not much work has been done on the signal detection problem, i.e. detecting the presence or absence of signal models from a given scale-space representation. In this paper we propose a model-based object detection algorithm for separating the objects from the background in the scale-space domain. There are a number of unresolved issues, some of which are discussed here. The algorithm is used to detect an infrared image of a tank in a noisy background. The performance of a multiscale approach is compared with that of a single scale approach by using a synthetic image and adding controlled amounts of noise. A synthetic image of randomly placed blobs of different sizes is used as the clean image. Two classes of noisy images are considered. The first class is obtained by adding clutter (i.e. colored noise) and the second class by adding an equivalent amount of white noise. The multiscale and single scale algorithms are applied to detect the blobs, and performance indices such as number of detections, number of false alarms, delocalization errors etc. are computed. The results indicate that (i) the multiscale approach is better than the single scale approach and (ii) the degradation in performance is greater with clutter than with white noise.
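A hedged 1-D analogue of the multiscale/single-scale comparison (the function names, the Laplacian-of-Gaussian detector, and the pooling rule are illustrative, not the paper's exact algorithm): a multiscale detector pools scale-normalized responses over several scales, while a single-scale detector is the special case of one sigma.

```python
import numpy as np

def log_response(signal, sigma):
    # Scale-normalized 1-D Laplacian-of-Gaussian response.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    log = (x**2 / sigma**2 - 1) / sigma**2 * g
    # sigma**2 normalization lets responses be compared across scales.
    return np.convolve(signal, sigma**2 * log, mode="same")

def multiscale_detect(signal, sigmas=(1.0, 2.0, 4.0)):
    # Keep the strongest-magnitude response across scales at each point;
    # blobs of different sizes are then detected at their matching scale.
    responses = np.stack([log_response(signal, s) for s in sigmas])
    return np.abs(responses).max(axis=0)
```

Because blobs of different sizes peak at different sigmas, pooling over scales detects all of them, which is one intuition for why the multiscale approach outperforms any single scale on images of randomly sized blobs.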
Applications of Artificial Intelligence VIII | 1990
Vijaykumar A. Topkar; Bradley Pryor Kjell; Arun K. Sood
Abstract not available.
Journal of Mathematical Imaging and Vision | 1995
Vijaykumar A. Topkar; Arun K. Sood
Scale-space description of an image refers to the descriptions of the same image at different resolutions. One popular scale-space representation, the topic of this research, is obtained by passing an image through a bank of smoothing filters (each tuned to a different scale) and detecting the zero crossings (z.c.s) of the second derivative (Laplacian) of the outputs. Any z.c. based algorithm will be affected by the corruption of the z.c.s due to the input noise. Statistical analysis of the z.c.s is used to determine the effect of scale change on the different types of z.c.s. We also study the computational and performance trade-offs involved in choosing scale. Statistical analysis of the z.c.s is nontrivial. The two main reasons are: (i) detection of z.c.s is a nonlinear operation, and (ii) the noise at different scales is correlated. In this paper we identify different types of z.c.s and compute the densities of their occurrence in the presence of white and colored noise. The formulae for computing the z.c. densities are applicable to any smoothing filter. The special case of Gaussian smoothing filters is investigated in depth. It is demonstrated how the a priori knowledge about the input image is reflected in the performance of an algorithm which uses the z.c.s as the primitives. Other applications of the statistical analysis of z.c.s include design of an optimum smoothing filter, performance analysis and active multiscaling. It is quantitatively demonstrated how the performance of a scale-space system improves as more and more a priori knowledge about the scene is incorporated.
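For context, the classical starting point for such zero-crossing densities is Rice's formula: for a stationary, zero-mean Gaussian process with autocorrelation function R(τ), the expected rate of zero crossings per unit length is

```latex
\lambda_0 \;=\; \frac{1}{\pi}\,\sqrt{\frac{-R''(0)}{R(0)}} .
```

Smoothing the noise with a filter of scale σ changes R(τ) and hence λ₀; the paper's contribution goes beyond this standard result by distinguishing types of z.c.s and handling the correlation across scales.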
Computer Vision and Pattern Recognition | 1991
Vijaykumar A. Topkar; Arun K. Sood; Bradley Pryor Kjell
The authors propose four scale-space object detection algorithms for separating objects from the background. These algorithms do not need thresholding at any of the scales. The different algorithms are applicable to images with different noise and clutter characteristics. Statistical analysis of the four algorithms is conducted for noisy and cluttered backgrounds.
Applications of Artificial Intelligence IX | 1991
Bradley Pryor Kjell; Arun K. Sood; Vijaykumar A. Topkar
In many applications, such as remote sensing or target detection, the target objects are small, compact blobs. In the images discussed in this paper these objects are only 6 or fewer pixels across, and the images contain noise and clutter that are similar in appearance to the targets. Since so few pixels comprise an object, the object shape is uncertain, so common shape features are unreliable. To distinguish targets from clutter, features which make use of scale-space have proven useful. The scale-space of an image is a sequence of Gauss-filtered versions of the image, using increasing scales from one image to the next. Experiments show that scale-space features outperform object features calculated at a single scale. Various moments of the value of the Laplacian at the centroid of a blob were particularly effective for some targets.
CVGIP: Image Understanding | 1994
Vijaykumar A. Topkar; Arun K. Sood; Bradley Pryor Kjell