Santosh S. Venkatesh
University of Pennsylvania
Publications
Featured research published by Santosh S. Venkatesh.
Optical Engineering | 1984
Demetri Psaltis; Eung Gi Paek; Santosh S. Venkatesh
The use of a binary, magneto-optic spatial light modulator as an input device and as a spatial filter in a VanderLugt correlator is investigated. The statistics of the correlation that is obtained when the input image or the spatial filter is thresholded are estimated. Optical correlation using the magneto-optic device at the input and Fourier planes of a VanderLugt correlator is demonstrated experimentally.
IEEE Transactions on Information Theory | 1998
Sanjeev R. Kulkarni; Gábor Lugosi; Santosh S. Venkatesh
Classical and recent results in statistical pattern recognition and learning theory are reviewed in a two-class pattern classification setting. This basic model best illustrates intuition and analysis techniques while still containing the essential features and serving as a prototype for many applications. Topics discussed include nearest neighbor, kernel, and histogram methods, Vapnik-Chervonenkis theory, and neural networks. The presentation and the large (though nonexhaustive) list of references is geared to provide a useful overview of this field for both specialists and nonspecialists.
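Of the methods the survey covers, the nearest neighbor rule is the simplest to state: label a query with the class of the closest reference point. A minimal sketch (generic 1-NN, with invented cluster data, not code from the paper):

```python
import numpy as np

def nearest_neighbor_classify(train_x, train_y, query):
    """Return the label of the training point closest to the query."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(dists)]

# Two well-separated Gaussian clusters in the plane.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(20, 2))
train_x = np.vstack([class_a, class_b])
train_y = np.array([0] * 20 + [1] * 20)

print(nearest_neighbor_classify(train_x, train_y, np.array([0.1, -0.2])))  # 0
print(nearest_neighbor_classify(train_x, train_y, np.array([2.9, 3.1])))  # 1
```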
IEEE Transactions on Information Theory | 1989
Santosh S. Venkatesh; Demetri Psaltis
A model of associative memory incorporating global linearity and pointwise nonlinearities in a state space of n-dimensional binary vectors is considered. Attention is focused on the ability to store a prescribed set of state vectors as attractors within the model. Within the framework of such associative nets, a specific strategy for information storage that utilizes the spectrum of a linear operator is considered in some detail. Comparisons are made between this spectral strategy and a prior scheme that utilizes the sum of Kronecker outer products of the prescribed set of state vectors, which are to function nominally as memories. The storage capacity of the spectral strategy is linear in n (the dimension of the state space under consideration), whereas an asymptotic result of n/4 log n holds for the storage capacity of the outer product scheme. Computer-simulated results show that the spectral strategy stores information more efficiently. The preprocessing costs incurred in the two algorithms are estimated, and recursive strategies are developed for their computation.
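The outer-product scheme the paper compares against can be illustrated with a minimal sketch (generic Hopfield-style storage and threshold recall; the dimensions, seed, and function names here are invented for the example, not taken from the paper):

```python
import numpy as np

def outer_product_weights(memories):
    """Sum-of-Kronecker-outer-products weight matrix, zero diagonal."""
    n = memories.shape[1]
    W = memories.T @ memories / n   # scale is irrelevant after thresholding
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=5):
    """Synchronous threshold dynamics: s <- sign(W s)."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(1)
memories = rng.choice([-1, 1], size=(3, 64))   # 3 random +/-1 memories, n = 64
W = outer_product_weights(memories)

noisy = memories[0].copy()
noisy[:5] *= -1                                # corrupt 5 of the 64 bits
recovered = recall(W, noisy)
print(np.array_equal(recovered, memories[0]))
```

With only 3 memories in 64 dimensions, well under the n/4 log n capacity, the stored vectors are stable attractors and the corrupted probe is pulled back to the nearest memory.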
IEEE Transactions on Communications | 2000
Selaka B. Bulumulla; Saleem A. Kassam; Santosh S. Venkatesh
We derive the maximum a posteriori probability (MAP) receiver for orthogonal frequency-division multiplexed signals in a fading channel. As the complexity of the MAP receiver is high, we obtain a low-complexity, suboptimal receiver and evaluate its performance.
international symposium on information theory | 1993
Demetri Psaltis; Robert R. Snapp; Santosh S. Venkatesh
The finite sample performance of a nearest neighbor classifier is analyzed for a two-class pattern recognition problem. An exact integral expression is derived for the m-sample risk R_m given that a reference m-sample of labeled points is available to the classifier. The statistical setup assumes that the pattern classes arise in nature with fixed a priori probabilities and that points representing the classes are drawn from Euclidean n-space according to fixed class-conditional probability distributions. The sample is assumed to consist of m independently generated class-labeled points. For a family of smooth class-conditional distributions characterized by asymptotic expansions in general form, it is shown that the m-sample risk R_m has a complete asymptotic series expansion R_m ~ R_∞ + Σ_{k=2}^∞ c_k m^{-k/n} (m → ∞), where R_∞ denotes the nearest neighbor risk in the infinite-sample limit and the coefficients c_k are distribution-dependent constants independent of the sample size m. The analysis thus provides further analytic validation of Bellman's curse of dimensionality. Numerical simulations corroborating the formal results are included, and extensions of the theory are discussed. The analysis also contains a novel application of Laplace's asymptotic method of integration to a multidimensional integral whose integrand attains its maximum on a continuum of points.
international conference on communications | 1998
S.B. Bulumulla; Saleem A. Kassam; Santosh S. Venkatesh
Interest in OFDM has been renewed by the standardization of OFDM for digital audio broadcasting in Europe. In this paper, we consider an adaptive, diversity receiver for OFDM signals in a Rayleigh fading channel. The diversity receiver has L branches, each receiving the signal over one of L independently fading diversity channels. We model the fading process as a vector auto-regressive process and use the Kalman filter to obtain the MMSE optimum channel estimates for each branch. The channel estimates and the signals are combined using the maximal ratio combining rule to obtain the decision variables. We analyze the performance of this receiver and provide a numerical example to highlight the advantage of using diversity.
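The combining step can be sketched in isolation. The fragment below shows maximal ratio combining for a single BPSK symbol over L branches, assuming the channel gains are already estimated (the Kalman estimation stage is omitted, and the noise level, seed, and function names are invented for the example):

```python
import numpy as np

def mrc_decision(received, channel_est):
    """Maximal ratio combining: weight each branch by the conjugate of its
    channel estimate, sum the branches, and slice the BPSK symbol."""
    z = np.sum(np.conj(channel_est) * received)
    return 1 if z.real >= 0 else -1

rng = np.random.default_rng(2)
L = 4                                    # diversity branches
symbol = -1                              # transmitted BPSK symbol
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)  # Rayleigh gains
noise = 0.1 * (rng.normal(size=L) + 1j * rng.normal(size=L))
r = h * symbol + noise                   # per-branch received samples

print(mrc_decision(r, h))
```

Conjugate weighting co-phases the branches, so the signal terms add coherently while the noise adds incoherently; this is what yields the diversity gain analyzed in the paper.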
IEEE ACM Transactions on Networking | 2012
Sanjeev Khanna; Santosh S. Venkatesh; Omid Fatemieh; Fariba Khan; Carl A. Gunter
Denial-of-service (DoS) attacks are considered within the province of a shared channel model in which attack rates may be large but are bounded and client request rates vary within fixed bounds. In this setting, it is shown that clients can adapt effectively to an attack by increasing their request rate based on timeout windows to estimate attack rates. The server will be able to process client requests with high probability while pruning out most of the attack by selective random sampling. The protocol introduced here, called Adaptive Selective Verification (ASV), is shown to use bandwidth efficiently and does not require any server state or assumptions about network congestion. The main results of the paper are a formulation of optimal performance and a proof that ASV is optimal.
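The server-side half of the mechanism can be sketched on its own. The fragment below shows only the selective random sampling idea, with a client that inflates its request rate to survive the cut; the full ASV protocol adds timeout-based estimation of the attack rate, and all numbers and names here are invented for illustration:

```python
import random

def selective_sample(requests, budget, rng):
    """Server side: keep a uniform random sample of at most `budget`
    requests from the window; the rest are dropped unexamined."""
    if len(requests) <= budget:
        return list(requests)
    return rng.sample(requests, budget)

rng = random.Random(3)
attack = ["attack"] * 10_000     # bounded but large attack rate
client = ["client"] * 300        # client retransmits to raise its share
window = attack + client

kept = selective_sample(window, budget=500, rng=rng)
print(len(kept), kept.count("client"))
```

Because sampling is uniform, a client holding even a modest fraction of the window gets at least one request through with high probability, while the server's processing load stays capped at the budget, stateless and independent of the attack volume.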
Journal of Computer and System Sciences | 1993
Santosh S. Venkatesh
Learning real weights for a McCulloch-Pitts neuron is equivalent to linear programming and can hence be done in polynomial time. Efficient local learning algorithms such as Perceptron Learning further guarantee convergence in finite time. The problem becomes considerably harder, however, when it is sought to learn binary weights; this is equivalent to integer programming which is known to be NP-complete. A family of probabilistic algorithms which learn binary weights for a McCulloch-Pitts neuron with inputs constrained to be binary is proposed here, the target functions being majority functions of a set of literals. These algorithms have low computational demands and are essentially local in character. Rapid (average-case) quadratic rates of convergence for the algorithm are predicted analytically and confirmed through computer simulations when the number of examples is within capacity. It is also shown that, for the functions under consideration, Perceptron Learning converges rapidly (but to an, in general, non-binary solution weight vector).
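The real-weight baseline mentioned in the closing sentence is easy to demonstrate: classical Perceptron Learning run on examples of a majority-of-literals target quickly finds a consistent (generally non-binary) weight vector. A minimal sketch, not the paper's probabilistic binary-weight algorithms; the dimensions, sample size, and seed are invented:

```python
import numpy as np

def perceptron_train(X, y, epochs=50):
    """Classical perceptron updates; on linearly separable data this
    converges to a (generally non-binary) separating weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:   # misclassified: nudge w toward t*x
                w += t * x
                errors += 1
        if errors == 0:               # consistent on the whole sample
            break
    return w

rng = np.random.default_rng(4)
n = 9                                  # odd, so a majority vote never ties
target = rng.choice([-1, 1], size=n)   # hidden +/-1 weights (the literals)
X = rng.choice([-1, 1], size=(200, n))
y = np.sign(X @ target).astype(int)    # majority function of the literals

w = perceptron_train(X, y)
print(np.all(np.sign(X @ w) == y))     # True: consistent on the sample
```

The learned w separates the sample but its entries drift off the +/-1 lattice, which is exactly the gap the binary-weight algorithms in the paper are designed to close.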
Journal of Complexity | 1991
Santosh S. Venkatesh; Pierre Baldi
The focus of the paper is the estimation of the maximum number of states that can be made stable in higher-order extensions of neural network models. Each higher-order neuron in a network of n elements is modeled as a polynomial threshold element of degree d. It is shown that regardless of the manner of operation, or the algorithm used, the storage capacity of the higher-order network is of the order of one bit per interaction weight. In particular, the maximal (algorithm-independent) storage capacity realizable in a recurrent network of n higher-order neurons of degree d is of the order of n^d/d!. A generalization of a spectral algorithm for information storage is introduced and arguments adducing near optimal capacity for the algorithm are presented.
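The order of the capacity bound follows from counting interaction weights: at one bit per weight, the capacity scales with the number of degree-d interaction terms. A sketch of the count (an order-of-magnitude reading of the abstract, not the paper's derivation):

```latex
% Number of degree-d interaction weights per neuron among n elements:
\binom{n}{d} \;=\; \frac{n(n-1)\cdots(n-d+1)}{d!} \;\sim\; \frac{n^{d}}{d!}
\qquad (n \to \infty),
% so ``one bit per interaction weight'' yields a storage capacity of
% order n^d/d! states for the recurrent degree-d network.
```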
conference on learning theory | 1991
Santosh S. Venkatesh
We investigate algorithms for learning binary weights from examples of majority functions of a set of literals. In particular, given a set of (randomly drawn) input-output pairs, with inputs being binary ±1 vectors, and the outputs likewise being ±1 classifications, we seek to find a vector of binary (±1) weights for a linear threshold element (or formal neuron) which provides a linearly separable hypothesis consistent on the set of examples. We present three algorithms (Directed Drift, Harmonic Update, and Majority Rule) for learning binary weights in this context, and examine their characteristics. In particular, we formally define a distribution-dependent notion of algorithmic capacity (which is related to the distribution-free notion of the VC dimension) and provide estimates of the capacity of the proposed algorithms.