Nasser M. Nasrabadi
State University of New York System
Publications
Featured research published by Nasser M. Nasrabadi.
IEEE Transactions on Communications | 1988
Nasser M. Nasrabadi; Robert A. King
A review of vector quantization techniques used for encoding digital images is presented. First, the concept of vector quantization is introduced; then its application to digital images is explained. Spatial, predictive, transform, hybrid, binary, and subband vector quantizers are reviewed. The emphasis is on the usefulness of vector quantization when it is combined with conventional image coding techniques, or when it is used in different domains.
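The core operation the review builds on, mapping each input vector to its nearest codevector and reconstructing from the transmitted index, can be sketched as follows (toy data; the names and the 4-entry codebook are illustrative, not from the paper):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each input vector to the index of its nearest codevector
    (squared Euclidean distance), the core step of a memoryless VQ."""
    # distances has shape (n_blocks, n_codevectors)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct each block from its transmitted codevector index."""
    return codebook[indices]

# Toy example: 2-D vectors quantized with a 4-entry codebook.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
blocks = np.array([[0.1, 0.1], [0.9, 0.2], [0.2, 0.8]])
idx = vq_encode(blocks, codebook)
print(idx.tolist())  # [0, 1, 2]
```

Only the indices are transmitted; the rate is log2(codebook size) bits per vector.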
IEEE Transactions on Neural Networks | 1992
Nasser M. Nasrabadi; Chang Y. Choo
An optimization approach is used to solve the correspondence problem for a set of features extracted from a pair of stereo images. A cost function is defined to represent the constraints on the solution, which is then mapped onto a two-dimensional Hopfield neural network for minimization. Each neuron in the network represents a possible match between a feature in the left image and one in the right image. Correspondence is achieved by initializing (exciting) each neuron that represents a possible match and then allowing the network to settle down into a stable state. The network uses the initial inputs and the compatibility measures between the matched points to find a stable state.
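A minimal sketch of the matching network described above, assuming scalar feature descriptors, a uniqueness-penalty weight of 2, and a firing threshold of -0.5 (all illustrative values not taken from the paper):

```python
import numpy as np

# Toy features: three left and three right features with scalar descriptors.
left = np.array([0.0, 1.0, 2.0])
right = np.array([0.05, 1.1, 1.95])
n = len(left)

# Neuron (i, j) represents the hypothesis "left feature i matches right j";
# compatibility is high (near zero) when the descriptors are similar.
compat = -np.abs(left[:, None] - right[None, :])

V = np.ones((n, n))   # excite every candidate match initially
A = 2.0               # uniqueness-penalty weight (an assumed value)

for _ in range(50):   # asynchronous updates until a stable state
    changed = False
    for i in range(n):
        for j in range(n):
            # Net input: similarity reward minus penalties from competing
            # active matches in the same row or column (each feature
            # may participate in at most one match).
            u = (compat[i, j]
                 - A * (V[i].sum() - V[i, j])
                 - A * (V[:, j].sum() - V[i, j]))
            v = 1.0 if u > -0.5 else 0.0   # assumed firing threshold
            if v != V[i, j]:
                V[i, j] = v
                changed = True
    if not changed:
        break

print(V.astype(int))  # a permutation-like stable matching
```

On this toy input the network settles into the one-to-one matching along the diagonal.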
IEEE Transactions on Communications | 1990
Nasser M. Nasrabadi; Yushu Feng
A novel vector quantization scheme, called the address-vector quantizer (A-VQ), is proposed. It is based on exploiting the interblock correlation by encoding a group of blocks together using an address-codebook. The address-codebook consists of a set of address-codevectors where each codevector represents a combination of addresses (indexes). Each element of this codevector is an address of an entry in the LBG-codebook, representing a vector quantized block. The address-codebook consists of two regions: one is the active (addressable) region, and the other is the inactive (nonaddressable) region. During the encoding process the codevectors in the address-codebook are reordered adaptively in order to bring the most probable address-codevectors into the active region. When encoding an address-codevector, the active region of the address-codebook is checked, and if such an address combination exists its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The quality (SNR value) of the images encoded by the proposed A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two compared with a memoryless vector quantizer.
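The A-VQ bookkeeping can be sketched as below. The function names, the tuple representation of address-codevectors, and the move-to-front reordering policy are assumptions for illustration; the paper's adaptive reordering is described only as bringing probable combinations into the active region.

```python
# Each address-codevector is a tuple of LBG-codebook indices for a group of
# blocks; only the first `active` entries are addressable with one index.

def avq_encode(group, address_book, active=4):
    """Return (kind, payload) and adaptively reorder the address-codebook."""
    if group in address_book[:active]:
        i = address_book.index(group)
        # Move-to-front keeps probable combinations in the active region.
        address_book.insert(0, address_book.pop(i))
        return ("address", i)
    # Fallback: transmit each block's LBG index individually, and promote
    # the combination so it becomes addressable next time.
    if group in address_book:
        address_book.insert(0, address_book.pop(address_book.index(group)))
    else:
        address_book.insert(0, group)
        address_book.pop()  # keep the address-codebook size fixed
    return ("individual", group)

book = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11)]
print(avq_encode((2, 3), book, active=4))  # ('address', 1): active-region hit
print(avq_encode((8, 9), book, active=4))  # ('individual', (8, 9)): miss
```

A hit costs one address-codebook index; a miss costs one LBG index per block plus the signalling overhead.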
IEEE Transactions on Communications | 1994
Nasser M. Nasrabadi; Chang Y. Choo; Yushu Feng
A vector quantization (VQ) scheme with finite memory called dynamic finite-state vector quantization (DFSVQ) is presented. The encoder consists of a large codebook, the so-called super-codebook, where for each input vector a fixed number of its codevectors are chosen to generate a much smaller codebook (sub-codebook). This sub-codebook represents the best matching codevectors that could be found in the super-codebook for encoding the current input vector. The choice of codevectors in the sub-codebook is based on the information obtained from the previously encoded blocks, where directional conditional block probability (histogram) matrices are used in the selection of the codevectors. The index of the best matching codevector in the sub-codebook is transmitted to the receiver. An adaptive DFSVQ scheme is also proposed in which, when encoding an input vector, first the sub-codebook is searched for a matching codevector to satisfy a pre-specified waveform distortion. If such a codevector is not found in the current sub-codebook then the whole super-codebook is checked for a better match. If a better match is found then a signaling flag along with the corresponding index of the codevector is transmitted to the receiver. Both the DFSVQ encoder and its adaptive version are implemented. Experimental results for several monochrome images with a super-codebook size of 256 or 512 and different sub-codebook sizes are presented.
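One adaptive DFSVQ encoding step can be sketched as follows. Here the candidate indices that the paper derives from directional conditional probability matrices are simply supplied as an argument, an assumption made to keep the sketch short:

```python
import numpy as np

def dfsvq_encode(x, super_cb, candidate_idx, dmax):
    """Adaptive DFSVQ step: search the sub-codebook first; escape to the
    full super-codebook (with a signalling flag) only when the best
    sub-codebook match exceeds the distortion threshold dmax."""
    sub = super_cb[candidate_idx]
    d = ((sub - x) ** 2).sum(axis=1)
    j = int(d.argmin())
    if d[j] <= dmax:
        return False, j                    # flag off: sub-codebook index
    d_all = ((super_cb - x) ** 2).sum(axis=1)
    return True, int(d_all.argmin())       # flag on: super-codebook index

super_cb = np.array([[0.0], [1.0], [2.0], [3.0]])
# Sub-codebook drawn from super-codebook entries 0 and 1 only.
print(dfsvq_encode(np.array([0.9]), super_cb, [0, 1], dmax=0.05))  # (False, 1)
print(dfsvq_encode(np.array([2.9]), super_cb, [0, 1], dmax=0.05))  # (True, 3)
```

The sub-codebook index is shorter than a super-codebook index, which is where the rate saving comes from when the prediction of likely codevectors is good.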
Optical Engineering | 1993
Nader Mohsenian; Syed A. Rizvi; Nasser M. Nasrabadi
A new predictive vector quantization (PVQ) technique capable of exploiting the nonlinear dependencies, in addition to the linear dependencies, that exist between adjacent blocks (vectors) of pixels is introduced. The two components of the PVQ scheme, the vector predictor and the vector quantizer, are implemented by two different classes of neural networks. A multilayer perceptron is used for the predictive component and Kohonen self-organizing feature maps are used to design the codebook for the vector quantizer. The multilayer perceptron uses the nonlinearity associated with its processing units to perform a nonlinear vector prediction. The second component of the PVQ scheme vector quantizes the residual vector that is formed by subtracting the output of the perceptron from the original input vector. The joint-optimization task of designing the two components of the PVQ scheme is also achieved. Simulation results are presented for still images with high visual quality.
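The predict-subtract-quantize loop of a PVQ encoder can be sketched as below. A trivial copy-last-block predictor stands in for the paper's multilayer perceptron, and the residual codebook is an illustrative 4-entry table, both assumptions for brevity:

```python
import numpy as np

def predict(prev_recon):
    """Stand-in predictor: copy the previously reconstructed block.
    (The paper uses a trained multilayer perceptron here.)"""
    return prev_recon

def pvq_step(x, prev_recon, residual_cb):
    """One PVQ step: predict, quantize the residual, reconstruct."""
    pred = predict(prev_recon)
    r = x - pred                                   # residual vector
    j = int(((residual_cb - r) ** 2).sum(axis=1).argmin())
    recon = pred + residual_cb[j]                  # decoder-side reconstruction
    return j, recon

residual_cb = np.array([[0.0, 0.0], [0.5, 0.0], [-0.5, 0.0], [0.0, 0.5]])
prev = np.array([1.0, 1.0])
j, recon = pvq_step(np.array([1.4, 1.1]), prev, residual_cb)
print(j, recon)  # 1 [1.5 1. ]
```

Because the decoder can form the same prediction from already-decoded blocks, only the residual index `j` needs to be transmitted.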
Neural Networks | 1988
Nasser M. Nasrabadi; Yushu Feng
A neural-network clustering algorithm proposed by T. Kohonen (1986, 1988) is used to design a codebook for the vector quantization of images. This neural-network clustering algorithm, better known as the Kohonen self-organizing feature map, is a two-dimensional set of extensively interconnected nodes or processing units. The synaptic strengths between the input and the output nodes represent the centroids of the clusters after the network has been adapted to the input patterns. Input vectors are presented one at a time, and the weights connecting the input signals to the neurons are adaptively updated such that the point density function of the weights tends to approximate the probability density function of the input vectors. Results are presented for a number of coded images using the codebook designed by the self-organizing feature maps. The results are compared with coded images when the codebook is designed by the Linde-Buzo-Gray algorithm.
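The codebook-design procedure can be sketched with a small one-dimensional map (the paper uses a two-dimensional map; the map size, learning-rate schedule, and Gaussian neighbourhood width below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: image-block stand-ins drawn from two tight clusters.
data = np.vstack([rng.normal(0.0, 0.05, (100, 2)),
                  rng.normal(1.0, 0.05, (100, 2))])

# A 1-D map of 4 units; after training, the weights serve as the codebook.
W = rng.uniform(0.0, 1.0, (4, 2))

samples = rng.permutation(data)
for t, x in enumerate(samples):
    frac = t / len(samples)
    lr = 0.5 * (1.0 - frac)              # decaying learning rate
    sigma = 2.0 * (1.0 - frac) + 0.1     # shrinking neighbourhood width
    winner = int(((W - x) ** 2).sum(axis=1).argmin())
    for k in range(len(W)):
        # Gaussian neighbourhood: units near the winner on the map
        # are pulled toward the input as well.
        h = np.exp(-((k - winner) ** 2) / (2.0 * sigma ** 2))
        W[k] += lr * h * (x - W[k])
```

After training, the rows of `W` are used exactly like an LBG-designed codebook: each image block is encoded as the index of its nearest weight vector.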
IEEE Transactions on Image Processing | 1995
Syed A. Rizvi; Nasser M. Nasrabadi
This paper presents a new vector quantization technique called predictive residual vector quantization (PRVQ). It combines the concepts of predictive vector quantization (PVQ) and residual vector quantization (RVQ) to implement a high performance VQ scheme with low search complexity. The proposed PRVQ consists of a vector predictor, designed by a multilayer perceptron, and an RVQ that is designed by a multilayer competitive neural network. A major task in our proposed PRVQ design is the joint optimization of the vector predictor and the RVQ codebooks. In order to achieve this, a new design based on the neural network learning algorithm is introduced. This technique is basically a nonlinear constrained optimization where each constituent component of the PRVQ scheme is optimized by minimizing an appropriate stage error function with a constraint on the overall error. This technique makes use of a Lagrangian formulation and iteratively solves a Lagrangian error function to obtain a locally optimal solution. This approach is then compared to a jointly designed and a closed-loop design approach. In the jointly designed approach, the predictor and quantizers are jointly optimized by minimizing only the overall error. In the closed-loop design, however, a predictor is first implemented; then the stage quantizers are optimized for this predictor in a stage-by-stage fashion. Simulation results show that the proposed PRVQ scheme outperforms the equivalent RVQ (operating at the same bit rate) and the unconstrained VQ by 2 and 1.7 dB, respectively. Furthermore, the proposed PRVQ outperforms the PVQ in the rate-distortion sense with significantly lower codebook search complexity.
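The residual (multi-stage) quantization that PRVQ builds on can be sketched as follows; the two tiny stage codebooks are illustrative, and the joint Lagrangian design of the stages described above is not reproduced here:

```python
import numpy as np

def rvq_encode(x, stage_books):
    """Multi-stage residual VQ: each stage quantizes the residual left by
    the previous stages; the indices of all stages are transmitted."""
    indices = []
    recon = np.zeros_like(x)
    residual = x.copy()
    for cb in stage_books:
        d = ((cb - residual) ** 2).sum(axis=1)
        j = int(d.argmin())
        indices.append(j)
        recon = recon + cb[j]
        residual = x - recon
    return indices, recon

stage1 = np.array([[0.0, 0.0], [1.0, 1.0]])
stage2 = np.array([[0.0, 0.0], [0.25, 0.0], [0.0, 0.25]])
x = np.array([1.2, 1.05])
idx, xhat = rvq_encode(x, [stage1, stage2])
print(idx, xhat)  # [1, 1] [1.25 1.  ]
```

Search complexity grows with the sum of the stage codebook sizes rather than their product, which is the source of the low complexity cited in the abstract.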
IEEE Transactions on Circuits and Systems for Video Technology | 1995
Syed A. Rizvi; Nasser M. Nasrabadi
Vector quantizer (VQ) encoders generally use the Euclidean distance measure to encode vectors. The major computation in the Euclidean distance is the square (a multiplication operation) of the difference between vector components. This article explores Euclidean distance computation and introduces a new technique that uses a truncated look-up table (LUT) to store a small set of repeatedly generated scalars. Specifically, for numbers represented by m bits, this technique requires storing only 2^m product terms instead of the 2^m × 2^m product terms needed in a conventional LUT.
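The idea rests on the fact that (a-b)^2 depends only on |a-b|, so for m-bit components a table of 2^m squares suffices. A minimal sketch with m = 8:

```python
# Truncated LUT: for m-bit components, store the 2**m squares of the
# possible |a - b| differences rather than all 2**m * 2**m products a*b.
m = 8
SQ = [d * d for d in range(1 << m)]       # 256 entries, not 65536

def dist2(u, v):
    """Squared Euclidean distance using only subtractions, table look-ups,
    and additions at encode time (no multiplications)."""
    return sum(SQ[abs(a - b)] for a, b in zip(u, v))

print(dist2([3, 200, 17], [1, 190, 17]))  # 4 + 100 + 0 = 104
```

For m = 8 the table shrinks from 64 K entries to 256, which is what makes a hardware LUT practical.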
Journal of The Optical Society of America A-optics Image Science and Vision | 1989
Nasser M. Nasrabadi; Sandra P. Clifford; Yi Liu
A cooperative method is proposed in which image intensity (brightness) and optical-flow information are integrated into a single stereo technique by modeling the input data as coupled Markov random fields (MRFs). The Bayesian probabilistic estimation method and the MRF–Gibbs equivalence theory are used to integrate the optical flow and the gray-level intensity information to obtain an energy function that will explicitly represent the depth discontinuity and occlusion constraints on the solution. This energy function involves the similarity in intensity (or edge orientation) and the optical flow between corresponding sites of the left and right images as well as the smoothness constraint on the disparity solution. If a simple MRF is used to model the data, the energy function will yield a poor disparity by smoothing across object boundaries, particularly when occluding objects are present. We use optical-flow information to indicate object boundaries (depth discontinuities) and occluded regions, in order to improve the disparity solution in occluded regions. A stochastic relaxation algorithm (simulated annealing) is used to find a favorable disparity solution by minimization of the energy equation.
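A heavily simplified sketch of the energy-minimization step: a 1-D "scanline" disparity field with an intensity data term and a smoothness term, minimized by simulated annealing. The energy weights, temperature schedule, and toy signals are all assumptions; the paper's coupled-field occlusion terms are omitted.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "stereo pair": the right scanline is the left one shifted so the
# true disparity is 2 at every site.
left = np.array([0, 0, 5, 5, 5, 0, 0, 0, 9, 9], float)
right = np.roll(left, 2)                  # right[i] == left[i - 2]
n = len(left)

def energy(d):
    """Data term (intensity similarity) plus a smoothness term on disparity."""
    data = sum((left[i] - right[(i + d[i]) % n]) ** 2 for i in range(n))
    smooth = sum((d[i] - d[i - 1]) ** 2 for i in range(1, n))
    return data + 0.5 * smooth

disp = rng.integers(0, 4, n)              # random initial disparity field
e = energy(disp)
T = 5.0
for _ in range(200):                      # annealing sweeps with cooling
    for i in range(n):
        old = disp[i]
        disp[i] = rng.integers(0, 4)      # propose a new disparity at site i
        e_new = energy(disp)
        # Metropolis rule: accept improvements; accept worse moves with
        # probability exp(-dE / T) to escape local minima.
        if e_new <= e or rng.random() < math.exp((e - e_new) / T):
            e = e_new
        else:
            disp[i] = old                 # reject: restore the old label
    T *= 0.97
```

On this toy input the field settles at (or very near) the constant disparity 2, the global minimum of the energy.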
Computer Vision and Pattern Recognition | 1996
Lipchen Alex Chan; Nasser M. Nasrabadi; Vincent Mirelli
An automatic target recognition (ATR) classifier is proposed that uses modularly cascaded vector quantizers (VQs) and multilayer perceptrons (MLPs). A dedicated VQ codebook is constructed for each target class at a specific range of aspects, which is trained with the K-means algorithm and a modified learning vector quantization (LVQ) algorithm. Each final codebook is expected to give the lowest mean squared error (MSE) for its correct target class at a given range of aspects. These MSEs are then processed by an array of window MLPs and a target MLP consecutively. In the spatial domain, target recognition rates of 90.3 and 65.3 percent are achieved for moderately and highly cluttered test sets, respectively. Using the wavelet decomposition with an adaptive and independent codebook per sub-band, the VQs alone have produced recognition rates of 98.7 and 69.0 percent on more challenging training and test sets, respectively.
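The first stage of the classifier, assigning an input to the class whose dedicated codebook reconstructs it with the lowest MSE, can be sketched as follows (toy 2-D codebooks stand in for the trained per-class, per-aspect codebooks, and the MLP stages that process the MSEs are omitted):

```python
import numpy as np

def classify(x, class_codebooks):
    """Assign x to the class whose dedicated VQ codebook gives the lowest
    reconstruction MSE, the first stage of the cascaded VQ/MLP classifier."""
    mses = [((cb - x) ** 2).sum(axis=1).min() for cb in class_codebooks]
    return int(np.argmin(mses))

# Illustrative codebooks for a "target" class and a "clutter" class.
cb_target = np.array([[1.0, 1.0], [0.9, 1.1]])
cb_clutter = np.array([[0.0, 0.0], [0.1, 0.2]])
print(classify(np.array([0.95, 1.0]), [cb_target, cb_clutter]))  # 0 (target)
```

In the full scheme these per-class MSEs are not thresholded directly but passed to the window and target MLPs for the final decision.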