Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Pankaj Topiwala is active.

Publication


Featured research published by Pankaj Topiwala.


IEEE Transactions on Image Processing | 1999

The application of multiwavelet filterbanks to image processing

Vasily Strela; Peter Niels Heller; Gilbert Strang; Pankaj Topiwala; Christopher Heil

Multiwavelets are a new addition to the body of wavelet theory. Realizable as matrix-valued filterbanks leading to wavelet bases, multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. After reviewing this theory, we examine the use of multiwavelets in a filterbank setting for discrete-time signal and image processing. Multiwavelets differ from scalar wavelet systems in requiring two or more input streams to the multiwavelet filterbank. We describe two methods (repeated row and approximation/deapproximation) for obtaining such a vector input stream from a one-dimensional (1-D) signal. Algorithms for symmetric extension of signals at boundaries are then developed, and naturally integrated with approximation-based preprocessing. We describe an additional algorithm for multiwavelet processing of two-dimensional (2-D) signals, two rows at a time, and develop a new family of multiwavelets (the constrained pairs) that is well-suited to this approach. This suite of novel techniques is then applied to two basic signal processing problems, denoising via wavelet-shrinkage, and data compression. After developing the approach via model problems in one dimension, we apply multiwavelet processing to images, frequently obtaining performance superior to the comparable scalar wavelet transform.
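The repeated-row preprocessing mentioned above, and the matrix-valued filtering that distinguishes multiwavelets from scalar wavelets, can be sketched minimally. The 2x2 taps below are hypothetical placeholders for illustration, not the paper's constrained-pair filters:

```python
# Sketch of repeated-row vectorization and one matrix-valued analysis
# branch of a multiwavelet filterbank. The taps are hypothetical.

def repeated_row(signal):
    """Repeated-row preprocessing: duplicate the scalar signal into a
    2-component vector stream (each sample becomes a length-2 vector)."""
    return [(x, x) for x in signal]

def mat_vec(m, v):
    """Multiply a 2x2 matrix (tuple of rows) by a length-2 vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def matrix_filter_downsample(stream, taps):
    """One analysis branch: convolve the vector stream with matrix taps
    H[0..K-1] and downsample by 2 (zero-padded at the boundary, unlike
    the symmetric extension developed in the paper)."""
    out = []
    for n in range(0, len(stream), 2):
        acc = (0.0, 0.0)
        for k, h in enumerate(taps):
            if 0 <= n - k < len(stream):
                y = mat_vec(h, stream[n - k])
                acc = (acc[0] + y[0], acc[1] + y[1])
        out.append(acc)
    return out

# Hypothetical averaging taps, for illustration only.
taps = [((0.5, 0.0), (0.0, 0.5)), ((0.5, 0.0), (0.0, 0.5))]
lowpass = matrix_filter_downsample(repeated_row([1.0, 2.0, 3.0, 4.0]), taps)
```

The point of the sketch is only the data flow: two input streams in, matrix taps applied per sample, one vector-valued subband out.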


Proceedings of the American Mathematical Society | 1996

Linear independence of time-frequency translates

Christopher Heil; Jayakumar Ramanathan; Pankaj Topiwala

Abstract. The refinement equation φ(t) = ∑_{k=N1}^{N2} c_k φ(2t − k) plays a key role in wavelet theory and in subdivision schemes in approximation theory. Viewed as an expression of linear dependence among the time-scale translates |a|^{1/2} φ(at − b) of φ ∈ L²(R), it is natural to ask if there exist similar dependencies among the time-frequency translates e^{2πibt} f(t + a) of f ∈ L²(R). In other words, what is the effect of replacing the group representation of L²(R) induced by the affine group with the corresponding representation induced by the Heisenberg group? This paper proves that there are no nonzero solutions to lattice-type generalizations of the refinement equation to the Heisenberg group. Moreover, it is proved that for each arbitrary finite collection {(a_k, b_k)}_{k=1}^{N}, the set of all functions f ∈ L²(R) such that {e^{2πib_k t} f(t + a_k)}_{k=1}^{N} is linearly independent is an open, dense subset of L²(R). It is conjectured that this set is all of L²(R) \ {0}.
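The contrast the abstract draws can be written out side by side; a sketch in the abstract's own notation (the exact lattice-type form is inferred from the text, not quoted from the paper):

```latex
% Affine (time-scale) case: the refinement equation, which admits
% nonzero solutions \varphi (scaling functions):
\varphi(t) = \sum_{k=N_1}^{N_2} c_k \, \varphi(2t - k)

% Heisenberg (time-frequency) analogue: a lattice-type dependence
% among time-frequency translates,
f(t) = \sum_{k=1}^{N} c_k \, e^{2\pi i b_k t} \, f(t + a_k),
% which the paper proves has no nonzero solution f \in L^2(\mathbb{R}).
```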


Proceedings of SPIE | 2005

Comparative Study of JPEG2000 and H.264/AVC FRExt I-Frame Coding on High-Definition Video Sequences

Pankaj Topiwala

This paper reports the rate-distortion performance comparison of JPEG2000 with H.264/AVC Fidelity Range Extensions (FRExt) High Profile I-frame coding for high-definition (HD) video sequences. This work can be considered an extension of a similar earlier study involving H.264/AVC Main Profile [1]. Coding simulations are performed on a set of 720p and 1080p HD video sequences, which have been commonly used for H.264/AVC standardization work. As expected, our experimental results show that H.264/AVC FRExt I-frame coding offers consistent R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over JPEG2000 color image coding. However, as in [1, 2], we have not considered scalability, computational complexity, or other JPEG2000 features in this study.
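The gains quoted above are in PSNR; for reference, a minimal sketch of PSNR between two 8-bit sample sequences (flattened image planes) might look like:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sample sequences (e.g. flattened 8-bit image planes)."""
    if len(ref) != len(test):
        raise ValueError("sequences must have equal length")
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak * peak / mse)

# A unit mean-squared error against an 8-bit peak gives 20*log10(255) dB,
# about 48.13 dB; the 0.2 to 1 dB gains above are differences on this scale.
```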


Proceedings of SPIE, the International Society for Optical Engineering | 2000

Projection-based block-matching motion estimation

Chengjie Tu; Trac D. Tran; Jerry L. Prince; Pankaj Topiwala

This paper introduces a fast block-based motion estimation algorithm based on matching projections. The idea is simple: blocks cannot match well if their corresponding 1D projections do not match well. We can take advantage of this observation to translate the expensive 2D block-matching problem into a simpler 1D matching one by quickly eliminating a majority of matching candidates. Our novel motion estimation algorithm offers computational scalability through a single parameter, and the global optimum can still be achieved. Moreover, an efficient implementation to compute projections and to buffer recyclable data is also presented. Experiments show that the proposed algorithm is several times faster than the exhaustive search algorithm with nearly identical prediction performance. With the proposed BME method, high-performance real-time all-software video encoding starts to become practical for reasonable video sizes.
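The pruning rests on the triangle inequality: per row, |sum(a) − sum(b)| ≤ sum|a − b|, so the SAD between row projections is a lower bound on the full 2D SAD, and any candidate whose projection SAD already exceeds the current best cannot be the optimum. A small sketch under simplifying assumptions (plain Python, square blocks, full-frame search window, none of the paper's buffering scheme):

```python
def row_projection(block):
    """1D projection: sum each row of a 2D block (list of lists)."""
    return [sum(row) for row in block]

def sad_1d(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def sad_2d(a, b):
    return sum(sad_1d(ra, rb) for ra, rb in zip(a, b))

def extract(frame, i, j, n):
    return [row[j:j + n] for row in frame[i:i + n]]

def projection_search(ref_block, frame, n):
    """Exhaustive search over all n x n candidates in `frame`, pruning
    with the projection lower bound; the global optimum is preserved
    because a pruned candidate's full SAD cannot beat the current best."""
    ref_proj = row_projection(ref_block)
    best, best_pos = float("inf"), None
    for i in range(len(frame) - n + 1):
        for j in range(len(frame[0]) - n + 1):
            cand = extract(frame, i, j, n)
            if sad_1d(ref_proj, row_projection(cand)) >= best:
                continue  # lower bound already worse: skip full 2D SAD
            cost = sad_2d(ref_block, cand)
            if cost < best:
                best, best_pos = cost, (i, j)
    return best_pos, best
```

A real encoder would restrict the search window and reuse column sums across neighboring candidates; the sketch keeps only the lower-bound idea.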


SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation | 1994

Asymptotic singular value decay of time-frequency localization operators

Christopher Heil; Jayakumar Ramanathan; Pankaj Topiwala

The Weyl correspondence is a convenient way to define a broad class of time-frequency localization operators. Given a region Ω in the time-frequency plane R² and given an appropriate μ, the Weyl correspondence can be used to construct an operator L(Ω, μ) which essentially localizes the time-frequency content of a signal on Ω. Different choices of μ provide different interpretations of localization. Empirically, each such localization operator has the following singular value structure: there are several singular values close to 1, followed by a sharp plunge in values, with a final asymptotic decay to zero. The exact quantification of these qualitative observations is known only for a few specific choices of Ω and μ. In this paper we announce a general result which bounds the asymptotic decay rate of the singular values of any L(Ω, μ) in terms of integrals of χ_Ω ∗ μ̃² and (χ_Ω ∗ μ̃)² outside squares of increasing radius, where μ̃(a, b) = μ(−a, −b). More generally, this result applies to all operators L(σ, μ), allowing a window function σ in place of the characteristic function χ_Ω. We discuss the motivation and implications of this result. We also sketch the philosophy of proof, which involves the construction of an approximating operator through the technology of Gabor frames: overcomplete systems which allow basis-like expansions and Plancherel-like formulas, but which are not bases and are not orthogonal systems.


Proceedings of SPIE | 2006

Performance comparison of JPEG2000 and H.264/AVC high profile intra-frame coding on HD video sequences

Pankaj Topiwala; Trac D. Tran; Wei Dai

This paper reconsiders the rate-distortion performance comparison of JPEG2000 with H.264/AVC High Profile I-frame coding for high-definition (HD) video sequences. This work is a follow-on to our paper at SPIE 05 [14], wherein we further optimize both codecs. It also extends a similar earlier study involving H.264/AVC Main Profile [2]. Coding simulations are performed on a set of 720p and 1080p HD video sequences, which have been commonly used for H.264/AVC standardization work. As expected, our experimental results show that H.264/AVC I-frame coding offers consistent R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over JPEG2000 color image coding. As in [1, 2], we do not consider scalability or complexity in this study (JPEG2000 is used in its non-scalable, but optimal, mode).


Proceedings of SPIE | 2007

Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

Trac D. Tran; Lijie Liu; Pankaj Topiwala

This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our previous comparative studies published in previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000, with its wavelet technology, tends to be the best performer on smooth spatial data; H.264/AVC High Profile, with advanced spatial prediction modes, tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 operates in its non-scalable, but optimal-performance, mode).


Proceedings of SPIE | 2009

Video super-resolution: from QVGA to HD in real-time

Raj Mudigoudar; Srinath Bagal; Zhanfeng Yue; Pramod Lakshmi; Pankaj Topiwala

In surveillance, reconnaissance, and numerous other video applications, improving the resolution and quality of captured video enhances its usability. In many such applications, video is acquired from low-cost legacy sensors that offer low resolution due to modest optics and low-resolution arrays, providing imagery that may be grainy and missing important details, and that still faces transmission bottlenecks. Many post-processing techniques have been proposed to enhance the quality of the video, and superresolution is one such technique. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient-based technique is used to register the frames at the sub-pixel level. Once we obtain the high-resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The Matlab/Simulink-proven algorithm was migrated to hardware to achieve 320x240 -> 1280x960 at more than 38 fps, a stunning superresolution by 16X in total pixels. This significant advance beyond real-time is the main contribution of this paper. Additionally, the algorithm is implemented in C to achieve real-time performance in software, with optimization for the Intel i7 processor. A fixed 32-bit processing structure is used to allow easy migration across platforms. This gives us a fine balance between quality and performance. The proposed system is robust and highly efficient. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high-quality video.
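The back-projection loop described above can be sketched in one dimension: simulate the low-resolution observation from the current high-resolution estimate and feed the residual back. A toy sketch under simplifying assumptions (plain Python, single observation, factor-2 box downsampling, nearest-neighbour upsampling, no registration or regularization step):

```python
def downsample2(hr):
    """Simulated camera: average adjacent pairs (factor-2 box filter)."""
    return [(hr[2 * i] + hr[2 * i + 1]) / 2.0 for i in range(len(hr) // 2)]

def upsample2(lr):
    """Nearest-neighbour upsampling by 2."""
    out = []
    for v in lr:
        out += [v, v]
    return out

def iterative_back_projection(lr, iters=50, step=1.0):
    """Refine an HR estimate until its simulated LR version matches `lr`:
    hr <- hr + step * upsample(lr - downsample(hr))."""
    hr = upsample2(lr)  # initial guess
    for _ in range(iters):
        residual = [a - b for a, b in zip(lr, downsample2(hr))]
        back = upsample2(residual)
        hr = [h + step * r for h, r in zip(hr, back)]
    return hr
```

The real system fuses several sub-pixel-registered frames, which is what makes genuine detail recovery (rather than mere interpolation) possible; this toy version only illustrates the residual feedback.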


ieee sp international symposium on time frequency and time scale analysis | 1992

Time-frequency localization operators

Jayakumar Ramanathan; Pankaj Topiwala

A technique for producing signals whose energy is concentrated in a given region of the time-frequency plane is examined. The degree to which a given signal is concentrated in a region is measured by integrating its time-frequency distribution over the region. This method has been used for time-varying filtering. Localization operators based on the Wigner distribution and spectrogram are studied. Estimates for the eigenvalue decay and the smoothness and decay of the eigenfunctions are presented.
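The concentration measure described here, integrating a signal's time-frequency distribution over a region, can be sketched with a crude discrete spectrogram (plain Python DFT over non-overlapping frames; a toy illustration, not the Wigner-based operators of the paper):

```python
import cmath

def spectrogram(signal, frame_len):
    """Magnitude-squared DFT of non-overlapping frames: a crude
    discrete time-frequency distribution S[frame][freq_bin]."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spec = []
    for frame in frames:
        row = []
        for k in range(frame_len):
            coeff = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                        for n, x in enumerate(frame))
            row.append(abs(coeff) ** 2)
        spec.append(row)
    return spec

def concentration(signal, frame_len, region):
    """Fraction of total spectrogram energy inside `region`, given as
    a collection of (frame_index, freq_bin) pairs."""
    spec = spectrogram(signal, frame_len)
    total = sum(sum(row) for row in spec)
    inside = sum(spec[t][k] for t, k in region)
    return inside / total
```

A pure tone concentrates all its energy in its frequency bins (and the conjugate-symmetric partners, since the signal is real), so its concentration on that region is 1.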


Proceedings of SPIE | 2009

An improved real time superresolution FPGA system

Pramod Lakshmi Narasimha; Basavaraj Mudigoudar; Zhanfeng Yue; Pankaj Topiwala

In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc. Postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient based technique is used to register the frames at the sub-pixel level. Once we get the high resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and migrated to hardware, to achieve 320x240 -> 1280x960, about 30 fps, a stunning superresolution by 16X in total pixels. Various input parameters, such as size of input image, enlarging factor and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We have shown the performance improvement of the hardware superresolution over the software version (C code).

Collaboration


Dive into Pankaj Topiwala's collaboration.

Top Co-Authors

Wei Dai
Johns Hopkins University

Trac D. Tran
Johns Hopkins University

Pramod Lakshmi Narasimha
University of Texas at Arlington

Christopher Heil
Georgia Institute of Technology

Lijie Liu
Johns Hopkins University

David L. Hench
Air Force Research Laboratory

Vasily Strela
Massachusetts Institute of Technology