Thomas K. Paul
Santa Clara University
Publication
Featured research published by Thomas K. Paul.
IEEE Circuits and Systems Magazine | 2008
Thomas K. Paul; Tokunbo Ogunfunmi
During the initial development of the IEEE 802.11n (11n) amendment for improving the throughput of wireless LANs, there was considerable excitement surrounding the higher throughput (i.e., faster downloads) and increased range (distance) achievable. However, delays in the development of this standard (which began in 2003, and is still in the final draft stages), as well as vendor and customer reluctance to adopt the pre-11n offerings in the marketplace, have generally slowed interest in this next-generation technology. Nonetheless, there is still much to be excited about. The latest draft of IEEE 802.11n (Draft 3.0) offers the potential of throughputs beyond 200 Mbps, based on physical layer (PHY) data rates up to 600 Mbps. This is achieved through the use of multiple transmit and receive antennas, referred to as MIMO (multiple input, multiple output). Using techniques such as spatial division multiplexing (SDM), transmitter beamforming, and space-time block coding (STBC), MIMO is used to dramatically increase throughput over single-antenna systems (by two to four times) or to improve the range of reception, depending on the environment. This article offers an exposition of the techniques used in IEEE 802.11n to achieve the above improvements in throughput and range. First, the current-generation WLAN devices (11a/b/g) are described in terms of the benefits offered to end users. Next, the evolution of the 11n amendment is discussed, describing the main proposals given, and illustrating reasons for the delay in standardization. Then, the changes to the PHY for 11n are presented. A description of channel modeling with MIMO is shown, followed by the signal processing techniques employed, including MIMO channel estimation and detection, space-time block coding (STBC), and transmitter beamforming. Simulation results are presented which illustrate the benefits of these techniques, versus the existing a/g structures, for both throughput and range. Finally, a brief section outlining considerations for the rapid prototyping of a baseband design based on the 802.11n PHY is presented. We conclude with a discussion of the future for 11n, describing the issues addressed with Drafts 2.0 and 3.0, as well as its place in a wireless market with WiMAX and Bluetooth.
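A minimal sketch of the space-time block coding idea mentioned above, using the classic Alamouti 2x1 scheme over a flat Rayleigh-fading channel with QPSK symbols. This illustrates the diversity technique only; the actual 802.11n transmitter chain (OFDM, coding, interleaving) is not modeled, and all parameters are illustrative.

```python
# Hypothetical illustration: Alamouti 2x1 space-time block coding over a flat
# Rayleigh-fading channel with QPSK symbols (not the full 802.11n transmitter).
import numpy as np

rng = np.random.default_rng(0)
num_pairs = 10_000
snr_db = 10.0

# QPSK symbols, transmitted in pairs (s0, s1)
bits = rng.integers(0, 2, size=(num_pairs, 4))
qpsk = ((2 * bits[:, 0::2] - 1) + 1j * (2 * bits[:, 1::2] - 1)) / np.sqrt(2)
s0, s1 = qpsk[:, 0], qpsk[:, 1]

# Flat Rayleigh channel coefficients, constant over each two-slot code block
h0 = (rng.standard_normal(num_pairs) + 1j * rng.standard_normal(num_pairs)) / np.sqrt(2)
h1 = (rng.standard_normal(num_pairs) + 1j * rng.standard_normal(num_pairs)) / np.sqrt(2)

noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
n0 = noise_std * (rng.standard_normal(num_pairs) + 1j * rng.standard_normal(num_pairs))
n1 = noise_std * (rng.standard_normal(num_pairs) + 1j * rng.standard_normal(num_pairs))

# Slot 1: antennas send (s0, s1); slot 2: antennas send (-conj(s1), conj(s0))
r0 = h0 * s0 + h1 * s1 + n0
r1 = -h0 * np.conj(s1) + h1 * np.conj(s0) + n1

# Alamouti combining recovers each symbol with diversity gain |h0|^2 + |h1|^2
s0_hat = np.conj(h0) * r0 + h1 * np.conj(r1)
s1_hat = np.conj(h1) * r0 - h0 * np.conj(r1)

# Hard QPSK decisions and bit error rate
dec_bits = np.stack([s0_hat.real > 0, s0_hat.imag > 0,
                     s1_hat.real > 0, s1_hat.imag > 0], axis=1).astype(int)
print("BER:", np.mean(dec_bits != bits))
```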
IEEE Transactions on Circuits and Systems | 2011
Thomas K. Paul; Tokunbo Ogunfunmi
The affine projection class of algorithms (APA) provides faster convergence than LMS-based adaptive filters. Its convergence analysis, however, is not as extensively studied as that of Normalized LMS (NLMS) and remains an active area of research. For tractability, most analyses of APA make many assumptions on the statistics of the input as well as the correlation between signals. Here we consider the effect of the correlation between the filter coefficients and past measurement noise on the mean-square error (MSE). The effect of this correlation was found to depend on the step size mu, increasing or decreasing the predicted MSE depending on whether mu is less than or greater than 1, irrespective of the input statistics. Simulations are used to verify the analysis results presented.
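A minimal sketch of the affine projection update analyzed above, applied to system identification. The filter length, projection order, step size, and regularization constant are illustrative choices, not values from the paper.

```python
# Minimal affine projection algorithm (APA) sketch for system identification;
# all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, K = 16, 4            # filter length, projection order (number of reused inputs)
mu, delta = 0.5, 1e-3   # step size and regularization
n_samples = 5000

w_true = rng.standard_normal(L)          # unknown system
w = np.zeros(L)                          # adaptive filter coefficients
x = rng.standard_normal(n_samples)       # input (white here; APA shines on colored input)
d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

for n in range(L + K, n_samples):
    # Data matrix of the K most recent input vectors (L x K) and desired samples
    X = np.column_stack([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
    d_vec = d[n - K + 1:n + 1][::-1]
    e = d_vec - X.T @ w
    # APA update: project the error through the regularized K x K correlation matrix
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

print("coefficient error norm:", np.linalg.norm(w - w_true))
```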
IEEE Communications Surveys and Tutorials | 2009
Thomas K. Paul; Tokunbo Ogunfunmi
IEEE 802.11n is a newly emerged WLAN standard capable of providing dramatically increased throughput, as well as improved range and reduced signal fading, compared with the existing IEEE 802.11a/g WLAN standards. These benefits are achieved through the use of MIMO (multiple-input, multiple-output) technology. The latest draft for IEEE 802.11n describes rates up to 600 Mbps, exceeding the maximum rate of the 11a/g standards by more than ten times. In addition, techniques such as space-time block coding and beamforming provide the potential of increasing signal strength at the receiver with optimal efficiency, based on the diversity order used. In this paper, a comparative analysis of the physical (PHY) layers in the original main proposals for the 11n amendment (the TGn Sync, WWiSE and TGn Joint proposals) is presented. The key architectural differences governing the performance of these proposals are outlined. In addition, insights are provided into the choices leading to the TGn Joint proposal, which reflects the PHY architecture described in the 11n standard. The insights and challenges described relate to the choices made in the TGn Joint proposal regarding channel estimation (considering the use of beamforming and channel smoothing), bit interleaving techniques (for maximizing coding gain under channels with high frequency diversity), space-time block coding (STBC) options (designed to achieve a good balance between high diversity gain and low receiver design complexity), and pilot tone selection (for a reasonable tradeoff between robustness and link-level performance). Performance curves (based on simulation models developed in MATLAB/SIMULINK) are used to verify the analysis presented. This paper also includes a discussion of some of the future challenges for the 11n amendment.
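A rough sketch of the channel estimation step named above: least-squares MIMO channel estimation from a known training block, followed by zero-forcing detection of a spatially multiplexed data block. The antenna counts, training design, and SNR are illustrative assumptions and do not follow the 802.11n preamble format.

```python
# Illustrative least-squares MIMO channel estimation and zero-forcing detection;
# dimensions and training design are assumptions, not the 802.11n preamble.
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_rx, n_train, n_data = 2, 2, 8, 100
snr_db = 20.0
noise_std = np.sqrt(10 ** (-snr_db / 10))

H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Known training matrix (n_tx x n_train), QPSK-like entries
S_train = (rng.choice([-1, 1], (n_tx, n_train)) + 1j * rng.choice([-1, 1], (n_tx, n_train))) / np.sqrt(2)
Y_train = H @ S_train + noise_std * (rng.standard_normal((n_rx, n_train))
                                     + 1j * rng.standard_normal((n_rx, n_train)))

# Least-squares channel estimate: H_hat = Y S^H (S S^H)^{-1}
H_hat = Y_train @ S_train.conj().T @ np.linalg.inv(S_train @ S_train.conj().T)

# Spatially multiplexed data block, then zero-forcing detection with H_hat
S_data = (rng.choice([-1, 1], (n_tx, n_data)) + 1j * rng.choice([-1, 1], (n_tx, n_data))) / np.sqrt(2)
Y_data = H @ S_data + noise_std * (rng.standard_normal((n_rx, n_data))
                                   + 1j * rng.standard_normal((n_rx, n_data)))
S_zf = np.linalg.pinv(H_hat) @ Y_data

print("channel estimation error:", np.linalg.norm(H - H_hat) / np.linalg.norm(H))
print("symbol MSE after ZF:", np.mean(np.abs(S_zf - S_data) ** 2))
```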
IEEE Transactions on Neural Networks | 2015
Thomas K. Paul; Tokunbo Ogunfunmi
The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
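The quaternion derivation builds on the standard kernel LMS recursion. Below is a real-valued KLMS sketch with a Gaussian kernel, shown only to illustrate that underlying recursion; the quaternion RKHS, HR-calculus gradient, and widely linear terms from the paper are not reproduced, and the test system and parameters are made up for illustration.

```python
# Real-valued kernel LMS (KLMS) sketch with a Gaussian kernel: every input is
# stored as a center, and its coefficient is set to step size times the error.
# This is only the underlying recursion; the quaternion/widely-linear version
# in the paper replaces the real arithmetic and kernel.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.5, 1.0
n_samples = 1000

def gauss_kernel(a, b):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma ** 2))

# Nonlinear system to identify: d = f(x) + noise, with a 2-tap input vector
x = rng.uniform(-1, 1, n_samples)
X = np.column_stack([x[1:], x[:-1]])
d = np.tanh(2 * X[:, 0]) - 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(len(X))

centers, coeffs, errors = [], [], []
for xi, di in zip(X, d):
    # Prediction is a kernel expansion over all stored centers
    y = np.dot(coeffs, gauss_kernel(np.array(centers), xi)) if centers else 0.0
    e = di - y
    centers.append(xi)
    coeffs.append(mu * e)   # KLMS: new coefficient is mu * instantaneous error
    errors.append(e ** 2)

print("MSE over last 100 samples:", np.mean(errors[-100:]))
```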
International Symposium on Circuits and Systems | 2011
Tokunbo Ogunfunmi; Thomas K. Paul
A recent paper titled “The Complex Gaussian Kernel LMS Algorithm” by Bouboulis and Theodoridis introduced a complex version of the Gaussian kernel LMS (KLMS) algorithm. In this paper, we extend the concepts of complex and complexified RKHS spaces to develop suitable complex kernel-based adaptive algorithms using the kernel affine projection algorithm (KAPA) approach. We apply the complex Gaussian kernel here, as well as develop APA-based algorithms using other suitable complex kernels. We evaluate the performance of the new algorithms using practical simulation applications. The complex KAPA algorithms are seen to outperform their LMS-based counterparts, particularly for applications where convergence rate is important.
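A rough sketch of the kernel affine projection recursion with a real Gaussian kernel (a KAPA-style normalized update, every input kept as a center, no sparsification). The complex and complexified kernels developed in the paper are omitted, and the test system and parameters are illustrative assumptions.

```python
# Kernel affine projection (KAPA) sketch, real Gaussian kernel, no sparsification.
# The update reuses the K most recent inputs and solves a small regularized
# K x K system, analogous to APA but in the kernel-induced feature space.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, eps, K = 0.3, 1.0, 1e-2, 5
n_samples = 800

def gauss_kernel(A, B):
    # Pairwise Gaussian kernel between rows of A and rows of B
    d2 = np.sum(A ** 2, 1)[:, None] - 2 * A @ B.T + np.sum(B ** 2, 1)[None, :]
    return np.exp(-d2 / (2 * sigma ** 2))

x = rng.uniform(-1, 1, n_samples)
X = np.column_stack([x[1:], x[:-1]])
d = np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(len(X))

centers = np.empty((0, 2))
coeffs = np.empty(0)
for n in range(len(X)):
    k = min(K, n + 1)
    X_k = X[n - k + 1:n + 1]                 # the k most recent inputs
    d_k = d[n - k + 1:n + 1]
    y_k = gauss_kernel(X_k, centers) @ coeffs if len(coeffs) else np.zeros(k)
    e_k = d_k - y_k
    # Append the newest input as a center with zero coefficient, then update
    # the last k coefficients through the regularized k x k Gram matrix.
    centers = np.vstack([centers, X[n]])
    coeffs = np.append(coeffs, 0.0)
    G = gauss_kernel(X_k, X_k)
    coeffs[-k:] += mu * np.linalg.solve(G + eps * np.eye(k), e_k)

print("final instantaneous error:", e_k[-1])
```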
IEEE Transactions on Neural Networks | 2013
Thomas K. Paul; Tokunbo Ogunfunmi
The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used for verifying the analysis results.
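A quick numerical illustration of the circularity property referred to above, under the usual definition that a circular complex signal has (approximately) zero pseudo-covariance E[z^2]. The specific signal models below are illustrative assumptions.

```python
# Circular vs. non-circular complex signals: a circular signal has near-zero
# pseudo-covariance E[z^2]; a non-circular one does not.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Circular: i.i.d. real and imaginary parts with equal variance
z_circ = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Non-circular: unequal variances on the real and imaginary parts
z_ncirc = 1.3 * rng.standard_normal(n) + 0.3j * rng.standard_normal(n)

for name, z in [("circular", z_circ), ("non-circular", z_ncirc)]:
    cov = np.mean(z * np.conj(z))      # ordinary covariance E[z z*]
    pcov = np.mean(z * z)              # pseudo-covariance E[z z]
    print(f"{name}: E[|z|^2] = {cov.real:.3f}, |E[z^2]| = {abs(pcov):.3f}")
```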
International Symposium on Circuits and Systems | 2012
Thomas K. Paul; Tokunbo Ogunfunmi
Kernel-based adaptive filters present a new opportunity to recast nonlinear optimization problems over a reproducing kernel Hilbert space (RKHS), transforming the nonlinear task into a linear one, where easier and well-known methods may be used. The approach can be seen to yield solutions suitable for sparse adaptive filtering. The new complex kernel least mean square (CKLMS) algorithm, derived by Bouboulis and Theodoridis, allows kernel-based online adaptive filtering for complex data. Here we report our results on the convergence of CKLMS with the complexified form of the Gaussian kernel. The analysis performed is based on a recent study of the kernel LMS by Parreira et al. The analysis is used to generate theory-predicted MSE curves which consider the circularity/non-circularity of the complex input, which to our knowledge has not been considered previously for online nonlinear learning. Simulations are used to verify the theoretical analysis results.
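One simple way to reuse a real kernel for complex data, shown below as a loose illustration only, is to evaluate the real Gaussian kernel on the stacked real/imaginary representation of the inputs. Whether this matches the exact complexified-RKHS construction used in the paper is not verified here; the vectors and bandwidth are made-up examples.

```python
# Illustrative only: a real Gaussian kernel evaluated on the stacked
# real/imaginary representation of complex inputs. The paper's complexified
# RKHS construction is more involved and is not reproduced here.
import numpy as np

sigma = 1.0

def stack_real(z):
    return np.concatenate([z.real, z.imag])

def gauss_kernel_stacked(z1, z2):
    u, v = stack_real(z1), stack_real(z2)
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

z1 = np.array([0.3 + 0.4j, -0.1 + 0.2j])
z2 = np.array([0.2 - 0.1j,  0.5 + 0.0j])
print(gauss_kernel_stacked(z1, z2))
# Note: ||u - v||^2 equals sum |z1_i - z2_i|^2, so this is the Gaussian kernel
# evaluated with the complex vector norm.
```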
IEEE Transactions on Circuits and Systems II: Express Briefs | 2015
Tokunbo Ogunfunmi; Thomas K. Paul
We develop a kernel adaptive filter for quaternion data based on maximizing correntropy. We apply a modified form of the HR calculus that is applicable to Hilbert spaces to evaluate the cost function gradient and develop the quaternion kernel maximum correntropy (KMC) algorithm. The KMC method uses correntropy to measure similarity between the filter output and the desired response. Here, the approach is applied to quaternions to improve performance for biased or non-Gaussian signals compared with the minimum mean-square-error criterion of the kernel least-mean-square algorithm. Simulation results demonstrate the improved performance with non-Gaussian inputs.
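A sketch of the maximum correntropy criterion update for a plain real-valued linear adaptive filter under impulsive noise, compared with LMS. The paper applies this criterion in a quaternion RKHS; only the real linear case is shown here to illustrate why the Gaussian kernel of the error down-weights outliers, and the noise model and parameters are illustrative.

```python
# Maximum correntropy criterion (MCC) vs. LMS for a real linear filter under
# impulsive noise. The quaternion kernel version in the paper is not shown.
import numpy as np

rng = np.random.default_rng(6)
L, mu, sigma = 8, 0.05, 1.0
n_samples = 20_000

w_true = rng.standard_normal(L)
x = rng.standard_normal(n_samples)
# Impulsive measurement noise: occasional large outliers
noise = 0.01 * rng.standard_normal(n_samples)
impulses = rng.random(n_samples) < 0.02
noise[impulses] += 10 * rng.standard_normal(np.sum(impulses))
d = np.convolve(x, w_true)[:n_samples] + noise

w_lms = np.zeros(L)
w_mcc = np.zeros(L)
for n in range(L, n_samples):
    u = x[n - L + 1:n + 1][::-1]
    e_lms = d[n] - w_lms @ u
    e_mcc = d[n] - w_mcc @ u
    w_lms += mu * e_lms * u
    # MCC: the Gaussian kernel of the error down-weights outlier samples
    w_mcc += mu * np.exp(-e_mcc ** 2 / (2 * sigma ** 2)) * e_mcc * u

print("LMS coefficient error:", np.linalg.norm(w_lms - w_true))
print("MCC coefficient error:", np.linalg.norm(w_mcc - w_true))
```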
Signal Processing Systems | 2006
Tokunbo Ogunfunmi; Thomas K. Paul
This paper presents an analysis of the convergence of the frequency-domain LMS adaptive filter when the DFT is computed using the LMS steepest-descent algorithm. In this case, the frequency-domain adaptive filter is implemented as a cascade of two sections, each updated using the LMS algorithm. Since the structure contains two adaptive algorithms updating in parallel, an analysis of the overall system convergence needs to consider the effect of the two adaptive algorithms on each other, in addition to their individual convergence. The analysis was based on the expected mean-square coefficient error for each of the two LMS adaptive algorithms, with some simplifying approximations for the second algorithm, to describe the convergence behavior of the overall system. Simulations were used to verify the results.
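A rough transform-domain LMS sketch of the second (per-bin) stage described above. In the paper the DFT itself is also computed by an LMS steepest-descent recursion; that first adaptive stage is replaced by an explicit FFT here for brevity, and the filter length, step size, and power-smoothing constant are illustrative assumptions.

```python
# Transform-domain (DFT) LMS sketch: the input tap vector is transformed with
# an explicit FFT and one complex weight per bin is adapted with LMS. The
# paper's LMS-based DFT stage is replaced by numpy's FFT for brevity.
import numpy as np

rng = np.random.default_rng(7)
N, mu = 16, 0.1
n_samples = 5000

w_true = rng.standard_normal(N)
x = rng.standard_normal(n_samples)
d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(N, dtype=complex)       # one complex weight per frequency bin
p = np.ones(N)                       # running power estimate per bin
errors = []
for n in range(N, n_samples):
    u = x[n - N + 1:n + 1][::-1]     # current tap vector
    U = np.fft.fft(u)                # transform stage (FFT stands in for the LMS-DFT)
    y = np.real(np.vdot(w, U))       # filter output: sum_k conj(w_k) U_k
    e = d[n] - y
    p = 0.99 * p + 0.01 * np.abs(U) ** 2
    w += mu * np.conj(e) * U / p     # normalized per-bin complex LMS update
    errors.append(e ** 2)

print("MSE over last 500 samples:", np.mean(errors[-500:]))
```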
Signal Processing Systems | 2009
Tokunbo Ogunfunmi; Thomas K. Paul
We present an analysis of the convergence of the frequency-domain LMS adaptive filter when the DFT is computed using the LMS steepest-descent algorithm. In this case, the frequency-domain adaptive filter is implemented as a cascade of two sections, each updated using the LMS algorithm. The structure requires fewer computations than using the FFT and is modular, making it suitable for VLSI implementations. Since the structure contains two adaptive algorithms updating in parallel, an analysis of the overall system convergence needs to consider the effect of the two adaptive algorithms on each other, in addition to their individual convergence. The analysis was based on the expected mean-square coefficient error for each of the two LMS adaptive algorithms, with some simplifying approximations for the second algorithm, to describe the convergence behavior of the overall system. Simulations were used to verify the results.