Publication


Featured research published by C.R. Johnson.


IEEE Transactions on Communications | 1991

Ill-convergence of Godard blind equalizers in data communication systems

Zhi Ding; Rodney A. Kennedy; Brian D. O. Anderson; C.R. Johnson

The existence of stable undesirable equilibria for the Godard algorithm is demonstrated through a simple autoregressive (AR) channel model. These undesirable equilibria correspond to local but nonglobal minima of the underlying mean cost function, and thus permit the ill-convergence of the Godard algorithms, which are stochastic gradient descent in nature. Simulation results confirm the predicted misbehavior. For channel input of constant modulus, it is shown that attaining the global minimum of the mean cost necessarily implies correct equalization. A criterion is also presented for allowing a decision at the equalizer as to whether a global or nonglobal minimum has been reached.
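As an illustrative aside (not taken from the paper), the Godard p=2 stochastic gradient update the abstract refers to can be sketched in a few lines of NumPy. The channel, step size, and equalizer length below are hypothetical choices, and the constant-modulus BPSK source matches the abstract's constant-modulus setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# BPSK source (constant modulus) through a hypothetical two-tap channel.
s = rng.choice([-1.0, 1.0], size=5000)
h = np.array([1.0, 0.4])
x = np.convolve(s, h)[: len(s)]

L = 8                                    # equalizer length (assumed)
R2 = np.mean(s ** 4) / np.mean(s ** 2)   # Godard dispersion constant (= 1 for BPSK)
w = np.zeros(L)
w[0] = 1.0                               # "spike" (center-tap) initialization
mu = 1e-3                                # step size (assumed)

disp_before = np.mean((x ** 2 - R2) ** 2)   # dispersion of the unequalized signal

for n in range(L - 1, len(x)):
    u = x[n - L + 1 : n + 1][::-1]       # regressor, most recent sample first
    y = w @ u                            # equalizer output
    w -= mu * (y ** 2 - R2) * y * u      # stochastic gradient step on E[(y^2 - R2)^2]

# Dispersion of the last 500 outputs after adaptation.
y_tail = np.array([w @ x[n - L + 1 : n + 1][::-1] for n in range(len(x) - 500, len(x))])
disp_after = np.mean((y_tail ** 2 - R2) ** 2)
```

With this well-behaved minimum-phase channel the algorithm converges to a good setting; the paper's point is that for other channels (its AR example) the same iteration can settle into a stable but undesirable local minimum.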


IEEE Transactions on Information Theory | 1984

Adaptive IIR filtering: Current results and open issues

C.R. Johnson

A tutorial-style framework is presented for understanding the current status of adaptive infinite-impulse-response (IIR) filters. The paper begins with a detailed discussion of the difference equation models that are useful as adaptive IIR filters. The particular form of the resulting prediction error generic to adaptive IIR filters is highlighted and the structures of provable convergent adaptive algorithms are derived. A brief summary of particular, currently known performance properties, drawn principally from the system identification literature, is followed by the formulation of three illustrative adaptive signal processing problems, to which these adaptive IIR filters are applicable. The concluding section discusses various open issues raised by the formulation of this framework.


IEEE Signal Processing Magazine | 1996

Fractionally spaced equalizers

John R. Treichler; Inbar Fijalkow; C.R. Johnson

Modern digital transmission systems commonly use an adaptive equalizer as a key part of the receiver. The design of this equalizer is important since it determines the maximum quality attainable from the system, and represents a high fraction of the computation used to implement the demodulator. Analytical results offer a new way of looking at fractionally spaced equalizers and have some surprising practical implications. This article describes the data communications problem, the rationale for introducing fractionally spaced equalizers, new results, and their implications. We then apply those results to actual transmission channels.
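One of the surprising results alluded to above is that a fractionally spaced equalizer can achieve perfect (zero-forcing) equalization of an FIR channel with finitely many taps, provided the polyphase subchannels share no common zeros. A small numerical check of that claim, with an entirely hypothetical T/2-spaced channel:

```python
import numpy as np

# Hypothetical T/2-spaced channel, split into its two polyphase (baud-rate) parts.
h_even = np.array([1.0, 0.3])
h_odd = np.array([0.5, -0.2])   # no zeros in common with h_even

Lf = 2  # taps per subequalizer (assumed)

def conv_matrix(h, L):
    """Toeplitz convolution matrix so that conv(h, f) = M @ f."""
    n = len(h) + L - 1
    M = np.zeros((n, L))
    for j in range(L):
        M[j : j + len(h), j] = h
    return M

# Combined baud-rate response: c = H_even f_even + H_odd f_odd = H @ f
H = np.hstack([conv_matrix(h_even, Lf), conv_matrix(h_odd, Lf)])
target = np.zeros(H.shape[0])
target[0] = 1.0                  # unit impulse = perfect equalization

f, *_ = np.linalg.lstsq(H, target, rcond=None)
residual = np.linalg.norm(H @ f - target)
```

Because the stacked system has more unknowns than equations and the subchannels are coprime, the residual is (numerically) zero: a finite-length FSE inverts the channel exactly, which no finite baud-spaced equalizer can do for this channel.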


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1980

SHARF: An algorithm for adapting IIR digital filters

Michael G. Larimore; John R. Treichler; C.R. Johnson

The concept of adaptation in digital filtering has proven to be a powerful and versatile means of signal processing in applications where precise a priori filter design is impractical. Adaptive filters have traditionally been implemented with FIR structures, making their analysis fairly straightforward but leading to high computation cost in many cases of practical interest (e.g., sinusoid enhancement). This paper introduces a class of adaptive algorithms designed for use with IIR digital filters which offer a much reduced computational load for basically the same performance. These algorithms have their basis in the theory of hyperstability, a concept historically associated with the analysis of closed-loop nonlinear time-varying control systems. Exploiting this theory yields HARF, a hyperstable adaptive recursive filtering algorithm which has provable convergence properties. A simplified version of the algorithm, called SHARF, is then developed which retains provable convergence at low convergence rates and is well suited to real-time applications. In this paper both HARF and SHARF are described and some background on the meaning and utility of hyperstability is given. In addition, computer simulations are presented for two practical applications of IIR adaptive filters: noise and multipath cancellation.
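To make the adaptive IIR idea concrete: the sketch below adapts a first-order recursive model in output-error form, using a pseudolinear-regression update in the spirit of SHARF with a trivial smoothing filter. It is not the paper's exact algorithm, and the system, step size, and stability guard are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20000
x = rng.standard_normal(N)

# Unknown first-order IIR system: y[n] = a*y[n-1] + b*x[n]
a_true, b_true = 0.5, 1.0
y = np.zeros(N)
for n in range(1, N):
    y[n] = a_true * y[n - 1] + b_true * x[n]

# Adaptive IIR model regressed on its OWN past output (output-error form),
# so only two parameters are adapted regardless of how long the impulse
# response is -- the computational advantage over an FIR adaptive filter.
a_hat, b_hat = 0.0, 0.0
y_hat = 0.0
mu = 0.01                                 # step size (assumed)
for n in range(1, N):
    y_prev = y_hat
    y_hat = a_hat * y_prev + b_hat * x[n]
    e = y[n] - y_hat                      # output (prediction) error
    a_hat += mu * e * y_prev              # pseudolinear-regression updates
    b_hat += mu * e * x[n]
    a_hat = float(np.clip(a_hat, -0.99, 0.99))  # keep the model stable while adapting
```

For this system the strict-positive-real condition underlying the hyperstability analysis holds, so the parameters converge to the true values; the theory in the paper characterizes exactly when such convergence is provable.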


IEEE Signal Processing Magazine | 2008

Image processing for artist identification

C.R. Johnson; Ella Hendriks; Igor Berezhnoy; Eugene Brevdo; Shannon M. Hughes; Ingrid Daubechies; Jia Li; Eric O. Postma; James Ze Wang

A survey of the literature reveals that image processing tools aimed at supplementing the art historian's toolbox are currently in the earliest stages of development. To jump-start the development of such methods, the Van Gogh and Kröller-Müller museums in The Netherlands agreed to make a data set of 101 high-resolution gray-scale scans of paintings within their collections available to groups of image processing researchers from several different universities. This article describes the approaches to brushwork analysis and artist identification developed by three research groups within the framework of this data set.


IEEE Signal Processing Letters | 2002

A blind adaptive TEQ for multicarrier systems

Richard K. Martin; Jaiganesh Balakrishnan; William A. Sethares; C.R. Johnson

This letter exploits the cyclic prefix to create a blind, adaptive, globally convergent channel-shortening algorithm with complexity comparable to that of least mean squares (LMS). The cost function is related to that of the shortening signal-to-noise ratio (SSNR) solution of Melsa et al. (IEEE Trans. Commun., vol. 44, pp. 1662-1672, Dec. 1996), and simulations are provided to demonstrate the performance of the algorithm.


IEEE Transactions on Information Theory | 1998

Relationships between the constant modulus and Wiener receivers

Hanks H. Zeng; Lang Tong; C.R. Johnson

The Godard (1980) or the constant modulus algorithm (CMA) is an effective technique for blind receiver design in communications. However, due to the complexity of the constant modulus (CM) cost function, the performance of the CM receivers has primarily been evaluated using simulations. Theoretical analysis is typically based on either the noiseless case or approximations of the cost function. The following question, while resolvable numerically for a specific example, remains unanswered in a generic manner. In the presence of channel noise, where are the CM local minima and what are their mean-squared errors (MSE)? In this paper, a geometrical approach is presented that relates the CM to Wiener (or minimum MSE) receivers. Given the MSE and the intersymbol/user interference of a Wiener receiver, a sufficient condition is given for the existence of a CM local minimum in the neighborhood of the Wiener receiver. The MSE bounds on CM receiver performance are derived and shown to be tight in simulations. The analysis shows that, while in some cases the CM receiver performs almost as well as the (nonblind) Wiener receiver, it is also possible that, due to its blind nature, the CM receiver may perform considerably worse than a (nonblind) Wiener receiver.
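The Wiener (MMSE) receiver that the paper uses as its reference point is directly computable, and the CM cost can then be evaluated at that receiver. A sketch with an assumed channel, noise level, and equalizer length (all hypothetical; the paper's analysis is general):

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 20000, 8
s = rng.choice([-1.0, 1.0], N)           # BPSK (constant modulus) source
h = np.array([1.0, 0.5])                 # hypothetical channel
noise = 0.1 * rng.standard_normal(N + len(h) - 1)
x = (np.convolve(s, h) + noise)[:N]      # noisy received signal

# Wiener (MMSE) receiver for delay d: f = R^{-1} p, estimated from data.
d = 0
X = np.array([x[n - L + 1 : n + 1][::-1] for n in range(L, N)])
R = X.T @ X / len(X)                     # input autocorrelation estimate
p = X.T @ s[L - d : N - d] / len(X)      # cross-correlation with desired symbol
f = np.linalg.solve(R, p)

y = X @ f
mse = np.mean((y - s[L - d : N - d]) ** 2)   # Wiener receiver's MSE
cm_cost = np.mean((y ** 2 - 1) ** 2)         # CM cost evaluated at the Wiener receiver
```

The paper's geometric result is that, under a sufficient condition on this MSE and the residual interference, a CM local minimum exists in the neighborhood of such a Wiener receiver, with MSE bounds relating the two.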


IEEE Transactions on Signal Processing | 2005

Unification and evaluation of equalization structures and design algorithms for discrete multitone modulation systems

Richard K. Martin; Koen Vanbleu; Ming Ding; Geert Ysebaert; Milos Milosevic; Brian L. Evans; Marc Moonen; C.R. Johnson

To ease equalization in a multicarrier system, a cyclic prefix (CP) is typically inserted between successive symbols. When the channel order exceeds the CP length, equalization can be accomplished via a time-domain equalizer (TEQ), which is a finite impulse response (FIR) filter. The TEQ is placed in cascade with the channel to produce an effective shortened impulse response. Alternatively, a bank of equalizers can remove the interference tone-by-tone. This paper presents a unified treatment of equalizer designs for multicarrier receivers, with an emphasis on discrete multitone systems. It is shown that almost all equalizer designs share a common mathematical framework based on the maximization of a product of generalized Rayleigh quotients. This framework is used to give an overview of existing designs (including an extensive literature survey), to apply a unified notation, and to present various common strategies to obtain a solution. Moreover, the unification emphasizes the differences between the methods, enabling a comparison of their advantages and disadvantages. In addition, 16 different equalizer structures and design procedures are compared in terms of computational complexity and achievable bit rate using synthetic and measured data.
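The "maximization of a generalized Rayleigh quotient" at the heart of this framework can be illustrated with the classic shortening-SNR formulation: choose the TEQ to maximize the effective-response energy inside a target window relative to the energy outside it. A minimal sketch with a hypothetical channel and window size (one instance of the family the paper unifies, not its full treatment):

```python
import numpy as np

# Hypothetical long channel, to be shortened by a length-4 TEQ so that the
# effective (channel * TEQ) response fits inside a 3-tap window.
h = np.array([0.8, 1.0, 0.5, 0.25, 0.12, 0.06])
Lw, v = 4, 3
n_eff = len(h) + Lw - 1

# Convolution matrix: effective response c = H @ w
H = np.zeros((n_eff, Lw))
for j in range(Lw):
    H[j : j + len(h), j] = h

best, w_best, d_best = -np.inf, None, 0
for d in range(n_eff - v + 1):                      # search over window position
    H_win = H[d : d + v, :]                         # taps inside the window
    H_wall = np.delete(H, slice(d, d + v), axis=0)  # taps outside (the "wall")
    A = H_wall.T @ H_wall + 1e-12 * np.eye(Lw)      # tiny regularizer for safety
    B = H_win.T @ H_win
    # Maximize the generalized Rayleigh quotient (w'Bw)/(w'Aw):
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    i = int(np.argmax(vals.real))
    if vals[i].real > best:
        best, w_best, d_best = vals[i].real, vecs[:, i].real, d

c = H @ w_best
inside = np.sum(c[d_best : d_best + v] ** 2)    # energy kept in the window
outside = np.sum(c ** 2) - inside               # residual "wall" energy
```

The dominant generalized eigenvector gives the optimal TEQ for each candidate delay; the paper shows that most published designs reduce to variations on exactly this kind of quotient maximization.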


IEEE Transactions on Signal Processing | 2002

Exploiting sparsity in adaptive filters

Richard K. Martin; William A. Sethares; Robert C. Williamson; C.R. Johnson

This paper studies a class of algorithms called natural gradient (NG) algorithms. The least mean square (LMS) algorithm is derived within the NG framework, and a family of LMS variants that exploit sparsity is derived. This procedure is repeated for other algorithm families, such as the constant modulus algorithm (CMA) and decision-directed (DD) LMS. Mean squared error analysis, stability analysis, and convergence analysis of the family of sparse LMS algorithms are provided, and it is shown that if the system is sparse, then the new algorithms will converge faster for a given total asymptotic MSE. Simulations are provided to confirm the analysis. In addition, Bayesian priors matching the statistics of a database of real channels are given, and algorithms are derived that exploit these priors. Simulations using measured channels are used to show a realistic application of these algorithms.
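One member of such a sparsity-exploiting LMS family can be sketched by scaling each tap's step size with the tap's current magnitude, so active taps adapt quickly while near-zero taps stay quiet. This is a proportionate-style illustration of the idea, not necessarily the paper's exact algorithm; the system, step size, and floor constant are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 5000, 16
h = np.zeros(L)
h[2], h[9] = 1.0, -0.5                    # sparse unknown system: 2 active taps of 16
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N]                 # desired signal (noiseless for clarity)

w = np.zeros(L)
mu, eps = 0.05, 0.1                       # step size and floor constant (assumed)
for n in range(L - 1, N):
    u = x[n - L + 1 : n + 1][::-1]        # regressor, most recent sample first
    e = d[n] - w @ u                      # a-priori estimation error
    # Per-tap step scaled by |w_i| + eps: a positive diagonal scaling of the
    # LMS gradient, so it still descends the MSE surface but favors active taps.
    w += mu * e * u * (np.abs(w) + eps)

err = np.linalg.norm(w - h)               # final weight-error norm
```

Setting the scaling to a constant recovers plain LMS, which matches the paper's observation that LMS itself falls out of the natural gradient framework as a special case.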


IEEE Transactions on Signal Processing | 1992

On the (non)existence of undesirable equilibria of Godard blind equalizers

Zhi Ding; C.R. Johnson; Rodney A. Kennedy

Existing results in the literature have proved that particular blind equalization algorithms, including Godard algorithms, are globally convergent in an ideal and nonimplementable setting where a doubly infinite dimensional equalizer is available for adaptation. Contrary to popular conjectures, it is shown that implementable finite dimensional equalizers which attempt to approximate the ideal setting generally fail to have global convergence to acceptable equalizer parameter settings without the use of special remedial measures. A theory based on the channel convolution matrix nullspace is proposed to explain the failure of Godard algorithms for such practical blind equalization situations. This nullspace theory is supported by a simple example showing ill convergence of the Godard algorithm.

Collaboration

Dive into C.R. Johnson's collaboration with his top co-authors:

- William A. Sethares, University of Wisconsin-Madison
- Richard K. Martin, Air Force Institute of Technology
- Brian D. O. Anderson, Australian National University
- Zhi Ding, University of California
- Andrew G. Klein, Western Washington University
- Rodney A. Kennedy, Australian National University