Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Rodney Lynn Kirlin is active.

Publications


Featured research published by Rodney Lynn Kirlin.


IEEE Transactions on Signal Processing | 2003

A new time-delay estimation in multipath

Peyman Poor Moghaddam; Hamidreza Amindavar; Rodney Lynn Kirlin

This paper addresses a new approach to time-delay estimation based upon the autocorrelation estimator (AE). The primary aim of this paper is to estimate time-delays in a multipath environment in the absence of prior knowledge of the channel. The maximum likelihood estimator (MLE) and the AE are two computational tools used to determine the parameters of a multipath channel. The MLE requires some a priori knowledge of the source signal and the channel; the AE can be a blind estimator, but it is more suitable for a simple propagation model (one extra path). Under the multipath assumption we prove that if the observation sequence is zero-padded, the performance of the MLE exceeds that of the AE, though at the price of higher computational effort. The general autocorrelation estimator (GAE), based on the autocorrelation of the received signal, is introduced. The GAE is formulated as a blind estimator, and the pertinent Cramér-Rao lower bounds (CRLB) are derived. We also develop an algorithm to estimate the parameters of a multipath environment based on the new generalization. The performance of this algorithm is examined for different signal-to-noise scenarios. Our results show that the time-delays are estimated accurately by the proposed algorithm.
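The core autocorrelation idea can be sketched numerically (a toy two-path channel with a made-up delay and attenuation, not the paper's GAE): for a white source, the autocorrelation of the received signal peaks at the inter-path delay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-path channel: direct arrival plus one delayed, attenuated copy.
true_delay = 40                # delay of the extra path, in samples (made up)
alpha = 0.6                    # attenuation of the extra path (made up)
s = rng.standard_normal(4096)  # wideband source signal
x = s.copy()
x[true_delay:] += alpha * s[:-true_delay]
x += 0.1 * rng.standard_normal(x.size)            # additive noise

# The autocorrelation of the received signal peaks at the inter-path delay.
r = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags only
r[:5] = 0.0                                       # suppress the zero-lag peak
est_delay = int(np.argmax(r[:200]))               # search a plausible delay range
```

With a white source, the only off-zero autocorrelation peak of this two-path model sits at the inter-path delay, which is why the AE can be blind: no knowledge of the channel is needed.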


international conference on acoustics, speech, and signal processing | 2004

Array signal processing using GARCH noise modeling

Hadi Amiri; Hamidreza Amindavar; Rodney Lynn Kirlin

We propose a new method for modeling practical non-Gaussian and non-stationary noise in array signal processing. GARCH (generalized autoregressive conditional heteroscedasticity) models are introduced as a feasible model for the heavy-tailed probability density functions (PDFs) and time-varying variances of stochastic processes. We use the GARCH noise model in a maximum likelihood approach to the estimation of directions-of-arrival (DOAs). Our analysis exploits time-varying variance and spatially non-uniform noise in sensor array signal processing. We show through simulations that GARCH modeling is suitable for high-resolution source separation and noise suppression in a non-Gaussian environment.
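A minimal GARCH(1,1) noise simulator illustrates the two properties the abstract appeals to, heavy tails and time-varying variance (the parameters below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# GARCH(1,1) recursion: sigma2[t] = omega + a * n[t-1]**2 + b * sigma2[t-1].
# Illustrative parameters; a + b < 1 keeps the process stationary.
omega, a, b = 0.2, 0.2, 0.6
T = 20000
n = np.empty(T)
sigma2 = np.empty(T)
sigma2[0] = omega / (1 - a - b)                  # unconditional variance (= 1 here)
n[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega + a * n[t - 1] ** 2 + b * sigma2[t - 1]
    n[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Heavier-tailed than a Gaussian of the same variance: positive excess kurtosis.
excess_kurtosis = float(np.mean(n ** 4) / np.mean(n ** 2) ** 2 - 3.0)
```

Even though each innovation is conditionally Gaussian, the variance feedback produces a marginal distribution with positive excess kurtosis, i.e., heavier tails than a Gaussian of the same variance.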


international conference on acoustics, speech, and signal processing | 1996

Blind deconvolution of echosounder envelopes

D.A. Caughey; Rodney Lynn Kirlin

Ocean bottom classification performed using an installed technology base requires compensation for the existing technology's constraints, including the fact that the digitized signal is the envelope of the convolution of the bottom's impulse response and the source ping. We present a method by which the impulse response coefficients may be estimated given the source signal and the envelope of the received signal. This is accomplished by modelling the Hilbert transform definition of an envelope as a finite-length second-order Volterra kernel, and then performing a constrained optimization on the non-linear filter coefficients.
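The Hilbert-transform (analytic signal) definition of the envelope referred to above can be sketched directly with the FFT; this is the standard construction, not the paper's Volterra-kernel model of it:

```python
import numpy as np

def envelope(x):
    """Envelope via the analytic (pre-envelope) signal, computed with the FFT.

    Equivalent to |x + j * Hilbert(x)| for a real, even-length signal x.
    """
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0        # double the positive frequencies
    h[N // 2] = 1.0          # Nyquist bin (even N)
    return np.abs(np.fft.ifft(X * h))

# An amplitude-modulated tone: the envelope should recover the modulation.
t = np.arange(2048) / 2048.0
a = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)   # slow modulation (the true envelope)
x = a * np.cos(2 * np.pi * 200 * t)         # fast carrier
err = float(np.max(np.abs(envelope(x) - a)))
```

When the carrier frequency exceeds the modulation bandwidth, the magnitude of the analytic signal recovers the modulating envelope essentially exactly.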


sensor array and multichannel signal processing workshop | 2006

Maximum Likelihood Localization using GARCH Noise Models

Hadi Amiri; Hamidreza Amindavar; Rodney Lynn Kirlin

In this paper we propose a new source localization method using additive noise modeling based on generalized autoregressive conditional heteroscedasticity (GARCH) time-series. We use the GARCH noise model in the maximum likelihood (ML) sense for the estimation of the direction of arrival (DOA) of impinging narrowband sources. In actual applications, measurements of additive noise in natural environments show that the noise can be significantly non-Gaussian and non-stationary. GARCH time-series are well suited to the heavy-tailed probability density functions (PDFs) and time-varying variances of such stochastic noise processes. We examine the suitability of the proposed method using simulated and experimental data.


Digital Signal Processing | 1999

Blind System Identification Using Normalized Fourier Coefficient Gradient Vectors Obtained from Time-Frequency Entropy-Based Blind Clustering of Data Wavelets

B. Kaufhold; Rodney Lynn Kirlin; R.M. Dizaji

A method for the blind identification of spatially varying transfer functions found in various remote sensing applications such as medical imagery, radar, sonar, and seismology is described. The techniques proposed herein are based on model matching of Fourier coefficient sensitivity vectors of a known transfer function, which can be nonlinear in the parameters, with a set of eigenvectors obtained from data covariance matrices. One distinction between this technique and the usual channel subspace methods is that no FIR structure for the individual transfer functions is assumed. Instead, we assume that the frequency response as a function of the parameters is known, as is often the case in wave transmission problems. A channel identification procedure based on subspace matching is proposed. The procedure matches the eigenvectors of the signal deviation covariance matrix to a set of scaled and energy-normalized sensitivity vectors. For the case where neither the number of channels, the model parameters of each channel, nor the membership assignment of data traces to the channels is known, we propose a novel preliminary clustering process. By separating the data into clusters of modest variability, such that the measurements are linear in the parameters, we are able to deduce all of the above. The clustering is based on feature vectors obtained from a time-frequency entropy measure, also a novelty of our paper. To support the theory developed, we include parameter estimation results based on simulated data backscattered from a synthetic multi-layer structure.
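The subspace-matching step can be illustrated with a toy example (random energy-normalized model vectors stand in for the paper's Fourier-coefficient sensitivity vectors): the dominant eigenvector of the sample covariance of the data traces is matched to the dictionary entry with the largest normalized inner product.

```python
import numpy as np

rng = np.random.default_rng(3)

M, T = 16, 400
# A small dictionary of candidate energy-normalized model vectors; the true
# channel is (by construction) index 2. Purely illustrative random vectors.
dictionary = rng.standard_normal((5, M))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
v_true = dictionary[2]

# Data traces: random scalings of the true model vector plus noise.
A = rng.standard_normal(T)[:, None] * v_true + 0.2 * rng.standard_normal((T, M))
R = A.T @ A / T                      # sample covariance matrix
w, V = np.linalg.eigh(R)
e1 = V[:, -1]                        # dominant eigenvector (signal subspace)

scores = np.abs(dictionary @ e1)     # normalized inner products (up to sign)
best = int(np.argmax(scores))
```

The absolute value handles the sign ambiguity of the eigenvector; with well-separated model vectors the correct dictionary entry wins clearly.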


Journal of Visual Communication and Image Representation | 2015

Blind single-image super resolution based on compressive sensing

Naser Karimi; Hamidreza Amindavar; Rodney Lynn Kirlin; Ahad Rajabi

A novel framework is proposed for blind single-image super resolution based on compressive sensing. Due to the extremely ill-posed nature of the problem, just a few works have been proposed. The proposed method is one of the first works that considers general PSFs. The fundamental idea is to use sparsity as a regularizer in both the image and blur domains. The efficiency of the proposed method is competitive with methods that use multiple LR images. Blind super resolution is an interesting area in image processing that can restore a high resolution (HR) image without requiring prior information about the volatile point spread function (PSF). In this paper, a novel framework is proposed for the blind single-image super resolution (SISR) problem based on the compressive sensing (CS) framework, one of the first works that considers general PSFs. The fundamental idea in the proposed approach is to use sparsity on a known sparse transform domain as a powerful regularizer in both the image and blur domains. Therefore, a new cost function with respect to the unknown HR image patch and PSF kernel is presented, and minimization is performed over two subproblems that are modeled similarly to CS. Simulation results demonstrate the effectiveness of the proposed algorithm, which is competitive with methods that use multiple LR images to achieve a single HR image.


international conference on acoustics, speech, and signal processing | 2011

Construction of positive time-frequency distributions using dynamic copula

Showan Ashrafi; Hamidreza Amindavar; James A. Ritcey; Rodney Lynn Kirlin

In this paper we propose a novel approach, termed the dynamic copula time-frequency distribution (DCTFD), for the construction of positive time-frequency distributions (PTFDs). The DCTFD models the dependency, i.e., correlation, between time and frequency by a time-varying dependence parameter in the copula function, and captures time-frequency dependency/correlation more accurately than the static copula TFD (CTFD). In addition to presenting the underlying theory of the approach, a set of simulations is provided to demonstrate the advantages over the CTFD and other classical methods, namely bivariate distributions.


Archive | 2005

Optimization strategies for sparseness- and continuity-enhanced imaging: Theory

Felix J. Herrmann; Peyman P. Moghaddam; Rodney Lynn Kirlin

Two complementary solution strategies to the least-squares migration problem with sparseness & continuity constraints are proposed. The applied formalism explores the sparseness of curvelets on the reflectivity and their invariance under the demigration-migration operator. Sparseness is enhanced by (approximately) minimizing a (weighted) ℓ1-norm on the curvelet coefficients. Continuity along imaged reflectors is brought out by minimizing the anisotropic diffusion or total variation norm, which penalizes variations along and in between reflectors. A brief sketch of the theory is provided as well as a number of synthetic examples. Technical details on the implementation of the optimization strategies are deferred to an accompanying paper on the implementation.

Introduction

Least-squares migration and migration deconvolution are topics that have received a recent flare of interest [7, 8]. This interest is for good reason, because inverting for the normal operator (the demigration-migration operator) corrects many of the amplitude artifacts related to acquisition and illumination imprints. However, the downside to this approach is that unregularized least-squares tends to fit noise and smear the energy. In addition, artifacts may be created due to imperfections in the model and the possible null space of the normal operator [11]. Regularization by minimizing an energy functional on the reflectivity can alleviate some of these problems, but may come at the expense of resolution. Non-linear functionals such as ℓ1 minimization partly deal with the resolution problem but ignore bandwidth limitation and continuity along the reflectors [12]. Independent of the above efforts, attempts have been made to enhance the continuity along imaged reflectors by applying anisotropic diffusion to the image [4]. The beauty of this approach is that it brings out the continuity along the reflectors. However, the way this method is applied now leaves room for improvement regarding (i) the loss of resolution; and (ii) the non-integral and non-data-constrained aspects, i.e., the method is not constrained by the data, which may lead to unnatural results and 'overfiltering'. In this paper, we make a first attempt to bring these techniques together under the umbrella of optimization theory and modern image processing with basis functions such as curvelet frames [3, 2]. Our approach is designed to (i) deal with substantial amounts of noise (SNR ≤ 0); (ii) use the optimal (sparse & local) representation properties of curvelets for reflectivity; (iii) exploit the near-diagonalization of the normal operator by curvelets [2]; and (iv) use non-linear estimation, norm minimization and optimization techniques to enhance the continuity along reflectors [5].

Optimization strategies for seismic imaging

After linearization, the forward model has the following form

d = Km + n, (1)

where K is a demigration operator given by the adjoint of the migration operator; m the model with the reflectivity; and n white Gaussian noise with standard deviation σn (colored noise can be accounted for). Irrespective of the type of migration operator (our discussion is independent of the type of migration operator, and we allow for post-stack Kirchhoff as well as 'wave-equation' operators), two complementary optimization strategies are being investigated in our group. These strategies are designed to exploit the sparseness & invariance properties of curvelet frames in conjunction with the enhancement of the overall continuity along reflectors.

More specifically, the first method [6, 5] preconditions the migration operator, yielding a reformulation of the normal equations into a standard denoising problem

F∗d = F∗Fx + F∗n, with F∗F ≈ Id, (2)
y = x + e, (3)

with ∗ the adjoint; F = KC∗Γ−1 the curvelet-frame preconditioned migration operator, with Γ = diag(CK∗KC∗) ≈ CK∗KC∗ by virtue of Theorem 1.1 of [2], which states that Green's functions are nearly diagonalized by curvelets; C, C∗ the curvelet transform and its transpose; x the preconditioned model, related to the reflectivity according to m = CΓ−1x; and e the noise term, close to white Gaussian noise (by virtue of the preconditioning). Applying soft thresholding to Eq. 2, with a threshold proportional to the standard deviation σn of the noise on the data, gives an estimate for the preconditioned model with some sparseness [see for details
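The soft-thresholding step of the first method can be sketched in isolation (identity transform in place of the curvelet frame; all numbers are illustrative): coefficients below a threshold proportional to the noise standard deviation are shrunk to zero, recovering a sparse model from y = x + e.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(y, tau):
    """Soft thresholding: shrink toward zero, zero out |y| <= tau."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

# Denoising model y = x + e with sparse x and white Gaussian e.
N = 10000
x = np.zeros(N)
idx = rng.choice(N, size=50, replace=False)
x[idx] = rng.uniform(3, 6, size=50) * rng.choice([-1, 1], size=50)
sigma = 1.0
y = x + sigma * rng.standard_normal(N)

# Threshold proportional to the noise standard deviation, as in the text.
x_hat = soft_threshold(y, 3 * sigma)
snr_gain = float(np.sum((y - x) ** 2) / np.sum((x_hat - x) ** 2))
```

Soft thresholding is the proximal operator of the ℓ1-norm, which is why it enhances sparseness in the estimated coefficients while suppressing most of the noise energy.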


sensor array and multichannel signal processing workshop | 2000

Time-frequency matched field processing for time varying sources

R.M. Dizaji; N.B. Chapman; Rodney Lynn Kirlin

Deterministic broadband time-varying pulses such as chirp signals are widely used in seismic applications to determine the layering structure of the earth. Various broadband matched field processors (MFPs) already developed for use in inversion rely only on the frequency distribution, so their performance fails in the presence of similar, powerful broadband correlated noise. Because chirps and other deterministic time-varying signals have distinct signatures in time-frequency space, we introduce a cross-relation-based MFP in time-frequency space that incorporates these signatures in the processing to counter the effect of broadband correlated noise in the received data. Additionally, white noise can be removed by using a cross-cross relation (CCR) time-frequency (TF)-MFP that contains only cross components of the received signals in its structure. We demonstrate the advantages of this process over the conventional MFPs to date.


Digital Signal Processing | 1991

Memoirs of a signal analyst

Rodney Lynn Kirlin

It was a dark and stormy late afternoon as we drove on East Marginal Way past Boeing for the first time. It was late January 1965, and all afternoons in Seattle at that time of year are like that. It would seem foreboding, but it was actually exciting. My wife and I were both 24 years old and had been married 5 months. At Utah State I had just finished the first two courses toward my Ph.D. I had also finished a Bachelor’s and Master’s in Electrical Engineering at University of Wyoming, and had spent a year at Martin Co. (now Martin Marietta) in Denver in a data transmission set design group and on another project having to do with EM pulse simulation and analysis. It was pretty good work and I learned a lot about circuit design and reliability (never having had a formal course in probability . . . who wanted to learn how to draw black balls out of urns?). In our data transmission design group we were given data communications system design specs from somebody somewhere else in the company. I wanted to do what those guys did. Like most young engineers at the time, I had left school for a good paying job in an exciting area of the space program. (They leave school for the same reasons today; the new marriage was not economically compatible with graduate school.) Boeing generously gave us 3 days per diem to find a place to live. (A friend of mine there bought his first house after having seen it once at night in the rain and without his wife; it was to become a mistake, of course.) The work that I would do at Boeing for the next 2 years was called advanced space communications systems design. Through this work and some of the coursework I was going to take part time at University of Washington, I would acquire some of the tools that I would use years later, when some of what I had done at Boeing would come to be called signal processing. What sorts of things? 
Well, a lot of phase lock loop theory, some coding theory, statistical performance analysis methods, nonlinearity analysis, image coding, speech coding (linear prediction was just in conception at the time), pseudo-random noise and code generators, communication channel loss budgets, analytic signal (pre-envelope) operations, Wiener optimization methods, antenna gains, and noise figures. Additive noise can linearize nonlinearities. Thank you Boeing for all that and 18 quarter hours at the University of Washington, and more, but after almost 2 years, back to Utah State to finish the doctoral program (my supervisor at Boeing had said, “I thought you were one of that type”). It was 1973 while I was interviewing at Honeywell Marine that the chief engineer referred to their man who did this work as a signal analyst. From that point on that’s what I preferred to call myself, a signal analyst. I had never heard of this particular title before. The same day I visited the Applied Physics Lab at University of Washington. They too had a signal analyst on staff. Signal analysts often held positions as staff engineers, not working on single projects, but acting as in-house consultants to many projects. (I always liked the idea of consulting. It sounded like you knew something others thought might be valuable.) I recall learning two other important things that day. The chief at Honeywell mentioned that most great discoveries were made at poor facilities in an underequipped laboratory. I put that in the same file with something I learned years later, that most work is done by people who don’t feel well. The other important information I picked up was that APL was interested in something called maximum entropy spectral estimation. They were doing acoustic signal processing for the Navy then, and they still do. After returning from Honeywell to the University of Wyoming (I had been teaching there since 1969), I

Collaboration


Dive into Rodney Lynn Kirlin's collaboration.

Top Co-Authors


Felix J. Herrmann

Georgia Institute of Technology


B. Kaufhold

University of Victoria


Peyman P. Moghaddam

University of British Columbia
