Odile Macchi
Centre national de la recherche scientifique
Publications
Featured research published by Odile Macchi.
IEEE Transactions on Information Theory | 1984
Odile Macchi; Eweda Eweda
A theoretical analysis of self-adaptive equalization for data transmission is carried out, starting from known convergence results for the corresponding trained adaptive filter. The development relies on a suitable ergodicity model for the sequence of observations at the output of the transmission channel. Thanks to the boundedness of the decision function used for data recovery, it can be proved that the algorithm is bounded. Strong convergence results can be reached when a perfect (noiseless) equalizer exists: the algorithm converges to it if the eye pattern is initially open. Otherwise, convergence may take place towards certain other stationary points of the algorithm, for which domains of attraction have been defined. Some of them result in a poor error rate. The case of a noisy channel exhibits limit points for the algorithm that differ from those of the classical (trained) algorithm; the stronger the noise, the greater the difference. One of the principal results of this study is the proof of the stability of the usual decision feedback algorithms once the learning period is over.
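The decision-directed adaptation analyzed here can be made concrete with a short sketch: a transversal equalizer adapted by LMS in which the error is formed with the bounded decision sign(y) in place of a training symbol, so adaptation can proceed once the eye is open. The channel, step size, and filter length below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Hypothetical example: decision-directed LMS equalization of a binary signal
# over a mild-ISI channel (eye initially open); parameters are illustrative.
rng = np.random.default_rng(0)
n, taps, mu = 5000, 11, 0.01
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([1.0, 0.3, -0.1])                  # mild intersymbol interference
received = np.convolve(symbols, channel)[:n] + 0.01 * rng.standard_normal(n)

w = np.zeros(taps); w[taps // 2] = 1.0                # center-spike initialization
delay, errors = taps // 2 + 1, 0
for k in range(taps, n):
    x = received[k - taps:k][::-1]                    # sliding-window regressor
    y = w @ x
    decision = 1.0 if y >= 0 else -1.0                # bounded decision function
    w += mu * (decision - y) * x                      # decision-directed LMS update
    errors += decision != symbols[k - delay]
print("symbol decision errors over the run:", errors)
```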
IEEE Transactions on Communications | 1998
Joel Labat; Odile Macchi; Christophe Laot
This paper presents a novel unsupervised (blind) adaptive decision feedback equalizer (DFE). It can be thought of as the cascade of four devices, whose main components are a purely recursive filter ℛ and a transversal filter 𝒯. Its major feature is the ability to deal with severe, quickly time-varying channels, unlike the conventional adaptive DFE. This result is obtained by allowing the new equalizer to modify, in a reversible way, both its structure and its adaptation according to some measure of performance such as the mean-square error (MSE). In the starting mode, ℛ comes first and whitens its own output by means of a prediction principle, while 𝒯 removes the remaining intersymbol interference (ISI) thanks to the Godard (1980) (or Shalvi-Weinstein (1990)) algorithm. In the tracking mode, the equalizer becomes the classical DFE controlled by the decision-directed (DD) least-mean-square (LMS) algorithm. With the same computational complexity, the new unsupervised equalizer exhibits the same convergence speed, steady-state MSE, and bit-error rate (BER) as the trained conventional DFE, but it requires no training. It has been implemented on a digital signal processor (DSP) and tested on underwater communication signals; its performance is convincing.
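As a concrete illustration of the blind update used in the starting mode, here is a minimal sketch of the Godard (constant-modulus) adaptation of a plain transversal equalizer. The channel, constellation, and step size are assumed for the example; the paper's full four-device structure is not reproduced.

```python
import numpy as np

# Hypothetical example of the Godard (p = 2, constant-modulus) blind update
# applied to a transversal equalizer; parameters are illustrative only.
rng = np.random.default_rng(1)
n, taps, mu = 20000, 15, 5e-4
symbols = rng.choice([-1.0, 1.0], size=n)             # binary symbols for simplicity
channel = np.array([1.0, 0.4, -0.2])
received = np.convolve(symbols, channel)[:n] + 0.01 * rng.standard_normal(n)

R2 = np.mean(symbols ** 4) / np.mean(symbols ** 2)    # Godard dispersion constant
w = np.zeros(taps); w[taps // 2] = 1.0
for k in range(taps, n):
    x = received[k - taps:k][::-1]
    y = w @ x
    w += mu * y * (R2 - y ** 2) * x                   # blind update, no training symbols

combined = np.convolve(w, channel)                    # overall channel + equalizer response
isi = (np.sum(combined ** 2) - np.max(combined ** 2)) / np.max(combined ** 2)
print(f"residual ISI after blind adaptation: {isi:.3f}")
```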
IEEE Transactions on Automatic Control | 1983
Odile Macchi; Eweda Eweda
The convergence of an adaptive filtering vector is studied when it is governed by the mean-square-error gradient algorithm with constant step size. We consider the mean-square deviation between the optimal filter and the actual one during the steady state. This quantity is known to be essentially proportional to the step size of the algorithm. However, previous analyses were either heuristic or based upon the assumption that successive observations are independent, which is far from realistic. Actually, in most applications, two successive observation vectors share a large number of components and thus are strongly correlated. In this work, we deal with the case of correlated observations and prove that the mean-square deviation is actually of the same order as (or smaller than) the step size of the algorithm. This result is proved without any boundedness or barrier assumption for the algorithm, as was previously done in the literature to ensure nondivergence. Our assumptions reduce to a finite strong-memory assumption and a finite-moments assumption for the observations. They are satisfied in a very wide class of practical applications.
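A small numerical experiment makes the main statement concrete: with correlated (sliding-window) observations, the steady-state mean-square deviation of the constant-step gradient (LMS) algorithm scales roughly linearly with the step size. The system, noise level, and step sizes below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical illustration: the steady-state mean-square deviation (MSD) of
# constant-step LMS grows roughly in proportion to the step size mu, even
# though successive regressors overlap and are therefore strongly correlated.
rng = np.random.default_rng(2)
n, taps = 50000, 8
h_opt = rng.standard_normal(taps)                     # optimal filter to identify
x_seq = rng.standard_normal(n)
noise = 0.1 * rng.standard_normal(n)

for mu in (0.001, 0.01, 0.05):
    w = np.zeros(taps)
    msd = []
    for k in range(taps, n):
        x = x_seq[k - taps:k][::-1]                   # sliding window: correlated observations
        d = h_opt @ x + noise[k]
        w += mu * (d - w @ x) * x                     # constant-step gradient (LMS)
        if k > n - 5000:
            msd.append(np.sum((w - h_opt) ** 2))
    print(f"mu = {mu:<6} steady-state MSD ~ {np.mean(msd):.2e}")
```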
IEEE Transactions on Signal Processing | 1991
Neil J. Bershad; Odile Macchi
The authors study the ability of the exponentially weighted recursive least squares (RLS) algorithm to track a complex chirped exponential signal buried in additive white Gaussian noise (power P_n). The signal is a sinusoid whose frequency is drifting at a constant rate Ψ. It is recovered using an M-tap adaptive predictor. Five principal aspects of the study are presented: the methodology of the analysis; proof of the quasi-deterministic nature of the data-covariance estimate R(k); a new analysis of RLS for an inverse system modeling problem; a new analysis of RLS for a deterministic time-varying model for the optimum filter; and an evaluation of the residual output mean-square error (MSE) resulting from the nonoptimality of the adaptive predictor (the misadjustment) in terms of the forgetting rate β of the RLS algorithm. It is shown that the misadjustment is dominated by a lag term of order β^(-2) and a noise term of order β. Thus, a value β_opt exists which yields a minimum misadjustment. It is proved that β_opt = ((M+1)ρΨ²)^(1/3), and the minimum misadjustment is equal to (3/4)P_n(M+1)β_opt, where ρ is the input signal-to-noise ratio (SNR).
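The tracking setup can be sketched as an exponentially weighted RLS one-step predictor applied to a noisy chirped complex exponential. The drift rate, forgetting factor, and predictor length below are illustrative assumptions, and lam plays the role of the exponential weight (related to the forgetting rate of the paper).

```python
import numpy as np

# Hypothetical example: exponentially weighted RLS used as an M-tap one-step
# predictor of a chirped complex exponential in white noise; parameters assumed.
rng = np.random.default_rng(3)
n, M, lam = 4000, 6, 0.98
k = np.arange(n)
psi = 1e-5                                            # constant frequency drift rate
x = np.exp(1j * (0.2 * k + 0.5 * psi * k ** 2))       # chirped complex sinusoid
x = x + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

w = np.zeros(M, dtype=complex)
P = 1e3 * np.eye(M, dtype=complex)                    # inverse data-covariance estimate
sq_err = []
for t in range(M, n):
    u = x[t - M:t][::-1]                              # past M samples as predictor input
    e = x[t] - np.conj(w) @ u                         # a priori prediction error
    g = P @ u / (lam + np.conj(u) @ P @ u)            # RLS gain vector
    w = w + g * np.conj(e)
    P = (P - np.outer(g, np.conj(u) @ P)) / lam       # exponentially weighted update
    sq_err.append(abs(e) ** 2)
print("steady-state prediction MSE:", np.mean(sq_err[-1000:]))
```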
Automatica | 1985
Eweda Eweda; Odile Macchi
Adaptive filtering with the error-gradient algorithm and constant step size is analyzed for a deterministic, time-varying optimum filtering vector. The unrealistic assumption of independent observations is replaced by a bounded-memory model, largely justifiable in applications. The mean-square tracking deviation (MSD) between the optimum vector and the algorithm output is then proved to include two contributions: the stationary-mode error, characteristic of convergence accuracy, which is proportional to the step size; and the transient-mode error, reflecting the speed of tracking, which is proportional to the squared ratio of the maximum increment of the optimum estimator to the step size. This result agrees with the common intuition that there exists an optimum step size which compromises between convergence accuracy and tracking speed.
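The trade-off can be seen numerically: with a slowly drifting optimum filter, the tracking MSD is dominated by the lag term at small step sizes and by the fluctuation term at large ones, so an intermediate step size is best. The linear drift model and all parameters below are assumptions chosen for illustration, not the paper's exact model.

```python
import numpy as np

# Hypothetical illustration of the accuracy/tracking compromise: the tracking
# MSD is large for both very small and very large step sizes, hence an
# intermediate optimum step size exists. The linear drift model is assumed.
rng = np.random.default_rng(4)
n, taps = 40000, 4
x_seq = rng.standard_normal(n)
noise = 0.1 * rng.standard_normal(n)
h0 = rng.standard_normal(taps)
direction = np.ones(taps) / np.sqrt(taps)             # drift direction of the optimum
delta = 1e-5                                          # per-step increment of the optimum

for mu in (2e-4, 2e-3, 2e-2):
    w = h0.copy()                                     # start at the initial optimum
    msd = []
    for k in range(taps, n):
        h = h0 + delta * k * direction                # slowly time-varying optimum vector
        x = x_seq[k - taps:k][::-1]
        d = h @ x + noise[k]
        w += mu * (d - w @ x) * x                     # constant-step gradient algorithm
        if k > n // 2:
            msd.append(np.sum((w - h) ** 2))
    print(f"mu = {mu:<7} tracking MSD ~ {np.mean(msd):.2e}")
```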
IEEE Transactions on Information Theory | 1981
Odile Macchi; Louis L. Scharf
The problem of simultaneously estimating phase and decoding data symbols from baseband data is posed. The phase sequence is assumed to be a random sequence on the circle, and the symbols are assumed to be equally likely symbols transmitted over a perfectly equalized channel. A dynamic programming algorithm (Viterbi algorithm) is derived for decoding a maximum a posteriori (MAP) phase-symbol sequence on a finite-dimensional phase-symbol trellis. A new and interesting principle of optimality for simultaneously estimating phase and decoding phase-amplitude coded symbols leads to an efficient two-step procedure for decoding phase-symbol sequences. Simulation results for binary, 8-ary phase shift keyed (PSK), and 16-quadrature amplitude shift keyed (QASK) symbol sets transmitted over random-walk and sinusoidal jitter channels are presented and compared with results one may obtain with a decision-directed algorithm or with the binary Viterbi algorithm introduced by Ungerboeck. When phase fluctuations are severe and occasional large phase fluctuations exist, MAP phase-symbol sequence decoding on circles is superior to Ungerboeck's technique, which in turn is superior to decision-directed techniques.
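A toy version of the dynamic programming idea can be sketched for binary symbols: the phase random walk is quantized onto a grid of states and a Viterbi search selects the MAP phase path, deciding the symbol on each branch. The discretization, priors, and all parameters below are assumptions; the paper's trellis construction and two-step procedure are not reproduced.

```python
import numpy as np

# Hypothetical sketch: joint phase/symbol decoding of binary symbols by a
# Viterbi search over a quantized phase trellis (random-walk phase model).
rng = np.random.default_rng(5)
n, Q, sig_phase, sig_noise = 400, 32, 0.05, 0.2
symbols = rng.choice([-1.0, 1.0], size=n)
phase = np.cumsum(sig_phase * rng.standard_normal(n))            # random walk on the circle
r = symbols * np.exp(1j * phase) + sig_noise * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

grid = 2 * np.pi * np.arange(Q) / Q                              # quantized phase states
cands = np.array([-1.0, 1.0])

def branch_cost(obs):
    # negative log-likelihood per phase state, minimized over the two symbols
    c = np.abs(obs - cands[:, None] * np.exp(1j * grid)) ** 2 / sig_noise ** 2
    return c.min(axis=0), cands[c.argmin(axis=0)]

diff = np.angle(np.exp(1j * (grid[:, None] - grid[None, :])))    # wrapped phase increments
trans = diff ** 2 / (2 * sig_phase ** 2)                         # random-walk prior penalty

cost, _ = branch_cost(r[0])
back, best_sym = [], []
for k in range(1, n):
    bc, bs = branch_cost(r[k])
    total = cost[:, None] + trans + bc[None, :]                  # previous state -> new state
    back.append(total.argmin(axis=0))
    cost = total.min(axis=0)
    best_sym.append(bs)

state = int(cost.argmin())                                       # trace back the MAP phase path
decided = []
for k in range(n - 2, -1, -1):
    decided.append(best_sym[k][state])
    state = back[k][state]
decided = np.array(decided[::-1])

errors = int(np.sum(decided != symbols[1:]))
errors = min(errors, len(decided) - errors)                      # resolve the binary pi ambiguity
print("symbol errors after MAP phase-symbol decoding:", errors)
```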
International Conference on Acoustics, Speech, and Signal Processing | 1994
Eric Moreau; Odile Macchi
In order to perform separation of a mixture of sources, an interesting approach is to maximize a contrast function, e.g., the contrast of Comon. This paper brings two novel contributions: (i) a novel algorithm is proposed in order to adaptively maximize Comon's contrast; however, it requires a prewhitening preprocessing operation, which is awkward when the mixture is ill-conditioned; (ii) a new criterion is defined that is free of the prewhitening step. In the case of two sources it can be proved that this criterion is a contrast. This contrast can also be adaptively maximized and has the additional advantage of not requiring identical signs for the fourth-order cumulants of the sources. The performance of these two adaptive algorithms is demonstrated using a new performance index.
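For two sources, the prewhitening-plus-adaptation approach of contribution (i) can be sketched as follows: after whitening, a single rotation angle is adapted by a stochastic gradient that minimizes E[y1^4 + y2^4], which is a valid contrast when both sources are sub-Gaussian. The sources, mixing matrix, and step size are assumptions; this is not the authors' exact algorithm, nor the prewhitening-free criterion (ii).

```python
import numpy as np

# Hypothetical sketch: adaptive contrast maximization for two whitened mixtures.
# A rotation angle is adapted to minimize E[y1^4 + y2^4] (a contrast when both
# sources are sub-Gaussian); the prewhitening step is what criterion (ii) avoids.
rng = np.random.default_rng(6)
n = 50000
s = np.vstack([np.sign(rng.standard_normal(n)),                  # binary source
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])         # uniform source
A = np.array([[1.0, 0.6], [0.4, 1.0]])                           # unknown mixing matrix
x = A @ s

d, E = np.linalg.eigh(np.cov(x))                                 # prewhitening
W = (E / np.sqrt(d)) @ E.T
z = W @ x

theta, mu = 0.1, 1e-3
for k in range(n):
    c, si = np.cos(theta), np.sin(theta)
    y1 = c * z[0, k] + si * z[1, k]
    y2 = -si * z[0, k] + c * z[1, k]
    theta -= mu * 4.0 * y1 * y2 * (y1 ** 2 - y2 ** 2)            # stochastic gradient on E[y1^4 + y2^4]

R = np.array([[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]])
print("global matrix (ideally a scaled permutation):")
print(np.round(R @ W @ A, 3))
```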
IEEE Transactions on Neural Networks | 1998
Zied Malouche; Odile Macchi
Extracting one specific component of a linear mixture means isolating it on the basis of observing several mixtures of all the components. This is done in an unsupervised way, based on the sole knowledge that the components are independent. The classical solution is independent component analysis, which extracts all the components at the same time. In this paper, given at least as many sensors as components, we propose a simpler approach which extracts each component independently with one neuron. The weights of the neuron are optimized by minimizing an even polynomial of its output. The corresponding adaptive algorithm is an extended anti-Hebbian rule with very low complexity. It can extract any specific negative-kurtosis component. Global stability of the algorithm is investigated as well as steady-state fluctuations. The influence of additive noise is also considered. These theoretical results are thoroughly confirmed by computer simulations.
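The single-neuron idea can be illustrated with a minimal sketch. It is not the paper's exact polynomial or update rule: here the mixtures are first whitened and the weight vector of one linear neuron is adapted by an anti-Hebbian-like stochastic rule that minimizes E[y^4] under a unit-norm constraint, which drives the output toward a negative-kurtosis component (which one depends on the initialization).

```python
import numpy as np

# Hypothetical sketch (not the paper's exact polynomial or rule): one linear
# neuron extracts a negative-kurtosis component from whitened mixtures via a
# stochastic anti-Hebbian-like update minimizing E[y^4] under ||w|| = 1.
rng = np.random.default_rng(7)
n = 50000
s = np.vstack([np.sign(rng.standard_normal(n)),                  # kurtosis -2
               rng.standard_normal(n),                           # Gaussian (kurtosis 0)
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])         # kurtosis -1.2
A = rng.standard_normal((3, 3)) + 2 * np.eye(3)                  # 3 sensors, 3 components
x = A @ s

d, E = np.linalg.eigh(np.cov(x))                                 # whitening
z = (E / np.sqrt(d)) @ E.T @ x

w = rng.standard_normal(3)
w /= np.linalg.norm(w)
mu = 1e-3
for k in range(n):
    y = w @ z[:, k]
    w -= mu * (y ** 3) * z[:, k]                                 # anti-Hebbian-like step
    w /= np.linalg.norm(w)                                       # unit output power constraint

y_all = w @ z
print("correlation of the neuron output with each source")
print("(one negative-kurtosis source should dominate):")
print(np.round([abs(np.corrcoef(y_all, s[i])[0, 1]) for i in range(3)], 2))
```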
International Journal of Circuit Theory and Applications | 1992
Sylvie Marcos; Odile Macchi; Christophe Vignat; Gérard Dreyfus; L. Personnaz; Pierre Roussel-Ragot
In this paper we present in a unified framework the gradient algorithms employed in the adaptation of linear time filters (TF) and the supervised training of (non-linear) neural networks (NN). The optimality criteria used to optimize the parameters H of the filter or network are the least squares (LS) and least mean squares (LMS) in both contexts. They respectively minimize the total or the mean squares of the error e(k) between an (output) reference sequence d(k) and the actual system output y(k) corresponding to the input X(k). Minimization is performed iteratively by a gradient algorithm. The index k in (TF) is time and it runs indefinitely. Thus iterations start as soon as reception of X(k) begins. The recursive algorithm for the adaptation H(k − 1) → H(k) of the parameters is implemented each time a new input X(k) is observed. When training a (NN) with a finite number of examples, the index k denotes the example and it is upper-bounded. Iterative (block) algorithms wait until all K examples are received to begin updating the network. However, K being frequently very large, recursive algorithms are also often preferred in (NN) training, but they raise the question of ordering the examples X(k). Except in the specific case of a transversal filter, there is no general recursive technique for optimizing the LS criterion. However, X(k) is normally a random stationary sequence; thus LS and LMS are equivalent when k becomes large. Moreover, the LMS criterion can always be minimized recursively with the help of the stochastic LMS gradient algorithm, which has low computational complexity. In (TF), X(k) is a sliding window of (time) samples, whereas in the supervised training of (NN) with arbitrarily ordered examples, X(k − 1) and X(k) have nothing to do with each other. When this (major) difference is removed by feeding a time signal to the network input, the recursive algorithms recently developed for (NN) training become similar to those of adaptive filtering. In this context the present paper displays the similarities between adaptive cascaded linear filters and trained multilayer networks. It is also shown that there is a close similarity between adaptive recursive filters and neural networks including feedback loops. The classical filtering approach is to evaluate the gradient by 'forward propagation', whereas the most popular (NN) training method uses a gradient backward propagation method. We show that when a linear (TF) problem is implemented by an (NN), the two approaches are equivalent. However, the backward method can be used for more general (non-linear) filtering problems. Conversely, new insights can be drawn in the (NN) context by the use of a gradient forward computation. The advantage of the (NN) framework, and in particular of the gradient backward propagation approach, evidently lies in its much larger spectrum of applications than (TF), since (i) the inputs are arbitrary and (ii) the (NN) can perform non-linear (TF).
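The equivalence claimed for the linear case is easy to exhibit: for a single linear neuron fed a sliding window of a time signal, the gradient obtained by backward propagation of the squared error is exactly the LMS update of the adaptive transversal filter. The system and parameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of the (TF)/(NN) correspondence: for a one-layer
# linear "network" on a sliding-window input, backpropagation of the squared
# error reproduces the LMS update of an adaptive transversal filter exactly.
rng = np.random.default_rng(8)
n, taps, mu = 2000, 5, 0.01
x_sig = rng.standard_normal(n)
h_true = np.array([0.5, -0.3, 0.2, 0.1, -0.05])        # system to be identified
d_sig = np.convolve(x_sig, h_true)[:n] + 0.01 * rng.standard_normal(n)

w_filter = np.zeros(taps)                              # adaptive transversal filter view
w_neuron = np.zeros(taps)                              # one-layer linear network view
for k in range(taps, n):
    X = x_sig[k - taps + 1:k + 1][::-1]                # sliding-window regressor X(k)
    d = d_sig[k]
    # adaptive filtering view: LMS update
    e = d - w_filter @ X
    w_filter += mu * e * X
    # neural-network view: backward propagation through y = w . X
    y = w_neuron @ X
    grad_w = 2 * (y - d) * X                           # chain rule (backprop) gradient
    w_neuron -= (mu / 2) * grad_w                      # same step size: identical update
print("max difference between the two weight tracks:",
      np.max(np.abs(w_filter - w_neuron)))
```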
IEEE Transactions on Signal Processing | 1998
Eric Moreau; Odile Macchi
For Part I, see ibid., vol. 45, pp. 918-926 (1997). Macchi and Moreau (1997) investigated the stability and convergence of a new direct linear adaptive neural network intended for separating independent sources when it is controlled by the well-known Herault-Jutten algorithm. In this second part, we study the corresponding feedback adaptive network. For two globally sub-Gaussian sources, the network achieves quasi-convergence in the mean-square sense toward a separating state. A novel mixed adaptive direct/feedback network that is free of implementation constraints is investigated from the points of view of stability and convergence and compared with the direct and feedback networks. The three networks have the same (low) complexity. The mixed one achieves the best trade-off between convergence speed and steady-state separation performance, independently of the specific mixture.
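A minimal sketch of the kind of feedback separating network studied here, for two sources, adapted by a Herault-Jutten-type rule: the outputs are obtained by inverting the feedback loop and the two cross-coupling weights are updated with odd nonlinear functions of the outputs. The nonlinearities, mixing matrix, and step size are assumptions, and the mixed direct/feedback structure of the paper is not reproduced.

```python
import numpy as np

# Hypothetical sketch of a two-source feedback separating network adapted by a
# Herault-Jutten-type rule; nonlinearities and parameters are assumed.
rng = np.random.default_rng(9)
n, mu = 100000, 5e-4
s = np.vstack([np.sign(rng.standard_normal(n)),               # two sub-Gaussian sources
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])                        # unknown mixing matrix
x = A @ s

c12, c21 = 0.0, 0.0                                           # feedback cross-couplings
for k in range(n):
    det = 1.0 - c12 * c21
    y1 = (x[0, k] - c12 * x[1, k]) / det                      # feedback loop solved exactly
    y2 = (x[1, k] - c21 * x[0, k]) / det
    c12 += mu * (y1 ** 3) * np.arctan(y2)                     # Herault-Jutten-type updates
    c21 += mu * (y2 ** 3) * np.arctan(y1)

print("adapted cross weights:", round(c12, 3), round(c21, 3))
print("mixing ratios they should cancel:", A[0, 1] / A[1, 1], A[1, 0] / A[0, 0])
```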