Publication


Featured research published by S. de Waele.


IEEE Transactions on Instrumentation and Measurement | 2002

Autoregressive spectral estimation by application of the Burg algorithm to irregularly sampled data

Robert Bos; S. de Waele; P.M.T. Broersen

Many methods have been developed for spectral analysis of irregularly sampled data. Currently, popular methods such as Lomb-Scargle and resampling tend to be biased at higher frequencies. Slotting methods fail to consistently produce a spectrum that is positive for all frequencies. In this paper, a new estimator is introduced that applies the Burg algorithm for autoregressive spectral estimation to unevenly spaced data. The new estimator can be perceived as searching for sequences of data that are almost equidistant, and then analyzing those sequences using the Burg algorithm for segments. The estimated spectrum is guaranteed to be positive. If a sufficiently large data set is available, results can be accurate up to relatively high frequencies.
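
The estimator builds on the classic Burg recursion for equidistant data. Below is a minimal Python sketch of that equidistant case, with the AR spectrum computed from the estimated model; the function names and unit sampling convention are assumptions here, and the paper's extension to irregular samples is not reproduced.

```python
import numpy as np

def burg_ar(x, order):
    """Classic Burg estimate of an AR model from an equidistant series.

    Returns the AR polynomial a (with a[0] = 1) and the innovation variance.
    """
    f = np.asarray(x, dtype=float).copy()  # forward prediction errors
    b = f.copy()                           # backward prediction errors
    a = np.array([1.0])
    for _ in range(order):
        # reflection coefficient minimizing summed forward + backward power
        k = -2.0 * np.dot(f[1:], b[:-1]) / (
            np.dot(f[1:], f[1:]) + np.dot(b[:-1], b[:-1]))
        # Levinson-style update of the AR polynomial
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
        # update both error sequences (right-hand side uses the old f and b)
        f, b = f[1:] + k * b[:-1], b[:-1] + k * f[1:]
    sigma2 = (np.dot(f, f) + np.dot(b, b)) / (2 * len(f))
    return a, sigma2

def ar_psd(a, sigma2, freqs):
    """AR spectrum S(f) = sigma2 / |A(e^{2j*pi*f})|^2 at normalized freqs."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return sigma2 / np.abs(z @ a) ** 2
```

The guaranteed-positive spectrum claimed in the abstract follows directly from the all-pole form used in ar_psd.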


IEEE Transactions on Signal Processing | 2003

Order selection for vector autoregressive models

S. de Waele; P.M.T. Broersen

Order-selection criteria for vector autoregressive (AR) modeling are discussed. The performance of an order-selection criterion is optimal if the model of the selected order is the most accurate model in the considered set of estimated models: here vector AR models. Suboptimal performance can be a result of underfit or overfit. The Akaike (1969) information criterion (AIC) is an asymptotically unbiased estimator of the Kullback-Leibler discrepancy (KLD) that can be used as an order-selection criterion. AIC is known to suffer from overfit: The selected model order can be greater than the optimal model order. Two causes of overfit are finite sample effects and asymptotic effects. As a consequence of finite sample effects, AIC underestimates the KLD for higher model orders, leading to overfit. Asymptotically, overfit is the result of statistical variations in the order-selection criterion. To derive an accurate order-selection criterion, both causes of overfit have to be addressed. Moreover, the cost of underfit has to be taken into account. The combined information criterion (CIC) for vector signals is robust to finite sample effects and has the optimal asymptotic penalty factor. This penalty factor is the result of a tradeoff of underfit and overfit. The optimal penalty factor depends on the number of estimated parameters per model order. The CIC is compared to other criteria such as the AIC, the corrected Akaike information criterion (AICc), and the consistent minimum description length (MDL).
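
As a point of reference for the criteria discussed, the sketch below selects a vector AR order with plain AIC, where m^2 parameters are added per model order. It is not the paper's CIC, whose finite-sample penalty is more involved; the least-squares fitting and all names are assumptions.

```python
import numpy as np

def select_var_order_aic(x, max_order):
    """Select a vector AR order by minimizing
    AIC(p) = N * log det(Sigma_p) + 2 * p * m^2 over p = 0..max_order.

    x: (N, m) array of N observations of an m-dimensional signal.
    """
    N, m = x.shape
    aic = []
    for p in range(max_order + 1):
        if p == 0:
            resid = x - x.mean(axis=0)
        else:
            Y = x[p:]                                                 # targets
            Z = np.hstack([x[p - k:N - k] for k in range(1, p + 1)])  # lags
            coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
            resid = Y - Z @ coef
        sigma = resid.T @ resid / len(resid)  # residual covariance matrix
        _, logdet = np.linalg.slogdet(sigma)
        aic.append(len(resid) * logdet + 2 * p * m * m)
    return int(np.argmin(aic)), aic
```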


IEEE Transactions on Instrumentation and Measurement | 2000

Error measures for resampled irregular data

S. de Waele; P.M.T. Broersen

With resampling, a regularly sampled signal is extracted from observations which are irregularly spaced in time. Resampling methods can be divided into simple and complex methods. Simple methods such as Sample and Hold (S&H) and Nearest Neighbor Resampling (NNR) use only one irregular sample for one resampled observation. A theoretical analysis of the simple methods is given. The various resampling methods are compared using the new error measure SD_T: the spectral distortion at interval T. SD_T is zero when the time domain properties of the signal are conserved. Using the time domain approach, an antialiasing filter is no longer necessary: the best possible estimates are obtained by using the data themselves. In the frequency domain approach, both allowing aliasing and applying antialiasing lead to distortions in the spectrum. The error measure SD_T has been compared to the reconstruction error. A small reconstruction error does not necessarily result in an accurate estimate of the statistical signal properties as expressed by SD_T.
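
Both simple methods can be written down directly. A minimal sketch, assuming sorted arrays of sampling instants t and values x and a uniform output grid with spacing T:

```python
import numpy as np

def sah_resample(t, x, T):
    """Sample and Hold: each grid point takes the most recent earlier sample."""
    t, x = np.asarray(t), np.asarray(x)
    grid = np.arange(t[0], t[-1], T)
    idx = np.clip(np.searchsorted(t, grid, side='right') - 1, 0, len(t) - 1)
    return grid, x[idx]

def nnr_resample(t, x, T):
    """Nearest Neighbor Resampling: each grid point takes the closest sample."""
    t, x = np.asarray(t), np.asarray(x)
    grid = np.arange(t[0], t[-1], T)
    idx = np.clip(np.searchsorted(t, grid), 1, len(t) - 1)
    # pick whichever of the two neighboring samples is nearer in time
    idx = np.where(grid - t[idx - 1] < t[idx] - grid, idx - 1, idx)
    return grid, x[idx]
```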


IEEE Transactions on Signal Processing | 2000

The Burg algorithm for segments

S. de Waele; P.M.T. Broersen

In many applications, the duration of an uninterrupted measurement of a time series is limited. However, it is often possible to obtain several separate segments of data. The estimation of an autoregressive model from this type of data is discussed. A straightforward approach is to take the average of models estimated from each segment separately. In this way, the variance of the estimated parameters is reduced. However, averaging does not reduce the bias in the estimate. With the Burg algorithm for segments, both the variance and the bias in the estimated parameters are reduced by fitting a single model to all segments simultaneously. As a result, the model estimated with the Burg algorithm for segments is more accurate than models obtained with averaging. The new weighted Burg algorithm for segments allows combining segments of different amplitudes.
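
The central idea, one reflection coefficient per order computed from prediction errors pooled over all segments, fits in a short sketch. Equal weighting of segments is assumed here; the paper's weighted variant for segments of different amplitudes is not reproduced.

```python
import numpy as np

def burg_segments(segments, order):
    """Fit a single AR model to several data segments simultaneously.

    segments: list of 1-D arrays, each longer than `order`.
    Returns the AR polynomial a with a[0] = 1.
    """
    f = [np.asarray(s, dtype=float).copy() for s in segments]  # forward errors
    b = [fi.copy() for fi in f]                                # backward errors
    a = np.array([1.0])
    for _ in range(order):
        # pool numerator and denominator over all segments
        num = -2.0 * sum(np.dot(fi[1:], bi[:-1]) for fi, bi in zip(f, b))
        den = sum(np.dot(fi[1:], fi[1:]) + np.dot(bi[:-1], bi[:-1])
                  for fi, bi in zip(f, b))
        k = num / den
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
        # both comprehensions are built before assignment, so old f, b are used
        f, b = ([fi[1:] + k * bi[:-1] for fi, bi in zip(f, b)],
                [bi[:-1] + k * fi[1:] for fi, bi in zip(f, b)])
    return a
```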


IEEE Transactions on Instrumentation and Measurement | 2005

Automatic identification of time-series models from long autoregressive models

P.M.T. Broersen; S. de Waele

Identification is the selection of the model type and of the model order by using measured data of a process with unknown characteristics. If the observations themselves are used, it is possible to automatically identify a good time-series model for stochastic data. The selected model is an adequate representation of the statistically significant spectral details in the observed process. Sometimes, identification has to be based on far fewer than N characteristics of the data. The reduced statistics information is assumed to consist of a long autoregressive (AR) model. That AR model has to be used for the estimation of moving average (MA) and of combined ARMA models and for the selection of the best model orders. The accuracy of ARMA models is improved by using four different types of initial estimates in a first stage. After a second stage, it is possible to select automatically which initial estimates were most favorable in the present case by using the fit of the estimated ARMA models to the given long AR model. The same principle is used to select the best type of the time-series models and the best model order. No spectral information is lost in using only the long AR representation instead of all data. The quality of the model identified from a long AR model is comparable to that of the best time-series model that can be computed if all observations are available.
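
One classical way to obtain MA parameters from a long AR model alone is Durbin's method: an invertible MA(q) process with polynomial B(z) has the infinite AR representation 1/B(z), so the long AR polynomial should satisfy A(z)B(z) ≈ 1. The least-squares sketch below illustrates only that single step; the paper's four types of initial estimates, the ARMA case, and the order selection are not reproduced, and the names are assumptions.

```python
import numpy as np

def ma_from_long_ar(a_long, q):
    """Fit MA(q) parameters b (b[0] = 1) to a long AR polynomial a_long
    (a_long[0] = 1) by minimizing ||conv(a_long, b) - [1, 0, ..., 0]||."""
    a = np.asarray(a_long, dtype=float)
    L = len(a)
    # convolution matrix: column j holds a_long shifted down by j positions
    C = np.zeros((L + q, q + 1))
    for j in range(q + 1):
        C[j:j + L, j] = a
    target = np.zeros(L + q)
    target[0] = 1.0
    b, *_ = np.linalg.lstsq(C, target, rcond=None)
    return b / b[0]
```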


IEEE Transactions on Instrumentation and Measurement | 2004

Finite sample properties of ARMA order selection

P.M.T. Broersen; S. de Waele

The cost of order selection is defined as the loss in model quality due to selection. It is the difference between the quality of the best of all available candidate models that have been estimated from a finite sample of N observations and the quality of the model that is actually selected. The order selection criterion itself has an influence on the cost because of the penalty factor for each additionally selected parameter. Also, the number of competitive candidate models for the selection is important. The number of candidates is inherently small for nested and hierarchical autoregressive/moving average (ARMA) models. However, intentionally reducing the number of selection candidates can be beneficial in combined ARMA(p,q) models, where two separate model orders are involved: the AR order p and the MA order q. The selection cost can be diminished by creating a nested sequence of ARMA(r,r-1) models. Moreover, not evaluating every combination (p,q) of the orders considerably reduces the required computation time. The disadvantage may be that the true ARMA(p,q) model is no longer among the nested candidate models. However, in finite samples, this disadvantage is largely compensated for by the reduction in the cost of order selection from considering fewer candidates. Thus, the quality of the selected model remains acceptable with only hierarchically nested ARMA(r,r-1) models as candidates.
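
The reduction in the number of candidates is easy to quantify. A small illustration (the maximum order of 10 is an arbitrary choice, not a value from the paper):

```python
# Full (p, q) grid versus the nested ARMA(r, r-1) sequence of candidates.
max_r = 10
full_grid = [(p, q) for p in range(1, max_r + 1) for q in range(0, max_r + 1)]
nested = [(r, r - 1) for r in range(1, max_r + 1)]
print(len(full_grid), "candidates in the full (p, q) grid")           # 110
print(len(nested), "candidates in the nested ARMA(r, r-1) sequence")  # 10
```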


IEEE Transactions on Instrumentation and Measurement | 2004

Application of autoregressive spectral analysis to missing data problems

P.M.T. Broersen; S. de Waele; Robert Bos

Time series solutions for spectral analysis in missing data problems use reconstruction of the missing data, or a maximum likelihood approach that analyzes only the available measured data. Maximum likelihood estimation yields the most accurate spectra. An approximate maximum likelihood algorithm is presented that uses only previous observations falling in a finite interval to compute the likelihood, instead of all previous observations. The resulting nonlinear estimation algorithm requires no user-provided initial solution, is suited for order selection, and can give very accurate spectra even if less than 10% of the data remains.
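
The finite-interval idea can be sketched with plain Gaussian conditioning: each available sample is evaluated against only the previously observed samples that fall within a fixed window, using the autocovariance implied by the AR model. Everything below (integer sample times, the truncated impulse-response autocovariance, all names) is an assumption of this sketch, not the paper's algorithm.

```python
import numpy as np

def ar_acov(a, sigma2, max_lag, n_terms=2000):
    """Autocovariance of an AR process via a truncated impulse response."""
    h = np.zeros(n_terms)
    h[0] = 1.0
    for n in range(1, n_terms):
        kmax = min(len(a) - 1, n)
        h[n] = -np.dot(a[1:kmax + 1], h[n - kmax:n][::-1])
    return np.array([sigma2 * np.dot(h[:n_terms - k], h[k:])
                     for k in range(max_lag + 1)])

def finite_window_loglik(a, sigma2, t_obs, x_obs, window):
    """Approximate Gaussian log-likelihood for AR data with missing samples.

    t_obs: integer numpy array of time indices where data are available;
    each sample is conditioned only on earlier samples within `window` steps.
    """
    r = ar_acov(a, sigma2, window)
    ll = 0.0
    for i in range(len(t_obs)):
        mask = t_obs[:i] > t_obs[i] - window        # finite interval only
        tp, xp = t_obs[:i][mask], x_obs[:i][mask]
        if len(tp) == 0:
            mu, v = 0.0, r[0]                       # unconditional density
        else:
            Rpp = r[np.abs(tp[:, None] - tp[None, :])]
            rp = r[t_obs[i] - tp]
            w = np.linalg.solve(Rpp, rp)            # Gaussian conditioning
            mu, v = w @ xp, r[0] - w @ rp
        ll += -0.5 * (np.log(2 * np.pi * v) + (x_obs[i] - mu) ** 2 / v)
    return ll
```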


Instrumentation and Measurement Technology Conference | 1998

Reliable LDA-spectra by resampling and ARMA-modeling

S. de Waele; P.M.T. Broersen

Laser-Doppler Anemometry (LDA) is used to measure the velocity of gases and liquids, with observations irregularly spaced in time. Linear interpolation of the data followed by equidistant resampling turns out to be better than slotting techniques. After resampling, two ways of spectral estimation are compared: a windowed periodogram and a time-series ARMA model whose orders are automatically selected from the data with an objective statistical criterion. Typically, the ARMA spectrum is better than the best of all windowed periodograms.
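
The first step, linear interpolation followed by equidistant resampling, is a one-liner with np.interp. A minimal sketch, with the resampling interval T a free choice:

```python
import numpy as np

def lininterp_resample(t, x, T):
    """Linearly interpolate irregular samples (t, x) onto a grid with step T.

    In the paper, an ARMA model with automatically selected orders is then
    fitted to the resulting equidistant series.
    """
    grid = np.arange(t[0], t[-1], T)
    return grid, np.interp(grid, t, x)
```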


IEEE Transactions on Instrumentation and Measurement | 2003

Generating data with prescribed power spectral density

P.M.T. Broersen; S. de Waele

Data generation is straightforward if the parameters of a time series model define the prescribed spectral density or covariance function. Otherwise, a time series model has to be determined. An arbitrary prescribed spectral density will be approximated by a finite number of equidistant samples in the frequency domain. This approximation becomes more accurate as more samples are taken. Those samples can be inversely Fourier transformed into a covariance function of finite length. The covariance in turn is used to compute a long autoregressive (AR) model with the Yule-Walker relations. Data can be generated with this long AR model. The long AR model can also be used to estimate time series models of different types to search for a parsimonious model that attains the required accuracy with fewer parameters. It is possible to derive objective rules to choose a preferred type with a minimal order for the generating time series model. That order will generally depend on the number of observations to be generated. The quality criterion for the generating time series model is that the spectrum estimated from the generated number of observations cannot be distinguished from the prescribed spectrum.
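
The pipeline in this abstract, sampling the prescribed PSD, transforming to a covariance, solving the Yule-Walker equations for a long AR model, and filtering white noise, can be sketched as follows. The normalized-frequency convention, the plain Riemann sum, and all names are assumptions of this sketch.

```python
import numpy as np

def ar_from_psd(psd_fun, ar_order, n_freq=4096):
    """Long AR model from a prescribed PSD.

    psd_fun maps normalized frequency f in [0, 0.5] to the PSD value S(f).
    """
    f = np.linspace(0.0, 0.5, n_freq)
    S = psd_fun(f)
    # covariance r[k] = 2 * integral over [0, 1/2] of S(f) cos(2 pi f k) df
    k = np.arange(ar_order + 1)
    r = 2.0 * (S * np.cos(2 * np.pi * np.outer(k, f))).sum(axis=1) * (f[1] - f[0])
    # Yule-Walker equations R a = -r[1:], with R the Toeplitz covariance matrix
    i = np.arange(ar_order)
    R = r[np.abs(i[:, None] - i[None, :])]
    a = np.concatenate([[1.0], np.linalg.solve(R, -r[1:])])
    sigma2 = r[0] + np.dot(a[1:], r[1:])
    return a, sigma2

def generate_from_ar(a, sigma2, n, burn_in=500, seed=0):
    """Generate n samples by driving the all-pole filter 1/A(z) with noise."""
    rng = np.random.default_rng(seed)
    e = rng.normal(scale=np.sqrt(sigma2), size=n + burn_in)
    x = np.zeros_like(e)
    p = len(a) - 1
    for i in range(len(e)):
        m = min(p, i)
        x[i] = e[i] - np.dot(a[1:1 + m], x[i - m:i][::-1])
    return x[burn_in:]
```

As a rough sanity check (a property of this sketch, not a result from the paper), feeding in the AR(1) spectrum S(f) = 1 / (1.04 - 0.4 cos(2 pi f)) should return a model close to a = [1, -0.2] with sigma2 near 1.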


Instrumentation and Measurement Technology Conference | 1999

A time domain error measure for resampled irregular data

S. de Waele; P.M.T. Broersen

Resampling methods for irregularly sampled data are examined. A distinction is made between simple and complex methods. Simple methods such as sample & hold (S&H) and nearest neighbor resampling (NNR) use only one irregular sample for one resampled observation. The advantage of simple methods is that they are robust and do not introduce a bias in the variance. A theoretical analysis as well as simulations show that NNR is more accurate than S&H. The various resampling methods are compared using the time domain error measure MET. The time domain approach has the advantage that the best possible estimates are obtained by using the data themselves. In the frequency domain approach, both allowing aliasing and applying anti-aliasing leads to distortions in the spectrum.
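
The claim that simple methods do not bias the variance, while interpolation does, is easy to check numerically. A toy experiment, with white noise and all choices arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 5000.0, 5000))  # irregular sampling instants
x = rng.standard_normal(len(t))              # unit-variance white noise
grid = np.arange(t[0], t[-1], 1.0)

# NNR: take the closest irregular sample for each grid point
idx = np.clip(np.searchsorted(t, grid), 1, len(t) - 1)
idx = np.where(grid - t[idx - 1] < t[idx] - grid, idx - 1, idx)

print("variance after NNR:", x[idx].var())          # stays close to 1
print("variance after linear interpolation:",
      np.interp(grid, t, x).var())                  # clearly below 1
```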

Collaboration


Dive into S. de Waele's collaborations.

Top Co-Authors

P.M.T. Broersen

Delft University of Technology

Robert Bos

Delft University of Technology