M.M. Van Hulle
Katholieke Universiteit Leuven
Publications
Featured research published by M.M. Van Hulle.
IEEE Transactions on Computers | 2012
Karl Pauwels; Matteo Tomasi; Javier Díaz Alonso; Eduardo Ros; M.M. Van Hulle
Low-level computer vision algorithms have extreme computational requirements. In this work, we compare two real-time architectures developed using FPGA and GPU devices for the computation of phase-based optical flow, stereo, and local image features (energy, orientation, and phase). The presented approach requires a massive degree of parallelism to achieve real-time performance and allows us to compare FPGA and GPU design strategies and trade-offs in a much more complex scenario than previous contributions. Based on this analysis, we provide suggestions to help real-time system designers select the most suitable technology for a given application and optimize system development on the chosen platform.
IEEE Transactions on Neural Networks | 2011
Yili Xia; B Jelfs; M.M. Van Hulle; Jose C. Principe; Danilo P. Mandic
A novel complex echo state network (ESN), utilizing full second-order statistical information in the complex domain, is introduced. This is achieved through the use of the so-called augmented complex statistics, thus making complex ESNs suitable for processing the generality of complex-valued signals, both second-order circular (proper) and noncircular (improper). Next, in order to deal with nonstationary processes with large nonlinear dynamics, a nonlinear readout layer is introduced and is further equipped with an adaptive amplitude of the nonlinearity. This combination of augmented complex statistics and enhanced adaptivity within ESNs also facilitates the processing of bivariate signals with strong component correlations. Simulations in the prediction setting on both circular and noncircular synthetic benchmark processes and real-world noncircular and nonstationary wind signals support the analysis.
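The key ingredient here is the use of augmented complex statistics: a widely linear readout acts on both z and its conjugate, so the pseudocovariance E[zz^T] of improper signals is captured as well. A minimal sketch of a widely linear (augmented) least-squares fit, with a toy improper signal standing in for actual reservoir states (all names and values illustrative, not from the paper):

```python
import numpy as np

def widely_linear_fit(Z, d):
    """Widely linear (augmented) least squares: d_hat = Z h + conj(Z) g.
    Using conj(Z) alongside Z is what captures the pseudocovariance E[z z^T]
    of improper signals that a strictly linear readout ignores."""
    Za = np.hstack([Z, np.conj(Z)])              # augmented regressor [z, conj(z)]
    w, *_ = np.linalg.lstsq(Za, d, rcond=None)
    return w[:Z.shape[1]], w[Z.shape[1]:]        # standard and conjugate weights

# Toy improper signal (unequal real/imaginary power) in place of reservoir states.
rng = np.random.default_rng(0)
z = rng.standard_normal(1000) + 0.2j * rng.standard_normal(1000)
Z = np.column_stack([z[:-1], z[1:]])             # two complex regressors
d = 0.5 * z[1:] + 0.3 * np.conj(z[:-1])          # widely linear target
h, g = widely_linear_fit(Z, d)                   # recovers h ~ [0, 0.5], g ~ [0.3, 0]
```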
IEEE Transactions on Biomedical Engineering | 2004
Temujin Gautama; Danilo P. Mandic; M.M. Van Hulle
The delay vector variance (DVV) method, which analyzes the nature of a time series with respect to the prevalence of deterministic or stochastic components, is introduced. Due to the standardization within the DVV method, it is possible both to statistically test for the presence of nonlinearities in a time series and to visually inspect the results in a DVV scatter diagram. This approach is convenient for interpretation, as it conveys information about the linear or nonlinear nature of the time series as well as about the prevalence of deterministic or stochastic components, thus unifying existing approaches that deal with only the deterministic-versus-stochastic or the linear-versus-nonlinear aspect. Results on biomedical time series, namely heart rate variability (HRV) and functional magnetic resonance imaging (fMRI) time series, illustrate the applicability of the proposed DVV method.
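For readers unfamiliar with the method, a compact sketch of the core DVV computation follows; the embedding dimension, the grid of distance thresholds, and the minimum-neighbour count are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def dvv(x, m=3, n_rd=25, span=3.0, min_neighbours=30):
    """Delay vector variance: normalised target variance as a function of a
    standardised distance threshold rd."""
    x = np.asarray(x, dtype=float)
    N = len(x) - m
    dv = np.stack([x[i:i + N] for i in range(m)], axis=1)   # delay vectors
    tg = x[m:m + N]                                         # one-step-ahead targets
    d = np.linalg.norm(dv[:, None, :] - dv[None, :, :], axis=2)
    pair = d[np.triu_indices(N, 1)]                         # pairwise distances
    rds = np.linspace(pair.mean() - span * pair.std(),
                      pair.mean() + span * pair.std(), n_rd)
    sigma2 = []
    for rd in rds:
        # variance of targets whose delay vectors lie within rd of each pivot
        vs = [tg[d[k] <= rd].var() for k in range(N)
              if (d[k] <= rd).sum() >= min_neighbours]
        sigma2.append(np.mean(vs) / tg.var() if vs else np.nan)
    return rds, np.array(sigma2)

# Testing for nonlinearity compares this curve for the original series with the
# curves obtained for linearised (surrogate) versions of the same series.
```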
IEEE Transactions on Vehicular Technology | 2011
Reinhard Klette; Norbert Krüger; Tobi Vaudrey; Karl Pauwels; M.M. Van Hulle; Sandino Morales; Farid I. Kandil; Ralf Haeusler; Nicolas Pugeault; Clemens Rabe; Markus Lappe
This paper discusses options for testing correspondence algorithms in stereo or motion analysis that are designed or considered for vision-based driver assistance. It introduces a globally available database, with a main focus on testing on video sequences of real-world data. We suggest classifying recorded video data into situations defined by the co-occurrence of events in recorded traffic scenes. About 100-400 stereo frames (or 4-16 s of recording) are considered a basic sequence, which is identified with one particular situation. Future testing is expected to be on data that report on hours of driving, and such hours-long video data may be segmented into basic sequences and classified into situations; this paper prepares for that expected development. It uses three different evaluation approaches (prediction error, synthesized sequences, and labeled sequences) to demonstrate ideas, difficulties, and possible directions in this emerging field of extensive performance testing in vision-based driver assistance, particularly for cases where ground truth is not available. The paper shows that the complexity of real-world data does not support the identification of general rankings of correspondence techniques on sets of basic sequences that show different situations. It is suggested that correspondence techniques instead be chosen adaptively, in real time, using some type of statistical situation classifier.
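Of the three evaluation approaches, the prediction-error idea is the most self-contained: when no ground truth exists, an estimated correspondence field is judged by how well it predicts a neighbouring frame. A minimal sketch, with nearest-neighbour warping standing in for the sub-pixel interpolation a real evaluation would use:

```python
import numpy as np

def prediction_error(frame_t, frame_t1, flow_u, flow_v):
    """Prediction-error evaluation in the absence of ground truth: warp frame t+1
    back along the estimated flow and measure how well it predicts frame t
    (brightness constancy: frame_t(x) ~ frame_t1(x + flow(x)))."""
    h, w = frame_t.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xs = np.clip(np.round(xx + flow_u).astype(int), 0, w - 1)
    ys = np.clip(np.round(yy + flow_v).astype(int), 0, h - 1)
    predicted = frame_t1[ys, xs]              # backward warp along the flow field
    return float(np.sqrt(np.mean((predicted - frame_t.astype(float)) ** 2)))
```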
computer vision and pattern recognition | 2008
Karl Pauwels; M.M. Van Hulle
Phase-based optical flow algorithms are characterized by high precision and robustness, but also by high computational requirements. Using the CUDA platform, we have implemented a phase-based algorithm that maps exceptionally well onto the GPU's architecture. This optical flow algorithm revolves around a reliability measure that evaluates the consistency of phase information over time. By exploiting efficient filtering operations, the high internal bandwidth of the GPU, and the texture units, we obtain dense and reliable optical flow estimates in real time at high resolutions (640 × 512 pixels and beyond). Even though the algorithm is local and does not involve iterative regularization, highly accurate results are obtained on synthetic and complex real-world sequences.
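The principle behind phase-based flow is phase constancy: a moving pattern shifts the phase of a bandpass (quadrature Gabor) response, so velocity follows from the temporal phase difference divided by the spatial frequency. A one-dimensional sketch with illustrative filter parameters; the paper's GPU implementation is two-dimensional, multi-scale, and measures phase consistency over more frames:

```python
import numpy as np

def gabor_phase(signal, f0=0.1, sigma=8.0):
    """Complex (quadrature-pair) Gabor response; returns phase and magnitude."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2)) * np.exp(2j * np.pi * f0 * t)
    resp = np.convolve(signal, kernel, mode='same')
    return np.angle(resp), np.abs(resp)

def phase_velocity(frame0, frame1, f0=0.1):
    """1-D velocity from phase constancy: v ~ -dphi/dt / (2*pi*f0).
    A simple energy threshold stands in for the paper's reliability measure."""
    p0, a0 = gabor_phase(frame0, f0)
    p1, a1 = gabor_phase(frame1, f0)
    dphi = np.angle(np.exp(1j * (p1 - p0)))     # wrapped temporal phase difference
    v = -dphi / (2 * np.pi * f0)
    reliable = np.minimum(a0, a1) > 0.1 * max(a0.max(), a1.max())
    return np.where(reliable, v, np.nan)

# A pattern shifted by 2 samples should yield v ~ 2 where the estimate is reliable.
x = np.arange(256)
f = np.sin(2 * np.pi * 0.1 * x) * np.exp(-((x - 128) / 40.0) ** 2)
print(np.nanmedian(phase_velocity(f, np.roll(f, 2))))
```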
Neural Networks | 1999
M.M. Van Hulle
Topographic map algorithms that are aimed at building “faithful representations” also yield maps that transfer the maximum amount of information available about the distribution from which they receive input. The weight density (magnification factor) of these maps is proportional to the input density, or the neurons of these maps have an equal probability to be active (equiprobabilistic map). As MSE minimization is not compatible with equiprobabilistic map formation in general, a number of heuristics have been devised to compensate for this discrepancy in competitive learning schemes, e.g. by adding a “conscience” to the neurons’ firing behavior. Rather than minimizing a modified MSE criterion, however, we introduce a new unsupervised competitive learning rule for topographic map formation, called the kernel-based Maximum Entropy learning Rule (kMER), that optimizes an information-theoretic criterion directly. To each neuron a radially symmetric kernel is associated, with a given center and radius, and the two are updated in such a way that the (unconditional) information-theoretic entropy of the neurons’ outputs is maximized. We review a number of competitive learning rules for building equiprobabilistic maps. As benchmark tests for the faithfulness of the representations, we consider two types of distributions and compare the performances of these rules and kMER, for batch and incremental learning. As a first example application, we consider non-parametric density estimation, where the maps are used for generating “pilot” estimates in kernel-based density estimation. The second application we envisage for kMER is “on-line” adaptive filtering of speech signals, using Gabor functions as wavelet filters. The topographic feature maps developed in this way differ in several respects from those obtained with Kohonen's Adaptive-Subspace SOM algorithm.
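As a much-simplified illustration of the kMER idea (a one-dimensional lattice, with update details that differ from the published rule): each neuron carries a kernel centre and radius, and the radii are adapted so that every kernel captures inputs with the same probability rho/N, which is what makes the resulting map equiprobabilistic:

```python
import numpy as np

def kmer_sketch(X, n=10, rho=1.0, eta=0.01, epochs=20, seed=0):
    """Much-simplified kMER-style sketch on a 1-D lattice (NOT the published
    rule): each neuron has a kernel centre w[i] and radius r[i]; radii are
    adapted so each kernel captures inputs with equal probability rho/n."""
    rng = np.random.default_rng(seed)
    w = rng.choice(X, n).astype(float)       # kernel centres
    r = np.full(n, X.std())                  # kernel radii
    rho_r = rho / (n - rho)                  # equilibrium: P(input in kernel) = rho/n
    lattice = np.arange(n)
    for _ in range(epochs):
        for v in rng.permutation(X):
            active = np.abs(v - w) <= r                 # kernel membership
            winner = np.argmin(np.abs(v - w))
            nbh = np.exp(-(lattice - winner) ** 2 / 2)  # topographic neighbourhood
            share = active / max(active.sum(), 1)       # fuzzy membership weighting
            w += eta * nbh * np.sign(v - w) * share     # move active centres to input
            r += eta * np.where(active, -1.0, rho_r)    # shrink if active, grow if idle
            np.clip(r, 1e-6, None, out=r)
    return w, r

# Example: a skewed 1-D density; centres should end up denser where X is denser.
X = np.random.default_rng(1).exponential(1.0, 2000)
centres, radii = kmer_sketch(X)
```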
Neural Networks | 1993
M.M. Van Hulle; T. Tollenaere
This paper presents a new network-based model for segregating broadband noise textures. The model starts with the oriented local energy maps obtained by filtering the textures with a bank of quadrature-pair Gabor filters with different preferred orientations and spatial frequencies, and squaring and summing the quadrature-pair filter outputs point-wise. Rather than detecting differences in first-order statistics from these maps, a sequence of two network modules is used for each spatial frequency channel. The modules are based on the Entropy Driven Artificial Neural Network (EDANN) model, a previously developed adaptive network module for line and edge detection. The first EDANN module performs orientation extraction and the second performs filling-in of missing orientation information. The aim of both modules is to produce a reliable texture segregation based on an enlarged local difference in first-order statistics (in the mean) and, at the same time, a reduced influence of differences in spatial variability; the texture boundary is then detected by a third EDANN module, following the second one. Other major features of the model are: (a) texture segregation proceeds in each spatial frequency/orientation channel separately, and (b) texture segregation as well as texture boundary detection can be performed using the same core network module.
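The oriented local energy maps in the first stage are straightforward to reproduce: filter with an even/odd (quadrature) Gabor pair per orientation, square, and sum point-wise. A sketch with illustrative filter parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_pair(theta, f0=0.15, sigma=3.0, size=15):
    """Even/odd (quadrature) Gabor pair at orientation theta; values illustrative."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    u = xx * np.cos(theta) + yy * np.sin(theta)   # coordinate along the orientation
    env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * f0 * u), env * np.sin(2 * np.pi * f0 * u)

def oriented_energy(image, thetas=np.linspace(0, np.pi, 4, endpoint=False)):
    """Local energy per orientation: square and sum the quadrature outputs point-wise."""
    return np.stack([convolve(image, e) ** 2 + convolve(image, o) ** 2
                     for e, o in (gabor_pair(t) for t in thetas)])
```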
Neurocomputing | 2012
Adrien Combaz; Nikolay Chumerin; Nikolay V. Manyakov; Arne Robben; Johan A. K. Suykens; M.M. Van Hulle
A P300 Speller is a brain-computer interface (BCI) that enables subjects to spell text on a computer screen by detecting P300 Event-Related Potentials in their electroencephalograms (EEG). This BCI application is of particular interest to disabled patients who have lost all means of verbal and motor communication. Error-related Potentials (ErrPs) in the EEG are generated by the subject's perception of an error. We report on the possibility of using these ErrPs for improving the performance of a P300 Speller. Overall, nine subjects were tested, allowing us to study their EEG responses to correct and incorrect performances of the BCI, compare our findings to previous studies, explore the possibility of detecting ErrPs, and discuss the integration of ErrP classifiers into the P300 Speller system.
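One plausible way such an ErrP classifier can be integrated, sketched below: when the post-feedback EEG epoch is classified as containing an ErrP, the speller falls back to its second-best symbol candidate. The paper discusses integration options; this particular fallback rule is illustrative:

```python
def correct_with_errp(ranked_symbols, errp_detected):
    """Illustrative integration of an ErrP detector into a P300 Speller: if the
    post-feedback EEG is classified as containing an ErrP, fall back to the
    speller's second-best candidate instead of the top one."""
    if errp_detected and len(ranked_symbols) > 1:
        return ranked_symbols[1]   # second most likely symbol
    return ranked_symbols[0]       # keep the original selection

# ranked_symbols would come from the P300 classifier's scores, highest first.
print(correct_with_errp(['A', 'B', 'C'], errp_detected=True))  # -> 'B'
```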
Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop | 1999
M.M. Van Hulle
Recently, a number of heuristic techniques, mostly based on topographic maps, have been introduced in order to overcome some of the limitations of blind source separation (BSS) algorithms rooted in the theory of independent component analysis. Here, we introduce a new heuristic that relies on the tendency of the mixture samples to cluster around the source directions in mixture space. We use linear mixtures of speech signals and consider BSS problems with not only square but also non-square mixing matrices (fewer mixtures than sources).
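The clustering tendency the heuristic relies on can be illustrated directly for the two-mixture case: plotted in mixture space, samples of sparse sources such as speech concentrate along the directions of the mixing-matrix columns. The sketch below recovers those directions from a weighted angular histogram; the paper's actual heuristic is topographic-map based, and the peak picking here is deliberately crude:

```python
import numpy as np

def estimate_directions(X, n_sources=3, bins=180):
    """Sketch for 2 mixtures: project samples onto the half-circle of directions
    and take histogram peaks as estimates of the mixing-matrix columns. Works
    for non-square cases too (here: 2 mixtures, 3 sources)."""
    angles = np.arctan2(X[1], X[0]) % np.pi            # direction of each sample
    weights = np.linalg.norm(X, axis=0)                # louder samples count more
    hist, edges = np.histogram(angles, bins=bins, range=(0, np.pi), weights=weights)
    peaks = np.argsort(hist)[-n_sources:]              # crude peak picking
    thetas = (edges[peaks] + edges[peaks + 1]) / 2
    return np.vstack([np.cos(thetas), np.sin(thetas)]) # estimated mixing columns
```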
IEEE Transactions on Computational Intelligence and Ai in Games | 2013
Nikolay Chumerin; Nikolay V. Manyakov; M. van Vliet; Arne Robben; Adrien Combaz; M.M. Van Hulle
In this paper, we introduce a game in which the player navigates an avatar through a maze by using a brain-computer interface (BCI) that analyzes the steady-state visual evoked potential (SSVEP) responses recorded with electroencephalography (EEG) on the player's scalp. The four-command control game, called The Maze, was specifically designed around an SSVEP BCI and validated in several EEG setups, using both a traditional electrode cap with relocatable electrodes and a consumer-grade headset with fixed electrodes (Emotiv EPOC). We experimentally derive the parameter values that provide an acceptable tradeoff between accuracy of game control and interactivity, and evaluate the control provided by the BCI during gameplay. As a final step in the validation of the game, a population study was conducted on a broad audience with the EPOC headset in a real-world setting. The study revealed that the majority (85%) of the players enjoyed the game in spite of its intricate control (mean accuracy 80.37%, mean mission time ratio 0.90). We also discuss what to take into account when designing BCI-based games.
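A minimal sketch of the decoding step such an SSVEP BCI rests on: each of the four commands is tied to a flicker frequency, and the decoder picks the target whose (harmonic-summed) spectral power dominates the EEG epoch. The frequencies and the simple power rule below are illustrative, not the paper's classifier:

```python
import numpy as np

def ssvep_command(eeg, fs, stim_freqs=(8.57, 10.0, 12.0, 15.0), harmonics=2):
    """Pick the stimulation frequency with the largest harmonic-summed spectral
    power in a single-channel EEG epoch; returns the index of the command."""
    spec = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)

    def power_at(f):
        # sum power at the nearest FFT bin of each harmonic of f
        return sum(spec[np.argmin(np.abs(freqs - h * f))]
                   for h in range(1, harmonics + 1))

    return int(np.argmax([power_at(f) for f in stim_freqs]))
```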