
Publications


Featured research published by Karl Pauwels.


IEEE Transactions on Computers | 2012

A Comparison of FPGA and GPU for Real-Time Phase-Based Optical Flow, Stereo, and Local Image Features

Karl Pauwels; Matteo Tomasi; Javier Díaz Alonso; Eduardo Ros; M.M. Van Hulle

Low-level computer vision algorithms have extreme computational requirements. In this work, we compare two real-time architectures, developed using FPGA and GPU devices, for the computation of phase-based optical flow, stereo, and local image features (energy, orientation, and phase). The presented approach requires a massive degree of parallelism to achieve real-time performance and allows us to compare FPGA and GPU design strategies and trade-offs in a much more complex scenario than previous contributions. Based on this analysis, we provide suggestions to help real-time system designers select the most suitable technology and optimize system development on the chosen platform for a range of diverse applications.
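
The local image features named here (energy, orientation, and phase) all derive from the responses of a quadrature filter pair, which is what makes the workload so regular and parallel. As a minimal illustration of the idea, not of the paper's FPGA or GPU implementations, the following Python/NumPy sketch computes the three features with a bank of Gabor quadrature pairs (filter size and parameters are assumptions):

```python
# Minimal sketch (assumption, not the paper's implementation): phase-based
# local image features from a bank of quadrature Gabor filter pairs.
import numpy as np
from scipy.signal import fftconvolve

def gabor_quadrature(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor kernels at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    even = envelope * np.cos(2.0 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2.0 * np.pi * xr / wavelength)
    return even, odd

def phase_features(image, n_orientations=8):
    """Per-pixel energy, phase, and dominant orientation."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    energies, phases = [], []
    for theta in thetas:
        even_k, odd_k = gabor_quadrature(theta=theta)
        even = fftconvolve(image, even_k, mode="same")
        odd = fftconvolve(image, odd_k, mode="same")
        energies.append(np.hypot(even, odd))  # local energy
        phases.append(np.arctan2(odd, even))  # local phase
    energies, phases = np.stack(energies), np.stack(phases)
    best = np.argmax(energies, axis=0)        # strongest orientation per pixel
    rows, cols = np.indices(image.shape)
    return (energies[best, rows, cols],
            phases[best, rows, cols],
            thetas[best])
```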


IEEE Transactions on Vehicular Technology | 2011

Performance of Correspondence Algorithms in Vision-Based Driver Assistance Using an Online Image Sequence Database

Reinhard Klette; Norbert Krüger; Tobi Vaudrey; Karl Pauwels; M.M. Van Hulle; Sandino Morales; Farid I. Kandil; Ralf Haeusler; Nicolas Pugeault; Clemens Rabe; Markus Lappe

This paper discusses options for testing correspondence algorithms in stereo or motion analysis that are designed or considered for vision-based driver assistance. It introduces a globally available database, with a main focus on testing on video sequences of real-world data. We suggest classifying recorded video data into situations defined by a co-occurrence of events in recorded traffic scenes. About 100-400 stereo frames (or 4-16 s of recording) are considered a basic sequence, which is identified with one particular situation. Future testing is expected to be on data covering hours of driving, and such long video data may be segmented into basic sequences and classified into situations. This paper prepares for that development. It uses three different evaluation approaches (prediction error, synthesized sequences, and labeled sequences) to demonstrate ideas, difficulties, and possible directions in this emerging field of extensive performance testing in vision-based driver assistance, particularly where ground truth is not available. The paper shows that the complexity of real-world data does not support general rankings of correspondence techniques on sets of basic sequences showing different situations. It is suggested that correspondence techniques should instead be chosen adaptively in real time using some type of statistical situation classifier.
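
Of the three evaluation approaches, the prediction-error approach is the one that works without ground truth and can be stated compactly: warp the next frame back to the current one using the estimated flow and measure the intensity residual. A minimal sketch, assuming grayscale frames and a dense flow field (function and variable names are illustrative, not taken from the paper or its database):

```python
# Sketch (assumption): prediction-error evaluation of an optical flow field
# without ground truth, by warping frame t+1 back onto frame t.
import numpy as np
from scipy.ndimage import map_coordinates

def prediction_error(frame_t, frame_t1, flow_u, flow_v):
    """RMS intensity residual after warping frame t+1 back to frame t."""
    h, w = frame_t.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample frame t+1 at the positions the estimated flow points to.
    warped = map_coordinates(frame_t1, [yy + flow_v, xx + flow_u],
                             order=1, mode="nearest")
    return np.sqrt(np.mean((warped - frame_t) ** 2))
```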


Computer Vision and Pattern Recognition | 2008

Realtime phase-based optical flow on the GPU

Karl Pauwels; M.M. Van Hulle

Phase-based optical flow algorithms are characterized by high precision and robustness, but also by high computational requirements. Using the CUDA platform, we have implemented a phase-based algorithm that maps exceptionally well onto the GPU's architecture. This optical flow algorithm revolves around a reliability measure that evaluates the consistency of phase information over time. By exploiting efficient filtering operations, the high internal bandwidth of the GPU, and the texture units, we obtain dense and reliable optical flow estimates in real time at high resolutions (640 × 512 pixels and beyond). Even though the algorithm is local and does not involve iterative regularization, highly accurate results are obtained on synthetic and complex real-world sequences.
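
The reliability measure rests on a simple observation: where the flow estimate is trustworthy, the local phase evolves linearly over time, so frame-to-frame phase differences are mutually consistent. A minimal sketch of such a temporal-consistency test (the exact measure used in the paper may differ; the threshold is an assumption):

```python
# Sketch (assumption): reliability from the temporal consistency of local
# phase, keeping only pixels whose phase advances at a steady rate.
import numpy as np

def phase_reliability(phases, threshold=0.9):
    """phases: (T, H, W) local phase at one scale over T frames."""
    # Frame-to-frame phase differences, wrapped to [-pi, pi].
    dphi = np.angle(np.exp(1j * np.diff(phases, axis=0)))
    # Resultant length of the unit vectors exp(1j*dphi): 1 means perfectly
    # constant phase velocity, 0 means incoherent phase over time.
    consistency = np.abs(np.mean(np.exp(1j * dphi), axis=0))
    return consistency, consistency > threshold
```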


Computer Vision and Pattern Recognition | 2013

Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues

Karl Pauwels; Leonardo Rubio; Javier Díaz; Eduardo Ros

We propose a novel model-based method for estimating and tracking the six-degrees-of-freedom (6DOF) pose of rigid objects of arbitrary shapes in real-time. By combining dense motion and stereo cues with sparse key point correspondences, and by feeding back information from the model to the cue extraction level, the method is both highly accurate and robust to noise and occlusions. A tight integration of the graphical and computational capability of Graphics Processing Units (GPUs) results in pose updates at frame rates exceeding 60 Hz. Since a benchmark dataset that enables the evaluation of stereo-vision-based pose estimators in complex scenarios is currently missing in the literature, we have introduced a novel synthetic benchmark dataset with varying objects, background motion, noise and occlusions. Using this dataset and a novel evaluation methodology, we show that the proposed method greatly outperforms state-of-the-art methods. Finally, we demonstrate excellent performance on challenging real-world sequences involving object manipulation.
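
Of the cues combined here, the sparse key-point cue is the most standard: given 2D detections of known 3D model points, the 6DOF pose follows from a robust Perspective-n-Point step. A minimal sketch using OpenCV's solvePnPRansac as a stand-in (the dense motion and stereo cues and the feedback from the model to cue extraction are not reproduced):

```python
# Sketch (assumption): the sparse key-point cue of a model-based pose
# tracker, computed with a standard RANSAC PnP step. The dense motion and
# stereo cues and the model feedback loop of the paper are not shown.
import numpy as np
import cv2

def sparse_pose_update(model_points_3d, image_points_2d, camera_matrix):
    """6DOF pose from Nx3 model points matched to Nx2 image points."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_points_3d.astype(np.float64),
        image_points_2d.astype(np.float64),
        camera_matrix, None, reprojectionError=3.0)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec, inliers
```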


Workshop on Applications of Computer Vision | 2012

Depth-supported real-time video segmentation with the Kinect

Alexey Abramov; Karl Pauwels; Jeremie Papon; Florentin Wörgötter; Babette Dellen

We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU, utilizing both color and depth information acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model, and tracking of segments is achieved by warping the obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations the Metropolis method requires to reach the new equilibrium state. By including depth information in the framework, true object boundaries can be found more easily, which also improves the temporal coherence of the method. The algorithm has been tested on medium-resolution videos showing humans manipulating objects. The framework provides an inexpensive visual front end for the preprocessing of videos in industrial settings and robotics labs, with potential use in various applications.
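
At the core of the method is a Metropolis sampler over a Potts model in which neighboring pixels are coupled more strongly the more similar their color and depth are, so label boundaries settle preferentially on real object boundaries. A serial, unoptimized sketch of one such sweep (the paper's parallel GPU sampler and the flow-based label warping are omitted; all weights are assumptions):

```python
# Sketch (assumption): one serial Metropolis sweep over a Potts-style
# segmentation energy with colour/depth-dependent neighbour couplings.
import numpy as np

def metropolis_sweep(labels, color, depth, n_labels,
                     w_color=10.0, w_depth=5.0, temperature=0.5, rng=None):
    """labels: (H, W) int array, modified in place; color: (H, W, 3)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = labels.shape
    neighbours = ((-1, 0), (1, 0), (0, -1), (0, 1))

    def energy(y, x, lab):
        # Pay the similarity-weighted coupling for each disagreeing neighbour.
        e = 0.0
        for dy, dx in neighbours:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != lab:
                dc = np.linalg.norm(color[y, x] - color[ny, nx])
                dz = abs(depth[y, x] - depth[ny, nx])
                e += np.exp(-w_color * dc - w_depth * dz)
        return e

    for y in range(h):
        for x in range(w):
            proposal = int(rng.integers(n_labels))
            delta = energy(y, x, proposal) - energy(y, x, labels[y, x])
            # Metropolis rule: always accept downhill moves, sometimes uphill.
            if delta <= 0 or rng.random() < np.exp(-delta / temperature):
                labels[y, x] = proposal
    return labels
```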


NeuroImage | 2013

Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

Iole Indovina; Vincenzo Maffei; Karl Pauwels; Emiliano Macaluso; Guy A. Orban; Francesco Lacquaniti

Multiple visual signals are relevant to the perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues to the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or with 1g acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have previously been associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued only by the preceding open-air curves. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions, including the parahippocampus and hippocampus, were more activated by horizontal motion, preferentially at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical).


Journal of Vision | 2010

A cortical architecture on parallel hardware for motion processing in real time

Karl Pauwels; Norbert Krüger; Markus Lappe; Florentin Wörgötter; Marc M. Van Hulle

Walking through a crowd or driving on a busy street requires monitoring your own movement and that of others. The segmentation of these other, independently moving objects is one of the most challenging tasks in vision, as it requires fast and accurate computations to disentangle independent motion from egomotion, often in cluttered scenes. The brain accomplishes this in the dorsal visual stream through heavy parallel-hierarchical processing across many areas. This study is the first to exploit the potential of such a design in an artificial vision system. We emulate large parts of the dorsal stream in an abstract way and implement an architecture with six interdependent feature extraction stages (e.g., edges, stereo, optical flow). The computationally highly demanding combination of these features is used to reliably extract moving objects in real time. In this way, utilizing the advantages of a parallel-hierarchical design, we arrive at a novel and powerful artificial vision system that approaches the richness, speed, and accuracy of visual processing in biological systems.
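
The central computation here, disentangling independent motion from egomotion, reduces in its simplest form to comparing the measured optical flow with the flow predicted from the camera's own motion and the scene depth. A minimal sketch using the instantaneous (Longuet-Higgins/Prazdny) motion model, assuming known egomotion and a calibrated focal length; sign conventions vary, and the paper's six-stage architecture is far richer than this:

```python
# Sketch (assumption): flag independently moving pixels as those whose
# measured flow deviates from the flow predicted by egomotion and depth.
import numpy as np

def egomotion_flow(depth, f, t, omega):
    """Predicted flow for camera translation t=(tx,ty,tz), rotation omega."""
    h, w = depth.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    x -= w / 2.0  # image coordinates relative to the principal point
    y -= h / 2.0
    tx, ty, tz = t
    wx, wy, wz = omega
    u = (x * tz - f * tx) / depth + x * y * wx / f - (f + x**2 / f) * wy + y * wz
    v = (y * tz - f * ty) / depth + (f + y**2 / f) * wx - x * y * wy / f - x * wz
    return u, v

def independent_motion_mask(flow_u, flow_v, depth, f, t, omega, thresh=1.0):
    pred_u, pred_v = egomotion_flow(depth, f, t, omega)
    # Residual flow beyond the threshold indicates independent motion.
    return np.hypot(flow_u - pred_u, flow_v - pred_v) > thresh
```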


Intelligent Robots and Systems | 2015

SimTrack: A simulation-based framework for scalable real-time object pose detection and tracking

Karl Pauwels; Danica Kragic

We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene, consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplifies the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as part of a general ROS software framework for real-time pose estimation, SimTrack, which can easily be integrated into different robotic applications.
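
The benefit of maintaining a single simulated scene is that occlusion reasoning becomes a rendering query: a cue extracted for an object in a given camera is only trusted where that object is actually the nearest visible surface. A minimal sketch of this masking plus a crude visibility-weighted fusion (illustrative only; this is not the SimTrack API, and the paper fuses cues with a constrained optimization rather than a weighted average):

```python
# Sketch (assumption, not the SimTrack API): per-camera visibility masks
# from rendered depth, and a crude visibility-weighted fusion of cues.
import numpy as np

def visibility_mask(object_depth, scene_depth, tol=1e-3):
    """object_depth: depth render of one object alone; scene_depth: depth
    render of the full scene (all objects plus robot) from one camera.
    Background pixels are assumed to be +inf in both renders."""
    covered = np.isfinite(object_depth)             # object projects here
    unoccluded = object_depth <= scene_depth + tol  # and nothing is in front
    return covered & unoccluded

def fuse_cues(cues, masks):
    """Average per-camera cue maps, counting only visible pixels."""
    cues = np.stack(cues)
    weights = np.stack(masks).astype(np.float64)
    total = weights.sum(axis=0)
    fused = (cues * weights).sum(axis=0) / np.maximum(total, 1.0)
    return np.where(total > 0, fused, 0.0)
```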


NeuroImage | 2013

Dissimilar processing of emotional facial expressions in human and monkey temporal cortex

Qi-Yong Zhu; Koen Nelissen; Jan Van den Stock; François-Laurent De Winter; Karl Pauwels; Beatrice de Gelder; Wim Vanduffel; Mathieu Vandenbulcke

Emotional facial expressions play an important role in social communication across primates. Despite major progress in our understanding of categorical information processing, such as for objects and faces, little is known about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing), and configuration (intact versus scrambled) as factors. At the whole-brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions: a face-selective right posterior STS area that also responded to emotional expressions of other species, and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside the face-selective regions of inferior temporal (IT) cortex, in areas that also responded to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions, but these responses were not as specific as in human middle STS. Overall, our results indicate that the human STS may have developed unique properties to deal with social cues such as emotional expressions.


Image and Vision Computing | 2009

Optic flow from unstable sequences through local velocity constancy maximization

Karl Pauwels; Marc M. Van Hulle

We introduce a novel video stabilization method that enables the extraction of optic flow from short unstable sequences. Contrary to traditional stabilization techniques, which use approximate global motion models to estimate the full camera motion, our method estimates only the unstable component of the camera motion. This allows for the use of simpler global motion models and, at the same time, extends the validity to more complex environments, such as close scenes that contain independently moving objects. The unstable component of the camera motion is derived by maximizing the temporal local velocity constancy over the entire short sequence. The method, embedded within a phase-based optic flow algorithm, is tested on both synthetic and complex real-world sequences. The optic flow obtained with our technique is denser than that extracted directly from the original sequence or from a sequence stabilized with a more traditional stabilization technique.
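
A crude stand-in for the velocity-constancy idea: treat the median flow per frame as the global camera shift, split the accumulated camera path into a smooth intentional component and a high-frequency unstable component, and remove only the latter. The sketch below assumes grayscale frames and precomputed frame-to-frame flow; the median/low-pass choices and the window size are assumptions, not the paper's maximization:

```python
# Sketch (assumption): remove only the unstable (high-frequency) component
# of the global camera motion, leaving intentional motion untouched.
import numpy as np
from scipy.ndimage import uniform_filter1d, shift as nd_shift

def stabilize(frames, flows_u, flows_v, window=5):
    """frames: list of T images; flows_*: (T-1) flow fields between frames."""
    # Global inter-frame shift, summarized as the median of each flow field.
    du = np.array([np.median(u) for u in flows_u])
    dv = np.array([np.median(v) for v in flows_v])
    # Cumulative camera path and its low-pass (intentional) version.
    path_u, path_v = np.cumsum(du), np.cumsum(dv)
    jitter_u = path_u - uniform_filter1d(path_u, size=window)
    jitter_v = path_v - uniform_filter1d(path_v, size=window)
    # Shift each frame against its jitter; the first frame is the anchor.
    out = [frames[0]]
    for t in range(1, len(frames)):
        out.append(nd_shift(frames[t], (-jitter_v[t - 1], -jitter_u[t - 1]),
                            order=1, mode="nearest"))
    return out
```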

Collaboration


Dive into Karl Pauwels's collaborations.

Top Co-Authors

Marc M. Van Hulle (Katholieke Universiteit Leuven)
Danica Kragic (Royal Institute of Technology)
Norbert Krüger (University of Southern Denmark)