
Publications


Featured research published by Ronald Poppe.


International Conference on Automatic Face and Gesture Recognition | 2006

Comparison of silhouette shape descriptors for example-based human pose recovery

Ronald Poppe; Mannes Poel

Automatically recovering human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare three shape descriptors that are used in the encoding of silhouettes: Fourier descriptors, shape contexts and Hu moments. An example-based approach is taken to recover upper body poses from these descriptors. We perform experiments with deformed silhouettes to test each descriptor's robustness against variations in body dimensions, viewpoint and noise. It is shown that Fourier descriptors and shape context histograms outperform Hu moments for all deformations.
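
As a rough illustration of one of the compared descriptors, here is a minimal Fourier-descriptor sketch; the contour sampling, number of coefficients and normalization are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    # contour: (N, 2) array of silhouette boundary points, read as complex numbers
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    # Drop the DC term for translation invariance; divide by the first
    # harmonic's magnitude for scale invariance
    coeffs = coeffs[1:n_coeffs + 1]
    return np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)

# A circle and a translated, scaled copy yield near-identical descriptors
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
moved = 3.0 * circle + np.array([10.0, -5.0])
print(np.allclose(fourier_descriptors(circle), fourier_descriptors(moved), atol=1e-6))  # True
```

Using only coefficient magnitudes additionally gives invariance to the starting point of the contour traversal, which is one reason such descriptors tolerate the silhouette deformations studied in the paper.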


International Conference on Multimodal Interfaces | 2014

Touching the Void -- Introducing CoST: Corpus of Social Touch

Merel Madeleine Jung; Ronald Poppe; Mannes Poel; Dirk Heylen

Touch behavior is of great importance during social interaction. To transfer the tactile modality from interpersonal interaction to other areas such as Human-Robot Interaction (HRI) and remote communication, automatic recognition of social touch is necessary. This paper introduces CoST: Corpus of Social Touch, a collection containing 7805 instances of 14 different social touch gestures. The gestures were performed in three variations: gentle, normal and rough, on a sensor grid wrapped around a mannequin arm. Recognition of the rough variations of these 14 gesture classes using Bayesian classifiers and Support Vector Machines (SVMs) resulted in an overall accuracy of 54% and 53%, respectively. Furthermore, this paper provides more insight into the challenges of automatic recognition of social touch gestures, including which gestures can be recognized more easily and which are more difficult to recognize.
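
Recognition pipelines for such data typically start from simple statistics of the pressure signal. A minimal featurizer sketch follows; the 8x8 grid size and the particular features are assumptions for illustration, not the corpus's actual sensor layout or the paper's feature set:

```python
import numpy as np

def touch_features(frames):
    # frames: (T, 8, 8) array of pressure samples over time
    # (an 8x8 sensor grid is assumed here purely for illustration)
    per_frame = frames.reshape(len(frames), -1)
    total = per_frame.sum(axis=1)                 # total pressure per frame
    contact = (per_frame.max(axis=0) > 0).sum()   # cells ever touched
    return np.array([
        total.mean(),        # mean pressure over the gesture
        total.max(),         # peak pressure
        float(len(frames)),  # gesture duration in frames
        float(contact),      # contact area
    ])

# A synthetic 'gentle poke': 10 frames of pressure on a single cell
gesture = np.zeros((10, 8, 8))
gesture[:, 3, 3] = 0.5
print(touch_features(gesture))  # [0.5, 0.5, 10.0, 1.0]
```

Feature vectors like this could then be fed to the Bayesian classifiers or SVMs mentioned in the abstract.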


Pattern Recognition | 2016

Weighted local intensity fusion method for variational optical flow estimation

Zhigang Tu; Ronald Poppe; Remco C. Veltkamp

Estimating a dense motion field of successive video frames is a fundamental problem in image processing. The multi-scale variational optical flow method is a critical technique that addresses this issue. Despite the considerable progress over the past decades, there are still some challenges, such as dealing with large displacements and estimating the smoothness parameter. We present a local intensity fusion (LIF) method to tackle these difficulties. By evaluating the local interpolation error in terms of L1 block match on the corresponding set of images, we fuse flow proposals obtained from different methods and from different parameter settings under a unified LIF. This approach has two benefits: (1) the incorporated matching information is helpful to recover large displacements; and (2) the obtained optimal fusion solution gives a tradeoff between the data term and the smoothness term. In addition, a selective gradient based weight is introduced to improve the performance of the LIF. Finally, we propose a corrected weighted median filter (CWMF), which applies the motion information to correct errors of the color distance weight to denoise the intermediate flow fields during optimization. Experiments demonstrate the effectiveness of our method.

Highlights:
- A LIF is proposed to handle both large and small motion, and to estimate the smoothness parameter.
- A selective gradient is introduced into the LIF to reduce errors that are caused by outliers.
- A CWMF is designed to overcome the defect of the traditional WMF.
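
The corrected weighted median filter builds on the plain weighted median. As a reminder of that underlying operation, here is a minimal weighted-median sketch; the CWMF's motion-based correction of the color distance weights is the paper's contribution and is not reproduced here:

```python
import numpy as np

def weighted_median(values, weights):
    # Sort by value; the weighted median is the first value at which
    # the cumulative weight reaches half of the total weight.
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cw = np.cumsum(w)
    return v[np.searchsorted(cw, 0.5 * cw[-1])]

# Equal weights reduce to the ordinary median; a dominant weight
# pulls the result toward its value.
print(weighted_median([1, 2, 3], [1, 1, 1]))   # 2.0
print(weighted_median([1, 2, 3], [10, 1, 1]))  # 1.0
```

In flow denoising, such a filter is applied per pixel over a neighborhood of flow vectors, with weights derived from color similarity; errors in those weights are what the CWMF corrects.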


Proceedings of the International Workshop on Human Behavior Understanding (HBU) | 2014

Dyadic Interaction Detection from Pose and Flow

Coert van Gemeren; Robby T. Tan; Ronald Poppe; Remco C. Veltkamp

We propose a method for detecting dyadic interactions: fine-grained, coordinated interactions between two people. Our model is capable of recognizing interactions such as a hand shake or a high five, and locating them in time and space. At the core of our method is a pictorial structures model that additionally takes into account the fine-grained movements around the joints of interest during the interaction. Compared to a bag-of-words approach, our method not only allows us to detect the specific type of actions more accurately, but it also provides the specific location of the interaction. The model is trained with both video data and body joint estimates obtained from Kinect. During testing, only video data is required. To demonstrate the efficacy of our approach, we introduce the ShakeFive dataset that consists of videos and Kinect data of hand shake and high five interactions. On this dataset, we obtain a mean average precision of 49.56%, outperforming a bag-of-words approach by 23.32%. We further demonstrate that the model can be learned from just a few interactions.


Pattern Recognition | 2017

Variational method for joint optical flow estimation and edge-aware image restoration

Zhigang Tu; Wei Xie; Jun Cao; Coert van Gemeren; Ronald Poppe; Remco C. Veltkamp

The most popular optical flow algorithms rely on optimizing an energy function that integrates a data term and a smoothness term. In contrast to this traditional framework, we derive a new objective function that couples optical flow estimation and image restoration. Our method is inspired by the recent successes of edge-aware constraints (EAC) in preserving edges in general gradient domain image filtering. By incorporating an EAC image fidelity term (IFT) in the conventional variational model, the new energy function can simultaneously estimate optical flow and restore images with preserved edges, in a bidirectional manner. For the energy minimization, we rewrite the EAC into gradient form and optimize the IFT with Euler-Lagrange equations. We can thus apply the image restoration by analytically solving a system of linear equations. Our EAC-combined IFT is easy to implement and can be seamlessly integrated into various optical flow functions suggested in the literature. Extensive experiments on public optical flow benchmarks demonstrate that our method outperforms the current state-of-the-art in optical flow estimation and image restoration.

Highlights:
- An EAC-based IFT is incorporated into the variational model to form a new energy function, which can estimate optical flow and restore images jointly.
- The EAC can be rewritten into first-order gradient form, which is beneficial for preserving edges and for minimization.
- Input images can be quickly restored by optimizing the Euler-Lagrange equations of the EAC-integrated IFT.
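
For context, the conventional variational model the abstract refers to minimizes an energy of the following general form; the notation here is the standard one from the optical flow literature, not copied from the paper:

```latex
E(u, v) = \int_{\Omega} \Psi\!\left( \left| I_2(\mathbf{x} + \mathbf{w}) - I_1(\mathbf{x}) \right|^2 \right) \, \mathrm{d}\mathbf{x}
        \;+\; \alpha \int_{\Omega} \Psi\!\left( |\nabla u|^2 + |\nabla v|^2 \right) \, \mathrm{d}\mathbf{x}
```

Here \(\mathbf{w} = (u, v)\) is the flow field between images \(I_1\) and \(I_2\), \(\Psi\) is a robust penalty function, and \(\alpha\) weights the smoothness term against the data term. The paper's contribution is to augment this energy with an additional EAC image fidelity term so that the images themselves are restored jointly with the flow.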


Entertainment Computing | 2016

Augmenting playspaces to enhance the game experience: A tag game case study

Alejandro Moreno; Robby van Delden; Ronald Poppe; Dennis Reidsma; Dirk Heylen

Introducing technology into games can improve players’ game experience. However, it can also reduce the amount of physical activity and social interaction. In this article, we discuss how we enhance the game of tag with technology such that physical and social characteristics of the game are retained. We first present an analysis of the behavior of children playing traditional tag games. Based on these observations, we designed the Interactive Tag Playground (ITP), an interactive installation that uses tracking and floor projections to enhance the game of tag. We evaluate the ITP in one user study with adults and one with children. We compare players’ reported experiences when playing both traditional and interactive tag. Players report significantly higher engagement and immersion when playing interactive tag. We also use tracking data collected automatically to quantitatively analyze player behavior in both tag games. Players exhibit similar patterns of physical activity and interactions in both game types. We can therefore conclude that interactive technology can be used to make traditional games more engaging, without losing the social and physical character of the game.


Journal of Electronic Imaging | 2015

Estimating accurate optical flow in the presence of motion blur

Zhigang Tu; Ronald Poppe; Remco C. Veltkamp

Spatially varying motion blur in video results from the relative motion of a camera and the scene. How to estimate accurate optical flow in the presence of spatially varying motion blur has received little attention so far. We extend the classical warping-based variational optical flow method to deal with this issue. First, we modify the data term by matching the identified nonuniform motion blur between the input images according to a fast blur detection and deblurring technique. Importantly, a downsample-interpolation technique is proposed to improve the blur detection efficiency, which saves 75% or more running time. Second, we improve the edge-preserving regularization term at blurry motion boundaries to reduce boundary errors that are caused by blur. The proposed method is evaluated on both synthetic and real sequences, and yields improved overall performance compared to the state-of-the-art in handling motion blur.


Signal Processing | 2016

Adaptive guided image filter for warping in variational optical flow computation

Zhigang Tu; Ronald Poppe; Remco C. Veltkamp

The variational optical flow method is considered to be the standard method to calculate an accurate dense motion field between successive frames. It assumes that the energy function has spatiotemporal continuities and appearance motions are small. However, for real image sequences, the temporal continuity assumption is often violated due to outliers and occlusions, causing inaccurate flow vectors at these regions. After each warping operation, errors are generated at the corresponding regions of the warped interpolation image. This results in an inaccurate discrete approximation of the temporal derivative and thus ends up affecting the accuracy of the estimated flow field. In this paper, we propose an adaptive guided image filter to correct these errors in the warped interpolation image. A guidance image is reconstructed by considering both the feature of the reference image as well as the difference between the warped interpolation image and the reference image, to guide the filtering of the warped interpolation image. To adjust the smoothing degree, the regularization parameter in the guided image filter is adaptively selected based on a confidence measure. Extensive experiments on different datasets and comparison with state-of-the-art variational optical flow algorithms demonstrate the effectiveness of our method.

Highlights:
- An adaptive guided image filter is introduced to correct errors in the intermediate warped interpolation image.
- A guidance image is reconstructed as a combination of the reference image and the warped image.
- The regularization parameter is adaptively selected based on a confidence measure to adjust the smoothing degree.
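
The guided image filter in its basic, non-adaptive form can be sketched in a few lines; the adaptive per-pixel selection of the regularization parameter described above is the paper's contribution and is not reproduced here. This sketch assumes grayscale float images:

```python
import numpy as np

def box_mean(a, r):
    # Mean over a (2r+1)x(2r+1) window; simple O(r^2) shifted-sum
    # implementation with reflected padding at the borders.
    p = np.pad(a, r, mode='reflect')
    out = np.zeros_like(a, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    # Guided filter: the output is a locally linear transform a*I + b
    # of the guidance image I, fitted to the input p in each window.
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I * mean_I
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # eps is the regularization (smoothing) parameter
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

A larger `eps` smooths more aggressively, which is exactly the degree of smoothing the paper selects adaptively from a confidence measure.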


International Conference on Image Analysis and Processing | 2015

Automated Recognition of Social Behavior in Rats: The Role of Feature Quality

Malte Lorbach; Ronald Poppe; Elsbeth A. van Dam; Lucas P. J. J. Noldus; Remco C. Veltkamp

We investigate how video-based recognition of rat social behavior is affected by the quality of the tracking data and the derived feature set. We look at the impact of two common tracking errors – animal misidentification and inaccurate localization of body parts. We further examine how the complexity of representing the articulated body in the features influences the recognition accuracy. Our analyses show that correct identification of the rats is required to accurately recognize their interactions. Precise localization of multiple body points is beneficial for recognizing interactions that are described by a distinct pose. Including pose features only leads to improvement if the tracking algorithm can provide that data reliably.


Pattern Recognition | 2018

Multi-stream CNN: Learning representations based on human-related regions for action recognition

Zhigang Tu; Wei Xie; Qianqing Qin; Ronald Poppe; Remco C. Veltkamp; Baoxin Li; Junsong Yuan

The most successful video-based human action recognition methods rely on feature representations extracted using Convolutional Neural Networks (CNNs). Inspired by the two-stream network (TS-Net), we propose a multi-stream Convolutional Neural Network (CNN) architecture to recognize human actions. We additionally consider human-related regions that contain the most informative features. First, by improving foreground detection, the region of interest corresponding to the appearance and the motion of an actor can be detected robustly under realistic circumstances. Based on the entire detected human body, we construct one appearance and one motion stream. In addition, we select a secondary region that contains the major moving part of an actor based on motion saliency. By combining the traditional streams with the novel human-related streams, we introduce a human-related multi-stream CNN (HR-MSCNN) architecture that encodes appearance, motion, and the captured tubes of the human-related regions. Comparative evaluation on the JHMDB, HMDB51, UCF Sports and UCF101 datasets demonstrates that the streams contain features that complement each other. The proposed multi-stream architecture achieves state-of-the-art results on these four datasets.
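
Stream outputs in such architectures are commonly combined by late fusion of per-stream class scores. A minimal sketch of weighted late fusion follows; the stream weights and scores here are illustrative, and the paper's exact fusion scheme may differ:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fuse(stream_logits, weights=None):
    # Weighted average of per-stream softmax scores over the action classes
    probs = np.stack([softmax(l) for l in stream_logits])
    w = np.ones(len(stream_logits)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1)

# Two streams over 3 action classes: appearance favors class 0,
# motion favors class 2; equal weights give the consensus.
appearance = np.array([2.0, 0.5, 0.1])
motion = np.array([0.1, 0.5, 1.0])
fused = late_fuse([appearance, motion])
print(fused.argmax())  # 0 (the appearance stream's confidence dominates)
```

Non-uniform weights allow the more reliable streams, such as those cropped to human-related regions, to count more in the final decision.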
