Ralph Braspenning
Philips
Publication
Featured research published by Ralph Braspenning.
ieee international conference on automatic face & gesture recognition | 2008
Tommaso Gritti; Caifeng Shan; Vincent Jeanne; Ralph Braspenning
In this paper, we extensively investigate local-feature-based facial expression recognition under face registration errors, an issue that has not been addressed before. Our contributions are threefold. First, we propose and experimentally study histogram of oriented gradients (HOG) descriptors for facial representation. Second, we present facial representations based on local binary patterns (LBP) and local ternary patterns (LTP) extracted from overlapping local regions. Third, we quantitatively study the impact of face registration errors on facial expression recognition using the different facial representations. Overall, LBP with overlapping regions gives the best performance (92.9% recognition rate on the Cohn-Kanade database) while maintaining a compact feature vector and the best robustness against face registration errors.
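The LBP-with-overlapping-regions representation described above can be sketched as follows. This is a minimal illustration assuming a grayscale NumPy image; it uses basic 8-neighbour LBP codes without refinements such as uniform patterns, and the region and step sizes are illustrative, not taken from the paper:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP codes for the image interior (no interpolation)."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # neighbour offsets, one bit per neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

def lbp_descriptor(gray, region=16, step=8):
    """Concatenate LBP histograms over overlapping regions (step < region)."""
    codes = lbp_image(gray)
    h, w = codes.shape
    hists = []
    for y in range(0, h - region + 1, step):
        for x in range(0, w - region + 1, step):
            patch = codes[y:y + region, x:x + region]
            hists.append(np.bincount(patch.ravel(), minlength=256))
    return np.concatenate(hists)
```

Overlapping regions (step smaller than the region size) are what give the descriptor its tolerance to small registration errors: a slightly shifted face still contributes similar patterns to neighbouring histograms.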
international conference on image processing | 2001
Christian Hentschel; Ralph Braspenning; Maria Gabrani
Since programmable platforms have a fixed number of resources, the number of algorithms that can run in parallel is limited. We propose to overcome this by introducing scalable algorithms that are capable of trading resource usage for output quality. We show the feasibility of this approach by means of an implementation example, namely scalable sharpness enhancement for video signals.
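The idea of trading resource usage for output quality can be illustrated with a toy scalable sharpener; the `level` knob and the row-skipping scheme are assumptions for illustration, not the sharpness-enhancement method of the paper:

```python
import numpy as np

def scalable_sharpen(img, level):
    """Unsharp masking where `level` in (0, 1] controls the fraction of
    rows that actually get enhanced -- a toy resource/quality trade-off."""
    f = img.astype(int)
    out = f.astype(float)
    step = max(1, int(round(1 / level)))  # lower level -> fewer rows processed
    for y in range(1, f.shape[0] - 1, step):
        # vertical Laplacian along the row, added back as a sharpening term
        lap = 2 * f[y, 1:-1] - f[y - 1, 1:-1] - f[y + 1, 1:-1]
        out[y, 1:-1] += 0.5 * lap
    return np.clip(out, 0, 255).astype(np.uint8)
```

Halving `level` roughly halves the arithmetic spent, while the output degrades gracefully rather than failing outright when resources run short.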
ambient intelligence | 2010
Caifeng Shan; Ralph Braspenning
Facial expressions, resulting from movements of the facial muscles, are the changes in the face in response to a person's internal emotional states, intentions, or social communications. The study of facial expressions has a considerable history. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main channel of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.
visual communications and image processing | 2002
Ralph Braspenning; Gerard De Haan; Christian Hentschel
Complexity scalable algorithms are capable of trading resource usage for output quality in a near-optimal way. We present a complexity scalable motion estimation algorithm based on the 3-D recursive search block matcher. We introduce data prioritizing as a new approach to scalability. With this approach, we achieve near-constant complexity and a continuous quality-resource trade-off. While maintaining acceptable quality, the resource usage can be varied from below 1 match-error calculation per block on average to more than 5.
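The candidate-evaluation core of a 3-D recursive search block matcher can be sketched roughly as below. The candidate set (spatial predictors plus a random update) follows the general 3DRS idea, but the simple `budget` cap standing in for the paper's data-prioritizing scheme is an assumption for illustration:

```python
import numpy as np

def sad(prev, cur, y, x, dy, dx, B=8):
    """Sum of absolute differences between a block in `cur` and its
    displaced counterpart in `prev`; infinite cost outside the frame."""
    h, w = cur.shape
    py, px = y + dy, x + dx
    if py < 0 or px < 0 or py + B > h or px + B > w:
        return np.inf
    return np.abs(cur[y:y + B, x:x + B].astype(int)
                  - prev[py:py + B, px:px + B].astype(int)).sum()

def estimate_block(prev, cur, y, x, field, budget, B=8):
    """Evaluate up to `budget` candidate vectors for one block and return
    the best -- the scalability knob is simply how many match-error
    calculations are allowed per block."""
    by, bx = y // B, x // B
    candidates = [(0, 0)]                          # zero vector
    if bx > 0:
        candidates.append(tuple(field[by, bx - 1]))  # left spatial predictor
    if by > 0:
        candidates.append(tuple(field[by - 1, bx]))  # top spatial predictor
    cy, cx = candidates[-1]                          # random update candidate
    candidates.append((cy + np.random.randint(-1, 2),
                       cx + np.random.randint(-1, 2)))
    best, best_err = (0, 0), np.inf
    for dy, dx in candidates[:budget]:
        err = sad(prev, cur, y, x, dy, dx, B)
        if err < best_err:
            best, best_err = (dy, dx), err
    return best
```

Shrinking `budget` below the candidate count is the crude version of the trade-off: fewer match-error calculations per block, at some cost in vector quality.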
systems, man and cybernetics | 2010
Mirela C. Popa; Léon J. M. Rothkrantz; Zhenke Yang; Pascal Wiggers; Ralph Braspenning; Caifeng Shan
Closed-circuit television systems in shopping malls could be used to monitor the shopping behavior of people. From the tracked path, features can be extracted such as the relation to the shopping area, the orientation of the head, walking speed and direction, and pauses, which are assumed to be related to the interest of the shopper. Once interest has been detected, the next step is to assess the shopper's positive or negative appreciation of the focused products by analyzing the (non-verbal) behavior of the shopper. Ultimately, the goal of the system is to assess opportunities for selling by detecting whether a customer needs support. In this paper we present our methodology for developing such a system, consisting of participatory observation, designing shopping behavioral models, assessing the associated features, and analyzing the underlying technology. To validate our observations, we made recordings in our shop lab. Finally, we describe the tracking technology used and the results of our experiments.
visual communications and image processing | 2006
Ahmet Ekin; Ralph Braspenning
This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently of their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent or animated ones, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content, which is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms, one per image corner, because the content characteristics change from one corner to another. A further novelty of the proposed algorithm is that we define the image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule, which is used by professionals. The robustness of the proposed algorithm is demonstrated on a dataset of representative TV content.
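The outlier-from-scene-histogram idea can be sketched as follows; the bin count, threshold, and function names are illustrative assumptions, and the Golden-Section corner geometry is omitted for brevity:

```python
import numpy as np

def scene_histogram(ycbcr_region, bins=8):
    """Coarse normalized 3-D colour histogram of one corner region
    (YCbCr pixels, `bins` levels per axis)."""
    q = (ycbcr_region // (256 // bins)).reshape(-1, 3)
    hist = np.zeros((bins, bins, bins))
    for y, cb, cr in q:
        hist[y, cb, cr] += 1
    return hist / q.shape[0]

def logo_mask(ycbcr_region, hist, bins=8, thresh=0.01):
    """Mark pixels whose colour is rare under the scene histogram as
    potential logo pixels (outliers from the scene content)."""
    q = ycbcr_region // (256 // bins)
    probs = hist[q[..., 0], q[..., 1], q[..., 2]]
    return probs < thresh
```

A small, stable overlay occupies only a tiny fraction of the corner's pixels, so its colours fall into low-probability histogram bins regardless of whether the logo moves or is transparent.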
visual communications and image processing | 2007
André Redert; Robert-Paul Berretty; Chris Varekamp; Bart van Geest; Jan Bruijns; Ralph Braspenning; Qingqing Wei
Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specifically for 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.
international conference on distributed smart cameras | 2008
Anteneh A. Abbo; Vincent Jeanne; Martin Ouwerkerk; Caifeng Shan; Ralph Braspenning; Abhiram Ganesh; Henk Corporaal
Recent developments in the field of facial expression recognition advocate the use of feature vectors based on local binary patterns (LBP). Research on the algorithmic side addresses robustness issues when dealing with non-ideal illumination conditions. In this paper, we address the challenges of mapping these algorithms onto smart camera platforms. Algorithmic partitioning that takes the camera architecture into account is investigated, with a primary focus on keeping the power consumption low. Experimental results show that compute-intensive feature extraction tasks can be mapped onto a massively parallel processor with reasonable processor utilization. Although the final feature classification phase could also benefit from parallel processing, mapping it onto a general-purpose sequential processor suffices.
international conference on distributed smart cameras | 2008
Jorge Baranda; Vincent Jeanne; Ralph Braspenning
In this paper we investigate improvements to the efficiency of human body detection using histograms of oriented gradients (HOG), without significantly compromising detection performance. This is especially relevant for embedded implementations in smart camera systems, where on-board processing power and memory are limited. We focus on applications in indoor environments such as offices and living rooms. We present different experiments to reduce both the computational complexity and the memory requirements of the trained model. Since the HOG feature vector is long, the total memory needed to store the model can exceed 50 MB. We use feature selection based on Bayesian theory to reduce the feature length. Additionally, we compare the performance of the full-body detector with an upper-body-only detector. For computational complexity reduction we employ an ROI-based approach.
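As one simple stand-in for the Bayesian feature selection mentioned above (not the paper's actual criterion), feature dimensions can be ranked by a per-dimension class-separability score and only the top-k HOG dimensions retained, shrinking both the model and the per-window work:

```python
import numpy as np

def fisher_scores(X_pos, X_neg):
    """Per-feature separability between positive (person) and negative
    (background) training windows; higher means more discriminative."""
    mp, mn = X_pos.mean(axis=0), X_neg.mean(axis=0)
    vp, vn = X_pos.var(axis=0), X_neg.var(axis=0)
    return (mp - mn) ** 2 / (vp + vn + 1e-12)

def select_features(X_pos, X_neg, k):
    """Indices of the k most discriminative feature dimensions."""
    return np.argsort(fisher_scores(X_pos, X_neg))[::-1][:k]
```

Keeping, say, the top 10% of dimensions shrinks both the stored model and the dot products evaluated per detection window by the same factor.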
Proceedings of the Workshop on Use of Context in Vision Processing | 2009
Hamid K. Aghajan; Ralph Braspenning; Yuri Ivanov; Louis-Philippe Morency; Anton Nijholt; Maja Pantic; Ming-Hsuan Yang
Recent efforts in defining ambient intelligence applications based on user-centric concepts, the advent of technology in different sensing modalities, and the expanding interest in multi-modal information fusion and situation-aware, dynamic vision processing algorithms have created a common motivation across research disciplines to use context as a key enabler of application-oriented vision system design. Improved robustness, efficient use of sensing and computing resources, dynamic task assignment to different operating modules, and adaptation to event and user behavior models are among the benefits a vision processing system can gain from contextual information. The Workshop on Use of Context in Vision Processing (UCVP) aims to address the opportunities for incorporating contextual information in algorithm design for single- or multi-camera vision systems, as well as systems in which vision is complemented by other sensing modalities, such as audio, motion, proximity, and occupancy.