Léon J. M. Rothkrantz
Delft University of Technology
Publications
Featured research published by Léon J. M. Rothkrantz.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000
Maja Pantic; Léon J. M. Rothkrantz
Humans detect and interpret faces and facial expressions in a scene with little or no effort, yet developing an automated system that accomplishes this task is rather difficult. There are several related problems: detecting an image segment as a face, extracting the facial expression information, and classifying the expression (e.g., into emotion categories). A system that performs these operations accurately and in real time would be a major step toward human-like interaction between man and machine. This paper surveys past work on these problems. The capability of the human visual system with respect to these problems is also discussed; it is meant to serve as an ultimate goal and a guide for recommendations on developing an automatic facial expression analyzer.
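The survey frames automatic facial expression analysis as a three-stage pipeline: face detection, feature extraction, and expression classification. A minimal sketch of that decomposition is below, using OpenCV's stock Haar-cascade detector as a stand-in for the first stage; the extract_features and classify_expression arguments are hypothetical placeholders, not methods from the survey.

```python
# Illustrative decomposition into the three stages the survey identifies:
# face detection -> feature extraction -> expression classification.
# OpenCV's stock Haar cascade stands in for the detection stage; the
# extract_features and classify_expression callables are hypothetical.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyze(image_bgr, extract_features, classify_expression):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):  # stage 1
        features = extract_features(gray[y:y + h, x:x + w])       # stage 2
        results.append(classify_expression(features))             # stage 3
    return results
```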
Adaptive Behavior | 1996
Ruud Schoonderwoerd; Janet Bruten; Owen Holland; Léon J. M. Rothkrantz
This article describes a novel method of achieving load balancing in telecommunications networks. A simulated network models a typical distribution of calls between nodes; nodes carrying an excess of traffic can become congested, causing calls to be lost. In addition to calls, the network also supports a population of simple mobile agents with behaviors modeled on the trail-laying abilities of ants. The ants move across the network between randomly chosen pairs of nodes; as they move, they deposit simulated pheromone as a function of their distance from their source node and the congestion encountered on their journey. They select their path at each intermediate node according to the distribution of simulated pheromone at each node. Calls between nodes are routed as a function of the pheromone distributions at each intermediate node. The performance of the network is measured by the proportion of calls that are lost. The results of using ant-based control (ABC) are compared with those achieved by using fixed shortest-path routes, and also those achieved by using an alternative algorithmically based type of mobile agent previously proposed for use in network management. The ABC system is shown to result in fewer call failures than the other methods, while exhibiting many attractive features of distributed control.
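A rough sketch of the ant-based control (ABC) idea follows: each node keeps a pheromone table per destination, ants reinforce the reverse route in proportion to trip quality, and next hops are drawn in proportion to pheromone. The deposit and evaporation formulas and parameter values here are illustrative assumptions, not the article's.

```python
# Toy sketch of ant-based control (ABC). Each node keeps a pheromone table
# per destination; ants reinforce the link they arrived on for routes back
# toward their source, depositing less after long or congested trips.
# Deposit and evaporation formulas are illustrative assumptions.
import random

class Node:
    def __init__(self, neighbors):
        self.neighbors = neighbors          # adjacent node ids
        self.pheromone = {}                 # dest -> {next_hop: strength}

    def table(self, dest):
        return self.pheromone.setdefault(
            dest, {n: 1.0 for n in self.neighbors})

def choose_next_hop(node, dest):
    """Roulette-wheel selection proportional to pheromone strength."""
    table = node.table(dest)
    r = random.uniform(0, sum(table.values()))
    for hop, strength in table.items():
        r -= strength
        if r <= 0:
            return hop
    return node.neighbors[-1]

def ant_update(node, source, came_from, age, congestion):
    """Reinforce the arrival link for traffic heading back to `source`."""
    table = node.table(source)
    table[came_from] += 1.0 / (1.0 + age + congestion)  # worse trip, less deposit
    for hop in table:                                   # mild evaporation
        table[hop] *= 0.99
```

Calls are then routed by the same pheromone tables, so routing preferences emerge from the ants' collective behavior rather than from any central controller.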
Proceedings of the IEEE | 2003
Maja Pantic; Léon J. M. Rothkrantz
The ability to recognize the affective states of a person we are communicating with is the core of emotional intelligence. Emotional intelligence is a facet of human intelligence that has been argued to be indispensable, and perhaps the most important, for successful interpersonal social interaction. This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence (the ability to recognize a user's affective states) in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet the design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys past work on solving these problems by computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI: an automatic personalized analyzer of a user's nonverbal affective feedback.
Image and Vision Computing | 2000
Maja Pantic; Léon J. M. Rothkrantz
This paper discusses our expert system called the Integrated System for Facial Expression Recognition (ISFER), which performs recognition and emotional classification of human facial expressions from a still full-face image. The system consists of two major parts. The first is the ISFER Workbench, which forms a framework for hybrid facial feature detection: multiple feature detection techniques are applied in parallel, and the redundant information is used to define an unambiguous face geometry containing no missing or highly inaccurate data. The second part of the system is its inference engine, HERCULES, which converts low-level face geometry into high-level facial actions, and then these into highest-level weighted emotion labels.
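A minimal sketch of the two-level inference described above (face geometry to facial actions, then to weighted emotion labels); the geometry features, thresholds, and action-to-emotion profiles are invented for illustration and do not reproduce HERCULES's rule base.

```python
# Minimal sketch: map face geometry to symbolic facial actions, then score
# weighted emotion labels. Feature names, thresholds, and emotion profiles
# are invented for illustration, not taken from HERCULES.

def geometry_to_actions(geometry):
    actions = set()
    if geometry["mouth_corner_raise"] > 0.2:   # hypothetical threshold
        actions.add("lip_corner_puller")
    if geometry["brow_raise"] > 0.15:          # hypothetical threshold
        actions.add("brow_raiser")
    return actions

def actions_to_emotion_weights(actions):
    profiles = {                               # illustrative action profiles
        "happiness": {"lip_corner_puller"},
        "surprise": {"brow_raiser"},
    }
    return {emotion: len(actions & profile) / len(profile)
            for emotion, profile in profiles.items()}

weights = actions_to_emotion_weights(
    geometry_to_actions({"mouth_corner_raise": 0.3, "brow_raise": 0.0}))
# -> {'happiness': 1.0, 'surprise': 0.0}
```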
IEEE Transactions on Systems, Man, and Cybernetics | 2004
Maja Pantic; Léon J. M. Rothkrantz
Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract 10 profile-contour fiducial points and 19 fiducial points on the contours of the facial components. Based on these, 32 individual facial muscle actions (action units, AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the algorithm associates a factor denoting the certainty with which that AU has been scored. A recognition rate of 86% is achieved.
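A sketch of one certainty-weighted rule in the spirit of the approach above; the fiducial names, the AU12 (lip corner puller) rule, and the certainty formula are illustrative assumptions, not the paper's actual rules.

```python
# One certainty-weighted rule, in the spirit of the rule-based AU scoring
# described above. Fiducial names, the AU12 rule, and the certainty
# formula are illustrative assumptions.

def score_au12(fiducials):
    """Return (fired, certainty) for AU12 from upward mouth-corner motion."""
    displacement = (fiducials["mouth_corner_y_neutral"]
                    - fiducials["mouth_corner_y"])      # pixels, up is positive
    threshold = 2.0                                     # hypothetical
    if displacement <= threshold:
        return False, 0.0
    # Certainty grows with the margin past the threshold, capped at 1.0.
    return True, min(1.0, (displacement - threshold) / threshold)

print(score_au12({"mouth_corner_y_neutral": 120.0, "mouth_corner_y": 115.0}))
# -> (True, 1.0)
```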
Computer Systems and Technologies | 2008
Robert Horlings; Dragos Datcu; Léon J. M. Rothkrantz
Our project focused on recognizing emotion from human brain activity, measured by EEG signals. We propose a system to analyze EEG signals and classify them into 5 classes on each of two emotional dimensions, valence and arousal. This system was designed using prior knowledge from other research and is meant to assess the quality of emotion recognition using EEG signals in practice. To perform this assessment, we gathered a dataset of EEG signals measured from people who were emotionally stimulated by pictures. This method enabled us to teach our system the relationship between the characteristics of the brain activity and the emotion. We found that the EEG signals contained enough information to separate five different classes on both the valence and arousal dimensions; however, using 3-fold cross validation for training and testing, we reached classification rates of only 32% for the valence dimension and 37% for the arousal dimension. Much better classification rates were achieved when using only the extreme values on both dimensions: 71% and 81%, respectively.
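The evaluation protocol is straightforward to sketch with scikit-learn; random feature vectors stand in for real EEG-derived features, and the SVM classifier is an assumption, not necessarily the paper's choice.

```python
# Sketch of the evaluation protocol described above: classify feature
# vectors into 5 valence classes and estimate accuracy with 3-fold
# cross-validation. Random features stand in for EEG-derived ones;
# the SVM is an assumed classifier choice.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))        # 200 trials x 40 EEG features
y_valence = rng.integers(0, 5, size=200)  # 5 classes on the valence dimension

scores = cross_val_score(SVC(kernel="rbf"), X, y_valence, cv=3)
print(f"3-fold accuracy: {scores.mean():.2f}")
```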
IEEE Intelligent Transportation Systems | 2001
Patrick A.M. Ehlert; Léon J. M. Rothkrantz
Microscopic traffic simulators can model traffic flow in a realistic manner and are ideal for agent-based vehicle control. In this paper we describe a model of a reactive agent that is used to control a simulated vehicle. The agent is capable of tactical-level driving and has different driving styles. To ensure fast reaction times, the agent's driving task is divided into several competing, reactive behavior rules. The agent is implemented in and tested with a prototype traffic simulator program. The simulator consists of an urban environment with multi-lane roads, intersections, traffic lights, and vehicles. Every vehicle is controlled by a driving agent, and all agents have individual behavior settings. Preliminary experiments show that the agents exhibit human-like behavior ranging from slow and careful to fast and aggressive driving.
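A minimal sketch of a reactive agent as a prioritized list of competing behavior rules, where the first rule whose condition fires determines the action; the rules, thresholds, and percepts are invented for illustration.

```python
# Minimal sketch of a reactive driving agent: a prioritized rule list in
# which the first matching condition wins. Rules, thresholds, and percept
# names are invented for illustration.

def drive(percepts, aggression=0.5):
    rules = [  # (condition, action), highest priority first
        (lambda p: p["light_ahead"] == "red", "brake"),
        (lambda p: p["gap_to_leader"] < 10.0, "brake"),
        (lambda p: p["speed"] < p["limit"] * (0.8 + 0.4 * aggression),
         "accelerate"),
    ]
    for condition, action in rules:
        if condition(percepts):
            return action
    return "hold_speed"

# Driving style emerges from a parameter: higher `aggression` raises the
# target speed, giving faster, more aggressive behavior.
print(drive({"light_ahead": "green", "gap_to_leader": 25.0,
             "speed": 40.0, "limit": 50.0}, aggression=0.9))  # accelerate
```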
Text, Speech and Dialogue | 2004
Léon J. M. Rothkrantz; Pascal Wiggers; J.W.A. van Wees; R.J. van Vark
The nonverbal content of speech carries information about the physiological and psychological condition of the speaker. Psychological stress is a pathological element of this condition, whose cause is commonly taken to be workload. Objective, quantifiable correlates of stress are sought by measuring the acoustic modifications of the voice brought about by workload. Voice features in the speech signal that are influenced by stress include loudness, fundamental frequency, jitter, zero-crossing rate, speech rate, and high-energy frequency ratio. To examine the effect of workload on speech production, an experiment was designed in which 108 native speakers of Dutch were recruited to participate in a stress test (the Stroop test). This paper reports the experiment and the analysis of the test results.
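Two of the listed correlates are easy to sketch per analysis frame with NumPy: short-time energy (a loudness proxy) and zero-crossing rate. The frame length and hop below are conventional choices, not the paper's settings.

```python
# Per-frame short-time energy (loudness proxy) and zero-crossing rate,
# two of the stress correlates listed above. Frame length and hop are
# conventional assumptions, not the paper's exact analysis settings.
import numpy as np

def frame_features(signal, frame_len=400, hop=200):
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(float(np.mean(frame ** 2)))
        # Fraction of adjacent sample pairs whose signs differ.
        zcrs.append(float(np.mean(
            np.signbit(frame[:-1]) != np.signbit(frame[1:]))))
    return np.array(energies), np.array(zcrs)
```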
Computer Systems and Technologies | 2007
Dragos Datcu; Léon J. M. Rothkrantz
This paper highlights the performance of video-sequence-oriented facial expression recognition using Active Appearance Models (AAM), in comparison with analysis based on still pictures. The AAM is used to extract relevant information about the shape of the face to be analyzed. Specific key points from a Facial Characteristic Point (FCP) model are used to derive the set of features, which are then used to classify the expression of a new face sample into the prototypic emotions. The classification method uses Support Vector Machines.
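A sketch of the feature step described above, assuming an AAM fit has already produced FCP coordinates; the landmark layout is hypothetical, and the pairwise-distance feature set is one plausible reading of "derive the set of features", not the paper's exact definition.

```python
# Turn facial characteristic points (FCPs) into a feature vector of
# pairwise distances, suitable as SVM input. The landmark layout is
# hypothetical; a real AAM fit would supply the point coordinates.
import numpy as np
from itertools import combinations

def fcp_distances(points):
    """points: (n, 2) array of FCP coordinates -> pairwise-distance vector."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

# Example: 5 hypothetical FCPs (eye corners, mouth corners, nose tip).
points = np.array([[10, 20], [30, 20], [12, 50], [28, 50], [20, 35]], float)
features = fcp_distances(points)  # feed to e.g. sklearn.svm.SVC
```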
International Conference on Tools with Artificial Intelligence | 1999
Maja Pantic; Léon J. M. Rothkrantz
This paper discusses the Integrated System for Facial Expression Recognition (ISFER), which performs facial expression analysis from a still dual facial-view image. The system consists of three major parts: a facial data generator, a facial data evaluator, and a facial data analyser. While the facial data generator applies fairly conventional techniques for facial feature extraction, the rest of the system represents a novel way of performing reliable identification of 30 different face actions and multiple classification of expressions into the six basic emotion categories. An expert system is utilised to convert low-level face geometry into high-level face actions, and then these into highest-level weighted emotion labels. The system evaluation demonstrated rather high concurrent validity with human coding of facial expressions using FACS and formal instructions in emotion signals.