
Publication


Featured research published by Nicolas Pugeault.


International Conference on Computer Vision (ICCV) | 2011

Spelling it out: Real-time ASL fingerspelling recognition

Nicolas Pugeault; Richard Bowden

This article presents an interactive hand-shape recognition user interface for American Sign Language (ASL) fingerspelling. The system uses a Microsoft Kinect device to collect appearance and depth images, and the OpenNI+NITE framework for hand detection and tracking. Hand shapes corresponding to letters of the alphabet are characterized using appearance and depth images and classified using random forests. We compare classification using appearance and depth images, show that a combination of both leads to the best results, and validate on a dataset of four different users. The hand-shape detection runs in real time and is integrated into an interactive user interface that allows the signer to select between ambiguous detections, combined with an English dictionary for efficient writing.
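As an illustration of the classification step described above (not the paper's implementation), a hand shape could be classified from concatenated appearance and depth descriptors with a random forest. The descriptor sizes, random feature values, and 24-letter label set below are stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in descriptors: one row per hand image, with appearance and
# depth features concatenated (real features would come from Kinect frames).
n_samples, n_appearance, n_depth = 200, 32, 32
X_app = rng.normal(size=(n_samples, n_appearance))
X_depth = rng.normal(size=(n_samples, n_depth))
y = rng.integers(0, 24, size=n_samples)  # 24 static letters (J and Z involve motion)

# Combining both modalities: simple feature concatenation.
X = np.hstack([X_app, X_depth])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Per-letter confidences for one frame, which an interactive UI could
# use to offer the signer a choice between ambiguous detections.
probs = clf.predict_proba(X[:1])
print(probs.shape)
```

Comparing appearance-only, depth-only, and combined classifiers then amounts to fitting the same model on `X_app`, `X_depth`, and `X` respectively.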


Robotics and Autonomous Systems | 2010

A strategy for grasping unknown objects based on co-planarity and colour information

Mila Popovic; Dirk Kraft; Leon Bodenhagen; Emre Baseski; Nicolas Pugeault; Danica Kragic; Tamim Asfour; Norbert Krüger

In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism relies on second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information, in terms of a co-planarity constraint, and appearance-based information, in terms of the co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences; although the grasping attempts are not always successful, it can recover from mistakes and, more importantly, is able to evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data are then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.
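The co-planarity relation mentioned above can be illustrated with a small geometric check: two oriented 3D patches lie in a common plane when their normals are (anti-)parallel and each position lies in the other's plane. This is a sketch with assumed tolerances, not the paper's code:

```python
import numpy as np

def coplanar(p1, n1, p2, n2, angle_tol=0.1, dist_tol=0.01):
    """Check whether two oriented 3D patches lie in a common plane.

    p1, p2: 3D positions; n1, n2: unit normals.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    # Normals must be (anti-)parallel ...
    if 1.0 - abs(n1 @ n2) > angle_tol:
        return False
    # ... and each point must lie in the other's plane.
    d = p2 - p1
    return abs(d @ n1) < dist_tol and abs(d @ n2) < dist_tol

# Two patches on the plane z = 0 are co-planar:
print(coplanar([0, 0, 0], [0, 0, 1], [1, 2, 0], [0, 0, 1]))    # True
# A patch displaced along its normal is not:
print(coplanar([0, 0, 0], [0, 0, 1], [1, 2, 0.5], [0, 0, 1]))  # False
```

Pairs of features passing such a test would then be combined with a colour co-occurrence cue to propose grasp hypotheses.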


Journal of Machine Learning Research | 2012

Sign language recognition using sub-units

Helen Cooper; Eng-Jon Ong; Nicolas Pugeault; Richard Bowden

This paper discusses sign language recognition using linguistic sub-units. It presents three types of sub-units for consideration: those learnt from appearance data, as well as those inferred from either 2D or 3D tracking data. These sub-units are then combined using a sign-level classifier; here, two options are presented. The first uses Markov models to encode the temporal changes between sub-units. The second makes use of Sequential Pattern Boosting to apply discriminative feature selection while simultaneously encoding temporal information. This approach is more robust to noise and performs well in signer-independent tests, improving results from the 54% achieved by the Markov models to 76%.
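The first option, a Markov-model sign classifier over sub-unit sequences, can be sketched as follows. The sub-unit index sequences, sign labels, and smoothing constant are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

def train_markov(sequences, n_subunits, alpha=1.0):
    """Per-sign first-order transition matrix with Laplace smoothing."""
    counts = np.full((n_subunits, n_subunits), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Log-probability of a sub-unit sequence under a transition model."""
    return sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:]))

# Hypothetical sub-unit index sequences for two signs.
train = {
    "hello": [[0, 1, 2, 3], [0, 1, 1, 2, 3]],
    "thanks": [[3, 2, 1, 0], [3, 2, 2, 1, 0]],
}
models = {sign: train_markov(seqs, 4) for sign, seqs in train.items()}

# Classify a query sequence by maximum likelihood over the sign models.
query = [0, 1, 2, 2, 3]
best = max(models, key=lambda s: log_likelihood(query, models[s]))
print(best)  # "hello"
```

Sequential Pattern Boosting instead selects discriminative sub-sequences directly, which is what makes it more robust in the signer-independent setting.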


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

A Probabilistic Framework for 3D Visual Object Representation

Renaud Detry; Nicolas Pugeault; Justus H. Piater

We present an object representation framework that encodes probabilistic spatial relations between 3D features and organizes these features in a hierarchy. Features at the bottom of the hierarchy are bound to local 3D descriptors. Higher level features recursively encode probabilistic spatial configurations of more elementary features. The hierarchy is implemented in a Markov network. Detection is carried out by a belief propagation algorithm, which infers the pose of high-level features from local evidence and reinforces local evidence from globally consistent knowledge, effectively producing a likelihood for the pose of the object in the detection scene. We also present a simple learning algorithm that autonomously builds hierarchies from local object descriptors. We explain how to use our framework to estimate the pose of a known object in an unknown scene. Experiments demonstrate the robustness of hierarchies to input noise, viewpoint changes, and occlusions.
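A heavily simplified version of the pose-inference idea: child features vote for the parent feature's pose through their learned spatial relations, and agreement between votes acts as a detection likelihood. The paper itself uses belief propagation over a Markov network; this sketch, with assumed 2D offsets and a Gaussian agreement score, only illustrates the voting intuition:

```python
import numpy as np

def infer_parent_pose(observed, offsets, sigma=0.1):
    """Each observed child feature casts a vote for the parent's pose by
    subtracting its learned spatial offset; agreement between the votes
    serves as a crude detection likelihood."""
    votes = observed - offsets
    pose = votes.mean(axis=0)
    spread = np.linalg.norm(votes - pose, axis=1)
    likelihood = float(np.exp(-0.5 * (spread / sigma) ** 2).prod())
    return pose, likelihood

# Learned relations: where each child feature sits relative to the parent.
offsets = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])

# The same constellation observed translated to (2, 3).
observed = offsets + np.array([2.0, 3.0])
pose, lik = infer_parent_pose(observed, offsets)
print(pose, lik)  # [2. 3.] 1.0
```

Noisy or partly occluded observations would spread the votes, lowering the likelihood, which is the behaviour the hierarchy's robustness experiments probe.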


IEEE Transactions on Vehicular Technology | 2011

Performance of Correspondence Algorithms in Vision-Based Driver Assistance Using an Online Image Sequence Database

Reinhard Klette; Norbert Krüger; Tobi Vaudrey; Karl Pauwels; M.M. Van Hulle; Sandino Morales; Farid I. Kandil; Ralf Haeusler; Nicolas Pugeault; Clemens Rabe; Markus Lappe

This paper discusses options for testing correspondence algorithms in stereo or motion analysis that are designed or considered for vision-based driver assistance. It introduces a globally available database, with a main focus on testing on video sequences of real-world data. We suggest the classification of recorded video data into situations defined by a co-occurrence of some events in recorded traffic scenes. About 100-400 stereo frames (or 4-16 s of recording) are considered a basic sequence, which will be identified with one particular situation. Future testing is expected to be on data that report on hours of driving, and multiple hours of long video data may be segmented into basic sequences and classified into situations. This paper prepares for this expected development. It uses three different evaluation approaches (prediction error, synthesized sequences, and labeled sequences) to demonstrate ideas, difficulties, and possible ways forward in this future field of extensive performance tests in vision-based driver assistance, particularly for cases where ground truth is not available. This paper shows that the complexity of real-world data does not support the identification of general rankings of correspondence techniques on sets of basic sequences that show different situations. It is suggested that correspondence techniques should be chosen adaptively in real time using some type of statistical situation classifier.
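The prediction-error approach, evaluating a correspondence result without ground truth, can be sketched for stereo as warping one image by the estimated disparity and measuring the residual. The synthetic images and constant disparity below are assumptions for illustration:

```python
import numpy as np

def prediction_error(left, right, disparity):
    """RMS intensity error after predicting the left image from the
    right image using an estimated disparity map (horizontal shift)."""
    h, w = left.shape
    xs = np.tile(np.arange(w), (h, 1)) - disparity   # matching column in right
    ys = np.tile(np.arange(h)[:, None], (1, w))
    valid = (xs >= 0) & (xs < w)                     # ignore out-of-view pixels
    diff = left[valid].astype(float) - right[ys[valid], xs[valid]].astype(float)
    return np.sqrt(np.mean(diff ** 2))

rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(8, 16))
true_d = 3
right = np.roll(left, -true_d, axis=1)   # scene shifted by a constant disparity

disp = np.full(left.shape, true_d)       # ground-truth disparity map
err = prediction_error(left, right, disp)
print(err)  # 0.0: the correct disparity predicts the image exactly
```

A worse disparity estimate yields a larger residual, so the error can rank algorithms on real sequences where no ground-truth disparity exists.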


Computer Vision and Pattern Recognition (CVPR) | 2012

Sign Language Recognition using Sequential Pattern Trees

Eng-Jon Ong; Helen Cooper; Nicolas Pugeault; Richard Bowden

This paper presents a novel, discriminative, multi-class classifier based on Sequential Pattern Trees. It is efficient to learn, compared to other Sequential Pattern methods, and scalable for use with large classifier banks. For these reasons it is well suited to sign language recognition. Using deterministic robust features based on hand trajectories, sign-level classifiers are built from sub-units. Results are presented both on a large-lexicon single-signer dataset and a multi-signer Kinect™ dataset. In both cases it is shown to outperform the non-discriminative Markov model approach and to be equivalent to previous, more costly, Sequential Pattern (SP) techniques.


International Conference on Advanced Robotics (ICAR) | 2007

Early Reactive Grasping with Second Order 3D Feature Relations

Daniel Aarno; Johan Sommerfeld; Danica Kragic; Nicolas Pugeault; Sinan Kalkan; Florentin Wörgötter; Dirk Kraft; Norbert Krüger

One of the main challenges in the field of robotics is to make robots ubiquitous. To interact intelligently with the world, such robots need to understand the environment and situations around them and react appropriately: they need context-awareness. But how can robots be equipped with the capability to gather and interpret the necessary information for novel tasks through interaction with the environment, given only minimal knowledge provided in advance? This has been a long-term question and one of the main drives in the field of cognitive system development.


International Journal of Humanoid Robotics | 2010

Visual Primitives: Local, Condensed, Semantically Rich Visual Descriptors and Their Applications in Robotics

Nicolas Pugeault; Florentin Wörgötter; Norbert Krüger

We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the areas of robotics and humanoid robotics, which are briefly outlined.


IEEE Transactions on Autonomous Mental Development | 2010

Development of Object and Grasping Knowledge by Robot Exploration

Dirk Kraft; Renaud Detry; Nicolas Pugeault; Emre Başeski; Frank Guerin; Justus H. Piater; Norbert Krüger

We describe a bootstrapping cognitive robot system that, based mainly on pure exploration, acquires rich object representations and associated object-specific grasp affordances. Such bootstrapping becomes possible by combining innate competences and behaviours through which the system gradually enriches its internal representations, and thereby develops an increasingly mature interpretation of the world and of its ability to act within it. We compare the system's prior competences and developmental progress with human innate competences and the developmental stages of infants.


International Conference on Computer Vision Systems (ICVS) | 2009

Learning Objects and Grasp Affordances through Autonomous Exploration

Dirk Kraft; Renaud Detry; Nicolas Pugeault; Emre Baseski; Justus H. Piater; Norbert Krüger

We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of grasp affordances generated through biased, random exploration. Thus, based on a careful balance of generic prior knowledge encoded in (1) the embodiment of the system, (2) a vision system extracting structurally rich information from stereo image sequences as well as (3) a number of built-in behavioral modules on the one hand, and autonomous exploration on the other hand, the system is able to generate object and grasping knowledge through interaction with its environment.

Collaboration


Dive into Nicolas Pugeault's collaborations.

Top Co-Authors

Norbert Krüger (University of Southern Denmark)

Dirk Kraft (University of Southern Denmark)

Emre Baseski (University of Southern Denmark)

Sinan Kalkan (Middle East Technical University)

Mila Popovic (University of Southern Denmark)