Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Séverin Lemaignan is active.

Publication


Featured research published by Séverin Lemaignan.


International Conference on Robotics and Automation | 2011

Modular open robots simulation engine: MORSE

Gilberto Echeverria; Nicolas Lassabe; Arnaud Degroote; Séverin Lemaignan

This paper presents MORSE, a new open-source robotics simulator. MORSE provides several features of interest to robotics projects: it relies on a component-based architecture to simulate sensors, actuators and robots; it is flexible, able to specify simulations at variable levels of abstraction according to the systems being tested; it can represent a large variety of heterogeneous robots and full 3D environments (aerial, ground, maritime); and it is designed to allow simulations of multi-robot systems. MORSE follows a “Software-in-the-Loop” philosophy: algorithms are evaluated embedded in the robot software architecture within which they are to be integrated. Still, MORSE is independent of any robot architecture or communication framework (middleware). MORSE is built on top of Blender, using its powerful features and extending its functionality through Python scripts. Simulations are executed in Blender's Game Engine mode, which provides a realistic graphical display of the simulated environments and gives access to the well-regarded Bullet physics engine. This paper presents the design principles of the simulator and some use-case illustrations.
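
To give a flavour of how MORSE simulations are specified, here is a minimal scene description in the style of the Python Builder API that later MORSE releases expose. The component classes, environment name and camera placement below are illustrative assumptions taken from memory of the MORSE tutorials and may differ between versions; this is a sketch, not code from the paper.

```python
# Minimal MORSE scene in the style of the Builder API (names are illustrative).
from morse.builder import *

robot = ATRV()                   # a ground robot

pose = Pose()                    # sensor exposing the robot's pose
robot.append(pose)
pose.add_stream('socket')        # stream the data over a plain socket middleware

motion = MotionVW()              # actuator taking (v, w) velocity commands
robot.append(motion)
motion.add_stream('socket')

env = Environment('sandbox')     # load a Blender scene as the simulation world
env.place_camera([-18.0, -6.3, 10.8])
```

Such a script would typically be launched with a command along the lines of "morse run scene.py", which starts the Blender Game Engine and exposes the declared sensors and actuators to the external robot software.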


International Journal of Social Robotics | 2012

Grounding the Interaction: Anchoring Situated Discourse in Everyday Human-Robot Interaction

Séverin Lemaignan; Raquel Ros; E. Akin Sisbot; Rachid Alami; Michael Beetz

This paper presents how the extraction, representation and use of symbolic knowledge from real-world perception and from human-robot verbal and non-verbal interaction can enable a grounded, shared model of the world suitable for later high-level tasks such as dialogue understanding. We show how the anchoring process itself relies on the situated nature of human-robot interactions. We present an integrated approach, including a specialized symbolic knowledge representation system based on Description Logics, and case studies on several robotic platforms that demonstrate these cognitive capabilities.


Intelligent Robots and Systems | 2010

ORO, a knowledge management platform for cognitive architectures in robotics

Séverin Lemaignan; Raquel Ros; Lorenz Mösenlechner; Rachid Alami; Michael Beetz

This paper presents an embeddable knowledge processing framework, along with a common-sense ontology, designed for robotics. We believe that a direct and explicit integration of cognition is a necessary step to enable human-robot interaction in semantically rich human environments such as our homes. The OpenRobots Ontology (ORO) kernel turns previously acquired symbols into concepts linked to each other, which in turn enables reasoning and the implementation of other advanced cognitive functions such as events, categorization, memory management and reasoning on parallel cognitive models. We validate this framework on several cognitive scenarios implemented on three different robotic architectures.
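
To make the idea concrete, here is a sketch of what interacting with an ORO knowledge base from Python can look like. It is written against the pyoro bindings from memory; the method names, the default port and the triples are assumptions rather than verified ORO API.

```python
# Illustrative sketch of talking to an ORO knowledge base (assumed pyoro API).
from pyoro import Oro

kb = Oro("localhost", 6969)   # port is an assumption; check your oro-server setup

# Ground freshly perceived symbols as triples linked to ontology concepts
kb.add(["cup_01 rdf:type Cup",
        "cup_01 isOn table_01",
        "Cup rdfs:subClassOf Tableware"])

# The underlying reasoner makes implicit facts queryable
assert kb.check(["cup_01 rdf:type Tableware"])

# Retrieve every known instance standing on the table
for obj in kb.find("obj", ["?obj isOn table_01"]):
    print(obj)
```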


Human-Robot Interaction | 2015

When Children Teach a Robot to Write: An Autonomous Teachable Humanoid Which Uses Simulated Handwriting

Deanna Hood; Séverin Lemaignan; Pierre Dillenbourg

This article presents a novel robotic partner to which children can teach handwriting. The system relies on the learning-by-teaching paradigm to build the interaction, so as to stimulate meta-cognition, empathy and increased self-esteem in the child user. We hypothesise that the use of a humanoid robot in such a system could not only engage an unmotivated student, but could also give children the opportunity to experience the physically induced benefits encountered during human-led handwriting interventions, such as motor mimicry. By leveraging simulated handwriting on a synchronised tablet display, a NAO humanoid robot with limited fine motor capabilities has been configured as a suitably embodied handwriting partner. Statistical shape models derived from principal component analysis of a dataset of adult-written letter trajectories allow the robot to draw purposefully deformed letters. By incorporating feedback from user demonstrations, the system is then able to learn the optimal parameters for the appropriate shape models. Preliminary in situ studies have been conducted with primary school classes to gain insight into children's use of the system. Children aged 6-8 successfully engaged with the robot and improved its writing to a level with which they were satisfied. The validation of the interaction represents a significant step towards an innovative use of robotics that addresses a widespread and socially meaningful challenge in education.
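
The statistical shape models mentioned above can be illustrated with a short numpy sketch: letter trajectories are flattened into vectors, PCA yields a mean shape plus principal deformation modes, and purposefully distorted letters are generated by scaling the leading modes. This is an illustration of the technique under assumed data, not the authors' implementation.

```python
import numpy as np

# Hypothetical dataset: N adult-written trajectories of one letter,
# each resampled to P points, stored with shape (N, P, 2).
demos = np.load("letter_a_trajectories.npy")

X = demos.reshape(len(demos), -1)                 # (N, 2P) flattened shapes
mean_shape = X.mean(axis=0)

# PCA via SVD on the mean-centred data
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
modes = Vt                                        # principal deformation modes
stddev = S / np.sqrt(len(X) - 1)                  # per-mode standard deviation

def deformed_letter(weights):
    """Mean shape plus weighted deformation modes (weights in std deviations).

    Strongly non-zero weights give the purposefully deformed letters the robot
    draws; fitting the weights to a child's demonstrations improves them."""
    k = len(weights)
    shape = mean_shape + (np.asarray(weights) * stddev[:k]) @ modes[:k]
    return shape.reshape(-1, 2)                   # back to (P, 2) trajectory points

bad_letter = deformed_letter([-2.0, 1.5])
```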


International Conference on Robotics and Automation | 2010

GenoM3: Building middleware-independent robotic components

Anthony Mallet; Cédric Pasteur; Matthieu Herrb; Séverin Lemaignan; François Felix Ingrand

The topic of reusable software in robotics is now widely addressed. Component-based architectures, where components are independent units that can be reused across applications, have become more popular. As a consequence, a long list of middlewares and integration tools is available in the community, often in the form of open-source projects. However, these projects are generally self-contained, with little reuse between them. This paper presents a software engineering approach that intends to grant middleware independence to robotic software components, so that a clear separation of concerns is achieved between highly reusable algorithmic parts and integration frameworks. Such a decoupling lets middlewares be used interchangeably, while fully benefiting from their specific, individual features. This work has been integrated into a new version of the open-source GenoM component generator tool: GenoM3.
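
The separation of concerns described here can be illustrated with a small Python sketch: an algorithmic core that imports no middleware, plus thin transport adapters that stand in for the glue code a tool like GenoM3 generates. This shows only the principle; it is not GenoM3's actual .gen description language or template machinery.

```python
# Illustration of the middleware-independence principle (not GenoM3 itself).
from abc import ABC, abstractmethod

def estimate_pose(scan):
    """Algorithmic core: pure computation, no middleware imports."""
    return {"x": sum(scan) / len(scan), "y": 0.0}   # placeholder computation

class Transport(ABC):
    """What the generated glue code expects from any middleware."""
    @abstractmethod
    def publish(self, topic, message): ...

class SocketTransport(Transport):
    def publish(self, topic, message):
        print(f"[socket] {topic}: {message}")       # stand-in for a real socket layer

class LoggingTransport(Transport):
    def publish(self, topic, message):
        print(f"[log] {topic}: {message}")          # stand-in for another middleware

def pose_component(transport: Transport, scan):
    """The same algorithmic core runs unchanged over either transport."""
    transport.publish("pose", estimate_pose(scan))

pose_component(SocketTransport(), [1.0, 2.0, 3.0])
pose_component(LoggingTransport(), [1.0, 2.0, 3.0])
```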


Robot and Human Interactive Communication | 2010

Which one? Grounding the referent based on efficient human-robot interaction

Raquel Ros; Séverin Lemaignan; E. Akin Sisbot; Rachid Alami; Jasmin Steinwender; Katharina Hamann; Felix Warneken

In human-robot interaction, a robot must be prepared to handle possible ambiguities generated by a human partner. In this work we propose a set of strategies that allow a robot to identify the referent when the human partner refers to an object with incomplete information, i.e. an ambiguous description. Moreover, we propose the use of an ontology to store and reason on the robot's knowledge in order to ease clarification and, therefore, improve the interaction. We validate our work in simulation and on two real robotic platforms performing two tasks: a daily-life situation and a game.
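
As a rough illustration of such disambiguation strategies, the sketch below filters candidate objects by the attributes the human mentioned and, when several candidates remain, asks a clarifying question about an attribute that distinguishes them. The attribute names and the question heuristic are made up for illustration and are not the paper's exact strategies.

```python
# Hypothetical sketch of resolving an ambiguous object description.
candidates = [
    {"id": "mug_1", "type": "mug", "colour": "red",  "location": "table"},
    {"id": "mug_2", "type": "mug", "colour": "blue", "location": "table"},
    {"id": "box_1", "type": "box", "colour": "red",  "location": "shelf"},
]

def resolve(description, objects):
    """Keep only objects matching every attribute the human mentioned."""
    matches = [o for o in objects
               if all(o.get(attr) == val for attr, val in description.items())]
    if len(matches) == 1:
        return matches[0]["id"], None
    # Still ambiguous: ask about an attribute that splits the remaining candidates
    for attr in ("colour", "location", "type"):
        values = {o[attr] for o in matches}
        if len(values) > 1:
            return None, f"Which one do you mean? The {' or the '.join(sorted(values))} one?"
    return None, "I cannot tell these objects apart."

referent, question = resolve({"type": "mug"}, candidates)
print(referent, question)   # None, plus a clarifying question about the colour
```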


Human-Robot Interaction | 2014

Which robot behavior can motivate children to tidy up their toys? Design and evaluation of "Ranger"

Julia Fink; Séverin Lemaignan; Pierre Dillenbourg; Philippe Rétornaz; Florian Christopher Vaussard; Alain Berthoud; Francesco Mondada; Florian Wille; Karmen Franinovic

We present the design approach and evaluation of our prototype called “Ranger”. Ranger is a robotic toy box that aims to motivate young children to tidy up their room. We evaluated Ranger in 14 families with 31 children (2-10 years) using the Wizard-of-Oz technique. This case study explores two different robot behaviors (proactive vs. reactive) and their impact on children's interaction with the robot and on their tidying behavior. The analysis of the video-recorded scenarios shows that the proactive robot tended to encourage more playful and explorative behavior in children, whereas the reactive robot triggered more tidying behavior. Our findings hold implications for the design of interactive robots for children, and may also serve as an example of evaluating an early version of a prototype in a real-world setting.


IEEE Transactions on Autonomous Mental Development | 2012

Towards a Platform-Independent Cooperative Human-Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans

Stéphane Lallée; Ugo Pattacini; Séverin Lemaignan; Alexander Lenz; Chris Melhuish; Lorenzo Natale; Sergey Skachek; Katharina Hamann; Jasmin Steinwender; Emrah Akin Sisbot; Giorgio Metta; Julien Guitton; Rachid Alami; Matthieu Warnier; Tony Pipe; Felix Warneken; Peter Ford Dominey

Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. An important aspect of robot behavior will be the ability to acquire new knowledge of cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, through abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.


Intelligent Robots and Systems | 2010

Towards a platform-independent cooperative human-robot interaction system: I. Perception

Stéphane Lallée; Séverin Lemaignan; Alexander Lenz; Chris Melhuish; Lorenzo Natale; Sergey Skachek; Tijn van Der Zant; Felix Warneken; Peter Ford Dominey

One of the long-term objectives of robotics and artificial cognitive systems is that robots will increasingly be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. In such situations, an important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing humans. At least two significant challenges can be identified in this context. The first challenge concerns the development of methods to allow the characterization of human actions such that robotic systems can observe and learn new actions, and more complex behaviors made up of those actions. The second challenge is associated with the immense heterogeneity and diversity of robots and their perceptual and motor systems. The associated question is whether the identified methods for action perception can be generalized across the different perceptual systems inherent to distinct robot platforms. The current research addresses these two challenges. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms. Within this architecture, the physical details of the perceptual system (e.g. video camera vs. IR video with reflective markers) are encapsulated at the lowest level. Actions are then automatically characterized in terms of perceptual primitives related to motion, contact and visibility. The resulting system is demonstrated to perform robust object and action learning and recognition on two distinct robotic platforms. Perhaps most interestingly, we demonstrate that knowledge acquired about action recognition with one robot can be directly imported and successfully used on a second distinct robot platform for action recognition. This will have interesting implications for the accumulation of shared knowledge between distinct heterogeneous robotic systems.
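
A toy sketch of describing actions as sensor-independent sequences of perceptual primitives (motion, contact, visibility) is given below; the primitive encoding and the exact-match recognition rule are simplifying assumptions, not the system's implementation.

```python
# Toy illustration: actions as sequences of perceptual primitives, so that
# recognition does not depend on which perceptual system produced them.
COVER = [("motion", "hand", "box"),
         ("contact", "box", "toy"),
         ("visible", "toy", False)]
UNCOVER = [("motion", "hand", "box"),
           ("contact", "box", "toy"),
           ("visible", "toy", True)]
KNOWN_ACTIONS = {"cover": COVER, "uncover": UNCOVER}

def recognise(observed):
    """Return the known action whose primitive sequence matches the observation."""
    for name, template in KNOWN_ACTIONS.items():
        if observed == template:
            return name
    return "unknown"

print(recognise(COVER))   # 'cover', whichever camera or marker system produced it
```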


Artificial Intelligence | 2017

Artificial cognition for social human-robot interaction

Séverin Lemaignan; Mathieu Warnier; E. Akin Sisbot; Aurélie Clodic; Rachid Alami

Human-Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the required individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human-robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human-robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human-robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.

Collaboration


Dive into Séverin Lemaignan's collaboration.

Top Co-Authors

Pierre Dillenbourg (École Polytechnique Fédérale de Lausanne)

Tony Belpaeme (University of Plymouth)

Emmanuel Senft (University of Plymouth)

Paul Baxter (University of Plymouth)

James Kennedy (University of Plymouth)

Raquel Ros (University of Toulouse)

Francesco Mondada (École Polytechnique Fédérale de Lausanne)

Julia Fink (École Polytechnique Fédérale de Lausanne)