Paulo Menezes
University of Coimbra
Publications
Featured research published by Paulo Menezes.
international conference on robotics and automation | 2004
Ludovic Brèthes; Paulo Menezes; Frédéric Lerasle; Jean-Bernard Hayet
The interaction between humans and machines has become an important topic for the robotics community, as it can generalise the use of robots. For an active H/R interaction scheme, the robot needs to detect human faces in its vicinity and then interpret canonical gestures of the tracked person, assuming this interlocutor has been identified beforehand. In this context, we describe functions suitable to detect and recognise faces in a video stream and then focus on face and hand tracking functions. An efficient colour segmentation based on a watershed over the skin-like coloured pixels is proposed. A new measurement model is proposed to take into account both shape and colour cues in the particle filter used to track face or hand silhouettes in the video stream. An extension of the basic condensation algorithm is proposed to achieve recognition of the current hand posture and automatic switching between multiple templates in the tracking loop. Tracking and recognition results are illustrated in the paper and show the robustness of the process in cluttered environments and under various lighting conditions. The limits of the method and future work are also discussed.
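To make the measurement model concrete, the minimal sketch below shows one predict/weight/resample cycle of a particle filter that fuses a shape cue and a colour cue multiplicatively. The `shape_score` and `colour_score` callables, the random-walk motion model and the noise value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one particle-filter update fusing shape and colour cues,
# in the spirit of the measurement model described above (not the authors' code).
import numpy as np

def pf_update(particles, weights, shape_score, colour_score, motion_noise=5.0):
    """One predict/weight/resample cycle for 2-D position particles.

    particles : (N, 2) array of candidate (x, y) positions
    weights   : (N,) array of current weights
    shape_score, colour_score : callables mapping an (x, y) position to a
        likelihood in [0, 1]; stand-ins for the contour-matching and
        skin-colour measurements used in the paper.
    """
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # Weight: fuse the two cues multiplicatively (independence assumption).
    likelihood = np.array([shape_score(p) * colour_score(p) for p in particles])
    weights = weights * likelihood
    weights /= weights.sum() + 1e-12
    # Resample (systematic) to avoid weight degeneracy.
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```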
international conference on robotics and automation | 2004
José Carlos Barreto; Paulo Menezes; Jorge Dias
This paper describes a machine learning approach for visual object detection and recognition that is capable of processing images rapidly and achieving high detection and recognition rates. The framework is demonstrated on, and in part motivated by, the task of human-robot interaction. There are three main parts in this framework. The first is the detection of the person's face, used as a preprocessing stage for the second, which is the recognition of the face of the person interacting with the robot; the third is hand detection. The detection technique is based on the Haar-like features introduced by Viola et al. and later improved by Lienhart et al. Eigenimages and PCA are used in the recognition stage of the system. Used in real-time human-robot interaction applications, the system is able to detect and recognise faces at 10.9 frames per second on a Pentium IV 2.2 GHz machine equipped with a USB camera.
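The detect-then-recognise pipeline can be pictured with the following hedged sketch: an OpenCV Haar cascade detects faces and a simple eigenface (PCA) projection matches each detection against a gallery of known identities. The cascade file, crop size and gallery format are assumptions made for illustration only.

```python
# Sketch of Haar-cascade face detection followed by eigenface (PCA) matching;
# not the authors' setup, just an illustration of the two stages described above.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def build_eigenface_model(gallery, k=16):
    """gallery: (M, H*W) array of flattened grayscale face crops, all the same size."""
    mean = gallery.mean(axis=0)
    centred = gallery - mean
    # Rows of vt are the principal directions (eigenfaces).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:k]
    coeffs = centred @ basis.T          # projection of each known face
    return mean, basis, coeffs

def recognise(frame, mean, basis, coeffs, size=(64, 64)):
    """Yield each detected face box and the index of the closest gallery identity."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], size).flatten().astype(np.float64)
        proj = (crop - mean) @ basis.T
        best = int(np.argmin(np.linalg.norm(coeffs - proj, axis=1)))
        yield (x, y, w, h), best
```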
robot and human interactive communication | 2006
Aurélie Clodic; Sara Fleury; Rachid Alami; Raja Chatila; Gérard Bailly; Ludovic Brèthes; Maxime Cottret; Patrick Danès; Xavier Dollat; Frédéric Elisei; Isabelle Ferrané; Matthieu Herrb; Guillaume Infantes; Christian Lemaire; Frédéric Lerasle; Jérôme Manhes; Patrick Marcoul; Paulo Menezes; Vincent Montreuil
Rackham is an interactive robot guide that has been used in several venues and exhibitions. This paper presents its design and reports on results obtained after its deployment in a permanent exhibition. The project is conducted so as to incrementally enhance the robot's functional and decisional capabilities based on observation of the interaction between the public and the robot. Besides robustness and efficiency of the robot's navigation abilities in a dynamic environment, our focus was to develop and test a methodology to integrate human-robot interaction abilities in a systematic way. We first present the robot and some of its key design issues. Then, we discuss a number of lessons that we have drawn from its use in interaction with the public and how they will serve to refine our design choices and to enhance robot efficiency and acceptability.
IFAC Proceedings Volumes | 2004
Paulo Menezes; José Carlos Barreto; Jorge Dias
This paper describes an algorithm for human tracking using vision sensing, specially designed for the human-machine interface of mobile robotic platforms or autonomous vehicles. The solution presents a clear improvement over a previous tracking algorithm, achieved by using a machine learning approach for visual object detection and recognition in the data association step. The system is capable of processing images rapidly and achieving high detection and recognition rates, and the framework is demonstrated on the task of human-robot interaction. There are three key parts in this framework. The first is the detection of the person's face, used as input for the second stage, which is the recognition of the face of the person interacting with the robot; the third is the tracking of this face over time. The detection technique is based on Haar-like features, whereas eigenimages and PCA are used in the recognition stage of the system. The tracking algorithm uses a Kalman filter to estimate the position and scale of the person's face in the image. Data association is accelerated by using a subwindow whose dimensions are automatically derived from the covariance matrix of the estimate. Used in real-time human-robot interaction applications, the system is able to detect, recognise and track faces at up to 24 frames per second on a conventional 1 GHz Pentium III laptop.
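As an illustration of the tracking stage, the sketch below implements a constant-velocity Kalman filter over face position and scale and derives a search subwindow from the predicted covariance. The state layout, noise values and window-sizing rule are assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (not the paper's implementation) of a constant-velocity
# Kalman filter over face position and scale, with the data-association
# subwindow derived from the predicted covariance as described above.
import numpy as np

class FaceKalman:
    def __init__(self, dt=1 / 24, q=1.0, r=4.0):
        # State: [x, y, s, vx, vy, vs] (position, scale and their velocities).
        self.x = np.zeros(6)
        self.P = np.eye(6) * 100.0
        self.F = np.eye(6)
        self.F[0, 3] = self.F[1, 4] = self.F[2, 5] = dt
        self.H = np.zeros((3, 6))
        self.H[0, 0] = self.H[1, 1] = self.H[2, 2] = 1.0
        self.Q = np.eye(6) * q
        self.R = np.eye(3) * r

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x, self.P

    def search_window(self, n_sigmas=3.0):
        # Subwindow half-sizes proportional to the predicted standard deviation,
        # so data association only scans a region consistent with the estimate.
        sx, sy = np.sqrt(self.P[0, 0]), np.sqrt(self.P[1, 1])
        return self.x[:2], (n_sigmas * sx, n_sigmas * sy)

    def update(self, z):                      # z = measured (x, y, scale)
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```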
Biomechanics / Robotics | 2011
João Quintas; Paulo Menezes; Jorge Dias
This paper proposes an approach for an automated system composed of mobile robots and a smart room, following a service-oriented architecture and aiming to undertake complex and computationally heavy tasks to aid the user in the execution of specific tasks. The proposed approach is inspired by the principles of Service Oriented Architecture and relies on cloud computing to provide an increased degree of scalability to the system. The robotic system complements the group of virtual networks that the user may already be a part of, acting as a bridge between the virtual and real “worlds”. This work targets the implementation of a service robotic system in which the cloud plays the role of knowledge repository, allowing distant groups of robots to share and exchange learned skills and to adapt to new situations of cooperation with human agents. A use case scenario suggests the application of the system in Assisted Living situations, where the context-awareness capability orchestrates the system towards providing health-care services.
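A purely illustrative sketch of the kind of cloud-hosted skill repository such an architecture implies is given below; the class name, methods and in-memory storage are assumptions made for the example, not an interface defined in the paper.

```python
# Hypothetical skill-sharing service: robots publish learned skills to the cloud
# and other groups fetch them, in the spirit of the knowledge repository above.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SkillRepository:
    """Cloud-side knowledge base through which distant robot groups exchange skills."""
    skills: Dict[str, dict] = field(default_factory=dict)

    def publish(self, robot_id: str, skill_name: str, model_blob: bytes) -> None:
        # A robot uploads a learned skill (e.g. a trained model or a map).
        self.skills[skill_name] = {"origin": robot_id, "model": model_blob}

    def fetch(self, skill_name: str) -> Optional[bytes]:
        # Another robot retrieves the skill to adapt to a new cooperation scenario.
        entry = self.skills.get(skill_name)
        return entry["model"] if entry else None
```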
international workshop on advanced motion control | 1998
J.P. Barreto; A. Trigo; Paulo Menezes; João Miguel Dias; A.T. de Almeida
The free-body-diagram method, based on the dynamic equations of isolated rigid bodies, is used to overcome the difficulties in the dynamic modelling of legged robots. The article presents a simulator for a six-legged machine. Both kinematic and dynamic models are developed: kinematic equations are derived with the Denavit-Hartenberg method, while the free-body-diagram method is used to obtain the dynamic model. Simulation results are presented.
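For reference, a minimal sketch of the Denavit-Hartenberg transform and the chaining used to obtain a leg's forward kinematics is shown below (standard DH convention); the link parameters are placeholders rather than the simulated machine's values.

```python
# Standard Denavit-Hartenberg transform and forward kinematics for one leg;
# link parameters are illustrative, not those of the six-legged machine.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def leg_forward_kinematics(joint_angles, dh_params):
    """Chain the per-joint transforms to get the foot pose in the body frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```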
IEEE Computer Graphics and Applications | 2017
J.C. García; Bruno Patrão; Luis Almeida; Javier Ruiz Pérez; Paulo Menezes; Jorge Dias; Pedro J. Sanz
Human-machine interfaces play a crucial role in intervention robotic systems operated in hazardous environments, such as deep sea conditions. This article introduces a user interface abstraction layer to enhance reconfigurability. It also describes a VR-based interface that utilizes immersive technologies to reduce user faults and mental fatigue. The goal is to show the user only the most relevant information about the current mission.
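One way such an abstraction layer could be organised is sketched below: mission modules talk to an abstract operator interface, and concrete front ends (e.g. a VR view) decide how to render the information. The interface and method names are illustrative assumptions, not taken from the article.

```python
# Hypothetical UI abstraction layer: the mission logic depends only on the
# abstract interface, so front ends can be swapped (reconfigurability).
from abc import ABC, abstractmethod

class OperatorInterface(ABC):
    """Abstract layer between mission modules and whichever front end is in use."""

    @abstractmethod
    def show_vehicle_pose(self, pose) -> None: ...

    @abstractmethod
    def notify(self, message: str, severity: str = "info") -> None: ...

class VRInterface(OperatorInterface):
    """Immersive front end: renders only mission-relevant information in the 3-D scene."""

    def show_vehicle_pose(self, pose) -> None:
        pass  # update the virtual vehicle model here

    def notify(self, message: str, severity: str = "info") -> None:
        pass  # surface the message inside the headset, filtered by severity
```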
global engineering education conference | 2014
Teresa Restivo; Fátima Chouzal; José A. Rodrigues; Paulo Menezes; J. Bernardino Lopes
This paper presents an exploratory study of the educational potential of an augmented reality (AR) application developed for DC circuit fundamentals. In particular, the study aims to characterize student involvement with the application, its use as an additional experimental tool, and how students perceive their experience and learning through it. We also briefly describe how the application was developed and how the exploratory study was implemented with STEM students. The AR application proved to be manageable, and students explored its configurations intuitively. Additionally, according to our preliminary results, the tool's usability proved effective for its intended purposes, induced student satisfaction, and revealed very positive student perceptions about learning. This study therefore shows that this AR application for DC circuits has great educational potential.
international symposium on safety, security, and rescue robotics | 2013
Rui P. Rocha; David Portugal; Micael S. Couceiro; Filipe Araujo; Paulo Menezes; Jorge Lobo
Mobile robots can be an invaluable aid to human first responders (FRs) in catastrophic incidents, as they are expendable and can be used to reduce human exposure to risk in search and rescue (SaR) missions, as well as to attain a more effective response. Moreover, the parallelism and robustness yielded by multi-robot systems (MRS) may be very useful in this kind of spatially distributed task, providing augmented situation awareness (SA). However, this requires adequate cooperative behaviours, both within MRS teams and between human and robotic teams. Collaborative context awareness between the two teams is crucial to assess information utility, share information efficiently, and build a common and consistent SA. This paper presents the research foreseen within the CHOPIN project, which aims to address these scientific challenges and provide a proof of concept for the cooperation between human and robotic teams in SaR scenarios.
Image and Vision Computing | 2011
Paulo Menezes; Frédéric Lerasle; Jorge Dias
This article describes a multiple-feature data-fusion scheme applied to a particle filter for marker-less human motion capture (HMC) using a single camera mounted on an assistant mobile robot. Particle filters have proved to be well suited to this robotic context. As in numerous approaches, the principle relies on projecting the model silhouette of the tracked human limbs, together with appearance features located on the model surface, in order to validate the particles (associated configurations) that correspond to the best model-to-image fits. Our particle-filter-based HMC system is improved and extended in two ways. First, our estimation process is based on the so-called auxiliary particle filter scheme, which has surprisingly seldom been exploited for tracking purposes. This scheme is shown to outperform conventional particle filters, as it drastically limits the well-known explosion in the number of particles required when considering high-dimensional state spaces. The second line of investigation concerns data fusion, which is considered both in the importance and measurement functions, with some degree of adaptability depending on the current human posture and the environmental context encountered by the robot. Implementation and experiments on indoor sequences acquired by an assistant mobile robot highlight the relevance and versatility of our HMC system. Extensions are finally discussed.
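The auxiliary scheme can be summarised by the rough sketch below, which performs the two-stage select-then-propagate step of an auxiliary particle filter; the propagation function, fused likelihood and noise level are placeholders rather than the article's actual models.

```python
# Rough sketch of one auxiliary-particle-filter cycle; the dynamics and the
# fused shape/appearance likelihood are stand-ins for the article's models.
import numpy as np

def apf_step(particles, weights, propagate, likelihood, noise=0.05,
             rng=np.random.default_rng()):
    """particles: (N, D) limb-configuration hypotheses; weights: (N,) current weights.

    propagate(x)  -> deterministic prediction of a particle (the 'mu' of stage 1)
    likelihood(x) -> fused shape/appearance measurement score of a configuration
    """
    n = len(particles)
    # Stage 1: score a deterministic prediction of each particle and use it
    # to select promising ancestors (the 'auxiliary' indices).
    mu = np.array([propagate(p) for p in particles])
    mu_lik = np.array([likelihood(m) for m in mu])
    first_stage = weights * mu_lik
    first_stage /= first_stage.sum() + 1e-12
    ancestors = rng.choice(n, size=n, p=first_stage)
    # Stage 2: propagate the chosen ancestors with noise, then re-weight while
    # correcting for the first-stage scores.
    new_particles = mu[ancestors] + rng.normal(0.0, noise, particles.shape)
    new_weights = np.array([likelihood(p) for p in new_particles]) / (mu_lik[ancestors] + 1e-12)
    new_weights /= new_weights.sum() + 1e-12
    return new_particles, new_weights
```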