Pedro Vicente
Instituto Superior Técnico
Publication
Featured research published by Pedro Vicente.
World Congress on Services | 2011
Pedro Vicente; Miguel Mira da Silva
Due to increasing requirements, standards and tight oversight from governments, along with the immediate need to effectively manage the growing business and operational risks inherent in competing in a complex global market, integrated Governance, Risk and Compliance (GRC) is becoming one of the most important business requirements for organizations. IT requirements, standards and best practices play a particularly crucial role in IT organizations and departments. The lack of guidance in this domain, namely scientific research, results in unaided attempts to improve efficiency and effectiveness in organizations. In this paper we propose a business architecture that describes the integration of the main processes for IT Governance, IT Risk Management and IT Compliance (IT GRC). Based on a process model for IT GRC and a conceptual model for GRC, we use ArchiMate to model the behavioural, structural and informational aspects of the business viewpoint: business processes, roles and business objects, respectively. Finally, we discuss the result and draw some conclusions about the constructed artifact.
IEEE International Conference on Autonomous Robot Systems and Competitions | 2015
Pedro Vicente; Ricardo Ferreira; Lorenzo Jamone; Alexandre Bernardino
Humanoid robots have complex kinematic chains that are difficult to model with the precision required to reach and/or grasp objects properly. In this paper we propose a GPU-enabled, vision-based 3D hand pose estimation method that runs during robotic reaching tasks to calibrate the kinematic chain of the robot arm in real time. This is achieved by combining i) proprioceptive and visual sensing with ii) a kinematic and computer-graphics model of the system. We use proprioceptive input to create visual hypotheses about the hand appearance in the image, using a 3D CAD model inside the game engine from Unity Technologies. These hypotheses are compared with the actual visual input using particle filter techniques. The outcome of this processing is the best hypothesis for the hand pose and a set of joint offsets to calibrate the arm. We tested our approach in a simulation environment and verified that the angular error is reduced by a factor of 3 and the position error by a factor of about 12 compared with the non-calibrated case (proprioception only). The GPU implementation runs 2.5 times faster than performing the computations on the CPU.
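As a rough illustration of the particle-filter calibration idea, the following minimal C++ sketch estimates constant joint offsets for a hypothetical 2-link planar arm by comparing the hand position predicted from the encoders with a noisy "visual" measurement. The arm model, link lengths, noise levels and particle count are illustrative assumptions, not values from the paper.

// Toy particle filter: estimate constant joint offsets of a 2-link planar arm.
// All numeric values are illustrative assumptions.
#include <array>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Forward kinematics of the toy arm (assumed link lengths 0.30 m and 0.25 m).
std::array<double, 2> handPosition(double q1, double q2) {
    const double l1 = 0.30, l2 = 0.25;
    return {l1 * std::cos(q1) + l2 * std::cos(q1 + q2),
            l1 * std::sin(q1) + l2 * std::sin(q1 + q2)};
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> obsNoise(0.0, 0.005);    // 5 mm "visual" noise
    std::normal_distribution<double> jitter(0.0, 0.01);       // particle diffusion
    std::uniform_real_distribution<double> prior(-0.2, 0.2);  // offset prior [rad]

    const std::array<double, 2> trueOffset = {0.08, -0.05};   // unknown miscalibration
    const int N = 500;                                        // number of particles

    std::vector<std::array<double, 2>> particles(N);
    std::vector<double> weights(N);
    for (auto& p : particles) p = {prior(rng), prior(rng)};

    for (int t = 0; t < 50; ++t) {
        // Encoder readings along an arbitrary reaching trajectory.
        const double q1 = 0.3 + 0.02 * t, q2 = 0.8 - 0.01 * t;
        // "Visual" measurement: hand position under the true (offset) kinematics.
        auto z = handPosition(q1 + trueOffset[0], q2 + trueOffset[1]);
        z[0] += obsNoise(rng);
        z[1] += obsNoise(rng);

        // Weight each offset hypothesis by how well its predicted hand position
        // (the "rendered hypothesis") matches the observation.
        for (int i = 0; i < N; ++i) {
            const auto pred = handPosition(q1 + particles[i][0], q2 + particles[i][1]);
            const double d2 = std::pow(pred[0] - z[0], 2) + std::pow(pred[1] - z[1], 2);
            weights[i] = 1e-12 + std::exp(-d2 / (2 * 0.005 * 0.005)); // floor avoids all-zero weights
        }

        // Resampling plus small diffusion (artificial dynamics for a static parameter).
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        std::vector<std::array<double, 2>> resampled(N);
        for (int i = 0; i < N; ++i) {
            const auto p = particles[pick(rng)];
            resampled[i] = {p[0] + jitter(rng), p[1] + jitter(rng)};
        }
        particles = resampled;
    }

    // Posterior mean of the joint offsets.
    double o1 = 0.0, o2 = 0.0;
    for (const auto& p : particles) { o1 += p[0] / N; o2 += p[1] / N; }
    std::cout << "estimated offsets: " << o1 << ", " << o2
              << "  (true: " << trueOffset[0] << ", " << trueOffset[1] << ")\n";
}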
Journal of Intelligent and Robotic Systems | 2016
Pedro Vicente; Lorenzo Jamone; Alexandre Bernardino
Humanoid robots have complex kinematic chains whose modeling is error prone. If the robot model is not well calibrated, its hand pose cannot be determined precisely from the encoder readings, and this affects reaching and grasping accuracy. In our work, we propose a novel method to simultaneously i) estimate the pose of the robot hand and ii) calibrate the robot kinematic model. This is achieved by combining stereo vision, proprioception, and a 3D computer graphics model of the robot. Notably, the use of GPU programming allows the estimation and calibration to be performed in real time during the execution of arm reaching movements. Proprioceptive information is exploited to generate hypotheses about the visual appearance of the hand in the camera images, using the 3D computer graphics model of the robot, which includes both kinematic and texture information. These hypotheses are compared with the actual visual input using particle filtering, to obtain both i) the best estimate of the hand pose and ii) a set of joint offsets to calibrate the kinematics of the robot model. We evaluate two different approaches to estimate the 6D pose of the hand from vision (silhouette segmentation and edge extraction) and show experimentally that the pose estimation error is considerably reduced with respect to the nominal robot model. Moreover, the GPU implementation runs about 3 times faster than the CPU one, allowing real-time operation.
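The edge-based comparison can be pictured with a Chamfer-style cost: the distance transform of the observed edge map is computed once, and each rendered hypothesis is scored by the average distance-transform value under its own edge pixels. The C++ sketch below is an unoptimized illustration on a tiny synthetic image, not the paper's implementation (a real version would use a linear-time distance transform or the GPU).

// Chamfer-like edge matching cost on tiny binary edge maps (illustrative only).
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

using EdgeMap = std::vector<std::vector<bool>>; // true = edge pixel

// Brute-force Euclidean distance transform: distance of each pixel to the nearest edge.
std::vector<std::vector<double>> distanceTransform(const EdgeMap& edges) {
    const int H = static_cast<int>(edges.size());
    const int W = static_cast<int>(edges[0].size());
    std::vector<std::vector<double>> dt(
        H, std::vector<double>(W, std::numeric_limits<double>::max()));
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            for (int v = 0; v < H; ++v)
                for (int u = 0; u < W; ++u)
                    if (edges[v][u])
                        dt[y][x] = std::min(dt[y][x], std::hypot(y - v, x - u));
    return dt;
}

// Average distance from the hypothesis edges to the observed edges.
double chamferCost(const EdgeMap& hypothesis,
                   const std::vector<std::vector<double>>& dtObserved) {
    double cost = 0.0;
    int n = 0;
    for (size_t y = 0; y < hypothesis.size(); ++y)
        for (size_t x = 0; x < hypothesis[y].size(); ++x)
            if (hypothesis[y][x]) { cost += dtObserved[y][x]; ++n; }
    return n > 0 ? cost / n : std::numeric_limits<double>::max();
}

int main() {
    // Observed edge map: a vertical segment at x = 4.
    EdgeMap observed(10, std::vector<bool>(10, false));
    for (int y = 2; y < 8; ++y) observed[y][4] = true;
    const auto dt = distanceTransform(observed);

    // Two rendered hypotheses: one aligned with the observation, one shifted by 3 pixels.
    EdgeMap aligned = observed;
    EdgeMap shifted(10, std::vector<bool>(10, false));
    for (int y = 2; y < 8; ++y) shifted[y][7] = true;

    std::cout << "aligned cost: " << chamferCost(aligned, dt) << "\n"; // ~0
    std::cout << "shifted cost: " << chamferCost(shifted, dt) << "\n"; // ~3
}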
Frontiers in Robotics and AI | 2016
Pedro Vicente; Lorenzo Jamone; Alexandre Bernardino
In this paper, we describe a novel approach to obtain automatic adaptation of the robot body schema and to improve the robot's perceptual and motor skills based on this body knowledge. Predictions obtained through a mental simulation of the body are combined with the real sensory feedback to achieve two objectives simultaneously: body schema adaptation and markerless 6D hand pose estimation. The body schema consists of a computer graphics simulation of the robot, which includes the arm and head kinematics (adapted online during the movements) and an appearance model of the hand shape and texture. The mental simulation process generates predictions of how the hand will appear in the robot camera images, based on the body schema and the proprioceptive information (i.e., motor encoders). These predictions are compared with the actual images in a Sequential Monte Carlo (particle-based) Bayesian estimation scheme that estimates the parameters of the body schema. The updated body schema improves the estimates of the 6D hand pose, which is then used in a closed-loop control scheme (i.e., visual servoing), enabling precise reaching. We report experiments with the iCub humanoid robot that support the validity of our approach. A number of simulations with precise ground truth were performed to evaluate the estimation capabilities of the proposed framework. Then, we show how the use of high-performance GPU programming and an edge-based algorithm for visual perception allow for real-time implementation in real-world scenarios.
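Schematically (our notation, not taken verbatim from the paper), if \beta denotes the body-schema parameters (e.g., joint offsets), q_t the encoder readings and z_t the camera images, the adaptation can be written as a recursive Bayesian estimate whose likelihood compares the image predicted by the mental simulation with the observed one:

p(\beta_t \mid z_{1:t}, q_{1:t}) \;\propto\; p(z_t \mid \beta_t, q_t) \int p(\beta_t \mid \beta_{t-1})\, p(\beta_{t-1} \mid z_{1:t-1}, q_{1:t-1})\, \mathrm{d}\beta_{t-1}

In the Sequential Monte Carlo approximation the posterior is represented by weighted particles, with weights updated as w_t^{(i)} \;\propto\; w_{t-1}^{(i)}\, p(z_t \mid \beta_t^{(i)}, q_t).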
International Conference on Robotics and Automation | 2017
Pedro Vicente; Lorenzo Jamone; Alexandre Bernardino
Vision-based grasping for humanoid robots is a challenging problem due to a multitude of factors. First, humanoid robots use an "eye-to-hand" kinematic configuration that, in contrast to the more common "eye-in-hand" configuration, demands a precise estimate of the position of the robot's hand. Second, humanoid robots have a long kinematic chain from the eyes to the hands, prone to accumulating calibration errors of the kinematic model, which offsets the measured hand-to-object relative pose from the real one. In this paper, we propose a method able to solve these two issues jointly. A robust pose estimate of the robot's hand is achieved via a 3D model-based stereo-vision algorithm, using an edge-based distance-transform metric and synthetically generated images of the robot's internal arm-hand computer-graphics model (kinematics and appearance). Then, a particle-based optimization method adapts the robot's internal model online to match the real and the synthetically generated images, effectively compensating for the kinematic calibration errors. We evaluate the proposed approach using a position-based visual-servoing method on the iCub robot, showing the importance of continuous visual feedback in humanoid grasping tasks.
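To make the control side concrete, the following C++ sketch closes a position-based visual-servoing loop on a hypothetical 2-link planar arm: the hand position fed to the controller comes from the nominal encoders corrected by estimated joint offsets, and the Cartesian error to the target is mapped to joint velocities through the inverted arm Jacobian. Link lengths, gains, offsets and the target position are illustrative assumptions; this is not the iCub controller.

// Position-based visual servoing on a toy 2-link planar arm (illustrative values).
#include <array>
#include <cmath>
#include <iostream>

const double l1 = 0.30, l2 = 0.25; // assumed link lengths [m]

// Forward kinematics: hand position for given joint angles.
std::array<double, 2> fk(double q1, double q2) {
    return {l1 * std::cos(q1) + l2 * std::cos(q1 + q2),
            l1 * std::sin(q1) + l2 * std::sin(q1 + q2)};
}

// 2x2 arm Jacobian mapping joint velocities to hand velocities.
std::array<std::array<double, 2>, 2> jacobian(double q1, double q2) {
    return {{{-l1 * std::sin(q1) - l2 * std::sin(q1 + q2), -l2 * std::sin(q1 + q2)},
             { l1 * std::cos(q1) + l2 * std::cos(q1 + q2),  l2 * std::cos(q1 + q2)}}};
}

int main() {
    const std::array<double, 2> offset = {0.08, -0.05}; // estimated joint offsets [rad]
    double q1 = 0.30, q2 = 0.90;                        // encoder readings [rad]
    const std::array<double, 2> target = {0.35, 0.25};  // object position from vision [m]
    const double lambda = 1.0, dt = 0.05;               // servo gain and control period

    for (int k = 0; k < 300; ++k) {
        // Hand position predicted by the *calibrated* model (encoders + offsets).
        const auto hand = fk(q1 + offset[0], q2 + offset[1]);
        const double ex = target[0] - hand[0], ey = target[1] - hand[1];
        if (std::hypot(ex, ey) < 1e-3) break;           // within 1 mm of the target

        // Joint velocity command: invert the 2x2 Jacobian on the Cartesian error.
        const auto J = jacobian(q1 + offset[0], q2 + offset[1]);
        const double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
        const double dq1 = lambda * ( J[1][1] * ex - J[0][1] * ey) / det;
        const double dq2 = lambda * (-J[1][0] * ex + J[0][0] * ey) / det;
        q1 += dq1 * dt;
        q2 += dq2 * dt;
    }
    const auto hand = fk(q1 + offset[0], q2 + offset[1]);
    std::cout << "final hand position: " << hand[0] << " " << hand[1] << "\n";
}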
Frontiers in Robotics and AI | 2018
Pedro Vicente; Lorenzo Jamone; Alexandre Bernardino
Humanoid robots are resourceful platforms and can be used in diverse application scenarios. However, their high number of degrees of freedom (i.e., moving arms, head and eyes) degrades the precision of eye-hand coordination. A good kinematic calibration is often difficult to achieve, due to several factors, e.g., unmodeled deformations of the structure or backlash in the actuators. This is particularly challenging for very complex robots such as the iCub humanoid robot, which has 12 degrees of freedom and cable-driven actuation in the serial chain from the eyes to the hand. The exploitation of real-time robot sensing is of paramount importance to increase the accuracy of the coordination, for example, to realize precise grasping and manipulation tasks. In this code paper, we propose an online and markerless solution to the eye-hand kinematic calibration of the iCub humanoid robot. We have implemented a sequential Monte Carlo algorithm estimating kinematic calibration parameters (joint offsets) that improve the eye-hand coordination based on the proprioception and vision sensing of the robot. We show the usefulness of the developed code and its accuracy in simulation and real-world scenarios. The code is written in C++ and CUDA, where we exploit the GPU to increase the speed of the method. The code is made available online along with a dataset for testing purposes.
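One standard ingredient of such a sequential Monte Carlo estimator is the resampling step, which concentrates particles on the joint-offset hypotheses that best explain the camera images. Below is a minimal, CPU-only C++ sketch of systematic resampling; the released implementation is C++/CUDA, so this snippet illustrates the technique rather than excerpting that code, and the example weights are made up.

// Systematic resampling: draw N particle indices with a single random offset,
// which has lower variance than multinomial resampling.
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

std::vector<int> systematicResample(const std::vector<double>& weights,
                                    std::mt19937& rng) {
    const int N = static_cast<int>(weights.size());
    const double total = std::accumulate(weights.begin(), weights.end(), 0.0);
    std::uniform_real_distribution<double> u01(0.0, 1.0 / N);
    const double start = u01(rng);

    std::vector<int> indices(N);
    double cumulative = weights[0] / total;
    int i = 0;
    for (int j = 0; j < N; ++j) {
        const double u = start + static_cast<double>(j) / N; // evenly spaced positions
        while (u > cumulative && i + 1 < N) {
            ++i;
            cumulative += weights[i] / total;
        }
        indices[j] = i;
    }
    return indices;
}

int main() {
    std::mt19937 rng(7);
    // Four hypotheses with very unequal weights: the third one dominates.
    const std::vector<double> w = {0.05, 0.10, 0.80, 0.05};
    const auto idx = systematicResample(w, rng);
    for (int i : idx) std::cout << i << ' ';   // mostly prints index 2
    std::cout << '\n';
}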
IEEE International Conference on Autonomous Robot Systems and Competitions | 2017
Pedro Vicente; Alexandre Bernardino
In this work, we propose to study a social robot in a wedding context, where it plays the role of a wedding ring bearer. We focus on the interaction with the audience, their expectations, and their reactions, rather than on technical details. We collect data from 121 individuals belonging to two different groups: those who saw the robot's behaviour (live or recorded) and those who did not see the robot's performance. We divide the study into three parts: i) the reactions of the guests at the wedding, ii) a comparison between subjects who were and were not exposed to the robot's behaviour, and iii) a within-subjects experiment in which, after filling in a survey, participants are asked to watch the recorded robot behaviour. The guests reacted positively to the experiment. The robot was considered likeable, lively and safe by the majority of the participants in the study. The group that observed the robot's behaviour had a better opinion of the use of robots in wedding ceremonies than the group that did not observe the experience. This may suggest that a greater presence of robots in social activities will increase the acceptance of robots in society.
Conference on Advanced Information Systems Engineering | 2011
Pedro Vicente; Miguel Mira da Silva
Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014
Pedro Vicente; Ricardo Ferreira; Lorenzo Jamone; Alexandre Bernardino
Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2017
Giovanni Saponaro; Pedro Vicente; Atabak Dehban; Lorenzo Jamone; Alexandre Bernardino; José Santos-Victor