Diego R. Faria
University of Coimbra
Publications
Featured research published by Diego R. Faria.
Robot and Human Interactive Communication | 2014
Diego R. Faria; Cristiano Premebida; Urbano Nunes
In this work, we propose an approach that relies on depth-perception cues from RGB-D images, where features related to human body motion (3D skeleton features) are fed to multiple learning classifiers in order to recognize human activities on a benchmark dataset. A Dynamic Bayesian Mixture Model (DBMM) is designed to combine multiple classifier likelihoods into a single form, assigning weights (obtained from an uncertainty measure) to counterbalance the likelihoods as a posterior probability. Temporal information is incorporated in the DBMM by means of prior probabilities, taking previous probabilistic inference into consideration to reinforce the current-frame classification. The publicly available Cornell Activity Dataset [1], with 12 different human activities, was used to evaluate the proposed approach. Reported results on the test set show that our approach outperforms state-of-the-art methods in terms of precision, recall and overall accuracy. This work enables activity classification in applications where human behaviour recognition is important, such as human-robot interaction and assisted living for elderly care.
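The fusion the abstract describes can be sketched compactly. Below is a minimal, illustrative NumPy version, assuming entropy as the uncertainty measure and the previous posterior as the temporal prior; it is a sketch of the general idea, not the authors' exact formulation.

```python
import numpy as np

def entropy_weights(val_likelihoods):
    """Weight each base classifier inversely to its average entropy on
    validation data; val_likelihoods: list of (n_samples, n_classes) arrays."""
    ent = np.array([-np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
                    for p in val_likelihoods])
    inv = 1.0 / (ent + 1e-12)
    return inv / inv.sum()          # normalised weights, one per classifier

def dbmm_update(prior, likelihoods, weights):
    """One classification step: prior (n_classes,) is the previous posterior
    (temporal information); likelihoods is (n_classifiers, n_classes)."""
    mixture = np.sum(weights[:, None] * likelihoods, axis=0)  # weighted mixture
    post = prior * mixture                                    # Bayesian update
    return post / post.sum()

# e.g. per frame: posterior = dbmm_update(prev_posterior, frame_likelihoods, w)
```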
Robotics and Autonomous Systems | 2012
Diego R. Faria; Ricardo Martins; Jorge Lobo; Jorge Dias
Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our man-made world of artefacts and devices. In this work, we study how humans manipulate simple daily objects and construct a probabilistic representation model of the tasks and objects that is useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human hand and the object points of view. The multimodal data acquisition system records human gaze, 6D pose of the hand and fingers, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth maps, as well as the object's 6D pose and tactile forces on the object, using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces and hand-object states. This enables modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation is derived for task planning in artificial systems. An object-centred probabilistic volumetric model is proposed to fuse the multimodal data and to map contact regions, gaze and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics and by overlaying the contact points, taking the task context into account. Results show that the extracted features are sufficient to distinguish the key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed throughout (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning and sequential in-hand rotation (dexterous manipulation). The presented framework retains the relevant data from human demonstrations, concerning both the manipulation and the object characteristics, for future grasp planning in artificial systems performing autonomous grasping.
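One ingredient of such an object-centred volumetric model can be illustrated as follows: demonstrated contact points are registered into the object frame and accumulated as force-weighted votes per voxel. The interfaces, grid parameters and weighting below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def contact_map(contacts, forces, obj_poses, grid_res=0.005, grid_dim=64):
    """contacts: (n, 3) contact points in the world frame; forces: (n,) tactile
    magnitudes; obj_poses: 4x4 world-to-object transforms, one per contact."""
    grid = np.zeros((grid_dim,) * 3)
    origin = -grid_res * grid_dim / 2.0        # grid centred on the object frame
    for p, f, T in zip(contacts, forces, obj_poses):
        p_obj = (T @ np.append(p, 1.0))[:3]    # register contact to object frame
        idx = np.floor((p_obj - origin) / grid_res).astype(int)
        if np.all((idx >= 0) & (idx < grid_dim)):
            grid[tuple(idx)] += f              # force-weighted vote per voxel
    return grid / max(grid.max(), 1e-12)       # normalise to [0, 1]
```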
Intelligent Robots and Systems | 2009
Diego R. Faria; Jorge Dias
In this work we present the segmentation and classification of 3D hand trajectories. Curvature features are described by (r, θ, h), and the hand orientation is obtained by approximating the hand plane in 3D space. The 3D positions of the hand movement are acquired via markers of a magnetic tracking system [6]. Observing human movements, we perform a learning phase using histogram techniques. Based on the learning phase, it is possible to classify reach-to-grasp movements by applying Bayes' rule, recognising the way a human grasps an object through continuous classification based on multiplicative updates of beliefs. We classify the hand trajectory by its curvatures and by the hand orientation along the trajectory, each individually. Both results are compared over several trials to determine which of the two classifications performs best. Using entropy as a confidence measure, we assign a weight to each classification and combine them, obtaining a new combined classification for comparison. Using these techniques, we developed an application to estimate and classify two possible grasp types from the reach-to-grasp movements performed by humans. These steps are important for understanding human behaviours prior to object manipulation and can be used to endow a robot with autonomous capabilities (e.g. reaching objects for handling).
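A rough sketch of this continuous classification scheme: beliefs over grasp types are updated multiplicatively with Bayes' rule at each observation, and the two individual classifiers (curvature and hand orientation) are combined with entropy-based confidence weights. The function names and combination rule are illustrative assumptions.

```python
import numpy as np

def bayes_step(belief, likelihood):
    """Multiplicative belief update for one observation; both are (n_classes,)."""
    belief = belief * likelihood
    return belief / belief.sum()

def entropy_confidence(belief):
    """Higher confidence for lower-entropy (more peaked) beliefs, in [0, 1]."""
    h = -np.sum(belief * np.log(belief + 1e-12))
    return 1.0 - h / np.log(len(belief))

def combine(belief_curvature, belief_orientation):
    """Confidence-weighted combination of the two individual classifications."""
    w1 = entropy_confidence(belief_curvature)
    w2 = entropy_confidence(belief_orientation)
    combined = (w1 * belief_curvature + w2 * belief_orientation) / (w1 + w2)
    return combined / combined.sum()
```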
Robot and Human Interactive Communication | 2015
Diego R. Faria; Mario Vieira; Cristiano Premebida; Urbano Nunes
In this work, we present a human-centered robot application in the scope of daily activity recognition towards robot-assisted living. Our approach consists of a probabilistic ensemble of classifiers forming a dynamic mixture model grounded in Bayesian probability, where each base classifier contributes to the inference in proportion to its posterior belief. The classification model relies on the confidence obtained from an uncertainty measure that assigns a weight to each base classifier to counterbalance the joint posterior probability. Spatio-temporal 3D skeleton-based features extracted from RGB-D sensor data are modeled to characterize daily activities, including risk situations (e.g., falling down, running or jumping in a room). To assess the proposed approach, challenging public datasets such as MSR-Action3D and MSR-Activity3D [1][2] were used to compare our results with other recent methods. Reported results show that our approach outperforms state-of-the-art methods in terms of overall accuracy. Moreover, we implemented our approach in the Robot Operating System (ROS) environment to validate the DBMM running on-the-fly on a mobile robot with an onboard RGB-D sensor, identifying daily activities for a robot-assisted living application.
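An on-the-fly deployment like the one described could be structured as a simple ROS node that subscribes to skeleton features and publishes activity labels. The sketch below is illustrative only: the topic names, message types and classifier interface are assumptions, not the authors' code.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray, String

class StubClassifier:
    def predict(self, feats):
        return 'unknown'  # placeholder; a trained DBMM wrapper would go here

class ActivityNode:
    def __init__(self, classifier):
        self.classifier = classifier
        self.pub = rospy.Publisher('/activity/label', String, queue_size=1)
        # assumed topic carrying per-frame 3D skeleton features
        rospy.Subscriber('/skeleton/features', Float32MultiArray, self.callback)

    def callback(self, msg):
        label = self.classifier.predict(msg.data)   # per-frame inference
        self.pub.publish(String(data=label))

if __name__ == '__main__':
    rospy.init_node('activity_recognition')
    ActivityNode(StubClassifier())
    rospy.spin()
```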
Intelligent Robots and Systems | 2010
Diego R. Faria; Ricardo Martins; Jorge Lobo; Jorge Dias
This work presents a representation of 3D object shape using a probabilistic volumetric map derived from in-hand exploration. The exploratory procedure is based on contour following through fingertip movements on the object surface. We first consider the simple case of single-hand exploration of a static object. The cumulative pose data provide a 3D point cloud that is quantized into the probabilistic volumetric map. For each voxel we maintain a probability distribution over the occupancy percentage. This is then extended to in-hand exploration of non-static objects. Since the object moves during in-hand exploration, and we also consider the use of the other hand for re-grasping, the object pose has to be tracked. By keeping track of the object motion we can register data to the initial pose and build a consistent object representation. An object-centered representation is implemented using the computed object center of mass to define its frame of reference. Results are presented for in-hand exploration of both static and non-static objects, showing that valid models can be obtained. The 3D probabilistic object representation can be used in several applications related to grasp generation tasks.
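The quantization step can be sketched as follows, assuming the fingertip positions have already been registered to the initial object pose; the grid parameters and the centroid used as a center-of-mass proxy are illustrative assumptions.

```python
import numpy as np

def occupancy_map(points, voxel_size=0.005, grid_dim=64):
    """points: (n, 3) registered fingertip positions; returns per-voxel
    occupancy values normalised to [0, 1]."""
    centroid = points.mean(axis=0)                  # object-centered frame origin
    origin = centroid - voxel_size * grid_dim / 2.0
    counts = np.zeros((grid_dim,) * 3)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < grid_dim), axis=1)
    for i in idx[valid]:
        counts[tuple(i)] += 1                       # accumulate fingertip evidence
    return counts / max(counts.max(), 1)
```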
Robotics and Autonomous Systems | 2014
Diego R. Faria; Pedro Trindade; Jorge Lobo; Jorge Dias
Humans excel at everyday manipulation tasks, being able to learn new skills and to adapt to different complex environments. This results from lifelong learning, and also from observing other skilled humans. To obtain similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multi-sensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied in different contexts. Based on this strategy, we show in this paper that learning from human experiences is a way to accomplish our goal of robot grasp synthesis for unknown objects. In this article we present an artificial system that relies on knowledge from previous human object-grasping demonstrations. A learning process is adopted to quantify probabilistic distributions and uncertainty. These distributions are combined with preliminary knowledge to infer proper grasps given the point cloud of an unknown object. We designed a method that comprises a twofold process: object decomposition and grasp synthesis. Objects are decomposed into primitives, across which similarities between past observations and new, unknown objects can be established. Grasps are associated with the defined object primitives, so that feasible object regions for grasping can be determined. The hand pose relative to the object is computed for the pre-grasp and the selected grasp. We validated our approach on a real robotic platform: a dexterous robotic hand. Results show that segmenting the object into primitives allows the most suitable regions for grasping to be identified based on previous learning. The proposed approach provides suitable grasps, better than more time-consuming analytical and geometrical approaches, contributing to autonomous grasping.
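Conceptually, the twofold process can be sketched like this: each primitive of the unknown object is matched against previously demonstrated primitives, and the grasp region with the best similarity-weighted learned probability is selected. The descriptor and data structures below are assumptions for illustration.

```python
import numpy as np

def primitive_descriptor(points):
    """Crude shape descriptor: primitive dimensions from its bounding box."""
    return points.max(axis=0) - points.min(axis=0)

def select_grasp(object_primitives, learned):
    """object_primitives: list of (n, 3) point clouds, one per primitive.
    learned: list of (descriptor, grasp_region, probability) tuples obtained
    from human demonstrations (assumed representation)."""
    best = None
    for prim in object_primitives:
        d = primitive_descriptor(prim)
        for d_ref, region, prob in learned:
            score = prob / (1.0 + np.linalg.norm(d - d_ref))  # similarity-weighted
            if best is None or score > best[0]:
                best = (score, region)
    return best[1] if best else None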
Pattern Recognition Letters | 2017
Urbano Miguel Nunes; Diego R. Faria; Paulo Peixoto
This paper presents a novel framework for human daily activity recognition that is designed to rely on few training examples while exhibiting fast training times, making it suitable for real-time applications. The proposed framework starts with a feature extraction stage, where each activity is divided into variable-size actions based on key poses. Each action window is delimited by two consecutive, automatically identified key poses, from which static (i.e. geometrical) and max-min dynamic (i.e. temporal) features are extracted. These features are first used to train a random forest (RF) classifier, which was tested on the CAD-60 dataset, obtaining notable overall average results. In a second stage, an extension of the RF is proposed in which the differential evolution meta-heuristic algorithm is used as the node-splitting methodology. The main advantage of its inclusion is that the differential evolution random forest has no thresholds to tune, but rather a few adjustable parameters with well-defined behavior.
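A hedged sketch of the first stage: per-window static and max-min dynamic features feed a standard random forest (the differential-evolution splitting extension is not reproduced here, and the exact feature definitions are assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(skeleton_window):
    """skeleton_window: (frames, joints*3) poses between two consecutive
    key poses; returns one feature vector per action window."""
    static = skeleton_window.mean(axis=0)                                # geometric summary
    dynamic = skeleton_window.max(axis=0) - skeleton_window.min(axis=0)  # max-min temporal range
    return np.concatenate([static, dynamic])

def train(window_list, labels):
    """window_list: list of per-window skeleton arrays; labels: activities."""
    X = np.stack([window_features(w) for w in window_list])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```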
Intelligent Robots and Systems | 2016
Claudio Coppola; Diego R. Faria; Urbano Nunes; Nicola Bellotto
Social activity based on body motion is a key component of non-verbal and physical behaviour, serving as a communicative signal in social interaction between individuals. Social activity recognition is important for studying human-human communication and also human-robot interaction. This research therefore has three goals: (1) recognising social behaviour (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies with social features from the relationship between two individuals; (2) learning priors based on the physical proximity between individuals during an interaction, using proxemics theory, to feed a probabilistic ensemble of activity classifiers; and (3) providing a public dataset with RGB-D data of social daily activities, including risk situations, useful for testing assisted-living approaches, since this type of dataset is still missing. Results show that the proposed approach, designed to merge features with different semantics and proximity priors, improves the classification performance in terms of precision, recall and accuracy when compared with approaches that employ alternative strategies.
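The proximity prior can be illustrated as follows. The zone boundaries come from Hall's proxemics theory; the per-zone class priors and how they feed the ensemble are assumptions (in the actual work they would be learned from data).

```python
import numpy as np

def proxemic_zone(distance_m):
    """Map interpersonal distance (metres) to a proxemic zone (Hall's theory)."""
    if distance_m < 0.45:
        return 'intimate'
    if distance_m < 1.2:
        return 'personal'
    if distance_m < 3.6:
        return 'social'
    return 'public'

def apply_prior(class_posteriors, distance_m, zone_priors):
    """class_posteriors: (n_classes,) from the classifier ensemble;
    zone_priors: dict zone -> (n_classes,) prior (assumed, learned offline)."""
    prior = zone_priors[proxemic_zone(distance_m)]
    post = prior * class_posteriors
    return post / post.sum()
```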
Intelligent Robots and Systems | 2015
Cristiano Premebida; Diego R. Faria; Francisco A. de Souza; Urbano Nunes
In this paper we study the problem of classifying scenarios, in terms of semantic categories, based on data gathered from sensors mounted on board mobile robots operating indoors. Once the data are transformed to feature space, supervised classification is performed by a probabilistic approach called the Dynamic Bayesian Mixture Model (DBMM). This approach combines class-conditional probabilities from supervised learning models and incorporates past inferences. Several experiments on multi-class semantic place classification are reported, based on publicly available datasets. The experiments were conducted in such a way that generalization aspects are emphasized, which is particularly important in real-world applications. Benchmark results show the effectiveness and competitive performance of the DBMM method, in terms of classification rates, using features extracted from 2D range data and from an RGB-D (Kinect) sensor.
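For a sense of what features from 2D range data can look like, here is a minimal sketch of simple geometric features computed from a single laser scan; this particular feature set is illustrative, not the paper's exact one.

```python
import numpy as np

def scan_features(ranges):
    """ranges: (n_beams,) laser range readings for one 2D scan."""
    diffs = np.abs(np.diff(ranges))
    return np.array([
        ranges.mean(),        # average free space around the robot
        ranges.std(),         # variability of the range profile
        diffs.mean(),         # average beam-to-beam difference
        (diffs > 0.5).sum(),  # count of large jumps (gaps, doorways)
    ])
```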
Robot | 2016
Mario Vieira; Diego R. Faria; Urbano Nunes
In this work, we present a real-time application in the scope of human daily activity recognition for robot-assisted living, extending our previous work [1]. We implemented our approach in the Robot Operating System (ROS) environment, combining several modules so that a robot can perceive the environment through different sensor modalities. Thus, the robot can move around, detect, track and follow a person to monitor daily activities wherever the person is. We focus mainly on the robotic application, integrating ROS modules for navigation, activity recognition and decision making. Reported results show that our framework accurately recognizes human activities in real time, triggering proper robot (re)actions, including spoken feedback for warnings and/or appropriate robot navigation tasks. The results demonstrate the potential of our approach for robot-assisted living applications.
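The decision-making step that maps recognised activities to robot (re)actions could be as simple as a lookup table. The activity names, action verbs and mapping below are illustrative assumptions only.

```python
# Assumed activity-to-action mapping; spoken warnings and navigation
# behaviours stand in for the framework's actual decision module.
RESPONSES = {
    'falling down': ('speak', 'Are you OK? Calling for help.'),
    'running':      ('speak', 'Please be careful.'),
    'walking':      ('follow', None),   # keep the person in view
}

def decide(activity):
    """Return the (action, argument) pair triggered by a recognised activity."""
    return RESPONSES.get(activity, ('monitor', None))
```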