Featured Research

Robotics

A Deep Learning-Based Autonomous Robot Manipulator for Sorting Application

Robot manipulation and grasping mechanisms have received considerable attention in the recent past, leading to the development of a wide range of industrial applications. This paper proposes the development of an autonomous robotic grasping system for an object sorting application. The robot uses RGB-D data to perform object detection, pose estimation, trajectory generation, and object sorting. The proposed approach can also grasp specific objects chosen by the user. Trained convolutional neural networks perform object detection and determine the corresponding point cloud cluster of the object to be grasped. From the selected point cloud data, a grasp generator algorithm outputs potential grasps, a grasp filter scores them, and the highest-scored grasp is executed on a real robot. A motion planner generates collision-free trajectories to execute the chosen grasp. Experiments on an AUBO robotic manipulator demonstrate the potential of the proposed approach for autonomous object sorting, with robust and fast sorting performance.
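The grasp-filtering step described above (generate candidates, score them, execute the best) can be sketched in a few lines. The candidate format, both score terms, and all names below are illustrative assumptions of this sketch, not the paper's actual scoring function:

```python
import math

def score_grasp(grasp, cloud_centroid):
    """Toy grasp score: prefer grasps close to the point cloud cluster's
    centroid whose approach direction is near vertical (top-down pick)."""
    dx = grasp["position"][0] - cloud_centroid[0]
    dy = grasp["position"][1] - cloud_centroid[1]
    dz = grasp["position"][2] - cloud_centroid[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    verticality = -grasp["approach"][2]  # approach pointing down scores high
    return verticality - dist

def select_best_grasp(candidates, cloud_centroid):
    """Grasp filter: score every candidate and keep the highest-scored one."""
    return max(candidates, key=lambda g: score_grasp(g, cloud_centroid))

candidates = [
    {"position": (0.0, 0.0, 0.1), "approach": (0.0, 0.0, -1.0)},  # centered, top-down
    {"position": (0.3, 0.0, 0.1), "approach": (0.0, 0.0, -1.0)},  # off-center
    {"position": (0.0, 0.0, 0.1), "approach": (1.0, 0.0, 0.0)},   # sideways approach
]
best = select_best_grasp(candidates, (0.0, 0.0, 0.1))
```

In the paper, the chosen grasp would then be handed to the motion planner for collision-free execution.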

Read more
Robotics

A Differentiable Contact Model to Extend Lagrangian and Hamiltonian Neural Networks for Modeling Hybrid Dynamics

The incorporation of appropriate inductive bias plays a critical role in learning dynamics from data. A growing body of work has explored ways to enforce energy conservation in the learned dynamics by incorporating Lagrangian or Hamiltonian dynamics into the design of the neural network architecture. However, these existing approaches are based on differential equations, which do not allow discontinuities in the states and thereby limit the class of systems one can learn. Real systems, such as legged robots and robotic manipulators, involve contacts and collisions, which introduce discontinuities in the states. In this paper, we introduce a differentiable contact model that can capture contact mechanics, both frictionless and frictional, as well as both elastic and inelastic. The model can also accommodate inequality constraints, such as limits on the joint angles. The proposed contact model extends the scope of Lagrangian and Hamiltonian neural networks by allowing simultaneous learning of contact properties and system properties. We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction.
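The paper's contact model is differentiable and learned; as a rough illustration of the classical mechanics it unifies, here is a plain impulse-based update for a single contact, covering the elastic/inelastic spectrum via the restitution coefficient `e` and the frictionless/frictional spectrum via the Coulomb coefficient `mu`. The unit-mass simplification and all names are assumptions of this sketch, not the paper's formulation:

```python
def contact_update(vn, vt, e, mu):
    """Post-impact velocities for a single frictional contact (unit mass).

    vn: pre-impact normal velocity (negative = approaching the surface)
    vt: pre-impact tangential (sliding) velocity
    e:  coefficient of restitution (1 = elastic, 0 = perfectly inelastic)
    mu: Coulomb friction coefficient (0 = frictionless)
    """
    vn_plus = -e * vn            # Newton's restitution law: v+ = -e * v-
    jn = vn_plus - vn            # normal impulse producing that change
    # Coulomb friction: the tangential impulse that would stop sliding,
    # clamped to the friction cone |jt| <= mu * jn
    jt = max(-mu * jn, min(mu * jn, -vt))
    return vn_plus, vt + jt
```

For example, an elastic frictionless impact simply reverses the normal velocity, while a frictional impact also reduces the sliding velocity up to the friction-cone bound.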

Read more
Robotics

A Dual-arm Robot that Autonomously Lifts Up and Tumbles Heavy Plates Using Crane Pulley Blocks

This paper develops a planner that plans the action sequences and motion for a dual-arm robot to lift up and flip heavy plates using crane pulley blocks. The problem is motivated by the low payload of modern collaborative robots. Instead of directly manipulating heavy plates that exceed what collaborative robots can bear, the paper develops a planner for collaborative robots to operate crane pulley blocks. The planner assumes a target plate is pre-attached to the crane hook. It optimizes dual-arm action sequences and plans the robot's dual-arm motion for pulling the rope of the crane pulley blocks to lift up the plate. The crane pulley blocks reduce the payload that each robotic arm needs to bear. When the plate is lifted to a satisfactory pose, the planner plans a pushing motion for one of the robot arms to tumble the plate over while considering force and moment constraints. The article presents the technical details of the planner, along with several experiments and analyses carried out using a dual-arm robot built from two Universal Robots UR3 arms. The influence of various parameters and optimization goals is investigated and compared in depth. The results show that the proposed planner is flexible and efficient.
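The payload reduction the planner exploits follows from the ideal pulley relation: with n supporting rope lines, the pulling force drops by a factor of n while the length of rope to pull grows by the same factor. A minimal sketch of that trade-off (the function name and the frictionless-pulley idealization are assumptions here):

```python
def rope_pull_per_arm(plate_weight, num_lines, lift_height):
    """Ideal crane pulley block with `num_lines` supporting rope lines:
    the pull force is weight / num_lines, but the rope to be pulled
    is num_lines * lift_height (frictionless, massless pulleys assumed)."""
    force = plate_weight / num_lines
    rope_length = num_lines * lift_height
    return force, rope_length
```

This is why a low-payload arm such as a UR3 can raise a plate far heavier than its rated payload, at the cost of a longer pulling motion.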

Read more
Robotics

A Fleet Learning Architecture for Enhanced Behavior Predictions during Challenging External Conditions

Driver assistance systems already help to make daily traffic more comfortable and safer. However, there are still situations that are rare yet hard to handle. To cope with these situations and to bridge the gap towards fully automated driving, it becomes necessary not only to collect enormous amounts of data but, more importantly, the right data. This data can be used to develop and validate the systems through machine learning and simulation pipelines. Along this line, this paper presents a fleet learning-based architecture that enables continuous improvement of systems predicting the movement of surrounding traffic participants. Moreover, the presented architecture is applied to a testing vehicle in order to prove the fundamental feasibility of the system. Finally, it is shown that the system collects meaningful data that help to improve the underlying prediction systems.

Read more
Robotics

A Graph Neural Network to Model Disruption in Human-Aware Robot Navigation

Autonomous navigation is a key skill for assistive and service robots. To be successful, robots have to minimise the disruption caused to humans while moving. This implies predicting how people will move and complying with social conventions, such as avoiding disrupting personal spaces, people's paths, and their interactions. This paper leverages Graph Neural Networks to model robot disruption considering the movement of the humans and the robot, so that the resulting model can be used by path planning algorithms. Along with the model, this paper presents an evolution of the dataset SocNav1 [25] which considers the movement of the robot and the humans, and an updated scenario-to-graph transformation which is tested using different Graph Neural Network blocks. The trained model achieves close-to-human performance on the dataset. Beyond its accuracy, the main advantage of the approach is its scalability in the number of social factors that can be considered, in comparison with handcrafted models. The dataset and the model are available in a public repository (this https URL).
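The scenario-to-graph idea can be illustrated with one round of message passing over a tiny scenario graph whose nodes are the robot and nearby humans. The mean aggregation, the 0.5/0.5 mixing, and the scalar features are all simplifying assumptions of this sketch; the paper's GNN blocks are learned:

```python
def message_passing(feats, edges):
    """One round of mean-aggregation message passing over a scenario graph.
    feats: node -> scalar feature; edges: undirected (node, node) pairs."""
    neighbors = {n: [] for n in feats}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = {}
    for n in feats:
        msgs = [feats[m] for m in neighbors[n]]
        agg = sum(msgs) / len(msgs) if msgs else 0.0
        out[n] = 0.5 * feats[n] + 0.5 * agg  # combine self and neighbourhood
    return out

# Robot connected to one human, who is interacting with a second human.
feats = {"robot": 1.0, "h1": 0.0, "h2": 0.0}
edges = [("robot", "h1"), ("h1", "h2")]
out = message_passing(feats, edges)
```

Stacking such rounds lets information about the robot's motion propagate to humans it does not directly neighbour, which is what makes the approach scale with the number of social factors.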

Read more
Robotics

A Hierarchical Architecture for Human-Robot Cooperation Processes

In this paper we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on (i) an in-the-loop decision-making process for the operations of collaborative robots, coping with the variability of actions carried out by human operators, and (ii) the representation level, integrating a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic. The architecture is validated in experiments including collaborative furniture assembly and object positioning tasks.
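An AND/OR graph at the representation level encodes which combinations of sub-tasks achieve a goal: AND nodes require all children, OR nodes any one child (e.g. either the human or the robot may attach the legs). A minimal evaluator, with a made-up furniture-assembly graph for illustration (the paper's graphs and their First Order Logic specification are richer):

```python
def solved(graph, node, done):
    """Check whether a node of an AND/OR task graph is achieved.
    graph: node -> (kind, children); leaves have no children and are
    achieved when they appear in the set `done` of completed actions."""
    kind, children = graph[node]
    if not children:
        return node in done
    results = [solved(graph, c, done) for c in children]
    return all(results) if kind == "AND" else any(results)

# Hypothetical table-assembly task: legs can be attached by either agent.
graph = {
    "assemble_table": ("AND", ["legs_done", "top_done"]),
    "legs_done": ("OR", ["robot_legs", "human_legs"]),
    "top_done": ("AND", []),
    "robot_legs": ("AND", []),
    "human_legs": ("AND", []),
}
```

An in-the-loop decision maker can re-evaluate such a graph after every observed human action to pick the robot's next operation.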

Read more
Robotics

A Hierarchical Multi-Robot Mapping Architecture Subject to Communication Constraints

Multi-robot systems are an efficient way to explore and map an unknown environment. Simultaneous localization and mapping (SLAM) is common for single-robot systems; with multiple robots, the robots can additionally share their map data to merge a larger global map. This thesis contributes to the multi-robot mapping problem by considering cases in which robots have a limited communication range. The architecture coordinates a team of robots and a central server to explore an unknown environment by exploiting a hierarchical choice structure. The coordination algorithms ensure that the robots, in hierarchical order, choose frontier points that provide maximum information gain while maintaining viable communication amongst themselves and the central computer through an ad-hoc relay network. In addition, the robots employ a backup choice algorithm when no valid frontier points remain, arranging the communication relay network as a fireline back to the source. This work contributes a scalable, efficient, and robust architecture for hybrid multi-robot mapping systems that take communication range limitations into account. The architecture is tested in a simulation environment using various maps.
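The core choice rule above (maximum information gain subject to staying in communication range of a relay) can be sketched as a filtered argmax. The frontier format, scalar gain values, and single-relay simplification are assumptions of this sketch, not the thesis's algorithm:

```python
import math

def choose_frontier(frontiers, relay_pos, comm_range):
    """Pick the frontier with the highest information gain among those that
    keep the robot within communication range of its relay; return None when
    no frontier is viable (triggering the backup relay-chain behaviour)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    viable = [f for f in frontiers if dist(f["pos"], relay_pos) <= comm_range]
    if not viable:
        return None
    return max(viable, key=lambda f: f["gain"])

frontiers = [
    {"pos": (9.0, 0.0), "gain": 10.0},  # highest gain, but out of range
    {"pos": (3.0, 0.0), "gain": 4.0},
    {"pos": (2.0, 2.0), "gain": 6.0},
]
best = choose_frontier(frontiers, relay_pos=(0.0, 0.0), comm_range=5.0)
```

Note how the globally best frontier is rejected for violating the range constraint; the `None` branch is where the fireline backup would take over.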

Read more
Robotics

A Hybrid Learner for Simultaneous Localization and Mapping

Simultaneous localization and mapping (SLAM) predicts the motion path of a moving platform from location coordinates while precisely mapping the physical environment. SLAM has great potential in augmented reality (AR), autonomous vehicles such as self-driving cars, drones, and autonomous navigation robots (ANR). This work introduces a hybrid learning model that goes beyond feature fusion and applies a multimodal weight sewing strategy to improve the performance of a baseline SLAM algorithm. It enhances the weights of the SLAM front-end feature extractor by mutating the top layers of different deep networks. At the same time, the trajectory predictions from independently trained models are amalgamated to refine the location estimate. The integration of these early and late fusion techniques under a hybrid learning framework thus minimizes the translation and rotation errors of the SLAM model. The study exploits several well-known deep learning (DL) architectures, including ResNet18, ResNet34, ResNet50, ResNet101, VGG16, VGG19, and AlexNet, for experimental analysis. Extensive experiments show that the hybrid learner (HL) achieves significantly better results than unimodal approaches and than multimodal approaches with early or late fusion strategies alone. Moreover, to the best of our knowledge, the Apolloscape dataset used in this work has not previously been used in the SLAM literature with fusion techniques, which makes this work unique and insightful.
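The late-fusion half of the pipeline (amalgamating trajectory predictions from independently trained models) can be sketched as a weighted average of corresponding poses. The 2D pose tuples, uniform default weights, and function name are assumptions of this sketch, not the paper's amalgamation scheme:

```python
def fuse_trajectories(predictions, weights=None):
    """Late fusion: combine per-model trajectory predictions by a weighted
    average of corresponding (x, y) poses; uniform weights by default."""
    n = len(predictions)
    if weights is None:
        weights = [1.0 / n] * n
    fused = []
    for t in range(len(predictions[0])):
        x = sum(w * p[t][0] for w, p in zip(weights, predictions))
        y = sum(w * p[t][1] for w, p in zip(weights, predictions))
        fused.append((x, y))
    return fused

# Two models disagree on the second pose; fusion splits the difference.
fused = fuse_trajectories([[(0.0, 0.0), (2.0, 0.0)],
                           [(0.0, 0.0), (0.0, 2.0)]])
```

Weights could instead be derived from each model's validation error, favouring the more reliable backbone.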

Read more
Robotics

A Modeled Approach for Online Adversarial Test of Operational Vehicle Safety (extended version)

Scenario-based testing of operational vehicle safety presents a set of principal other vehicle (POV) trajectories that seek to force the subject vehicle (SV) into a certain safety-critical situation. Current scenarios are mostly (i) statistics-driven: inspired by human driver crash data; (ii) deterministic: POV trajectories are pre-determined and independent of SV responses; and (iii) overly simplified: defined over a finite set of actions performed at the abstracted motion planning level. Such scenario-based testing (i) lacks severity guarantees, (ii) has predefined maneuvers that make it easy for an SV with intelligent driving policies to game the test, and (iii) is inefficient at producing safety-critical instances with limited and expensive testing effort. We propose a model-driven online feedback control policy for multiple POVs that propagates efficient adversarial trajectories while respecting traffic rules and other concerns formulated as an admissible state-action space. The approach is formulated in an anchor-template hierarchy, with planning on the template model inducing a theoretical SV-capturing guarantee under standard assumptions. The planned adversarial trajectory is then tracked by a lower-level controller applied to the full system or the anchor model. The effectiveness of the methodology is illustrated through various simulated examples with the SV controlled by either parameterized self-driving policies or human drivers.
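The key contrast with deterministic scenarios is that each POV action is chosen online, as feedback on the SV's current state, but only from an admissible set. A one-dimensional toy version (the states, the speed-limit constraint, the time step, and the greedy gap-closing objective are all illustrative assumptions, far simpler than the paper's template-model planner):

```python
def adversarial_action(pov_state, sv_state, actions, v_max, dt=0.1):
    """Online feedback rule for one POV: among accelerations that keep its
    speed in the admissible range [0, v_max], pick the one that most
    reduces the longitudinal gap to the SV after one step."""
    feasible = [a for a in actions if 0.0 <= pov_state["v"] + a * dt <= v_max]
    def gap_after(a):
        v = pov_state["v"] + a * dt          # speed after applying action
        x = pov_state["x"] + v * dt          # position after one step
        return abs(sv_state["x"] - x)        # remaining gap to the SV
    return min(feasible, key=gap_after)

# POV behind the SV chooses the strongest admissible acceleration.
choice = adversarial_action({"x": 0.0, "v": 10.0}, {"x": 5.0},
                            actions=[-2.0, 0.0, 2.0], v_max=12.0)
```

Because the rule reacts to the SV's actual state at every step, an intelligent SV policy cannot game it the way it can game a pre-recorded trajectory.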

Read more
Robotics

A Multisensory Learning Architecture for Rotation-invariant Object Recognition

This study presents a multisensory machine learning architecture for object recognition, employing a novel dataset constructed with the iCub robot, which is equipped with three cameras and a depth sensor. The proposed architecture combines convolutional neural networks, which form representations (i.e., features) of the grayscaled color images, with a multi-layer perceptron that processes the depth data. The aim is to learn joint representations of the different modalities (e.g., color and depth) and employ them for recognizing objects. We evaluate the proposed architecture by benchmarking it against models trained separately on the input of each sensor and against a state-of-the-art data fusion technique, namely decision-level fusion. The results show that our architecture improves recognition accuracy compared with models that use input from a single modality and with the decision-level multimodal fusion method.
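The decision-level fusion baseline mentioned above combines the outputs of independently trained per-modality classifiers rather than their features. A minimal sketch (the class names, probabilities, and equal-weight averaging are illustrative assumptions):

```python
def decision_level_fusion(prob_color, prob_depth):
    """Decision-level fusion baseline: average the class posteriors of the
    color-image model and the depth model, then return the argmax class."""
    fused = {c: 0.5 * (prob_color[c] + prob_depth[c]) for c in prob_color}
    return max(fused, key=fused.get)

# The color model slightly favours "cup", but the depth model is more
# confident about "ball", so the fused decision flips to "ball".
label = decision_level_fusion({"cup": 0.6, "ball": 0.4},
                              {"cup": 0.3, "ball": 0.7})
```

The paper's joint-representation architecture instead fuses earlier, letting the network learn cross-modal features, which is what the reported accuracy gain over this baseline measures.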

Read more
