Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mohamad Bdiwi is active.

Publication


Featured research published by Mohamad Bdiwi.


Intelligent Robots and Systems | 2012

Library automation using different structures of vision-force robot control and automatic decision system

Mohamad Bdiwi; Jozef Suchy

Most robot applications require the robot to interact with the environment, with objects, or even with humans, which makes it necessary to integrate vision and force information. In general, there are five types of vision-force control: pure position control, pure force control, traded control, shared control and hybrid control. The important questions are: how to define the most appropriate control mode for every part of a task, and when the control system should switch from one control mode to another. In this work an automatic decision system is suggested which defines the most appropriate control mode for successive tasks and chooses the optimal structure of vision/force control without human intervention. The research focuses on library automation as a real application of the proposed control system, covering shelving, storage and sorting of inaccurately placed objects.
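The switching between the five control modes can be sketched as a small decision function driven by the vision-measured distance and the sensed contact force. The thresholds and rules below are hypothetical illustrations, since the abstract does not publish the actual decision criteria:

```python
from enum import Enum, auto

class ControlMode(Enum):
    PURE_POSITION = auto()
    PURE_FORCE = auto()
    TRADED = auto()
    SHARED = auto()
    HYBRID = auto()

def select_mode(distance_mm: float, contact_force_n: float,
                near_threshold_mm: float = 50.0,
                contact_threshold_n: float = 1.0) -> ControlMode:
    """Pick a control mode from vision (distance) and force readings.

    All thresholds are illustrative assumptions, not values from the paper.
    """
    if contact_force_n >= contact_threshold_n:
        # In contact: hybrid control splits the axes between
        # position-controlled and force-controlled directions.
        return ControlMode.HYBRID
    if distance_mm <= near_threshold_mm:
        # Approaching contact: blend vision and force (shared control).
        return ControlMode.SHARED
    # Free-space motion: vision-guided position control is sufficient.
    return ControlMode.PURE_POSITION
```

A real decision system would also account for the task phase (e.g. traded control when handing authority from vision to force mid-task), which this sketch omits.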


International Multi-Conference on Systems, Signals and Devices | 2011

Traded and shared vision-force robot control for improved impact control

Mohamad Bdiwi; Alexander Winkler; Jozef Suchy; G. Zschocke

Many automated manufacturing processes require robots to interact with the environment and to handle force/torque interaction, for example in mechanical assembly. Importantly, impact forces occur when robot and environment come into contact, and robot manipulators and control systems can become unstable or perform poorly after such an impact. In this paper we combine vision and force feedback in shared/traded vision-force control to reduce the impact forces and increase the performance of the robot: the distance between the robot's end-effector and the environment is calculated and the speed is reduced accordingly. Experimental results illustrate the performance of the robot in two cases, with and without vision feedback, and compare these results with simulation.
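The distance-based speed reduction can be sketched as a linear ramp from a free-space maximum down to a gentle contact speed. The constants below (slow-down zone, speed limits) are illustrative assumptions, not values from the paper:

```python
def approach_speed(distance_mm: float,
                   v_max_mm_s: float = 100.0,
                   v_contact_mm_s: float = 5.0,
                   slow_zone_mm: float = 80.0) -> float:
    """Scale the approach speed with the camera-measured distance.

    Outside the slow-down zone the robot moves at full speed; inside it,
    speed falls linearly toward a small contact speed, so the impact
    force at touch-down stays low.
    """
    if distance_mm >= slow_zone_mm:
        return v_max_mm_s
    if distance_mm <= 0.0:
        return v_contact_mm_s
    frac = distance_mm / slow_zone_mm
    return v_contact_mm_s + frac * (v_max_mm_s - v_contact_mm_s)
```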


International Multi-Conference on Systems, Signals and Devices | 2013

Robot visual servoing using the example of the inverted pendulum

Alexey Kolker; Alexander Winkler; Mohamad Bdiwi; Jozef Suchy

In this paper the control of the well-known inverted pendulum is performed using visual servoing algorithms. The pendulum is mounted on the flange of an articulated robot arm and observed by a 2D camera. The inclination angle of the pendulum is computed in real time with the help of image processing algorithms. To swing up the pendulum, a special algorithm generates coordinated robot motions, while the balancing of the pendulum is performed by a state-space controller. Both algorithms are implemented in the standard robot controller with a relatively high cycle time. Furthermore, obtaining the inclination angle of the pendulum from the camera integrated in the closed-loop controller, instead of from an angular transmitter, makes the task more difficult. All proposed algorithms are verified successfully in practical experiments.
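The balancing phase can be sketched as a linear state-feedback law u = -K·x. The state ordering and gain values below are hypothetical, since the abstract does not list them:

```python
def state_feedback(x, K):
    """Balancing control law u = -K.x for the linearized pendulum.

    x: state vector, e.g. [carriage position, carriage velocity,
       pendulum angle, angular rate], where the angle comes from the
       camera-based image processing instead of an angular transmitter.
    K: feedback gain vector, e.g. obtained by pole placement or LQR.
    """
    return -sum(k * xi for k, xi in zip(K, x))

# A small angle deviation produces a corrective command
# (gains here are purely illustrative):
u = state_feedback([0.0, 0.0, 0.05, 0.0], [10.0, 8.0, 40.0, 6.0])
```

In the setup described above this law would run inside the robot controller's cycle, which is why the relatively high cycle time makes the balancing harder.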


International Multi-Conference on Systems, Signals and Devices | 2013

Handing-over model-free objects to human hand with the help of vision/force robot control

Mohamad Bdiwi; Jozef Suchy; Alexander Winkler

Handing over objects from robot to human is an essential step in many tasks requiring physical interaction between the two, e.g. service robots assisting elderly, blind or disabled people, or human-robot teams working together in factories. This work proposes a new robot system which combines visual servoing and force control in order to hand over model-free objects from an undefined place to the human hand. It presents: 1. vision algorithms which enable the robot system to detect and segment objects without any information about their model; 2. visual tracking of the human hand with the help of a Kinect camera; 3. the importance of fusing vision and force control to ensure safety during human-robot interaction; 4. how the robot delivers objects to the human hand with the help of vision and force control. The work is supported by experimental results.
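The handing-over sequence, from free-space visual servoing toward the tracked hand to force-guided delivery, can be sketched as a small phase machine. The phase names and thresholds below are hypothetical, not taken from the paper:

```python
from enum import Enum, auto

class Phase(Enum):
    VISUAL_APPROACH = auto()  # visual servoing toward the tracked hand
    CONTACT = auto()          # force-controlled approach until touch
    TRANSFER = auto()         # human takes hold; gripper prepares to open
    RETREAT = auto()          # object delivered; move back to free space

def next_phase(phase: Phase, distance_mm: float, force_n: float,
               contact_dist_mm: float = 20.0,
               contact_force_n: float = 2.0,
               transfer_force_n: float = 4.0) -> Phase:
    """Advance the handover phase from vision (distance to the hand)
    and force (sensed interaction force) feedback."""
    if phase is Phase.VISUAL_APPROACH and distance_mm <= contact_dist_mm:
        return Phase.CONTACT
    if phase is Phase.CONTACT and force_n >= contact_force_n:
        return Phase.TRANSFER
    if phase is Phase.TRANSFER and force_n >= transfer_force_n:
        # A firm pull by the human signals a secure grasp: release.
        return Phase.RETREAT
    return phase
```

Fusing both sensors in this way is what lets the robot react if the hand moves mid-transfer: a growing distance in CONTACT could, in a fuller version, drop the system back to VISUAL_APPROACH.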


International Conference on Advanced Robotics | 2013

Segmentation of model-free objects carried by human hand: Intended for human-robot interaction applications

Mohamad Bdiwi; Alexey Kolker; Jozef Suchy; Alexander Winkler

Most robot applications requiring interaction with humans involve handing over objects between the two parties. In general, previous systems have assumed that the transfer task is performed mainly by the human. But how should the task be performed if the human is blind, elderly, disabled, or concentrating on something else? In such cases the transfer task should be controlled entirely by the robot, with the human considered the weaker party. This paper presents vision algorithms which enable the robot to recognize and segment any object carried by the human hand, so that the object can be taken over from the hand even if the human is disabled, blind, etc. The proposed vision algorithms detect and segment the carried objects with no prior model of the object and within a very short cycle time, even if the objects are textureless, have the same color as human skin, or the lighting conditions change strongly. The algorithms are supported by successful results on different kinds of objects.


IEEE International Conference on Technologies for Practical Robot Applications | 2012

Robot control system with integrated vision/force feedback for automated sorting system

Mohamad Bdiwi; Jozef Suchy

Sorting systems are required in many places and applications, such as manufacturing industry, libraries, factories, warehouses, pharmacies and supermarkets. In this work an automated sorting system is proposed which shelves and rearranges imprecisely placed objects using integrated vision/force robot control. The system sorts the objects not only by their shapes and forms but also by their alphanumeric codes, recognized with the help of SIFT features. The vision system determines the form, position, orientation and alphanumeric code of each object and selects the most appropriate control mode for the different grasping cases. By combining vision and force feedback, the grasping and transport of the objects is performed. This research focuses on the control system and vision algorithms for library automation and book sorting as a real application of this work.
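SIFT-based code recognition rests on matching feature descriptors between a template of the alphanumeric code and the camera image. A standard building block of such matchers is Lowe's ratio test, sketched below with toy 2-D descriptors (real SIFT descriptors are 128-dimensional); this is a generic illustration, not the paper's implementation:

```python
import math

def ratio_test_matches(desc_query, desc_train, ratio=0.75):
    """Lowe's ratio test: keep a query descriptor only if its nearest
    neighbour in the training set is clearly closer than the
    second-nearest one, which filters out ambiguous matches."""
    matches = []
    for qi, q in enumerate(desc_query):
        dists = sorted(
            (math.dist(q, t), ti) for ti, t in enumerate(desc_train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))  # (query idx, train idx)
    return matches
```

Enough surviving matches against a given code template would identify the book's code; a real pipeline would use a library SIFT detector and a k-d tree instead of the brute-force search shown here.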


International Conference on Social Robotics | 2013

Automated Assistance Robot System for Transferring Model-Free Objects From/To Human Hand Using Vision/Force Control

Mohamad Bdiwi; Alexey Kolker; Jozef Suchý; Alexander Winkler

This paper proposes an assistance robot system which is able to transfer model-free objects from/to the human hand with the help of visual servoing and force control. The proposed robot system is fully automated, i.e. the handing-over task is performed exclusively by the robot and the human is considered the weaker party, e.g. elderly, disabled or blind people. The system is supported by different real-time vision algorithms to detect, recognize and track: 1. any object located on a flat surface or conveyor; 2. any object carried by the human hand; 3. the load-free human hand. Furthermore, the proposed robot system integrates vision and force feedback in order to: 1. perform the handing-over task successfully, from free-space motion up to full physical human-robot interaction; 2. guarantee the safety of the human and react to the motion of the human hand during the handing-over task. The proposed system has shown great efficiency in the experiments.


International Multi-Conference on Systems, Signals and Devices | 2015

Improved peg-in-hole (5-pin plug) task: Intended for charging electric vehicles by robot system automatically

Mohamad Bdiwi; Jozef Suchy; Michael Jockesch; Alexander Winkler

This paper deals with establishing the electrical connection between a plug and a receptacle by a robot manipulator for the purpose of charging electric vehicles. In general, the robot's task in automatic charging consists of two phases. In the first phase, the robot system determines the position of the vehicle's charging receptacle using a vision or infrared system. In the second phase, it begins to interact with the environment by connecting the charger plug to the charging receptacle (socket) of the vehicle. This phase is not always performed successfully, especially when the socket has a complicated shape or consists of multiple cores of different sizes. In this paper we use robot force control to establish the connection. Additionally, an algorithm is proposed which improves the peg-in-hole task by generating a spiral motion. The proposed algorithm has shown promising results on a 5-pin industrial charger plug which is very hard to insert into the socket, even for a human, because it is secure and weatherproof (the plug has to cover the whole socket cavity), has multiple cores (5 pins), and is provided with multiple notches to avoid mismatching between similar pins. In addition, the proposed algorithm assumes that a small vision error can occur when estimating the initial position of the vehicle's receptacle.
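The spiral search that compensates for the small vision error can be sketched as an Archimedean spiral of lateral offsets around the estimated socket centre; the pitch and radius limit below are illustrative assumptions chosen to cover a few millimetres of position error:

```python
import math

def spiral_search(step_rad: float = 0.3,
                  pitch_mm: float = 0.2,
                  max_radius_mm: float = 3.0):
    """Generate (x, y) offsets on an Archimedean spiral around the
    estimated hole centre. The radius grows by `pitch_mm` per full
    turn, sweeping the vision-error region until force feedback
    signals that the pins have dropped into the socket."""
    theta = 0.0
    points = []
    while True:
        r = pitch_mm * theta / (2 * math.pi)
        if r > max_radius_mm:
            break
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += step_rad
    return points
```

In use, the robot would hold a light downward contact force while stepping through these offsets, so insertion is detected as a sudden drop in the vertical position rather than by vision.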


International Multi-Conference on Systems, Signals and Devices | 2014

Transferring model-free objects between human hand and robot hand using vision/force control

Mohamad Bdiwi; Alexey Kolker; Jozef Suchy

This paper presents a new system which combines vision and force control in order to transfer model-free objects between human hand and robot hand. The proposed system improves the robot's ability to interact with humans, especially during handing-over tasks. The contributions of this work are: 1. the robot system detects all kinds of objects carried by the human hand without any a priori information about the object models; 2. the robot system tracks the object visually and grasps it with the help of force control; 3. vision and force sensors are integrated in order to guarantee safety, ensure the fulfillment of the grasping task, and react to the motion of the human hand during the interaction phase. The proposed system is supported by experimental results which illustrate the capability of the proposed algorithms in different cases with different objects.


International Symposium on Robotics | 2014

Integration of Vision/force Robot Control using Automatic Decision System for Performing Different Successive Tasks

Mohamad Bdiwi; Jozef Suchy

Collaboration


Dive into Mohamad Bdiwi's collaborations.

Top Co-Authors

Jozef Suchy (Chemnitz University of Technology)
Alexander Winkler (Chemnitz University of Technology)
Alexey Kolker (Novosibirsk State Technical University)
G. Zschocke (Chemnitz University of Technology)
Michael Jockesch (Chemnitz University of Technology)