Zhen Deng
University of Hamburg
Publications
Featured research published by Zhen Deng.
International Conference on Robotics and Automation | 2014
Haiyang Jin; Ying Hu; Zhen Deng; Peng Zhang; Zhangjun Song; Jianwei Zhang
Screw path drilling is an important process in many orthopedic surgeries. To guarantee the safety and correctness of this process, a model-based drilling state recognition method is proposed in this paper. The thrust force in the drilling process is modeled based on an accurate 3D bone model reconstructed from Micro-CT images. The theoretical model of the thrust force accounts for both the resistance and the elasticity of the bone tissues. The cutting energy and elastic modulus are defined as the material parameters of the model and are identified via a least-squares method. Several key parameters support the state recognition: the peak forces in the first and second cortical layers, the average force in the cancellous layer, and the thickness of each layer. Based on these key parameters, a state recognition strategy for a robotic orthopedic surgery system is proposed to recognize the transition position of each layer. Experiments demonstrate the effectiveness of the modeling approach and the state recognition method.
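The least-squares identification step described in the abstract can be sketched as follows. The two-term linear force model, the regressor values, and the parameter names (`K` for cutting energy, `E` for elastic modulus) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: identify the two material parameters of a
# thrust-force model (cutting energy K and elastic modulus E) by linear
# least squares. The regressor pairs stand in for the cutting and
# elasticity terms of the theoretical model.

def fit_least_squares(phi, f):
    """Solve min ||phi @ theta - f||^2 for theta = (K, E) via normal equations."""
    # Accumulate A = phi^T phi and b = phi^T f for the 2-parameter model.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (p1, p2), fi in zip(phi, f):
        a11 += p1 * p1
        a12 += p1 * p2
        a22 += p2 * p2
        b1 += p1 * fi
        b2 += p2 * fi
    det = a11 * a22 - a12 * a12
    k = (a22 * b1 - a12 * b2) / det
    e = (a11 * b2 - a12 * b1) / det
    return k, e

# Synthetic force samples generated with K = 2.0, E = 0.5 are recovered
# exactly, since the model is linear in both parameters.
phi = [(1.0, 0.0), (0.5, 1.0), (2.0, 0.5), (1.5, 2.0)]
force = [2.0 * p1 + 0.5 * p2 for p1, p2 in phi]
K, E = fit_least_squares(phi, force)
```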
Robotics and Biomimetics | 2016
Zhen Deng; Jinpeng Mi; Zhixian Chen; Lasse Einig; Cheng Zou; Jianwei Zhang
Robot manipulation is a prerequisite capability for service robots. However, autonomous manipulation remains challenging for tasks in which the robot has physical interactions and mechanical contacts with its environment. To date, learning from demonstration (LFD) has been successfully applied to enable robots to acquire new manipulation skills. Research on LFD has mainly focused on representing the movement trajectory of a demonstration and then transferring the reproduced trajectory to the robot. In this paper, a learning framework is introduced that learns compliant behavior from human demonstration and transfers it to the robot. Multiple variables, position and interaction force, are simultaneously encoded in a probabilistic model. The control mode of each axis in the C-frame is estimated to decouple position and force control, and an external dynamic movement primitive (DMP) is presented to reproduce the desired position or force in new situations. Furthermore, the compliance parameters are analyzed and estimated by combining the probabilistic modeling approach with a dynamical systems approach. After learning the human compliant behavior, a hybrid external position/force controller enables the robot to produce human-like compliant behavior. Experiments demonstrate the effectiveness of the presented learning framework.
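A minimal sketch of the DMP rollout idea used to reproduce a position (or force) profile toward a new goal; the gains, the canonical-system constant, and the zero forcing term below are illustrative choices, not the learned model from the paper.

```python
# Minimal 1-D discrete dynamic movement primitive (DMP) sketch: a
# goal-directed second-order system plus a phase-dependent forcing term.
# Gains alpha/beta are set for critical damping; forcing is a stand-in
# for the profile learned from demonstration.

def dmp_rollout(y0, goal, forcing, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
    """Integrate y'' = alpha*(beta*(g - y) - y') + f(x) with Euler steps."""
    y, yd = y0, 0.0
    x = 1.0  # canonical phase variable, decays from 1 toward 0
    for _ in range(int(1.0 / dt)):
        f = forcing(x)
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt / tau
        y += yd * dt / tau
        x += (-2.0 * x) * dt / tau  # canonical system x' = -a_x * x
    return y

# With zero forcing, the DMP converges to the new goal from any start,
# which is what makes it reusable for new situations.
y_end = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda x: 0.0)
```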
Sensor Review | 2018
Dong Han; Hong Nie; Jinbao Chen; Meng Chen; Zhen Deng; Jianwei Zhang
Purpose: This paper aims to improve the diversity and richness of haptic perception by recognizing multi-modal haptic images.
Design/methodology/approach: First, the multi-modal haptic data collected by BioTac sensors from different objects are pre-processed and then combined into haptic images. Second, a multi-class and multi-label deep learning model is designed that simultaneously learns four haptic features (hardness, thermal conductivity, roughness and texture) from the haptic images and recognizes objects based on these features. Haptic images with different dimensions and modalities are provided to test the recognition performance of this model.
Findings: The results imply that multi-modal data fusion performs better than single-modal data for tactile understanding, and that haptic images with larger dimensions are conducive to more accurate haptic measurement.
Practical implications: The proposed method has important potential applications in unknown-environment perception, dexterous grasping manipulation and other intelligent robotics domains.
Originality/value: This paper proposes a new deep learning model for extracting multiple haptic features and recognizing objects from multi-modal haptic images.
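The multi-class, multi-label output stage can be sketched as one classification head per haptic feature, with the object recognized from the predicted feature tuple. The feature vocabularies, scores, and object catalog below are hypothetical stand-ins for the deep model's per-head outputs.

```python
# Hypothetical sketch: one classification head per haptic feature
# (hardness, thermal conductivity, roughness, texture); the object is
# then recognized by matching the predicted labels against a catalog.

FEATURES = ("hardness", "thermal_conductivity", "roughness", "texture")

def predict_labels(head_scores):
    """Pick the top class for each feature head independently (multi-label)."""
    return {f: max(scores, key=scores.get) for f, scores in head_scores.items()}

def recognize(labels, catalog):
    """Return the catalog object whose feature tuple matches most predictions."""
    def overlap(name):
        return sum(labels[f] == catalog[name][f] for f in FEATURES)
    return max(catalog, key=overlap)

# Illustrative catalog and per-head scores (not from the paper).
catalog = {
    "foam_block": {"hardness": "soft", "thermal_conductivity": "low",
                   "roughness": "smooth", "texture": "uniform"},
    "metal_rod": {"hardness": "hard", "thermal_conductivity": "high",
                  "roughness": "smooth", "texture": "uniform"},
}
scores = {
    "hardness": {"soft": 0.1, "hard": 0.9},
    "thermal_conductivity": {"low": 0.2, "high": 0.8},
    "roughness": {"smooth": 0.7, "rough": 0.3},
    "texture": {"uniform": 0.6, "patterned": 0.4},
}
labels = predict_labels(scores)
obj = recognize(labels, catalog)
```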
Robotics and Autonomous Systems | 2018
Zhen Deng; Xiaoxiang Zheng; Liwei Zhang; Jianwei Zhang
The ability to implement semantic reach-to-grasp (RTG) tasks successfully is a crucial skill for robots. Given unknown objects in an unstructured environment, finding a feasible grasp configuration and generating a constraint-satisfying trajectory to reach it are both challenging. In this paper, a learning framework that combines semantic grasp planning with trajectory generation is presented to implement semantic RTG tasks. First, the object of interest is detected using an object detection model trained by deep learning, and a Bayesian-based search algorithm is proposed to find the grasp configuration with the highest probability of success from the segmented image of the object, using a trained quality network. Second, for the robotic reaching movement, a model-based trajectory generation method inspired by the human internal model theory is designed to generate a constraint-satisfying trajectory. Finally, the presented framework is validated in both comparative analysis and real-world experiments. Experimental results demonstrate that the proposed learning framework enables robots to implement semantic RTG tasks in unstructured environments.
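The search-for-the-best-grasp idea can be sketched as sampling candidate grasp configurations, scoring each with a quality function, and refining around the incumbent. The quadratic `quality` function below is a stand-in for the paper's trained quality network, and the sampling schedule is an illustrative choice, not the paper's Bayesian algorithm.

```python
import random

# Hedged sketch of grasp search: sample candidate configurations
# (x, y, angle) around the current best, score each with a quality
# function, and shrink the search region each round.

def quality(g):
    """Stand-in for a trained quality network: peaks at (0.5, 0.5, 0.0)."""
    x, y, theta = g
    return -((x - 0.5) ** 2 + (y - 0.5) ** 2 + theta ** 2)

def search_grasp(quality_fn, rounds=5, samples=50, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best = (rng.random(), rng.random(), rng.uniform(-1, 1))
    for _ in range(rounds):
        # Keep the incumbent so quality never decreases between rounds.
        cands = [best] + [
            (best[0] + rng.gauss(0, sigma),
             best[1] + rng.gauss(0, sigma),
             best[2] + rng.gauss(0, sigma))
            for _ in range(samples)
        ]
        best = max(cands, key=quality_fn)
        sigma *= 0.5  # narrow the search region each round
    return best

g = search_grasp(quality)
```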
Design of Medical Devices Conference | 2017
Jun Liu; Zhen Deng; Yu Sun; Ying Hu
In clinical ophthalmic surgery, the patient's eye is neither fixed nor steady, which makes surgical operations extremely complex and dangerous and demands good hand-eye coordination and operating accuracy from surgeons. Recently, the development of surgical training systems has accelerated the acquisition of clinical experience. Some systems are based on animal specimens, in which the operating habits and perceptions are similar to the real situation; however, live animal experiments are increasingly challenged on animal-ethics grounds and will be unable to meet the growing demand for training. Others are based on Virtual Reality (VR), using software to produce realistic images, haptics and other sensations [1]; however, their operating habits and perceptions differ considerably from the real situation, and the operating object is stationary. It is therefore necessary to develop a device that simulates the physiological movement of the eye, so that the training results are as close as possible to the actual surgical procedure. In this paper, an eye movement simulator based on a 3-DOF parallel mechanism is presented. The simulator is also equipped with flexure joints that simulate the biomechanical properties of the extraocular muscles. The mechanical design and analysis are described.
Robotics and Biomimetics | 2016
Jinpeng Mi; Yu Sun; Yu Wang; Zhen Deng; Liang Li; Jianwei Zhang; Guangming Xie
Robots attract strong interest from human beings, and ordinary people expect to gain an intuitive understanding from the process of interacting with them. In this paper, a teleoperation framework based on gesture recognition was developed, and the recognized human gestures were mapped to corresponding swimming behaviors of an underwater robotic fish. By this means, the robotic fish can be remotely controlled by hand gestures. Most significantly, the teleoperation framework offers onlookers the opportunity to interact directly with the robotic fish, augmenting their intuitive experience of human-robot interaction. Compared with traditional control structures for underwater robotic fish systems, the presented teleoperation framework can be built quickly, the influence of lighting conditions can be eliminated entirely, and onlookers can interact with the robotic fish directly without needing to learn the system architecture and control strategy. Several tests were conducted in a water pool to verify the performance of the presented teleoperation framework. The experimental results showed that the developed framework is suitable for remotely controlling an underwater robotic fish and can be widely applied to other scenarios. The experimental setup was exhibited at IROS 2015 in Hamburg, where it attracted strong interest from onlookers.
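The core of such a framework is the mapping from recognized gestures to swimming commands, which can be sketched as a simple dispatch table. The gesture names and behavior commands below are illustrative, not the paper's actual vocabulary.

```python
# Simple sketch of gesture-to-behavior dispatch: each recognized hand
# gesture is translated into a swimming command for the robotic fish;
# unrecognized gestures are ignored rather than sent.

GESTURE_TO_BEHAVIOR = {
    "open_palm": "swim_forward",
    "fist": "stop",
    "point_left": "turn_left",
    "point_right": "turn_right",
    "thumbs_up": "surface",
}

def dispatch(gesture, send_command):
    """Map a recognized gesture to a command; drop unknown gestures."""
    behavior = GESTURE_TO_BEHAVIOR.get(gesture)
    if behavior is not None:
        send_command(behavior)
    return behavior

sent = []
dispatch("open_palm", sent.append)
dispatch("wave", sent.append)  # not in the vocabulary, so nothing is sent
```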
Robotics and Biomimetics | 2015
Yu Wang; Zhen Deng; Yu Sun; Binsheng Yu; Peng Zhang; Ying Hu; Jianwei Zhang
To address the safety issues of bone drilling, especially bone screw path drilling, this paper proposes a new method to detect the bone drilling state. The proposed method performs pattern recognition based on the results of multi-sensor information fusion. A support vector machine is selected as the pattern classifier, and the adopted signals include the force, current, feed speed, rotation speed and deflection of the robotic arm. Four different drilling states, i.e., the cortical, cortical-transit-cancellous, almost-break-cortical and cancellous states, are detected, helping the surgical robot system achieve safe bone drilling. The proposed method is validated and analyzed through experiments on a pig scapula and is found to have potential clinical application to bone drilling in vertebral, leg, ear bone, mandible and other related orthopedic surgeries.
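The fusion-then-classify pipeline can be sketched as concatenating the five sensor channels into one feature vector and assigning one of the four drilling states. A nearest-centroid classifier stands in here for the paper's SVM, and the normalization constants and centroids are invented for illustration.

```python
# Hedged sketch: fuse force, current, feed speed, rotation speed and
# arm deflection into a normalized feature vector, then classify the
# drilling state. Nearest-centroid is a stand-in for the SVM.

STATES = ("cortical", "cortical_transit_cancellous",
          "almost_break_cortical", "cancellous")

def fuse(force, current, feed, rotation, deflection):
    """Concatenate sensor readings, scaled by illustrative full-range values."""
    return (force / 50.0, current / 5.0, feed / 10.0,
            rotation / 3000.0, deflection / 1.0)

def classify(feature, centroids):
    """Assign the state whose centroid is closest in Euclidean distance."""
    def dist2(state):
        return sum((a - b) ** 2 for a, b in zip(feature, centroids[state]))
    return min(centroids, key=dist2)

# Invented per-state centroids, as if learned from labeled drilling trials.
centroids = {
    "cortical": (0.8, 0.7, 0.3, 0.9, 0.4),
    "cortical_transit_cancellous": (0.5, 0.5, 0.5, 0.9, 0.3),
    "almost_break_cortical": (0.9, 0.8, 0.2, 0.8, 0.5),
    "cancellous": (0.2, 0.3, 0.7, 0.9, 0.1),
}
# Low force and current with high feed speed should read as cancellous bone.
state = classify(fuse(11.0, 1.4, 7.2, 2700.0, 0.12), centroids)
```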
Mechatronics | 2016
Zhen Deng; Haiyang Jin; Ying Hu; Yucheng He; Peng Zhang; Wei Tian; Jianwei Zhang
International Conference on Information and Automation | 2013
Zhen Deng; Hong Zhang; Baoqiang Guo; Haiyang Jin; Peng Zhang; Ying Hu; Jianwei Zhang
Robotics and Biomimetics | 2017
Zhen Deng; Jinpeng Mi; Dong Han; Rui Huang; Xiaofeng Xiong; Jianwei Zhang