Haptic-enabled Mixed Reality System for Mixed-initiative Remote Robot Control
Yuan Tian, Lianjun Li, Andrea Fumagalli, Yonas Tadesse, Balakrishnan Prabhakaran
Yuan Tian † OPPO U.S. Research Palo Alto, California, U.S. [email protected] Lianjun Li, Andrea Fumagalli, Yonas Tadesse, Balakrishnan Prabhakaran University of Texas at Dallas Richardson, Texas, U.S. {lianjun.li1, andreaf, yonas.tadesse, bprabhakaran}@utdallas.edu
ABSTRACT
Robots assist in many areas that are considered unsafe for humans to operate in. For instance, in handling pandemic diseases such as the recent Covid-19 outbreak and other outbreaks like Ebola, robots can reach areas dangerous for humans and do simple tasks such as picking up the correct medicine (among a set of prescribed bottles) and delivering it to patients. In such cases, it may not be advisable to rely on fully autonomous operation of robots. Since many mobile robots can already perform low-level tasks such as grabbing and moving, we consider mixed-initiative control, where the user can guide the robot remotely to finish such tasks. For this mixed-initiative control, the user controlling the robot needs to visualize the 3D scene as seen by the robot and guide it. Mixed reality can virtualize the reality and immerse users in the 3D scene reconstructed from the real-world environment. This technique gives the user more freedom, such as choosing viewpoints at viewing time. In recent years, benefiting from the high-quality data of Light Detection and Ranging (LIDAR) and RGBD cameras, mixed reality is widely used to build networked platforms that improve the performance of robot teleoperation and robot-human collaboration, and enhance feedback for mixed-initiative control. In this paper, we propose a novel haptic-enabled mixed reality system that provides haptic interfaces to interact with the virtualized environment and gives remote guidance to mobile robots towards high-level tasks. The system testbed includes a local site with a mobile robot equipped with an RGBD sensor, and a remote site with a user operating a haptic device. A 3D virtualized real-world static scene is generated using real-time dense mapping. The user can use the haptic device to “touch" the scene, mark the scene, add virtual fixtures, and perform physics simulation. The experimental results show the effectiveness and flexibility of the proposed haptic-enabled mixed reality system.
CCS CONCEPTS • Human-centered computing ~ Human computer interaction (HCI) ~ Interaction paradigms ~ Mixed/augmented reality • Computer systems organization ~ Embedded and cyber-physical systems ~ Robotics
KEYWORDS
Haptic Rendering, KinectFusion, Haptic Guidance, Mixed Reality, Robots Control
1 Introduction
Networked mixed reality has become popular for all kinds of applications such as distributed collaboration (Pena Rios, 2016), training (Gonzalez-Franco, Pizarro, Cermeron, Li, Thorn, Hutabarat, Tiwari and Bermell-Garcia, 2017), and video streaming (Han, Farin and de With, 2010). Such a networked mixed reality system can merge the real and virtual worlds to produce new environments where physical and virtual objects interact with each other in real time, as shown in Figure 1. Many researchers have applied mixed reality to robot teleoperation (Chouiten, Domingues, Didier, Otmane and Mallem, 2012; Le Ligeour, Otmane, Mallem and Richard, 2006; Kelly, Chan, Herman, Huber, Meyers, Rander, Warner, Ziglar and Capstick, 2011), human-robot interaction control (Robert, Wistorrt, Gray and Breazeal, 2010; Wang and Zhu, 2011) and mixed-initiative control (Cacace, Finzi and Lippiello, 2014; Sauer, 2011). Among these robot control paradigms, mixed-initiative control has drawn much attention. Some authors (Carlone, Micalizio, Nuzzolo, Scala and Tedone, 2010; Heger and Singh, 2006; Sellner, Heger, Hiatt, Simmons and Singh, 2006) introduced different levels of autonomy: full autonomy (robot-initiative), mixed-initiative and teleoperation (human-initiative), as shown in Figure 1. In practice, many control systems are designed with the capability of “sliding autonomy", which means the system supports seamless transfer between different levels of control. The key point of mixed-initiative control is to give the robot high-level commands instead of teleoperating it. This is particularly important for improving situational awareness and decreasing the workload of the human operator while guaranteeing safe operation. Some previous methods have introduced haptic feedback and mixed reality for mixed-initiative control (Cacace, Finzi and Lippiello, 2014; Sauer, 2011); most of them applied the haptic device as a multidimensional teleoperation controller or used haptic guidance forces and environment forces as feedback on robot motions. In this paper, we consider how 3D haptic interaction in mixed reality can be expanded to help mixed-initiative robot control. Haptic interaction with a 3D virtual environment is very popular in computer graphics applications for providing immersive experiences: a haptic avatar explores the 3D world freely, interacts with 3D objects (push, touch, pull), and feels the force feedback. Combining haptic interaction with mixed-initiative control provides more flexibility in the control. Several high-quality sensors can be used to provide the mapping and localization information for mobile robots, such as Light Detection and Ranging (LIDAR) and RGBD cameras. The sensor data can generate a virtualized real-world scene with dense geometry (Kelly, Chan, Herman, Huber, Meyers, Rander, Warner, Ziglar and Capstick, 2011). Users can use a mouse cursor, joystick or other input devices to operate the virtualized objects in the mixed reality environment to achieve some goals (Le Ligeour, Otmane, Mallem and Richard, 2006). Introducing haptic interaction into the mixed reality environment adds more flexibility of operation. Haptic devices provide more degrees of freedom of cursor motion, and provide force feedback for more immersive experiences. Using haptic devices, the user can remotely “touch” and mark the virtualized environment from the streaming data (Tian, Li, Guo and Prabhakaran, 2017).
Furthermore, the haptic interface can be integrated with physics simulation of objects, so that virtualized objects can be moved in the scene. These operations provide more flexible control and guidance for the mobile robot, because the robot also uses the dense mapping for localization and navigation (Ganganath and Leung, 2012).
Figure 1: Mixed reality is divided into augmented reality and virtualized reality, and robot control has three levels of autonomy.
In this paper, we assume the robots have full functionality for low-level tasks such as grabbing and moving, given input 3D objects and positions. To build such a haptic-enabled mixed reality system for mixed-initiative remote control, there are several challenges: (i) The first challenge is real-time streaming of virtualized data over a network that is susceptible to data loss and delays. (ii) The second challenge comes from the requirement that haptic rendering with the 3D virtualized environment be robust, efficient and smooth. (iii) Object segmentation from the 3D scene is needed; the segmented object will be the input for robot grabbing. (iv) Furthermore, network latency delays the guidance commands from the server and hurts the haptic interactions, which might lead to a disparity between goal motions and real-world motions of the robot. To address these challenges, we propose a novel haptic-enabled mixed reality system for mixed-initiative remote control. The system provides a haptic interface to interact with the virtualized environment and gives remote guidance to mobile robots towards high-level tasks. The system includes a local site with a mobile robot equipped with an RGBD sensor, and a remote site with a user operating a haptic device. A 3D virtualized real-world static scene is generated using real-time dense mapping. The user can use the haptic device to remotely “touch" the scene, mark the scene, add virtual fixtures, and perform physics simulation. Specifically, the technical contributions of our method are as follows:
• A real-time, efficient and robust mixed reality platform for mixed-initiative control is proposed to enable haptic interactions with streaming data.
• A TSDF-based (Truncated Signed Distance Function) haptic rendering method with streaming surface is proposed to ensure smooth and robust haptic interaction with the virtualized static scene.
• A superpixel-enhanced instance segmentation method is proposed to segment objects fast and accurately.
• Different types of haptic interfaces are introduced in the mixed reality platform, and a robot state prediction method is proposed to compensate for network delays.
2 Related Work
Networked mixed reality platforms are widely used to provide remote immersive environments that improve robot control and enable interactions between physical robots and virtual objects (Chouiten, Domingues, Didier, Otmane and Mallem, 2012; Kelly, Chan, Herman, Huber, Meyers, Rander, Warner, Ziglar and Capstick, 2011; Le Ligeour, Otmane, Mallem and Richard, 2006). Robert, Wistorrt, Gray and Breazeal (2010) presented a mixed reality (MR) platform: an integrated physical and virtual environment in which the user interacts with a teleoperated robot by passing a graphical object. Wang and Zhu (2011) proposed a mixed reality interface for remote robot control using both real and virtual data acquired by a mobile robot equipped with an omnidirectional camera and a laser scanner; the MR interface can enhance the current remote robot teleoperation visual interface. In (Kelly, Chan, Herman, Huber, Meyers, Rander, Warner, Ziglar and Capstick, 2011), dense geometry and appearance data were used to generate a photorealistic synthetic exterior line-of-sight view of the robot including the context of its surrounding terrain. This technique converted remote teleoperation into line-of-sight remote control with the capacity to remove latency. Chouiten, Domingues, Didier, Otmane and Mallem (2012) proposed a distributed mixed reality system that implemented real-time display of digital video streams to web users, by mixing 3D entities with 2D live video from a teleoperated ROV. Some methods introduced haptic feedback into mixed reality control platforms. Sauer (2011) proposed a mixed reality system with GUI interfaces and force feedback, including path guidance forces, collision-preventing forces, and environmental forces, to improve the performance of high-level task operations.
Figure 2: The simplified haptic rendering pipeline (Tian, Li, Guo and Prabhakaran, 2017): after the TSDF update, the proxy update method is used to find the proxy, and ray casting is used to generate the point cloud.
Cacace, Finzi and Lippiello (2014) proposed a mixed-initiative control system that adds a human to the loop to control the velocity of an aerial service vehicle, with force feedback used to enhance the control experience. Different from these methods, our system introduces haptic interaction with the 3D scene into mixed reality, which brings more flexibility. Recently, many robots are equipped with depth sensors for localization and mapping (Cunha, Pedrosa, Cruz, Neves and Lau, 2011; Ganganath and Leung, 2012; Henry, Krainin, Herbst, Ren and Fox, 2012). KinectFusion (Izadi, Kim, Hilliges, Molyneaux, Newcombe, Kohli, Shotton, Hodges, Freeman, Davison and Fitzgibbon, 2011; Oleynikova, Taylor, Fehr, Siegwart and Nieto, 2017) is one of the most popular methods; it fuses the streaming RGBD data from a Kinect camera and stores it as a Truncated Signed Distance Function (TSDF). KinectFusion can provide full-scene dense geometry to enable mixed reality. Tian, Li, Guo and Prabhakaran (2017) first introduced real-time haptic rendering with a streaming deformable surface generated by KinectFusion. Besides the simulation of surface deformation, this method provided a haptic rendering pipeline including collision detection, proxy update and force computation, as shown in Figure 2. The collision detection is performed by ray casting in the TSDF data structure stored on the GPU. At each time step, based on the haptic interaction position (HIP), this method finds the corresponding proxy: the nearest surface point to the HIP. Finally, the haptic force is computed from the positions of the proxy and the HIP. This method is computationally efficient and integrates well with the KinectFusion framework. However, its force rendering only works well with planes, and it becomes unstable at the intersecting boundary of two or more planes; in real-world scenes, the complex geometry undermines the stability of this method. We borrow the idea of haptic interaction with the streaming dense surface, and propose a new haptic rendering pipeline that keeps both stability and efficiency. Object segmentation from images has been an essential topic in the computer vision community (Adams and Bischof, 1994; Deng and Manjunath, 2001). Later, RGBD images were used for semantic mapping, which includes both dense mapping (like KinectFusion) and object detection and semantic classification (Gupta, Arbeláez, Girshick and Malik, 2015; He, Chiu, Keuper and Fritz, 2017; Hermans, Floros and Leibe, 2014; Müller and Behnke, 2014; Silberman, Hoiem, Kohli and Fergus, 2012). Our method aims to interact with the reconstructed object surface; therefore we chose to only segment the object in real time, rather than perform semantic classification. The segmentation algorithms fall into many categories: region growing (Adams and Bischof, 1994; Deng and Manjunath, 2001), clustering (Achanta, Shaji, Smith, Lucchi, Fua and Süsstrunk, 2010; Silberman, Hoiem, Kohli and Fergus, 2012), and deep learning-based methods (Gupta, Girshick, Arbeláez and Malik, 2014; He, Chiu, Keuper and Fritz, 2017; Valentin, Vineet, Cheng, Kim, Shotton, Kohli, Nießner, Criminisi, Izadi and Torr, 2015). In our system, the segmentation serves haptic interaction; therefore we developed an interactive region growing method for object segmentation using both the color image and the depth image, and integrate the segmentation information into the TSDF data structure.
Figure 3: The proposed system architecture.
3 Overview
As shown in Figure 3, our mixed reality system comprises three layers. The Robot Layer is connected with a mobile robot (in our implementation, a KUKA youBot) and a Kinect V2 placed on top of the robot. This layer collects the color and depth images and sends them to the Execution Layer. A low-level task executor executes the control commands sent by the controller in the Execution Layer. The Execution Layer receives the RGBD images and performs simultaneous localization and mapping (SLAM) using KinectFusion (Izadi, Kim, Hilliges, Molyneaux, Newcombe, Kohli, Shotton, Hodges, Freeman, Davison and Fitzgibbon, 2011). KinectFusion then generates a point cloud every time step for visual rendering. It is combined with the object segmentation module to segment and mark objects when necessary. The layer also includes a separate thread for haptic rendering; this module computes the force feedback and sends it to the haptic device. The physics simulation module handles the case where the haptic interaction interface is enabled to interact with a virtual object. The Execution Layer also includes the path planner, which generates a path based on the user's marking or the virtual obstacles that the user adds.
The controller module is used to generate a predicted position and commands for the robot to follow. The User Layer provides all the interfaces and outputs. The user can either use the teleoperation interface to directly operate the robot or use haptic interfaces to interact with the 3D environment. The haptic-guided object segmentation interface is used only for segmentation. The haptic interaction interface enables the user to manipulate the haptic cursor to push a virtual object as a target or an obstacle. Haptic marking can either define a path on the ground or mark an object; the robot will then try to follow the path or approach the object. The virtual obstacle interface enables the user to add virtual obstacles (any form of geometry) into the scene, after which the path planner searches for a new path to avoid them.

The Robot Layer and Execution Layer are connected over the Internet. The dense mapping is done in the Execution Layer instead of the Robot Layer, since the RGBD image data has a smaller size than the 3D point cloud. Another reason is that haptic rendering with TSDF data has very good performance (Tian, Li, Guo and Prabhakaran, 2017). In the real world, the system includes two sites, a local site and a remote site, connected by a high-speed Internet connection (10 Mbps); the TCP/IP protocol is applied for the data transfer. A Kinect V2 is placed on the KUKA youBot and connected to a Linux machine on which the Robot Layer runs. The system transfers the RGBD images to the remote site at 15-20 fps (a minimal sketch of this frame-streaming loop is given at the end of this section). At the remote site, the server machine implements the Execution Layer and User Layer. The following sections describe the proposed system in more detail. Sec. 4 describes a new efficient method for haptic rendering with a streaming surface, Sec. 5 describes our interactive 3D object segmentation method, and Sec. 6 describes all the haptic interfaces, control and latency compensation.
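To make the data flow concrete, the following is a minimal sketch of how the Robot Layer might stream one RGBD frame to the Execution Layer over TCP. The host name, port, compression choices and frame sizes are illustrative assumptions, not the actual implementation.

```python
import socket
import struct
import numpy as np
import cv2  # assumed available for JPEG/PNG compression

HOST, PORT = "remote-server.example.org", 9500  # hypothetical endpoint

def send_frame(sock, color_bgr, depth_mm):
    """Compress one RGBD frame and send it with a length-prefixed header."""
    ok_c, color_buf = cv2.imencode(".jpg", color_bgr)   # lossy color
    ok_d, depth_buf = cv2.imencode(".png", depth_mm)    # lossless 16-bit depth
    if not (ok_c and ok_d):
        raise RuntimeError("frame encoding failed")
    header = struct.pack("!II", len(color_buf), len(depth_buf))
    sock.sendall(header + color_buf.tobytes() + depth_buf.tobytes())

if __name__ == "__main__":
    with socket.create_connection((HOST, PORT)) as sock:
        # Placeholder frames standing in for Kinect V2 output
        # (1920x1080 color, 512x424 depth in millimeters).
        color = np.zeros((1080, 1920, 3), dtype=np.uint8)
        depth = np.zeros((424, 512), dtype=np.uint16)
        send_frame(sock, color, depth)
```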
4 TSDF-based Haptic Rendering with 3D Streaming Strategy
Figure 4: The haptic rendering pipeline: the collision detection method of (Tian, Li, Guo and Prabhakaran, 2017) is used; a new proxy update method is used to find the proxy; friction and texture are added to simulate surface properties.
Proxy update is the most important part of the constraint-based haptic rendering (Ruspini, Kolarov and Khatib, 1997), since the proxy is not only used to compute the force, but also rendered visually to the viewers. If the proxy update is not stable and smooth, the force rendering and visual rendering will not be smooth.
Figure 5: Left is the proxy update method proposed in (Tian, Li, Guo and Prabhakaran, 2017); right is our proxy update method using force shading. For the same movement, the left produces a jump in proxy position, while the right produces much smoother proxy updates.
In (Tian, Li, Guo and Prabhakaran, 2017), the proxy update uses a gradient-based method to find the nearest surface point. The left of Figure 5 shows a scenario where the haptic device interacts with a surface that has a sharp change, such as the intersecting boundary of two flat planes. In this scenario, the haptic interaction point (HIP) is moved by the user from h_{i-1} to h_i, and the proxy position changes from p_{i-1} to p_i. Since the proxy is always the nearest surface point to the HIP, the proxy position changes suddenly: it feels as though the proxy “jumps" to the other side of the surface, and the computed force changes abruptly to an almost reversed direction. In this paper, we propose a proxy update method with force shading. Force shading was first introduced into haptic rendering in (Ruspini, Kolarov and Khatib, 1997), borrowing the idea from Phong shading in computer graphics. Our method handles two scenarios. If the HIP is making first contact with the surface, the proxy is the nearest surface point. Instead of the gradient-based iterative method proposed in (Tian, Li, Guo and Prabhakaran, 2017), we integrate the task of finding the nearest surface point into the ray casting step in KinectFusion. The reason is that our application does not consider deformable surfaces, so the ray casting is performed after the haptic rendering. Per-pixel rays march in the TSDF to generate the point cloud for the whole surface. During this procedure, the distances between the HIP and every point on the surface are computed and saved. Finding the nearest surface point then becomes a parallel problem of finding the minimum in the distance array, which can be solved through parallel reduction (Fatica, LeGresley, Buck, Stone, Phillips, Morton and Micikevicius, 2008). This algorithm is shown in Algorithm 1. After the HIP penetrates the surface, the subsequent proxy position needs to be updated since the HIP will penetrate further into the volume. As shown in Figure 5, the nearest surface point is not appropriate for this scenario; a more correct way is to constrain the successive proxy. The normal at the proxy is computed every time step, and the previous time step's normal is used to define a tangent plane. Tracking this normal is like tracking a tangent plane gliding physically over the surface. As shown on the right of Figure 5, the tangent plane is “dragged" along with the new HIP position h_i while remaining attached to the surface, so the tangent plane can be treated as a constraint plane for the proxy. First, we drop a perpendicular from h_i to this constraint plane to get a goal position, which is the first approximation of the proxy. Then, the nearest-surface-point search in the ray casting step finds the new proxy. This two-step method is similar to the force shading method (Ruspini, Kolarov and Khatib, 1997). The core of the method is to use the tangent plane to constrain the new proxy in a physically plausible way, and then refine it as the nearest surface point. The whole procedure is shown as Algorithm 2 (a sketch of this two-step proxy update follows below). Surface properties such as friction and haptic texture can also be simulated. Similar to (Salisbury and Tarr, 1997), the friction force can be simulated by a simple change using the known friction cone.
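The following is a minimal NumPy sketch of the two-step proxy update described above. The brute-force nearest-point search stands in for the GPU parallel reduction performed inside the ray casting step (Algorithm 1); the stiffness constant k and the function names are illustrative assumptions.

```python
import numpy as np

def nearest_surface_point(surface_points, query):
    """Brute-force stand-in for the parallel reduction over the ray-cast point cloud."""
    d = np.linalg.norm(surface_points - query, axis=1)
    return surface_points[np.argmin(d)]

def proxy_update(surface_points, hip, prev_proxy=None, prev_normal=None, k=0.5):
    """Two-step force-shading proxy update (sketch).

    First contact: the proxy is simply the nearest surface point to the HIP.
    Subsequent steps: project the HIP onto the constraint (tangent) plane defined
    by the previous proxy and its normal, then snap that goal point back to the
    nearest surface point.
    """
    if prev_proxy is None or prev_normal is None:
        proxy = nearest_surface_point(surface_points, hip)
    else:
        n = prev_normal / np.linalg.norm(prev_normal)
        goal = hip - np.dot(hip - prev_proxy, n) * n   # perpendicular drop onto the plane
        proxy = nearest_surface_point(surface_points, goal)
    force = k * (proxy - hip)                          # spring-like constraint force
    return proxy, force
```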
The angle α defines a cone starting from the current HIP h_i, as shown in Figure 6, where α = arctan(μ) and μ is a user-defined friction coefficient. The friction cone has an intersection circle with the tangent plane. If the previous time step proxy p_{i-1} is inside the circle, the new proxy is directly set the same as before: p_i = p_{i-1}. If it is outside, the goal position (approximated proxy) is set to the point on the circle closest to p_{i-1}. The two scenarios correspond to static friction and dynamic friction, respectively. Haptic texture can be easily added using the bump texture method (Blinn, 1978), which perturbs the normal to generate a per-point constraint.
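A minimal sketch of this friction-cone proxy update is given below. The friction coefficient mu and the variable names are illustrative assumptions, and the tangent plane is assumed to pass through the previous proxy.

```python
import numpy as np

def friction_proxy(hip, prev_proxy, normal, mu=0.3):
    """Friction-cone proxy update (sketch).

    The cone of half-angle alpha = arctan(mu) opens from the HIP along the surface
    normal; its intersection with the tangent plane is a circle. A previous proxy
    inside the circle sticks (static friction); outside, it slips to the closest
    point on the circle (dynamic friction).
    """
    n = normal / np.linalg.norm(normal)
    depth = np.dot(prev_proxy - hip, n)           # penetration depth along the normal
    center = hip + depth * n                      # circle center on the tangent plane
    radius = abs(depth) * mu                      # radius = depth * tan(alpha), mu = tan(alpha)
    offset = prev_proxy - center
    offset -= np.dot(offset, n) * n               # keep only the in-plane component
    dist = np.linalg.norm(offset)
    if dist <= radius:                            # static friction: proxy does not move
        return prev_proxy
    return center + offset * (radius / dist)      # dynamic friction: snap to the circle
```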
5 Interactive 3D Object Segmentation
It is necessary to provide an interface to segment 3D objects in the scene. Such an interface enables more flexible haptic interaction, e.g. haptic texture and material properties for different objects, and also provides the object position and orientation for robot grasping tasks. Many researchers combine object detection and semantic classification with dense mapping (Gupta, Arbeláez, Girshick and Malik, 2015; He, Chiu, Keuper and Fritz, 2017; Hermans, Floros and Leibe, 2014; Müller and Behnke, 2014; Silberman, Hoiem, Kohli and Fergus, 2012; Valentin, Vineet, Cheng, Kim, Shotton, Kohli, Nießner, Criminisi, Izadi and Torr, 2015). Our system aims to build haptic-enabled interfaces for mixed-initiative control, so high-level semantic segmentation is beyond our scope. We propose an interactive 3D object segmentation method that is not only efficient, but can also serve as the input to popular high-level semantic algorithms.
Figure 6: Proxy update for friction. Left simulates the stick force: when the previous proxy is inside the friction cone, the proxy is not updated. Right simulates the slip force: when the previous proxy is outside, the proxy is updated to the nearest point on the friction cone's intersection circle.
The straightforward way is to segment the 3D object from the 3D point cloud. It is also possible to use a KD-tree to speed up the neighbor search for points, but this approach takes extra processing time. Another way is to perform the segmentation based on the TSDF data, and save the segmentation information into the TSDF. In the KinectFusion pipeline, the depth image is fused for surface reconstruction at each time step. Based on this observation, we propose a two-phase algorithm. In the first phase, 2D segmentation is performed on both the depth image and the color image. After the 2D segmentation, a label image L_i is generated.
Figure 7: The proposed method for interactive 3D object segmentation.
In the second phase, the segmentation is fused into the TSDF together with the depth image. In this way, the segmentation is seamlessly integrated into KinectFusion, which reduces the time cost. Moreover, the segmentation information is fused by weight, which generates a robust segmentation result. The whole pipeline is shown in Figure 7. In the first phase of our method, the user first uses the haptic avatar to touch and mark an object of interest in the 3D scene. The 3D mark is then transformed to the current color image coordinates. At the next time step, starting from the marked pixel, pixels are clustered through a region growing method until there are no more pixels to add. The grown region is treated as a cluster, and the distance between a neighboring pixel and the cluster center is computed as the combination of two Euclidean distances, as shown in Equation 1:
D(x_i, c) = ||C(x_i) − C(c)|| + λ ||P(x_i) − P(c)||    (1)
where x_i is the neighbor pixel position and c is the center of the region. C(x) is the CIELAB color space value of the pixel in the color image, which is widely considered perceptually uniform for small color distances (Achanta, Shaji, Smith, Lucchi, Fua and Süsstrunk, 2010). P(x) is the 3D point computed from the depth image. The values for the cluster center, C(c) and P(c), are computed as the averages over all pixels in the cluster. λ = m/S is a parameter that controls the compactness of a region, where m is the variable controlling the compactness and S is the grid interval. We first carried out an experiment comparing region growing with RGBD data and with RGB data only, as shown in Figure 8. With the introduction of depth data, the boundary of the object is extracted better than with RGB data alone.
Figure 8: Interactive region growing 2D segmentation method. Left uses only the RGB image; right uses both the RGB image and the depth image, which keeps a better boundary for the object.
The greater the value of m, the more spatial proximity is emphasized and the more compact the cluster. This value can be chosen in the range suggested by (Achanta, Shaji, Smith, Lucchi, Fua and Süsstrunk, 2010; Hermans, Floros and Leibe, 2014). We choose m = 10 for all the results in this paper. The distance threshold can be chosen by the user (a sketch of this region-growing step is given below).
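The following is a minimal sketch of the interactive region-growing step in the first phase, assuming a registered CIELAB color image and per-pixel 3D points back-projected from the depth image. The function name, threshold and default parameters are illustrative assumptions, not the values of our implementation.

```python
import numpy as np
from collections import deque

def region_grow(lab_image, points_3d, seed, m=10.0, grid_interval=20.0, threshold=15.0):
    """Interactive region growing over a registered RGBD frame (sketch).

    lab_image:  HxWx3 CIELAB color image
    points_3d:  HxWx3 back-projected 3D points from the depth image
    seed:       (row, col) pixel marked through the haptic avatar
    The distance to the running cluster mean combines a color term and a 3D term,
    weighted by lam = m / grid_interval (Equation 1); the threshold is user-chosen
    and depends on the color/depth scaling.
    """
    h, w = lab_image.shape[:2]
    lam = m / grid_interval
    labels = np.zeros((h, w), dtype=bool)
    labels[seed] = True
    mean_c, mean_p = lab_image[seed].astype(float), points_3d[seed].astype(float)
    count, frontier = 1, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not labels[nr, nc]:
                d = (np.linalg.norm(lab_image[nr, nc] - mean_c)
                     + lam * np.linalg.norm(points_3d[nr, nc] - mean_p))
                if d < threshold:
                    labels[nr, nc] = True
                    frontier.append((nr, nc))
                    count += 1                     # incremental update of the cluster means
                    mean_c += (lab_image[nr, nc] - mean_c) / count
                    mean_p += (points_3d[nr, nc] - mean_p) / count
    return labels
```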
6 Haptic-enabled Mixed-initiative Control
In most previous works (Cacace, Finzi and Lippiello, 2014; Sauer, 2011), haptic force feedback is used to generate path guidance forces, collision-preventing forces, or environmental forces to improve the performance of high-level task operations. However, our system uses the haptic device in a different way: it serves as a 3D avatar to remotely “touch", explore and interact with the virtualized real-world environment. The haptic interaction provides more flexible operations, similar to using "virtual hands".
The haptic interfaces can intervene in the robot control procedure and add a new path or change destinations. These interfaces do not influence the velocity, only the paths and target points.
Haptic Marking for Path Guidance
Since haptic rendering with the surface is real-time and efficient, our system provides a haptic marking interface. The user can use the HIP to touch the floor and mark a path. The control manager then takes this marked path as input to invoke path planning. The marking is saved as an ordered point set and stored separately on the remote server. The interface is shown in Figure 9.
Haptic Marking for Object
When the user wants the robot to approach an object and grab it, the user can use the interface to first segment the object, and then set the object as the target to approach. The snapshot of this marking is shown in Figure 10.
Virtual Obstacle
Some researchers have used augmented reality to set up virtual objects (Sauer, 2011). In our system, this operation is much easier since the haptic device can locate a 3D position quickly and accurately. The proposed system provides an interface through which the user can place virtual obstacles on the ground. The ground plane is located and saved during the first several time steps. The virtual objects can be treated as obstacles, and the path planner will regenerate a new path to avoid them. A snapshot of this obstacle is shown in Figure 11.
Haptic-enabled Physics Simulation
In computer graphics applications, haptic-enabled physics simulation is very popular. Since virtual objects can be placed in the scene, the user can use the haptic avatar to interact with them. These objects can be treated as new visual cues, marks or obstacles. The interface is shown in Figure 12.
Our control system is required to have the following features:
(i) High-level mixed-initiative control needs to consider the network latency. (ii) The system supports sliding autonomy; both autonomous and human-in-the-loop control modalities need to be supported.
Figure 9: Haptic marking interface: the user can touch the floor and mark a path (yellow) on the point cloud.
To incorporate these features, the control architecture is distributed over both the Robot Layer and the Execution Layer, as shown in Figure 13. The Execution Layer includes the Task Planner, Plan Supervisor, Path Planner, and Primitive Supervisor. The Robot Layer includes the Trajectory Planner, the Controller and the Robot. The user uses haptic interfaces to invoke high-level tasks, including haptically marking a position or an object and haptically interacting with a virtual object. These operations are passed to the Task Planner, a high-level manager that communicates with the Plan Supervisor. It parses the task into a micro-action plan, and then receives replanning requests. The Plan Supervisor can request and receive the path between two points from the Path Planner. In our framework, the path generation is based on a Rapidly-exploring Random Tree (RRT) algorithm (LaValle, 1998); a minimal sketch is given below.
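The following is a minimal 2D sketch of such an RRT planner in the ground plane. The step size, goal-sampling bias and collision callback are illustrative assumptions rather than the parameters of our implementation.

```python
import math
import random

def rrt_plan(start, goal, is_free, bounds, step=0.2, max_iters=5000, goal_tol=0.3):
    """Minimal 2D RRT (sketch).

    start, goal: (x, y) tuples; is_free(p) returns True if p is collision-free
    (e.g. not inside a marked or virtual obstacle); bounds = (xmin, xmax, ymin, ymax).
    """
    nodes = [start]
    parents = {start: None}
    for _ in range(max_iters):
        # Occasionally sample the goal itself to bias the tree toward it.
        sample = goal if random.random() < 0.05 else (
            random.uniform(bounds[0], bounds[1]), random.uniform(bounds[2], bounds[3]))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d < 1e-9:
            continue
        scale = min(1.0, step / d)                 # extend at most one step toward the sample
        new = (nearest[0] + (sample[0] - nearest[0]) * scale,
               nearest[1] + (sample[1] - nearest[1]) * scale)
        if not is_free(new):
            continue
        nodes.append(new)
        parents[new] = nearest
        if math.dist(new, goal) < goal_tol:        # reached the goal region: backtrack
            path, node = [], new
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
    return None                                    # no path found within the budget
```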
Figure 10: After object segmentation, the object is set as the target (blue).
Figure 11: The user uses the haptic cursor to place a virtual obstacle (green sphere).
Figure 12: The user uses the haptic cursor to push a virtual cube.
In our mixed-initiative control, the human-in-the-loop intervention happens in the Primitive Supervisor module, as shown in Figure 13. The low-level Primitive Supervisor receives the path information, such as waypoints and micro actions, from the Plan Supervisor. It receives the planned path and the haptic marking path, and then generates a goal position for the robot motion.
Figure 13: The proposed control architecture; the Execution Layer and Robot Layer are shown in blue and orange respectively.
In the Robot Layer, the Trajectory Planner monitors and controls the trajectory towards the goal position. The haptic marking path provides a marking point, and the planned path provides a path point. The goal position is chosen from these two points by selecting the one with the maximal distance from the current robot position. Network delays may influence the mapping and localization from KinectFusion. To compensate for the delay, we propose a method to generate a predicted goal position. Assume the current velocity of the robot is v_k = [v_x, v_y, v_z] at the k-th time step. The straightforward way to predict the next velocity is to compute the velocity and acceleration from the last several frames; most Kalman filters are based on an empirical model of this linear form. In this paper, we apply a general linear model to predict the next velocity v_{k+1}:

v_x(t) = a_{x,0} + a_{x,1} t + a_{x,2} t^2 + ...
v_y(t) = a_{y,0} + a_{y,1} t + a_{y,2} t^2 + ...
v_z(t) = a_{z,0} + a_{z,1} t + a_{z,2} t^2 + ...    (2)

For a given time series of samples along a path, the matrix V is defined as Equation 3:

V = [ 1  t_1  t_1^2  ...
      1  t_2  t_2^2  ...
      ...
      1  t_n  t_n^2  ... ]    (3)

Let v_x, v_y, v_z be the vectors of observed velocity components at these time steps. The problem is then to solve for the three parameter vectors a_x, a_y, a_z. The general least-squares solutions of these linear problems are:

a_x = (V^T V)^{-1} V^T v_x,  a_y = (V^T V)^{-1} V^T v_y,  a_z = (V^T V)^{-1} V^T v_z    (4)

At every time step, this linear prediction model generates new parameters and then predicts the next goal position p_{k+1} = p_k + v_{k+1} Δt, where Δt is the round-trip delay. This goal position is sent to the Trajectory Planner for the low-level autonomous control (a least-squares sketch of this prediction is given below).
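The following sketch fits the linear model of Equations 2-4 to recent odometry samples by least squares and extrapolates the goal position over the round-trip delay. The polynomial degree and function names are illustrative assumptions.

```python
import numpy as np

def predict_goal(positions, timestamps, round_trip_delay, degree=2):
    """Delay-compensated goal prediction (sketch).

    positions:  (N, 3) recent robot positions from odometry
    timestamps: (N,) sample times in seconds
    Fits the general linear (polynomial) model of Equations 2-4 to each velocity
    component by least squares, then extrapolates the goal position over the
    round-trip delay.
    """
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)
    v = np.diff(p, axis=0) / np.diff(t)[:, None]      # finite-difference velocities
    tv = t[1:]
    V = np.vander(tv, degree + 1, increasing=True)    # rows [1, t, t^2, ...] (Eq. 3)
    # a = (V^T V)^-1 V^T v, solved per axis (Eq. 4); lstsq is the stable equivalent.
    a, *_ = np.linalg.lstsq(V, v, rcond=None)
    t_next = t[-1] + round_trip_delay
    v_next = np.vander(np.array([t_next]), degree + 1, increasing=True) @ a
    return p[-1] + v_next[0] * round_trip_delay       # predicted goal position
```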
7 Haptic-enabled Mixed-initiative Control Experiment
Here, we demonstrate the experimental setup and results. The remote site is an Intel i7 3.50 GHz machine with 32 GB RAM and a GeForce GTX 670 GPU. For KinectFusion, the dimension of the TSDF volume is usually set to 512 x 512 x 512, the voxel size is about 10 mm, and the truncated depth distance is set to 50 mm. A 3-level iterative closest point (ICP) scheme is applied with 4/5/15 iterations per level. The local site includes a KUKA youBot mobile robot with a Kinect V2 placed on top. The robot can move in 4 directions and turn by 45 degrees at a time. The Kinect sensor is connected to a Linux machine with ROS for robot control. With a network operating at 10 Mbps, the TCP/IP protocol is employed for the data transfer. The system transfers the RGBD images to the remote site at a rate of 15-20 fps. The testbed setup is shown in Figure 14. We used a KUKA youBot mobile robot and a Force Dimension Omega.3 haptic device; the Omega.3 has 3 degrees of freedom, 14.5 N/mm stiffness and up to 12.0 N force. The main reconstruction parameters are gathered in the configuration sketch below.
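For reference, the reconstruction and streaming parameters quoted above can be collected into a small configuration object. This is only a sketch with hypothetical field names, not the configuration interface of our implementation.

```python
from dataclasses import dataclass

@dataclass
class ReconstructionConfig:
    """Reconstruction and tracking parameters quoted in the experiments (sketch)."""
    tsdf_resolution: tuple = (512, 512, 512)   # voxels per axis
    voxel_size_mm: float = 10.0                # approximate voxel size
    truncation_distance_mm: float = 50.0       # truncated depth distance
    icp_iterations: tuple = (4, 5, 15)         # coarse-to-fine ICP iterations per level
    stream_fps_range: tuple = (15, 20)         # RGBD frames per second over the network

if __name__ == "__main__":
    print(ReconstructionConfig())
```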
Figure 14: The system setup. Left is the local site: a KUKA youBot with a Kinect V2 placed on top. Right is the remote site, where the user interacts with the 3D reconstructed scene and guides the robot using the haptic device.
The TSDF resolution determines the 3D geometry density, and it further influences the time efficiency of the proposed haptic rendering method. We carried out haptic rendering experiments using different TSDF resolutions; the average processing times are recorded in Table 1. The proxy update belongs to “collision detection and handling". As shown in the results, all modules of our haptic rendering method are efficient. The proposed haptic rendering with KinectFusion can support real-time mixed reality applications, even at a high TSDF resolution.
Table 1: Average processing time of the different components (segmentation and KinectFusion; collision detection and handling; total time) for different TSDF resolutions.
Methods          Probabilistic Rand Index   Boundary Displacement Error   Global Consistency Error
Normalized Cut   0.73931                    17.1560                       0.2232
CTM              0.7796                     19.1981                       0.3647
Meanshift        0.7769                     13.1616                       0.5811
Our method       0.73876                    14.216                        0.5432
Table 2: Comparison results of segmentation on BSDS500 benchmark.
The accuracy of the haptic rendering is evaluated by moving the HIP across a sharp boundary. In this experiment, a box is placed in the scene; the front of the box and the floor form a sharp concave boundary. Figure 15 shows the experimental scenario and results.
As shown in Figure 15(c), the haptic rendering in (Tian, Li, Guo and Prabhakaran, 2017) does not produce a smooth proxy update when moving over the corner. Since the force feedback is computed based on the distance between the proxy and the HIP, the abrupt change of the proxy leads to an abrupt force change or even force vibration. Our proxy update method with force shading solves this problem and provides smooth force feedback.
In our system, users are free to interact with the whole surface or the surface of one specific object. To evaluate the interactive object segmentation method, we compare 2D image segmentation on the BSDS500 benchmark (Arbelaez, Maire, Fowlkes and Malik, 2010). This database consists of natural images with five different human ground-truth segmentations. In the comparison, we use the mouse to mark the starting pixel instead of the haptic device. We compare the results with popular segmentation methods: normalized cut (Shi and Malik, 2000), the mean shift algorithm (Comaniciu and Meer, 2002) and Compression-based Texture Merging (CTM) (Yang, Wright, Ma and Sastry, 2008). The comparison results, shown in Table 2, indicate that our method has performance comparable to these popular segmentation methods. We use 3 performance measures: Probabilistic Rand Index (the fraction of pairs of pixels whose labels are consistent between the computed segmentation and the ground truth), Boundary Displacement Error (the average displacement error of boundary pixels between two segmented images) and Global Consistency Error (the extent to which one segmentation can be viewed as a refinement of the other) (Deng and Manjunath, 2001).
Figure 15: The accuracy of haptic rendering. (a) The side view of a boundary between the front face of the box and the ground. (b) Sketch of the haptic interaction with a sharp corner of two planes. (c) A two-dimensional cross section of the proxy positions (asterisks) when moving over the surface, using the haptic rendering method in (Tian, Li, Guo and Prabhakaran, 2017). (d) Cross section of the proxy positions using our method.
We have carried out an experiment to test the system control and execution performance. The robot control loop runs with small delay (less than 60 ms). The first control scenario is to approach the target after it is marked, as shown in Figure 16. The second scenario is to add a virtual obstacle to the scene, as shown in Figure 17. We involved 2 students, and each subject was asked to repeat the experiment 5 times in the testbed, so each scenario was executed 10 times. We collected the mean, min, max and standard deviation (STD) of the time of planning (Tp), time of replanning (Tr), total time of execution (Te), and length of executed path (Lp). The results show that the control/execution performance is compatible with the operative scenario requirements (Wen, Chng, Chui, Lim, Ong and Chang, 2012). Since we focus on haptic-enabled mixed-initiative control, control optimization is beyond our scope.
Table 3: Control and execution results (mean values: Tp = 0.082 s, Tr = 0.614 s, Te = 70.8 s, Lp = 15.4 m).
Figure 16: The haptic marking control scenario. After the user sets the box as the target, the robot approaches the box.
To verify the delay compensation method, we applied two delays (100 ms and 200 ms) over the Internet. We recorded the real-time robot positions and the predicted positions generated from the proposed linear model.
Figure 17: (a) The user first defines a target, then adds a virtual obstacle (green ball) into the scene. (b)(c)(d) The robot finds a path to avoid the obstacle.
Figure 18: The comparison between the real-world robot position curve and the predicted position curve with 200 ms network latency.
The error is defined as the Euclidean distance along the X axis, since the robot moves along the X axis in both experiments. A large error means that the newly planned position sent from the server to the robot is far from the real position. Based on the control tests, if this error exceeds 20 cm, the trajectory will be affected: as shown in Figure 13, the trajectory planner might generate a costly trajectory, and the robot may move back and forth because of the delay. The robot positions are computed every frame from the odometry. Figure 18 shows the two position curves (along the X axis) over the Internet with 200 ms latency. The average position error is 2.34 cm for 100 ms and 3.47 cm for 200 ms. The maximal error for both latencies is lower than 7.0 cm, which shows good control performance. To further test the latency, we performed one more experiment adding a time-varying network delay from 100 ms to 200 ms, as shown in Figure 19. The two curves remain very close, with a difference lower than 7.0 cm. These results show that our latency compensation can handle up to 200 ms of network latency.
Figure 19: The comparison between the real-world robot position curve and the predicted position curve with time-varying network latency from 100 ms to 200 ms.
8 Mixed-Initiative Control for Dexterous Robotic Hands in Various Applications: A Perspective
The KUKA youBot used in the previous sections is well suited to applications requiring indoor maneuvering and precise control. However, its arm has limited degrees of freedom and cannot replicate human-like actions from a remote location. Here, we describe the essential elements needed to improve the capabilities of our current work and use them to solve engineering and healthcare problems. These are: 1) designing a robotic hand with five fingers, and 2) using the hand for special operations such as healthcare and military applications. The general healthcare and military applications have their own requirements and hence require unique solutions.
Five-fingered robotic hands are needed in many applications to imitate human action in a remote location. We have designed such hands using 3D printing and parametric CAD software (Lanigan and Tadesse, 2017; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017). The parametric hand design can easily be customized and scaled as needed; we can print a large adult-size hand or a small child-size one. It is a low-cost but highly dexterous robotic hand that can carry out complex tasks, and it can be actuated using novel actuators such as TCP muscles.
Twisted and Coiled Polymer (TCP) muscles or actuators are soft polymer actuators that enable the realization of low-cost and high-performance humanoid robots (Wu, de Andrade, Rome, Haines, Lima, Baughman and Tadesse, 2015; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017). TCP muscles contract when heated and return to their original shape, like natural muscles. They do not produce any noise and, as a result, can be used for silent operations. This is a key advantage of designing robotic arms with soft actuators rather than motors and pneumatics. We have been studying TCP materials to develop a novel musculoskeletal system. TCP muscles can provide large strain (20-49%), large stress (1-35 MPa) and high mechanical work (5.3 kW/kg) (Haines, Lima, Li, Spinks, Foroughi, Madden, Kim, Fang, De Andrade, Göktepe and Göktepe, 2014; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017). More importantly, the material cost of making the muscle is low compared to shape memory alloy actuators (Haines, Lima, Li, Spinks, Foroughi, Madden, Kim, Fang, De Andrade, Göktepe and Göktepe, 2014). Therefore, it is worth studying this material further to develop high-performance and low-cost robots, including their closed-loop control systems (Jafarzadeh, Gans and Tadesse, 2018). In the literature, several robots and robotic hands have been developed in universities and research institutes (Diftler, Mehling, Abdallah, Radford, Bridgwater, Sanders, Askew, Linn, Yamokoski, Permenter and Hargrave, 2011; Hanson, Bergs, Tadesse, White and Priya, 2006; Kajita, Kaneko, Kaneiro, Harada, Morisawa, Nakaoka, Miura, Fujiwara, Neo, Hara and Yokoi, 2011; Oh, Hanson, Kim, Han, Kim and Park, 2006; Sakagami, Watanabe, Aoyama, Matsunaga, Higaki and Fujimura, 2002). The actuators typically used in these robots are expensive and not biomimetic. Some of the advanced humanoids include Boston Dynamics's Atlas, ASIMO, Robonaut, HUBO and HRP-4C. These robots are extremely expensive and most of them are not available commercially; for example, ASIMO costs about $1 million and even renting one costs $100,000 (Sofge, 2018), whereas our designs aim to be biomimetic and affordable. We have made several efforts in the last 10 years in creating humanoids using various smart actuation technologies: piezoelectric, conducting polymer and shape memory alloy actuators, and the latest humanoid hand actuated by TCP muscles and fabricated using additive manufacturing technology (Figure 20). The key features of the design are that 1) it can grasp various daily objects, 2) the actuators do not require large space, as they are kept in the forearm, 3) the structure is lightweight as it is polymer based, and 4) no electromagnetic noise is generated. Key scientific challenges and the performance of smart materials were described in our previous works (Jafarzadeh, Brooks, Yu, Prabhakaran and Tadesse, 2020; Tadesse, 2013; Tadesse, Grange and Priya, 2009; Tadesse, Hong and Priya, 2011). The TCP muscles are used for actuation of the fingers of the robots (Wu, de Andrade, Rome, Haines, Lima, Baughman and Tadesse, 2015; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017), and hence we will investigate this hand with the proposed mixed-initiative method in the future.
Figure 20: Grasping objects using our robotic hand, UTD Hand that is made out of TCP muscles (Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017).
Our main objective is to show the benefit of the maturing technologies presented in this paper, identifying and solving key challenges in healthcare. The goal is to develop a low-cost but highly dexterous robotic hand that can carry out complex tasks, especially those that might be dangerous for humans such as handling contagious diseases. This aspect makes the proposed command and control of a robotic hand transformative because of the tradeoff between cost, overall system size, weight, operation noise and performance in handling objects. We would like to show the use of the proposed solution in three different cases, each with its own requirements.
Case 1: Demonstration of Command and Control in Healthcare - Prolonged Field Care (PFC)
In some applications, robots are needed to monitor and help individuals in remote geographic locations for extended periods of time. This could be physical assistance combined with visual monitoring. Our demonstration focuses on prolonged field care (PFC) and is well suited to this area, particularly for special cases such as epidemic diseases that are unsafe for medics, because our proposed robot can monitor the subject 24/7 as well as act as needed, for example helping patients by providing water, food and other items (Figure 21 a-c).
Figure 21: Robots for use in healthcare innovation: (a) temperature measurement using a Braun ThermoScan®, (b) pressure measurement of a subject, (c) wound cleaning.
We will have two main applications and research efforts: (i) body temperature measurement, and (ii) wound management in field care units that the military needs. Our solution is particularly useful for special missions that are difficult or unsafe for humans.
Figure 22: Robotic hand with TCP muscles, picking and placing an object: (a) the hand powered with TCP muscles and an inset of the muscle, (b) our 3D printed humanoid robot actuated by servomotors and TCP-powered hands, (c) holding and moving an object, as seen by the two FireWire cameras in the head and the 3D camera in the chest. Figure 23: 3D printed humanoids: (a) HBS-1 biped, (b) Buddy wheeled robot, (c) arm workspace.
Case 2: Body Temperature Measurement from a Remote Location
Another application of the proposed method is taking a body temperature measurement from a human subject via teleoperation. We would like to experiment with the hand developed in the previous section to take the temperature of a patient simulator using a hand-held Braun ThermoScan® (Figure 22a). Patient simulators are a great way to expedite such research, since human subject tests typically require a lengthy IRB approval process at this initial stage. First, we will perform simple experiments and develop algorithms to let the robot grasp and manipulate the ThermoScan. Inverse kinematics (Parga, Li and Yu, 2013; Sciavicco and Siciliano, 2012; Spong and Vidyasagar, 2008) and camera-hand coordination will be employed to direct the thermometer to the patient simulator's forehead, and the robot will read the instrument display. We will use an Optical Character Recognition (OCR) program (Mori and Malik, 2003) to detect the actual temperature reading. The temperature measurement and other procedures require multiple robot actions following particular algorithms and experiments. We will also determine the reachable positions (workspace) of the robot hand using the Denavit-Hartenberg (DH) method, a convention used to represent the relationships of linkage parameters in robotic manipulators. We have done some preliminary tests on actuating the robot locally, without teleoperation, to see the practical issues and identify actuation variables. Some of the prior works are presented in (Wu, Karami, Hamidi and Tadesse, 2018). Our humanoid robot with the specialized hands (Figure 22c) will be mounted on a mobile base, and it can accomplish such tasks through a series of experiments. We will customize our robots to perform these tasks. A representative video of our robot can be found at https://youtu.be/WKc32gcdgj0.
Case 3: Low-cost, High-performance Biped Robots and Mobile 3D Printed Robots for Hazardous Substances
Low-cost humanoids are needed to manipulate hazardous substances and explosives; they can be rebuilt if destroyed during operations or field trials. Our team has designed and developed a 3D printed robot, HBS-1 (Figure 23a), that integrates mechanical, electrical and mechatronic systems. The materials cost of this robot is $10,000-$15,000 (Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017; Wu, Larkin, Potnuru and Tadesse, 2016). The overall dimensions of HBS-1 are 120 cm x 33 cm x 14 cm, which closely match those of a 7-year-old boy (Snyder, Spencer, Owings and Schneider, 1975). HBS-1 consists of two 4-DOF legs, a 2-DOF waist, two 4-DOF arms, two 15-DOF hands and a 3-DOF head (51 DOF in total). The robot is powered by a DC power supply connected through wires while tethered. HBS-1 utilizes 14 Dynamixel servos. Shape memory alloy and TCP actuators have been used for the design of the fingers since they can be installed in the limited volume of the forearm; otherwise, it would have been difficult to actuate all five fingers. HBS-1 is equipped with FireWire stereo cameras housed in the head. The torso is equipped with an orientation sensor (UM7-LT), which combines gyroscopes, accelerometers, magnetic sensors, and an onboard 32-bit ARM Cortex processor to compute sensor orientation. Two servo controllers in the torso actuate the 21 servomotors.
The other robot, called Buddy (Figure 23b), consists of a wheeled mobile base, a cloud camera, an ultrasonic position sensor, a battery, a wireless communication module, a flexible touch-sensor skin, two 4-DOF arms and a 2-DOF neck (15 DOF in total). The overall dimensions of the robot are a height of 580 mm, an arm span of 925 mm, a shoulder width of 230 mm, and a chest thickness of 172.5 mm (Burns and Tadesse, 2014; Potnuru, Jafarzadeh and Tadesse, 2016). These components took 56% of the total material cost. The total material cost of this robot is $3000, including the mechanical and off-the-shelf mechatronic components. This robot can be modified and used for experimenting with teleoperation and handling dangerous substances. Overall, 3D printability, dexterity and mobility are some of the key ingredients of the low-cost, high-performance teleoperated robotic systems in the three case studies presented. The other important aspect is the 3D printed robotic hand that is actuated by twisted and coiled polymer muscles. We reported such an innovative robotic hand in 2017 (Karami, Wu and Tadesse, 2020; Saharan, Wu and Tadesse, 2020; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017; Wu, Karami, Hamidi and Tadesse, 2018). The hand, called the UTD Hand, can grasp various objects and is made with inexpensive Twisted and Coiled Polymer (TCP) muscles. The muscles were reported in Science (Haines, Lima, Li, Spinks, Foroughi, Madden, Kim, Fang, De Andrade, Göktepe and Göktepe, 2014; Wu, de Andrade, Saharan, Rome, Baughman and Tadesse, 2017). The robotic hand can handle and manipulate various objects of different sizes and shapes, which supports the feasibility of this project, as shown by Wu, de Andrade, Saharan, Rome, Baughman and Tadesse (2017). This robotic hand was featured as a unique design in a review of robotic hands of the last century (Piazza, Grioli, Catalano and Bicchi, 2019). We will use this robotic hand combined with the mixed-initiative teleoperation to achieve better performance.
9 Conclusion and Discussion
In this paper, we have proposed a haptic-enabled mixed reality system for mixed-initiative remote control. The system provides an efficient haptic rendering method with smooth and stable force feedback. The haptic rendering also supports the simulation of surface properties such as friction and texture. This haptic rendering enhances the immersive environment and supports more haptic user interfaces for flexible control commands, such as pushing a virtual obstacle to change the robot's motion. An interactive 3D object segmentation method is also provided to segment objects fast and accurately; the segmentation result can serve as the input for high-level semantic classification of objects. The system also provides different types of haptic interactions in the mixed reality platform, and a prediction method to compensate for network delays. The delay compensation supports complex interactions such as placing obstacles during robot movement. The experimental results show the efficiency and functionality of our system. This system expands the user interfaces using haptic devices. In the future, a user study will be performed to compare different user interface configurations; it would be very meaningful to verify which haptic interface mode contributes more to task completion. The prediction method can also be improved using more sophisticated methods such as a non-linear model. Haptic-guided segmentation can be augmented by incorporating a learning method for semantic labeling. Our system provides haptic-guided mixed-initiative control, and it can be naturally extended to more levels of autonomy using our mixed reality platform. We will also apply the approach presented here to humanoid robots, to solve problems in healthcare and other scenarios that are difficult or unsafe for humans.
Acknowledgement
This material is based upon work supported by the US Army Research Office (ARO) Grant W911NF-17-1-0299. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the ARO.
REFERENCES
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P. and Süsstrunk, S., 2010. SLIC superpixels (No. REP_WORK).
Adams, R. and Bischof, L., 1994. Seeded region growing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(6), pp.641-647.
Arbelaez, P., Maire, M., Fowlkes, C. and Malik, J., 2010. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5), pp.898-916.
Blinn, J.F., 1978. Simulation of wrinkled surfaces. ACM SIGGRAPH Computer Graphics, 12(3), pp.286-292.
Burns, A. and Tadesse, Y., 2014, March. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy. In Electroactive Polymer Actuators and Devices (EAPAD) 2014 (Vol. 9056, p. 90562H). International Society for Optics and Photonics.
Cacace, J., Finzi, A. and Lippiello, V., 2014, September. A mixed-initiative control system for an aerial service vehicle supported by force feedback. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1230-1235). IEEE.
Carlone, L., Micalizio, R., Nuzzolo, G., Scala, E. and Tedone, D., 2010. STEPS: PCS results on 1st Working Prototype. In International Conference on Environmental Systems.
Chouiten, M., Domingues, C., Didier, J.Y., Otmane, S. and Mallem, M., 2012, March. Distributed mixed reality for remote underwater telerobotics exploration. In Proceedings of the 2012 Virtual Reality International Conference (pp. 1-6).
Comaniciu, D. and Meer, P., 2002. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5), pp.603-619.
Cunha, J., Pedrosa, E., Cruz, C., Neves, A.J. and Lau, N., 2011. Using a depth camera for indoor robot localization and navigation. DETI/IEETA-University of Aveiro, Portugal.
Deng, Y. and Manjunath, B.S., 2001. Unsupervised segmentation of color-texture regions in images and video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(8), pp.800-810.
Diftler, M.A., Mehling, J.S., Abdallah, M.E., Radford, N.A., Bridgwater, L.B., Sanders, A.M., Askew, R.S., Linn, D.M., Yamokoski, J.D., Permenter, F.A. and Hargrave, B.K., 2011, May. Robonaut 2 - the first humanoid robot in space. In 2011 IEEE International Conference on Robotics and Automation (pp. 2178-2183). IEEE.
Fatica, M., LeGresley, P., Buck, I., Stone, J., Phillips, J., Morton, S. and Micikevicius, P., 2008. High performance computing with CUDA. SC08.
Ganganath, N. and Leung, H., 2012, January. Mobile robot localization using odometry and Kinect sensor. In 2012 IEEE International Conference on Emerging Signal Processing Applications (pp. 91-94). IEEE.
Gonzalez-Franco, M., Pizarro, R., Cermeron, J., Li, K., Thorn, J., Hutabarat, W., Tiwari, A. and Bermell-Garcia, P., 2017. Immersive mixed reality for manufacturing training. Frontiers in Robotics and AI, 4, p.3.
Gupta, S., Arbeláez, P., Girshick, R. and Malik, J., 2015. Indoor scene understanding with RGB-D images: Bottom-up segmentation, object detection and semantic segmentation. International Journal of Computer Vision, 112(2), pp.133-149.
Gupta, S., Girshick, R., Arbeláez, P. and Malik, J., 2014, September. Learning rich features from RGB-D images for object detection and segmentation. In European Conference on Computer Vision (pp. 345-360). Springer, Cham.
Haines, C.S., Lima, M.D., Li, N., Spinks, G.M., Foroughi, J., Madden, J.D., Kim, S.H., Fang, S., De Andrade, M.J., Göktepe, F. and Göktepe, Ö., 2014. Artificial muscles from fishing line and sewing thread. Science, 343(6173), pp.868-872.
Han, J., Farin, D. and de With, P., 2010. A mixed-reality system for broadcasting sports video to mobile devices. IEEE MultiMedia, 18(2), pp.72-84.