Human Interface for Teleoperated Object Manipulation with a Soft Growing Robot
Fabio Stroppa, Ming Luo, Kyle Yoshida, Margaret M. Coad, Laura H. Blumenschein, and Allison M. Okamura
Abstract — Soft growing robots are proposed for use in applications such as complex manipulation tasks or navigation in disaster scenarios. Safe interaction and ease of production promote the usage of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. In this paper, we propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study was conducted to assess the intuitiveness of the interface and the performance of our soft robot, involving a pick-and-place manipulation task. The results show that users completed the task with a high success rate, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for control of a soft growing robot manipulator.

I. INTRODUCTION

Soft and continuum robots have useful features that are advantageous in applications requiring delicate interaction, e.g., object manipulation [1]–[6], or adaptation to unknown environments, e.g., navigation and exploration [7], [8]. A subset of soft and continuum robots have an additional feature that makes operation in confined environments easier: the ability to extend or grow as an additional degree of freedom [7]–[10]. By extending and shortening in length, these systems can move their tip through cluttered environments without being restricted by body parts that may collide with obstacles, such as the “elbows” on a typical rigid serial-chain robot arm. For this reason, growth can be especially beneficial in manipulation tasks.

While the growth degree of freedom has benefits in cluttered environments, designing control to leverage those benefits is challenging. In general, there do not exist well-defined kinematic models for soft robots, so control of soft robots often happens in joint space instead of task space [11]. Even when approximate kinematic models exist, the output shape or behavior of the robot can be difficult to measure, and therefore hard to close a loop around. Thus, one strategy to control soft robotic systems is to use the human to close the loop on position and account for errors caused by inaccurate models and lack of sensing and closed-loop control. However, dissimilarity between the degrees of freedom of the robot and the human makes it difficult to find appropriate control interfaces.
Toyota Research Institute (TRI) provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. The authors are with the Mechanical Engineering Department, Stanford University, Stanford, CA 94305, USA.
Fig. 1. An operator controls the soft growing robot with the gesture-based Body Interface. Here the operator is shown physically near the robot, which is safe due to the robot’s low inertia and soft exterior, while in our experimental study the operators controlled the robot from a slightly farther distance.
Studies have used devices such as 3D mice [12], joysticks and gamepads for gaming [12]–[14], haptic interfaces [12], [15], rigid-link manipulators [16], and even flexible joysticks specially designed for soft robots [17]. In particular, the interface of El-Hussieny et al. [17] was specifically designed for soft growing robots and proved to be intuitive and easy to use. However, all these interfaces rely on physical devices, which may not be the most intuitive way for humans to control (and learn to control) the robot. Here we remove the physical interface and use the human body to control the robot.

In this work, we propose an interface that allows human operators to control the robot simply by using their arm. The gestures of the operator, tracked by a motion capture system, are mapped to the kinematics of the robot for easy and intuitive teleoperation. This interface, called the “Body Interface” (Fig. 1), was used in an experimental study to assess its effectiveness in the control of a soft growing robot in a teleoperated manipulation task. Twelve participants were able to successfully teleoperate the robot to reach, grasp, and move objects in the workspace.

The rest of the paper is organized as follows: Sec. II discusses the interface, Sec. III describes the design and control of the soft growing robot, Sec. IV discusses the experiment setup and results, and, finally, Sec. V summarizes the work and presents possible future research.

II. BODY INTERFACE

Fig. 2. (a) Motion capture marker layout on the operator’s upper body. (b) Commands used to control the robot based on direction of movement. (c) The original reference system is transformed to be aligned with the plane where the markers CH1, CH2, and ABD lie, with origin at CH1.

The interface for teleoperating the robot, the Body Interface, is based on a Motion Capture Tracking system and a Gesture Interpreter Tool. The interface tracks the operator’s gestures, maps them to the kinematics of the robot, and sends the commands.
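To make this pipeline concrete, the following minimal Python sketch outlines one possible sense-map-send loop. The tracker, interpreter, and serial_port objects and their methods are hypothetical placeholders, not the authors' implementation; only the two rates (270 Hz tracking, 66 Hz serial) come from the paper.

```python
import time

TRACKING_HZ = 270  # motion capture update rate (Sec. II-A)
COMMAND_HZ = 66    # serial command rate to the robot microcontroller (Sec. II-B)

def teleop_loop(tracker, interpreter, serial_port):
    """Hypothetical sense-map-send loop of the Body Interface."""
    last_send = 0.0
    while True:
        markers = tracker.read()            # marker positions, ~270 Hz
        command = interpreter.map(markers)  # gestures -> robot commands
        now = time.monotonic()
        if now - last_send >= 1.0 / COMMAND_HZ:
            serial_port.write(command)      # one command packet per cycle
            last_send = now
```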
A. Motion Capture Tracking
We used the PhaseSpace Impulse X2E (phasespace.com) system to track the operator’s movements. This accurate optical tracking mechanism was used in order to test the effectiveness of the interface while avoiding the performance limitations of other types of sensors. In practice, other tracking systems such as inertial measurement units (IMUs) could be employed.

Our motion capture setup includes six lightweight-linear-detector cameras monitoring seven active LED markers placed on the forearm and the chest of the operator. The gestures are tracked in real time at 270 Hz. As shown in Fig. 2(a), the Body Interface exploits four markers on the operator’s forearm for gesture recognition (two on the elbow, EL1 and EL2, and two on the wrist, WR1 and WR2), and three on the operator’s chest to create a body-centered reference system (CH1, CH2, and ABD).
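As a concrete data structure for this marker layout, the sketch below groups one tracking sample into the seven named markers of Fig. 2(a) and computes the wrist centroid WR used in Sec. II-B; the class and function names are illustrative only, not the authors' code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MarkerFrame:
    """One motion capture sample (marker names follow Fig. 2(a))."""
    EL1: np.ndarray  # elbow markers
    EL2: np.ndarray
    WR1: np.ndarray  # wrist markers
    WR2: np.ndarray
    CH1: np.ndarray  # chest markers defining the body-centered frame
    CH2: np.ndarray
    ABD: np.ndarray

def wrist_centroid(f: MarkerFrame) -> np.ndarray:
    # WR is defined in Sec. II-B as the centroid of WR1 and WR2
    return 0.5 * (f.WR1 + f.WR2)
```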
B. Gesture Interpreter Tool
The operator’s gestures are mapped to the kinematics of the robot through our custom Gesture Interpreter Tool (GIT). The GIT recognizes three types of commands: grow/retract, steer left/right/backwards/forwards, and rotate the end effector. One command of each type can be given simultaneously. The communication with the robot’s microcontroller is realized via a serial port at 66 Hz.

The specific mapping between the gestures and commands can be customized based on the application. Fig. 2(b) shows one proposed mapping, used to control a soft robot hanging from the ceiling and growing in the direction of gravity. Moving the forearm above and below the operator’s transverse plane (forearm flexion/extension) makes the robot retract and grow, respectively, whereas all movements parallel to the transverse plane are mapped as steering movements (forearm back and forth and medial/lateral rotation, respectively backwards/forwards and left/right); finally, pronosupination defines the end effector rotation.
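A minimal sketch of this qualitative mapping follows, assuming wrist coordinates already expressed in the operator's body frame (Sec. II-B.2) and a calibrated deadband (Sec. II-B.1); the function and variable names are hypothetical.

```python
def map_gestures(wrist_xyz, prono_angle, deadband):
    """Hypothetical mapping from forearm pose to the three command types
    (one command of each type may be issued simultaneously)."""
    x, y, z = wrist_xyz
    db_lower, db_upper = deadband
    # growth: forearm above the transverse-plane deadband retracts the
    # robot, below it grows, inside it holds length
    if y > db_upper:
        growth = "retract"
    elif y < db_lower:
        growth = "grow"
    else:
        growth = "hold"
    steer = (x, z)          # in-plane wrist position -> steering command
    rotate = prono_angle    # pronosupination -> end-effector rotation
    return growth, steer, rotate
```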
1) Calibration and Command Mapping:
The locations of the wrist and elbow define the sent commands. Since the interface is based on body movements, the system needs an initial calibration to account for the operator’s reach workspace.

The steering commands are mapped to the x and z coordinates of the wrist. The wrist location WR (the centroid of WR1 and WR2) is projected onto the operator’s transverse plane to give the coordinates ⟨x_WR, z_WR⟩. During the calibration, the system stores the limits of the operator’s reach in the four directions (left/right/backwards/forwards), which then correspond to the limits of the robot’s workspace.

The command of growth/retraction is triggered when the operator’s hand exceeds a certain threshold of y_WR. During calibration, the operator defines a deadband [db_l, db_u] along the y axis: if the y coordinate of WR falls within this region, the robot keeps its length fixed; otherwise, it grows or retracts at a fixed speed, based on the position of the operator’s hand. In this case, the deadband is defined by half of the operator’s reachable limits, to allow the operator to easily steer and change length simultaneously.

Finally, the angle θ_P defines the rotation of the end effector. This is the angle between the two segments WR1–WR2 and EL1–EL2 when their projection lies on the operator’s coronal plane. During calibration, the offset between θ_P and the starting orientation of the end effector is stored to assure the operator’s comfort during the teleoperation.
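The paper does not give closed-form expressions for the deadband or for θ_P, so the following sketch is one plausible reading under stated assumptions: a vertical band spanning half the operator's reach, centered on its midpoint, and θ_P computed from the two marker segments projected onto the coronal plane.

```python
import numpy as np

def growth_deadband(reach_down, reach_up):
    """Assumed deadband derivation: a band of half the vertical reach,
    centered on the midpoint of the operator's reach limits."""
    half_band = 0.25 * (reach_up - reach_down)
    mid = 0.5 * (reach_up + reach_down)
    return mid - half_band, mid + half_band  # (db_l, db_u)

def theta_p(WR1, WR2, EL1, EL2):
    """theta_P: angle between segments WR1-WR2 and EL1-EL2, with both
    segments projected onto the operator's coronal (x-y) plane."""
    w = (WR1 - WR2)[:2]  # drop the anterior-posterior (z) component
    e = (EL1 - EL2)[:2]
    cos = np.dot(w, e) / (np.linalg.norm(w) * np.linalg.norm(e))
    # unsigned angle; a signed variant could use the 2D cross product
    return np.arccos(np.clip(cos, -1.0, 1.0))
```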
2) Reference System Alignment:
In order to properly retrieve the data, the GIT needs to define a body-centered reference system. The three chest markers allow the operator to be aligned to the motion capture reference system, resulting in an interface that is independent of the operator’s pose in space. As shown in Fig. 2(c), the frame defined by the calibration of the Motion Capture system ⟨x_MC, y_MC, z_MC⟩ is transformed into the reference system of the operator ⟨x_OP, y_OP, z_OP⟩, such that the coordinates of the markers are expressed with reference to the latter. In particular, CH1, CH2, and ABD define a plane, which the GIT transforms to be lying on the x_OP–y_OP plane, with CH1 placed at the origin of the new reference system. The operator can therefore control the robot in whatever body pose is most comfortable.
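A minimal sketch of this alignment follows, assuming CH1, CH2, and ABD are 3D marker positions in the motion capture frame; the exact axis conventions are our assumption, not the authors' specification.

```python
import numpy as np

def body_frame(CH1, CH2, ABD):
    """Assumed construction of the operator-centered frame: the plane
    through CH1, CH2, ABD becomes the x-y plane, with origin at CH1."""
    x_axis = CH2 - CH1
    x_axis /= np.linalg.norm(x_axis)
    normal = np.cross(x_axis, ABD - CH1)     # normal to the chest plane
    normal /= np.linalg.norm(normal)
    y_axis = np.cross(normal, x_axis)        # completes right-handed frame
    R = np.vstack([x_axis, y_axis, normal])  # rows: operator axes in mocap frame
    return R, CH1

def to_body(p, R, origin):
    # express a marker position p (mocap frame) in the operator frame
    return R @ (p - origin)
```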
Fig. 3. (a) The soft growing robot with its components, and the proposed task scenario with the orientation of the operator’s reference system. (b) The soft robot during the manipulation task, moving a block to a specific target.

III. SOFT-GROWING ROBOT

We built a soft growing robot specifically for manipulation tasks. This section describes its design and control strategies.
A. Design
The soft growing manipulator can grow, retract, and steer in three dimensions while carrying a payload, as shown in Fig. 3. The device retracts into a portable, sealed container which can be easily mounted anywhere. The soft growing manipulator everts, adding new material at the tip when pressurized, and the DC motor inside the container pulls at the tip, inverting the material for retraction. The fabrication of the soft growing manipulator and the design of the manipulator’s container are similar to the robot described in our previous work [7], with the addition of two new components: (i) a cable-driven steering system, and (ii) a wireless gripper mount. The steering system consists of three evenly spaced cables, driven by three gearmotors (Pololu 131:1 37Dx73LM) at the container outlet to provide 3-degree-of-freedom motion. The ends of the cables are fixed to the proximal end of the manipulator for steering. The gripper is driven by two servo motors: one controls the rotation and the other controls grasping. The gripper (Standard Gripper Kit-Rollpaw, SunFounder) connects to and moves along the tip of the robot with a magnetic attachment similar to the one presented in [18], allowing for the completion of grasping and manipulation tasks. The robot is made of a heat-sealable thermoplastic polyurethane fabric sheet and can grow to over 1 m in length.
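The paper does not detail how steering commands are allocated to the three cables; under the constant-curvature assumption it cites in Sec. III-B [19], a common allocation for three tendons spaced 120° apart is sketched below. All names and the sign convention are our assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed tendon layout: three cables spaced 120 degrees apart around
# the robot body, each at distance `radius` from the central axis.
CABLE_ANGLES = np.radians([0.0, 120.0, 240.0])

def cable_displacements(kappa, phi, length, radius):
    """Cable length changes to bend a segment of given length with
    curvature kappa toward bending-plane direction phi (constant-curvature
    assumption [19])."""
    # the cable on the inside of the bend shortens (negative displacement),
    # while the cable on the opposite side lengthens
    return -radius * length * kappa * np.cos(CABLE_ANGLES - phi)
```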
B. Control

With reference to the parameters described in Sec. II, the Body Interface controls the following robot parameters:
• the end effector position (in meters), given by ⟨x_WR, z_WR⟩; these are the two coordinates of the tip of the robot given a certain length of the body (the coordinate along the direction of growth may vary when the length of the robot is fixed, as a result of steering);
• the orientation of the gripper (in radians), given by the angle θ_P; and
• the direction of length change, either growing, retracting, or static, given by y_WR relative to the growth deadband [db_l, db_u]. When the deadband is exceeded, the robot is commanded to grow or retract at a constant rate (in radians per second); since growth is driven by internal pressure in addition to the container motor, the actual robot growth is not constant but is upper bounded by the commanded motor speed.

Because the robot tip does not have tracking sensors, the controller relies on the human operator to close the loop and achieve the desired end effector position. More details about the mapping and control strategies can be found in our previous work [7], which is based on the constant curvature model of continuum robots [19].

For the experiment described in Sec. IV, the operator opens and closes the gripper with a verbal command to the investigator.
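As an illustration of this command set, the sketch below assembles one packet per 66 Hz cycle from the three controlled quantities; the packet format and all names are invented for illustration and are not the authors' serial protocol.

```python
def robot_command(x_wr, z_wr, theta_p, growth, grow_rate=1.0):
    """Hypothetical command packet sent each 66 Hz cycle.

    x_wr, z_wr : commanded tip position coordinates (m)
    theta_p    : commanded gripper orientation (rad)
    growth     : "grow", "retract", or "hold" (from the deadband logic)
    grow_rate  : constant eversion motor speed (rad/s); actual growth is
                 upper bounded by this value, since internal pressure
                 also drives eversion
    """
    rate = {"grow": grow_rate, "retract": -grow_rate, "hold": 0.0}[growth]
    return f"{x_wr:.3f},{z_wr:.3f},{theta_p:.3f},{rate:.3f}\n".encode()
```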
IV. EXPERIMENTAL STUDY

The Body Interface was tested on a pick-and-place task to evaluate its usability in terms of accuracy, timing, and workload.

A. Participants

Twelve participants took part in the experiment (seven males and five females, average age 26 years).

B. Task and Scenario
Figures 1 and 3 show the scenario of the experiment. The participants were asked to pick up a block placed on a starting pillar underneath the robot’s base and then move the block onto a designated target. There were three targets placed on three different pillars, and all the participants repeated the task five times for each target, for a total of fifteen repetitions. The experiment was designed such that the participants were randomly and equally divided to explore all six combinations of target ordering.

The setup details, including elements, size, and layout, are as follows:
• the Block, a cube approximately 3 cm per side;
• Target 1, placed 25 cm to the left of the block, on a pillar 20 cm tall;
• Target 2, placed 25 cm to the right of the block, on a pillar 17 cm tall;
• Target 3, placed 25 cm in front of the block, on a pillar 9 cm tall; and
• the Robot’s Base, placed over the block, at a distance of 1 m from the ground and 70 cm from the block.

The block starting location and the targets were placed in the center of support surfaces with an area of 12 × 12 cm, and the robot started each trial from an initial length of 50 cm measured from gripper to container. The participants were asked to face the robot as shown by the reference system in Fig. 3. Note that this was not a necessary constraint, as the GIT fixes the reference system based on the operator’s position, but it was useful to normalize the position of the participants among all the trials and assure consistency in the results.

All the participants performed the experiment after a five-minute training phase, in which they were familiarized with the robot and the interface. They were instructed to move the robot, learn how fast the commands could be performed, explore the workspace (including testing the response of small and large hand movements), and grasp the block. During the training, the investigator illustrated strategies to get a good grasp on the block and retract without buckling the robot body.

During the experiment, a trial was considered a failure if the block fell to the table surface, which might be due to bad grasping, or to hitting the pillar or the block and causing the block to move. This most often occurred after overshooting on the growth length. Participants were asked to repeat any failed trials, such that each of them performed a total of fifteen good trials.

After the training session, the participants started the real task, changing the target every five trials based on their designed ordering. In particular, a single trial was composed of two phases:
• Grasping phase, where the operator is asked to reach the block and grasp it; and
• Placing phase, where the operator is asked to move the block from the starting position and place it on the designated target.

These phases were executed sequentially without a break in the participant’s control of the robot, and both of them involved activities such as growing towards the targets, orienting the gripper for a proper grasp, avoiding the pillars, and retracting when needed (especially after grasping the cube to avoid dragging it on the support surface). After each trial, the robot was automatically reset to its starting position and the block was manually replaced on the initial pillar by the investigator.
C. Evaluation Metrics
We used the following metrics to evaluate the task performance:
• Target Placement Error (TPE): the distance, measured in centimeters, between the center of the block and the target once the task is finished, representing the accuracy of the placement.
• Task Completion Time (TCT): the time required to complete a trial, measured in seconds. We broke this parameter into: (i) the overall time of the trial, from start to end; (ii) the time of each phase within a single trial; and (iii) the time spent performing the actual grasp or placement, excluding the time spent in reaching either the block or the target.
• Failure Rate (FR): the number of trials in which the block did not reach the target, which were then repeated.
• Standard NASA Task Load Index (NASA-TLX): a subjective standard assessment rating perceived workload while performing a certain task [20]; the participants were asked questions about mental load, temporal load, effort, and frustration scale for each session, and the weighted average of these was used to calculate the overall workload (see the sketch after this list).
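As a concrete reading of the first and last metrics, the sketch below computes TPE as a planar distance and the NASA-TLX overall workload as the weighted average of the subscale ratings [20]; the function names are illustrative, not the authors' analysis code.

```python
import numpy as np

def target_placement_error(block_xy, target_xy):
    # TPE: distance (cm) between the block center and the target center
    return float(np.linalg.norm(np.asarray(block_xy) - np.asarray(target_xy)))

def nasa_tlx_workload(ratings, weights):
    """Overall NASA-TLX workload as the weighted average of the subscale
    ratings; weights come from the standard pairwise comparisons [20]."""
    ratings, weights = np.asarray(ratings), np.asarray(weights)
    return float(np.sum(ratings * weights) / np.sum(weights))
```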
D. Results and Discussion
Fig. 4 shows the histogram of the Target Placement Error over all trials, considering all the participants and all the targets: most of the trials resulted in errors lower than 2 cm, and in particular, three of them achieved near-zero error.

Fig. 4. Histogram of the Target Placement Error (TPE) over all 180 trials (15 trials each for 12 participants), including all the successful block placements of the participants (in blue), compared to the number of failures (in red).

Fig. 5. Performance of the manipulation task throughout the experimental study. Dots show individual participants’ performance (color corresponds to participant). Average performance across participants (black line) is consistent over the experiment. (a) Grasping phase performance as measured by time to successfully grasp the object. (b) Placing phase performance as measured by Target Placement Error (TPE).

Fig. 6. Paths followed by the operator’s hand during object manipulation trials, depicting the participant who completed the task in the shortest time for each target. Each plot indicates the starting and ending point, as well as the moment when the grasp was performed (the black ×). The planes outlined in red dashed lines represent the deadband [db_l, db_u], indicating where the grow/retraction commands were triggered.

To evaluate the participants’ performance throughout the experiment, we used the time to successfully grasp the object for the Grasping phase and the Target Placement Error (TPE) for the Placing phase as the measures of performance. Fig. 5(a) shows that grasping times remained consistent across trials, and Fig. 5(b) shows that the average placement error remained steady and low (below 2 cm) throughout the experiment. Both results show that the interface was intuitive for users to operate and the majority of learning took place in the initial training block.

To understand the participants’ performance of the task beyond these performance metrics, we examined how participants commanded the robot to reach all three targets. Fig. 6 shows three examples of the path performed by the participants while executing the task, one for each target. Each plot illustrates the best performance in terms of timing for the respective target, and shows the path within the workspace of the participant based on the Body Interface calibration (the values of the axes are expressed in millimeters). Furthermore, the deadband [db_l, db_u] is also shown to indicate where on the path the growth and retraction commands were triggered. These plots indicate that the strategy followed by the participants was mostly consistent from target to target, and followed the instructions provided during the training phase. During the Grasping phase, participants started the trial growing towards the block, tuned the position of the gripper by steering, and then performed the grasp (black ×); subsequently, during the Placing phase, they retracted the robot to avoid any collision with the pillar, and then moved towards the target while steering and growing, ultimately tuning the position for the best placement.

We can focus on the results shown in Fig. 6 in two ways: in the breakdown of Task Completion Time, and in the location of block placement in the users’ command space. As suggested by Fig. 7(a), the Placing phase took more time than the Grasping one; this is true especially for Target 3 (see also Fig. 6(c)), which was the furthest from the starting height of the cube and required more growing time. However, as shown by Fig. 7(b), if we do not consider the time spent during eversion (growth and retraction), there are no noticeable differences between Grasping and Placing.
This indicates that steering was equally easy at all lengths when using the Body Interface.

Fig. 7. Task Completion Time data for each target and each phase, including median, interquartile range with outliers, and max/min across all participants. Panel (a) reports the overall time from start to end; panel (b) reports the time spent in steering, which is the time spent to tune the final position.

Fig. 8. Locations of placement command in users’ calibrated wrist coordinates for the three targets. Color of the placement location indicates the Target Placement Error.

Looking at the locations in the command space where participants placed the block, we can see clear clusters indicating each of the three target locations (Fig. 8). The color of the dots indicates the Target Placement Error for that trial. We can see two interesting features in the data: some high-error placements occurred close to the center of the clusters, and some low-error placements occurred well outside. The high-error dots can be explained by the quality of the grasp for that trial, since some grasps would cause the block to roll or move significantly after being released. On the other hand, even though participants were instructed to place the block on the target, the low-error dots outside the clusters show where participants did not grow the robot as far and dropped the block from a height. This strategy required a larger steering command from the participant to reach the same location vertically over the target, putting them outside the cluster.

Lastly, the results of the NASA-TLX showed an average workload value of 68 ± 11% among all the participants, which indicates that the task was challenging but not overly demanding. The participants indicated that the Grasping phase was slightly more difficult than the Placing: it was easy to hit the block with the gripper when trying to align the robot well, especially after overshooting the growth command.

V. CONCLUSION

In this paper, we presented an intuitive interface to teleoperate a soft growing robot with arm gestures. We demonstrated that this interface can be used to perform a pick-and-place task by users with no previous training, and that those users can achieve placement errors below 2 cm on average. This work shows a promising first step for creating interfaces that allow humans to control soft robots more intuitively and close the loop around the nonlinearities between joint and task space.

In the future, we would like to improve the performance of the soft robot for teleoperated manipulation. Specifically, the participants consistently indicated that the Grasping phase was the hardest part of the task. We believe that the primary reason is the two-finger gripper design used, and the need to align it precisely to the block surface. A possible solution is to integrate a more compliant and adaptable gripper, like a four-fingered soft gripper [5]; such a device would ensure a more powerful and stable grasp, and remove the need to accurately position the gripper in the pronosupination degree of freedom.

Another important extension of this study would be to compare the Body Interface with previously proposed control interfaces, specifically the flexible joystick proposed by El-Hussieny et al. [17].

Finally, the last extension of the work will be to develop shared autonomy protocols to improve the interaction during teleoperation. The results of this work have shown that, although the Body Interface can achieve good performance in terms of accuracy and timing, there is still room for improvement. By allowing the robot to participate in the execution of the task, the role of the human operator will be simplified and the different strengths of the human and the robot can be exploited. Different strategies that could be examined include: (i) haptic feedback through a holdable device [21], allowing the robot to provide guidance information to the operator and suggest the correct path to reach the targets; and (ii) artificial-intelligence algorithms mimicking the assist-as-needed paradigm used in robot-based rehabilitation [22], where the robot will move autonomously towards the target only when the operator needs help to finalize the movement, and only of a limited magnitude.
REFERENCES

[1] D. Trivedi, C. D. Rahn, W. M. Kier, and I. D. Walker, “Soft robotics: Biological inspiration, state of the art, and future research,” Applied Bionics and Biomechanics, vol. 5, no. 3, pp. 99–117, 2008.
[2] M. Calisti, M. Giorelli, G. Levy, B. Mazzolai, B. Hochner, C. Laschi, and P. Dario, “An octopus-bioinspired solution to movement and manipulation for soft robots,” Bioinspiration & Biomimetics, vol. 6, no. 3, p. 036002, 2011.
[3] E. Coevoet, A. Escande, and C. Duriez, “Soft robots locomotion and manipulation control using FEM simulation and quadratic programming,” in IEEE International Conference on Soft Robotics (RoboSoft), 2019, pp. 739–745.
[4] E. Brown, N. Rodenberg, J. Amend, A. Mozeika, E. Steltz, M. R. Zakin, H. Lipson, and H. M. Jaeger, “Universal robotic gripper based on the jamming of granular material,” Proceedings of the National Academy of Sciences, vol. 107, no. 44, pp. 18809–18814, 2010.
[5] F. Ilievski, A. D. Mazzeo, R. F. Shepherd, X. Chen, and G. M. Whitesides, “Soft robotics for chemists,” Angewandte Chemie International Edition, vol. 50, no. 8, pp. 1890–1895, 2011.
[6] M. Cianchetti, T. Ranzani, G. Gerboni, T. Nanayakkara, K. Althoefer, P. Dasgupta, and A. Menciassi, “Soft robotics technologies to address shortcomings in today’s minimally invasive surgery: the STIFF-FLOP approach,” Soft Robotics, vol. 1, no. 2, pp. 122–131, 2014.
[7] M. M. Coad, L. H. Blumenschein, S. Cutler, J. A. R. Zepeda, N. D. Naclerio, H. El-Hussieny, U. Mehmood, J.-H. Ryu, E. W. Hawkes, and A. M. Okamura, “Vine robots: Design, teleoperation, and deployment for navigation and exploration,” IEEE Robotics and Automation Magazine, 2019, accepted (preprint on arXiv:1903.00069).
[8] M. Wooten, C. Frazelle, I. D. Walker, A. Kapadia, and J. H. Lee, “Exploration and inspection with vine-inspired continuum robots,” in IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1–5.
[9] E. W. Hawkes, L. H. Blumenschein, J. D. Greer, and A. M. Okamura, “A soft robot that navigates its environment through growth,” Science Robotics, vol. 2, no. 8, p. eaan3028, 2017.
[10] H. B. Gilbert, D. C. Rucker, and R. J. Webster III, “Concentric tube robots: The state of the art and future directions,” in Robotics Research. Springer, 2016, pp. 253–269.
[11] D. Rus and M. T. Tolley, “Design, fabrication and control of soft robots,” Nature, vol. 521, no. 7553, p. 467, 2015.
[12] C. Fellmann, D. Kashi, and J. Burgner-Kahrs, “Evaluation of input devices for teleoperation of concentric tube continuum robots for surgical tasks,” in Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 9415. International Society for Optics and Photonics, 2015, p. 94151O.
[13] M. Csencsits, B. A. Jones, W. McMahan, V. Iyengar, and I. D. Walker, “User interfaces for continuum robot arms,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 3123–3130.
[14] M. D. Grissom, V. Chitrakaran, D. Dienno, M. Csencits, M. Pritts, B. Jones, W. McMahan, D. Dawson, C. Rahn, and I. Walker, “Design and experimental testing of the OctArm soft robot manipulator,” in Unmanned Systems Technology VIII, vol. 6230. International Society for Optics and Photonics, 2006, p. 62301F.
[15] A. Majewicz and A. M. Okamura, “Cartesian and joint space teleoperation for nonholonomic steerable needles,” in World Haptics Conference (WHC), 2013, pp. 395–400.
[16] C. G. Frazelle, A. D. Kapadia, K. E. Fry, and I. D. Walker, “Teleoperation mappings from rigid link robots to their extensible continuum counterparts,” in IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 4093–4100.
[17] H. El-Hussieny, U. Mehmood, Z. Mehdi, S.-G. Jeong, M. Usman, E. W. Hawkes, A. M. Okamura, and J.-H. Ryu, “Development and evaluation of an intuitive flexible interface for teleoperating soft growing robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4995–5002.
[18] J. Luong, P. Glick, A. Ong, M. S. deVries, S. Sandin, E. W. Hawkes, and M. T. Tolley, “Eversion and retraction of a soft robot towards the exploration of coral reefs,” in IEEE International Conference on Soft Robotics (RoboSoft), 2019, pp. 801–807.
[19] R. J. Webster III and B. A. Jones, “Design and kinematic modeling of constant curvature continuum robots: A review,” The International Journal of Robotics Research, vol. 29, no. 13, pp. 1661–1683, 2010.
[20] S. G. Hart and L. E. Staveland, “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research,” in Advances in Psychology. Elsevier, 1988, vol. 52, pp. 139–183.
[21] J. M. Walker, N. Zemiti, P. Poignet, and A. M. Okamura, “Holdable haptic device for 4-DoF motion guidance,” in IEEE World Haptics Conference (WHC), in press, 2019.
[22] F. Stroppa, C. Loconsole, S. Marcheschi, N. Mastronicola, and A. Frisoli, “An improved adaptive robotic assistance methodology for upper-limb rehabilitation,” in