Towards integrated tactile sensorimotor control in anthropomorphic soft robotic hands
Nathan F. Lepora, Andrew Stinchcombe, Chris Ford, Alfred Brown, John Lloyd, Manuel G. Catalano, Matteo Bianchi, Benjamin Ward-Cherrier
Abstract — In this work, we report on the integrated sensorimotor control of the Pisa/IIT SoftHand, an anthropomorphic soft robot hand designed around the principle of adaptive synergies, with the BRL tactile fingertip (TacTip), a soft biomimetic optical tactile sensor based on the human sense of touch. Our focus is how a sense of touch can be used to control an anthropomorphic hand with one degree of actuation, based on an integration that respects the hand's mechanical functionality. We consider: (i) closed-loop tactile control to establish a light contact on an unknown held object, based on the structural similarity with an undeformed tactile image; and (ii) controlling the estimated pose of an edge feature of a held object, using a convolutional neural network approach developed for controlling other sensors in the TacTip family. Overall, this gives a foundation to endow soft robotic hands with human-like touch, with implications for autonomous grasping, manipulation, human-robot interaction and prosthetics.
I. INTRODUCTION

Replication of the human hand and its functionality is one of the major goals of robotics, with expected widespread industrial and societal ramifications. There are emerging trends in the design of robot hands, such as soft actuation and the use of underactuation [1]–[5], which have a close link with biological principles of motor control. The versatility and execution of the human hand depend on a coupling between its mechanical structure and its sensory capabilities in the form of proprioceptive and tactile feedback. How these sensorimotor capabilities interact to give humans their unique dexterity is not yet understood. However, it seems reasonable to consider them as core capabilities whose embodiment in robot hands could bridge the gap between human and robot performance in autonomous grasping and manipulation.

As emphasised recently in a review of the trends and challenges in robot manipulation [6], it remains an open problem to construct robotic hands that contain all of the mechanical and haptic sensing components necessary to control the hand to grasp, manipulate and explore objects with the ease of the human hand. The present work presents an integrated sensorimotor control of an anthropomorphic hand based on two complementary soft robotic technologies: the Pisa/IIT SoftHand [7] and the BRL tactile fingertip (TacTip) [8], [9]. An initial integration was proposed in [10], which is progressed here to implement the first sensorimotor control loop of a SoftHand using this unique combined technology.

NL, AS, CF, AB, JL and BWC are with the Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, U.K. MB is with the Centro di Ricerca E. Piaggio and the Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy. MGC is with the Istituto Italiano di Tecnologia, Genova, Italy. Corresponding author: [email protected]

Fig. 1. The BRL biomimetic optical tactile fingertip (TacTip) integrated as the distal phalanx of the fifth digit of the anthropomorphic Pisa/IIT SoftHand.

The SoftHand is an anthropomorphic soft articulated robot hand whose design and control are based on postural synergies that represent a reduced set of principal directions in hand configuration space describing the most frequent postures in human hand movements [11]. Endowing soft hands with advanced sensing capabilities comes with challenges (discussed in the next section), with several solutions proposed [12]–[15], but none has captured the rich geometric information from optical tactile sensors such as the MIT GelSight [16], [17] or BRL TacTip [8], [9]. Of these, the TacTip is based on the dermal papillae structure in human tactile skin where low-threshold mechanoreceptors are localized [18, Fig. 1], and is fabricated by 3D-printing pin-like structures in a compliant skin imaged with a camera [8]. Given the soft biomimetic nature of the SoftHand, the TacTip offers a complementary soft biomimetic sense of touch.

In this paper, an integration of the Pisa/IIT SoftHand and BRL TacTip is used to investigate sensorimotor control based on the tactile feedback. First, we demonstrate closed-loop tactile control to maintain delicate hand closure on an unknown object, using a basic measure of contact deformation with the Structural Similarity Index Measure (SSIM) against an undeformed tactile image. Second, we validate that the customized tactile sensor can accurately perceive the pose of edge features of held objects, aiming to emulate the recent performance of the BRL TacTip with deep learning [19]. This pose-based control is then used to control the grasp closure and track the object pose. Success on these tests indicates the promise of this tactile hand for other sensorimotor tasks.
II. BACKGROUND AND RELATED WORK

Tactile sensing in dexterous robot hands has been a major focus of robotics research for decades, with a huge variety of different hand and sensor combinations [20]. In part, this proliferation has arisen because of a lack of guiding principles to encourage roboticists along a fruitful path to tame the complexity of hands [21]. One approach, pursued here, is to exploit the principles underlying the function of the human hand to inform technological advances. Indeed, although the debate on anthropomorphism in robotics is still open [22], the human hand still represents the unmatched gold standard for dexterous manipulation and haptic exploration.

The Pisa/IIT SoftHand combines two key principles that underlie human dexterity: postural synergies and soft actuation [23], which results in a five-fingered anthropomorphic robot hand that is simple to control yet moves with the primary motions of the human hand. The original design implemented one soft synergy [7], resulting in a wide range of grasps controlled by just one degree of actuation. More recently, this has progressed to the Pisa/IIT SoftHand 2 by implementing a second postural synergy, which enables a broader range of grasps and some in-hand manipulation [24].

Endowing soft robot hands with tactile sensing and proprioceptive capabilities is a challenging task [6]. Classical solutions for rigid hands, e.g. joint encoders, cannot be applied in a straightforward manner to deformable structures. Solutions based on inertial measurements for hand posture reconstruction [12] or high-frequency acceleration and contact detection [13] have been proposed, alongside indirect estimation methods, such as measuring the motor current powering the artificial hand [14].
However, although promising, these techniques are still far from fully capturing the rich spectrum of tactile cues arising from the interaction with external objects in a manner resembling human touch.

The BRL TacTip is based on the principle that transduction in the upper layers of human skin takes place via the deformation of dermal papillae [8], [9]. These structures enable sensing primarily via shear, providing an indirect measure of pressure [25], unlike most artificial tactile sensors that use pressure-sensitive taxel arrays. Originally, the TacTip had a soft tactile dome, which has since been customized into a family of biomimetic tactile sensors of various 3D-printed morphologies [8] suitable both for stand-alone use and for integration with robotic grippers large enough to accommodate the same USB camera [26]–[28].

Until recently, an issue with optical tactile sensors is that they have needed to be large to accommodate a camera [8], [16], which precluded sensors the size of a human fingertip. This problem is being overcome by the ongoing miniaturization of camera technology, which has resulted in smaller versions of the GelSight integrated with robotic grippers [29]–[33] and a version of the TacTip integrated with the 3-fingered Model-O OpenHand [34], [35].

The integration of optical tactile sensing into 5-fingered anthropomorphic hands gives a new level of challenge: to reach the size and shape of a human fingertip while respecting the hand's mechanical integrity.

Fig. 2. CAD of the tactile fingertip (approx. 1:1 scale on page). The tip (to left; black and brown) is printed as one part. The base (white and grey) houses the camera (green) and attaches as the distal phalanx of the finger.

Fig. 3. Side, top and bottom views of the integrated tactile fingertip. The single output cable connects to a PC via USB.
As far as we know, the only optical tactile sensor integrated with an anthropomorphic 5-fingered robot hand is a neuromorphic (event-based) version of the TacTip on the qb SoftHand [10]. Here we introduce a smaller digital version of the TacTip on a Pisa/IIT SoftHand with a tendon layout that aids customization of the distal phalanx. By using a digital (rather than event-based) TacTip, we are able to utilize deep neural network methods that have been applied recently to controlling the TacTip [19], [36], which are here developed into an integrated approach for sensorimotor control of the SoftHand.

III. METHODS
A. Pisa/IIT SoftHand
The Pisa/IIT SoftHand is an anthropomorphic robotic hand of similar size to an adult human hand [7]. It has 19 joints, of which 5 are simple revolute joints that implement the adduction/abduction movement of each finger; the remaining 14 are compliant rolling-contact element joints. Tendons run from the palm base through all the fingers. The geometry of the hand's bottom part is designed to enable easy connection with standard mechanical interfaces, with a completely self-contained construction such that motors, electronics and sensors are on-board. The hand is open source and available online.

Fig. 4. Tactile images acquired from the undeformed sensor (left panels) and when the SoftHand is holding an object (right panels). The middle row shows the high-definition sensor output and the bottom row the pre-processed (subsampled and thresholded) tactile images.

Each SoftHand digit has a base and three phalanges, based on a repeating arrangement of 3 pulleys that the tendon passes through (technical details in [7], [24]). The base is a phalanx modified to end in a revolute joint that attaches to the palm. In the original SoftHand, the distal phalanx is modified into a (non-sensitive) fingertip that contains a pulley to route back the tendon. The interphalangeal soft roll-articular joints each comprise two coupled rolling cams with gear teeth, held together by elastic ligaments.

To facilitate tactile sensor integration, this work used a non-standard version of the SoftHand with a tendon in each finger that ends on a retainer. In contrast, the original SoftHand has a tendon that runs in a loop from the palm, up and down each finger via a pulley at the tip, then back to the palm. The use of a tendon that ends on a retainer made it easier to reroute the terminus of the tendon closer to the distal joint, leaving the internal volume of the distal phalanx free for modification to house the tactile sensor.

B. Tactile sensor-hand integration
The tactile fingertip is adapted from an optical biomimetic tactile sensor developed in Bristol Robotics Laboratory: the BRL TacTip. The integration with the SoftHand is facilitated by [8]: (i) the use of multi-material 3D-printing to fabricate both the soft and hard components of the fingertip, and (ii) a modular design in which the tactile fingerpad is a separate piece from the base of the sensor that holds the camera. Together these mean that the design and testing can be rapidly iterated to converge on a solution for the integration. As each digit of the SoftHand is identical, the tactile sensor can be used on any of the 5 fingers. In this work, we report only on one sensor integrated into the fifth digit, chosen mainly because it is slightly easier to access for testing.

The integration required several major changes to both the TacTip and the distal phalanx of the SoftHand:
• The tendon was rerouted to end on a retaining bolt close to the soft roll-articular joint, from its original location at the tip of the finger (bolt visible in Figure 3, middle).
• One side of the soft roll-articular joint was reproduced and extended into a hollow fingertip base for the distal phalanx, which holds the camera and gives a mount for the tip/tactile skin (Figure 2, middle; Figure 3, top).
• A backing plate was added to protect the camera electronics, provide strain relief to the cable and result in a neat profile (Figure 2, right; Figure 3, bottom panel).
• The design was based around housing a miniature camera, the Model SYD (Misumi Electronics), a compact module including the lens and electronic components (Figure 2, to scale).
• The field of view of the camera lens constrained the minimum depth of the fingerpad needed for all pins to be viewable.
• The camera module has its own LED lighting that we use to illuminate inside the fingerpad, powered by a single USB3 cable connection to a PC.
• The camera has a maximal resolution of 1080p, although we acquire images at lower resolution to reduce image processing and storage.

Other design aspects evolved from versions of the TacTip integrated into robot hands [10], [27], [28], [34]:
• The fingerpad is fabricated as one part with a multi-material 3D-printer (Stratasys Objet), comprising a rigid rim (Vero White) and a compliant skin (Tango Black+), with the rim slotting onto the base section.
• The inside of the skin extends into a rectangular array of pins; markers on the ends of these pins are printed in Vero White.
• A clear acrylic sheet is glued into the rim to give a small cavity, which is filled with an optically-clear silicone gel (RTV27905, Techsil UK) that gives the fingerpad a compliance similar to that of a human fingertip.

C. Tactile data acquisition and processing
Deformation of the tactile sensing pad is imaged with the internal camera at its native resolution of 1080p, then adaptively thresholded with a Gaussian filter (width 39, mean offset 0) and subsampled/cropped to 240-pixel-wide grey-scale images (Figure 4). All image acquisition and processing was carried out in Python with OpenCV. The tactile image data was used in two ways:
1) Contact deformation measurement:
Fig. 5. Pose ranges used as labels to train/test the neural network, along with unlabelled shear perturbations of the contact for aiding generalization [19].

Pose   Range          Perturbation
x      [-6, 6] mm     [-2, 2] mm
y      -              [-2, 2] mm
z      [0, 3] mm      [-1, 1] mm
φ      [-5, 5] deg    [-2, 2] deg
ψ      [-10, 10] deg  [-2, 2] deg
θ      [-45, 45] deg  [-2, 2] deg

A simple yet robust measure of the difference in tactile images can be found from the Structural Similarity Index Measure (SSIM) [37], which can be used to measure contact deformation by comparing a tactile image against a non-deformed reference image [34]. Here we use

e_SSIM(I) = 1 − SSIM(I, I_ref)    (1)

as a measure of the deformation of image I compared with the reference image I_ref, with SSIM implemented using Python SciKit-Image and computed from the local means, variances and cross-covariance of the two images [38]. The RMS pixel intensity change was also explored, but it was not useful as it saturated at small deformations. The SSIM-based deformation measure changed gradually as the contact intensified, making it suitable for use in a feedback controller.
2) Edge pose estimation:
A complementary use of the tactile images is to estimate the 3D pose of the edge of a contacted object relative to the sensor (Figure 5), and thus estimate the pose of an object held in hand. We have recently published a comprehensive paper on how to apply convolutional neural networks to pose estimation [19], which we refer to for the details of the deep learning used here. The principal difference from our previous work is that the tactile fingertip was trained while mounted on the SoftHand. This was arranged with a custom test platform comprising a mount that immobilises the hand and fingertip, combined with a test stimulus mounted on a robot arm (photo in Figure 8). The stimulus was then contacted repeatedly against the fingertip to gather the large amount of labelled data (10,000 contacts) needed to train and test the deep neural network.

We used a result from Ref. [19] that the pose estimation can be improved by introducing motion-dependent shear into the training data collection. Thus, each sample of data had a random labelled pose and a random unlabelled shear perturbation (ranges in Figure 5). The optimized network hyperparameters are reported in Table I, with the training implemented in the Python Keras library using a Titan Xp GPU (12 GB memory). Once trained, the deep neural network was deployed on the CPU of a standard laptop.
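For illustration only, a PoseNet-style regression network of the kind described above can be sketched in Keras. The layer counts and sizes below are placeholders, not the Bayesian-optimized hyperparameters of Table I, and the input shape is an assumption; the actual architecture is specified in [19].

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_posenet(input_shape=(128, 128, 1), n_conv=3, n_filters=32,
                  n_dense=2, n_units=64, n_pose=5):
    """Toy PoseNet-style regressor: n_conv conv/pool blocks, then n_dense
    hidden dense layers, with a linear head for the 5 labelled pose
    components (x, z, phi, psi, theta). All sizes are illustrative."""
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for _ in range(n_conv):
        model.add(layers.Conv2D(n_filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    for _ in range(n_dense):
        model.add(layers.Dense(n_units, activation="relu"))
    model.add(layers.Dense(n_pose, activation="linear"))  # regression head
    model.compile(optimizer="adam", loss="mse")
    return model
```

The linear output head and mean-squared-error loss reflect that this is regression over continuous pose labels rather than classification.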
TABLE I. Optimized neural network and learning hyperparameters: N_conv, N_filters, N_dense, N_unit.

Fig. 6. SoftHand controller with feedback derived from the tactile image.
D. Tactile SoftHand control
The SoftHand control software (Figure 6) combines a feedback loop and tactile processing pipeline implemented in Python with a Simulink interface (qbmove, distributed by qb Robotics) that sends motor commands to the robot hand. The Simulink interface is built upon a C++ API for controlling the motor in the hand, compiled into MEX files for consolidation into Simulink blocks. Real-time communication between Python and Simulink used the MATLAB Engine library.

In the Python control loop, the actuator is driven by incrementally changing the set-point for the encoder on the motor of the SoftHand, rather than directly driving the actuator (because this functionality was not part of the qbmove Simulink interface). Hence, the controller does not directly use the motor encoder position as feedback, but instead increments the set-point and then waits for that point to be reached before the next Python control loop can begin. This process functioned effectively for the experiments presented here, but does lead to a slow cycle time between the processed tactile images and the hand control (typically 100-200 ms).

A proportional-gain feedback controller was implemented based on the SSIM contact deformation defined above, with a change in motor command u(t) sent to the hand

∆u(t) = g_P ( e_SSIM(I(t)) − r ),    (2)

where I(t) is the tactile image at time t, g_P = 100 is a (hand-tuned) proportional gain and r is the set-point. A second controller was considered using the z-component of the edge pose estimation from the neural network,

∆u(t) = g_P ( z(t) − r_z ),    (3)

with the set-point r_z in the range 0-3 mm. The motor command has range 0 ≤ u ≤ u_max with u_max = 19000.

IV. RESULTS

A. Experiment 1: Adaptive grasp closure from touch
The first experiment tests the performance of the tactile SoftHand using the feedback controller in Figure 6 to maintain a light touch as the hand closes around an object. As described in the Methods, we use a proportional-gain feedback controller on the set-point of the actuator, with the feedback signal a measure of contact deformation estimated by comparing the tactile image against an undeformed reference using the structural similarity SSIM.

Fig. 7. Experiment 1 tests the control of light touch on four objects: three square prisms (20 mm, 30 mm, 40 mm) and an irregular soft object. The controller maintains the SSIM contact deformation at a fixed set point relative to no contact. A video of this experiment is provided with this paper.

The adaptation of the grasp closure was tested with 4 distinct objects and found to work well, with no failure cases under various object poses (Figure 7). The grasp closure was controlled to be sufficiently light that there was little if any movement of the objects as the finger contacted them. This is evident in Figure 7 by the motor coming to a halt as the SSIM control signal reaches its set point.

In our view, the main use of this capability is to make an initial light contact that can then be adapted further using other information from the tactile feedback about the object and its pose. Therefore, we now consider the quality of tactile information available from the sensor.
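The closure behaviour in this experiment follows the proportional update of Eq. (2); a minimal Python sketch is below. Here read_deformation and send_setpoint are hypothetical stand-ins for the tactile pipeline and the qbmove motor interface, the set-point r = 0.2 is an assumed value (the printed digit is incomplete in the source), and the sign convention follows Eq. (2) as printed.

```python
def ssim_closure_step(u, e_ssim_value, g_p=100.0, r=0.2, u_max=19000.0):
    """One proportional update of the motor set-point, Eq. (2):
    du = g_p * (e_SSIM - r), with the command clamped to [0, u_max].
    r = 0.2 is an assumed set-point value."""
    u = u + g_p * (e_ssim_value - r)
    return min(max(u, 0.0), u_max)

def close_on_object(read_deformation, send_setpoint, steps=200, **kw):
    """Run the closure loop: read e_SSIM, update the set-point, send it.
    Both callables are placeholders for the real sensor/motor interfaces."""
    u = 0.0
    for _ in range(steps):
        u = ssim_closure_step(u, read_deformation(), **kw)
        send_setpoint(u)
    return u
```

In the real system each iteration also waits for the motor encoder to reach the commanded set-point, which is what produces the 100-200 ms cycle time noted in the Methods.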
Fig. 8. Experiment 2 tests the estimation of object edge pose. The PoseNet performance is shown along with smoothed predictions (red; 50-sample moving average) and mean absolute errors (MAE) assessed as in Ref. [19].
B. Experiment 2: Tactile perception of object edge pose
To test the quality of the tactile sensing from the integrated fingertip, we assess its performance on an edge pose estimation task that has been examined in detail with the standard design of TacTip sensor [19]. This experiment is both useful in itself, to develop a trained model for estimating the pose of held objects, and also serves as a benchmark of the performance of the integrated optical tactile sensor.

Data were gathered for this test by using a robot arm to bring a test object into contact with the tactile fingertip while the finger was held immobile (Figure 8, top; also Sec. III-C). A range of object poses were considered that varied in all dimensions except along the edge (table in Figure 5), with shear perturbations applied to the contact. A 'PoseNet' convolutional neural network was trained to regress pose over the labelled tactile images (subsection III-C). The training involved Bayesian optimization over the network architecture and hyperparameters [19] (optima reported in Table I).

Pose estimation was then assessed on a distinct test set. Plots of the predictions versus labels show good performance for three (x, z, θ) pose components and fair performance for the other angular (φ, ψ) components relative to their ranges (Figure 8). We emphasise that these predictions are after a random unknown shear, which makes the regression more difficult because of the latent variable but aids generalisation.

In comparison with the standard design of TacTip sensor used in Ref. [19], the accuracies were similar for the (x, z, θ) pose components (Table II). Ranges of the angular components (φ, ψ) were smaller than those used with the standard TacTip to avoid damage as the stimulus tilts into the hand.

TABLE II. PoseNet prediction performance: MAE and range for each pose component (horizontal x, z, φ, ψ, θ) for the SoftHand TacTip and the original TacTip [19].

Fig. 9. Experiment 3A closes the hand using SSIM feedback control then ramps the motor signal. Edge pose is estimated from the tactile images.

Fig. 10.
Experiment 3B closes the hand using SSIM feedback control then controls hand closure based on the z-pose estimated from the tactile images, with set points 1 mm, 1.5 mm, 2 mm, 2.5 mm and 3 mm every 20 seconds. A video of both experiments 3A and 3B is provided with this paper.

C. Experiment 3: Object pose during adaptive grasp closure
Lastly, we combine the adaptive grasp closure and tactile perception capabilities in two experiments: first, ramping the motor signal (by 1% per second) while estimating object pose, and second, using the estimated pose to control hand closure. In both experiments, we start by closing the hand for 20 sec using the SSIM contact deformation for a light grasp, using the smallest square prism (20 mm) as the stimulus. Note that Experiments 3A and 3B (Figures 9, 10) run slower than Experiment 1 (Figure 7) because of the deep neural network. Although pose estimates took only milliseconds, the computational load caused slower image acquisition for the videos provided to accompany the figures; otherwise, all experiments would run at a similar rate.

Experiment 3A (Figure 9) shows that pose estimation remains stable during the test, with noise as expected from the offline results in Experiment 2 (Figure 8). For light contacts, both the SSIM contact deformation and z-pose increase together. At medium to strong contacts, the z-pose becomes more sensitive than the SSIM, which has a saturating non-linearity. Conversely, for almost no contact, the neural network becomes unreliable (defaulting to a mean prediction); thus, poses are only shown above a minimum SSIM deformation.

Experiment 3B (Figure 10) shows that the z-pose is suitable for feedback control to modulate the adaptive grasp closure, by stepping through 5 set points from 1 mm to 3 mm. At each set point, the motor signal equilibrates at the value needed to maintain that contact on the fingertip while the other pose estimates remain stable.

V. DISCUSSION
In this paper, we have presented an integration of the BRL tactile fingertip (TacTip) and the anthropomorphic Pisa/IIT SoftHand that is able to finely control its hand closure using tactile feedback. Two measures from the tactile images gave suitable feedback signals for controlling hand closure: (i) a structural similarity index measure (SSIM) [38] of contact deformation for very light or no contact; and (ii) object edge pose estimation from a convolutional neural network [19] for light/medium to strong contacts. Hence, the SSIM contact deformation can guide hand closure to an initial contact, followed by pose estimation to adapt that contact or grasp.

This initial study considered tactile sensing from a single fingertip to show the potential for a tactile soft hand. In principle, it should be straightforward to integrate the other four fingertips, as all fingers of the SoftHand have the same design. Another capability to explore is slip correction, as the TacTip is effective for detecting and correcting slip when integrated into a 3-fingered gripper [35]. The SoftHand used here only has one degree of actuation, but its softness enables greater dexterity by interacting with the environment. Tactile sensing from multiple fingertips could control those interactions or help control soft robotic hands with more degrees of actuation, such as the Pisa/IIT SoftHand 2 [24].

In terms of the design, the integrated tactile sensor has a similar size and shape to a human fingertip, with a soft surface compatible with the soft hand design. It is also robust: the same fingertip was used without wear or damage over tens of thousands of contacts and under driving the hand to near its maximal grasping force. This robustness complements that of the SoftHand, arising from their shared soft designs.

Future work will encompass a more complete evaluation of the performance in different autonomous exploration and grasping tasks, with a real-time implementation of the sensorimotor control and a fully 5-fingered sensorized hand.

ACKNOWLEDGMENT
We thank Vinicio Tincani and Cristiano Petrocelli (IIT) for the preparation of the hand and CAD, and Elim Kwan for internship work on this project. This work was supported by a Leverhulme Trust Research Leadership award on 'A biomimetic forebrain for robot touch' (RL-2016-39), the EU's H2020 Programme under grant agreements SOPHIA (871237), ERC Synergy Grant Natural BionicS (810346) and ReconCycle (871352), and by the Italian Ministry of Education and Research (MIUR) in the framework of the CrossLab project (Departments of Excellence).
REFERENCES

[1] C. Piazza, G. Grioli, M. G. Catalano, and A. Bicchi. A Century of Robotic Hands. Annual Review of Control, Robotics, and Autonomous Systems, 2(1):1–32, 2019.
[2] L. Birglen, T. Laliberté, and C. M. Gosselin. Underactuated Robotic Hands. Springer, December 2007.
[3] L. U. Odhner, L. P. Jentoft, M. R. Claffee, N. Corson, Y. Tenzer, R. R. Ma, M. Buehler, R. Kohout, R. D. Howe, and A. M. Dollar. A compliant, underactuated hand for robust manipulation. The International Journal of Robotics Research, 33(5):736–752, April 2014.
[4] R. Deimel and O. Brock. A novel type of compliant and underactuated robotic hand for dexterous grasping. The International Journal of Robotics Research, 35(1-3):161–185, January 2016.
[5] M. Pozzi, S. Marullo, G. Salvietti, J. Bimbo, M. Malvezzi, and D. Prattichizzo. Hand closure model for planning top grasps with soft robotic hands. The International Journal of Robotics Research, page 0278364920947469, 2020.
[6] A. Billard and D. Kragic. Trends and challenges in robot manipulation. Science, 364(6446), June 2019.
[7] M. G. Catalano, G. Grioli, A. Farnioli, A. Serio, C. Piazza, and A. Bicchi. Adaptive synergies for the design and control of the Pisa/IIT SoftHand. The International Journal of Robotics Research, 33(5):768–782, 2014.
[8] B. Ward-Cherrier, N. Pestell, L. Cramphorn, B. Winstone, M. E. Giannaccini, J. Rossiter, and N. Lepora. The TacTip Family: Soft Optical Tactile Sensors with 3D-Printed Biomimetic Morphologies. Soft Robotics, 5(2):216–227, 2018.
[9] C. Chorley, C. Melhuish, T. Pipe, and J. Rossiter. Development of a tactile sensor based on biologically inspired edge encoding. In International Conference on Advanced Robotics, pages 1–6, 2009.
[10] B. Ward-Cherrier, J. Conradt, M. G. Catalano, M. Bianchi, and N. F. Lepora. A miniaturised neuromorphic tactile sensor integrated with an anthropomorphic robot hand. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
[11] M. Santello, M. Bianchi, M. Gabiccini, E. Ricciardi, G. Salvietti, D. Prattichizzo, M. Ernst, A. Moscatelli, H. Jörntell, A. M. L. Kappers, K. Kyriakopoulos, A. Albu-Schäffer, C. Castellini, and A. Bicchi. Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands. Physics of Life Reviews, 17:1–23, 2016.
[12] G. Santaera, E. Luberto, A. Serio, M. Gabiccini, and A. Bicchi. Low-cost, fast and accurate reconstruction of robotic and human postures via IMU measurements. In IEEE International Conference on Robotics and Automation (ICRA), pages 2728–2735, 2015.
[13] M. Bianchi, G. Averta, E. Battaglia, C. Rosales, M. Bonilla, A. Tondo, M. Poggiani, G. Santaera, S. Ciotti, M. G. Catalano, and A. Bicchi. Touch-Based Grasp Primitives for Soft Hands: Applications to Human-to-Robot Handover Tasks and Beyond. In IEEE International Conference on Robotics and Automation (ICRA), pages 7794–7801, 2018.
[14] S. Casini, M. Morvidoni, M. Bianchi, M. Catalano, G. Grioli, and A. Bicchi. Design and realization of the CUFF - clenching upper-limb force feedback wearable device for distributed mechano-tactile stimulation of normal and tangential skin forces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1186–1193, 2015.
[15] G. Zöller, V. Wall, and O. Brock. Acoustic Sensing for Soft Pneumatic Actuators. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6986–6991, October 2018.
[16] W. Yuan, S. Dong, and E. H. Adelson. GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force. Sensors, 17(12):2762, 2017.
[17] M. K. Johnson and E. H. Adelson. Retrographic sensing for the measurement of surface texture and shape. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1070–1077, 2009.
[18] V. E. Abraira and D. D. Ginty. The Sensory Neurons of Touch. Neuron, 79(4):618–639, 2013.
[19] N. F. Lepora and J. Lloyd. Optimal Deep Learning for Robot Touch: Training Accurate Pose Models of 3D Surfaces and Edges. IEEE Robotics & Automation Magazine, 27(2):66–77, 2020.
[20] Z. Kappassov, J. Corrales, and V. Perdereau. Tactile sensing in dexterous robot hands — Review. Robotics and Autonomous Systems, 74:195–220, 2015.
[21] A. Bicchi and G. Tamburrini. Social Robotics and Societies of Robots. The Information Society, 31(3):237–243, 2015.
[22] C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1):71–81, 2009.
[23] M. Santello, M. Flanders, and J. F. Soechting. Postural Hand Synergies for Tool Use. Journal of Neuroscience, 18(23):10105–10115, December 1998.
[24] C. D. Santina, C. Piazza, G. Grioli, M. G. Catalano, and A. Bicchi. Toward Dexterous Manipulation With Augmented Adaptive Synergies: The Pisa/IIT SoftHand 2. IEEE Transactions on Robotics, 34(5):1141–1156, 2018.
[25] J. Platkiewicz, H. Lipson, and V. Hayward. Haptic Edge Detection Through Shear. Scientific Reports, 6(1):23551, 2016.
[26] B. Ward-Cherrier, L. Cramphorn, and N. F. Lepora. Tactile Manipulation With a TacThumb Integrated on the Open-Hand M2 Gripper. IEEE Robotics and Automation Letters, 1(1):169–175, 2016.
[27] B. Ward-Cherrier, N. Rojas, and N. F. Lepora. Model-Free Precise in-Hand Manipulation with a 3D-Printed Tactile Gripper. IEEE Robotics and Automation Letters, 2(4):2056–2063, 2017.
[28] N. Pestell, L. Cramphorn, F. Papadopoulos, and N. F. Lepora. A Sense of Touch for the Shadow Modular Grasper. IEEE Robotics and Automation Letters, 4(2):2220–2226, 2019.
[29] D. F. Gomes, Z. Lin, and S. Luo. GelTip: A Finger-shaped Optical Tactile Sensor for Robotic Manipulation. arXiv:2008.05404 [cs], 2020.
[30] B. Romero, F. Veiga, and E. Adelson. Soft, Round, High Resolution Tactile Fingertip Sensors for Dexterous Robotic Manipulation. arXiv:2005.09068 [cs], 2020.
[31] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez. GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1927–1934, 2018.
[32] A. Wilson, S. Wang, B. Romero, and E. Adelson. Design of a Fully Actuated Robotic Hand With Multiple GelSight Tactile Sensors. arXiv:2002.02474 [cs], 2020.
[33] M. Lambeta, P. Chou, S. Tian, B. Yang, B. Maloon, V. R. Most, D. Stroud, R. Santos, A. Byagowi, G. Kammerer, D. Jayaraman, and R. Calandra. DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor With Application to In-Hand Manipulation. IEEE Robotics and Automation Letters, 5(3):3838–3845, 2020.
[34] J. W. James, A. Church, L. Cramphorn, and N. F. Lepora. Tactile Model O: Fabrication and testing of a 3D-printed, three-fingered tactile robot hand. Soft Robotics, 2020. arXiv:1907.07535.
[35] J. W. James and N. F. Lepora. Slip detection for grasp stabilisation with a multi-fingered tactile robot hand. IEEE Transactions on Robotics, 2020. arXiv:2010.01928.
[36] N. F. Lepora, A. Church, C. de Kerckhove, R. Hadsell, and J. Lloyd. From Pixels to Percepts: Highly Robust Edge Perception and Contour Following Using Deep Learning and an Optical Biomimetic Tactile Sensor. IEEE Robotics and Automation Letters, 4(2):2101–2107, 2019.
[37] S. Luo, W. Yuan, E. Adelson, A. G. Cohn, and R. Fuentes. ViTac: Feature Sharing Between Vision and Tactile Sensing for Cloth Texture Recognition. In IEEE International Conference on Robotics and Automation (ICRA), pages 2722–2727, May 2018.
[38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity.
IEEE Robotics and Automation Letters , 4(2):2101–2107, 2019.[37] S. Luo, W. Yuan, E. Adelson, A. G. Cohn, and R. Fuentes. ViTac:Feature Sharing Between Vision and Tactile Sensing for Cloth TextureRecognition. In , pages 2722–2727, May 2018. ISSN: 2577-087X.[38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Imagequality assessment: from error visibility to structural similarity.