CoVR: A Large-Scale Force-Feedback Robotic Interface for Non-Deterministic Scenarios in VR
Elodie Bouzbib, Gilles Bailly, Sinan Haliyo, Pascal Frey. Sorbonne Université, CNRS, ISIR, Paris, France; Sorbonne Université, ISCD, Paris, France.
Figure 1. CoVR is a physical column mounted on a 2D Cartesian ceiling robot to provide strong kinesthetic feedback (> 100 N) in a room-scale VR arena. The column panels are interchangeable, and its movements can safely reach any location in the VR arena thanks to XY displacements and trajectory generation avoiding collisions with the user. When CoVR is static, it can resist body-scaled users' actions, such as (A) users pushing on a static tangible rigid wall with a high force or (B) leaning on it. When CoVR is dynamic, it can act on users: (C) CoVR can pull the users to provide large force-feedback or even (D) transport them.
ABSTRACT
We present CoVR, a novel robotic interface providing strong kinesthetic feedback (100 N) in a room-scale VR arena. It consists of a physical column mounted on a 2D Cartesian ceiling robot (XY displacements) with the capacity of (1) resisting body-scaled users' actions such as pushing or leaning; (2) acting on the users by pulling or transporting them; and (3) carrying multiple, potentially heavy objects (up to 80 kg) that users can freely manipulate or make interact with each other. We describe its implementation and define a trajectory generation algorithm based on a novel user intention model to support non-deterministic scenarios, where the users are free to interact with any virtual object of interest regardless of the scenario's progress. A technical evaluation and a user study demonstrate the feasibility and usability of CoVR, as well as the relevance of whole-body interactions involving strong forces, such as being pulled or transported.
Author Keywords
Virtual Reality; Haptics; Kinesthetic Feedback; Actuated Device; Tangible User Interface; Encountered-type Haptic Devices; Robotic Graphics
INTRODUCTION
While visual and auditory displays in Virtual Reality (VR) have reached a level where the produced stimuli are quite convincing, haptic technology is still poor compared to the rich ways humans can interact with their environment. Multiple directions have been envisioned to enhance the users' haptic experiences in VR, through hand-held controllers or wearables simulating the environment [22, 11], or through the direct manipulation of passive props [37, 14].

In these regards, McNeely introduced Robotic Graphics, and more specifically Robotic Shape Displays (RSD) [45], in 1993 as a concept for providing force-feedback in VR. It explores the use of a robotic interface in the VR arena to provide haptic experiences while the user remains unencumbered (no wearables, no controllers) [67]. The interface aims to stay out of reach when no interaction is required, and to displace passive props to dynamically overlay virtual ones otherwise. Multiple expanding fields such as Gaming, Training or Simulation could benefit from this concept. Several approaches have already been conducted to instantiate it, from human actuators [19] to drones [5]. However, they often suffer from several technical trade-offs (e.g. cost, real workspace, embedded mass, speed, accuracy) that grow into interaction challenges (e.g. locomotion, whole-body interaction, force-feedback, free manipulation of multiple props).

In this paper, we propose CoVR, a novel Robotic Shape Display providing whole-body interactions and strong force feedback in room-scale arenas. (CoVR, pronounced "Cover", stands for a Column in VR which physically covers for virtual objects.) We detail our approach from three complementary perspectives.

From a mechanical perspective, CoVR is a 2D Cartesian ceiling robot carrying a column. With only two degrees of freedom (XY displacements), CoVR is a large-scale grounded robot which exhibits notable mechanical capabilities: high lateral and vertical force feedback and perceived stiffness (over 100 N) and load capabilities (over 80 kg). It can hence transport a variety of potentially heavy props and objects to stand in for their virtual counterparts. We demonstrate that our implementation is fast (over 1.0 m/s) and accurate (under 2 cm in a 30 m² arena).

From a software perspective, while it is easy to control Cartesian robots as they only rely on XY displacements, defining the robot target positions can be challenging. The system has to predict where the next interaction will occur, while avoiding unexpected collisions with the user. We thus elaborate a user intention model to make CoVR always available prior to the users' interactions, even in non-deterministic scenarios, i.e. where users are free to interact with any virtual object of interest regardless of the scenario's progress. Given the position and orientation of the user as well as those of the objects of interest, the model estimates the best positions for the robot and its optimal trajectories to reach an object of interest while respecting safety constraints (e.g. safe zones around the user). A technical evaluation demonstrates that (1) user intentions can be captured using available data from a single HMD with no additional apparatus in a room-scale VR arena and that (2) our system successfully reaches targets prior to interactions in most non-deterministic scenarios with randomly distributed targets and distractors.

From the interaction perspective, CoVR offers (1) body-scaled interactions such as leaning or pushing involving strong forces with a static column (Figure 1 - A, B), or (2) dynamic interactions such as being pulled by large traction forces (Figure 1 - C). CoVR can displace over 80 kg of embedded mass, which enables (3) interactions with potentially heavy objects that users can freely manipulate, but also (4) transport of the users themselves (Figure 1 - D). We report on a user study demonstrating (1) the robustness of both CoVR's mechanical and software implementations, (2) the benefits of a robotic interface providing body-scale interactions and strong forces, and (3) in particular "being transported", which was the favourite interaction.

RELATED WORK
Approaches to provide physical interactions in virtual reality either simulate physical objects or exploit the ones available from the environment. CoVR is part of a hybrid approach, which uses robotic devices to displace or render physical objects.
Simulating physical objects
Several devices have been proposed to simulate physical objects in VR, with a trade-off between the quality of the haptic rendering, especially kinesthetic feedback, and the workspace size [53]. These devices are usually limited to desktop usage [27, 23] but can render high-quality kinesthetic feedback (over 30 N) [53]. Most of these solutions provide stimulation at the scale of the hand without regard to whole-body interactions. To widen the workspace, alternatives attach the device to mobile platforms [41, 24, 52, 48, 25, 40, 26], which remain slow and require to be held continuously whilst moving (which is opposed to Krueger's postulate to develop unencumbered artificial realities [67]). In contrast, several low-cost haptic devices have been proposed in HCI. Typically, wearable or hand-held devices [4, 1, 44, 8, 11, 68, 21, 61, 31, 57, 42, 22, 6, 20] are naturally compatible with large environments. They can simulate various haptic features (weight, stiffness, shape, texture) on different body parts. However, they provide limited kinesthetic feedback and need to be held continuously.
Exploiting physical objects
The second approach directly exploits physical objects placed in the VR arena, following Insko's postulate that passive props enhance virtual environments [37]. For instance, one solution is, for each virtual object, to annex a physical object with similar properties in the VR arena [32]. Another solution is to use human actuators [19] who execute a subtle choreography to move the physical objects to the right place at the right time. The choreography is rather costly and time-consuming. Some solutions aim to reduce the number of human actuators needed to move physical objects by using other users [17, 16] or even the users themselves [15], at the cost of reducing the number of interactive features.
Robotic Shape Displays
The concept of Robotic Shape Displays (RSD) [45] focuses on the mobile, unencumbered and untethered aspect of the human [67]. It consists of using a robot to overlay virtual objects with physical props. We distinguish the robotic system (hardware) from their trajectory generation algorithms (software).
Robotic system. Recently, several classes of prototypes have aimed to instantiate the original Robotic Shape Display concept by displacing objects [58, 5, 30, 38, 36] or simulating them [56] to meet the users without impairing their movements. Drones can transport objects that users can explore [70, 39, 33] or manipulate [5] in a theoretically unlimited workspace (in practice, the workspace is limited to 2.5 m because of technical constraints). This approach only provides a small amount of force feedback, as state-of-the-art drones cannot resist human actions. Moreover, this technology has several technical limitations, including speed, accuracy, autonomy and safety. The accuracy and reliability of these systems also decrease with the embedded mass of the props, which reduces the range of available scenarios with high kinesthetic feedback. Lightweight mobile robots [30, 58, 66] or lightweight robotic arms [7, 65, 73, 45, 69, 55, 35] provide a medium amount of force feedback, but lack efficient displacements. Mobile robots are limited in speed (< 0.5 m/s) and autonomy, while robotic arms are limited to desktop usage. Only a few prototypes provide a high amount of force feedback. TilePop [62] or LiftTiles [59] modify the floor topology with a large inflatable mat covering the surface of the VR arena. This provides high vertical force feedback and hence supports whole-body interactions below 1 m (e.g. a user sitting [74]), despite slow inflation (5 s) and deflation (20 s) times. Relying on a robust robotic arm such as a Kuka [38, 46] can produce a high blocking force and hence provide large kinesthetic feedback at a body scale, in spite of its cost. This robustness obviously goes along with strong software safety considerations around the user. These prototypes do not easily scale to VR arenas and do not let users freely manipulate a wide range of props.

Trajectory Generation. The quality of the interaction does not only depend on the hardware implementation (e.g. speed) but also on the algorithm to generate trajectories, especially in non-deterministic scenarios. When the device does not know in advance which virtual object to physically overlay, it is necessary to (1) build a user intention model (e.g. [12]) to estimate what the next object to overlay will be and (2) use a path planning algorithm (e.g. [18]) to displace the robot to the target location without colliding with obstacles. Only a few of the above systems [38, 72, 73] rely on these components to support non-deterministic scenarios. However, they require multiple robots [58, 65] or specific devices (e.g. an eye-tracker [12]). In contrast, we elaborate a low-computational intention model working with common HMDs.

In summary, Robotic Shape Display systems still face multiple interaction challenges (e.g. whole-body interaction, locomotion, force feedback, free manipulation of multiple props) and technical challenges (speed, accuracy, safety, price). CoVR addresses many of these challenges and focuses on providing large force-feedback (100 N). CoVR supports whole-body interactions while letting users remain physically unencumbered. It is designed for Gaming or Training purposes in large virtual arenas. CoVR can carry potentially heavy physical props to match the virtual objects the user is about to interact with, without sacrificing speed, accuracy, safety or price.

DESIGN AND IMPLEMENTATION OF CoVR
When designing our Robotic Shape Display, CoVR, we primarily focused on force feedback and workspace size as design considerations (a challenging trade-off for haptic devices [53]), as well as on speed, interaction opportunities, price and safety. We initially considered mobile robots such as [29]. However, these interfaces are limited by a compromise between force-feedback, speed and autonomy. More particularly, we decided to focus on enabling strong force feedback at a whole-body scale and allowing different body postures. We then deliberated upon a grounded solution, a 2D Cartesian ceiling-mounted robot, which can be integrated into a room-scale arena (4x4x2.5 m; LxWxH). The robot can be mounted on the ceiling or on an external truss structure (triangle aluminum Global Truss), as shown in Figure 2. The advantages are speed, accuracy and force-feedback, while allowing to move potentially heavy physical objects without the embedded mass affecting its displacements. (Non-deterministic scenarios are scenarios where multiple virtual objects are available at the same time and the user is free to interact with any of them; the system does not know in advance which one to physically overlay.)

Another important consideration was the number of degrees of freedom (DoF) of the robot (X-Y planar motion, Z elevation, W rotation around Z, 6-DoF robotic arm...). When dealing with robotic interfaces, a trade-off between price, complexity and interaction possibilities can be drawn. We realised that 2D planar motion carrying a modular structure already allows quite a large variety of scenarios while keeping a low technical complexity and cost. However, our chosen architecture can be extended with additional DoFs (for instance, by attaching a 6-DoF Kuka robotic arm; see section Discussion).

In the following sections, we describe the main components of the final version of our system. We used an iterative process to design CoVR: at each iteration, we improved key features such as robustness, accuracy, speed and safety, while widening the interaction possibilities. We then evaluate CoVR in a technical evaluation validating its control through a user-intention based algorithm.

Robotic system
Robot. CoVR relies on a 2D Cartesian ceiling robot (Figure 2), actuated with DC motors (Dunkermotoren 55x30 with a KPL43 gearbox for the X-axis, Dunkermotoren 63x55 with a KPL57 gearbox for the Y-axis) through a pulley-belt mechanism (Figure 2). We chose a pulley-belt mechanism because it is simple to implement and can easily be scaled to larger VR arenas. The robot moves a 15 cm carriage on which a modular structure is attached (see below). The robot is controlled in speed with a Roboclaw 2x30A V5E motor controller and AEAT-601B-F06 encoders, mounted on a custom-designed 3D-printed support. The Roboclaw controller is connected to an Arduino MEGA 2560 micro-controller. It provides closed-loop control with a PIV scheme (a proportional position loop cascaded with an integral and proportional velocity loop). The total price of the robot (motors, rails and pulley-belt) is under 1500 euros.
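As a concrete illustration of the PIV scheme mentioned above (an outer proportional position loop whose output feeds an inner proportional-integral velocity loop), the following minimal per-axis sketch shows how such a cascaded controller can be computed. It is a hedged stand-in, not the Roboclaw's actual firmware: the gain names and values (kp_pos, kp_vel, ki_vel) are assumptions for illustration.

```python
# Minimal per-axis PIV controller sketch (hypothetical gains, not the authors' firmware):
# an outer proportional position loop outputs a velocity setpoint, which is tracked
# by an inner proportional-integral velocity loop producing the motor command.
class PIVAxis:
    def __init__(self, kp_pos=2.0, kp_vel=1.0, ki_vel=0.5, max_speed=1.0):
        self.kp_pos = kp_pos          # position loop gain [1/s]
        self.kp_vel = kp_vel          # velocity loop proportional gain
        self.ki_vel = ki_vel          # velocity loop integral gain
        self.max_speed = max_speed    # speed limit [m/s]
        self.vel_integral = 0.0

    def update(self, target_pos, measured_pos, measured_vel, dt):
        # Outer loop: position error -> desired velocity, clamped to the speed limit.
        vel_setpoint = self.kp_pos * (target_pos - measured_pos)
        vel_setpoint = max(-self.max_speed, min(self.max_speed, vel_setpoint))
        # Inner loop: velocity error -> motor command (e.g. PWM duty cycle).
        vel_error = vel_setpoint - measured_vel
        self.vel_integral += vel_error * dt
        return self.kp_vel * vel_error + self.ki_vel * self.vel_integral
```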
Speed. The speed of the robot depends on the distance to travel. For large distances (> 1 m), the speed is over 1.0 m/s, which is approximately a normal human walking speed. For small distances (< 80 cm), the speed is about 0.5 m/s, which remains faster than current mobile solutions (e.g. [58, 30]).

Figure 2. Top isometric view of the CoVR setup: (A) Structure; (B) Skeleton, a modular column-like structure to attach props and panels; (C) CoVR panel; (D) Carriage; (E) X-axis rail; (F) 1: X pulley-belt system and motor, 2: Y pulley-belt system and motor, 3: electronics (Arduino and RoboClaw); (G) Y-direction rail.

Noise. At full speed, CoVR's average noise level is 55 dB (max: 65 dB).
Weight and Force Capabilities.
The carriage can support a total weight of 800 N vertically (≈ 80 kg) and 1000 N horizontally. The embedded mass the carriage can support is large enough for a human to lie on it or even to push on it (Figure 1-A, B) without causing any damage to the structure. The system can also provide a high traction force to pull the user (Figure 1-C) or even transport her (Figure 1-D).

Column
A column-like modular structure (Figure 3-A) is attached to the moving carriage. Different surfaces in arbitrary positions, shapes, orientations and sizes can be attached using a simple clamping mechanism. This is similar to stage design in real theatres [49], where a limited number of decors can quickly be replaced. Another advantage is to easily support DIY: the stage designer can use cardboard or props with different masses, textures and shapes that users can freely manipulate at different heights of the column. The positions and shapes of the physical objects are then communicated to the VR designer in a calibration phase. In summary, the column has been designed to be flexible enough to support a wide range of interactions. Figure 3-B and -C show two examples of implemented columns. The section Interactions and Demo Applications details interactions with these columns.
Figure 3. Column design. (A) Modular structure attached to the 2D ceiling robot to provide a wide variety of surfaces and props. (B) The 3-side column used in the user study, with a chair (left), a cylinder attached to a spring virtually representing a broom (front), and a large cardboard simulating a wall and a piece of fabric representing a ghost (right). (C) A 4-side column implemented with a lever attached to the structure with an elastic (left), a haptic code made of cardboard and glue (front), and a tray with a large and a small cube (back) to insert into the locker (right).
Display and Tracking
We use the Oculus Rift S [2] HMD because it is not sensitive to occlusion problems and allows interactions under or even in the column (Figure 8). This feature made the W rotation partially redundant. We used Unity3D to create the virtual scenes. It centralises the communication and synchronisation between the different components through plugins (SteamVR, Arduino/Roboclaw). In particular, the SteamVR plugin asset [3] is used for the Oculus communication and the Uduino package [63] for fast prototyping between Arduino and Unity.

Safety
As users are invited to move around an active large-scale mechanical system, safety measures had to be established. These were planned on several levels, from the structure conception to the motions around the users during interactions.
Carriage:
One risk is the fall of the carriage. The carriage can support both larger axial (800 N) and radial (1000 N) forces than those required for the envisioned scenarios. A safety coefficient of 2.5 was introduced for the elastic deformation calculations in the conception process.
Column motions:
Hardware, software and electronic emergency stops are implemented. The carriage motion is restricted on both ends with spring-based mechanical stops. The software stops the motors when the column is within 2 cm of these limits. The controller electronically shuts down when the motor's current exceeds 5 A. More importantly, the column immediately stops if the user is not tracked for more than 0.5 s. Finally, the game master has a manual emergency stop button that turns the system off, keeping it electrically grounded to avoid potential shocks. Given the power and speed of the robot, it is also important to ensure the column will not accidentally collide with the user. We thus developed an algorithm to generate the robot trajectories.

Robot Motion Control
We present a model to control the robot displacements. While trajectories are easily generated by the Cartesian structure (XY displacements), the algorithm inputs for scenarios involving multiple objects of interest need to be defined, and safety measures around the user need to be implemented. The main idea is to attach the robot to a virtual proxy (a ball with mass and gravity) with a spring-damper model. The ball's displacements depend on (1) the user's location, to avoid collisions, (2) the user's intentions and (3) the progress of the scenario, in order to attract the ball towards the objects users are most likely to interact with next. A key contribution regarding our trajectory generation model is the elaboration of a low-computational user intention model working with common HMDs. We now detail our approach to generate trajectories.

Trajectory generation.
Each virtual object of interest i within the scene gets a weight W_i which depends on its likeliness to be interacted with next. The virtual proxy (ball) and CoVR command position CoVR(x, y) is hence a weighted average of the positions of each object of interest:

CoVR(x, y) = ( Σ_{i=1}^{N} W_i · (x_i, y_i) ) / ( Σ_{i=1}^{N} W_i )    (1)

where N is the number of virtual objects of interest (VOI) in the scene, (x_i, y_i) the Cartesian coordinates of the VOI i, and W_i its weight, estimated with a user intention model (see below). A virtual spring between the proxy and the command position is then defined, and the corresponding spring force is applied for the proxy to reach this position. We use Unity3D's physics engine to automatically generate the proxy trajectories to reach a target. The target position is not necessarily the position of a virtual object. If the scene contains two objects of the same interest, CoVR automatically places itself between these two objects' positions; hence, the displacement when one becomes the chosen object of interest is minimised and CoVR is more likely to reach it prior to the user interaction. As the proxy is also attached to CoVR, its resulting motion naturally takes into account the robot's speed limitations.
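To make Equation (1) concrete, the short sketch below computes the weighted-average command position from the VOI weights and derives a spring force pulling the proxy towards it. It is a simplified, stand-alone illustration in Python rather than the authors' Unity implementation; the spring constant k and the data structures are assumptions.

```python
# Sketch of Equation (1): weighted-average command position and the spring force
# applied to the proxy (simplified 2D stand-in for the Unity physics setup;
# the spring constant k is a hypothetical value).
def command_position(vois):
    """vois: list of (weight, (x, y)) tuples, one per virtual object of interest."""
    total_w = sum(w for w, _ in vois)
    if total_w == 0:
        return None  # no object of interest is currently attracting the proxy
    cx = sum(w * x for w, (x, y) in vois) / total_w
    cy = sum(w * y for w, (x, y) in vois) / total_w
    return (cx, cy)

def spring_force(proxy_pos, target_pos, k=50.0):
    """Hooke-like force pulling the proxy towards the command position."""
    return (k * (target_pos[0] - proxy_pos[0]),
            k * (target_pos[1] - proxy_pos[1]))

# Example: two VOIs of equal interest place the command position halfway between them.
vois = [(0.5, (0.0, 0.0)), (0.5, (2.0, 2.0))]
print(command_position(vois))  # -> (1.0, 1.0)
```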
Figure 4. Control algorithm relying on a physical model: (a) The virtual proxy of the physical CoVR column is connected to all virtual objects of interest (VOIs) with weights depending on the users' intentions to interact with them. The user and other forbidden zones are covered by a rigid cone-like obstacle to be repulsive. (b) Whenever the user is about to interact with a VOI, the proxy/CoVR moves towards it, while naturally avoiding obstacles (e.g. the user).
We also created virtual obstacles to cover all forbidden areas in the arena (user, external people, furniture). We designed a cone-like rigid shape attached to the user's position, as shown in Figure 4. The size of the cone (diameter = 90 cm) was chosen to avoid collisions even if the user's arms are open. Thanks to the contact mechanics and gravity in Unity, the proxy is naturally pushed away and rolls around the obstacle. The obstacle's curvature ensures a smooth deceleration of the proxy when the latter gets too close to it. The radius of the obstacle decreases (to 20 cm) when the user comes near an object of interest, so that the proxy is not pushed away.
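The paragraph above relies on Unity's physics engine to keep the proxy out of the user's safety zone. As a rough, engine-free illustration of the same idea, the sketch below clamps the proxy outside a circular safety zone whose radius shrinks when the user is close to a VOI; the radii follow the values quoted above, while the function names and the simple projection are assumptions, not the authors' collision handling.

```python
import math

# Rough stand-in for the physics-engine behaviour: keep the proxy outside a circular
# safety zone centred on the user. The zone shrinks near a VOI so that the column can
# still approach for the interaction (radii follow the values quoted in the text).
def safety_radius(user_pos, voi_pos, near_threshold=1.0, far_radius=0.45, near_radius=0.20):
    dist_to_voi = math.dist(user_pos, voi_pos)
    return near_radius if dist_to_voi < near_threshold else far_radius

def push_proxy_out(proxy_pos, user_pos, radius):
    """If the proxy is inside the safety zone, project it back onto its boundary."""
    dx, dy = proxy_pos[0] - user_pos[0], proxy_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= radius or dist == 0.0:
        return proxy_pos
    scale = radius / dist
    return (user_pos[0] + dx * scale, user_pos[1] + dy * scale)
```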
User-intention model
We elaborated a user intention model to support non-deterministic scenarios, i.e. scenarios where the system does not know beforehand which object to physically overlay. The model inputs are the positions of the virtual objects of interest (VOI) as well as the available data from the user's apparatus: the HMD's position and orientation. It hence does not require additional hardware such as an eye-tracker or a finger/hand tracker. We defined the total weight W_i of the VOI i as a function of the user's distance d to the VOI and her orientation θ towards it:

W_i(d, θ) = ω · D(d) + (1 − ω) · O(θ)    (2)

W_i(d, θ) = ω · 1/(1 + d) + (1 − ω) · e^(cos(θ) − 1)    (3)

where ω is the contribution of the distance over the orientation. D(d) and O(θ) range between 0 and 1, hence W_i ranges from 0 to 1 too. O(θ) is equal to 1 whenever the user's HMD orientation is colliding with any surface point of the VOI's mesh, and decreases exponentially as the user's orientation moves further away. On the same principle, D(d) is equal to 1 whenever the user is close to a target, and decreases with distance. We also increase the stability of the column in the vicinity of the VOI: when W_i exceeds a threshold, it is rounded to 1, typically when the object is at less than 20 degrees from the user's HMD direction or at a distance below 20 cm from the user. This allows CoVR to stay at the closest VOI as long as the user remains in its vicinity.
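A minimal sketch of Equations (2)-(3) follows, assuming the reconstructed forms D(d) = 1/(1 + d) and O(θ) = e^(cos θ − 1) and the rounding rule described above. The default ω = 0.175 is the value fitted in the Technical Evaluation below; the function name and thresholds are otherwise illustrative, not the authors' exact implementation.

```python
import math

# Sketch of the user intention model (Equations 2-3), including the stability
# rounding described in the text. Thresholds follow the values quoted above.
def voi_weight(distance_m, angle_rad, omega=0.175):
    D = 1.0 / (1.0 + distance_m)             # 1 when the user is at the VOI, decreasing with distance
    O = math.exp(math.cos(angle_rad) - 1.0)  # 1 when the HMD points at the VOI, decaying otherwise
    w = omega * D + (1.0 - omega) * O
    # Stability in the vicinity of the VOI: snap the weight to 1 when the user is
    # within 20 degrees of the VOI's direction or closer than 20 cm to it.
    if angle_rad < math.radians(20) or distance_m < 0.20:
        w = 1.0
    return w
```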
Scenario-based model

Depending on the progress of their scenario, designers can estimate the prior probability of an object being interacted with: in a basketball game for instance, the user is more likely to interact with the ball first than with the hoop. We let designers define their own scenario-based model by refining the estimation of W_i:

W_i = P_i × W_i(d, θ)    (4)

where P_i is the prior probability of the VOI i being interacted with, given the progress of the scenario. We discuss the use of these probabilities in the Discussion section of our Technical Evaluation below.
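Applying the scenario prior of Equation (4) is a single multiplication on top of the intention weight; the sketch below shows it using the hypothetical voi_weight helper from the previous snippet.

```python
# Equation (4): combine a designer-provided prior with the intention weight
# (voi_weight is the hypothetical helper sketched above).
def scenario_weight(prior_probability, distance_m, angle_rad, omega=0.175):
    return prior_probability * voi_weight(distance_m, angle_rad, omega)
```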
TECHNICAL EVALUATION

The primary aim of this technical evaluation is to determine the ω parameter of the user intention model, i.e. the optimal contribution of the distance over the orientation to estimate which object of interest is more likely to be interacted with. We are also interested in studying CoVR's success rate as a function of the number of objects of interest (distractors) within the scene. Indeed, we anticipated that the performance of the user intention model and the value of ω depend on the number of distractors within the scene. Finally, we want to confirm that CoVR's speed is sufficient to reach a virtual object of interest even when the user does not have a decision to make (number of VOIs = 1). We first perform a data collection over a panel of users to better understand how intentions can be quantified as a function of both distance and orientation. We then perform multiple physical simulations to find the ω parameter that best matches users' behaviors.

Data Collection
Participants and Apparatus.
Experimental Design
Task and Stimuli.
We considered an exploratory task, such as the ones users would perform in games, i.e. users take their time, observe the decors, avoid virtual obstacles and face their objects of interest whenever interacting. To replicate these game features and to capture the corresponding users' behaviors, we created an empty scene where virtual numbered balls appear simultaneously at random locations with random orientations (see Figure 5). Instructions are written on the walls surrounding the users, telling them to touch a given numbered virtual target. Users are then asked to face the targets whenever touching them.

Conditions.
In this experiment, we control the number of distractors within the scene, from 0 to 4 (the number of balls thus ranges from 1 to 5). This allows us to understand the performance of CoVR as a function of the number of available VOIs. The minimum distance between two targets is their diameter (10 cm), i.e. they cannot overlap, and they cannot appear at the user's location. As long as the user does not touch the target ball, nothing changes in the scene. As soon as the target ball is interacted with, another condition starts.
Design.
We used a within-subjects design. All participants tested all five conditions (0, 1, 2, 3, 4 distractors). The order of appearance of each condition was randomized within the blocks. Participants performed 10 blocks. The duration of the experiment was about 12 minutes per participant (std = 2.6). In summary, the experimental design is: 6 participants × 10 blocks × 5 conditions = 300 trials. For each trial, we measure the user's position and orientation at each frame, with a frame rate of 75 fps.
Figure 5. Technical Evaluation "Simulation": virtual scene example after the Data Collection. (A) The user looks for the target (according to the walls' instructions); weights change according to her position and orientation. (B) Intention Detection: the user chooses a target and its weight goes to 1. (C) Trajectory: the proxy (blue ball) moves with the centroid of all the objects of interest's weights towards the chosen one (weight = 1), while avoiding the user obstacle. When the proxy reaches the chosen ball, the user obstacle size decreases.
Parameter Fitting
We used the data collection to replicate the users' displacements in a Simulation virtual scene. The robotic system physically moved according to our user intention model (see above). Each simulation corresponded to a different ω. We simulated all the data from the 6 participants (i.e. including the 5 conditions). We first performed a broad exploration of ω (step = 0.25) and then refined it to find the optimal value for each condition (number of distractors in the scene). We tested 13 parameters over 6 users, which resulted in more than 17 hours of simulation. Our main measurement was the success rate of CoVR reaching a VOI before the user, i.e. whether CoVR's distance to the target was below its diameter (10 cm) when the user touched it.
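The parameter fitting described above amounts to replaying the recorded trajectories for a grid of ω values and scoring each replay by the success rate. A schematic and heavily simplified version of that sweep is sketched below; the replay_simulation callable stands in for the physical simulation and is an assumption, not something provided by the authors.

```python
# Schematic version of the omega sweep used for parameter fitting: replay each
# recorded trial for every candidate omega and keep the value with the best
# success rate. The physical simulation is injected as a callable.
def fit_omega(trials, candidate_omegas, replay_simulation, target_diameter=0.10):
    """replay_simulation(trial, omega) -> column-to-target distance at touch time [m]."""
    best_omega, best_rate = None, -1.0
    for omega in candidate_omegas:
        hits = sum(replay_simulation(t, omega) < target_diameter for t in trials)
        rate = hits / len(trials)
        if rate > best_rate:
            best_omega, best_rate = omega, rate
    return best_omega, best_rate
```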
Results

Success Rate. Figure 6 shows the success rate as a function of ω and the number of distractors. The success rate is approximately 100% (only 1/300 targets missed) when there is only one VOI in the scene, indicating that the system is at least as fast as the participant when the target position is known (i.e. when the system does not rely on the user's intention). The results also confirmed that the success rate decreases with the number of distractors. Figure 6 also shows that we obtain the best average success rate (80%) with ω = 0.175 (CI = 14%), regardless of the number of distractors. The success rate remains above 80% with up to 2 distractors. We also note that the success rate per user depended on their pace: 88% success for a participant whose experiment lasted 14 minutes vs 74% for one whose experiment lasted 8 minutes.

Target distance. We measured the average distance between the carriage and the target centres when the user was colliding with the virtual target. The average distance among all the trials is 1.8 cm (95% CI = 0.33 cm), demonstrating the repeatability of our implementation.
Detection time. We also measured the time difference between the target's weight reaching 1 and the user colliding with it. Results show that this detection time does not depend on the number of distractors, with a 7 s average (std = 0.6 s) and a 96% accuracy. We note that when the detection time is below 4 s, the overlay fails, as CoVR struggles to get around the user (especially the obstacle) and place itself properly.
Figure 6. Success rate of CoVR reaching the chosen VOI prior to the user interaction, as a function of ω and the number of distractors. Error bars indicate 95% confidence intervals with a t-distribution. The table shows the success rate with the optimal parameter, ω = 0.175, as a function of the number of distractors.

Number of user collisions. No collision between the user and CoVR was noted during the simulations.
Accuracy. Finally, we measured the distance between the virtual proxy and the physical column. The mean distance over all users and conditions is 0.94 cm (95% CI = 0.99 cm), which ensures that they share the same trajectory, and hence a safe user environment around CoVR.
Table 1. Accuracy, measured by the distance between CoVR and the proxy, with ω = 0.175.

Discussion
This evaluation tested CoVR in an uncontrolled environment, with random locations and orientations for each target and distractor, and a user-intention based model. Despite this environment, our system had a high success rate (80% on average with the optimal ω = 0.175).

Adding a scenario-based model. According to Equation 4, we can add "prior probabilities" to the different VOIs, depending on the progress of the scenario. After selecting our optimal parameter ω = 0.175, we added a 75% prior probability of being interacted with on the target; the resulting success rates and distances to target are reported in Table 2.

Table 2. Success Rate and Distance to Target with ω = 0.175 and a 75% probability to be interacted with added on the target, per number of distractors (0 to 4).
Success Rate in % (95% CI, T-Distribution): 98.3 (1.8), 95.0 (2.4), 93.3 (4.4), 96.7 (3.5), 93.3 (3.5)
Average Distance to Target in cm (STD): 0.4 (1.4), 0.7 (1.7), 0.6 (1.6), 1.0 (1.8), 1.2 (2.2)

Assigning multiple VOIs to the same physical position.
Thanks to its size and shape, CoVR can contain multiple objects on different panels and at different heights. We can take advantage of this feature to assign multiple virtual objects of interest to the same physical location, hence reducing the amount of CoVR displacements and the risk of spatial mismatches.
Adding visual effects. When a spatial mismatch is likely to occur, the literature usually proposes to cater for it with visual effects [19] or dynamic redirection techniques [50, 9]. These respectively distract the users to give spare time to the robot to reach the target location, or dynamically correct the user's and CoVR's interaction positions.

INTERACTING WITH CoVR
The technical evaluation showed that CoVR is able to move at a sufficient speed to follow users walking at a natural pace. When no interaction is required, CoVR remains out of reach and the users can wander throughout the whole arena. CoVR thus does not interfere with users' natural behavior. Letting the users truly walk (instead of using a locomotion metaphor) reinforces the immersion [64]. When interactions are required, a key aspect of CoVR is to allow interactions involving strong kinesthetic feedback at a body scale. We distinguish two main uses of CoVR: static use, where users transmit forces when interacting with the column (e.g. exploration, manipulation), and dynamic use, where users receive forces enabled by CoVR's displacements during the interaction (e.g. leading through forces, transport). We now detail these two uses of CoVR.
Static Use of CoVR
Hand exploration:
Hands remain the primary body part for exploring the world and the most sensitive one. Users can probe objects directly with their bare hands. As such, interactions are not limited to one finger: surfaces can be realistically touched and their texture fully felt with the whole hand. In particular, users can interact with the palm, which contributes to a sense of tangible presence, as it enables kinesthesia on top of tactile cues [47]. Moreover, the explored surface can be large and is not limited to a specific orientation or shape. For instance, users can perform large hand movements to find a specific tactile pattern on a wall (Figure 7-A).
Figure 7. (A) Tactile Exploration: the user tactilely explores large surfaces, for instance to find a hidden code over a human-sized wall. (B) Directed Manipulation: the user pulls a lever which is attached to CoVR with an elastic, leaving it a single degree of freedom and providing a mechanical manipulation of props.
Whole-body interactions:
Users can apply strong forces with any part of their body: users can lean on a fixed wall (Figure 1-B), push hard on it with their hands or shoulders (Figure 10) or even kick it. CoVR is rigid and robust enough to remain still during all of these interactions.
Figure 8. Postures: the user (1) goes through an obstacle with constraints below and above her or (2) crouches.
Postures:
CoVR also supports a variety of user postures, with interactions at different heights, such as crouching under a table, going through obstacles with physical props both below and above the users (Figure 8), sitting on a chair [74, 60] or climbing a stair to reach a high target (Figure 10).
Manipulation:
Manipulation of real objects and passive props improves interaction fidelity [51, 37]. CoVR enables different types of object manipulation:

• Free manipulation. CoVR can carry untethered objects which users can grab and freely manipulate. A large variety of samples (Figure 9-B) of any texture is possible, as long as dimensions and weights are compatible. Thanks to CoVR's grounding and high motor torques, it can carry large masses without compromising its speed or accuracy.

• Contact. Objects can also be manipulated to interact with each other. For instance, in Figure 9-C, the big cube does not physically fit in the locker. The user hence needs to find a smaller one.

• Directed manipulation. Users can interact with objects tethered to CoVR. Its structure allows for mechanical manipulation of objects and for users to actuate them. For instance, in Figure 7-B, the user actuates a lever mounted on the column, simulating a slot machine. By attaching objects to CoVR's skeleton, mechanical manipulation with multiple degrees of freedom is possible.

A single physical object can overlay multiple virtual ones of similar primitives [32]. Instead of using visual effects such as [10], CoVR physically moves a single prop to overlay multiple virtual ones. For instance, one physical door can overlay three virtual ones (Figure 9-A). These mappings have previously been seen in the literature [30, 29].
Figure 9. (A) Directed manipulation: the user opens three virtual doors but only a single physical one, cut through a cardboard panel. (B) Free manipulation: the user finds a teddy bear. (C) Free manipulation and Contact: the user manipulates a cube which is too big to fit in the locker; she realises she needs to find a smaller cube.
Dynamic Use of CoVR
In the previous section, CoVR was motionless during the interaction (static). The following interactions require the system to move in the user's vicinity (dynamic).

Receiving Physical Contact:
CoVR can physically touch the users and produce impact force feedback [66]. It is thus initiating the haptic interaction, instead of the user. As receiving an interaction might be surprising in VR, we recommend attaching props at a distance from CoVR's main skeleton, to produce light impact forces. For instance, a piece of fabric (60 cm away from the main skeleton) can lightly brush the users to simulate a ghost crossing through them (Figure 12). Users can also be touched by a virtual agent trying to catch their attention, providing a sense of physical presence [43, 34].
Leading through forces:
Users can be led by CoVR through body-scaled tension and traction forces. For instance, in Figure 1-C, the user physically holds a cylinder attached by a spring to the column and virtually represented by a broom, which provides her with large force-feedback and leads her through the virtual environment. She is pulled by the broom when CoVR moves. Another example, inspired by [17], involves a fishing pole where the line is attached to the column: the motion of the column creates the illusion of a fish biting.
Transport:
Finally, CoVR's mechanical properties open up a new range of interactions in VR. Indeed, CoVR can transport the users. For instance, it can move a chair with a sitting user to a different location (Figure 11), as CoVR can handle large embedded masses. We envision other scenarios transporting the user, such as wind-surfing or water-skiing [71].
DEMO APPLICATIONS
We created a two-scene demo application to demonstrate the interaction possibilities offered by CoVR. It relied on the 3-side column illustrated in Figure 3-B and involved 5 interactions and 7 virtual objects, but only 5 props. In the following subsections, user interactions are displayed in bold, while CoVR's motions and interactions are displayed in italics.

Escaping the Room
We created a first scene where the users need to escape a room.
Reaching for the Light
First, Bob is in a dark room where the only thing visible is a light bulb, at a 2.5 m height, in a small cupboard. Bob hence climbs into the cupboard to touch the bulb, which then turns on the lights. In the physical world, he hence goes into CoVR, which remains still, and touches the top of CoVR's skeleton.
Figure 10. "Escaping the Room".
Reaching for the Light : User climbs inthe cupboard to reach the virtual light bulb;
The Magic Wall : (A) Userchooses a wall. CoVR moves accordingly with the user’s intentions. Userpushes on the wall.
We note that none of the users touched the "ghost" byaccident during the experiment. (B) The wall remains static, and changescolor to encourage the user to maintain contact. (C) After 10seconds ofmaintained contact, the Magic Wall moves, giving the user the impres-sion of having pushed it herself.
The Magic Wall
Bob then sees a carpet with the word "Start". Once he reaches it, three walls appear. A sign informs him that he needs to push them. Bob chooses a wall, but can change his mind and pick another one if he wants. He then has to maintain contact and keep pushing for 10 seconds. The walls' color changes from green to red (according to the timer), to indicate to Bob that he needs to keep pushing and that maintained contact is needed. When the walls appear, CoVR uses the user-intention-based algorithm to place itself at the chosen wall. When Bob pushes a wall, CoVR remains static. Once the timer is finished, CoVR steps backwards, which gives Bob the impression of having pushed the wall himself.

Travelling in the Clouds

After pushing the walls, dust starts flying around Bob, who is then teleported into a forest.
The Magic Broom
The user now sees a magic broom. He holds it tightly and is then pulled by CoVR through the forest to the clouds. CoVR pulls the user with strong force-feedback, as the broom is actually a cylinder attached to CoVR with a string and a spring (see Figure 1-C).
Moving in the Clouds
Once the travel is over, a "Continue" panel appears. When Bob touches it, a chair appears. Bob then sits comfortably in the chair, and CoVR transports him through the clouds.
Figure 11. Moving in the Clouds: the user is sitting in a chair and physically transported through the clouds.
The Ghosts
The user, in the clouds, is now surrounded by ghosts. He then sees a halo, which he decides to go into. When he reaches it, he sees a huge ghost about to go through him. Bob remains still while CoVR initiates the interaction by brushing his head with a piece of fabric. We note that none of the users touched the "ghost" by accident during the experiment.

Figure 12. The Ghosts: (A) The user enters the halo. (B) A piece of fabric lightly brushes the user's head. (C) The ghost flies away.
USER STUDY
The goal of this study is three-fold: (1) validating the implementation of CoVR, (2) investigating how users experience (i.e. apply and receive) strong forces and (3) collecting feedback on the interactions of the demo application.
Participants.
Procedure.
Participants were informed that they were going to interact with physical props and were asked not to rush within the scene. They were asked to wear an Oculus Rift S HMD as well as OptiTrack markers on their dominant hand. They were all introduced to CoVR and saw it moving beforehand. A game master was present during all the experiments, to ensure the participants' safety and activate some of the interactions. After the experiment, participants filled in a Likert-scale questionnaire about their enjoyment of each demo interaction and then participated in a semi-structured interview. They gave approximately 20 minutes of their time.
Results
Quantitative results.
Participants ranked their global enjoyment with a 6.0/7 grade (std = 0.5).
Favourite interactions.
Users were asked to choose their two favourite interactions in terms of enjoyment among the five that were provided. 62.5% of the participants said their favourite interaction was the transport, while the remaining 37.5% preferred the magic broom (being pulled). The second favourite interactions were evenly split between pushing walls, the magic broom, transport and being gone through by ghosts.
Figure 13. Enjoyment results per interaction, ranked on a 7-point Likert scale (1 indicates "not enjoyable", 7 indicates "very enjoyable"). Error bars indicate the standard deviation of the grades across the users' panel.
Force-feedback.
All of the participants ranked the force they applied (wall) or the force applied to them (broom), compared to their maximum force, on a 7-point Likert scale (1 = pretty soft, 7 = very hard). The forces they applied to the walls were ranked with an average of 5.5 (std = 0.75, min = 5/7, max = 7/7), while the force applied to them with the magic broom was ranked with an average of 6.1 (std = 0.64). In particular, 87.5% of the participants (7/8) ranked the force applied to them when travelling with the magic broom between 6 and 7/7 (the last participant attributed a 5/7 grade).
Spatial Mismatches.
None of the participants experienced spatial mismatches, even in the non-deterministic scenario involving multiple doors.
Apprehension.
The participants ranked their fear of being around a moving platform on a Likert scale (1 = not scary at all; 7 = frightening). The average fear was 3.6/7 (std = 1.5). P2 (an expert VR user) told us that he would have liked to have noise-cancelling ear-muffs and ranked his fear with a 6/7 grade, as the noise was keeping him from being fully immersed. All of the non-expert users ranked their fear with a 2 or 3/7 grade and dove into the VR environment without apprehension.
Qualitative results.
Whole-body Interactions.
In our semi-structured interview, we discussed the users' game preferences. All of the participants told us that they prefer whole-body interactions in exploration games, where performance does not matter. They all informed us that they enjoyed our game and the interactions it provided, and were mostly surprised to be pulled by the broom or transported.
Force-feedback.
They were especially surprised by the force provided by the broom, as it was the first dynamic interaction they experienced. P5 said that she was afraid of heights in the virtual scene, so when the broom started pulling her, she felt quite stressed. P4 told us she enjoyed the use of passive props and direct manipulation [14].
Future Interaction Opportunities.
We asked participants to give us feedback on interactions they would like to experience in VR with CoVR. Two external expert users told us that they would enjoy climbing a wall. P4 mentioned a virtual escape game, where she could truly benefit from passive haptics, manipulate objects and feel force-feedback. P2 and P6 suggested war games, where they could lean on the walls to get some rest, or hide from enemies. P7 added that he would enjoy having more modalities involved; for instance, he would appreciate a sensation of wind when climbing (on a stair or elsewhere) to increase his immersion.
Discussion
We now summarize and discuss our main findings.
CoVR implementation. The experiment confirmed the robustness of CoVR, as it did not show any failure during the experiences: our robotic system applied or received strong forces from the participants without damage. Moreover, none of the participants experienced collisions or spatial mismatches while they were freely walking in the entire room-scale arena, thanks to our trajectory generation algorithm and more particularly our user-intention model.
Experiencing strong forces. The experiment also revealed the benefits of robotic interfaces, and more specifically robotic shape displays, providing strong forces. Indeed, participants spontaneously applied 5.5/7 of their maximum force when pushing on the walls. One participant reported having applied "very hard" forces (7/7). Moreover, participants perceived strong (6.1/7) tension forces when interacting with the broom and enjoyed them (second favourite interaction, with an average of 6/7). Seven participants reported having received "very strong" forces (> 6/7).
Transporting the user. The favourite interaction was "transport" (6.6/7), where a user was sitting on a chair moving in the VR arena. This interaction requires both a large arena and a robotic system able to displace heavy embedded masses, which are unique features of CoVR. In summary, this experiment revealed that whole-body interactions involving strong forces (applying forces, receiving forces or embedding heavy masses) are a promising direction for future Robotic Shape Displays.

CONCLUSION AND FUTURE WORK
We presented CoVR, a novel Robotic Shape Display for room-scale VR arenas providing whole-body interactions and strong force feedback. We also proposed a low-computational user intention model compatible with common HMDs to support non-deterministic scenarios. The technical evaluation and the user study demonstrated the feasibility of the approach, its usability and the relevance of interactions involving strong forces. While CoVR addresses several interaction and technical challenges, we see several directions for future work.
Adding multiple columns. A main limitation of our current setup is the use of a single column in the VR arena. One approach consists of mounting additional 2D Cartesian robots on the sides of the VR arena to control horizontal columns. Another one is to add a second ceiling robot (the robots can share the rails). These two approaches limit the work area of the additional columns, but appropriate control strategies can optimize trajectories and augment interaction possibilities, especially with multiple users (see below).
Combining multiple RSDs. Our approach is compatible with previous Robotic Shape Display solutions. For instance, we envision a VR arena combining CoVR with a swarm of mobile robots such as [58, 30, 66]. These could collect objects on the ground and bring them back to CoVR. CoVR's trajectory generation algorithm remains valid in such configurations. More DoFs could also be integrated into CoVR by coupling it with a Kuka robotic arm or a Snake Charmer [7] interface.
Augmenting I/O capabilities. We will investigate how additional capabilities can improve user experience. For instance, it would be interesting to augment a column with sensors (e.g. touch input, force sensors, proximity sensors, etc.). Adding a depth camera could enable the detection of untracked moving bodies, such as an unexpected pet in the VR arena. Haptic stimuli can be expanded to vibrations, sliding, textures, temperatures, or shape-changing illusions. For instance, heat lamps or wind blowers could also be integrated [54].
Collaboration. Finally, our system is currently designed for single-user interaction. We plan to investigate remote-presence interaction: a second identical structure can for instance be assembled in another room. Users in each room can interact with different VOIs, share mutual physical contact or collaboratively manipulate objects [13, 30]. We also plan to investigate which scenarios (e.g. a master and a slave) and which interactions would support collaborative interaction in a single arena. Our software implementation can already support several users in the same arena, each user being considered as an obstacle, but collaborative interactions raise multiple challenges [28].
ACKNOWLEDGEMENTS
We would like to thank M. Teyssier, C. Rigaud, B. Geslain, S. Sakr, M. Serrano, J. Müller, J. Gugenheimer, as well as the participants of the studies.
EFERENCES [1] 2019a. CyberGrasp. (2019). [2] 2019b. Oculus Rift S. (2019). [3] 2019c. SteamVR - Valve Corporation. (2019). [4] 2019d. Teslasuit | Full body haptic VR suit for motioncapture and training. (2019). https://teslasuit.io/ [5] Parastoo Abtahi, Benoit Landry, Jackie (Junrui) Yang,Marco Pavone, Sean Follmer, and James A. Landay.2019. Beyond The Force: Using Quadcopters toAppropriate Objects and the Environment for Haptics inVirtual Reality. In
Proceedings of the 2019 CHIConference on Human Factors in Computing Systems -CHI ’19 . ACM Press, Glasgow, Scotland Uk, 1–13.
DOI: http://dx.doi.org/10.1145/3290605.3300589 [6] E. Amirpour, M. Savabi, A. Saboukhi, M. Rahimi Gorii,H. Ghafarirad, R. Fesharakifard, and S. Mehdi Rezaei.2019. Design and Optimization of a Multi-DOF HandExoskeleton for Haptic Applications. In . 270–275.
DOI: http://dx.doi.org/10.1109/ICRoM48714.2019.9071884
ISSN: 2572-6889.[7] Bruno Araujo, Ricardo Jota, Varun Perumal, Jia XianYao, Karan Singh, and Daniel Wigdor. 2016. SnakeCharmer: Physically Enabling Virtual Objects. In
Proceedings of the TEI ’16: Tenth InternationalConference on Tangible, Embedded, and EmbodiedInteraction - TEI ’16 . ACM Press, Eindhoven,Netherlands, 218–226.
DOI: http://dx.doi.org/10.1145/2839462.2839484 [8] Jonas Auda, Max Pascher, and Stefan Schneegass. 2019.Around the (Virtual) World: Infinite Walking in VirtualReality Using Electrical Muscle Stimulation. In
Proceedings of the 2019 CHI Conference on HumanFactors in Computing Systems - CHI ’19 . ACM Press,Glasgow, Scotland Uk, 1–8.
DOI: http://dx.doi.org/10.1145/3290605.3300661 [9] Mahdi Azmandian, Mark Hancock, Hrvoje Benko, EyalOfek, and Andrew D. Wilson. 2016a. HapticRetargeting: Dynamic Repurposing of Passive Hapticsfor Enhanced Virtual Reality Experiences. In
Proceedings of the 2016 CHI Conference on HumanFactors in Computing Systems - CHI ’16 . ACM Press,Santa Clara, California, USA, 1968–1979.
DOI: http://dx.doi.org/10.1145/2858036.2858226 [10] Mahdi Azmandian, Mark Hancock, Hrvoje Benko, EyalOfek, and Andrew D. Wilson. 2016b. HapticRetargeting: Dynamic Repurposing of Passive Hapticsfor Enhanced Virtual Reality Experiences. In
Proceedings of the 2016 CHI Conference on HumanFactors in Computing Systems - CHI ’16 . ACM Press,Santa Clara, California, USA, 1968–1979.
DOI: http://dx.doi.org/10.1145/2858036.2858226 [11] Hrvoje Benko, Christian Holz, Mike Sinclair, and EyalOfek. 2016. NormalTouch and TextureTouch:High-fidelity 3D Haptic Shape Rendering on HandheldVirtual Reality Controllers. In
Proceedings of the 29thAnnual Symposium on User Interface Software andTechnology - UIST ’16 . ACM Press, Tokyo, Japan,717–728.
DOI: http://dx.doi.org/10.1145/2984511.2984526 [12] Gordon Binsted, Romeo Chua, Werner Helsen, andDigby Elliott. 2001. Eye–hand coordination ingoal-directed aiming.
Human Movement Science
20, 4-5(Nov. 2001), 563–585.
DOI: http://dx.doi.org/10.1016/S0167-9457(01)00068-9 [13] Scott Brave, Hiroshi Ishii, and Andrew Dahley. 1998.Tangible interfaces for remote collaboration andcommunication. In
Proceedings of the 1998 ACMconference on Computer supported cooperative work -CSCW ’98 . ACM Press, Seattle, Washington, UnitedStates, 169–178.
DOI: http://dx.doi.org/10.1145/289444.289491 [14] Steve Bryson. 2005. Direct Manipulation in VirtualReality. In
Visualization Handbook . Elsevier, 413–430.
DOI: http://dx.doi.org/10.1016/B978-012387582-2/50023-X [15] Lung-Pan Cheng, Li Chang, Sebastian Marwecki, andPatrick Baudisch. 2018. iTurk: Turning Passive Hapticsinto Active Haptics by Making Users Reconfigure Propsin Virtual Reality. In
Proceedings of the 2018 CHIConference on Human Factors in Computing Systems -CHI ’18 . ACM Press, Montreal QC, Canada, 1–10.
DOI: http://dx.doi.org/10.1145/3173574.3173663 [16] Lung-Pan Cheng, Patrick Lühne, Pedro Lopes,Christoph Sterz, and Patrick Baudisch. 2014. HapticTurk: a Motion Platform Based on People. (2014), 11.[17] Lung-Pan Cheng, Sebastian Marwecki, and PatrickBaudisch. 2017a. Mutual Human Actuation. In
Proceedings of the 30th Annual ACM Symposium onUser Interface Software and Technology - UIST ’17 .ACM Press, Qu&
DOI: http://dx.doi.org/10.1145/3126594.3126667 [18] Lung-Pan Cheng, Eyal Ofek, Christian Holz, HrvojeBenko, and Andrew D. Wilson. 2017b. Sparse HapticProxy: Touch Feedback in Virtual Environments Using aGeneral Passive Prop. In
Proceedings of the 2017 CHIConference on Human Factors in Computing Systems -CHI ’17 . ACM Press, Denver, Colorado, USA,3718–3728.
DOI: http://dx.doi.org/10.1145/3025453.3025753 [19] Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch,Sven Köhler, Patrick Schmidt, Robert Kovacs, JohannesJasper, Jonas Kemper, and Patrick Baudisch. 2015.TurkDeck: Physical Virtual Reality Based on People. In
Proceedings of the 28th Annual ACM Symposium onUser Interface Software & Technology - UIST ’15 . ACMPress, Daegu, Kyungpook, Republic of Korea, 417–426.
DOI: http://dx.doi.org/10.1145/2807442.2807463
[20] Inrak Choi, Heather Culbertson, Mark R. Miller, Alex Olwal, and Sean Follmer. 2017. Grabity: A Wearable Haptic Interface for Simulating Weight and Grasping in Virtual Reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology - UIST '17. ACM Press, Québec City, QC, Canada. DOI: http://dx.doi.org/10.1145/3126594.3126599
[21] Inrak Choi, Eyal Ofek, Hrvoje Benko, Mike Sinclair, and Christian Holz. 2018. CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. ACM Press, Montreal QC, Canada, 1–13. DOI: http://dx.doi.org/10.1145/3173574.3174228
[22] Xavier de Tinguy, Thomas Howard, Claudio Pacchierotti, Maud Marchal, and Anatole Lécuyer. 2020. WeATaViX: WEarable Actuated TAngibles for VIrtual reality eXperiences. (2020), 9.
[23] Force Dimension. 2019. Force Dimension - products. (2019).
[24] A. Formaglio, A. Giannitrapani, M. Franzini, D. Prattichizzo, and F. Barbagli. 2005. Performance of Mobile Haptic Interfaces. In Proceedings of the 44th IEEE Conference on Decision and Control. 8343–8348. DOI: http://dx.doi.org/10.1109/CDC.2005.1583513
[25] F. Gosselin, C. Andriot, F. Bergez, and X. Merlhiot. 2007. Widening 6-DOF haptic devices workspace with an additional degree of freedom. In Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC'07). 452–457. DOI: http://dx.doi.org/10.1109/WHC.2007.127
[26] Haption. 2019a. Scale1™ - HAPTION SA. (2019).
[27] Haption. 2019b. Virtuose™ 6D - HAPTION SA. (2019).
[28] Zhenyi He and Ken Perlin. 2019. CollaboVR: A Reconfigurable Framework for Multi-user to Communicate in Virtual Reality. arXiv:1912.03863 [cs] (Dec. 2019). http://arxiv.org/abs/1912.03863
[29] Zhenyi He, Fengyuan Zhu, Aaron Gaudette, and Ken Perlin. 2017b. Robotic Haptic Proxies for Collaborative Virtual Reality. arXiv:1701.08879 [cs] (Jan. 2017). http://arxiv.org/abs/1701.08879
[30] Zhenyi He, Fengyuan Zhu, and Ken Perlin. 2017a. PhyShare: Sharing Physical Interaction in Virtual Reality. arXiv:1708.04139 [cs] (Aug. 2017). http://arxiv.org/abs/1708.04139
[31] Seongkook Heo, Christina Chung, Geehyuk Lee, and Daniel Wigdor. 2018. Thor's Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. ACM Press, Montreal QC, Canada, 1–11. DOI: http://dx.doi.org/10.1145/3173574.3174099
[32] Anuruddha Hettiarachchi and Daniel Wigdor. 2016. Annexing Reality: Enabling Opportunistic Use of Everyday Objects as Tangible Proxies in Augmented Reality. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16. ACM Press, Santa Clara, California, USA, 1957–1967. DOI: http://dx.doi.org/10.1145/2858036.2858134
[33] Matthias Hoppe, Pascal Knierim, Thomas Kosch, Markus Funk, Lauren Futami, Stefan Schneegass, Niels Henze, Albrecht Schmidt, and Tonja Machulla. 2018. VRHapticDrones: Providing Haptics in Virtual Reality through Quadcopters. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia - MUM 2018. ACM Press, Cairo, Egypt, 7–18. DOI: http://dx.doi.org/10.1145/3282894.3282898
[34] Matthias Hoppe, Beat Rossmy, Daniel Peter Neumann, Stephan Streuber, Albrecht Schmidt, and Tonja-Katrin Machulla. 2020. A Human Touch: Social Touch Increases the Perceived Human-likeness of Agents in Virtual Reality. (2020), 11.
[35] Hiroshi Hoshino. 1995. A Construction Method of Virtual Haptic Space. (1995).
[36] Hsin-Yu Huang, Chih-Wei Ning, Po-Yao (Cosmos) Wang, Jen-Hao Cheng, and Lung-Pan Cheng. 2020. Haptic-go-round: A Surrounding Platform for Encounter-type Haptics in VR Experiences. (2020). https://dl.acm.org/doi/pdf/10.1145/3334480.3383136
[37] Brent Edward Insko. 2001. Passive Haptics Significantly Enhances Virtual Environments. (2001), 111.
[38] Yaesol Kim, Hyun Jung Kim, and Young J. Kim. 2018. Encountered-type haptic display for large VR environment using per-plane reachability maps. Computer Animation and Virtual Worlds 29, 3-4 (May 2018), e1814. DOI: http://dx.doi.org/10.1002/cav.1814
[39] Pascal Knierim, Thomas Kosch, Valentin Schwind, Markus Funk, Francisco Kiss, Stefan Schneegass, and Niels Henze. 2017. Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17. ACM Press, Denver, Colorado, USA, 433–436. DOI: http://dx.doi.org/10.1145/3027063.3050426
[40] Chaehyun Lee, Min Sik Hong, In Lee, Oh Kyu Choi, Kyung-Lyong Han, Yoo Yeon Kim, Seungmoon Choi, and Jin S. Lee. 2007. Mobile Haptic Interface for Large Immersive Virtual Environments: PoMHI v0.5. (2007), 2.
[41] In Lee, Inwook Hwang, Kyung-Lyoung Han, Oh Kyu Choi, Seungmoon Choi, and Jin S. Lee. 2009. System improvements in Mobile Haptic Interface. In World Haptics 2009 - Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. IEEE, Salt Lake City, UT, USA, 109–114. DOI: http://dx.doi.org/10.1109/WHC.2009.4810834
[42] Jaeyeon Lee, Mike Sinclair, Mar Gonzalez-Franco, Eyal Ofek, and Christian Holz. 2019. TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19. ACM Press, Glasgow, Scotland, UK, 1–13. DOI: http://dx.doi.org/10.1145/3290605.3300301
[43] Jean-Claude Lepecq, Lionel Bringoux, Jean-Marie Pergandi, Thelma Coyle, and Daniel Mestre. 2008. Afforded Actions as a Behavioral Assessment of Physical Presence. (2008), 8.
[44] Pedro Lopes, Alexandra Ion, and Patrick Baudisch. 2015. Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology - UIST '15. ACM Press, Daegu, Kyungpook, Republic of Korea, 11–19. DOI: http://dx.doi.org/10.1145/2807442.2807443
[45] W. A. McNeely. 1993. Robotic graphics: a new approach to force feedback for virtual reality. In Proceedings of IEEE Virtual Reality Annual International Symposium. 336–341. DOI: http://dx.doi.org/10.1109/VRAIS.1993.380761
[46] Víctor Mercado. 2020. Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments. (2020), 9.
[47] Kouta Minamizawa, Domenico Prattichizzo, and Susumu Tachi. 2010. Simplified design of haptic display by extending one-point kinesthetic feedback to multipoint tactile feedback. In 2010 IEEE Haptics Symposium. IEEE, Waltham, MA, USA, 257–260. DOI: http://dx.doi.org/10.1109/HAPTIC.2010.5444646
[48] Norbert Nitzsche, Uwe D. Hanebeck, and G. Schmidt. 2003. Design issues of mobile haptic interfaces. Journal of Robotic Systems 20, 9 (Sept. 2003), 549–556. DOI: http://dx.doi.org/10.1002/rob.10105
[49] J. Pair, U. Neumann, D. Piepol, and B. Swartout. 2003. FlatWorld: combining Hollywood set-design techniques with VR. IEEE Computer Graphics and Applications. DOI: http://dx.doi.org/10.1109/MCG.2003.1159607
[50] Sharif Razzaque, Zachariah Kohn, and Mary C. Whitton. 2001. Redirected Walking. In EUROGRAPHICS 2001 - Short Presentations. The Eurographics Association.
[51] Katja Rogers, Jana Funke, Julian Frommel, Sven Stamm, and Michael Weber. 2019. Exploring Interaction Fidelity in Virtual Reality: Object Manipulation and Whole-Body Movements. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19. ACM Press, Glasgow, Scotland, UK, 1–14. DOI: http://dx.doi.org/10.1145/3290605.3300644
[52] Massimo Satler, Carlo A. Avizzano, and Emanuele Ruffaldi. 2011. Control of a desktop mobile haptic interface. In 2011 IEEE World Haptics Conference. IEEE, Istanbul, 415–420. DOI: http://dx.doi.org/10.1109/WHC.2011.5945522
[53] Hasti Seifi, Farimah Fazlollahi, Michael Oppermann, John Andrew Sastrillo, Jessica Ip, Ashutosh Agrawal, Gunhyuk Park, Katherine J. Kuchenbecker, and Karon E. MacLean. 2019. Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19. ACM Press, Glasgow, Scotland, UK, 1–12. DOI: http://dx.doi.org/10.1145/3290605.3300788
[54] Emily Shaw, Tessa Roper, Tommy Nilsson, Glyn Lawson, Sue V. G. Cobb, and Daniel Miller. 2019. The Heat is On: Exploring User Behaviour in a Multisensory Virtual Environment for Fire Evacuation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19. 1–13. DOI: http://dx.doi.org/10.1145/3290605.3300856. arXiv:1902.04573
[55] Ken Shigeta, Yuji Sato, and Yasuyoshi Yokokohji. 2007. Motion Planning of Encountered-type Haptic Device for Multiple Fingertips Based on Minimum Distance Point Information. In Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC'07). IEEE, Tsukuba, 188–193. DOI: http://dx.doi.org/10.1109/WHC.2007.85
[56] Alexa F. Siu, Eric J. Gonzalez, Shenli Yuan, Jason B. Ginsberg, and Sean Follmer. 2018. shapeShift: 2D Spatial Manipulation and Self-Actuation of Tabletop Shape Displays for Tangible and Haptic Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. ACM Press, Montreal QC, Canada, 1–13. DOI: http://dx.doi.org/10.1145/3173574.3173865
[57] Evan Strasnick, Christian Holz, Eyal Ofek, Mike Sinclair, and Hrvoje Benko. 2018. Haptic Links: Bimanual Haptics for Virtual Reality Using Variable Stiffness Actuation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. ACM Press, Montreal QC, Canada, 1–12. DOI: http://dx.doi.org/10.1145/3173574.3174218
[58] Ryo Suzuki, Hooman Hedayati, Clement Zheng, James Bohn, Daniel Szafir, Ellen Yi-Luen Do, Mark D. Gross, and Daniel Leithinger. 2020a. RoomShift: Room-scale Dynamic Haptics for VR with Furniture-moving Swarm Robots. (2020), 11.
[59] Ryo Suzuki, Ryosuke Nakayama, Dan Liu, Yasuaki Kakehi, Mark D. Gross, and Daniel Leithinger. 2020b. LiftTiles: Constructive Building Blocks for Prototyping Room-scale Shape-changing Interfaces. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, Sydney NSW Australia, 143–151. DOI: http://dx.doi.org/10.1145/3374920.3374941
[60] Shan-Yuan Teng, Da-Yuan Huang, Chi Wang, Jun Gong, Teddy Seyed, Xing-Dong Yang, and Bing-Yu Chen. 2019. Aarnio: Passive Kinesthetic Force Output for Foreground Interactions on an Interactive Chair. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19. ACM Press, Glasgow, Scotland, UK, 1–13. DOI: http://dx.doi.org/10.1145/3290605.3300902
[61] Shan-Yuan Teng, Tzu-Sheng Kuo, Chi Wang, Chi-huan Chiang, Da-Yuan Huang, Liwei Chan, and Bing-Yu Chen. 2018. PuPoP: Pop-up Prop on Palm for Virtual Reality. In The 31st Annual ACM Symposium on User Interface Software and Technology - UIST '18. ACM Press, Berlin, Germany, 5–17. DOI: http://dx.doi.org/10.1145/3242587.3242628
[62] Shan-Yuan Teng, Cheng-Lung Lin, Chi-huan Chiang, Tzu-Sheng Kuo, Liwei Chan, Da-Yuan Huang, and Bing-Yu Chen. 2019. TilePoP: Tile-type Pop-up Prop for Virtual Reality. (2019), 11.
[63] Marc Teyssier. 2019. Uduino | Home. (2019). https://marcteyssier.com/uduino/
[64] Martin Usoh, Kevin Arthur, Mary C. Whitton, Rui Bastos, Anthony Steed, Mel Slater, and Frederick P. Brooks. 1999. Walking > walking-in-place > flying, in virtual environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH '99. ACM Press, 359–364. DOI: http://dx.doi.org/10.1145/311535.311589
[65] Emanuel Vonach, Clemens Gatterer, and Hannes Kaufmann. 2017. VRRobot: Robot actuated props in an infinite virtual environment. In 2017 IEEE Virtual Reality (VR). IEEE, Los Angeles, CA, USA, 74–83. DOI: http://dx.doi.org/10.1109/VR.2017.7892233
[66] Yuntao Wang, Hanchuan Li, Zhengyi Cao, Huiyi Luo, Ke Ou, John Raiti, Chun Yu, Shwetak Patel, and Yuanchun Shi. 2020. MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot. (2020), 12.
[67] Alan Wexelblat. 1993. Virtual reality: applications and explorations. (1993). http://libertar.io/lab/wp-content/uploads/2016/02/Virtual.Reality.-.Applications.And_.Explorations.pdf/page=164 (Myron Krueger, Artificial Reality 2: An Easy Entry to Virtual Reality, Chap. 7)
[68] Eric Whitmire, Hrvoje Benko, Christian Holz, Eyal Ofek, and Mike Sinclair. 2018. Haptic Revolver: Touch, Shear, Texture, and Shape Rendering on a Reconfigurable Virtual Reality Controller. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. ACM Press, Montreal QC, Canada, 1–12. DOI: http://dx.doi.org/10.1145/3173574.3173660
[69] M. Yafune and Y. Yokokohji. 2011. Haptically rendering different switches arranged on a virtual control panel by using an encountered-type haptic device. In 2011 IEEE World Haptics Conference. 551–556. DOI: http://dx.doi.org/10.1109/WHC.2011.5945545
[70] Kotaro Yamaguchi, Ginga Kato, Yoshihiro Kuroda, Kiyoshi Kiyokawa, and Haruo Takemura. 2016. A Non-grounded and Encountered-type Haptic Display Using a Drone. In Proceedings of the 2016 Symposium on Spatial User Interaction - SUI '16. ACM Press, Tokyo, Japan, 43–46. DOI: http://dx.doi.org/10.1145/2983310.2985746
[71] Yuan-Syun Ye, Hsin-Yu Chen, and Liwei Chan. 2019. Pull-Ups: Enhancing Suspension Activities in Virtual Reality with Body-Scale Kinesthetic Force Feedback. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology - UIST '19. ACM Press, New Orleans, LA, USA, 791–801. DOI: http://dx.doi.org/10.1145/3332165.3347874
[72] Yasuyoshi Yokokohji, Ralph L. Hollis, and Takeo Kanade. 1999. WYSIWYF Display: A Visual/Haptic Interface to Virtual Environment. Presence: Teleoperators and Virtual Environments 8, 4 (Aug. 1999), 412–434. DOI: http://dx.doi.org/10.1162/105474699566314
[73] Y. Yokokohji, J. Kinoshita, and T. Yoshikawa. 2001. Path planning for encountered-type haptic devices that render multiple objects in 3D space. In Proceedings IEEE Virtual Reality 2001. 271–278. DOI: http://dx.doi.org/10.1109/VR.2001.913796