Mid-Air Drawing of Curves on 3D Surfaces in Virtual Reality
RAHUL ARORA, University of Toronto
KARAN SINGH, University of Toronto
Fig. 1. Drawing curves mid-air that lie precisely on the surface of a virtual 3D object in AR/VR is difficult (a). Projecting mid-air 3D strokes (black) onto 3D objects is an under-constrained problem with many seemingly reasonable solutions (b). We analyze this fundamental AR/VR problem of 3D stroke projection, define and characterize multiple novel projection techniques (c), and test the two most promising approaches—spraycan shown in blue and mimicry shown in red in (b)–(d)—using a quantitative study with 20 users (d). The user-preferred mimicry technique attempts to mimic the 3D mid-air stroke as closely as possible when projecting onto the virtual object. We showcase the importance of drawing curves on 3D surfaces, and the utility of our novel mimicry approach, using multiple artistic and functional applications (e) such as interactive shape segmentation (top) and texture painting (bottom). Horse model courtesy Cyberware, Inc. Spiderman bust base model © David Ruiz Olivares (CC BY 4.0).
Complex 3D curves can be created by directly drawing mid-air in immersive environments (AR/VR). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user-"intended" surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in AR/VR. An assessment of the design requirements of real-time drawing of curves on 3D objects in AR/VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry to draw complex 3D curves on surfaces for various artistic and functional design applications.

CCS Concepts: • Human-centered computing → Virtual reality; • Computing methodologies → Graphics systems and interfaces; Shape modeling.
Additional Key Words and Phrases: 3D sketching; curve on surface; AR/VR
1 INTRODUCTION
Drawing is a fundamental tool of human visual expression and communication. Digital sketching with pens, styli, mice, and even fingers in 2D is ubiquitous in visually creative computing applications. Drawing or painting on
3D virtual objects, for example, is critical to interactive 3D modeling, animation, and visualization, where its uses include: object selection, annotation, and segmentation [Heckel et al. 2013; Jung et al. 2002; Meng et al. 2011]; 3D curve and surface design [Igarashi et al. 1999; Nealen et al. 2007]; and strokes for 3D model texturing or painterly rendering [Kalnins et al. 2002] (Figure 1e). In 2D, digitally drawn on-screen strokes are WYSIWYG-mapped onto 3D virtual objects by projecting 2D stroke points through the given view onto the virtual object(s) (Figure 2a).

Sketching in immersive environments (AR/VR) has the mystical aura of a magical wand, allowing users to draw directly mid-air in 3D. Mid-air drawing thus has the potential to significantly disrupt interactive 3D graphics, as evidenced by the increasing popularity of AR/VR applications such as Tilt Brush [Google 2020] and Quill [Oculus 2020b]. A fundamental requirement for numerous interactive 3D applications in AR/VR, however, remains the ability to directly draw, or project drawn 3D strokes, precisely on 3D virtual objects. While directly drawing on a physical 3D object is reasonably easy, it is near impossible without haptic constraints to draw directly on a virtual 3D object (Figure 3). Furthermore, unlike 2D drawing, where the WYSIWYG view-based projection of 2D strokes onto 3D objects is unambiguously clear, the user-intended mapping of a mid-air 3D stroke onto a 3D object is less obvious. We thus present the first detailed investigation into plausible user-intended projections of mid-air strokes onto 3D virtual objects.

Fig. 2.
Stroke projection using a 2D interface is typically WYSIWYG: 2D points along a user stroke (a, inset) are ray-cast through the given view to create corresponding 3D curve points on the surface of 3D scene objects (a). Even small errors or noise in 2D strokes can cause large discontinuities in 3D, especially near ridges and sharp features (b). Complex curves spanning many viewpoints, or with large scale variations in detail, often require the curve to be drawn in segments from multiple user-adjusted viewpoints (c).
Interfaces for 2D/3D curve creation, in general, use perceptual insights or geometric assumptions like smoothness and planarity to project, neaten, or otherwise process sketched strokes. Some applications wait for user stroke completion before processing it in its entirety, for example when fitting splines [Bae et al. 2008]. Our goal is to establish an application-agnostic, baseline projection approach for mid-air 3D strokes. We thus assume a stroke is processed while being drawn and inked in real-time, i.e., the output curve corresponding to a partially drawn stroke is fixed/inked in real-time, based on partial stroke input [Thiel et al. 2011].

One might further conjecture that all "reasonable" and mostly continuous projections would produce similar results, as long as users are given interactive visual feedback of the projection. This is indeed true for tasks requiring discrete point-on-surface selection, where users can freely re-position the drawing tool until its interactively visible projection corresponds to user intent. Real-time curve drawing, however, is very sensitive to the projection technique, where any mismatch between user intention and algorithmic projection is continuously inked into the projected curve (Figure 1d).
2D Strokes Projected onto 3D Objects:
The standard user-intended mapping of a 2D on-screen stroke is a raycast projection through the given monocular viewpoint, onto the visible surface of 3D objects. Raycasting is WYSIWYG (What You See Is What You Get) in that the 3D curve visually matches the 2D stroke from said viewpoint (see Figure 2a). Ongoing research on mapping 2D strokes to 3D objects assumes this fundamental view-centric projection, focusing instead on specific problems such as creating spatially coherent curves around ridge/valley features (where small 2D error can cause large 3D depth error upon projection, Figure 2b); or drawing complex curves with large scale variation (where multiple viewpoint changes are needed while drawing, Figure 2c). These problems are mitigated by the direct 3D input and viewing flexibility of AR/VR, assuming the mid-air stroke to 3D object projection matches user intent.
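As a concrete illustration of view-centric raycasting, the following minimal sketch (our own, not from the paper) unprojects a 2D screen-space stroke point into a world-space ray through a pinhole camera; intersecting this ray with the scene yields the projected 3D curve point. The camera conventions (vertical field of view, y-down pixel coordinates, camera looking down −z) are illustrative assumptions.

```python
import math

def screen_point_to_ray(px, py, width, height, fov_y_deg):
    """Unproject pixel (px, py) to a ray (origin, direction) in camera space."""
    # Normalized device coordinates in [-1, 1]; pixel y grows downward.
    ndx = 2.0 * px / width - 1.0
    ndy = 1.0 - 2.0 * py / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    aspect = width / height
    # Camera looks down -z; scale x/y by the view frustum at unit depth.
    d = (ndx * tan_half * aspect, ndy * tan_half, -1.0)
    norm = math.sqrt(sum(c * c for c in d))
    return (0.0, 0.0, 0.0), tuple(c / norm for c in d)
```

The first ray-object intersection along this direction gives the WYSIWYG curve point; the depth errors near ridges (Figure 2b) correspond to such rays grazing the surface.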
3D Strokes Projected onto 3D Objects:
Physical analogies motivate existing approaches to defining a user-intended projection from 3D points in a mid-air stroke to 3D points on a virtual object (Figure 4). Graffiti-style painting with a spraycan is arguably the current standard, deployed in commercial immersive paint and sculpt software such as Oculus Medium [2020a] and Gravity Sketch [2020]. A closest-point projection approximates drawing with the tool on the
Fig. 3. Mid-air drawing precisely on a 3D virtual object is difficult (faint regions of strokes are above or below the surface), regardless of drawing quick smooth strokes (blue) or slow detailed strokes (purple). Deliberately slow drawing is further detrimental to stroke aesthetic (right).
3D object, without actual physical contact (used by the "guides" tool in Tilt Brush [2020]). Like view-centric 2D stroke projection, these approaches are context-free: processing each mid-air point independently. The AR/VR drawing environment, comprising six-degree-of-freedom controller input and unconstrained binocular viewing, is however significantly richer than 2D sketching. The user-intended projection of a mid-air stroke (§ 3) as a result is complex, influenced by the ever-changing 3D relationship between the view, drawing controller, and virtual object. We therefore argue the need for historical context (i.e., the partially drawn stroke and its projection) in determining the projection of a given stroke point. We balance the use of this historical context with the overarching goal of a general-purpose projection that makes little or no assumption on the nature of the user stroke or its projection.

We thus explore anchored projection techniques, which minimally use the most recently projected stroke point as context for projecting the current stroke point (§ 4). We evaluate various anchored projections, both theoretically and practically by pilot testing. Our most promising and novel approach, anchored-smooth-closest-point (also called mimicry), captures the natural tendency of a user stroke to mimic the shape of the desired projected curve. A formal user study (§ 5) shows mimicry to perform significantly better than spraycan (the current baseline) in producing curves that match user intent (§ 6). This paper thus contributes, to the best of our knowledge, the first principled investigation of real-time inked techniques to project 3D mid-air strokes drawn in AR/VR onto 3D virtual objects, and a novel stroke projection benchmark for AR/VR: mimicry.

Overview.
Following a review of related work (§ 2), we analyze the pros and cons of context-free projection (§ 3), laying the foundation for our novel anchored projection, mimicry (§ 4). We formally compare mimicry against the current baseline spraycan (§ 5). The study results and discussion (§ 6) are followed by applications showcasing the utility of mimicry (§ 7). We conclude with limitations and directions for future work (§ 8).
2 RELATED WORK
Our work is related to research on drawing and sculpting in immersive realities, interfaces for drawing curves on, near, and around surfaces, and sketch-based modelling tools.
Immersive creation has a long history in computer graphics. Immersive 3D sketching was pioneered by the HoloSketch system [Deering 1995].

Curve creation and editing on or near the surface of 3D virtual objects is fundamental for a variety of artistic and functional shape modeling tasks. Functionally, curves on 3D surfaces are used to model or annotate structural features [Gal et al. 2009; Stanculescu et al. 2013], define trims and holes [Schmidt and Singh 2010], and to provide handles for shape deformation [Kara and Shimada 2007; Nealen et al. 2007; Singh and Fiume 1998], registration [Gehre et al. 2018], and remeshing [Krishnamurthy and Levoy 1996; Takayama et al. 2013]. Artistically, curves on surfaces are used in painterly rendering [Gooch and Gooch 2001], decal creation [Schmidt et al. 2006], texture painting [Adobe 2020], and even texture synthesis [Fisher et al. 2007]. Curve-on-surface creation in this body of research typically uses the established view-centric WYSIWYG projection of on-screen sketched 2D strokes. While the sketch viewpoint in these interfaces is interactively set by the user, there has been some effort in automatic camera control for drawing [Ortega and Vincent 2014], auto-rotation of the sketching view for 3D planar curves [McCrae et al. 2014], and user assistance in selecting the most sketchable viewpoints [Bae et al. 2008]. Immersive 3D drawing enables direct, viewpoint-independent 3D curve sketching, and is thus an appealing alternative to these 2D interfaces.

Our work is also related to drawing curves around surfaces. Such techniques are important for a variety of applications: modeling string and wire that wrap around objects [Coleman and Singh 2006]; curves that loosely conform to virtual objects or define collision-free paths around objects [Krs et al. 2017]; curve patterns for clothing design on a 3D mannequin model [Turquin et al.
2007]; curves for layered modeling of shells and armour [De Paoli and Singh 2015]; and curves for the design and grooming of hair and fur [Fu et al. 2007; Schmid et al. 2011; Xing et al. 2019]. Some approaches, such as SecondSkin [2015] and Skippy [2017], use insights into the spatial relationship between a 2D stroke and the 3D object to infer a 3D curve that lies on and around the surface of the object. Other techniques like Cords [2006] or hair and clothing design [Xing et al. 2019] are closer to our work, in that they drape 3D curve input on and around 3D objects using geometric collisions or physical simulation. In contrast, this paper is application agnostic, and remains focused on the general problem of projecting a drawn 3D stroke to a real-time inked curve on the surface of a virtual 3D object. While we do not address curve creation with specific geometric relationships to the object surface (like distance-offset curves), our techniques can be extended to incorporate geometry-specific terms (§ 8).
Sketch-based 3D modeling is a rich ongoing area of research (see the survey by Olsen et al. [2009]). Typically, these systems interpret 2D sketch inputs for various shape modeling tasks. One could categorize these modeling approaches as single-view (akin to traditional pen on paper) [Andre and Saito 2011; Chen et al. 2013; Schmidt et al. 2009; Xu et al. 2014] or multi-view (akin to 3D modeling with frequent view manipulation) [Bae et al. 2008; Fan et al. 2013, 2004; Igarashi et al. 1999; Nealen et al. 2007]. Single-view techniques use perceptual insights and geometric properties of the 2D sketch to infer its depth in 3D, while multi-view techniques explicitly use view manipulation to specify 3D curve attributes from different views. While our work utilizes mid-air 3D stroke input, the ambiguity of projection onto surfaces connects it to the interpretative algorithms designed for sketch-based 3D modeling. We aim to take advantage of the immersive interaction space by allowing view manipulation as and when desired, independent of geometry creation.
We first formally state the problem of projecting a mid-air 3D stroke onto a 3D virtual object. Let M = (V, E, F) be a 3D object, represented as a manifold triangle mesh embedded in ℝ³. A user draws a piece-wise linear mid-air stroke by moving a 6-DoF controller or drawing tool in AR/VR. The 3D stroke P ⊂ ℝ³ is a sequence of n points (p_i)_{i=0}^{n−1}, connected by line segments. Corresponding to each point p_i ∈ ℝ³ is a system state S_i = (h_i, c_i, h̄_i, c̄_i), where h_i, c_i ∈ ℝ³ are the positions of the headset and the controller, respectively, and h̄_i, c̄_i ∈ Sp(1) are their respective orientations, represented as unit quaternions. Also, without loss of generality, assume c_i = p_i, i.e., the controller positions describe the stroke points p_i.

We want to define a projection π, which transforms the sequence of points (p_i)_{i=0}^{n−1} to a corresponding sequence of points (q_i)_{i=0}^{n−1} on the 3D virtual object, i.e., q_i ∈ M. Consecutive points in this sequence are connected by geodesics on M; together they describe the projected curve Q ⊂ M. The aim of a successful projection method, of course, is to match the undisclosed user-intended curve. The projection is also designed for real-time inking of curves: points p_i are processed upon input and projected in real-time (under 100 ms) to q_i using the current system state S_i, and optionally, prior system states (S_j)_{j=0}^{i−1}, stroke points (p_j)_{j=0}^{i−1}, and their projections (q_j)_{j=0}^{i−1}.

Stroke dynamics, captured from the controller's inertial sensors, or as finite differences of stroke position, have been effective in interactive sketch neatening [Arora et al. 2018; Thiel et al. 2011]. We do not, however, explicitly model stroke dynamics in our proposed projections, since early pilot testing did not suggest a relationship between stroke velocity/acceleration and intended stroke projection.

Fig. 4. Context-free techniques: Occlude projects points from the controller origin along the direction from the eye (HMD origin) to the controller (a); Spraycan projects points from the controller origin in a direction defined by the controller's orientation (b); Head-centric, akin to 2D, projects points along the view direction defined by HMD orientation (c); Snap projects points from the controller origin to their closest point on M (d).

Context-free techniques project points independently of each other, simply based on the spatial relationships between the controller, HMD, and 3D object (Figure 4). We can further categorize techniques as raycast or proximity based.
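The formulation above maps directly to code. The sketch below (our own naming, not the paper's implementation) records the per-sample system state S_i and frames projection as a real-time loop that may consult prior states and projections:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence, Tuple

Vec3 = Tuple[float, float, float]
Quat = Tuple[float, float, float, float]  # unit quaternion (w, x, y, z)

@dataclass
class SystemState:
    """Per-sample state S_i = (h_i, c_i, h_bar_i, c_bar_i)."""
    head_pos: Vec3   # h_i, HMD position
    ctrl_pos: Vec3   # c_i, controller position (= stroke point p_i)
    head_rot: Quat   # h_bar_i, HMD orientation
    ctrl_rot: Quat   # c_bar_i, controller orientation

def ink_stroke(states: Sequence[SystemState],
               project: Callable[..., Optional[Vec3]]) -> List[Optional[Vec3]]:
    """Project each incoming point in real time; `project` may use history."""
    projected: List[Optional[Vec3]] = []
    for i, s in enumerate(states):
        # q_i may be None (undefined), e.g. when a cast ray misses the mesh.
        projected.append(project(i, s, states[:i], list(projected)))
    return projected
```

Any of the projection techniques discussed below can be plugged in as `project`; context-free techniques ignore the two history arguments, while anchored techniques read the last entry of the projected list.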
View-centric projection in 2D interfaces projects points from the screen along a ray from the eye through the screen point, to where the ray first intersects the 3D object. In an immersive setting, raycast approaches similarly use a ray emanating from the 3D stroke point to intersect 3D objects. This ray (o, d), with origin o and direction d, can be defined in a number of ways. Similar to pointing behavior, Occlude defines this ray from the eye through the controller origin (also the stroke point, Figure 4a): (c_i, (c_i − h_i)/‖c_i − h_i‖). If the ray intersects M, then the intersection closest to p_i defines q_i. In case of no intersection, p_i is ignored in defining the projected curve, i.e., q_i is marked undefined and the projected curve connects q_{i−1} to q_{i+1} (or the proximal index points on either side of i for which projections are defined). The Spraycan approach treats the controller like a spraycan, defining the ray like a nozzle direction in the local space of the controller (Figure 4b). For example, the ray could be defined as (c_i, f_i), where the nozzle direction f_i = c̄_i · [0, 0, 1]^T is the controller's local z-axis (or forward direction). Alternately, Head-centric projection can define the ray using the HMD's view direction as (h_i, h̄_i · [0, 0, 1]^T) (Figure 4c).

Pros and Cons:
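The three ray definitions above can be sketched as follows (our own illustrative code; the (w, x, y, z) quaternion layout and the [0, 0, 1] forward axis are conventions assumed for illustration, and vary across AR/VR runtimes):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, qv = q[0], q[1:]
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    t = tuple(2.0 * c for c in cross(qv, v))
    u = cross(qv, t)
    return tuple(v[k] + w * t[k] + u[k] for k in range(3))

def occlude_ray(head_pos, ctrl_pos):
    """Ray from the controller, directed from the eye through the controller."""
    d = tuple(c - h for c, h in zip(ctrl_pos, head_pos))
    return ctrl_pos, _normalize(d)

def spraycan_ray(ctrl_pos, ctrl_rot):
    """Ray along the controller's local forward ("nozzle") axis."""
    return ctrl_pos, quat_rotate(ctrl_rot, (0.0, 0.0, 1.0))

def head_centric_ray(head_pos, head_rot):
    """Ray from the HMD along its view direction."""
    return head_pos, quat_rotate(head_rot, (0.0, 0.0, 1.0))
```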
The strengths of raycasting are: a predictable visual/proprioceptive sense of ray direction; a spatially continuous mapping between user input and projection rays; and AR/VR scenarios where it is difficult or undesirable to reach and draw close to the virtual object. The biggest limitation of raycast projection stems from the controller/HMD-based ray direction being completely agnostic of the shape or location of the 3D object. Projected curves can consequently be very different in shape and size from drawn strokes, and ill-defined for stroke points with no ray-object intersection.
In 2D interfaces, the on-screen 2D strokes are typically distant from the viewed 3D scene, necessitating some form of raycast projection onto the visible surface of 3D objects. In AR/VR, however, users are able to reach out in 3D and directly draw the desired curve on the 3D object. While precise mid-air drawing on a virtual surface is very difficult in practice (Figure 3), projection methods based on proximity between the mid-air stroke and the 3D object are certainly worth investigating. The simplest proximity-based projection technique,
Snap, projects a stroke point p_i to its closest point on M (Figure 4d):

q_i = π_snap(p_i) = arg min_{x ∈ M} d(p_i, x),    (1)

where d(·, ·) is the Euclidean distance between two points. Unfortunately, for triangle meshes, closest-point projection tends to snap to the edges of the mesh (blue curve, inset), resulting in unexpectedly jaggy projected curves, even for smooth 3D input strokes (black curve, inset) [Panozzo et al. 2013]. These discontinuities are due to the discrete nature of the mesh representation, as well as spatial singularities in closest-point computation, even for smooth 3D objects. We mitigate this problem by formulating an extension of Panozzo et al.'s Phong projection [2013] in § 3.2, which simulates projection of points onto an imaginary smooth surface approximated by the triangle mesh. We denote this smooth-closest-point projection as π_SCP (red curve, inset).

Pros and Cons:
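A brute-force version of π_snap (Equation 1) can be sketched as below; this is our own illustrative code (a real implementation would accelerate the query with a spatial hierarchy such as an AABB tree). The per-triangle routine follows the standard Voronoi-region case analysis for closest point on a triangle:

```python
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(v, s): return tuple(x * s for x in v)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _dist2(a, b): return _dot(_sub(a, b), _sub(a, b))

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), by Voronoi-region tests."""
    ab, ac, ap = _sub(b, a), _sub(c, a), _sub(p, a)
    d1, d2 = _dot(ab, ap), _dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return a                       # vertex a
    bp = _sub(p, b); d3, d4 = _dot(ab, bp), _dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return b                      # vertex b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:                    # edge ab
        return _add(a, _scale(ab, d1 / (d1 - d3)))
    cp = _sub(p, c); d5, d6 = _dot(ab, cp), _dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return c                      # vertex c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:                    # edge ac
        return _add(a, _scale(ac, d2 / (d2 - d6)))
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:      # edge bc
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return _add(b, _scale(_sub(c, b), w))
    denom = 1.0 / (va + vb + vc)                           # face interior
    return _add(a, _add(_scale(ab, vb * denom), _scale(ac, vc * denom)))

def snap_projection(p, triangles):
    """pi_snap: arg min over all triangles of the mesh (Equation 1)."""
    best, best_d2 = None, float("inf")
    for (a, b, c) in triangles:
        q = closest_point_on_triangle(p, a, b, c)
        d2 = _dist2(p, q)
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```

The edge and vertex cases in this routine are exactly where the jaggy snapping behavior described above originates: nearby query points can fall into different Voronoi regions of the mesh.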
The biggest strength of proximity-based projection is that it exploits the immersive concept of drawing directly on or near an object, using the spatial relationship between a 3D stroke point and the 3D object to determine projection. The main limitation is that, since users rarely draw precisely on the surface, discontinuities and local extrema persist when projecting distantly drawn stroke points, even when using smooth-closest-point. In § 4.1, we address this problem using stroke mimicry to anchor distant stroke points close to the object, to be finally projected using smooth-closest-point.

Our goal with smooth-closest-point projection is to define a mapping from a 3D point to a point on M that approximates closest-point projection but tends to be functionally smooth, at least for points near the 3D object. We note that computing the closest point to a Laplacian-smoothed mesh proxy, for example, will also provide a smoother mapping than π_snap, but a potentially poor closest-point approximation to the original mesh.

Fig. 5. Panozzo et al. [2013] compute weighted averages on surfaces (a), while we want to compute a smooth closest-point projection for an arbitrary point near the mesh in ℝ³ (b). We therefore embed T_M—the region around the mesh—in higher-dimensional space ℝ^d, instead of just M (c).

Phong projection, introduced by Panozzo et al. [2013], addresses these goals for points expressible as weighted averages of points on M, but we extend their technique to define a smooth-closest-point projection for points in the neighbourhood of the mesh. For completeness, we first present a brief overview of their technique. Phong projection is a two-step approach to map a point y ∈ ℝ³ to a manifold triangle mesh M embedded in ℝ³, emulating closest-point projection on a smooth surface approximated by the triangle mesh. First, M is embedded in a higher-dimensional Euclidean space ℝ^d such that Euclidean distance (between points on the mesh) in ℝ^d better approximates geodesic distances in ℝ³. Second, analogous to vertex normal interpolation in Phong shading, a smooth surface is approximated by blending tangent planes across edges. Barycentric coordinates at a point within a triangle are used to blend the tangent planes corresponding to the three edges incident to the triangle. We extend the first step to a higher-dimensional embedding of not just the triangle mesh M, but a tetrahedral representation of an offset volume around the mesh M (Figure 5). The second step remains the same, and we refer the reader to Panozzo et al. [2013] for details.

For clarity, we refer to M embedded in ℝ³ as M, and the embedding in ℝ^d as M^d. Panozzo et al. compute M^d by first embedding a subset of the vertices in ℝ^d using metric multi-dimensional scaling (MDS) [Cox and Cox 2008], aiming to preserve the geodesic distances between the vertices. This subset consists of the high-curvature vertices of M. The embedding of the remaining vertices is then computed using LS-meshes [Sorkine and Cohen-Or 2004].
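The barycentric lifting between ℝ³ and ℝ^d can be sketched as follows (illustrative code, not the paper's implementation): locate a query point inside one tetrahedron via barycentric coordinates, then map it as the same barycentric combination of the tet's embedded vertices.

```python
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))

def _det3(a, b, c):
    # Scalar triple product a . (b x c).
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def tet_barycentric(p, v0, v1, v2, v3):
    """Barycentric coordinates of p w.r.t. tetrahedron (v0, v1, v2, v3)."""
    e1, e2, e3 = _sub(v1, v0), _sub(v2, v0), _sub(v3, v0)
    r = _sub(p, v0)
    d = _det3(e1, e2, e3)  # assumes a non-degenerate tetrahedron
    b1 = _det3(r, e2, e3) / d  # Cramer's rule
    b2 = _det3(e1, r, e3) / d
    b3 = _det3(e1, e2, r) / d
    return (1.0 - b1 - b2 - b3, b1, b2, b3)

def lift(bary, verts_d):
    """Map barycentric coords to R^d via the tet's embedded vertices."""
    dim = len(verts_d[0])
    return tuple(sum(b * v[k] for b, v in zip(bary, verts_d)) for k in range(dim))
```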
For the problem of computing weighted averages on surfaces, one only needs to project 3D points of the form y = Σ_i w_i x_i, where all x_i ∈ M. The point y is lifted into ℝ^d by simply defining y^d = Σ_i w_i x_i^d, where x_i^d is defined as the point on M^d with the same implicit coordinates (triangle and barycentric coordinates) as x_i on M. Therefore, their approach only embeds M into ℝ^d (Figure 5a,c). In contrast, we want to project arbitrary points near M onto it using the Phong projection. Therefore, we compute the offset surfaces at signed distance ±μ from M. We then compute a tetrahedral mesh T_M of the space between these two surfaces in ℝ³. In the final step, we embed the vertices of T_M in ℝ^d using MDS and LS-meshes as described above. Note that all of the above steps are realized in a precomputation.

Now, given a 3D point y within a distance μ from M, we situate it within T_M, use tetrahedral barycentric coordinates to infer its location in ℝ^d, and then compute its Phong projection (Figure 5b,c). We fall back to closest-point projection for points outside T_M, since Phong projection converges to closest-point projection when far from M. Furthermore, we set μ large enough to easily handle our smooth-closest-point queries in § 4.1.

We implemented the four different context-free projection approaches in Figure 4, and had 4 users informally test each, drawing a variety of curves on the various 3D models seen in this paper. Qualitatively, we made a number of observations:
– Head-centric and Occlude projections become unpredictable if the user is inadvertently changing their viewpoint while drawing. These projections are also only effective when drawing frontally on an object, like with a 2D interface. Neither, as a result, exploits the potential gains of mid-air drawing in AR/VR.
– Spraycan projection was clearly the most effective context-free technique. Commonly used for graffiti and airbrushing, usually on fairly flat surfaces, we noted however that consciously reorienting the controller while drawing on or around complex objects was both cognitively and physically tiring.
– Snap projection was quite sensitive to changes in the distance of the stroke from the object surface, and in general produced the most undulating projections due to closest-point singularities.
– All projections converge to the mid-air user stroke when it precisely conforms to the surface of the 3D object. But as the distance between the object and points on the mid-air stroke increases, their behavior diverges quickly.
– While users did draw in the vicinity of and mostly above the object surface, they rarely drew precisely on the object. The average distance of stroke points from the target object was observed to be 4.8 cm in a subsequent user study (§ 5).
– The most valuable insight, however, was that the user stroke in mid-air often tended to mimic the expected projected curve.

Context-free approaches, by design, are unable to capture this mimicry, i.e., the notion that the change between projected points as we draw a stroke is commensurate with the change in the 3D points along the stroke. This inability, due to a lack of curve history or context, materializes as problems in different forms.

Fig. 6. Context-free projection problems: projection discontinuities (a), undesirable snapping (b), large depth disparity (c), and unexpected jumps (d).
Proximal projection (including smooth-closest-point) can be highly discontinuous with increasing distance from the 3D object, particularly in concave regions (Figure 6a). Mid-air drawing along valleys without staying in precise contact with the virtual object is thus extremely difficult. Raycast projections can similarly suffer large discontinuous jumps across occluded regions (in the ray direction) of the object (Figure 6d). While this problem theoretically exists in 2D interfaces as well, it is less observed in practice for two reasons: 2D drawing on a constraining physical surface is significantly more precise than mid-air drawing in AR/VR [Arora et al. 2017]; and artists minimize such discontinuities by carefully choosing appropriate views (raycast directions) before drawing each curve. Automatic direction control of the view or controller, while effective in 2D [Ortega and Vincent 2014], is detrimental to a sense of agency and presence in AR/VR.
Proximity-based methods also tend to get stuck on sharp (or high-curvature) convex features of the object (Figure 6b). While this can be useful to trace along a ridge feature, it is particularly problematic for general curve-on-surface drawing.
The relative orientation between the 3D object surface and the raycast direction can cause large depth disparities between parts of user strokes and curves projected by raycasting (Figure 6c). Such irregular bunching or spreading of points on the projected curve also goes against our observation of stroke mimicry. Users can arguably reduce this disparity by continually orienting the view/controller to keep the projection ray well aligned with the object surface normal. Such re-orientation, however, can be tiring, ergonomically awkward, and deviates from the 2D experience, where pen/brush tilt only impacts curve aesthetic, and not shape.

We noted that the
Occlude and Spraycan techniques were complementary: drawing with Occlude on parts of an object frontal to the view provided good comfort and control, which degraded when drawing closer to the object silhouette; we observed the opposite when drawing with Spraycan. We thus implemented a hybrid projection, where the ray direction was interpolated between Occlude and Spraycan based on alignment with the most recently projected
Fig. 7. Anchored smooth-closest-point (a), and refinements: using a locally-fit plane (b), and anchor point constrained to an offset (c) or parallel surface (d). q_i is obtained by projecting r_i (a), r′_i (c), or r′′_i (d) onto M via smooth-closest-point; or closest-point to r_i in M ∩ N_i (b).
Spraycan ray directions was often large enough to makeeven smooth ray transitions abrupt and hard to control.All these problems point to the projection function ignoring theshape of the mid-air stroke P and the projected curve Q , and canbe addressed using projection functions that explicitly incorporateboth. We call these functions anchored . The limitations of context-free projection can be addressed by equip-ping stroke point projection with the context/history of recentlydrawn points and their projections. In this paper we minimally useonly the most recent stroke point p i − and its projection q i − , ascontext to anchor the current projection.Any reasonable context-free projection can be used for the firststroke point p . We use spraycan π spray , our preferred context-freetechnique. For subsequent points ( i > r i = q i − + ∆ p i , (2)where ∆ p i = ( p i − p i − ) . We then compute q i as a projectionof the anchored stroke point r i onto M , that attempts to capture ∆ p i ≈ ∆ q i . Anchored projection captures our observation thatthe mid-air user stroke tends to mimic the shape of their intendedcurve on surface. While users to do not adhere consciously to anyprecise geometric formulation of mimicry, we observe that usersoften draw the intended projected curve as a corresponding strokeon an imagined offset or translated surface (Figure 7). A goodgeneral projection for the anchored point r i to M thus needs to becontinuous, predictable, and loosely capture this notion of mimicry. Controller sampling rate in current AR/VR systems is 50Hz or more,meaning that even during ballistic movements, the distance (cid:107) ∆ p i (cid:107) or any stroke sample i is of the order of a few millimetres. Con-sequently, the anchored stroke point r i is typically much closer to M , than the stroke point p i , making closest-point snap projectiona compelling candidate for projecting r i . 
Such an anchored closest-point projection explicitly minimizes ‖Δp_i − Δq_i‖, but precise minimization is less important than avoiding projection discontinuities and undesirable snapping, even for points close to the mesh. Our formulation of a smooth-closest-point projection π_SCP in § 3.2 addresses these goals precisely. Also note that the maximum observed ‖Δp‖ for the controller readily defines the offset distance μ for our pre-computed tet mesh T_M. We define mimicry projection as

Π_mimicry(p_i) = π_spray(p_i) if i = 0; π_SCP(r_i) otherwise.    (3)

We further explore refinements to mimicry projection, which might improve curve projection in certain scenarios.
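Equations 2 and 3 make the core loop very simple; the sketch below is our own, with caller-supplied projectors standing in for π_spray and π_SCP:

```python
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))

def mimicry_project(stroke, project_first, project_scp):
    """Anchored smooth-closest-point ("mimicry") projection, Equations 2-3.

    stroke        : list of mid-air points p_i
    project_first : stand-in for pi_spray, used only for p_0
    project_scp   : stand-in for pi_SCP, applied to anchored points r_i
    """
    projected = []
    for i, p in enumerate(stroke):
        if i == 0:
            q = project_first(p)
        else:
            # r_i = q_{i-1} + (p_i - p_{i-1})  (Equation 2)
            r = _add(projected[-1], _sub(p, stroke[i - 1]))
            q = project_scp(r)  # (Equation 3)
        projected.append(q)
    return projected
```

Because r_i stays within ‖Δp_i‖ of the surface, the projector only ever sees near-surface queries, which is what makes smooth-closest-point well behaved here.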
Planar curves are very common in design and visualization [McCrae et al. 2011]. We can locally encourage planarity in mimicry projection by constructing a plane N_i with normal Δp_i × Δp_{i−1} (i.e., the local plane of the mid-air stroke), passing through the anchor point r_i (Figure 7b). We then intersect N_i with M, and q_i is defined as the closest point to r_i on the intersection curve that contains q_{i−1}. Note that we use π_spray(p_i) for i < 2, and we retain the most recently defined normal direction (N_{i−1} or prior) when Δp_i × Δp_{i−1} is undefined. We find this method works well for near-planar curves, but the plane is sensitive to noise in the mid-air stroke (Figure 9f), and can feel sticky or less responsive for non-planar curves.

Offset and parallel surface drawing captures the observation that users tend to draw an intended curve as a corresponding mid-air stroke on an imaginary offset or parallel surface of the object M. While we do not expect users to draw precisely on such a surface, we note that it is unlikely a user would intentionally draw orthogonal to such a surface, along the gradient of the 3D object.

In scenarios where a user is sub-consciously drawing on an offset surface of M (an isosurface of its signed-distance function d_M(·)), we can remove the component of a user stroke segment that lies along the gradient ∇d_M when computing the desired anchor point (Figure 7c):

r′_i = q_{i−1} + Δp_i − (Δp_i · ∇d_M(p_i)) ∇d_M(p_i).    (4)

We can similarly locally constrain user strokes to a parallel surface of M:

r″_i = q_{i−1} + Δp_i − (Δp_i · ∇d_M(r_i)) ∇d_M(r_i).    (5)

Note that the difference from Eq. 4 is the position where ∇d_M is computed, as shown in Figure 7d. A parallel surface better matched user expectation than an offset surface in our pilot testing, but both techniques produced poor results when user drawing deviated from these imaginary surfaces (Figure 9g–l).

For completeness, we also investigated raycast alternatives to projection of the anchored stroke point r_i. We used similar priors

Fig. 8. Anchored raycast techniques: ray direction defined orthogonal to Δp_i in a local plane (a); parallel transport of ray direction along the user stroke (b). The cast rays (forward/backward) are shown in blue.
of local planarity and offset or parallel surface transport, as with the mimicry refinements, to define ray directions. Figure 8 shows two such options.

In Figure 8a, we cast a ray in the local plane of motion, orthogonal to the user stroke given by Δp_i. We construct the local plane containing r_i spanned by Δp_i and p_{i−1} − q_{i−1}, and then define the direction orthogonal to Δp_i in this plane. Since r_i may be inside M, we cast two rays bi-directionally (r_i, ±Δp⊥_i), where

Δp⊥_i = Δp_i × (Δp_i × (p_{i−1} − q_{i−1})).

If both rays successfully intersect M, we choose q_i to be the intersection point closer to r_i, a heuristic that works well in practice. As with locally planar mimicry projection, this technique suffered from instability in the local plane.

Motivated by mimicry, in Figure 8b, we also explored parallel transport of the projection ray direction along the user stroke. For i > 0, we parallel transport the previous projection direction q_{i−1} − p_{i−1} along the mid-air curve by rotating it with the rotation that aligns Δp_{i−1} with Δp_i. Once again, bi-directional rays are cast from r_i, and q_i is set to the closer intersection with M.

In general, we found that all raycast projections, even when anchored, suffered from unpredictability over long strokes, and from stroke discontinuities when there are no ray-object intersections (Figure 9n,o).

In summary, extensive pilot testing of the anchored techniques revealed that they were generally better than context-free approaches, especially when users drew further away from the 3D object. Among the anchored techniques, stroke mimicry, captured as an anchored smooth-closest-point projection, proved to be theoretically elegant and practically the most resilient to ambiguities of user intent and differences of drawing style among users.
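The offset- and parallel-surface anchor adjustments of Eqs. 4 and 5 are easy to prototype. The sketch below is illustrative only: `sdf` is an assumed callable returning the signed distance to M, its gradient is approximated by central finite differences (and normalized, as it would be for a true distance field), and all function names are ours.

```python
import numpy as np

def sdf_gradient(sdf, p, h=1e-4):
    """Unit gradient of a signed-distance function, by central finite differences."""
    p = np.asarray(p, dtype=float)
    g = np.zeros(3)
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        g[k] = (sdf(p + e) - sdf(p - e)) / (2.0 * h)
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

def offset_surface_anchor(q_prev, p_prev, p_cur, sdf):
    """Eq. (4): remove the stroke-segment component along grad d_M, evaluated at p_i."""
    dp = np.asarray(p_cur, float) - np.asarray(p_prev, float)
    g = sdf_gradient(sdf, p_cur)
    return np.asarray(q_prev, float) + dp - np.dot(dp, g) * g

def parallel_surface_anchor(q_prev, p_prev, p_cur, sdf):
    """Eq. (5): same adjustment, but with grad d_M evaluated at the anchor r_i."""
    dp = np.asarray(p_cur, float) - np.asarray(p_prev, float)
    r = np.asarray(q_prev, float) + dp          # plain anchor r_i of Eq. (2)
    g = sdf_gradient(sdf, r)
    return np.asarray(q_prev, float) + dp - np.dot(dp, g) * g
```

Either variant degenerates to the plain anchor of Eq. 2 whenever the stroke segment is already tangent to the offset/parallel surface.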
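The ray-transport variant of Figure 8b hinges on rotating the previous projection direction by the rotation aligning Δp_{i−1} with Δp_i. A minimal sketch of that rotation, realized here via Rodrigues' formula (a standard construction; the paper does not prescribe a particular one):

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit direction a to unit direction b (Rodrigues)."""
    a = np.asarray(a, float); a = a / np.linalg.norm(a)
    b = np.asarray(b, float); b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.linalg.norm(v) < 1e-12:
        # Parallel: identity. Anti-parallel is degenerate; -I is a crude fallback.
        return np.eye(3) if c > 0 else -np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / float(np.dot(v, v)))

def transported_ray(prev_ray, dp_prev, dp_cur):
    """Parallel transport the previous projection-ray direction q_{i-1} - p_{i-1}
    along the stroke, by the rotation aligning dp_{i-1} with dp_i (Fig. 8b)."""
    return rotation_aligning(dp_prev, dp_cur) @ np.asarray(prev_ray, float)
```

When the stroke direction is unchanged, the ray is simply carried forward; when the stroke turns, the ray turns with it.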
Anchored closest-point can be a reasonable proxy to anchored smooth-closest-point when pre-processing the 3D virtual objects is undesirable.

Our techniques are implemented in C#.

Fig. 9.
Mimicry vs. other anchored stroke projections: mid-air strokes are shown in black and mimicry curves in red. Anchored closest-point (blue) is similar to mimicry on smooth, low-curvature meshes (a,b), but degrades with mesh detail/noise (c,d). Locally planar projection (blue) is susceptible to local plane instability (e,f). Parallel (purple; h,k) or offset (blue; i,l) surface based projections fail when the user stroke deviates from said surface (h,l), while mimicry remains reasonable (g,j). Compared to mimicry (m), anchored raycasting based on a local plane (purple; n) or ray transport (blue; o) can be discontinuous.

We use d =
8; that is, we embed M in ℝ^8 for computing the Phong projection. The offset distance µ is set from the maximum observed ‖Δp_i‖ as discussed above, and the offset surface is computed using libigl [Jacobson et al. 2018]. We then improve the surface quality using TetWild [Hu et al. 2018], before computing the tetrahedral mesh T_M between the two surfaces using TetGen [Si 2015]. We support fast closest-point queries using an AABB tree implemented in geometry3Sharp [Schmidt 2017]. Signed distance is also computed using the AABB tree and the fast winding number [Barill et al. 2018], and the gradient ∇d_M is computed using central finite differences. To ease replication of our various techniques and aid future work, we will open-source our implementation.

We now formally compare our most promising projection, mimicry, to the best state-of-the-art context-free projection, spraycan. We designed a user study to compare the performance of the spraycan and mimicry methods for a variety of curve-drawing tasks. We selected six shapes for the experiment (Figure 10), aiming to cover a diverse range of shape characteristics: sharp features (cube), large smooth regions (trebol, bunny), small details with ridges and valleys (bunny), thin features (hand), and topological holes (torus, fertility). We then sampled ten distinct curves on the surface of each of the six objects. A canonical task in our study involved the participant attempting to re-create a given target curve from this set. We designed two types of drawing tasks, shown in Figure 11:

Tracing curves, where a participant tried to trace over a visible
Fertility, Hand, Bunny, Trebol, Cube, Torus.
Fig. 10. The six shapes utilized in the user study. The torus shape was used for tutorials, while the rest were used for the recorded experimental tasks.

target curve using a single smooth stroke.
Re-creating curves, where a participant attempted to re-create from memory a visible target curve that was hidden as soon as the participant started to draw. An enumerated set of keypoints on the curve, however, remained as a visual reference, to aid the participant in re-creating the hidden curve with a single smooth stroke.

The rationale behind asking users to draw target curves is both to control the length, complexity, and nature of the curves drawn by users, and to have an explicit representation of the user-intended curve. Curve tracing and re-creation are fundamentally different

Fig. 11. The two tasks used in our study—curve tracing, with the target curve visible while drawing (a); and curve re-creation, where the target curve is initially visible (b) but is hidden as soon as the participant starts to draw (c).

drawing tasks, each with important applications [Arora et al. 2017]. Our curve re-creation task is designed to capture free-form drawing, with minimal visual suggestion of the intended target curve.
We wanted to design target curves that could be executed using a single smooth motion. Since users typically draw sharp corners using multiple strokes [Bae et al. 2008], we constrain our target curves to be smooth, created using cardinal cubic B-splines on the meshes, computed using the method of Panozzo et al. [2013]. We also control the length and curvature complexity of the curves, as pilot testing showed that very simple and short curves can be reasonably executed by almost any projection technique. Curve length and complexity is modeled by placing spline control points at mesh vertices, and specifying the desired geodesic distance and Gauß map distance between consecutive control points on the curve.

We represent a target curve using four parameters ⟨n, i, k_G, k_N⟩, where n is the number of spline control points, i is the vertex index of the first control point, and k_G, k_N are constants that control the geodesic and normal map distance between consecutive control points. We define the desired geodesic distance between consecutive control points as D_G = k_G × ‖BBox(M)‖, where ‖BBox(M)‖ is the length of the bounding box diagonal of M. The desired Gauß map distance (angle between the unit vertex normals) between consecutive control points is simply k_N.

A target curve C_0, …, C_{n−1} starting at vertex v_i of the mesh is generated incrementally for i > 0 as

C_i = argmin_{v ∈ V′} (d_G(C_{i−1}, v) − D_G)² + (d_N(C_{i−1}, v) − k_N)²,    (6)

where d_G and d_N compute the geodesic and normal distance between two points on M, and V′ ⊂ V contains only those vertices of M whose geodesic distance from C_0, …, C_{i−1} is at least D_G/2. The restricted subset of vertices conveniently helps prevent (but doesn't fully avoid) self-intersecting or nearly self-intersecting curves. Curves with complex self-intersections are less important practically, and can be particularly confusing for the curve re-creation task. All our target curve samples were generated using k_G ∈ [., .], k_N ∈ [π/, π/], n = 6, and a randomly chosen i. The curves were manually inspected for self-intersections, and infringing curves rejected.

We then defined keypoints on the target curves as follows: curve endpoints were chosen as keypoints; followed by greedily picking extrema of geodesic curvature, while ensuring that the arclength distance between any two consecutive keypoints was at least 3cm; and concluding the procedure when the maximum arclength distance between any two consecutive keypoints was below 15cm. Our target curves had between 4–9 keypoints (including endpoints).

The main variable studied in the experiment was
Projection method—spraycan vs. mimicry—realized as a within-subjects variable. The order of methods was counterbalanced between participants. For each method, participants were exposed to all six objects. Object order was fixed as torus, cube, trebol, bunny, hand, and fertility, based on our personal judgment of drawing difficulty. The torus was used as a tutorial, where participants had access to additional instructions visible in the scene, and their strokes were not utilized for analysis. For each object, the order of the 10 target strokes was randomized. The first five were used for the curve tracing task, while the remaining five were used for curve re-creation. The target curve for the first tracing task was repeated after the five unique curves, to gauge user consistency and learning effects. A similar repetition was used for curve re-creation. Participants thus performed 12 curve drawing tasks per object, leading to a total of 12 × 5 × 2 =
120 strokes per participant.

Owing to the COVID-19 physical distancing guidelines, the study was conducted in the wild, on participants' personal VR equipment at their homes. A 15-minute instruction video introduced the study tasks and the two projection methods. Participants then filled out a consent form and a questionnaire to collect demographic information. Participants then tested the first projection method and filled out a questionnaire expressing their subjective opinions of the method. They then tested the second method, followed by a similar questionnaire, and questions involving subjective comparisons between the two methods. Participants were required to take a break after testing the first method, and were also encouraged to take breaks after drawing on the first three shapes for each method. The study took approximately an hour, including the questionnaires.
Twenty participants (5 female) aged 21–47 from five countries participated in the study. All but one were right-handed. Participants self-reported a diverse range of artistic abilities (min. 1, max. 5, median 3 on a 1–5 scale), and had varying degrees of VR experience, ranging from below 1 year to over 5 years. Thirteen participants had a technical computer graphics or HCI background, while ten had experience with creative tools in VR, with one reporting professional usage. Participants were paid ≈
22 USD as a gift card.
As the study was conducted on personal VR setups, a variety of commercial VR devices were utilized—Oculus Rift, Rift S, and Quest using Link cable, HTC Vive and Vive Pro, Valve Index, and Samsung Odyssey using Windows Mixed Reality. All but one participant used a standing setup allowing them to freely move around.

Before each trial, participants could use the "grab" button on their controller (in the dominant hand) to grab the mesh, to position and orient it as desired. The trial started as soon as the participant started to draw by pressing the "main trigger" on their dominant-hand controller. This action disabled the grabbing interaction—participants could not draw and move the object simultaneously. As noted earlier, for curve re-creation tasks, this had the additional effect of hiding the target curve, while leaving keypoints visible.
We recorded the head position and orientation, controller position and orientation, projected point q_i, and timestamp t_i for each mid-air stroke point p_i (given by the controller position c_i). In general, we refer to a task's target curve as X, to the executed mid-air strokes as P^S and P^M, and to the corresponding curves created using spraycan and mimicry projection as Q^S and Q^M, respectively. We drop the superscript when the projection method used is not relevant, referring to a mid-air stroke as P and its projected curve as Q.

We formulated three criteria to filter out meaningless user strokes:
Short Curves: we ignore projected curves Q that are too short as compared to the length of the target curve X (conservatively, curves less than half as long as the target curve). While it is possible that the user stopped drawing mid-way out of frustration, we found it was more likely that they prematurely released the controller trigger by accident. Both curve lengths are computed in ℝ³ for efficiency.

Stroke Noise: we ignore strokes for which the mid-air stroke is too noisy; specifically, mid-air strokes with distant consecutive points (∃ i s.t. ‖p_i − p_{i−1}‖ exceeds a set threshold).

Inverted Strokes: while we labelled keypoints with numbers and marked start and end points in green and red (Figure 11), some users occasionally drew the target curve in reverse. The motion to draw a curve in reverse is not symmetric, and such curves are thus rejected. We detect inverted strokes by looking at the indices i_0, i_1, …, i_l of the points in Q which are closest to the keypoints x_{k_0}, x_{k_1}, …, x_{k_l} of X. Ideally, the sequence i_0, …, i_l should have no inversions, i.e., ∀ 0 ≤ j < k ≤ l, i_j ≤ i_k, and a maximum of l(l+1)/2 inversions when Q is aligned in reverse with X. Curves with an excessive number of inversions are considered inverted; closest points are computed in ℝ³ for efficiency.

Despite conducting our experiment remotely without supervision, we found that 95.6% of the strokes satisfied our criteria and could be utilized for analysis. For comparisons between π_spray and π_mimicry, we reject curve pairs where either curve did not satisfy the quality criteria. Out of 1200 curve pairs (2400 total strokes), 1103 (91.9%) satisfied the quality criteria and were used for analysis, including 564 pairs for the curve re-creation task and 539 for the tracing task.

We define 10 different statistical measures (Table 1) to compare π_spray and π_mimicry curves in terms of their accuracy, aesthetic,

Table 1. Quantitative results (mean ± std-dev.) of the comparisons between mimicry and spraycan projection.
All measures are analyzed using Wilcoxon signed-rank tests; lower values are better, and significantly better values are shown in boldface. Accuracy, aesthetic, and physical effort measures are shown with green, red, and blue backgrounds, respectively.

Tracing
Measure   Spraycan       Mimicry        p-value   z-stat
D_ep      ±              ±              <.001     8.36
D_sym     ±              ±
K_E       ± 262 rad/m    ± 162 rad/m    <.001     15.59
K_g       ± 245 rad/m    ± 157 rad/m    <.001     15.42
F_g       ± 413 rad/m    ± 285 rad/m    <.001     14.82
T_h       ±              ±              <.001     7.93
R_h       ±              ±              <.001     4.82
T_c       ±              ±
R_c       ±              ±              <.001     5.51
τ         ±              ±

Memory
Measure   Spraycan       Mimicry        p-value   z-stat
D_ep      ±              ±              <.001     8.63
D_sym     ±              ±
K_E       ± 236 rad/m    ± 127 rad/m    <.001     14.70
K_g       ± 219 rad/m    ± 123 rad/m    <.001     14.95
F_g       ± 371 rad/m    ± 227 rad/m    <.001     14.11
T_h       ±              ±              <.001     6.78
R_h       ±              ±              .002      3.07
T_c       ±              ±
R_c       ±              ±              <.001     4.00
τ         ±              ±

and physical effort. We use Wilcoxon signed-rank tests rather than a paired t-test, since the recorded data for none of our measures was normally distributed (normality hypothesis rejected via the Kolmogorov–Smirnov test).

Accuracy is computed using two measures of distance between points on the projected curve Q and the target curve X. Both curves are densely re-sampled into m points each, Q = q_0, …, q_{m−1} and X = x_0, …, x_{m−1}, and we compute the average equi-parameter distance D_ep as

D_ep(Q) = (1/m) Σ_{i=0}^{m−1} d_E(q_i, x_i),    (7)

where d_E computes the Euclidean distance between two points in ℝ³. We also compute the average symmetric distance D_sym as

D_sym(Q) = (1/2m) Σ_{i=0}^{m−1} min_{x∈X} d_E(q_i, x) + (1/2m) Σ_{i=0}^{m−1} min_{q∈Q} d_E(q, x_i).

In other words, D_ep computes the distance between corresponding points on the two curves, and D_sym computes the average minimum distance from each point on one curve to the other curve.

Fig. 12. Curvature measures (a,b) indicate that mimicry produces significantly smoother and fairer curves than spraycan for both tracing (left) and re-creating tasks (right): (a) normalized geodesic curvature K_g; (b) normalized fairness deficiency F_g; (c) example strokes (orange points in (a,b)). Pairwise comparison plots between mimicry (y-axis) and spraycan (x-axis) favour mimicry for the vast majority of points (points below the y = x line). A linear regression fit (on the log plots) is shown as a dashed line.
Example curve pairs (orange points) for curve tracing (left) and re-creating (right) are also shown, with the target curve X in gray (c).

For both tracing and re-creation tasks, D_ep indicated that mimicry produced significantly better results than spraycan (see Table 1, Figures 1c, 12). The D_sym difference was not statistically significant, evidence of users correcting their strokes to stay close to the intended target curve (at the expense of curve aesthetic).

For most design applications, jagged projected curves, even if geometrically quite accurate, are aesthetically undesirable [McCrae and Singh 2008]. Curvature-based measures are typically used to measure the fairness of curves, and we report three such measures of curve aesthetic for the projected curve Q. We note that the smoothness quality of the user stroke P was similar to Q, and significantly poorer than the target curve X. This is expected, since drawing in mid-air smoothly and with precision is difficult, and such strokes are usually neatened post-hoc [Arora et al. 2018]. We therefore avoid comparisons to the target curve and simply report three aesthetic measures for a projected curve Q = q_0, …, q_{n−1}. We first refine Q by computing the exact geodesic on M between consecutive points of Q [Surazhsky et al. 2005], to create Q̂ with points q̂_0, …, q̂_{k−1}, k ≥ n. We choose to normalize our curvature measures using L_X, the length of the corresponding target stroke X. The normalized Euclidean curvature for Q is defined as

K_E(Q) = (1/L_X) Σ_{i=1}^{k−2} θ_i,    (8)

where θ_i is the angle between the two segments of Q̂ incident on q̂_i. Thus, K_E is the total discrete curvature of Q̂, normalized by the target curve length. Since Q̂ is embedded in M, we can also compute discrete geodesic curvature, computed as the deviation from the straightest geodesic for a curve on a surface.
Using a signed θ^g_i defined at each point q̂_i via Polthier and Schmies's definition [2006], we compute the normalized geodesic curvature as

K_g(Q) = (1/L_X) Σ_{i=1}^{k−2} |θ^g_i|.    (9)

Finally, we define fairness [Arora et al. 2017; McCrae and Singh 2008] as the first-order variation in geodesic curvature, thus defining the normalized fairness deficiency as

F_g(Q) = (1/L_X) Σ_{i=2}^{k−2} |θ^g_i − θ^g_{i−1}|.    (10)

For all three measures, a lower value indicates a smoother, more pleasing curve. Wilcoxon signed-rank tests on all three measures indicated that mimicry produced significantly smoother and better curves than spraycan (Table 1).

Quantitatively, we use the amount of head (HMD) and hand (controller) movement, and the stroke execution time τ, as proxies for physical effort. For head and hand translation, we first filter the position data with a Gaussian-weighted moving average filter, and define the normalized head/controller translation T_h and T_c as the length of the poly-line defined by the filtered head/controller positions, normalized by the length of the target curve L_X. An important ergonomic measure is the amount of head/hand rotation required to draw the mid-air stroke. We first de-noise or filter the forward and up vectors of the head/controller frame, using the same filter as for the positional data. We then re-orthogonalize the frames and compute the length of the curve defined by the filtered orientations in SO(3), using the angle between consecutive orientation data-points. We define the normalized head/controller rotation R_h and R_c as this orientation curve length, normalized by L_X.

Table 1 summarizes the physical effort measures. We observe lower controller translation for mimicry than spraycan, as well as lower head translation and

Fig. 13. Perceived difficulty of drawing for the six 3D shapes in the study (torus, cube, trebol, bunny, hand, fertility).
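The accuracy and curvature measures above are straightforward to prototype on sampled polylines. A minimal Python sketch (our own, not the paper's implementation): curves are uniformly re-sampled by arclength, and m = 200 is an illustrative default, since the paper's exact sample count is not restated here.

```python
import numpy as np

def resample(curve, m):
    """Uniform arclength re-sampling of a 3D polyline to m points."""
    curve = np.asarray(curve, dtype=float)
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], m)
    return np.stack([np.interp(s, t, curve[:, k]) for k in range(3)], axis=1)

def d_ep(Q, X, m=200):
    """Average equi-parameter distance, Eq. (7): mean distance between
    corresponding points of the re-sampled curves."""
    Qm, Xm = resample(Q, m), resample(X, m)
    return float(np.mean(np.linalg.norm(Qm - Xm, axis=1)))

def d_sym(Q, X, m=200):
    """Average symmetric distance: mean closest-point distance, both directions."""
    Qm, Xm = resample(Q, m), resample(X, m)
    D = np.linalg.norm(Qm[:, None, :] - Xm[None, :, :], axis=2)  # pairwise distances
    return 0.5 * (float(np.mean(D.min(axis=1))) + float(np.mean(D.min(axis=0))))

def k_e(curve, L_X):
    """Normalized Euclidean curvature, Eq. (8): total discrete turning angle
    of the (refined) curve, divided by the target-curve length L_X."""
    c = np.asarray(curve, dtype=float)
    total = 0.0
    for i in range(1, len(c) - 1):
        a, b = c[i] - c[i - 1], c[i + 1] - c[i]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += np.arccos(np.clip(cos, -1.0, 1.0))
    return total / L_X
```

Note that d_ep is sensitive to parametrization (it compares corresponding samples), whereas d_sym only measures geometric proximity, which is why the two can disagree for strokes that lag or overshoot along the target.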
Fig. 14. Participants perceived mimicry to be better than spraycan in terms of accuracy (a), curve aesthetic (b), and user effort (c): (a) perceived accuracy; (b) perceived smoothness; (c) physical and mental effort ratings, for curve tracing and curve re-creation, on 5-point Likert scales.
Fig. 15. Participants stated understanding spraycan projection better (left); 17/20 users stated an overall preference for mimicry over spraycan (right).

orientation for mimicry. Noteworthy is the significantly reduced controller rotation using mimicry, with spraycan unsurprisingly requiring 35% (tracing) and 44% (re-creating) more hand rotation from the user.

The study also provided an opportunity to test whether users actually tended to mimic their intended curve X in the mid-air stroke P. To quantify the "mimicriness" of a stroke, we subsample P and X into m points as in § 6.2.1, use the correspondence as in Eq. 7, and look at the variation in the distance (the distance between the closest pair of corresponding points subtracted from that of the farthest pair) as a percentage of the target length L_X. We call this measure the mimicry violation of a stroke. Intuitively, the lower the mimicry violation, the closer the stroke P is to being a perfect mimicry of X, going to zero if it is a precise translation of X. Notably, users exhibited very similar tendencies to mimic for both techniques—with 86% (mimicry) and 80% (spraycan) of strokes exhibiting mimicry violation below 25% of L_X, and 71% and 66% below 20% of L_X—suggesting that mimicry is indeed a natural tendency.

Recall that users repeated 2 of the 10 strokes per shape for both techniques. To analyze consistency across the repeated strokes, we compared the values of the stroke accuracy measure D_ep and the aesthetic measure F_g between the original stroke and the corresponding repeated stroke. Specifically, we measured the relative change |f(i) − f(i′)|/f(i), where (i, i′) is a pair of original and repeated strokes, and f(·) is either D_ep or F_g. Users were fairly consistent across both techniques, with the average consistency for D_ep being 35.4% for mimicry and 36.8% for spraycan, while for F_g it was 36.5% and 34.1%, respectively.
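The mimicry-violation measure described above can be sketched directly (an illustrative implementation of ours, on sampled polylines; m = 200 is an assumed default, not the paper's value):

```python
import numpy as np

def _resample(curve, m):
    """Uniform arclength re-sampling of a 3D polyline to m points."""
    curve = np.asarray(curve, dtype=float)
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], m)
    return np.stack([np.interp(s, t, curve[:, k]) for k in range(3)], axis=1)

def mimicry_violation(P, X, m=200):
    """Variation in corresponding-point distances between mid-air stroke P and
    target X, as a percentage of the target length L_X.
    Zero iff the distances are constant, e.g. when P is a pure translation of X."""
    Pm, Xm = _resample(P, m), _resample(X, m)
    d = np.linalg.norm(Pm - Xm, axis=1)          # equi-parameter distances
    seg = np.linalg.norm(np.diff(np.asarray(X, dtype=float), axis=0), axis=1)
    L_X = float(seg.sum())
    return 100.0 * (d.max() - d.min()) / L_X
```

A stroke translated rigidly off the target scores 0; a stroke whose offset from the target grows along its length scores in proportion to that growth.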
Note that the averages were computed after removing extreme outliers outside a 5σ threshold.

The mid- and post-study questionnaires elicited qualitative responses from participants on their perceived difficulty of drawing, curve accuracy and smoothness, mental and physical effort, understanding of the projection methods, and overall method of preference. Participants rated their perceived difficulty in drawing on the 6 study objects (Figure 13), validating our ordering of shapes in the experiment based on expected drawing difficulty.

Accuracy, smoothness, and physical/mental effort responses were collected via 5-point Likert scales. We consistently order the choices from 1 (worst) to 5 (best) in terms of user experience in Figure 14, and report median (M) scores here. Mimicry was perceived to be a more accurate projection method than spraycan for both tracing and re-creating, with most participants rating their curves as very accurate or somewhat accurate with mimicry (compared to 2 for spraycan) (Figure 14a). User perception of stroke smoothness was also consistent with the quantitative results, with mimicry (re-creating M = 4) clearly outperforming spraycan (re-creating M = 2) (Figure 14b). Lastly, with no need for controller rotation, mimicry (M = 3) was perceived as less physically demanding than spraycan.

Spraycan, with its physical analogy and mathematically precise definition, was clearly understood by all 20 participants (17 very well, 3 somewhat) (Figure 15a). Mimicry, conveyed as "drawing a mid-air stroke on or near the object as similar in shape as possible to the intended projection", was less clear to users (7 very well, 11 somewhat, 3 not at all). Despite not understanding the method, the 3 participants were able to create curves that were both accurate and smooth. Further, users perceived mimicry as less cognitively taxing than spraycan (M = 2) (Figure 14c).

Fig. 16. Gallery of free-form curves in red, drawn using mimicry. From left to right: tracing geometric features on the bunny, smooth maze-like curves on the cube, a maze-like curve with sharp corners and a spiral on the trebol, and artistic tattoo motifs on the hand. Some mid-air strokes (black) are hidden for clarity.

We believe this may be because users were less prone to consciously controlling their stroke direction, and rather focused on drawing. The tendency to mimic may have thus manifested sub-consciously, as we had observed in pilot testing. The most important qualitative question was user preference (Figure 15b): 85% of the 20 participants preferred mimicry (10 highly preferred, 7 somewhat preferred). The remaining users were neutral (1/20) or somewhat preferred spraycan (2/20).
We also asked participants to elaborate on their stated preferences and ratings. Participants (
P4,8,16,17) noted discontinuous "jumps" caused by spraycan, and felt that the continuity guarantee of mimicry "seemed to deal with the types of jitter and inaccuracy VR setups are prone to better" (P6), and "could stabilize my drawing" (P9). P9 and P15 felt that mimicry projection was smoothing their strokes (no smoothing was employed): we believe this may be the effect of noise and inadvertent controller rotation, which mimicry ignores, but which can cause large variations with spraycan, perceived as curve smoothing.
P4,17) felt that rotating the hand smoothly while drawing was difficult, while others missed the spraycan ability to simply use hand rotation to sweep out long projected curves from a distance (
P2,7). Participants commented on physical effort: "Mimicry method seemed to required [sic] much less head movement, hand rotation and mental planning" (P4).

Fig. 17.
Mimicry used to interactively paint textures on 3D objects.
Participants appreciated the anchored control of mimicry in high-curvature regions (
P1,2,4,8), also noting that with spraycan, "the curvature of the surface could completely mess up my stroke" (P1). Some participants did feel that spraycan could be preferable when drawing on near-flat regions of the mesh (
P3,14,19,20). Finally, participants who preferred spraycan felt that mimicry required more thinking: "with mimicry, there was extra mental effort needed to predict where the line would go on each movement" (P3), or because mimicry felt "unintuitive" (P7) due to their prior experience using a spraycan technique. Some who preferred mimicry found it difficult to use initially, but felt it got easier over the course of the experiment (
P4,17).

Complex 3D curves on arbitrary surfaces can be drawn in AR/VR with a single stroke using mimicry (Figure 16). Drawing such curves on 3D virtual objects is fundamental to many applications, including direct painting of textures [Schmidt et al. 2006]; tangent vector field design [Fisher et al. 2007]; texture synthesis [Lefebvre and Hoppe 2006; Turk 2001]; interactive selection, annotation, and object segmentation [Chen et al. 2009]; and seams for shape parametrization [Lévy et al. 2002; Rabinovich et al. 2017; Sawhney and Crane 2017], registration [Gehre et al. 2018], and quad meshing [Tong et al. 2006]. We showcase the utility and quality of mimicry curves within example applications (also see the supplemental video).
Texture Painting:
Figures 1e and 17 show examples of textures painted in VR using mimicry. The long, smooth, wraparound curves on the torus are especially hard to draw with 2D interfaces. Our implementation uses Discrete Exponential Maps (DEM) [Schmidt et al. 2006] to compute a dynamic local parametrization around each projected point q_i, to create brush strokes or geometric stamps on the object.

Mesh Segmentation:
Figures 1e and 18 show mimicry used for interactive segmentation in VR. In our implementation, users draw an almost-closed curve Q = {q_0, …, q_{n−1}} on the object using mimicry. We snap each point q_i to its nearest mesh vertex, use Dijkstra's shortest path to connect consecutive vertices, and close the cycle of vertices. A mesh region is then selected or segmented as the set of mesh faces partitioned by these cycles. Such cycles are easy to draw in AR/VR, but often require view changes and multiple strokes in 2D.
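The cycle construction above can be sketched as follows. This is an illustrative fragment of ours, not the paper's code: it assumes the drawn points have already been snapped to vertex indices, and that mesh connectivity is given as a weighted adjacency map (e.g., edge lengths).

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Shortest vertex path in a weighted graph adj: {v: [(u, w), ...]}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v)
        if v == dst:
            break
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u], prev[u] = nd, v
                heapq.heappush(pq, (nd, u))
    path = [dst]
    while path[-1] != src:          # walk predecessors back to the source
        path.append(prev[path[-1]])
    return path[::-1]

def curve_to_vertex_cycle(snapped, adj):
    """Close a drawn curve into a vertex cycle: connect consecutive snapped
    vertices (and the last back to the first) by shortest edge paths."""
    cycle = []
    for a, b in zip(snapped, snapped[1:] + snapped[:1]):
        cycle.extend(dijkstra_path(adj, a, b)[:-1])  # drop b: next path starts there
    return cycle
```

With edge weights set to Euclidean edge lengths, the resulting vertex cycle approximates the drawn curve while staying on the mesh graph, ready to partition faces into regions.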
Vector fields on meshes are commonly used for texture synthesis [Turk 2001], guiding fluid simulations [Stam 2003], and non-photorealistic rendering [Hertzmann and Zorin 2000]. We use mimicry curves as soft constraints to guide the vector field generation of Fisher et al. [2007]. Figure 19 shows example

Fig. 18. Interactive segmentation by drawing curves onto torus and bunny meshes. Each segmented portion is shown with a unique colour.

Fig. 19. Smooth mimicry curves (red) provide constraints for vector field design [Fisher et al. 2007], which we visualize via Line Integral Convolutions [Cabral and Leedom 1993].

vector fields, visualized using Line Integral Convolutions [Cabral and Leedom 1993] in the texture domain.

We have presented a detailed investigation of the problem of real-time inked drawing on 3D virtual objects in immersive environments. We show the importance of stroke context when projecting mid-air 3D strokes, and explore the design space of anchored projections. A 20-participant remote study showed mimicry to be preferred over the established spraycan projection for projecting mid-air strokes on 3D objects in AR/VR. Both mimicry projection and performing VR studies in the wild do have some limitations. Further, while user stroke processing for 2D interfaces is a mature field of research, mid-air stroke processing for AR/VR is relatively nascent, with many directions for future work.

"In the wild" VR Study Limitations.
“In the wild” VR Study Limitations. Ongoing pandemic restrictions presented both a challenge and an opportunity to remotely conduct a more natural study in the wild, with a variety of consumer VR hardware and setups. The enthusiasm of the VR community allowed us to readily recruit 20 diligent users, albeit with a bias towards young adult males. While the variation in VR headsets seemed to be of little consequence, there was a notable difference in the shape and size of the 3D controllers. Controller grip and weight can certainly impact mid-air drawing posture and stroke behavior. Controller size is also significant: a larger Vive controller, for example, has a higher chance of occluding the target object and projected curve than a smaller Oculus Touch controller. We could have mitigated the impact of controller size by rendering a standard drawing tool in VR, but we preferred to remain application-agnostic, rendering the familiar default controller that matched the physical controller in participants' hands. Further, no participant explicitly mentioned the controller getting in the way of their ability to draw. Overall, our study contributes a high-quality VR data corpus comprising ≈
Mimicry Limitations. Our lack of a concise mathematical definition of observed stroke mimicry makes it harder to precisely communicate it to users. While a precise mathematical formulation may exist, conveying it to non-technical users can still be a challenging task. Mimicry ignores controller orientation, producing smoother strokes with less effort, but can give participants a reduced sense of sketch control (P2, P3, P6). We hypothesize that the reduced sense of control is in part due to the tendency of anchored smooth-closest-point to shorten the user stroke upon projection, sometimes creating a feeling of lag. Spraycan-like techniques, in contrast, have a sense of amplified immediacy, and the explicit ability to make lagging curves catch up by rotating the controller in place.
Future Work.
Our goal was to develop a general real-time inked projection with minimal stroke context via anchoring. Optimizing the method to account for the entire partially projected stroke may improve projection quality. Relaxing the restriction of real-time inking would allow techniques such as spline fitting and global optimization that can account for the entire user stroke and the geometric features of the target object. Local parametrizations such as DEM (§ 7) can be used to incrementally grow or shrink the projected curve, so that it does not lag the user stroke. Hybrid projections leveraging both proximity and raycasting are another avenue for future work.

On the interactive side, we did experiment with feedback to encourage users to draw closer to a 3D object. For example, we tried varying the appearance of the line connecting the controller to the projected point based on its length, or providing aural/haptic feedback if the controller moved further than a certain distance from the object. While these techniques can help users in specific drawing or tracing tasks, we found them to be distracting and harmful to stroke quality for general stroke projection. Bimanual interaction in VR, such as rotating the shape with one hand while drawing on it with the other (suggested by P3 and P19), can also be explored.

Perhaps the most exciting area of future work is employing data-driven techniques to infer the user-intended projection, perhaps customized to the drawing style of individual users. Our study code and data will be made publicly available to aid in such endeavours.

In summary, this paper presents early research on the processing and projection of mid-air strokes drawn on and around 3D objects, which we hope will inspire further work and applications in AR/VR.
ACKNOWLEDGMENTS
We are thankful to Michelle Lei for developing the initial implementation of the context-free techniques, and to Jiannan Li and Debanjana Kundu for helping pilot our methods. We also thank the various 3D model creators and repositories for the models we utilized: the Stanford bunny model courtesy of the Stanford 3D Scanning Repository, the trebol model provided by Shao et al. [2012], the fertility model courtesy of the AIM@SHAPE repository, the hand model provided by Jeffo89 on turbosquid.com, and the cup model (Figure 2) provided by Daniel Noree on thingiverse.com under a CC BY 4.0 license.
REFERENCES
Proceedings of the Eighth Eurographics Symposium on Sketch-Based Interfaces and Modeling (Vancouver, British Columbia, Canada) (SBIM '11). Association for Computing Machinery, New York, NY, USA, 133–140. https://doi.org/10.1145/2021164.2021189
Rahul Arora, Rubaiat Habib Kazi, Fraser Anderson, Tovi Grossman, Karan Singh, and George Fitzmaurice. 2017. Experimental Evaluation of Sketching on Surfaces in VR. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI '17). Association for Computing Machinery, New York, NY, USA, 5643–5654. https://doi.org/10.1145/3025453.3025474
Rahul Arora, Rubaiat Habib Kazi, Tovi Grossman, George Fitzmaurice, and Karan Singh. 2018. SymbiosisSketch: Combining 2D & 3D Sketching for Designing Detailed 3D Objects in Situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, Quebec, Canada) (CHI '18). ACM, New York, NY, USA, 15. https://doi.org/10.1145/3173574.3173759
Seok-Hyung Bae, Ravin Balakrishnan, and Karan Singh. 2008. ILoveSketch: As-Natural-as-Possible Sketching System for Creating 3D Curve Models. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (Monterey, CA, USA) (UIST '08). ACM, New York, NY, USA, 151–160. https://doi.org/10.1145/1449715.1449740
Gavin Barill, Neil G. Dickson, Ryan Schmidt, David I. W. Levin, and Alec Jacobson. 2018. Fast Winding Numbers for Soups and Clouds. ACM Trans. Graph. 37, 4, Article 43 (July 2018), 12 pages. https://doi.org/10.1145/3197517.3201337
Brian Cabral and Leith Casey Leedom. 1993. Imaging Vector Fields Using Line Integral Convolution. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (Anaheim, CA) (SIGGRAPH '93). Association for Computing Machinery, New York, NY, USA, 263–270. https://doi.org/10.1145/166117.166151
Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or. 2013. 3-Sweep: Extracting Editable Objects from a Single Photo. ACM Trans. Graph. 32, 6, Article 195 (Nov. 2013), 10 pages. https://doi.org/10.1145/2508363.2508378
Xiaobai Chen, Aleksey Golovinskiy, and Thomas Funkhouser. 2009. A Benchmark for 3D Mesh Segmentation. ACM Trans. Graph. 28, 3, Article 73 (July 2009), 12 pages. https://doi.org/10.1145/1531326.1531379
Patrick Coleman and Karan Singh. 2006. Cords: Geometric Curve Primitives for Modeling Contact. IEEE Computer Graphics and Applications 26, 3 (2006), 72–79.
Michael A. A. Cox and Trevor F. Cox. 2008. Multidimensional Scaling. In Handbook of Data Visualization. Springer, New York, NY, USA, 315–347.
Chris De Paoli and Karan Singh. 2015. SecondSkin: Sketch-Based Construction of Layered 3D Models. ACM Trans. Graph. 34, 4, Article 126 (July 2015), 10 pages. https://doi.org/10.1145/2766948
Michael F. Deering. 1995. HoloSketch: A Virtual Reality Sketching/Animation Tool. ACM Transactions on Computer-Human Interaction (TOCHI) 2, 3 (1995), 220–238.
Lubin Fan, Ruimin Wang, Linlin Xu, Jiansong Deng, and Ligang Liu. 2013. Modeling by Drawing with Shadow Guidance. Computer Graphics Forum 32, 7 (2013), 157–166. https://doi.org/10.1111/cgf.12223
Zhe Fan, Ma Chi, Arie Kaufman, and Manuel M. Oliveira. 2004. A Sketch-Based Interface for Collaborative Design. In Sketch Based Interfaces and Modeling, Joaquim Armando Pires Jorge, Eric Galin, and John F. Hughes (Eds.). The Eurographics Association, Geneve, Switzerland, 143–150. https://doi.org/10.2312/SBM/SBM04/143-150
Matthew Fisher, Peter Schröder, Mathieu Desbrun, and Hugues Hoppe. 2007. Design of Tangent Vector Fields. ACM Trans. Graph. 26, 3 (July 2007), 56–es. https://doi.org/10.1145/1276377.1276447
Hongbo Fu, Yichen Wei, Chiew-Lan Tai, and Long Quan. 2007. Sketching Hairstyles. In Proceedings of the 4th Eurographics Workshop on Sketch-Based Interfaces and Modeling (Riverside, California) (SBIM '07). Association for Computing Machinery, New York, NY, USA, 31–36. https://doi.org/10.1145/1384429.1384439
Ran Gal, Olga Sorkine, Niloy J. Mitra, and Daniel Cohen-Or. 2009. iWIRES: An Analyze-and-Edit Approach to Shape Manipulation. ACM Transactions on Graphics (SIGGRAPH) 28, 3 (2009).
Computer Graphics Forum 37, 5 (2018), 1–12. https://doi.org/10.1111/cgf.13486
Bruce Gooch and Amy Gooch. 2001. Non-Photorealistic Rendering.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Minneapolis, Minnesota, USA) (CHI '02). ACM, New York, NY, USA, 121–128. https://doi.org/10.1145/503376.503398
Frank Heckel, Jan H. Moltz, Christian Tietjen, and Horst K. Hahn. 2013. Sketch-Based Editing Tools for Tumour Segmentation in 3D Medical Images. Computer Graphics Forum 32, 8 (2013), 144–157. https://doi.org/10.1111/cgf.12193
Aaron Hertzmann and Denis Zorin. 2000. Illustrating Smooth Surfaces. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00). ACM Press/Addison-Wesley Publishing Co., USA, 517–526. https://doi.org/10.1145/344779.345074
Yixin Hu, Qingnan Zhou, Xifeng Gao, Alec Jacobson, Denis Zorin, and Daniele Panozzo. 2018. Tetrahedral Meshing in the Wild. ACM Trans. Graph. 37, 4, Article 60 (July 2018), 14 pages. https://doi.org/10.1145/3197517.3201353
Takeo Igarashi, Satoshi Matsuoka, and Hidehiko Tanaka. 1999. Teddy: A Sketching Interface for 3D Freeform Design. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99). ACM Press/Addison-Wesley Publishing Co., USA, 409–416. https://doi.org/10.1145/311535.311602
Bret Jackson and Daniel F. Keefe. 2016. Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR. IEEE Transactions on Visualization and Computer Graphics 22, 4 (2016), 1442–1451.
Alec Jacobson, Daniele Panozzo, et al. 2018. libigl: A simple C++ geometry processing library. https://libigl.github.io/.
Thomas Jung, Mark D. Gross, and Ellen Yi-Luen Do. 2002. Annotating and Sketching on 3D Web Models. In Proceedings of the 7th International Conference on Intelligent User Interfaces (San Francisco, California, USA) (IUI '02). Association for Computing Machinery, New York, NY, USA, 95–102. https://doi.org/10.1145/502716.502733
Robert D. Kalnins, Lee Markosian, Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee, Philip L. Davidson, Matthew Webb, John F. Hughes, and Adam Finkelstein. 2002. WYSIWYG NPR: Drawing Strokes Directly on 3D Models. ACM Trans. Graph. 21, 3 (July 2002), 755–762. https://doi.org/10.1145/566654.566648
Sho Kamuro, Kouta Minamizawa, and Susumu Tachi. 2011. 3D Haptic Modeling System Using Ungrounded Pen-Shaped Kinesthetic Display. In . IEEE, New York, NY, USA, 217–218.
Levent Burak Kara and Kenji Shimada. 2007. Sketch-Based 3D-Shape Creation for Industrial Styling Design. IEEE Comput. Graph. Appl. 27, 1 (Jan. 2007), 60–71. https://doi.org/10.1109/MCG.2007.18
Daniel Keefe, Robert Zeleznik, and David Laidlaw. 2007. Drawing on Air: Input Techniques for Controlled 3D Line Illustration. IEEE Transactions on Visualization and Computer Graphics 13, 5 (2007), 1067–1081.
Daniel F. Keefe, Daniel Acevedo Feliz, Tomer Moscovich, David H. Laidlaw, and Joseph J. LaViola. 2001. CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience. In Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D '01). Association for Computing Machinery, New York, NY, USA, 85–93. https://doi.org/10.1145/364338.364370
Venkat Krishnamurthy and Marc Levoy. 1996. Fitting Smooth Surfaces to Dense Polygon Meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96). Association for Computing Machinery, New York, NY, USA, 313–324. https://doi.org/10.1145/237170.237270
Vojtěch Krs, Ersin Yumer, Nathan Carr, Bedrich Benes, and Radomír Měch. 2017. Skippy: Single View 3D Curve Interactive Modeling. ACM Trans. Graph. 36, 4, Article 128 (July 2017), 12 pages. https://doi.org/10.1145/3072959.3073603
Kin Chung Kwan and Hongbo Fu. 2019. Mobi3DSketch: 3D Sketching in Mobile AR. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300406
Sylvain Lefebvre and Hugues Hoppe. 2006. Appearance-Space Texture Synthesis. ACM Trans. Graph. 25, 3 (July 2006), 541–548. https://doi.org/10.1145/1141911.1141921
Bruno Lévy, Sylvain Petitjean, Nicolas Ray, and Jérôme Maillot. 2002. Least Squares Conformal Maps for Automatic Texture Atlas Generation. ACM Trans. Graph. 21, 3 (July 2002), 362–371. https://doi.org/10.1145/566654.566590
Mayra D. Barrera Machuca, Paul Asente, Wolfgang Stuerzlinger, Jingwan Lu, and Byungmoon Kim. 2018. Multiplanes: Assisted Freehand VR Sketching. In Proceedings of the Symposium on Spatial User Interaction (Berlin, Germany) (SUI '18). Association for Computing Machinery, New York, NY, USA, 36–47. https://doi.org/10.1145/3267782.3267786
Mayra Donaji Barrera Machuca, Wolfgang Stuerzlinger, and Paul Asente. 2019. The Effect of Spatial Ability on Immersive 3D Drawing. In Proceedings of the ACM Conference on Creativity & Cognition (C&C '19). ACM, New York, NY, USA, 173–186.
James McCrae and Karan Singh. 2008. Sketching Piecewise Clothoid Curves. In Proceedings of the Fifth Eurographics Conference on Sketch-Based Interfaces and Modeling (Annecy, France) (SBM '08). Eurographics Association, Goslar, DEU, 1–8.
James McCrae, Karan Singh, and Niloy J. Mitra. 2011. Slices: A Shape-Proxy Based on Planar Sections. ACM Trans. Graph. 30, 6 (Dec. 2011), 1–12. https://doi.org/10.1145/2070781.2024202
James McCrae, Nobuyuki Umetani, and Karan Singh. 2014. FlatFitFab: Interactive Modeling with Planar Sections. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, USA) (UIST '14). Association for Computing Machinery, New York, NY, USA, 13–22. https://doi.org/10.1145/2642918.2647388
Min Meng, Lubin Fan, and Ligang Liu. 2011. iCutter: A Direct Cut-out Tool for 3D Shapes. Computer Animation and Virtual Worlds 22, 4 (2011), 335–342. https://doi.org/10.1002/cav.422
Andrew Nealen, Takeo Igarashi, Olga Sorkine, and Marc Alexa. 2007. FiberMesh: Designing Freeform Surfaces with 3D Curves. ACM Trans. Graph.
Computers and Graphics 33, 1 (2009), 85–103. https://doi.org/10.1016/j.cag.2008.09.013
Michaël Ortega and Thomas Vincent. 2014. Direct Drawing on 3D Shapes with Automated Camera Control. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Toronto, Ontario, Canada) (CHI '14). Association for Computing Machinery, New York, NY, USA, 2047–2050. https://doi.org/10.1145/2556288.2557242
Patrick Paczkowski, Min H. Kim, Yann Morvan, Julie Dorsey, Holly Rushmeier, and Carol O'Sullivan. 2011. Insitu: Sketching Architectural Designs in Context. ACM Trans. Graph. 30, 6 (Dec. 2011), 1–10. https://doi.org/10.1145/2070781.2024216
Daniele Panozzo, Ilya Baran, Olga Diamanti, and Olga Sorkine-Hornung. 2013. Weighted Averages on Surfaces. ACM Trans. Graph. 32, 4, Article 60 (July 2013), 12 pages. https://doi.org/10.1145/2461912.2461935
Konrad Polthier and Markus Schmies. 2006. Straightest Geodesics on Polyhedral Surfaces. In ACM SIGGRAPH 2006 Courses (Boston, Massachusetts) (SIGGRAPH '06). Association for Computing Machinery, New York, NY, USA, 30–38. https://doi.org/10.1145/1185657.1185664
Michael Rabinovich, Roi Poranne, Daniele Panozzo, and Olga Sorkine-Hornung. 2017. Scalable Locally Injective Mappings. ACM Trans. Graph. 36, 2, Article 16 (April 2017), 16 pages. https://doi.org/10.1145/2983621
Rohan Sawhney and Keenan Crane. 2017. Boundary First Flattening. ACM Trans. Graph. 37, 1, Article 5 (Dec. 2017), 14 pages. https://doi.org/10.1145/3132705
Steven Schkolne, Michael Pruett, and Peter Schröder. 2001. Surface Drawing: Creating Organic 3D Shapes with the Hand and Tangible Tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 261–268.
Johannes Schmid, Martin Sebastian Senn, Markus Gross, and Robert W. Sumner. 2011. OverCoat: An Implicit Canvas for 3D Painting. ACM Trans. Graph. 30, 4, Article 28 (July 2011), 10 pages. https://doi.org/10.1145/2010324.1964923
Ryan Schmidt. 2017. geometry3sharp: Open-Source (Boost-license) C# library.
Ryan Schmidt, Cindy Grimm, and Brian Wyvill. 2006. Interactive Decal Compositing with Discrete Exponential Maps. ACM Trans. Graph. 25, 3 (July 2006), 605–613. https://doi.org/10.1145/1141911.1141930
Ryan Schmidt, Azam Khan, Karan Singh, and Gord Kurtenbach. 2009. Analytic Drawing of 3D Scaffolds. In ACM SIGGRAPH Asia 2009 Papers (Yokohama, Japan) (SIGGRAPH Asia '09). Association for Computing Machinery, New York, NY, USA, Article 149, 10 pages. https://doi.org/10.1145/1661412.1618495
Ryan Schmidt and Karan Singh. 2010. Meshmixer: An Interface for Rapid Mesh Composition. In ACM SIGGRAPH 2010 Talks (Los Angeles, California) (SIGGRAPH '10). ACM, New York, NY, USA, Article 6, 1 page. https://doi.org/10.1145/1837026.1837034
Cloud Shao, Adrien Bousseau, Alla Sheffer, and Karan Singh. 2012. CrossShade: Shading Concept Sketches Using Cross-Section Curves. ACM Trans. Graph. 31, 4, Article 45 (July 2012), 11 pages. https://doi.org/10.1145/2185520.2185541
Hang Si. 2015. TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Trans. Math. Softw. 41, 2, Article 11 (Feb. 2015), 36 pages. https://doi.org/10.1145/2629697
Karan Singh and Eugene Fiume. 1998. Wires: A Geometric Deformation Technique. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998, Orlando, FL, USA, July 19-24, 1998.
Proceedings of Shape Modeling International (Genova, Italy). IEEE Computer Society Press, Piscataway, NJ, USA, 191–199.
Jos Stam. 2003. Flows on Surfaces of Arbitrary Topology. In ACM SIGGRAPH 2003 Papers (San Diego, California) (SIGGRAPH '03). Association for Computing Machinery, New York, NY, USA, 724–731. https://doi.org/10.1145/1201775.882338
Lucian Stanculescu, Raphaëlle Chaine, Marie-Paule Cani, and Karan Singh. 2013. Sculpting Multi-Dimensional Nested Structures. Comput. Graph. 37, 6 (Oct. 2013), 753–763. Special issue: Shape Modeling International (SMI) Conference 2013.
Vitaly Surazhsky, Tatiana Surazhsky, Danil Kirsanov, Steven J. Gortler, and Hugues Hoppe. 2005. Fast Exact and Approximate Geodesics on Meshes. ACM Trans. Graph. 24, 3 (July 2005), 553–560. https://doi.org/10.1145/1073204.1073228
Kenshi Takayama, Daniele Panozzo, Alexander Sorkine-Hornung, and Olga Sorkine-Hornung. 2013. Sketch-Based Generation and Editing of Quad Meshes. ACM Trans. Graph. 32, 4, Article 97 (July 2013), 8 pages. https://doi.org/10.1145/2461912.2461955
Yannick Thiel, Karan Singh, and Ravin Balakrishnan. 2011. Elasticurves: Exploiting Stroke Dynamics and Inertia for the Real-Time Neatening of Sketched 2D Curves. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (Santa Barbara, California, USA) (UIST '11). Association for Computing Machinery, New York, NY, USA, 383–392. https://doi.org/10.1145/2047196.2047246
Yiying Tong, Pierre Alliez, David Cohen-Steiner, and Mathieu Desbrun. 2006. Designing Quadrangulations with Discrete Harmonic Forms. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing (Cagliari, Sardinia, Italy) (SGP '06). Eurographics Association, Goslar, DEU, 201–210.
Greg Turk. 2001. Texture Synthesis on Surfaces. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01). Association for Computing Machinery, New York, NY, USA, 347–354. https://doi.org/10.1145/383259.383297
Emmanuel Turquin, Jamie Wither, Laurence Boissieux, Marie-Paule Cani, and John F. Hughes. 2007. A Sketch-Based Interface for Clothing Virtual Characters. IEEE Comput. Graph. Appl. 27, 1 (Jan. 2007), 72–81. https://doi.org/10.1109/MCG.2007.1
Gerold Wesche and Hans-Peter Seidel. 2001. FreeDrawer: A Free-Form Sketching System on the Responsive Workbench. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM, New York, NY, USA, 167–174.
Eva Wiese, Johann Habakuk Israel, Achim Meyer, and Sara Bongartz. 2010. Investigating the Learnability of Immersive Free-Hand Sketching. In Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium (Annecy, France) (SBIM '10). Eurographics Association, Goslar, DEU, 135–142.
Jun Xing, Koki Nagano, Weikai Chen, Haotian Xu, Li-Yi Wei, Yajie Zhao, Jingwan Lu, Byungmoon Kim, and Hao Li. 2019. HairBrush for Immersive Data-Driven Hair Modeling. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, LA, USA) (UIST '19). Association for Computing Machinery, New York, NY, USA, 263–279. https://doi.org/10.1145/3332165.3347876
Baoxuan Xu, William Chang, Alla Sheffer, Adrien Bousseau, James McCrae, and Karan Singh. 2014. True2Form: 3D Curve Networks from 2D Sketches via Selective Regularization. ACM Trans. Graph. 33, 4, Article 131 (July 2014), 13 pages. https://doi.org/10.1145/2601097.2601128