FibAR: Embedding Optical Fibers in 3D Printed Objects for Active Markers in Dynamic Projection Mapping
© 2020 IEEE. This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/
Daiki Tone, Daisuke Iwai, Member, IEEE, Shinsaku Hiura, Member, IEEE, and Kosuke Sato, Member, IEEE
Fig. 1. Overview of our proposed system. (a) The internal structure of a projection object. Optical fibers with the same color are connected to the same IR LED attached to the bottom of the object. (b) The projection object fabricated from a multi-material 3D printer. (c) There are seven holes on the bottom of the object, into which IR LEDs are inserted. (d) A captured IR image shows IR light emitted from the tip of each fiber, which is transmitted through the fiber from one of the LEDs. (e) A dynamic projection mapping result.
Abstract—This paper presents a novel active marker for dynamic projection mapping (PM) that emits a temporal blinking pattern of infrared (IR) light representing its ID. We used a multi-material three-dimensional (3D) printer to fabricate a projection object with optical fibers that can guide IR light from LEDs attached to the bottom of the object. The aperture of an optical fiber is typically very small; thus, it is unnoticeable to human observers under projection and can be placed on a strongly curved part of a projection surface. In addition, the working range of our system can be larger than that of previous marker-based methods, as the blinking patterns can theoretically be recognized by a camera placed at a wide range of distances from the markers. We propose an automatic marker placement algorithm to spread multiple active markers over the surface of a projection object such that its pose can be robustly estimated using images captured from arbitrary directions. We also propose an optimization framework for determining the routes of the optical fibers in such a way that collisions of the fibers are avoided while minimizing the loss of light intensity in the fibers. Through experiments conducted using three fabricated objects containing strongly curved surfaces, we confirmed that the proposed method can achieve accurate dynamic PM over a significantly wide working range.
Index Terms—Projection mapping, spatial augmented reality, multi-material 3D printer, optical fiber, active marker
1 Introduction
Projection mapping (PM), also known as spatial augmented reality (SAR) or projection-based AR, can seamlessly merge physical and virtual worlds via projection onto real surfaces [8]. It has been integrated and applied in many fields such as education [12], vehicle design [22, 37], art creation [10, 34], daily life support (e.g., searching everyday objects [13, 14]), virtual restoration of historical objects [3], and entertainment (e.g., games [15] and theme parks [23]). While static surfaces were typically used in these established systems, dynamic PM involving a moving projection surface is still not widely available. One of the major technical challenges is the geometric registration of a projector to an arbitrarily moving surface such that projected textures appear stuck onto it [11].

The visual marker-based approach is a robust and computationally efficient solution. However, there is a technical issue unique to PM that does not have to be considered in typical video see-through AR, i.e., markers on a projection surface need to be unnoticeable under projection. Researchers have tackled this by proposing several techniques, such as drawing markers using a special ink that is visible only in the near-infrared (IR) spectrum [28] and diminishing visible markers using a radiometric compensation technique [4, 5]. However, these systems apply spatial patterns to represent the IDs of markers, which results in a tradeoff between the marker size and working range.

• Daiki Tone and Kosuke Sato are with Osaka University.
• Daisuke Iwai is with Osaka University and JST, PRESTO. E-mail: [email protected].
• Shinsaku Hiura is with University of Hyogo.
Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to: [email protected]. Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx
The markers need to be large enough to be identified by a camera. As a result, it is necessary to place a marker on a relatively planar surface. If it is placed on a strongly uneven or curved surface, part of the marker can easily be occluded from the camera, resulting in the failure of ID recognition. Therefore, the shape of the projection surface is heavily restricted. Although small markers relax this restriction, they significantly limit the working range, i.e., the systems work only when the distance between the camera and the projection object is short enough to correctly recognize the markers.

In this paper, we propose the use of an active marker emitting a temporal blinking pattern of IR light (which represents its ID) that is not susceptible to the previous tradeoff. A naïve implementation would be to embed LEDs in a projection object at all the marker positions. However, doing so requires tedious manual work to install multiple LEDs under the surface and to connect them to a driving circuit via wires that also need to be manually installed in the object. On the other hand, by leveraging a recent advancement in multi-material 3D printing technology, we can apply another strategy that requires almost no manual work for marker installation. Specifically, a user only needs to attach a base component (consisting of near-IR LEDs, a circuit, and a battery) to the bottom of the projection object. The IR light from the LEDs can be guided through optical fibers to the surface of the projection object. The object can be printed out from a multi-material 3D printer that can automatically embed the optical fibers in the object. The aperture of an optical fiber is typically very small; thus, it is unnoticeable to human observers under projection and can be placed on a non-planar part of a projection surface.

Fig. 2. The overall system diagram.
In addition, the working range of the proposed system can be larger than that of previous marker-based methods because the blinking patterns can theoretically be recognized by a camera placed at a wide range of distances from the markers.

In the proposed method, the placement of each marker on an uneven surface needs to be carefully determined to avoid occlusion. To this end, we propose an automatic marker placement algorithm for spreading multiple active markers over the surface of a projection object such that its pose can be robustly estimated from captured images obtained from arbitrary directions. Our algorithm determines the marker placement under the condition that multiple markers share the same blinking pattern and are connected to the same IR LED. If we simply assigned a unique blinking pattern to each active marker, the number of IR LEDs and the number of markers would need to be identical. However, the number of markers is relatively large for the use case we considered (e.g., more than 40 in our experiment), and a large number of LEDs cannot be attached to the bottom surface of the object due to its limited space. Also, long bit depths would be needed to assign unique blinking patterns to the markers. The length of the pattern does not matter as long as tracking of the object is successful; however, a long pattern requires a long time to recover from a tracking failure. Thus, we assign the same blinking patterns to multiple markers to reduce the number of IR LEDs. Once the marker placement is decided, the next step is to compute the routes of the optical fibers from each LED to the corresponding markers on the surface. We developed an optimization framework for determining the routes of the optical fibers in such a way that collisions of the fibers are avoided and the loss of light intensity in the fibers is minimized. Figure 2 shows the overall system diagram.

In this study, we present the technical details of our automatic marker placement and optimal fiber routing algorithms.
Afterward, we discuss the experiments we conducted using fabricated projection objects to evaluate the tracking accuracy and working range of a prototype system. To summarize, this paper makes the following three prime contributions:
• We introduce a novel active AR marker for dynamic PM applications based on optical fibers that are automatically embedded in a projection object, leveraging the recent multi-material capability of 3D printing technology.
• We propose an automatic marker placement algorithm that avoids marker occlusions on an uneven surface for robust pose estimation under the condition that multiple markers share the same binary pattern.
• We optimize the routes of optical fibers from each LED to the corresponding multiple markers to avoid collisions of the fibers and to maximize the intensities of the guided IR light.

2 Related Works
Bandyopadhyay et al. proposed the first prototype of dynamic PM, in which a user draws on a handheld object using projected imagery [6]. Since then, dynamic PM has been explored mainly in the research context of user interfaces [9, 16, 41]. Although many interaction techniques were proposed, accurate geometric registration was out of focus in these projects. Only recently have researchers started to focus on manipulating the appearance (e.g., texture or BRDF) of a moving surface rather than just projecting GUI widgets or pictures. To this end, accurate geometric registration becomes an indispensable technical challenge. In the subsequent section, we discuss prior works on dynamic PM in the latter category.

Previously presented geometric registration methods in dynamic PM mainly applied computer vision techniques, and fall into two groups: marker-less and marker-based approaches. The marker-less approach suits the context of PM very well because it does not require attaching markers to a projection surface, which would significantly affect projected appearances. A projector's six-degree-of-freedom (6DOF) transform relative to a rigid-body projection object was successfully estimated by matching its original model with measured information such as color [33], edges [26], and depth images [36]. However, these methods cannot be applied to projection surfaces having either symmetrical structures (e.g., flat, cylindrical, and spherical surfaces) or periodic shapes (e.g., wavy surfaces) because their 6DOF poses cannot be estimated uniquely. In contrast to the works dealing with rigid-body objects, Bermano et al. proposed a method to manipulate the appearance of a human face, a non-rigid and deformable surface, by applying a commercially available face tracker [7]. However, this method worked only for a very specific surface (i.e., the face). Recently, Miyashita et al. proposed the application of high-speed cameras to measure the surface normal of a projection object and to manipulate the apparent surface material by projecting directionally varying colors based on target BRDFs [24]. Although this method works for various types of surfaces including fluids, it does not support attaching texture onto a moving object.

Generally speaking, marker-based methods work more robustly in situations where the marker-less methods do not work well. Researchers, as well as media artists, have created impressive dynamic PM experiences by leveraging the tremendous advancements in motion capture technologies [1]. However, motion capture markers may be visible on a projected surface in an installation, which significantly degrades the sense of immersion in the experience. Therefore, one of the main technical issues in the marker-based approach is to make markers unnoticeable to human observers, while keeping them detectable by a camera. Researchers proposed drawing AR markers on a surface with IR inks, which absorb incident light in the IR spectrum [28, 30]. The markers have very low contrast in the visible spectrum, although they are still visible to human observers. Asayama et al. proposed a framework for visually diminishing markers by applying a closed-loop radiometric compensation technique [4, 5]. However, their methods require several feedback iterations of projection and capturing, which can take longer than a second to converge. Other researchers tried to hide markers from human observers by embedding spatial holes inside projection objects, which can be detected only by terahertz [42] or IR [20] imaging. Another technical issue concerns the size of a marker. Previous techniques applied relatively large markers to represent the IDs of the markers and to increase the robustness of marker detection [5, 28].
As a result, the markers cannot be placed on a strongly uneven surface, which can easily cause occlusion of a part of each marker, resulting in marker detection and recognition failure. Although small markers can be used to solve this problem, they limit the range of distances between the camera and the object at which the markers in the captured images can be correctly recognized.

In this study, we apply an active marker to solve the above-mentioned problems of the marker-based methods. Several active marker approaches for AR have been proposed. The HiBall Tracker is a motion capture system for virtual reality (VR) and AR applications that measures the 6DOF pose of a tracker on which multiple photodiodes (PDs) are installed for measuring the positions of ceiling-mounted IR LEDs that are sequentially flashed [39]. Matsushita et al. first proposed the application of blinking light from an IR LED as an AR marker [21]. They used a customized camera called ID CAM that can decode the ID of an AR marker by itself. While these systems use binary codes, Naimark and Foxlin proposed the use of amplitude modulation codes, which enables decoding of an AR marker's ID without synchronization between LEDs and cameras [27]. Recently, the synchronization problem has been solved by a phase-shifting technique of the blinking pattern [2]. Mohan et al. proposed Bokode, an LED-based spatial marker [25]. A tiny lens is placed on an object's surface and the marker is embedded behind the lens at its focal distance. A camera focusing at infinity is used to capture the marker. Active markers were also realized in an opposite way, where a projector projects per-pixel IDs that are measured by PDs. Most previous works developed special high-speed projectors to embed binary codes [32, 43]. On the other hand, Kitajima et al. proposed to directly use a normal laser projector and decode the pixel position information from the raster scanning timing [18].
Although these previous works worked well, LEDs or PDs with electrical wires had to be manually embedded underneath a projection surface at predefined positions. This process is cumbersome and causes significant errors in pose estimation of the surfaces.

Recently, the huge potential of 3D printing of optics has been recognized in digital fabrication as well as in optics research. For instance, researchers showed the possibility of using a multi-material 3D printer to embed optical fibers in a 3D printed object [40]. We apply this concept to avoid the complex work of embedding active marker LEDs in a projection object. In particular, we propose to attach the LEDs at the bottom of the projection object and connect them to markers on the surface using printed optical fibers. Attaching LEDs to the bottom of the object is much easier than embedding them underneath the surface. In addition, the aperture of an optical fiber is typically very small, and thus, the markers are potentially unnoticeable to human observers. Furthermore, the blinking patterns from the optical fibers are theoretically detectable by a camera at a wide range of distances. However, owing to the low transparency of currently available clear materials (e.g., Stratasys VeroClear) used in multi-material 3D printers and their low printing resolutions, the light throughput of a printed optical fiber decreases significantly with increasing length and curvature of the fiber. Pereira et al. proposed optimizing the routes of a bundle of printed optical fibers to maximize light throughput [29]. However, their technique does not consider the complex situation where fibers cross each other in a printed object. In this paper, we present a novel fiber routing algorithm that handles such situations to realize reliable active markers. A closely related previous work proposed a computational tool for determining internal pipe routes in 3D printed objects for various interactive applications [35]. However, it did not consider the optical fiber routing problem. To the best of our knowledge, we are the first group to optimize the routes of printed fibers.
3 Active Markers
This section discusses our active marker design. First, we explain the blinking, or temporal, patterns of the markers. Second, we present how to recognize markers on a projection surface from captured images. Third, we introduce our proposed automatic marker placement algorithm, which decides the position of each marker on the surface to make pose estimation robust. The last subsection explains our computer vision technique for estimating the pose of the projection object from captured markers.
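As context for the pattern design described next, the phase recovery hinges on a standard property of m-sequences: within one period of length 2^m − 1, every m-bit window is unique, so the latest m observed bits identify the phase. A minimal sketch (the choice of m = 5 and the LFSR tap positions are illustrative assumptions, not the parameters used in the paper):

```python
def lfsr_msequence(m, taps):
    """Generate one period (2**m - 1 bits) of an m-sequence
    with a Fibonacci LFSR; `taps` are 1-indexed tap positions."""
    state = [1] * m                  # any nonzero seed works
    seq = []
    for _ in range(2 ** m - 1):
        seq.append(state[-1])        # output bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]       # feedback = XOR of tapped bits
        state = [fb] + state[:-1]    # shift, insert feedback
    return seq

# m = 5 with taps (5, 3): the primitive polynomial x^5 + x^3 + 1
# (or its reciprocal, depending on convention) gives period 31.
m = 5
seq = lfsr_msequence(m, (5, 3))

# Every m-bit window (with wrap-around) is unique, so a lookup
# table maps the latest m observed bits directly to the phase.
period = len(seq)
window_to_phase = {tuple((seq + seq)[i:i + m]): i for i in range(period)}
assert len(window_to_phase) == period  # all windows distinct

# Phase recovery from the latest m bits of an observed stream:
observed = seq[7:7 + m]
print(window_to_phase[tuple(observed)])  # → 7
```

In this sketch, a camera that has observed any m consecutive frames of the u_ptn = 1 marker can recover the phase with a single dictionary lookup.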
3.1 Blinking Patterns

To make the system as simple as possible, it is preferable that the system does not have a synchronization circuit between the camera and the active markers. Instead, we embed a synchronization signal into the blinking pattern of IR light. In particular, we apply an m-sequence, a pseudorandom binary sequence of 2^m − 1 bits. In this study, we assign the pattern ID (denoted as u_ptn) of 1 to an m-sequence code. We assign the other pattern IDs (u_ptn = 2, 3, ..., N_ptn) to arbitrary binary codes whose code lengths b_ptn satisfy ((2^m − 1) mod b_ptn) = 0 and b_ptn ≤ (2^m − 1)/2. Namely, these binary codes repeat (2^m − 1)/b_ptn times in a cycle of the m-sequence code. To reduce the bit depth of the patterns and decrease the number of LEDs, we assign the same pattern ID to multiple markers.

The proposed system captures the sequence of blinking patterns of multiple markers using a camera, from which we identify the phase and pattern IDs. As mentioned above, we can determine the phase of the m-sequence from the latest m bits of the pattern u_ptn = 1. However, a pattern of another ID (u_ptn ≠ 1) may coincidentally correspond to it. In that case, we analyze at most the latest 2 b_ptn bits and determine the pattern ID as 1 if the cycle of the pattern does not correspond to b_ptn. If the pattern ID is 1, then we determine the phase by comparing the obtained bit information with the original m-sequence. We then determine the pattern IDs of the other patterns based on the phase.

3.2 Marker ID Recognition

Markers are placed on surface points of a projection object. We refer to these surface points as marker points. As already mentioned, we assign the same pattern ID to multiple markers; thus, the number of pattern IDs is smaller than the number of marker IDs. We can recognize the marker ID of a marker point from the pattern IDs of that point and the adjacent marker points. Previous methods applied the same approach [28, 38]. They placed markers on grid points, and this geometric constraint made the retrieval of adjacent marker information simple. However, it is not always possible to place markers on grid points in our proposed method owing to a constraint in optical fiber routing as described in Section 4. Therefore, we propose a marker ID recognition method that allows for more flexible marker placement.

Fig. 3. Example of connected adjacent markers overlaid on a simulated image and a lookup table of the markers.

Offline, we build a lookup table in which each marker point p_d is linked to the corresponding marker ID u_mkr(p_d), pattern ID u_ptn(p_d), and the sequence of the pattern IDs of adjacent markers ū_ptn(p_d) (Figure 3). Considering occlusion of adjacent markers during marker detection, ū_ptn(p_d) is generated based on simulated images that are rendered by capturing the computer graphics model of the projection object from various viewpoints. The viewpoints are uniformly distributed over a sphere centered at the center of the model. In particular, the viewpoints are assigned to the vertices of a 4-frequency dodecahedral geodesic sphere in our experiment. In each rendered image, we connect markers using the Delaunay triangulation algorithm.
Then, for each marker point p_d, we trace the directly connected marker points in a clockwise direction to generate the sequence of the pattern IDs of adjacent marker points in that viewpoint. The sequence potentially varies among different viewpoints due to occlusions. Therefore, we select the most frequently occurring sequence and store it in the lookup table as ū_ptn(p_d).

Online, we first identify the pattern IDs of marker points in a captured image using the method discussed in Section 3.1. Afterward, we identify their marker IDs as follows. The pattern ID of each captured marker point p_c is represented as u_ptn(p_c). As in the offline simulation, we connect marker points in the captured image using the Delaunay triangulation algorithm. For each marker point, we obtain the sequence of the pattern IDs of the connected markers in a clockwise direction, denoted as ū_ptn(p_c). Next, we search the lookup table for marker points that have the same pattern ID as u_ptn(p_c). Among these marker points in the lookup table, we find the one (denoted as p̂_d) corresponding to p_c such that the Levenshtein distance between ū_ptn(p_c) and ū_ptn(p̂_d) is minimum. Finally, we identify p_c's marker ID u_mkr(p_c) as u_mkr(p̂_d).

3.3 Marker Placement

Markers' locations on a projection surface can significantly affect the estimation performance of the surface pose, and thus, they need to be carefully determined. Because we estimate the pose by solving the PnP (Perspective-n-Point) problem [19], more than four markers always need to be visible to the camera. Therefore, a straightforward strategy is to maximize the number of markers and distribute them equally over the surface. However, we need to consider other constraints that are unique to our optical fiber-based markers. First, due to the internal volume of the optical fibers (see Section 4), the distance between two markers on the surface should be large enough to avoid collisions of the fibers.
Second, due to the narrow light directivity of the printed fiber, the blinking light from a marker becomes too dark to be detected by a camera when viewed from shallow angles. In addition, pattern IDs need to be carefully assigned to the markers so that marker ID recognition works robustly for various viewing directions.

We propose an automatic marker placement algorithm to spread multiple active markers over the surface of a projection object such that its pose can be estimated from an image of the surface captured by a camera placed in an arbitrary direction. The algorithm assigns each marker to a surface point where marker occlusion and intensity drop-off of IR light do not significantly affect the pose estimation performance of the object. The algorithm first determines initial candidate marker points on the surface by evaluating the visibility of IR lights. Given the 3D model of the projection object, we compute the following value s(p_m) representing the suitability of marker placement at each surface point p_m:

s(p_m) = Σ_v vis(p_m, v) θ(p_m, v),   (1)

where vis(p_m, v) denotes a binary variable representing the visibility of a surface point p_m from a viewpoint v; vis(p_m, v) is 0 when p_m is not visible from v, and 1 otherwise. θ(p_m, v) represents the angle between the normal vector of p_m and the incident vector from v to p_m. The viewpoints are the same as those used in Section 3.2. We add surface points having locally maximum values of s(p_m) to a candidate list of marker points as follows. First, we store all surface points in the list. Then, we randomly select a point p_m, and remove points p′_m around it from the candidate list if s(p′_m) < s(p_m) and the distance between the two points is less than a predefined threshold. After repeating this process for all remaining points in the candidate list, we obtain the final candidate list, in which only surface points having spatially locally maximum values of s(p_m) remain.
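The candidate selection just described is essentially a radius-based non-maximum suppression over the suitability scores. A simplified, deterministic sketch (processing points in descending score order rather than the random order used above; the point data are illustrative):

```python
import math

def select_candidates(points, scores, radius):
    """Keep points that are local maxima of `scores`: a point is
    suppressed if a higher-scoring kept point lies within `radius`.
    `points` is a list of coordinate tuples."""
    order = sorted(range(len(points)), key=lambda i: -scores[i])
    kept = []
    for i in order:  # visit highest-scoring points first
        if all(math.dist(points[i], points[j]) >= radius for j in kept):
            kept.append(i)
    return sorted(kept)

# Toy example: three clustered surface points and one distant point.
pts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.6), (5.0, 5.0)]
sc = [3.0, 2.0, 1.0, 2.5]
print(select_candidates(pts, sc, radius=1.0))  # → [0, 3]
```

The two clustered points with lower scores are suppressed by their higher-scoring neighbor, while the distant point survives regardless of its score, mirroring how the candidate list thins out to spatially local maxima of s(p_m).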
Figure 4(a, b) shows the visualization of s(p_m) for a wavy cone surface and the initial candidate marker points.

Our algorithm then assigns the optimal pattern ID u_ptn(p_m) to each surface point p_m in the candidate list based on their adjacency relationships. In this process, inappropriate marker positions are also discarded from the list. As described above, the same pattern ID is assigned to multiple markers. The assignment is optimized through a genetic algorithm (GA), in which the array of pattern IDs is a chromosome. As described in Section 3.1, u_ptn(p_m) = 1 denotes the m-sequence pattern and u_ptn(p_m) ≥ 2 denote the other binary patterns; in addition, u_ptn(p_m) = 0 indicates that p_m is not suitable for a marker point and is discarded in the following optimization. The optimization is performed based on simulated images that are rendered by capturing the 3DCG model of a projection object from various viewpoints v. The viewpoints are the same as those used in Section 3.2. Our GA evaluates each chromosome based on the correctness of marker ID recognition and the visibility of m-sequence markers. Let E_rcg(v) and E_mseq(v) evaluate the former and the latter, respectively, for each rendered image of viewpoint v; the objective of the GA is

maximize Σ_v E_rcg(v) E_mseq(v).   (2)

The first term, E_rcg(v), is computed as follows. First, for each marker point p_m, we obtain the pattern ID u_ptn(p_m, v) and the sequence of adjacent pattern IDs ū_ptn(p_m, v) in the rendered image of viewpoint v, which are then used to recognize the marker ID u_mkr(p_m, v) by applying the algorithm discussed in Section 3.2.

Fig. 4. Marker placement for a wavy cone surface: (a) Visualization of s(p_m) (red: large, blue: small). (b) Initial candidate marker places as red points. (c) Pattern ID assignment by GA (top: the objective value of Equation 2 is improved over the iterations; bottom: marker assignments at the first and last iterations, where red numbers indicate markers whose marker IDs cannot be recognized in the simulation).
Subsequently, we compute E_rcg(p_m, v) such that it takes the maximum value when the marker ID is correctly recognized and the sequence of adjacent pattern IDs is the same as the most frequently occurring sequence over all the viewpoints, which is denoted as ū_ptn(p_m). Thus,

E_rcg(p_m, v) = { 0  (u_ptn(p_m) = 0),
                  k − |ū_ptn(p_m, v) − ū_ptn(p_m)|_LD  (u_mkr(p_m, v) is correct),
                  −k  (otherwise),   (3)

where |·|_LD represents the Levenshtein distance and k is an arbitrary parameter. A large k value ensures that markers are correctly recognized from various viewpoints, while reducing the number of markers (i.e., increasing the number of points with u_ptn(p_m) = 0). E_rcg(v) is then computed as

E_rcg(v) = Σ_{p_m} E_rcg(p_m, v).   (4)

The second term of Equation (2), E_mseq(v), evaluates the number of m-sequence markers visible in each rendered image. We denote this number as n_mseq(v). Theoretically, one m-sequence marker is sufficient to acquire the phase of the blinking patterns. However, due to occlusion and image noise, it is possible that an m-sequence marker is not recognized in a captured image even when it is within the view frustum of the camera. Denoting the number of m-sequence markers that need to be within the frustum as k′ (> 1), the second term is

E_mseq(v) = g(min(k′, n_mseq(v))),   (5)

where g can be an arbitrary monotonically increasing function; we found a square-root function suitable in our experiment. After the optimization, we remove each p_m whose pattern ID is 0 from the candidate list of marker points. We finally apply the points p_m in the latest candidate list as marker points with their pattern IDs u_ptn(p_m). Figure 4(c) shows the result of the pattern ID assignment by GA.

3.4 Pose Estimation

To estimate the pose of the projection object, we apply the following computer vision algorithm to the current captured image. First, we binarize the image, extract blobs of bright pixels, and exclude those with small areas as noise. Then, we compute the bounding boxes (BBs) of the remaining blobs.
We track the markers by updating their BBs. If the BB of a blob overlaps with that of a marker, we regard the blob as belonging to that marker and set the blob's BB as the marker's BB. If the BB of a marker is not updated, we regard the LED of the marker as turned off. If the BB is not updated for longer than the period of the m-sequence, we regard the marker as no longer visible from the camera. For each marker, if the marker ID was already identified in the previous frame, we use it in the current frame. Otherwise, we identify it by the method described in Section 3.2. Then, we estimate the pose of the projection object by solving the PnP problem using a RANSAC (RANdom SAmple Consensus) algorithm. Due to image noise, some marker IDs may not be correctly identified. Therefore, we correct them using the estimated pose.
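The core of the online identification step in Section 3.2 — matching a captured marker's adjacent-pattern sequence against the offline lookup table by Levenshtein distance — can be sketched as follows (the table contents are illustrative, not from an actual fabricated object):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def identify_marker(u_ptn_c, adj_seq_c, table):
    """table: list of (marker_id, pattern_id, adjacent_pattern_seq).
    Among entries with the same pattern ID, pick the one whose stored
    adjacent-pattern sequence is closest in Levenshtein distance."""
    candidates = [(mid, seq) for mid, pid, seq in table if pid == u_ptn_c]
    best_mid, _ = min(candidates, key=lambda c: levenshtein(adj_seq_c, c[1]))
    return best_mid

# Illustrative lookup table: markers 10 and 11 share pattern ID 2.
table = [
    (10, 2, [1, 3, 3, 2]),
    (11, 2, [3, 1, 2, 2]),
    (12, 1, [2, 2, 3, 3]),
]
# Captured marker with pattern ID 2; one adjacent marker is occluded,
# so its clockwise adjacent sequence is one element short.
print(identify_marker(2, [1, 3, 2], table))  # → 10
```

Using an edit distance rather than exact matching is what tolerates occluded or spuriously detected neighbors: the captured sequence [1, 3, 2] is one deletion away from marker 10's stored sequence but two edits away from marker 11's.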
4 Printed Optical Fibers
We embed optical fibers inside a projection object; both the fibers and the object are printed from a multi-material 3D printer. The optical fibers connect each IR LED to the corresponding marker points on the projection surface, which share the same pattern ID. We carefully design the fibers as outlined herein. First, we describe the structure of the optical fiber. Second, we explain the computational model that computes the light throughput of a fiber, and we determine the parameters of the model through an experiment. The last two subsections describe our fiber route optimization framework.
4.1 Fiber Structure

Figure 5 shows the internal structure of an optical fiber printed out from a multi-material 3D printer. In this paper, optical fibers are printed from a Stratasys Objet260 Connex3, which can print a 3D object with three materials plus a support material. We employ white (VeroPureWhite, RGD837), black (TangoBlack, FLX973), clear (VeroClear, RGD810), and support (SolubleSupport, SUP706) materials. A typical optical fiber consists of a core surrounded by a transparent cladding material with a lower index of refraction. Following previous works, we apply the clear and support materials as the core and cladding of our optical fiber, respectively [29, 40]. To avoid crosstalk of IR light leaking from fibers inside a projection object, we cover them with a thin layer of the black material. We cover the object surface with a thin layer of the white material to increase both the diffusion of the IR light and the invisibility of the markers to human observers.

Fig. 5. Optical fiber design.

The diameter of the core should be large to increase the light throughput. On the other hand, thin optical fibers make it possible to embed a large number of markers. To balance these, we determined the diameter of the core to be 1.75 mm through trial and error. We also found that total internal reflection occurs when the thickness of the cladding is more than 0.5 mm, and that IR light does not leak when the thickness of the black layer is more than 0.5 mm. Therefore, the diameter of the core, the thickness of the cladding, and the thickness of the black layer are set to 1.75 mm, 0.5 mm, and 0.5 mm, respectively. Namely, the overall thickness of our printed optical fiber is 3.75 mm. The projection object is filled with the support material and covered with a 0.4 mm layer of the white material. The thickness of the white material was also determined through trial and error. Note that previous researchers reported that a core of much smaller diameter worked for their printed optical fibers [40]. We consider that this inconsistency comes from the following two factors. First, the light wavelengths are different between our method and [40]. Second, our method needs to recognize blinking patterns by a camera, which requires higher-contrast fibers than [40].
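The stated overall fiber thickness follows directly from the layer dimensions, since the cladding and the black shielding each surround the core on both sides:

```python
core = 1.75       # clear core diameter (mm)
cladding = 0.50   # support-material cladding thickness per side (mm)
black = 0.50      # black shielding thickness per side (mm)

# Cladding and black layers enclose the core, so each counts twice.
outer_diameter = core + 2 * cladding + 2 * black
print(outer_diameter)  # → 3.75
```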
To optimize the routes of the optical fibers inside the projection object, it is necessary to know the light throughput of a fiber. According to Pereira et al. [29], the light throughput T_f of a route f can be modeled using the Lambert-Beer law as follows:

T_f = exp( −∫_f a ),   (6)

a = c_1 + c_2 exp( −c_3 r ),   (7)

where a and r represent the absorption coefficient and the curvature radius of the fiber, respectively, and c_1, c_2, and c_3 are arbitrary coefficients. The light throughput is generally characterized by the luminance of the light emitted from the end of the fiber. However, considering the use of fibers as active markers, we instead characterize the throughput as the longest distance between the end of a fiber and a camera at which the emitted blinking pattern of IR light can be detected. According to the inverse square law of illuminance, this longest distance d_cam can be expressed as:

d_cam = c_4 ( I_LED T_f )^(1/2) = c_5 exp( −(1/2) ∫_f a ),   (8)

where c_4 and I_LED represent an arbitrary coefficient and the light intensity of the IR LED attached to the fiber, respectively, and c_5 = c_4 (I_LED)^(1/2). Once we obtain the parameters c_1, c_2, c_3, and c_5, we can compute the light throughput of a fiber along any given route.

To calibrate the parameters, we printed out 33 optical fibers by combining 3 lengths (40, 50, and 60 mm) and 11 curvature radii (20, 25, …, 70 mm) (Figure 6). We attached an IR LED (OSI3CA5111A, 850 nm) to one end of each fiber and turned it on and off repeatedly. A camera (XIMEA MQ003MG-CM) captured the LED's blinking pattern emitted from the other end of the fiber. We changed the distance between the camera and the fiber and recorded the longest distance at which the binary pattern could be detected (Figure 7(left)).

Fig. 6. Printed optical fibers for parameter identification: (left) internal structure, (right) printed fibers.

Fig. 7. Measurement of d_cam (the longest distance between a fiber and camera at which the blinking binary pattern is detectable): (left) measurement setup, (right) result.
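The throughput model above can be sketched directly: discretize a route into arc segments, accumulate the product of arc length and absorption coefficient, and apply the exponential decay. The coefficient values below are illustrative placeholders, not the fitted values:

```python
import math

def absorption(radius_mm, c1, c2, c3):
    """Absorption coefficient a = c1 + c2 * exp(-c3 * r), per Equation (7)."""
    return c1 + c2 * math.exp(-c3 * radius_mm)

def detectable_distance(segments, c1, c2, c3, c5):
    """d_cam = c5 * exp(-0.5 * integral of a along the route), per Equation (8).

    `segments` is a list of (arc_length_mm, curvature_radius_mm) pairs
    approximating the fiber route, so the integral becomes a sum of l * a.
    """
    integral = sum(l * absorption(r, c1, c2, c3) for l, r in segments)
    return c5 * math.exp(-0.5 * integral)

# Illustrative (not fitted) coefficients:
C1, C2, C3, C5 = 1e-3, 5e-2, 1e-1, 3000.0

straight = [(10.0, 1e6)] * 5   # 50 mm route, nearly straight
curved = [(10.0, 20.0)] * 5    # 50 mm route, 20 mm curvature radius

# A more tightly curved fiber absorbs more light, so the camera
# must be closer to detect the blinking pattern.
assert detectable_distance(curved, C1, C2, C3, C5) < \
       detectable_distance(straight, C1, C2, C3, C5)
```

The decreasing-exponential form means the penalty for a short, tight bend can outweigh a long, gentle detour, which is why the route refinement trades length against curvature radius.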
Figure 7(right) shows the results of the experiment. We fitted the model (Equation (8)) to our data (outliers excluded) and obtained the parameters c_1, c_2, c_3, and c_5. The residual standard error of the fitting was 281.2 mm.

We compute the routes of the optical fibers connecting each IR LED to the corresponding marker points on the projection surface that share the same pattern ID. The luminance of the IR light emitted from the marker points should be maximized so that a camera can detect the blinking patterns from as far away as possible. At the same time, collisions between fibers transmitting different pattern IDs must be avoided, and all fibers must remain inside the projection object. In this study, each route is represented as a NURBS (Non-Uniform Rational B-Spline) curve. Our route determination algorithm consists of two parts: initial route computation and refinement. The former is discussed here, and the latter is discussed in Section 4.4.

The initial route should balance the following four demands: (1) the route should be as short as possible; (2) the curvature radius of each part of the fiber should be as large as possible to increase the light throughput; (3) collisions among fibers transmitting different pattern IDs should be avoided; and (4) the fibers should remain inside the object.

At first, we set the initial route of each fiber f, which consists of four initial control points (p_f,1, …, p_f,4), as shown in Figure 8. The route is then updated by iterating two processes: adding new control points and moving them. We denote the number of control points of the fiber f after the t-th iteration as n_f(t).

Fig. 8. Initial routing of a printed optical fiber.

In the former process, we add a new control point at the middle position of every two adjacent control points, excluding the pair (p_f,1, p_f,2) and the pair (p_f,n_f(t)−1, p_f,n_f(t)). Afterward, we update the IDs of the control points such that the IDs are assigned in the order of the control points from the point at the tip of the IR LED (i.e., p_f,1). In the latter process, we move the control points according to the following expression:

x_{p_f}(t+1) = x_{p_f}(t) + l_{p_f}(t) + r_{p_f}(t) + f_{p_f}(t) + s_{p_f}(t),   (9)

where x_{p_f}(t) denotes the position of the control point p_f after the t-th iteration.

l_{p_f}(t) adjusts the position x_{p_f}(t) such that the fiber becomes shorter. The distance between the endpoints p_f,1 and p_f,n_f(t) divided by (n_f(t) − 1) gives the sectional shortest length; l_{p_f}(t) moves the control point p_f in a direction in which both the distance from x_{p_f−1}(t) to x_{p_f}(t) and that from x_{p_f}(t) to x_{p_f+1}(t) get closer to the sectional shortest length. r_{p_f}(t) adjusts the position x_{p_f}(t) so that the curvature radius of the fiber around the control point p_f becomes larger. It moves the control point away from the center of the circle passing through x_{p_f−1}(t), x_{p_f}(t), and x_{p_f+1}(t). f_{p_f}(t) adjusts the position x_{p_f}(t) to avoid collisions between the fiber f and another fiber f′ transmitting a different pattern ID. When there are multiple control points of f′ closer to the control point p_f than a predefined distance, we denote them as p̂_f′; f_{p_f}(t) is then the sum of the vectors from each x_{p̂_f′} to x_{p_f}. s_{p_f}(t) moves the control point p_f away from the closest surface point when the distance between these two points is shorter than a predefined threshold.

In each iteration, we perform the former process (i.e., adding new control points) once and then the latter process (i.e., moving them) 1,000 times. After five iterations, we obtain 35 control points in total, whose positions form the initial route of each printed optical fiber.

We refine the initial route using the following optimization framework. The goal of the route optimization is to find the set of routes f that

minimize ∑_f [ −E_cam(f) + k_1 E_fib(f) + k_2 E_surf(f) + k_3 E_reg(f) ],   (10)

where E_cam(f), E_fib(f), and E_surf(f) evaluate the light throughput of the fiber f, the collision between f and other fibers, and the distance of f from the object surface, respectively. E_reg(f) works as a regularizer, and k_1, k_2, and k_3 are arbitrary weights. The optimization is performed using the Adam (Adaptive Moment Estimation) algorithm [17].

E_cam(f) represents the estimated maximum distance from the marker point of f to a location at which the camera used in Section 4.2 can detect the blinking pattern. We approximate the route of a fiber as a sequence of arcs, one per control point. The arc of a control point p_f is defined as follows. First, it is a part of the circumscribed circle passing through x_{p_f−1}, x_{p_f}, and x_{p_f+1}. Second, its length l_{p_f} is half the sum of the distance from x_{p_f−1} to x_{p_f} and that from x_{p_f} to x_{p_f+1}. Due to the multiplicative nature of light absorption, we model

Table 1. Binary codes for seven LEDs.
M-sequence   100011110101100
Others       001, 010, 011, 100, 101, 110

E_cam(f) as the product of d_cam at each control point based on Equation (8):

E_cam(f) = c_5 ∏_{p_f} exp( −(1/2) l_{p_f} a_{p_f} ),   (11)

where

l_{p_f} = ( |x_{p_f−1} − x_{p_f}| + |x_{p_f+1} − x_{p_f}| ) / 2,   (12)

a_{p_f} = c_1 + c_2 exp( −c_3 r_{p_f} ),   (13)

and r_{p_f} denotes the radius of the circumscribed circle. E_fib(f) increases when the distance from the fiber f to other fibers f′ (through which different blinking patterns are transmitted) decreases. This term helps to prevent crosstalk in the blinking patterns. Thus,

E_fib(f) = ∑_{p_f} ∑_{f′} ∑_{p_{f′}} exp( −k_4 |x_{p_f} − x_{p_{f′}}| ),   (14)

where k_4 represents an arbitrary coefficient. E_surf(f) increases when the distance between the fiber f and the surface of the projection object decreases. This term constrains the fiber to remain inside the projection object. Denoting a surface point of the projection object and its position as p_s and x_{p_s}, respectively, E_surf(f) can be computed as:

E_surf(f) = ∑_{p_f} ∑_{p_s} exp( −k_5 |x_{p_f} − x_{p_s}| ),   (15)

where k_5 represents an arbitrary coefficient. The last term, E_reg(f), plays the role of a regularizer. For each control point p_f, if the distance from the previous control point p_f−1 differs greatly from that to the next control point p_f+1, the approximation of the route as a group of arcs in Equation (11) is no longer valid. Thus, we formulate E_reg(f) to minimize the difference between the two distances:

E_reg(f) = ∑_{p_f} | |x_{p_f−1} − x_{p_f}| − |x_{p_f+1} − x_{p_f}| |.   (16)

EXPERIMENTS
We fabricated three objects with optical fibers and conducted experiments to validate the effectiveness of the proposed method. First, we introduce the details of the fabricated objects and a prototype PM system. Second, we present an evaluation experiment of pose estimation errors. Finally, we present dynamic PM results.
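As a sanity check on the initial-route computation of Section 4.3: starting from four control points and inserting a midpoint between every adjacent pair except the first and the last, the control-point count grows as 4 → 5 → 7 → 11 → 19 → 35 over five iterations, matching the 35 points reported there. A minimal sketch:

```python
def subdivide_count(n):
    """One subdivision step: insert a midpoint between every adjacent pair
    of control points except the first pair (at the LED) and the last pair
    (at the marker point), i.e. add (n - 1) - 2 new points."""
    return n + (n - 3)

counts = [4]
for _ in range(5):
    counts.append(subdivide_count(counts[-1]))
print(counts)  # [4, 5, 7, 11, 19, 35]
```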
We printed out three target objects with different shapes (wavy-cone, building, and bunny) using the Stratasys Objet260 Connex3, as shown in Figures 1 and 9. We modeled the wavy-cone shape to show that the proposed method works even for a surface with a symmetric structure and a strongly uneven, curved shape, for which conventional markerless and spatial-pattern marker-based methods theoretically do not work well, as discussed in Section 2. As a more practical projection target, we modeled the building surface, assuming its use in an appearance design scenario. It also has an almost symmetric structure and some uneven parts. We selected the Stanford bunny to show the robustness of our method against occlusions (i.e., the ears occlude the body when viewed from a certain viewing area). The sizes of the objects were 111 × × 87 mm (wavy-cone), 87 × × 101 mm (building), and 110 × × 148 mm (bunny).

We prepared seven small holes in the bottom surfaces of the objects as LED sockets, as shown in Figures 1 and 9. The same hole layout was shared among the objects so that the same base component consisting
Fig. 9. Printed objects and IR LEDs of the base component (bottom right).

Fig. 10. The internal structures of the printed objects. Fibers with the same color are connected to the same LED.

of an electrical circuit, a battery, and LEDs is reusable. The center hole was used for the LED transmitting an m-sequence code, and the others were used for the other binary codes. Therefore, the number of codes N_ptn was 7 (Table 1). We used an m-sequence whose code length was 15 (i.e., m = 4) and set b_ptn as 3. We determined the marker placements for all the objects using our method described in Section 3.3, where the two parameters were set as 5 and 2, respectively. As a result, 43 (wavy-cone), 57 (building), and 65 (bunny) markers were placed on the objects' surfaces.

The routes of the optical fibers were optimized using the proposed method discussed in Sections 4.3 and 4.4. The parameters k_1, k_2, k_3, k_4, and k_5 were set as 1.5, 0.5, 1.0, 1.5, and 1.5, respectively. The internal structures of the objects are shown in Figures 1 and 10. To evaluate the effectiveness of the optimization method, we checked E_cam(f), the computed longest distance from each marker point to the camera used in Section 4.2. For the wavy-cone object, the average value of E_cam(f) was 1618.7 mm for the initial routes and 2028.2 mm for the optimized routes. The average values of E_cam(f) improved for the other objects, too (building: 868.4 mm to 1407.9 mm; bunny: 843.0 mm to 1543.0 mm). In addition, the fibers of different pattern IDs did not collide with each other, and all the fibers were placed inside the objects. Notably, in the bunny object, fibers were successfully routed through the narrow neck.

We built a projector-camera system as shown in Figure 11. We applied an off-the-shelf industrial camera (Basler acA720-520um, 525 fps, 720 × 540 pixels) with an IR-pass/VIS-cut filter.

Fig. 11. Projector-camera system. The colored axes show the IR camera coordinate system.

Fig. 12. Pose estimation error: (top) position estimation error for a translation, (bottom) orientation estimation error for a rotation.

It has been pointed out that low-latency augmentation is crucial in dynamic PM applications [7, 24, 28, 38]. Therefore, we applied a 1,000 fps projector (Inrevium, TB-UK-DYNAFLASH, 8-bit grayscale, 1024 × 768 pixels). The camera and projector were connected to a PC (CPU: Intel Core i7-5960X, 3.0 GHz; RAM: 32 GB). The base component consisted of seven LEDs (OSI3CA5111A, 850 nm), an Arduino-compatible microcomputer (Japanino) for controlling the LEDs, and a mobile battery (Figure 9). The blinking speed was 350 bit/s. A camcorder in the figure was used to record videos of projected results.
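The code design in Table 1 can be illustrated with a short sketch. An m-sequence of length 15 is generated by a 4-stage linear-feedback shift register, and all 15 of its cyclic shifts are distinct, which is what lets the camera lock onto the code phase at any time. The sketch below uses the taps x⁴ + x + 1 and therefore produces an m-sequence of length 15, though not necessarily the exact bit pattern printed in Table 1:

```python
def lfsr_msequence(taps=(4, 1), nbits=4, seed=0b1000):
    """Generate one period (2**nbits - 1 bits) of an m-sequence
    from a Fibonacci LFSR with the given feedback taps."""
    state = seed
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state & 1)            # output the lowest bit
        fb = 0
        for t in taps:                   # XOR of the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = lfsr_msequence()
assert len(seq) == 15 and sum(seq) == 8  # 8 ones, 7 zeros, as for any length-15 m-sequence

# All 15 cyclic shifts are distinct, so the phase (and hence bit
# synchronization) is recoverable from any window of the blink stream.
shifts = {tuple(seq[i:] + seq[:i]) for i in range(15)}
assert len(shifts) == 15
```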
We quantitatively evaluated the pose estimation accuracy of the proposed method by measuring two errors: one for translation and the other for rotation. First, we translated each of the objects (wavy-cone, bunny, and building) along a straight line and estimated its position in the IR camera coordinate system at five locations. The distance between adjacent locations was 250 mm; thus, the overall measurement range was 1 m. Figure 12(top) shows the Euclidean distance between the ground truth and the estimated position. We confirmed that the errors were 1.9 mm on average and less than 7.0 mm at every estimated location in the 1 m range.

Second, we rotated each of the objects about an axis using a turntable and estimated its pose every 30 degrees. Figure 12(bottom) shows the absolute errors of the estimated rotational angles about the axis. We confirmed that the errors were 0.9 degrees on average and less than 1.4 degrees at every rotational angle.
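The two error metrics above can be computed from ground-truth and estimated poses in the standard way: the Euclidean distance for position, and for orientation the angle of the relative rotation, θ = arccos((trace(R_gtᵀ R_est) − 1) / 2). A self-contained sketch (not the authors' code):

```python
import math

def position_error(p_gt, p_est):
    """Euclidean distance between ground-truth and estimated positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_gt, p_est)))

def rotation_error_deg(R_gt, R_est):
    """Angle (degrees) of the relative rotation R_gt^T R_est."""
    # trace(R_gt^T R_est) equals the elementwise (Frobenius) inner product.
    tr = sum(R_gt[i][j] * R_est[i][j] for i in range(3) for j in range(3))
    # Clamp for numerical safety before arccos.
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(c))

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# 30-degree rotation about the y-axis (one turntable step in the evaluation).
th = math.radians(30)
Ry = [[math.cos(th), 0, math.sin(th)], [0, 1, 0], [-math.sin(th), 0, math.cos(th)]]

print(position_error((0, 0, 0), (1.2, 0.9, 1.0)))  # ~1.8
print(rotation_error_deg(I3, Ry))                  # ~30.0
```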
We conducted a dynamic PM experiment using the bunny and building objects. We moved the objects by hand in front of the projector-camera system. The movement consisted of translations along the xyz-axes of the IR camera coordinate system and rotation about the y-axis. We also covered the objects with our hands to investigate the robustness against occlusion, holding the objects so that they did not move while being covered.

Figures 13 and 14 show the results for the bunny and building objects, respectively. From the translation and rotation results, we confirmed that the projection images could be aligned onto both objects while they were largely translated (approx. 800 mm along the z-axis) and rotated (approx. 180 degrees about the y-axis). From the occlusion experiments, we confirmed that the projected images remained stably aligned onto both objects even when more than half of an object was occluded by the hands. Although the number of detected markers decreased owing to the occlusions, no significant error was observed in the pose estimation of the objects. The processing time for a captured image was 2–3 ms. In addition, we informally confirmed that the markers were unnoticeable to human observers under projection.
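As a back-of-the-envelope consistency check on the timing figures above (not the authors' pipeline): at 350 bit/s, the 525 fps camera observes 1.5 frames per code bit, and one full 15-bit m-sequence period spans roughly 43 ms:

```python
BLINK_RATE = 350    # bit/s (LED blinking speed)
CAMERA_FPS = 525    # frames/s (Basler acA720-520um)
CODE_LENGTH = 15    # bits in the m-sequence (Table 1)

# Frames captured per transmitted code bit.
frames_per_bit = CAMERA_FPS / BLINK_RATE
# Time to observe one full m-sequence period, in milliseconds.
id_duration_ms = CODE_LENGTH / BLINK_RATE * 1000

print(frames_per_bit)             # 1.5
print(round(id_duration_ms, 1))   # 42.9
```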
DISCUSSION
From the results of the pose estimation evaluation presented in Section 5.2, we consider the pose estimation errors (1.9 mm in position and 0.9 degrees in orientation) sufficiently small to geometrically align projection images with perceptually acceptable accuracy, considering the size of the objects (approximately 120 × × 120 mm). There was a relatively large error (> …).

We discuss the limitations of our proposal. First, the current system applies a base component of fixed shape. Although it is convenient and useful, it restricts the shape of a projection object. To overcome this limitation, our system needs to allow the electronic devices of the base component (e.g., LEDs and a battery) to be positioned within a projection object
Fig. 13. Projection result of the bunny object in translation, rotation, and occlusion: (left) the time series of 3D position and orientation of the object in the IR camera coordinate system, and the number of markers whose pattern/marker IDs were detected; (right) captured image of the projected object by the camcorder, and visualization of the detected markers and the estimated pose of the object superimposed on the captured image by the camera at the frames indicated by the circled numbers.

of an arbitrary shape. To this end, we need to extend our current optimization framework such that not only the optical fiber routes but also the placements of the electronic devices are jointly optimized. We consider that this can be achieved simply by adding new constraints on the placements to the current optimization framework.

Second, we discuss limitations due to current 3D printing technology. We used a state-of-the-art multi-material 3D printer. However, the current clear material is not perfectly transparent, which lowers the light throughput of a fabricated fiber; thus both the length and the curvature of a fiber are limited. As a result, we could not make the projection objects larger than a certain size. Also, the aperture of our fabricated fiber (core diameter: 1.75 mm) is much larger than that of commercially available optical fibers (core diameter: 0.01–0.06 mm) owing to the low spatial resolution of the current printer. This limits the number of fibers embedded in a projection object and, consequently, the number of markers. Our method theoretically works for any shape as long as the marker points on the surface are connected to the bottom surface. In practice, however, the above-mentioned large-aperture problem limits the shape of a projection object to thick and large shapes. Our method is not applicable to an object containing parts thinner or narrower than the fiber aperture (e.g., an hourglass shape).
Because 3D printing technology is evolving at an accelerated rate, we believe that these limitations will be solved in the future. Future advancements in 3D printing technology will also lead to interesting research directions. Specifically, once a printer supports the fabrication of flexible optical fibers, we can achieve novel types of dynamic PM. For instance, we could align projected images onto multiple articulated rigid surfaces such as robots. We could also realize dynamic PM on a non-rigid, deformable surface using the proposed method.
CONCLUSION
This paper presented a novel active marker for dynamic PM, which emits a temporal blinking pattern of IR light representing its ID. We applied a multi-material 3D printer to fabricate a projection object with optical fibers that guide IR light from LEDs attached to the bottom of the object. We proposed an automatic marker placement algorithm to spread multiple active markers over the surface of a projection object in such a way that its pose can be robustly estimated using captured images from arbitrary directions. We also developed an optimization framework for determining the routes of the optical fibers so that collisions of the fibers are avoided and the loss of light intensity in the fibers is minimized. Through the conducted experiments, we confirmed that the markers can be embedded on strongly curved surfaces, and informally confirmed that they are unnoticeable under projection. In addition, the working range of our system (1 m) was significantly larger than that of previous marker-based methods (< 350 mm). We were also able to accurately align projected images onto an object with relatively small pose estimation errors (1.9 mm in position and 0.9 degrees in orientation). Based on these results, we believe that our proposed method has the potential to expand the applicability of dynamic PM.

We consider the following technical extension. The current system assigns static pattern IDs (LED blinking patterns) to the markers. As future work, we will dynamically assign the IDs only to the subset of markers that are visible from the camera. This would reduce the number of IDs and shorten their bit depths; consequently, the pose estimation of a fast-moving object can be made more robust.

ACKNOWLEDGMENTS
This work has been supported by the JSPS KAKENHI under grant number 15H05925.

Fig. 14. Projection result of the building object in translation, rotation, and occlusion: (left) the time series of 3D position and orientation of the object in the IR camera coordinate system, and the number of markers whose pattern/marker IDs were detected; (right) captured image of the projected object by the camcorder, and visualization of the detected markers and the estimated pose of the object superimposed on the captured image by the camera at the frames indicated by the circled numbers.

REFERENCES

[1] INORI (Prayer). https://vimeo.com/103425574. Accessed: 2018-01-27.
[2] K. Ahuja, S. Pareddy, R. Xiao, M. Goel, and C. Harrison. LightAnchors: Appropriating point lights for spatially-anchored augmented reality interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST '19, pp. 189–196. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332165.3347884
[3] D. G. Aliaga, A. J. Law, and Y. H. Yeung. A virtual restoration stage for real-world objects. ACM Transactions on Graphics (TOG), 27(5):149, 2008.
[4] H. Asayama, D. Iwai, and K. Sato. Diminishable visual markers on fabricated projection object for dynamic spatial augmented reality. In SIGGRAPH Asia 2015 Emerging Technologies, p. 7. ACM, 2015.
[5] H. Asayama, D. Iwai, and K. Sato. Fabricating diminishable visual markers for geometric registration in projection mapping. IEEE Transactions on Visualization and Computer Graphics, 24(2):1091–1102, 2018.
[6] D. Bandyopadhyay, R. Raskar, and H. Fuchs. Dynamic shader lamps: Painting on movable objects. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality, pp. 207–216. IEEE, 2001.
[7] A. H. Bermano, M. Billeter, D. Iwai, and A. Grundhöfer. Makeup lamps: Live augmentation of human faces via projection. Computer Graphics Forum, 36(2):311–323, 2017.
[8] O. Bimber and R. Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. A. K. Peters, Ltd., Natick, MA, USA, 2005.
[9] X. Cao, C. Forlines, and R. Balakrishnan. Multi-user interaction using handheld projectors. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, pp. 43–52. ACM, 2007.
[10] M. Flagg and J. M. Rehg. Projector-guided painting. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, pp. 235–244. ACM, 2006.
[11] A. Grundhöfer and D. Iwai. Recent advances in projection mapping algorithms, hardware and applications. Computer Graphics Forum, 2018. To appear.
[12] D. Iwai, R. Matsukage, S. Aoyama, T. Kikukawa, and K. Sato. Geometrically consistent projection-based tabletop sharing for remote collaboration. IEEE Access, 2017. To appear.
[13] D. Iwai and K. Sato. Limpid desk: See-through access to disorderly desktop in projection-based mixed reality. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 112–115. ACM, 2006.
[14] D. Iwai and K. Sato. Document search support by making physical documents transparent in projection-based mixed reality. Virtual Reality, 15(2-3):147–160, 2011.
[15] B. R. Jones, H. Benko, E. Ofek, and A. D. Wilson. IllumiRoom: Peripheral projected illusions for interactive experiences. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 869–878. ACM, 2013.
[16] T. Karitsuka and K. Sato. A wearable mixed reality with an on-board projector. In Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality, p. 321. IEEE Computer Society, 2003.
[17] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[18] Y. Kitajima, D. Iwai, and K. Sato. Simultaneous projection and positioning of laser projector pixels. IEEE Transactions on Visualization and Computer Graphics, 23(11):2419–2429, 2017.
[19] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision, 81(2):155, Jul 2008. doi: 10.1007/s11263-008-0152-6
[20] D. Li, A. S. Nair, S. K. Nayar, and C. Zheng. AirCode: Unobtrusive physical tags for digital fabrication. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17, pp. 449–460. ACM, New York, NY, USA, 2017.
[21] N. Matsushita, D. Hihara, T. Ushiro, S. Yoshimura, J. Rekimoto, and Y. Yamamoto. ID Cam: A smart camera for scene capturing and ID recognition. In Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 227–236. IEEE, 2003.
[22] C. Menk, E. Jundt, and R. Koch. Visualisation techniques for using spatial augmented reality in the design process of a car. Computer Graphics Forum, 30(8):2354–2366, 2011.
[23] M. Mine, D. Rose, B. Yang, J. van Baar, and A. Grundhöfer. Projection-based augmented reality in Disney theme parks. Computer, (7):32–40, 2012.
[24] L. Miyashita, Y. Watanabe, and M. Ishikawa. MIDAS projection: Markerless and modelless dynamic projection mapping for material representation. ACM Trans. Graph., 37(6):196:1–196:12, Dec. 2018. doi: 10.1145/3272127.3275045
[25] A. Mohan, G. Woo, S. Hiura, Q. Smithwick, and R. Raskar. Bokode: Imperceptible visual tags for camera based interaction from a distance. ACM Trans. Graph., 28(3):98:1–98:8, July 2009.
[26] Y. Morikubo, E. S. Lorenzo, D. Miyazaki, and N. Hashimoto. Tangible projection mapping: Dynamic appearance augmenting of objects in hands. In SIGGRAPH Asia 2018 Emerging Technologies, SA '18, pp. 14:1–14:2. ACM, New York, NY, USA, 2018. doi: 10.1145/3275476.3275494
[27] L. Naimark and E. Foxlin. Encoded LED system for optical trackers. In Proceedings of the Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 150–153. IEEE, 2005.
[28] G. Narita, Y. Watanabe, and M. Ishikawa. Dynamic projection mapping onto deforming non-rigid surface using deformable dot cluster marker. IEEE Transactions on Visualization and Computer Graphics, 23(3):1235–1248, 2017.
[29] T. Pereira, S. Rusinkiewicz, and W. Matusik. Computational light routing: 3D printed optical fibers for sensing and display. ACM Transactions on Graphics (TOG), 33(3):24, 2014.
[30] P. Punpongsanon, D. Iwai, and K. Sato. Projection-based visualization of tangential deformation of nonrigid surface by deformation estimation using infrared texture. Virtual Reality, 19(1):45–56, 2015.
[31] K. Qiu, F. Zhang, and M. Liu. Let the light guide us: VLC-based localization. IEEE Robotics & Automation Magazine, 23(4):174–183, 2016.
[32] R. Raskar, H. Nii, B. Dedecker, Y. Hashimoto, J. Summet, D. Moore, Y. Zhao, J. Westhues, P. Dietz, J. Barnwell, et al. Prakash: Lighting aware motion capture using photosensing markers and multiplexed illuminators. In ACM Transactions on Graphics (TOG), vol. 26, p. 36. ACM, 2007.
[33] C. Resch, P. Keitler, and G. Klinker. Sticky projections: A new approach to interactive shader lamp tracking. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 151–156. IEEE, 2014.
[34] A. Rivers, A. Adams, and F. Durand. Sculpting by numbers. ACM Transactions on Graphics (TOG), 31(6):157, 2012.
[35] V. Savage, R. Schmidt, T. Grossman, G. Fitzmaurice, and B. Hartmann. A series of tubes: Adding interactivity to 3D prints using internal pipes. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, pp. 3–12. ACM, 2014.
[36] C. Siegl, M. Colaianni, L. Thies, J. Thies, M. Zollhöfer, S. Izadi, M. Stamminger, and F. Bauer. Real-time pixel luminance optimization for dynamic multi-projection mapping. ACM Transactions on Graphics (TOG), 34(6):237, 2015.
[37] T. Takezawa, D. Iwai, K. Sato, T. Hara, Y. Takeda, and K. Murase. Material surface reproduction and perceptual deformation with projection mapping for car interior design. In Proceedings of IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2019. (accepted).
[38] Y. Watanabe, T. Kato, et al. Extended dot cluster marker for high-speed 3D tracking in dynamic projection mapping. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 52–61. IEEE, 2017.
[39] G. Welch, G. Bishop, L. Vicci, S. Brumback, K. Keller, and D. Colucci. High-performance wide-area optical tracking: The HiBall tracking system. Presence: Teleoperators & Virtual Environments, 10(1):1–21, 2001.
[40] K. Willis, E. Brockmeyer, S. Hudson, and I. Poupyrev. Printed optics: 3D printing of embedded optical elements for interactive devices. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, pp. 589–598. ACM, 2012.
[41] K. D. Willis, I. Poupyrev, S. E. Hudson, and M. Mahler. SideBySide: Ad-hoc multi-user interaction with handheld projectors. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 431–440. ACM, 2011.
[42] K. D. D. Willis and A. D. Wilson. InfraStructs: Fabricating information inside physical objects for imaging in the terahertz region. ACM Trans. Graph., 32(4):138:1–138:10, July 2013.
[43] L. Zhou, S. Fukushima, and T. Naemura. Dynamically reconfigurable framework for pixel-level visible light communication projector. In Proc. SPIE 8979, Emerging Digital Micromirror Device Based Systems and Applications VI, p. 89790J. International Society for Optics and Photonics, 2014.