Lightform: Procedural Effects for Projected AR
Brittany Factura, Laura LaPerche, Phil Reyneri, Brett Jones, Kevin Karsch
Lightform, Inc.
Figure 1: Lightform demonstration showing an automatic effect. From left to right: LF1 with laptop running Lightform Creator in the background; a scene during LF1 structured light scanning; screenshot of Lightform Creator during effect generation; the scene with the automatic “TRON” effect applied (original scene inset).
CCS CONCEPTS
• Computing methodologies → Mixed / augmented reality; Reconstruction;
KEYWORDS
projection mapping, structured light, procedural effects
ACM Reference format:
Brittany Factura, Laura LaPerche, Phil Reyneri, Brett Jones, and Kevin Karsch. 2018. Lightform: Procedural Effects for Projected AR. In Proceedings of SIGGRAPH ’18 Studio, Vancouver, BC, Canada, August 12-16, 2018. DOI: 10.1145/3214822.3214823
INTRODUCTION
Projected augmented reality, also called projection mapping or video mapping, is a form of augmented reality that uses projected light to directly augment 3D surfaces, as opposed to using pass-through screens or headsets. The value of projected AR is its ability to add a layer of digital content directly onto physical objects or environments in a way that can be viewed instantly by multiple people, unencumbered by a screen or additional setup. Because projected AR typically involves projecting onto non-flat, textured objects (especially those that are not conventionally used as projection surfaces), the digital content needs to be mapped and aligned to precisely fit the physical scene to ensure a compelling experience. Current projected AR techniques require extensive calibration at the time of installation, which is not conducive to iteration or change, whether intentional (the scene is reconfigured) or not (the projector is bumped or settles). The workflows are undefined and fragmented, making projected AR confusing and difficult for many to approach. For example, a digital artist
may have the software expertise to create AR content but could not complete an installation without experience in mounting, blending, and realigning projectors; the converse is true for many A/V installation teams and professionals. Projection mapping has therefore been limited to high-end event productions, concerts, and films, because it requires expensive, complex tools and skilled teams ($100K+ budgets).

Lightform makes projected AR approachable, practical, intelligent, and robust through integrated hardware and computer-vision software, uniting a currently fragmented workflow into a single cohesive process for creating and controlling projected AR experiences.
Lightform comprises two components: a small camera/computer device called the “LF1,” which physically attaches to a projector, and a desktop software application called “Lightform Creator,” which lets a user control the LF1 and create projected AR content that can be uploaded to and played through the LF1. Using computer vision and machine learning, we automate and simplify many of the pain points associated with projected AR, such as (re)alignment and content creation. We believe these tools will make projected AR faster, easier, and cheaper than before.
THE LF1
The LF1 contains mobile processors as well as a 12-megapixel (4096x3072) RGB imaging sensor. It can be connected to a network over WiFi or Ethernet, and has an HDMI port for connecting to nearly any projector. Modern projectors come in a variety of throw ratios (fields of view), and the LF1 supports different lens configurations to cover many of these.

The LF1 acts as a media playback and hosting server, but most importantly, it is responsible for acquiring structured light scans of the scene and computing procedural AR content (effects). The data acquired by the LF1 is also sent to Lightform Creator for users who wish to create more complex AR experiences.
STRUCTURED LIGHT SCANNING
Structured light scanning is one of the first and most important steps of the Lightform workflow, enabling both realignment and content creation. We have implemented a visible structured light algorithm (inspired by [Yamazaki et al. 2011]) on the LF1 for remotely capturing detailed scene information. A structured light scan operates by projecting patterns that are captured by a camera (in this case, the LF1). This provides a dense correspondence between projector pixels and camera pixels. This mapping can be treated much like stereo camera calibration and reconstruction [Hartley and Zisserman 2003], and we apply similar techniques to extract 3D information (disparity and depth) from the scene. One important benefit is the ability to reconstruct a projector image: a remapping of camera pixels into the projector’s domain that yields an image of the scene as if taken with the projector’s optical parameters and point of view. The projector image is central to creating procedural effects, as it provides an image that can be understood and processed by vision techniques (e.g., segmentation, edge detection) and effect-generating algorithms. A user can author content directly on top of this image, and it is automatically aligned with the real world, eliminating the need for traditional mapping workflows.
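As a rough illustration of this pipeline, the sketch below decodes a hypothetical Gray-code pattern sequence into per-camera-pixel projector coordinates and then splats the camera image into the projector’s frame. Gray codes are one common pattern choice, but the paper does not specify the LF1’s exact patterns, and both function names are ours.

import numpy as np

def decode_gray_code(captures, inverses):
    """Decode Gray-code structured light captures into a per-camera-pixel
    projector coordinate (hypothetical helper). captures[i] / inverses[i]
    are grayscale photos (H x W arrays) of the i-th projected bit pattern
    and its inverse; run once for column patterns, once for row patterns."""
    code = np.zeros(captures[0].shape, dtype=np.uint32)
    for pattern, inverse in zip(captures, inverses):
        bit = pattern.astype(np.int32) > inverse.astype(np.int32)
        code = (code << 1) | bit.astype(np.uint32)
    # Gray -> binary: XOR with successively shifted copies of itself.
    binary, shift = code.copy(), code >> 1
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary  # e.g., projector column index for every camera pixel

def build_projector_image(cam_image, proj_cols, proj_rows, proj_w, proj_h):
    """Forward-splat the camera image into the projector's point of view
    using the decoded correspondences. A real system would also filter
    unreliable pixels and fill the holes this naive splat leaves behind."""
    proj_img = np.zeros((proj_h, proj_w, 3), dtype=cam_image.dtype)
    u = np.clip(proj_cols, 0, proj_w - 1).ravel()
    v = np.clip(proj_rows, 0, proj_h - 1).ravel()
    proj_img[v, u] = cam_image.reshape(-1, 3)
    return proj_img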
PROCEDURAL EFFECTS
Lightform uses the structured light scan data to create instant effects, which make it easy to produce complex, dynamic motion content without motion graphics or multimedia expertise. Using the projector image, effects are created by applying various image processing and computer vision techniques that result in interesting and unique animations that adapt to the scene. For example, the “TRON” effect traces edges present in the scene: it first applies Canny edge detection to the projector image, then uses a shader (in the spirit of Shadertoy) to perform edge tracing on the detected edges. The result is an animation in which the edges of the scene glow and change over time. We show several of these effects in the accompanying video, and we think of them as AR filters (in the image processing sense) for the real world.
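To make the TRON recipe concrete, here is a minimal single-frame sketch using OpenCV. The glow animation, a pulse swept outward from the edges via a distance transform, is our own stand-in for the actual edge-tracing shader, which the paper does not detail.

import numpy as np
import cv2

def tron_frame(projector_image, t, speed=120.0, band=40.0):
    """One frame of a TRON-like effect at time t (seconds): Canny edges
    plus a glowing pulse that sweeps outward from every edge."""
    gray = cv2.cvtColor(projector_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Distance from each pixel to its nearest edge drives the animation.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    phase = (dist - t * speed) % band
    glow = np.exp(-(phase / (0.15 * band)) ** 2)   # narrow moving pulse
    frame = np.zeros_like(projector_image, dtype=np.float32)
    frame[..., 0] = 255.0 * glow                   # blue-heavy, TRON-like
    frame[..., 1] = 180.0 * glow
    frame[edges > 0] = (255.0, 255.0, 255.0)       # keep the edges lit
    return frame.astype(np.uint8)

Rendering tron_frame for increasing t and sending the frames to the projector yields scene edges that appear to glow and ripple over time.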
LIGHTFORM CREATOR
Lightform Creator is a cross-platform desktop software application. It can control the LF1 (trigger a scan, stream video/camera images, play/pause content, etc.), but most importantly, it is a tool for easily creating projected AR content. It can be described as a hybrid between 2D multimedia editing software (e.g., Adobe’s After Effects) and 3D modeling software (e.g., Autodesk’s Maya); however, our aim is to simplify user interaction through machine learning and computer vision. For the purposes of this paper, we will focus on Lightform Creator’s procedural effects capabilities. (Procedural, in this sense, means that effects adapt automatically to the contents, color and/or geometry, of a given scene.)

After capturing a scan and receiving the projector image from the LF1, Lightform Creator allows a user to quickly create masks (regions where content should be projected in a scene) with vision-assisted tools similar to Photoshop’s Quick Select, Magic Wand, and Magnetic Lasso tools (see the sketch following the applications discussion below). Once masks have been created, the user can choose from a list of procedural effects to be applied to the scene. For example, TRON traces the edges of a scene, another effect distorts a scene to attract attention, and another makes the scene appear as if it were a hand-drawn cartoon; many effects are inspired by the IllumiRoom project [Jones et al. 2013]. See the accompanying video for a demonstration.

APPLICATIONS
In the past, projected AR has been demonstrated primarily in the academic community and in high-budget concerts and shows. Lightform encourages and enables these as well as many new AR applications; unlike existing AR technology, these do not require headsets or additional hardware, and can be consumed by multiple viewers at the same time.

We are focused on making projected AR experiences shared, frictionless, cost-effective, and hidden by presenting an out-of-home technology that can be scaled to many viewers and viewed naturally with the naked eye. We believe in the importance of replacing printed signs or television screens to overlay digital information onto the real world using a technology that is invisible, seamless, and unencumbered by screens, providing an out-of-home digital display experience that is novel to those who experience it. Digital data, art, and “magic” can be seamlessly integrated into everyday spaces.

We are particularly interested in using Lightform to communicate and display information or attract attention toward a specific object or venue (digital signage), create stand-alone art pieces (ambiance), design a space (place-making and identity), or purely provide entertainment and joy (art/performance). A range of projects can be enabled with projected AR: custom digital content can augment and transform a mural; a cafe menu can be easily animated with dynamic, updatable items without affecting the aesthetic of a restaurant; compelling effects can animate a 3D sculpture; in general, spaces can be transformed completely, allowing viewers to engage with their surroundings in new and meaningful ways.
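As referenced in the Lightform Creator section above, here is a minimal sketch of a vision-assisted masking tool. We use GrabCut as a plausible stand-in for a Quick Select-style tool; the paper does not detail the actual algorithms behind Lightform Creator’s tools, and the function name is ours.

import numpy as np
import cv2

def quick_select_mask(projector_image, rect, iterations=5):
    """Refine a rough user-drawn rectangle (x, y, w, h) around an object
    in the projector image into a tight per-pixel mask via GrabCut."""
    mask = np.zeros(projector_image.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(projector_image, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Definite or probable foreground pixels become the projection mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)

# Example: confine projected content to one object in the scene.
# mask = quick_select_mask(projector_image, rect=(120, 80, 300, 260))

Because masks are defined on the projector image, content rendered inside a mask lands on the corresponding physical object without any further mapping step.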
DEMONSTRATION
We will demonstrate an interactive tabletop where visitors will be able to customize their own projected AR scene using various physical objects and a drawing station. The LF1 will scan the setup the visitors have created, and users will then have the ability to apply effects onto their scene. We will also demonstrate a second LF1 setup running premade effects on a variety of art pieces and sculptures within the booth.
REFERENCES
Richard Hartley and Andrew Zisserman. 2003. Multiple View Geometry in Computer Vision (2nd ed.). Cambridge University Press, New York, NY, USA.
Brett R. Jones, Hrvoje Benko, Eyal Ofek, and Andrew D. Wilson. 2013. IllumiRoom: Peripheral Projected Illusions for Interactive Experiences. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 869-878.
S. Yamazaki, M. Mochimaru, and T. Kanade. 2011. Simultaneous self-calibration of a projector and a camera using structured light. In Proc. IEEE CVPR Workshops (Projector-Camera Systems).