Publication


Featured research published by Lior Shapira.


User Interface Software and Technology | 2014

RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units

Brett R. Jones; Rajinder Sodhi; Michael Murdock; Ravish Mehra; Hrvoje Benko; Andrew D. Wilson; Eyal Ofek; Blair MacIntyre; Nikunj Raghuvanshi; Lior Shapira

RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating and self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.
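As a rough illustration of the projection-mapping step described above, the sketch below projects a point from a unified 3D room model into one projector's image using pinhole intrinsics and a pose. The matrices are invented placeholders for what each unit's auto-calibration would produce, not the paper's actual pipeline.

```python
import numpy as np

# Minimal sketch: render a 3D point from the shared room model into one
# projector's image plane. K, R, t are illustrative stand-ins for the
# values a projector-depth camera unit's auto-calibration would produce.

K = np.array([[1400.0,    0.0, 960.0],   # assumed projector intrinsics
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # assumed rotation (world -> projector)
t = np.array([0.0, 0.0, 2.5])             # assumed translation in meters

def world_to_projector_pixel(p_world):
    """Project a 3D point in the unified room model to projector pixels."""
    p_cam = R @ p_world + t               # transform into the projector's frame
    if p_cam[2] <= 0:
        return None                       # point is behind the projector
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]               # perspective divide -> (u, v)

# Example: a point on a wall 2 m in front of the projector.
print(world_to_projector_pixel(np.array([0.3, 0.1, -0.5])))
```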


European Conference on Computer Vision | 2014

A Contour Completion Model for Augmenting Surface Reconstructions

Nathan Silberman; Lior Shapira; Ran Gal; Pushmeet Kohli

The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.
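The geometric intuition behind completing an occluded boundary can be shown in a few lines: extend two partially observed straight edges until they meet, hypothesizing the hidden corner. This toy sketch, with made-up coordinates, stands in for the paper's probabilistic Contour Completion Random Fields, which operate over full scene reconstructions.

```python
import numpy as np

# Toy stand-in for contour completion: given two partially observed straight
# boundary segments of an occluded tabletop, extend them to their intersection
# to hypothesize the hidden corner. The coordinates are invented.

def line_through(p, q):
    """Homogeneous line through two 2D points (cross-product trick)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Observed fragments of two table edges (the corner between them is occluded).
edge_a = line_through((0.0, 0.0), (1.0, 0.02))   # roughly horizontal edge
edge_b = line_through((1.6, 1.0), (1.58, 0.4))   # roughly vertical edge

print("hypothesized occluded corner:", intersect(edge_a, edge_b))
```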


International Symposium on Mixed and Augmented Reality | 2014

FLARE: Fast layout for augmented reality applications

Ran Gal; Lior Shapira; Eyal Ofek; Pushmeet Kohli

Creating a layout for an augmented reality (AR) application which embeds virtual objects in a physical environment is difficult, as it must adapt to any physical space. We propose a rule-based framework for generating object layouts for AR applications. Under our framework, the developer of an AR application specifies a set of rules (constraints) which enforce self-consistency (rules regarding the inter-relationships of application components) and scene-consistency (application components are consistent with the physical environment they are placed in). When a user enters a new environment, we create, in real time, a layout for the application which is consistent with the defined constraints (as much as possible). We find the optimal configurations for each object by solving a constraint-satisfaction problem. Our stochastic move-making algorithm is domain-aware, and allows us to efficiently converge to a solution for most rule-sets. In the paper we demonstrate several augmented reality applications that automatically adapt to different rooms and changing circumstances in each room.
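To make the constraint-satisfaction view concrete, here is a minimal stochastic-layout sketch in the same spirit: two objects are placed in a 2D room by simulated annealing over a cost that mixes a scene rule (stay inside the room) and a self-consistency rule (sit about a meter apart). The rules, room size and schedule are all invented for illustration; the paper's domain-aware move-making is not reproduced.

```python
import math, random

ROOM_W, ROOM_H = 4.0, 3.0   # assumed room footprint in meters

def cost(layout):
    (x1, y1), (x2, y2) = layout
    c = 0.0
    for x, y in layout:                      # scene-consistency: stay in the room
        c += max(0.0, -x) + max(0.0, x - ROOM_W)
        c += max(0.0, -y) + max(0.0, y - ROOM_H)
    d = math.hypot(x1 - x2, y1 - y2)         # self-consistency: objects ~1 m apart
    return c + (d - 1.0) ** 2

def anneal(steps=20000, temp=1.0, cooling=0.9995):
    layout = [(random.uniform(0, ROOM_W), random.uniform(0, ROOM_H)) for _ in range(2)]
    best, best_c = layout, cost(layout)
    for _ in range(steps):
        i = random.randrange(2)              # stochastic move: jitter one object
        cand = list(layout)
        cand[i] = (cand[i][0] + random.gauss(0, 0.2), cand[i][1] + random.gauss(0, 0.2))
        dc = cost(cand) - cost(layout)
        if dc < 0 or random.random() < math.exp(-dc / temp):
            layout = cand
            if cost(layout) < best_c:
                best, best_c = layout, cost(layout)
        temp *= cooling
    return best, best_c

print(anneal())
```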


International Symposium on Mixed and Augmented Reality | 2016

Reality Skins: Creating Immersive and Tactile Virtual Environments

Lior Shapira; Daniel Freedman

Reality Skins enables mobile and large-scale virtual reality experiences, dynamically generated based on the user's environment. A head-mounted display (HMD) coupled with a depth camera is used to scan the user's surroundings: reconstruct geometry, infer floor plans, and detect objects and obstacles. From these elements we generate a Reality Skin, a 3D environment which replaces office or apartment walls with the corridors of a spaceship or underground tunnels, and replaces chairs and desks, sofas and beds with crates and computer consoles, fungi and crumbling ancient statues. The placement of walls, furniture and objects in the Reality Skin attempts to approximate reality, such that the user can move around and touch virtual objects with tactile feedback from real objects. Each possible Reality Skins world consists of objects, materials and custom scripts. Taking cues from the user's surroundings, we create a unique environment combining these building blocks, attempting to preserve the geometry and semantics of the real world. We tackle 3D environment generation as a constraint satisfaction problem, and break it into two parts: First, we use Markov Chain Monte Carlo optimization, over a simple 2D polygonal model, to infer the layout of the environment (the structure of the virtual world). Then, we populate the world with various objects and characters, attempting to satisfy geometric (virtual objects should align with objects in the environment), semantic (a virtual chair aligns with a real one), physical (avoid collisions, maintain stability) and other constraints. We find a discrete set of transformations for each object satisfying unary constraints, incorporate pairwise and higher-order constraints, and optimize globally using a recent technique based on semidefinite relaxation.
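A small sketch of the second stage under stated assumptions: each object has a discrete set of candidate placements (those passing its unary constraints), and one placement per object is chosen to minimize pairwise costs. With a handful of objects, exhaustive search is enough to illustrate the idea; the paper instead optimizes globally via semidefinite relaxation, and all candidates and rules below are invented.

```python
import itertools, math

# Each virtual object has a discrete set of candidate 2D placements that
# already satisfy its unary constraints; we pick one per object to minimize
# pairwise costs. The candidates and the "~0.8 m apart" rule are made up.

candidates = {
    "crate":   [(0.5, 0.5), (1.0, 2.0), (3.0, 1.0)],   # stand-in for a real chair's poses
    "console": [(1.2, 2.1), (2.8, 0.9), (0.4, 0.6)],   # stand-in for a real desk's poses
}

def pairwise_cost(p, q, target=0.8):
    """Prefer the crate and console to sit ~0.8 m apart (illustrative rule)."""
    return (math.dist(p, q) - target) ** 2

names = list(candidates)
best = min(
    itertools.product(*(candidates[n] for n in names)),
    key=lambda placement: sum(
        pairwise_cost(placement[i], placement[j])
        for i in range(len(names)) for j in range(i + 1, len(names))
    ),
)
print(dict(zip(names, best)))
```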


International Conference on Acoustics, Speech, and Signal Processing | 2013

Image deblurring using maps of highlights

Fabiane Queiroz; Tsang Ing Ren; Lior Shapira; Ron Banner

In deblurring an image, we seek to recover the original sharp image. However, without knowledge of the blurring process, we cannot expect to recover the image perfectly. We propose a single-image deblurring method in which the blur kernel is estimated directly from highlight spots or streaks with high intensity values. These highlights arise from specular reflection of light, which may appear in an eye, on a shiny surface, or in small light sources present in the image. In this work, we automatically detect these highlight points in a blurred image, creating a map of highlights which is used as a guide to automatically extract a single highlight from the blurred image. Due to their unique nature, we demonstrate that these highlight points give a good estimate of the blur kernel, and a sharp image is restored using this kernel. The experimental results show the performance of this method in comparison to several other deblurring methods.
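The core trick can be sketched compactly: an isolated specular highlight in a blurred image is approximately the blur kernel itself (a point light convolved with the kernel), so cropping and normalizing it yields a kernel estimate for deconvolution. The brightest-pixel detection and Wiener filter below are simple stand-ins for the paper's highlight-map construction, not its actual method.

```python
import numpy as np

def kernel_from_highlight(blurred, size=15):
    """Crop the brightest spot and normalize it into a blur-kernel estimate."""
    y, x = np.unravel_index(np.argmax(blurred), blurred.shape)  # brightest pixel
    h = size // 2
    patch = blurred[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    patch -= patch.min()                                        # remove local background
    return patch / patch.sum()                                  # normalize to sum to 1

def wiener_deblur(blurred, kernel, snr=0.01):
    """Frequency-domain Wiener deconvolution with a scalar noise term."""
    H = np.fft.fft2(kernel, s=blurred.shape)                    # kernel spectrum
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + snr)                 # Wiener filter
    return np.real(np.fft.ifft2(F))

# Usage on a synthetic example (sharp image and true kernel assumed):
# blurred = scipy.signal.fftconvolve(sharp, true_kernel, mode="same")
# restored = wiener_deblur(blurred, kernel_from_highlight(blurred))
```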


Computer Vision and Image Understanding | 2017

ASIST: Automatic semantically invariant scene transformation

Or Litany; Tal Remez; Daniel Freedman; Lior Shapira; Alexander M. Bronstein; Ran Gal

We present ASIST, a technique for transforming point clouds by replacing objects with their semantically equivalent counterparts. Transformations of this kind have applications in virtual reality, repair of fused scans, and robotics. ASIST is based on a unified formulation of semantic labeling and object replacement; both result from minimizing a single objective. We present numerical tools for the efficient solution of this optimization problem. The method is experimentally assessed on new datasets of both synthetic and real point clouds, and is additionally compared to two recent works on object replacement on data from the corresponding papers.
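As a hedged illustration of the object-replacement half, the sketch below scores database models against a scanned segment with a symmetric chamfer distance and swaps in the best match. ASIST couples this matching with semantic labeling in a single objective, which this toy, built on random stand-in clouds, does not attempt.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(a, b):
    """Symmetric chamfer distance between two (N,3) / (M,3) point sets."""
    da, _ = cKDTree(b).query(a)   # nearest-neighbor distances a -> b
    db, _ = cKDTree(a).query(b)   # and b -> a
    return da.mean() + db.mean()

def best_replacement(segment, model_db):
    """Name of the database model closest to the scanned segment."""
    return min(model_db, key=lambda name: chamfer(segment, model_db[name]))

# Toy usage with random stand-in clouds (not real scan data):
rng = np.random.default_rng(0)
segment = rng.normal(size=(200, 3))
db = {"chair_a": rng.normal(size=(200, 3)),
      "chair_b": rng.normal(2.0, 1.0, size=(200, 3))}
print(best_replacement(segment, db))
```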


International Symposium on Mixed and Augmented Reality | 2016

TactileVR: Integrating Physical Toys into Learn and Play Virtual Reality Experiences

Lior Shapira; Judith Amores; Xavier Benavides

We present TactileVR, a proof-of-concept virtual reality system in which a user is free to move around and interact with physical objects and toys, which are represented in the virtual world. By integrating tracking information from the head, hands and feet of the user, as well as the objects, we infer complex gestures and interactions such as shaking a toy, rotating a steering wheel, or clapping your hands. We create educational and recreational experiences for kids which promote exploration and discovery, while feeling intuitive and safe. In each experience, objects have a unique appearance and behavior; e.g., in an electric-circuits lab, toy blocks serve as switches, batteries and light bulbs. We conducted a user study with children ages 5–11, who experienced TactileVR and interacted with virtual proxies of physical objects. Children took instantly to the TactileVR environment, intuitively discovered a variety of interactions, and completed tasks faster than with non-tactile virtual objects. Moreover, the presence of physical toys created the opportunity for collaborative play, even when only some of the kids were using a VR headset.
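One of the gestures mentioned above, shaking a toy, can be inferred from a tracked object's position samples alone; the sketch below counts reversals of the dominant velocity direction within a short window. The thresholds and window length are invented, and a real system would fuse head, hand and object tracks with more cues.

```python
import numpy as np

def is_shaking(positions, dt=1/60, min_reversals=4, min_speed=0.3):
    """positions: (N,3) tracker samples over ~1 s; True if shaken back and forth."""
    v = np.diff(positions, axis=0) / dt              # finite-difference velocities
    speed = np.linalg.norm(v, axis=1)
    axis = v[np.argmax(speed)]                       # dominant motion direction
    axis = axis / np.linalg.norm(axis)
    s = v @ axis                                     # velocity along that direction
    moving = speed > min_speed                       # ignore near-stationary samples
    reversals = np.count_nonzero(np.diff(np.sign(s[moving])))
    return reversals >= min_reversals

# Toy usage: 1 s of 5 Hz back-and-forth motion along x should register as a shake.
t = np.linspace(0, 1, 60)
track = np.stack([0.1 * np.sin(2 * np.pi * 5 * t),
                  np.zeros_like(t), np.zeros_like(t)], axis=1)
print(is_shaking(track))
```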


International Symposium on Mixed and Augmented Reality | 2016

The RealityMashers: Augmented Reality Wide Field-of-View Optical See-Through Head Mounted Displays

Jaron Lanier; Victor Mateevitsi; Kishore Rathinavel; Lior Shapira; Joseph Menke; Patrick Therien; Joshua Hudman; Gheric Speiginer; Andrea Stevenson Won; Andrzej Banburski; Xavier Benavides; Judith Amores; Javier Porras Lurashi; Wayne Chang

Optical see-through (OST) displays can overlay computer-generated graphics on top of the physical world, effectively fusing the two worlds together. However, current OST displays have a limited field-of-view (FOV) compared to human vision, and are powered by laptops, which hinders their mobility. Furthermore, the systems are designed for single-user experiences and therefore cannot be used for collocated multi-user applications. In this paper we contribute the design of the RealityMashers, two wide-FOV OST displays that can be manufactured using rapid-prototyping techniques. We also contribute preliminary user feedback providing insights into enhancing future RealityMasher experiences. By providing the RealityMashers' schematics we hope to make Augmented Reality more accessible and, as a result, accelerate research in the field.
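As a back-of-the-envelope aid to the FOV discussion, the helper below estimates the horizontal FOV of a simple see-through combiner from its width and its distance to the eye; the example dimensions are invented, not the RealityMashers' actual geometry.

```python
import math

def horizontal_fov_deg(combiner_width_m, eye_distance_m):
    """Horizontal FOV subtended by a flat combiner centered in front of the eye."""
    return math.degrees(2 * math.atan(combiner_width_m / (2 * eye_distance_m)))

# Illustrative numbers only: a 12 cm combiner 5 cm from the eye spans ~100 degrees.
print(f"{horizontal_fov_deg(0.12, 0.05):.1f} degrees")
```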


Archive | 2015

Contour completion for augmenting surface reconstructions

Lior Shapira; Ran Gal; Eyal Ofek; Pushmeet Kohli; Nathan Silberman


Archive | 2013

Protecting privacy in web-based immersive augmented reality

David Molnar; John Vilk; Eyal Ofek; Alexander Moshchuk; Jiahe Wang; Ran Gal; Lior Shapira; Douglas C. Burger; Blair MacIntyre; Benjamin Livshits

Collaboration


Dive into Lior Shapira's collaborations.

Top Co-Authors

Judith Amores

Massachusetts Institute of Technology

Xavier Benavides

Massachusetts Institute of Technology
