
Publications


Featured research published by Eyal Ofek.


Computer Vision and Pattern Recognition | 2010

Detecting text in natural scenes with stroke width transform

Boris Epshtein; Eyal Ofek; Yonatan Wexler

We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect text in many fonts and languages.
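
The core operator can be sketched in a few dozen lines. Below is a minimal, simplified take on the stroke-width idea, assuming OpenCV and NumPy and a uint8 grayscale input; the published operator additionally applies a median pass along each ray and groups pixels into letter candidates, which this sketch omits.

# Simplified sketch of the stroke-width idea: from each edge pixel, cast a
# ray along the gradient until an opposing edge is hit; the ray length
# approximates the local stroke width.
import cv2
import numpy as np

def stroke_width_transform(gray, max_width=50):
    edges = cv2.Canny(gray, 100, 300)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy) + 1e-9
    dx, dy = gx / mag, gy / mag          # unit gradient direction
    h, w = gray.shape
    swt = np.full((h, w), np.inf)
    for y0, x0 in zip(*np.nonzero(edges)):
        ray = [(x0, y0)]
        for step in range(1, max_width):
            xi = int(round(x0 + dx[y0, x0] * step))
            yi = int(round(y0 + dy[y0, x0] * step))
            if not (0 <= xi < w and 0 <= yi < h):
                break
            ray.append((xi, yi))
            if edges[yi, xi]:
                # Opposite edge found: accept only if gradients roughly oppose.
                if dx[y0, x0] * dx[yi, xi] + dy[y0, x0] * dy[yi, xi] < -0.5:
                    width = np.hypot(xi - x0, yi - y0)
                    for px, py in ray:
                        swt[py, px] = min(swt[py, px], width)
                break
    return swt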


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Full-frame video stabilization with motion inpainting

Yasuyuki Matsushita; Eyal Ofek; Weina Ge; Xiaoou Tang; Heung-Yeung Shum

Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
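
The stabilization core (estimate inter-frame motion, smooth the accumulated camera path, re-warp each frame toward the smooth path) can be sketched as below, assuming OpenCV; the moving-average smoother is an illustrative stand-in, and the paper's motion-inpainting completion and deblurring are not reproduced here.

# Sketch of the stabilization core: per-frame affine motion, a smoothed
# trajectory, and corrective warps. Full-frame completion is NOT shown.
import cv2
import numpy as np

def stabilize(frames, radius=15):
    transforms = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for f in frames[1:]:
        cur = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev, 200, 0.01, 30)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, cur, p0, None)
        m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev = cur
    traj = np.cumsum(transforms, axis=0)            # accumulated camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.column_stack([np.convolve(traj[:, i], kernel, mode="same")
                              for i in range(3)])
    corrected = np.asarray(transforms) + (smooth - traj)
    h, w = frames[0].shape[:2]
    out = [frames[0]]
    for f, (dx, dy, da) in zip(frames[1:], corrected):
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        out.append(cv2.warpAffine(f, m, (w, h)))
    return out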


International Conference on Computer Graphics and Interactive Techniques | 2008

Image-based façade modeling

Jianxiong Xiao; Tian Fang; Ping Tan; Peng Zhao; Eyal Ofek; Long Quan

In this paper we propose a semi-automatic image-based approach to facade modeling that uses images captured along streets and relies on structure from motion to recover camera positions and point clouds automatically as the initial stage for modeling. We start by considering a building facade as a flat rectangular plane or a developable surface with an associated texture image composited from the multiple visible images. A facade is then decomposed and structured into a directed acyclic graph of rectilinear elementary patches. The decomposition is carried out top-down by recursive subdivision, followed by bottom-up merging with detection of architectural bilateral symmetry and repetitive patterns. Each subdivided patch of the flat facade is augmented with a depth optimized using the 3D point cloud. Our system also allows for easy user feedback in the 2D image space on the proposed decomposition and augmentation. Finally, our approach is demonstrated on a large number of facades from a variety of street-side images.
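
The top-down subdivision step can be illustrated with a toy recursive splitter on a rectified facade image: splits here are placed at the rows or columns with the strongest edge response. The depth limit and minimum patch size are illustrative parameters; the paper's DAG construction, symmetry detection, and bottom-up merging are omitted.

# Toy sketch of top-down facade decomposition: recursively split a rectified
# facade image at the strongest interior edge row/column, producing a tree of
# rectangular patches (the paper further merges these into a DAG).
import cv2
import numpy as np

def subdivide(gray, x0, y0, x1, y1, depth=0, min_size=40, patches=None):
    if patches is None:
        patches = []
    region = gray[y0:y1, x0:x1]
    if depth >= 4 or min(region.shape) <= 2 * min_size:
        patches.append((x0, y0, x1, y1))
        return patches
    # Edge energy projected onto the rows and columns of this region.
    edges = cv2.Canny(region, 50, 150).astype(np.float64)
    row_e = edges.sum(axis=1)
    col_e = edges.sum(axis=0)
    # Candidate split: strongest interior row vs. strongest interior column.
    ry = min_size + int(np.argmax(row_e[min_size:-min_size]))
    cx = min_size + int(np.argmax(col_e[min_size:-min_size]))
    if row_e[ry] >= col_e[cx]:           # horizontal split
        subdivide(gray, x0, y0, x1, y0 + ry, depth + 1, min_size, patches)
        subdivide(gray, x0, y0 + ry, x1, y1, depth + 1, min_size, patches)
    else:                                # vertical split
        subdivide(gray, x0, y0, x0 + cx, y1, depth + 1, min_size, patches)
        subdivide(gray, x0 + cx, y0, x1, y1, depth + 1, min_size, patches)
    return patches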


Computer Vision and Pattern Recognition | 2005

Full-frame video stabilization

Yasuyuki Matsushita; Eyal Ofek; Xiaoou Tang; Heung-Yeung Shum

Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing low-resolution stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
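
The deblurring idea, transferring sharper pixels from neighboring frames rather than estimating point spread functions, is sketched below. Local Laplacian energy is used as a stand-in sharpness proxy, and frames are assumed to be already aligned; the paper warps neighbors to the current frame first and uses its own weighting.

# Sketch of sharp-pixel transfer: blend in neighboring-frame pixels weighted
# by how much sharper they are than the current frame at that location.
import cv2
import numpy as np

def sharpness(gray, ksize=9):
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return cv2.blur(lap * lap, (ksize, ksize))   # local Laplacian energy

def transfer_deblur(frames, t, window=2):
    base = frames[t].astype(np.float64)
    base_sharp = sharpness(cv2.cvtColor(frames[t], cv2.COLOR_BGR2GRAY))
    acc, wsum = base.copy(), np.ones(base_sharp.shape)
    for dt in range(-window, window + 1):
        if dt == 0 or not (0 <= t + dt < len(frames)):
            continue
        nb = frames[t + dt].astype(np.float64)
        nb_sharp = sharpness(cv2.cvtColor(frames[t + dt], cv2.COLOR_BGR2GRAY))
        # Only pixels sharper than the current frame contribute.
        w = np.maximum(nb_sharp / (base_sharp + 1e-9) - 1.0, 0.0)
        acc += nb * w[..., None]
        wsum += w
    return (acc / wsum[..., None]).astype(np.uint8)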


User Interface Software and Technology | 2014

RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units

Brett R. Jones; Rajinder Sodhi; Michael Murdock; Ravish Mehra; Hrvoje Benko; Andrew D. Wilson; Eyal Ofek; Blair MacIntyre; Nikunj Raghuvanshi; Lior Shapira

RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection-mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge, and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The units are individually auto-calibrating and self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive, and discuss the design challenges of adapting any game to any room.
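
At the heart of such projection mapping is re-projecting room geometry through each calibrated projector. A minimal pinhole sketch follows, with K, R, and t standing in for one unit's calibration; RoomAlive recovers these automatically per unit, and the distributed framework is not shown.

# Minimal pinhole sketch: map 3D room points into one projector's pixels
# using its calibration (intrinsics K, rotation R, translation t).
import numpy as np

def project_to_projector(points_3d, K, R, t):
    cam = points_3d @ R.T + t            # room coords -> projector coords
    pix = cam @ K.T                      # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]      # perspective divide -> pixels

# Example with made-up calibration for a 1280x800 projector.
K = np.array([[1400.0, 0, 640], [0, 1400.0, 400], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
print(project_to_projector(np.array([[0.2, -0.1, 1.0]]), K, R, t))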


Interactive 3D Graphics and Games | 2005

Interactive deformation of light fields

Billy Chen; Eyal Ofek; Heung-Yeung Shum; Marc Levoy

We present a software pipeline that enables an animator to deform light fields. The pipeline can be used to deform complex objects, such as furry toys, while maintaining photo-realistic quality. Our pipeline consists of three stages. First, we split the light field into sub-light fields. To facilitate splitting of complex objects, we employ a novel technique based on projected light patterns. Second, we deform each sub-light field. To do this, we provide the animator with controls similar to volumetric free-form deformation. Third, we recombine and render each sub-light field. Our rendering technique properly handles visibility changes due to occlusion among sub-light fields. To ensure consistent illumination of objects after they have been deformed, our light fields are captured with the light source fixed to the camera, rather than being fixed to the object. We demonstrate our deformation pipeline using synthetic and photographically acquired light fields. Potential applications include animation, interior design, and interactive gaming.
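
The "controls similar to volumetric free-form deformation" can be illustrated with the classic Sederberg-Parry lattice FFD, here applied to plain 3D points rather than to the rays of a sub-light field; a hedged sketch, not the paper's pipeline.

# Sketch of volumetric free-form deformation: points in the unit cube are
# re-expressed in a Bernstein basis over a lattice of control points, so
# moving control points smoothly deforms the embedded points.
import numpy as np
from math import comb

def ffd(points, lattice):
    """points: (N,3) in [0,1]^3; lattice: (l,m,n,3) control points."""
    l, m, n, _ = lattice.shape
    def bernstein(k, deg, u):
        return comb(deg, k) * (1 - u) ** (deg - k) * u ** k
    out = np.zeros_like(points)
    for i in range(l):
        bi = bernstein(i, l - 1, points[:, 0])
        for j in range(m):
            bj = bernstein(j, m - 1, points[:, 1])
            for k in range(n):
                bk = bernstein(k, n - 1, points[:, 2])
                out += (bi * bj * bk)[:, None] * lattice[i, j, k]
    return out

# Identity lattice, then nudge one control point to bend the volume.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), -1)
grid[2, 1, 1] += [0.3, 0.0, 0.0]
print(ffd(np.array([[0.9, 0.5, 0.5]]), grid))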


International Conference on Computer Graphics and Interactive Techniques | 2005

Modeling hair from multiple views

Yichen Wei; Eyal Ofek; Long Quan; Heung-Yeung Shum

In this paper, we propose a novel image-based approach to model hair geometry from images taken at multiple viewpoints. Unlike previous hair modeling techniques that require intensive user interaction or rely on a special capturing setup under controlled illumination conditions, we use a handheld camera to capture hair images under uncontrolled illumination conditions. Our multi-view approach is natural and flexible for capturing, and provides inherently strong and accurate geometric constraints to recover hair models. In our approach, the hair fibers are synthesized from local image orientations. Each synthesized fiber segment is validated and optimally triangulated from all visible views. The hair volume and the visibility of synthesized fibers can also be reliably estimated from multiple views. Flexibility of acquisition, little user interaction, and high-quality results for recovered complex hair models are the key advantages of our method.
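
Estimating local image orientations, the starting point for fiber synthesis, is commonly done with a structure tensor; a minimal sketch follows. The paper uses oriented filters and adds multi-view validation and triangulation on top, none of which is shown here.

# Sketch of a per-pixel orientation field via the structure tensor; the
# dominant strand direction is perpendicular to the gradient direction.
import cv2
import numpy as np

def orientation_field(gray, sigma=3):
    ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Smoothed structure-tensor entries.
    jxx = cv2.GaussianBlur(ix * ix, (0, 0), sigma)
    jyy = cv2.GaussianBlur(iy * iy, (0, 0), sigma)
    jxy = cv2.GaussianBlur(ix * iy, (0, 0), sigma)
    # Gradient orientation, rotated 90 degrees to follow the strand.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy) + np.pi / 2
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-9)
    return theta, coherence   # angle per pixel plus a confidence measure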


International Conference on Computer Graphics and Interactive Techniques | 1998

Interactive reflections on curved objects

Eyal Ofek; Ari Rappoport

Global view-dependent illumination phenomena, in particular reflections, greatly enhance the realism of computer-generated imagery. Current interactive rendering methods do not provide satisfactory support for reflections on curved objects. In this paper we present a novel method for interactive computation of reflections on curved objects. We transform potentially reflected scene objects according to reflectors to generate virtual objects. These are rendered by the graphics system as ordinary objects, creating a reflection image that is blended with the primary image. Virtual objects are created by tessellating scene objects and computing a virtual vertex for each resulting scene vertex. Virtual vertices are computed using a novel space subdivision, the reflection subdivision. For general polygonal mesh reflectors, we present an associated approximate acceleration scheme, the explosion map. For specific types of objects (e.g., linear extrusions of planar curves) the reflection subdivision can be reduced to a 2-D one that can be utilized more accurately and efficiently.
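
For the simplest case, a planar reflector, the virtual vertex is just the mirror image of the scene vertex across the reflector plane, as sketched below; the paper's reflection subdivision and explosion map generalize this to curved reflectors, which this sketch does not cover.

# Minimal sketch of the virtual-vertex idea for a PLANAR reflector: mirror
# each scene vertex across the plane and render the result as an ordinary
# object, blending it into the primary image.
import numpy as np

def virtual_vertices(vertices, plane_point, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (vertices - plane_point) @ n        # signed distance to the plane
    return vertices - 2.0 * d[:, None] * n  # mirror across the plane

# Mirror a triangle across the z=0 plane.
tri = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.0, 1.0, 1.5]])
print(virtual_vertices(tri, np.zeros(3), np.array([0.0, 0.0, 1.0])))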


Human Factors in Computing Systems | 2016

Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences

Mahdi Azmandian; Mark S. Hancock; Hrvoje Benko; Eyal Ofek; Andrew D. Wilson

Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge, as each one needs to be accompanied by a precisely located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation, and a hybrid technique that combines the two. Our study results indicate that all our haptic retargeting techniques improve the sense of presence compared to typical wand-based 3D control of virtual objects. Furthermore, our hybrid haptic retargeting achieved the highest satisfaction and presence scores while limiting the visible side-effects during interaction.
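
The body-manipulation variant can be sketched as a hand warp whose strength grows as the physical hand approaches the prop, so the virtual hand arrives at the virtual object exactly when the physical hand touches the prop. The linear ramp and its start distance below are illustrative choices, not the paper's exact mapping.

# Sketch of body-warped retargeting: offset the rendered (virtual) hand by a
# fraction of the prop-to-object offset that ramps up near the prop.
import numpy as np

def warped_hand(physical_hand, prop_pos, virtual_obj_pos, start_dist=0.6):
    dist = np.linalg.norm(physical_hand - prop_pos)
    alpha = np.clip(1.0 - dist / start_dist, 0.0, 1.0)  # 0 far, 1 at prop
    return physical_hand + alpha * (virtual_obj_pos - prop_pos)

# When the physical hand reaches the prop, the virtual hand reaches the
# virtual object, even though the two are 20 cm apart.
prop = np.array([0.0, 0.0, 0.0])
virt = np.array([0.2, 0.0, 0.0])
print(warped_hand(prop.copy(), prop, virt))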


Eurographics | 2010

Seamless Montage for Texturing Models

Ran Gal; Yonathan Wexler; Eyal Ofek; Hugues Hoppe; Daniel Cohen-Or

We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics.
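
The combinatorial search can be illustrated with a tiny discrete-labeling loop in which each triangle picks a source image by trading a per-triangle data cost against seam costs with its neighbors. Naive ICM stands in for the paper's stronger optimizer, and the search over local alignment shifts is omitted.

# Tiny sketch of the labeling problem: assign each mesh triangle a source
# image, trading per-triangle quality (data cost) against seam cost with
# adjacent triangles, via naive iterated conditional modes (ICM).
import numpy as np

def label_triangles(data_cost, adjacency, seam_cost, iters=10):
    """data_cost: (T, L) cost of image l on triangle t;
    adjacency: list of (t1, t2) edges; seam_cost: (L, L) label-pair cost."""
    T, L = data_cost.shape
    labels = data_cost.argmin(axis=1)
    neighbors = [[] for _ in range(T)]
    for a, b in adjacency:
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(iters):
        for t in range(T):
            totals = data_cost[t] + sum(
                (seam_cost[:, labels[nb]] for nb in neighbors[t]),
                np.zeros(L))
            labels[t] = totals.argmin()
    return labels

# Three triangles in a strip, two candidate images.
data = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
seam = 0.3 * (1 - np.eye(2))        # switching images costs 0.3
print(label_triangles(data, [(0, 1), (1, 2)], seam))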
