Publications


Featured research published by Ran Gal.


ACM Transactions on Graphics | 2006

Salient geometric features for partial shape matching and similarity

Ran Gal; Daniel Cohen-Or

This article introduces a method for partial matching of surfaces represented by triangular meshes. Our method matches surface regions that are numerically and topologically dissimilar but approximately similar in shape. We introduce novel local surface descriptors which efficiently represent the geometry of local regions of the surface. The descriptors are defined independently of the underlying triangulation and form a compatible representation that allows matching of surfaces with different triangulations. To cope with the combinatorial complexity of partial matching of large meshes, we introduce the abstraction of salient geometric features and present a method to construct them. A salient geometric feature is a compound high-level feature of nontrivial local shapes. We show that a relatively small number of such salient geometric features characterizes the surface well for various similarity applications. Matching salient geometric features is based on indexing rotation-invariant features and a voting scheme accelerated by geometric hashing. We demonstrate the effectiveness of our method with a number of applications, such as computing self-similarity, alignments, and subpart similarity.
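The indexing-and-voting step lends itself to a small illustration. The sketch below hashes quantized descriptor vectors into a table built from a target surface and lets probe descriptors vote for candidate matches; the descriptor contents, cell size, and function names are hypothetical, and the paper's salient-feature construction and geometric verification stages are omitted.

```python
import numpy as np
from collections import defaultdict

def quantize(descriptor, cell=0.1):
    """Map a rotation-invariant descriptor vector to a discrete hash key."""
    return tuple(np.floor(np.asarray(descriptor) / cell).astype(int))

def build_hash_table(target_descriptors, cell=0.1):
    """Index every target feature by its quantized descriptor."""
    table = defaultdict(list)
    for feature_id, d in enumerate(target_descriptors):
        table[quantize(d, cell)].append(feature_id)
    return table

def vote(probe_descriptors, table, cell=0.1):
    """Each probe feature votes for the target features sharing its hash cell."""
    votes = defaultdict(int)
    for d in probe_descriptors:
        for feature_id in table.get(quantize(d, cell), []):
            votes[feature_id] += 1
    # Candidates with the most votes would be passed on to geometric verification.
    return sorted(votes.items(), key=lambda kv: -kv[1])

# Hypothetical usage with random 8-dimensional descriptors.
rng = np.random.default_rng(0)
target = rng.random((100, 8))
probe = target[:10] + rng.normal(scale=0.01, size=(10, 8))  # noisy copies of some targets
print(vote(probe, build_hash_table(target))[:5])
```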


International Conference on Computer Graphics and Interactive Techniques | 2009

iWIRES: an analyze-and-edit approach to shape manipulation

Ran Gal; Olga Sorkine; Niloy J. Mitra; Daniel Cohen-Or

Man-made objects are largely dominated by a few typical features that carry special characteristics and engineered meanings. State-of-the-art deformation tools fall short of preserving such characteristic features and global structure. We introduce iWIRES, a novel approach based on the argument that man-made models can be distilled using a few special 1D wires and their mutual relations. We hypothesize that maintaining the properties of such a small number of wires allows preserving the defining characteristics of the entire object. We introduce an analyze-and-edit approach where, prior to editing, we perform a lightweight analysis of the input shape to extract a descriptive set of wires. Analyzing the individual and mutual properties of the wires and augmenting them with geometric attributes makes them intelligent and ready to be manipulated. Editing the object by modifying the intelligent wires leads to a powerful editing framework that retains the original design intent and object characteristics. We show numerous results of manipulating man-made shapes using our editing technique.


Eurographics Symposium on Rendering Techniques | 2006

Feature-aware texturing

Ran Gal; Olga Sorkine; Daniel Cohen-Or

We present a method for inhomogeneous 2D texture mapping guided by a feature mask that preserves certain regions of the image, such as foreground objects or other prominent parts. The method can arbitrarily warp a given image while preserving the shape of its features by constraining their deformation to be a similarity transformation. In particular, our method allows global or local changes to the aspect ratio of the texture without causing undesirable shearing of the features. The algorithmic core of our method is a particular formulation of the Laplacian editing technique, suited to accommodate similarity constraints on parts of the domain. The method is useful in digital imaging, texture design, and any other application involving image warping where parts of the image have high familiarity and should retain their shape after modification.
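To give a flavor of the constrained least-squares formulation without reproducing the paper's 2D Laplacian editing setup, here is a much-simplified 1D analogue: a row of unit-width image columns is stretched to a new total width, and columns flagged as features are heavily weighted to keep their original width while the remaining columns absorb the stretch. All names, weights, and dimensions are illustrative.

```python
import numpy as np

def warp_columns(feature_mask, new_width, w_feat=100.0, w_free=1.0):
    """
    1D analogue of feature-constrained warping: given n unit-width columns,
    find new boundary positions x_0..x_n (x_0 = 0, x_n = new_width) so that
    feature columns keep a width close to 1 (heavy weight) while the other
    columns absorb the stretch, in the weighted least-squares sense.
    """
    n = len(feature_mask)
    weights = np.sqrt(np.where(feature_mask, w_feat, w_free))
    A = np.zeros((n, n - 1))   # unknowns: interior boundaries x_1..x_{n-1}
    b = np.full(n, 1.0)        # every column's target width is its original width 1
    for i in range(n):         # one equation per column: x_{i+1} - x_i = 1
        if i + 1 <= n - 1:
            A[i, i] = 1.0      # +x_{i+1} is an unknown
        else:
            b[i] -= new_width  # x_n is fixed at new_width, move it to the rhs
        if i >= 1:
            A[i, i - 1] = -1.0 # -x_i is an unknown (x_0 is fixed at 0)
        A[i] *= weights[i]
        b[i] *= weights[i]
    interior, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([0.0], interior, [new_width]))

# Hypothetical usage: 10 columns, columns 3-5 form a feature; stretch to width 15.
mask = np.zeros(10, dtype=bool)
mask[3:6] = True
x = warp_columns(mask, new_width=15.0)
print(np.diff(x))   # feature widths stay near 1, the other columns expand
```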


IEEE Transactions on Visualization and Computer Graphics | 2007

Pose-Oblivious Shape Signature

Ran Gal; Ariel Shamir; Daniel Cohen-Or

A 3D shape signature is a compact representation of some essence of a shape. Shape signatures are commonly used as a fast indexing mechanism for shape retrieval. Effective shape signatures capture global geometric properties that are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious, meaning that the signature is insensitive to transformations which change the pose of a 3D shape, such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to topology changes of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram combining the distributions of two scalar functions defined on the boundary surface of the 3D shape. The first is a novel function called the local-diameter function, which measures the diameter of the 3D shape in the neighborhood of each vertex; the histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function, which measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models.
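A minimal sketch of assembling such a signature: centricity is approximated by the average shortest-path distance along mesh edges, the local-diameter values are assumed to be supplied (computing them requires the interior measurements described in the paper), and the two functions are binned into a normalized 2D histogram. Function names and bin counts are illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def centricity(vertices, edges):
    """
    Approximate centricity per vertex: the average graph (edge-path) distance
    to all other vertices, used here as a stand-in for geodesic distance.
    vertices: (n, 3) array, edges: (m, 2) array of vertex index pairs.
    """
    i, j = edges[:, 0], edges[:, 1]
    lengths = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    n = len(vertices)
    graph = coo_matrix((lengths, (i, j)), shape=(n, n))
    dist = dijkstra(graph, directed=False)
    return dist.mean(axis=1)

def pose_signature(centricity_vals, local_diameter_vals, bins=16):
    """
    Combine the two per-vertex scalar functions into a 2D histogram signature.
    Values are normalized to [0, 1] so the signature is scale invariant.
    """
    def norm(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (np.ptp(v) + 1e-12)
    hist, _, _ = np.histogram2d(norm(centricity_vals), norm(local_diameter_vals),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()   # normalize so signatures of different meshes compare
```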


Eurographics | 2010

Seamless Montage for Texturing Models

Ran Gal; Yonathan Wexler; Eyal Ofek; Hugues Hoppe; Daniel Cohen-Or

We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics.
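The global assignment can be pictured as a discrete labeling problem over the triangle adjacency graph. The sketch below uses simple iterated conditional modes as a stand-in for the paper's labeling algorithm, and the unary and pairwise costs are placeholders rather than the actual photo-consistency and seam terms.

```python
import numpy as np

def label_triangles(unary, adjacency, pairwise, n_iters=20):
    """
    Assign one source-image label per triangle by greedily minimizing
        sum_t unary[t, l_t]  +  sum over adjacent (t, u) of pairwise(l_t, l_u)
    with iterated conditional modes.

    unary     : (n_triangles, n_labels) array of per-triangle costs
    adjacency : adjacency[t] lists the triangles sharing an edge with t
    pairwise  : function (label_a, label_b) -> seam cost between adjacent triangles
    """
    n_tris, n_labels = unary.shape
    labels = unary.argmin(axis=1)                 # initialize with the best unary choice
    for _ in range(n_iters):
        changed = False
        for t in range(n_tris):
            costs = unary[t].copy()
            for u in adjacency[t]:
                for l in range(n_labels):
                    costs[l] += pairwise(l, labels[u])
            best = costs.argmin()
            if best != labels[t]:
                labels[t] = best
                changed = True
        if not changed:
            break
    return labels

# Hypothetical usage: 4 triangles in a strip, 3 candidate source images,
# with a constant seam penalty whenever adjacent triangles disagree.
unary = np.array([[0.1, 0.5, 0.9],
                  [0.4, 0.2, 0.8],
                  [0.6, 0.1, 0.7],
                  [0.9, 0.3, 0.2]])
adjacency = [[1], [0, 2], [1, 3], [2]]
seam = lambda a, b: 0.0 if a == b else 0.3
print(label_triangles(unary, adjacency, seam))
```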


ACM Transactions on Graphics | 2007

Volume and shape preservation via moving frame manipulation

Yaron Lipman; Daniel Cohen-Or; Ran Gal; David Levin

This article introduces a method for mesh editing that is aimed at preserving shape and volume. We present two new developments: The first is a minimization of a functional expressing a geometric distance measure between two isometric surfaces. The second is a local volume analysis linking the volume of an object to its surface curvature. Our method is based upon the moving frames representation of meshes. Applying a rotation field to the moving frames defines an isometry. Given rotational constraints, the mesh is deformed by an optimal isometry defined by minimizing the distance measure between original and deformed meshes. The resulting isometry nicely preserves the surface details, but when large rotations are applied, the volumetric behavior of the model may be unsatisfactory. Using the local volume analysis, we define a scalar field by which we scale the moving frames. Scaled and rotated moving frames restore volumetric properties of the original mesh, while properly maintaining the surface details. Our results show that even extreme deformations can be applied to meshes, with only minimal distortion of surface details and object volume.


Symposium on Geometry Processing | 2007

Surface reconstruction using local shape priors

Ran Gal; Ariel Shamir; Tal Hassner; Mark Pauly; Daniel Cohen-Or

We present an example-based surface reconstruction method for scanned point sets. Our approach uses a database of local shape priors built from a set of given context models chosen to match the specific scan. Local neighborhoods of the input scan are matched with enriched patches of these models at multiple scales. Hence, instead of using a single prior for reconstruction, our method allows each region of the scan to match the most relevant prior that fits it best. Such high-confidence matches carry relevant information from the prior models to the scan, including normal data and feature classification, and are used to augment the input point set. This allows us to resolve many ambiguities and difficulties that arise during reconstruction, e.g., distinguishing between signal and noise or between gaps in the data and boundaries of the model. We demonstrate how our algorithm, given suitable prior models, successfully handles noisy and under-sampled point sets, faithfully reconstructing smooth regions as well as sharp features.
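A minimal sketch of the matching-and-transfer step, assuming local neighborhoods are already encoded as fixed-length descriptor vectors: nearest-neighbor lookup in the prior database, a crude distance threshold as the confidence test, and transfer of the matched patch's normal. The multi-scale search and feature classification from the paper are omitted, and all names and thresholds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_patches(scan_descriptors, prior_descriptors, prior_normals, max_dist=0.5):
    """
    For each local neighborhood of the scan (encoded as a fixed-length
    descriptor), find the nearest patch in the prior database and, if the
    match is confident enough, transfer that patch's normal to the scan.
    """
    tree = cKDTree(prior_descriptors)
    dist, idx = tree.query(scan_descriptors)
    confident = dist < max_dist                    # crude confidence test
    transferred = np.full((len(scan_descriptors), 3), np.nan)
    transferred[confident] = prior_normals[idx[confident]]
    return transferred, confident

# Hypothetical usage with random 16-dimensional patch descriptors.
rng = np.random.default_rng(1)
priors = rng.random((500, 16))
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
scan = priors[:20] + rng.normal(scale=0.01, size=(20, 16))
out, ok = match_patches(scan, priors, normals)
print(ok.sum(), "of", len(scan), "scan patches received a prior normal")
```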


Non-Photorealistic Animation and Rendering | 2007

3D collage: expressive non-realistic modeling

Ran Gal; Olga Sorkine; Alla Sheffer; Daniel Cohen-Or

The ability of computer graphics to represent images symbolically has so far been used mostly to render existing models with greater clarity or with greater visual appeal. In this work, we present a method aimed at harnessing this symbolic representation power to increase the expressiveness of the 3D models themselves. We achieve this through modification of the actual representation of 3D shapes rather than their images. In particular, we focus on 3D collage creation, namely, the generation of compound representations of objects. The ability of such representations to convey multiple meanings has been recognized for centuries. At the same time, it has also been acknowledged that for humans, the creation of compound 3D shapes is extremely taxing. Thus, this expressive but technically challenging artistic medium is a particularly good candidate to address using computer graphics methods. We present an algorithm for 3D collage generation that serves as an artistic tool performing the challenging 3D processing tasks, thus enabling the artist to focus on the creative side of the process.


European Conference on Computer Vision | 2014

A Contour Completion Model for Augmenting Surface Reconstructions

Nathan Silberman; Lior Shapira; Ran Gal; Pushmeet Kohli

The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.


International Symposium on Mixed and Augmented Reality | 2014

FLARE: Fast layout for augmented reality applications

Ran Gal; Lior Shapira; Eyal Ofek; Pushmeet Kohli

Creating a layout for an augmented reality (AR) application that embeds virtual objects in a physical environment is difficult, as it must adapt to any physical space. We propose a rule-based framework for generating object layouts for AR applications. Under our framework, the developer of an AR application specifies a set of rules (constraints) which enforce self-consistency (rules regarding the inter-relationships of application components) and scene-consistency (application components are consistent with the physical environment they are placed in). When a user enters a new environment, we create, in real time, a layout for the application that is as consistent as possible with the defined constraints. We find the optimal configuration for each object by solving a constraint-satisfaction problem. Our stochastic move-making algorithm is domain-aware and allows us to efficiently converge to a solution for most rule sets. In the paper, we demonstrate several augmented reality applications that automatically adapt to different rooms and to changing circumstances in each room.
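A toy sketch of the idea of constraint-driven layout by stochastic search: one object is moved at a time, and a move is kept when it does not increase the summed rule-violation cost. The rule language, domain-aware moves, and real-time machinery of FLARE are not reproduced; the objects, positions, and rules here are purely illustrative.

```python
import random

def layout(objects, candidate_positions, rules, n_moves=5000, seed=0):
    """
    objects             : list of object names
    candidate_positions : dict name -> list of admissible positions
    rules               : list of functions assignment -> non-negative violation cost
    Returns an assignment name -> position that locally minimizes the summed cost.
    """
    rng = random.Random(seed)
    assignment = {o: rng.choice(candidate_positions[o]) for o in objects}
    cost = sum(r(assignment) for r in rules)
    for _ in range(n_moves):
        o = rng.choice(objects)                       # pick one object to move
        proposal = dict(assignment)
        proposal[o] = rng.choice(candidate_positions[o])
        new_cost = sum(r(proposal) for r in rules)
        if new_cost <= cost:                          # greedy accept; annealing also works
            assignment, cost = proposal, new_cost
    return assignment, cost

# Hypothetical usage: choose desk positions for a screen and a keyboard so that
# the keyboard ends up directly in front of (x-aligned with) the screen.
positions = {"screen": [(0, 2), (1, 2), (2, 2)], "keyboard": [(0, 0), (1, 0), (2, 0)]}
rules = [lambda a: abs(a["screen"][0] - a["keyboard"][0])]   # penalize x misalignment
print(layout(["screen", "keyboard"], positions, rules))
```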
