Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Guy Rosman is active.

Publication


Featured research published by Guy Rosman.


Computer Vision and Pattern Recognition | 2015

RGBD-fusion: Real-time high precision depth recovery

Roy Or El; Guy Rosman; Aaron Wetzler; Ron Kimmel; Alfred M. Bruckstein

The popularity of low-cost RGB-D scanners is increasing on a daily basis. Nevertheless, existing scanners often cannot capture subtle details in the environment. We present a novel method to enhance the depth map by fusing the intensity and depth information to create more detailed range profiles. The lighting model we use can handle natural scene illumination. It is integrated in a shape from shading like technique to improve the visual fidelity of the reconstructed object. Unlike previous efforts in this domain, the detailed geometry is calculated directly, without the need to explicitly find and integrate surface normals. In addition, the proposed method operates four orders of magnitude faster than the state of the art. Qualitative and quantitative visual and statistical evidence support the improvement in the depth obtained by the suggested method.
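
A minimal illustration of intensity-guided depth refinement, using a joint bilateral filter rather than the shading-based fusion described above; the function name and parameters are hypothetical, and this is a generic baseline, not the paper's method:

```python
import numpy as np

def joint_bilateral_depth(depth, intensity, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth a depth map while preserving edges indicated by the intensity image."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dy, dx = np.mgrid[y0:y1, x0:x1]
            # spatial closeness weight
            spatial = np.exp(-((dy - y) ** 2 + (dx - x) ** 2) / (2 * sigma_s ** 2))
            # intensity similarity weight: suppresses averaging across edges
            range_w = np.exp(-((intensity[y0:y1, x0:x1] - intensity[y, x]) ** 2)
                             / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[y, x] = np.sum(wgt * depth[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```

On a noisy depth map whose discontinuities coincide with intensity edges, this reduces noise on flat regions while keeping the depth step intact.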


International Journal of Computer Vision | 2010

Nonlinear Dimensionality Reduction by Topologically Constrained Isometric Embedding

Guy Rosman; Michael M. Bronstein; Alexander M. Bronstein; Ron Kimmel

Many manifold learning procedures try to embed a given feature data into a flat space of low dimensionality while preserving as much as possible the metric in the natural feature space. The embedding process usually relies on distances between neighboring features, mainly since distances between features that are far apart from each other often provide an unreliable estimation of the true distance on the feature manifold due to its non-convexity. Distortions resulting from using long geodesics indiscriminately lead to a known limitation of the Isomap algorithm when used to map non-convex manifolds. Presented is a framework for nonlinear dimensionality reduction that uses both local and global distances in order to learn the intrinsic geometry of flat manifolds with boundaries. The resulting algorithm filters out potentially problematic distances between distant feature points based on the properties of the geodesics connecting those points and their relative distance to the boundary of the feature manifold, thus avoiding an inherent limitation of the Isomap algorithm. Since the proposed algorithm matches non-local structures, it is robust to strong noise. We show experimental results demonstrating the advantages of the proposed approach over conventional dimensionality reduction techniques, both global and local in nature.
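
The flat-embedding step shared by Isomap-style methods can be sketched with classical multidimensional scaling (MDS) applied to a pairwise distance matrix; the paper's contribution, filtering out unreliable geodesics before this step, is not shown here:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points in R^dim from a matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

For a genuinely flat manifold, the embedding reproduces the input distances up to a rigid motion, which is why distortions in the distance estimates translate directly into embedding errors.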


Computer Vision and Pattern Recognition | 2014

A Mixture of Manhattan Frames: Beyond the Manhattan World

Julian Straub; Guy Rosman; Oren Freifeld; John J. Leonard; John W. Fisher

Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches to scene representation exploit this phenomenon via the somewhat restrictive assumption that every plane is perpendicular to one of the axes of a single coordinate system. Known as the Manhattan-World model, this assumption is widely used in computer vision and robotics. The complexity of many real-world scenes, however, necessitates a more flexible model. We propose a novel probabilistic model that describes the world as a mixture of Manhattan frames: each frame defines a different orthogonal coordinate system. This results in a more expressive model that still exploits the orthogonality constraints. We propose an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that utilizes the geometry of the unit sphere. We demonstrate the versatility of our Mixture-of-Manhattan-Frames model by describing complex scenes using depth images of indoor scenes as well as aerial-LiDAR measurements of an urban center. Additionally, we show that the model lends itself to focal-length calibration of depth cameras and to plane segmentation.
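
A toy version of the hard-assignment idea behind a mixture of Manhattan frames: each surface normal is attributed to the candidate frame whose closest signed axis best aligns with it. The actual paper performs adaptive MCMC inference over the frames; this helper is hypothetical:

```python
import numpy as np

def assign_to_frames(normals, frames):
    """normals: (N, 3) unit vectors; frames: list of (3, 3) rotation matrices.
    Returns, for each normal, the index of the best-matching frame and the
    alignment score (|cosine| with that frame's closest signed axis)."""
    scores = []
    for R in frames:
        # |n . axis| for each of the three axes (columns) of the frame
        scores.append(np.max(np.abs(normals @ R), axis=1))
    scores = np.stack(scores, axis=1)        # (N, num_frames)
    return np.argmax(scores, axis=1), np.max(scores, axis=1)
```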


International Conference on Scale Space and Variational Methods in Computer Vision | 2011

Group-Valued regularization framework for motion segmentation of dynamic non-rigid shapes

Guy Rosman; Michael M. Bronstein; Alexander M. Bronstein; Alon Wolf; Ron Kimmel

Understanding of articulated shape motion plays an important role in many applications in the mechanical engineering, movie industry, graphics, and vision communities. In this paper, we study motion-based segmentation of articulated 3D shapes into rigid parts. We pose the problem as finding a group-valued map between the shapes describing the motion, forcing it to favor piecewise rigid motions. Our computation follows the spirit of the Ambrosio-Tortorelli scheme for Mumford-Shah segmentation, with a diffusion component suited for the group nature of the motion model. Experimental results demonstrate the effectiveness of the proposed method in non-rigid motion segmentation.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Multi-Region Active Contours with a Single Level Set Function

Anastasia Dubrovina-Karni; Guy Rosman; Ron Kimmel

Segmenting an image into an arbitrary number of coherent regions is at the core of image understanding. Many formulations of the segmentation problem have been suggested over the years. These formulations include, among others, axiomatic functionals, which are hard to implement and analyze, and graph-based alternatives, which impose a non-geometric metric on the problem. We propose a novel method for segmenting an image into an arbitrary number of regions using an axiomatic variational approach. The proposed method can incorporate various generic region appearance models while avoiding metrication errors. In the suggested framework, the segmentation is performed by level set evolution. Yet, contrary to most existing methods, multiple regions are here represented by a single non-negative level set function. The level set function evolution is efficiently executed through the Voronoi Implicit Interface Method for multi-phase interface evolution. The proposed approach is shown to obtain accurate segmentation results for various natural 2D and 3D images, comparable to state-of-the-art image segmentation algorithms.


International Conference on Scale Space and Variational Methods in Computer Vision | 2007

Efficient Beltrami filtering of color images via vector extrapolation

Lorina Dascal; Guy Rosman; Ron Kimmel

The Beltrami image flow is an effective non-linear filter, often used in color image processing. It was shown to be closely related to the median, total variation, and bilateral filters. It treats the image as a 2D manifold embedded in a hybrid spatial-feature space. Minimization of the image surface area yields the Beltrami flow. The corresponding diffusion operator is anisotropic and strongly couples the spectral components. Thus, no implicit or operator-splitting-based numerical scheme has so far been proposed for the PDE that describes the Beltrami flow in color. This flow is usually implemented by explicit schemes, which are stable only for very small time steps and therefore require many iterations. Vector extrapolation techniques, on the other hand, accelerate the convergence of vector sequences without explicit knowledge of the sequence generator. In this paper, we propose to use the minimum polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) vector extrapolation methods to accelerate the convergence of the explicit schemes for the Beltrami flow. Experiments demonstrate their stability and efficiency compared to explicit schemes.
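
A sketch of minimum polynomial extrapolation (MPE) applied to a generic fixed-point iteration; a small linear iteration stands in for the explicit Beltrami scheme, and the function name is hypothetical:

```python
import numpy as np

def mpe(iterates):
    """Minimal polynomial extrapolation from a list of fixed-point iterates
    x_0, ..., x_k: returns a weighted combination that approximates the limit."""
    X = np.stack(iterates, axis=1)            # columns x_0 ... x_k
    U = np.diff(X, axis=1)                    # differences u_j = x_{j+1} - x_j
    # Solve U[:, :-1] @ gamma ~= -U[:, -1] in the least-squares sense
    gamma, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(gamma, 1.0)
    xi = c / c.sum()                          # normalize so the weights sum to 1
    return X[:, : len(xi)] @ xi
```

For an n-dimensional linear iteration, MPE with enough iterates recovers the fixed point exactly (in exact arithmetic), which is the effect exploited to cut the iteration count of explicit schemes.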


Computer Graphics Forum | 2013

Patch‐Collaborative Spectral Point‐Cloud Denoising

Guy Rosman; Anastasia Dubrovina; Ron Kimmel

We present a new framework for point cloud denoising by patch‐collaborative spectral analysis. A collaborative generalization of each surface patch is defined, combining similar patches from the denoised surface. The Laplace–Beltrami operator of the collaborative patch is then used to selectively smooth the surface in a robust manner that can gracefully handle high levels of noise, yet preserves sharp surface features. The resulting denoising algorithm competes favourably with state‐of‐the‐art approaches, and extends patch‐based algorithms from the image processing domain to point clouds of arbitrary sampling. We demonstrate the accuracy and noise‐robustness of the proposed algorithm on standard benchmark models as well as range scans, and compare it to existing methods for point cloud denoising.
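
For contrast, a baseline (non-spectral) point cloud smoother of the kind the patch-collaborative method improves upon: each point moves toward the centroid of its k nearest neighbors. Names and parameters are hypothetical:

```python
import numpy as np

def knn_laplacian_smooth(points, k=8, iters=5, step=0.5):
    """Denoise an (N, 3) point cloud by moving each point toward its
    k-nearest-neighbor centroid. Simple, but blurs sharp features."""
    pts = points.copy()
    for _ in range(iters):
        # pairwise squared distances (fine for small clouds)
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        nn = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at index 0
        pts = (1 - step) * pts + step * pts[nn].mean(axis=1)
    return pts
```

Unlike this sketch, the patch-collaborative approach smooths selectively, so sharp features survive even at high noise levels.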


European Conference on Computer Vision | 2012

Fast regularization of matrix-valued images

Guy Rosman; Yu Wang; Xue-Cheng Tai; Ron Kimmel; Alfred M. Bruckstein

Regularization of images with matrix-valued data is important in medical imaging, motion analysis, and scene understanding. We propose a novel method for fast regularization of matrix group-valued images. Using the augmented Lagrangian framework, we separate total-variation regularization of matrix-valued images into a regularization step and a projection step. Both steps are computationally efficient and easily parallelizable, allowing real-time regularization of matrix-valued images on a graphics processing unit. We demonstrate the effectiveness of our method for smoothing several group-valued image types, with applications in direction diffusion, motion analysis from depth sensors, and DT-MRI denoising.
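
A toy illustration of the regularization/projection split for group-valued data, using plain diffusion in place of total variation and the unit circle in place of a matrix group; all names here are hypothetical:

```python
import numpy as np

def smooth_direction_field(theta_field, iters=20, lam=0.25):
    """Denoise a field of angles by alternating (i) linear smoothing of the
    embedding coordinates and (ii) projection back to the unit circle."""
    v = np.stack([np.cos(theta_field), np.sin(theta_field)], axis=-1)
    for _ in range(iters):
        # regularization step: diffuse each coordinate (4-neighbor average)
        avg = np.zeros_like(v)
        avg[1:, :] += v[:-1, :]; avg[:-1, :] += v[1:, :]
        avg[:, 1:] += v[:, :-1]; avg[:, :-1] += v[:, 1:]
        cnt = np.zeros(v.shape[:2] + (1,))
        cnt[1:, :] += 1; cnt[:-1, :] += 1; cnt[:, 1:] += 1; cnt[:, :-1] += 1
        v = (1 - lam) * v + lam * avg / cnt
        # projection step: enforce the group constraint (unit norm)
        v /= np.linalg.norm(v, axis=-1, keepdims=True)
    return np.arctan2(v[..., 1], v[..., 0])
```

The key point is that each half-step is cheap and embarrassingly parallel, which is what makes a GPU implementation of the full scheme practical.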


International Conference on Computer Vision | 2012

Group-Valued regularization for analysis of articulated motion

Guy Rosman; Alexander M. Bronstein; Michael M. Bronstein; Xue-Cheng Tai; Ron Kimmel

We present a novel method for estimating articulated motion in depth scans. The method is based on a framework for regularization of vector- and matrix-valued functions on parametric surfaces. We extend augmented-Lagrangian total variation regularization to smooth rigid motion cues on the scanned 3D surface obtained from a range scanner. We demonstrate that the resulting smoothed motion maps are a powerful tool for articulated scene understanding, providing a basis for rigid part segmentation with few prior assumptions on the scene, despite the noisy depth measurements that often appear in commodity depth scanners.


International Conference on Scale Space and Variational Methods in Computer Vision | 2011

Over-Parameterized optical flow using a stereoscopic constraint

Guy Rosman; Shachar Shem-Tov; David Bitton; Tal Nir; Gilad Adiv; Ron Kimmel; Arie Feuer; Alfred M. Bruckstein

The success of variational methods for optical flow computation lies in their ability to regularize the problem at a differential (pixel) level and combine piecewise smoothness of the flow field with the brightness constancy assumptions. However, the piecewise smoothness assumption is often motivated by heuristic or algorithmic considerations. Lately, new priors were proposed to exploit the structural properties of the flow. Yet, most of them still utilize a generic regularization term. In this paper we consider optical flow estimation in static scenes. We show that introducing a suitable motion model for the optical flow allows us to pose the regularization term as a geometrically meaningful one. The proposed method assumes that the visible surface can be approximated by a piecewise smooth planar manifold. Accordingly, the optical flow between two consecutive frames can be locally regarded as a homography consistent with the epipolar geometry and defined by only three parameters at each pixel. These parameters are directly related to the equation of the scene local tangent plane, so that their spatial variations should be relatively small, except for creases and depth discontinuities. This leads to a regularization term that measures the total variation of the model parameters and can be extended to a Mumford-Shah segmentation of the visible surface. This new technique yields significant improvements over state of the art optical flow computation methods for static scenes.
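
How a homography induces an optical flow field can be sketched directly; this illustrates the motion model only, not the paper's per-pixel estimation procedure, and the function name is hypothetical:

```python
import numpy as np

def homography_flow(H, w, h):
    """Flow field induced on a w-by-h pixel grid by a 3x3 homography H:
    each pixel maps through H in homogeneous coordinates."""
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    mapped = pts @ H.T                          # apply the homography
    mapped = mapped[..., :2] / mapped[..., 2:3]  # dehomogenize
    return mapped - pts[..., :2]                 # displacement per pixel
```

In the paper's setting the homography varies smoothly over the image, parameterized by the local tangent plane, so regularizing those plane parameters regularizes the flow geometrically.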

Collaboration


Dive into Guy Rosman's collaborations.

Top Co-Authors

Ron Kimmel (Technion – Israel Institute of Technology)
Daniela Rus (Massachusetts Institute of Technology)
Alexander M. Bronstein (Technion – Israel Institute of Technology)
John W. Fisher (Massachusetts Institute of Technology)
Alfred M. Bruckstein (Technion – Israel Institute of Technology)
Lorina Dascal (Technion – Israel Institute of Technology)
Mikhail Volkov (Massachusetts Institute of Technology)