Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tom Haber is active.

Publication


Featured research published by Tom Haber.


Computer Vision and Pattern Recognition | 2012

Enhancing underwater images and videos by fusion

Cosmin Ancuti; Codruta Orniana Ancuti; Tom Haber; Philippe Bekaert

This paper describes a novel strategy to enhance underwater videos and images. Built on fusion principles, our strategy derives the inputs and the weight measures only from the degraded version of the image. To overcome the limitations of the underwater medium, we define two inputs that represent color-corrected and contrast-enhanced versions of the original underwater image/frame, as well as four weight maps that aim to increase the visibility of distant objects degraded by scattering and absorption in the medium. Our strategy is a single-image approach that requires neither specialized hardware nor knowledge about the underwater conditions or scene structure. Our fusion framework also supports temporal coherence between adjacent frames by applying an effective edge-preserving noise reduction strategy. The enhanced images and videos exhibit reduced noise, better-exposed dark regions, and improved global contrast, while the finest details and edges are significantly enhanced. In addition, the utility of our enhancement technique is demonstrated for several challenging applications.
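The core fusion idea can be sketched as a per-pixel weighted blend of the derived inputs. This is a minimal single-scale illustration only; the paper's actual method uses multiscale fusion and more elaborate weight maps, and the function name and two-input example here are hypothetical:

```python
import numpy as np

def fuse(inputs, weights, eps=1e-6):
    """Blend derived inputs with per-pixel weight maps (naive, single-scale).

    inputs:  list of HxW arrays (e.g. a color-corrected and a
             contrast-enhanced version of the degraded image)
    weights: list of HxW non-negative weight maps, one per input
    """
    inputs = [np.asarray(i, dtype=float) for i in inputs]
    weights = [np.asarray(w, dtype=float) for w in weights]
    total = sum(weights) + eps                       # normalize weights to sum to 1
    return sum(i * w / total for i, w in zip(inputs, weights))

# Where the weight map favors input `a`, the fused result follows `a`.
a = np.full((2, 2), 1.0)
b = np.full((2, 2), 0.0)
wa = np.array([[1.0, 0.0], [0.5, 0.5]])
wb = np.array([[0.0, 1.0], [0.5, 0.5]])
out = fuse([a, b], [wa, wb])
```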


Computer Vision and Pattern Recognition | 2009

Relighting objects from image collections

Tom Haber; Christian Fuchs; Philippe Bekaert; Hans-Peter Seidel; Michael Goesele; Hendrik P. A. Lensch

We present an approach for recovering the reflectance of a static scene with known geometry from a collection of images taken under distant, unknown illumination. In contrast to previous work, we allow the illumination to vary between the images, which greatly increases the applicability of the approach. Using an all-frequency relighting framework based on wavelets, we are able to simultaneously estimate the per-image incident illumination and the per-surface point reflectance. The wavelet framework allows for incorporating various reflection models. We demonstrate the quality of our results for synthetic test cases as well as for several datasets captured under laboratory conditions. Combined with multi-view stereo reconstruction, we are even able to recover the geometry and reflectance of a scene solely using images collected from the Internet.
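The "simultaneously estimate per-image illumination and per-point reflectance" step can be loosely illustrated with alternating least squares on a toy bilinear model, where each observation is an illumination factor times a reflectance value. The paper's actual framework is wavelet-based and far richer; this sketch, with hypothetical names, only conveys the alternating-estimation idea:

```python
import numpy as np

def alternate(obs, iters=50):
    """Toy alternating estimation for a bilinear model
    obs[i, j] = light[i] * refl[j]:
    fix reflectance to solve for illumination, then vice versa."""
    obs = np.asarray(obs, dtype=float)
    light = np.ones(obs.shape[0])
    refl = np.ones(obs.shape[1])
    for _ in range(iters):
        refl = obs.T @ light / (light @ light)   # least squares for reflectance
        light = obs @ refl / (refl @ refl)       # least squares for illumination
    return light, refl

# Rank-1 "observations": two images of the same two surface points
# under illumination scaled by 1 and 2.
light, refl = alternate([[3.0, 4.0], [6.0, 8.0]])
recon = np.outer(light, refl)   # product is recovered up to a global scale
```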


ACM Transactions on Graphics | 2012

Reflectance model for diffraction

Tom Cuypers; Tom Haber; Philippe Bekaert; Se Baek Oh; Ramesh Raskar

We present a novel method of simulating wave effects in graphics using ray-based renderers with a new function: the Wave BSDF (Bidirectional Scattering Distribution Function). Reflections from neighboring surface patches represented by local BSDFs are mutually independent. However, in many surfaces with wavelength-scale microstructures, interference and diffraction require a joint analysis of reflected wavefronts from neighboring patches. We demonstrate a simple method to compute the BSDF for the entire microstructure, which can be used independently for each patch. This allows us to use traditional ray-based rendering pipelines to synthesize wave effects. We exploit the Wigner Distribution Function (WDF) to create transmissive, reflective, and emissive BSDFs for various diffraction phenomena in a physically accurate way. In contrast to previous methods for computing interference, we circumvent the need to explicitly keep track of the phase of the wave by using BSDFs that include positive as well as negative coefficients. We describe and compare the theory in relation to well-understood concepts in rendering and demonstrate a straightforward implementation. In conjunction with standard ray tracers, such as PBRT, we demonstrate wave effects for a range of scenarios such as multibounce diffraction materials, holograms, and reflection from high-frequency surfaces.


2006 IEEE Symposium on Interactive Ray Tracing | 2006

The Quantized kd-Tree: Efficient Ray Tracing of Compressed Point Clouds

Erik Hubo; Tom Mertens; Tom Haber; Philippe Bekaert

Both ray tracing and point-based representations provide means to efficiently display very complex 3D models. Computational efficiency has been the main focus of previous work on ray tracing point-sampled surfaces. For very complex models, efficient storage in the form of compression becomes necessary in order to avoid costly disk access. However, as ray tracing requires neighborhood queries, existing compression schemes cannot be applied because of their sequential nature. This paper introduces a novel acceleration structure called the quantized kd-tree, which offers both efficient traversal and storage. The gist of our new representation lies in quantizing the kd-tree splitting-plane coordinates. We show that the quantized kd-tree reduces the memory footprint by up to a factor of 18 without compromising performance. Moreover, the technique can also be employed to provide level-of-detail (LOD) to reduce aliasing problems, with little additional storage cost.
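The gist of the quantization step can be sketched by encoding each split-plane coordinate as a small integer offset within the node's bounding interval. The bit width and helper names below are illustrative assumptions, not the paper's actual encoding:

```python
def quantize_split(split, lo, hi, bits=8):
    """Encode a kd-tree split-plane coordinate as an integer offset
    within the node's bounding interval [lo, hi]."""
    levels = (1 << bits) - 1
    q = round((split - lo) / (hi - lo) * levels)
    return max(0, min(levels, q))

def dequantize_split(q, lo, hi, bits=8):
    """Recover an approximate split coordinate from its quantized offset."""
    levels = (1 << bits) - 1
    return lo + q / levels * (hi - lo)

# An 8-bit code replaces a full float; the decoded plane is off by
# at most half a quantization step within the node's bounds.
q = quantize_split(0.3, 0.0, 1.0)
approx = dequantize_split(q, 0.0, 1.0)
```

During traversal, conservative ray/plane tests must account for this bounded quantization error so no intersections are missed.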


Computers & Graphics | 2008

Special Section: Point-Based Graphics: Self-similarity based compression of point set surfaces with application to ray tracing

Erik Hubo; Tom Mertens; Tom Haber; Philippe Bekaert

Many real-world, scanned surfaces contain repetitive structures, like bumps, ridges, creases, and so on. We present a compression technique that exploits self-similarity within a point-sampled surface. Our method replaces similar surface patches with an instance of a representative patch. We use a concise shape descriptor to identify and cluster similar patches. Decoding is achieved through simple instancing of the representative patches. Encoding is efficient, and can be applied to large data sets consisting of millions of points. Moreover, our technique offers random access to the compressed data, making it applicable to ray tracing, and easily allows for storing additional point attributes, like normals.
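The patch-clustering idea can be sketched as greedy clustering on a shape descriptor: each patch either joins an existing representative within a distance threshold or becomes a new one, and decoding is simple instancing by index. The toy descriptor and function names below are illustrative assumptions, not the paper's actual descriptor:

```python
import numpy as np

def compress_patches(patches, threshold):
    """Greedy clustering sketch: each patch is replaced by the index of the
    first representative whose descriptor lies within `threshold`."""
    def descriptor(p):
        p = np.asarray(p, dtype=float)
        return np.array([p.mean(), p.std()])   # toy stand-in for a shape descriptor

    reps, indices = [], []
    for p in patches:
        d = descriptor(p)
        for i, (rep_d, _) in enumerate(reps):
            if np.linalg.norm(d - rep_d) <= threshold:
                indices.append(i)              # reuse an existing representative
                break
        else:
            indices.append(len(reps))          # patch becomes a new representative
            reps.append((d, np.asarray(p, dtype=float)))
    return [rp for _, rp in reps], indices

# Two identical patches collapse onto one representative; the distinct
# patch gets its own. Decoding patch k is just reps[indices[k]].
patches = [[0.0, 0.0], [0.0, 0.0], [5.0, 9.0]]
reps, idx = compress_patches(patches, threshold=0.1)
```

Because each patch is addressed by an index, any patch can be decoded independently, which is what makes the scheme compatible with the random access that ray tracing requires.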


SPBG | 2007

Self-Similarity-Based Compression of Point Clouds, with Application to Ray Tracing

Erik Hubo; Tom Mertens; Tom Haber; Philippe Bekaert

Many real-world, scanned surfaces contain repetitive structures, like bumps, ridges, creases, and so on. We present a compression technique that exploits self-similarity within a point-sampled surface. Our method replaces similar surface patches with an instance of a representative patch. We use a concise shape descriptor to identify and cluster similar patches. Decoding is achieved through simple instancing of the representative patches. Encoding is efficient, and can be applied to large datasets consisting of millions of points. Moreover, our technique offers random access to the compressed data, making it applicable to ray tracing, and easily allows for storing additional point attributes, like normals.


International Conference on Image Processing | 2011

Fusion-based restoration of the underwater images

Codruta Orniana Ancuti; Cosmin Ancuti; Tom Haber; Philippe Bekaert

In this paper we introduce a novel strategy that effectively enhances the visibility of underwater images. Our method builds on a fusion strategy that takes a sequence of inputs derived from the initial image. In practice, our fusion-based method aims to yield a final image that overcomes the deficiencies of the degraded input images by employing several weight maps that discriminate regions characterized by poor visibility. Extensive experiments demonstrate the utility of our solution: the visibility range of the underwater images is significantly increased by improving both the scene contrast and the color appearance.


The Visual Computer | 2008

Video enhancement using reference photographs

Cosmin Ancuti; Tom Haber; Tom Mertens; Philippe Bekaert

Digital video cameras are becoming commonplace in many households, but they still leave something to be desired in terms of image quality. Their poor light sensitivity makes images noisy and blurry, and internal storage bandwidth limits the frame resolution. We present a technique for enhancing a low-quality video sequence using a set of high-quality reference photographs taken of the same scene. Our technique generates a high-quality frame by copying information from the photographs in a patch-wise fashion. The copying is guided by a sparse set of reliable correspondences between the video frames and photographs. Our technique is purely image-based and does not require depth estimation. A robust descriptor is employed for establishing valid matches between the video frames and the photographs. Then, the geometric transformation is estimated between every pair of corresponding patches. With only a few reference photographs, we are able to reduce noise and motion blur and, more importantly, increase resolution by a factor of 6 (see Fig. 1).


International Conference on Multimedia and Expo | 2011

Stroke-based creation of depth maps

Mark Gerrits; Bert de Decker; Cosmin Ancuti; Tom Haber; Codruta Orniana Ancuti; Tom Mertens; Philippe Bekaert

Depth information opens up a lot of possibilities for meaningful editing of photographs. So far, it has only been possible to acquire depth information by either using additional hardware, restrictive scene assumptions or extensive manual input. We developed a novel user-assisted technique for creating adequate depth maps with an intuitive stroke-based user interface. Starting from absolute depth constraints as well as surface normal constraints, we optimize for a feasible depth map over the image. We introduce a suitable smoothness constraint that respects image edges and accounts for slanted surfaces. We illustrate the usefulness of our technique by several applications such as depth of field reduction and advanced compositing.
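The "optimize for a feasible depth map" step can be illustrated in miniature: sparse absolute depth constraints are held fixed while the remaining values are smoothed by iterating the discrete Laplacian. This 1-D sketch with hypothetical names omits the paper's normal constraints and edge-aware smoothness term:

```python
import numpy as np

def solve_depth(n, constraints, iters=2000):
    """Tiny 1-D analogue: fill a depth row that satisfies sparse absolute
    depth constraints while staying smooth elsewhere (Jacobi iteration
    on the discrete Laplacian)."""
    depth = np.zeros(n)
    for i, v in constraints.items():
        depth[i] = v                       # user strokes pin these depths
    for _ in range(iters):
        new = depth.copy()
        for i in range(n):
            if i in constraints:
                continue                   # constrained pixels stay fixed
            neighbors = [depth[j] for j in (i - 1, i + 1) if 0 <= j < n]
            new[i] = sum(neighbors) / len(neighbors)
        depth = new
    return depth

# Pinning the endpoints at depths 0 and 1 yields a smooth ramp between them.
row = solve_depth(5, {0: 0.0, 4: 1.0})
```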


International Conference on Parallel Processing | 2012

An investigation into the performance of reduction algorithms under load imbalance

Petar Marendić; Jan Lemeire; Tom Haber; Dean Vučinić; Peter Schelkens

Today, most reduction algorithms are optimized for balanced workloads; they assume all processes will start the reduction at about the same time. However, in practice this is not always the case and significant load imbalances may occur and affect the performance of said algorithms. In this paper we investigate the impact of such imbalances on the most commonly employed reduction algorithms and propose a new algorithm specifically adapted to the presented context. Firstly, we analyze the optimistic case where we have a priori knowledge of all imbalances and propose a near-optimal solution. In the general case, where we do not have any foreknowledge of the imbalances, we propose a dynamically rebalanced tree reduction algorithm. We show experimentally that this algorithm performs better than the default OpenMPI and MVAPICH2 implementations.
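For context, a plain static tree reduction, the kind of baseline the paper's dynamically rebalanced variant improves on under load imbalance, can be sketched as pairwise combination level by level. The function name is illustrative, not an MPI API:

```python
def tree_reduce(values, op):
    """Pairwise (binary-tree) reduction: combine elements level by level,
    so with p contributors the critical path has ceil(log2 p) combine steps.
    A late contributor stalls every combine step on its path to the root,
    which is why static trees suffer under load imbalance."""
    vals = list(values)
    while len(vals) > 1:
        nxt = []
        for i in range(0, len(vals) - 1, 2):
            nxt.append(op(vals[i], vals[i + 1]))
        if len(vals) % 2:                 # odd leftover passes through unchanged
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

total = tree_reduce([1, 2, 3, 4, 5], lambda a, b: a + b)
```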

Collaboration


Dive into Tom Haber's collaborations.

Top Co-Authors

Tom Vander Aa (Katholieke Universiteit Leuven)
Erik Hubo (University of Hasselt)
Ramesh Raskar (Massachusetts Institute of Technology)
Bruno De Fraine (Vrije Universiteit Brussel)