Publications


Featured research published by Tamy Boubekeur.


IEEE Transactions on Visualization and Computer Graphics | 2011

Sketch-Based Image Retrieval: Benchmark and Bag-of-Features Descriptors

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We introduce a benchmark for evaluating the performance of large-scale sketch-based image retrieval systems. The necessary data are acquired in a controlled user study where subjects rate how well given sketch/image pairs match. We suggest how to use the data for evaluating the performance of sketch-based image retrieval systems. The benchmark data as well as the large image database are made publicly available for further studies of this type. Furthermore, we develop new descriptors based on the bag-of-features approach and use the benchmark to demonstrate that they significantly outperform other descriptors in the literature.
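
The bag-of-features approach mentioned above can be pictured as quantizing local descriptors against a learned visual vocabulary and accumulating them into a histogram that is computed identically for sketches and images. The sketch below is only an illustration of that general pipeline, not the paper's exact descriptors; the vocabulary size, the toy k-means, and the histogram distance are assumptions.

```python
import numpy as np

def build_vocabulary(descriptors, k=64, iters=20, seed=0):
    """Toy k-means over stacked local descriptors to learn a visual vocabulary."""
    descriptors = np.asarray(descriptors, dtype=float)
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each local descriptor to its nearest visual word.
        labels = np.linalg.norm(descriptors[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):                 # keep the old center if a cluster goes empty
                centers[j] = members.mean(axis=0)
    return centers

def bag_of_features(descriptors, vocabulary):
    """L1-normalized histogram of visual-word occurrences for one sketch or image."""
    descriptors = np.asarray(descriptors, dtype=float)
    words = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2).argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Sketch/image pairs are then ranked by a simple distance between their histograms.
def histogram_distance(h_a, h_b):
    return float(np.abs(h_a - h_b).sum())
```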


International Conference on Computer Graphics and Interactive Techniques | 2012

Sketch-based shape retrieval

Mathias Eitz; Ronald Richter; Tamy Boubekeur; Kristian Hildebrand; Marc Alexa

We develop a system for 3D object retrieval that takes sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing database of objects. The sketches turn out to be generally quite abstract, with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer-generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our approach, as well as those of other approaches, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.
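
As a rough illustration of a Gabor-based local feature (the filter parameters, bank size, and pooling below are assumptions, not the paper's exact transform), one can correlate a line-drawing patch with a small bank of oriented Gabor kernels and keep the response energy per orientation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor kernel oriented at angle theta (isotropic Gaussian envelope)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinate along the wave direction
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_orientation_feature(patch, n_orientations=4):
    """Response energy of a line-drawing patch to a small bank of oriented Gabor filters."""
    feats = []
    for i in range(n_orientations):
        kernel = gabor_kernel(theta=i * np.pi / n_orientations)
        # Valid-mode correlation: slide the kernel over the patch and accumulate responses.
        windows = sliding_window_view(patch.astype(float), kernel.shape)
        response = (windows * kernel).sum(axis=(-2, -1))
        feats.append(np.sqrt((response**2).mean()))
    feats = np.asarray(feats)
    return feats / (np.linalg.norm(feats) + 1e-8)
```

Pooling such local features into a bag-of-features histogram (as in the entry above) then makes sketches comparable to line renderings of the 3D models.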


ACM Multimedia | 2011

Evaluating a dancer's performance using kinect-based skeleton tracking

Dimitrios S. Alexiadis; Philip Kelly; Petros Daras; Noel E. O'Connor; Tamy Boubekeur; Maher Ben Moussa

In this work, we describe a novel system that automatically evaluates dance performances against a gold-standard performance and provides visual feedback to the performer in a 3D virtual environment. The system acquires the motion of a performer via Kinect-based human skeleton tracking, making the approach viable for a large range of users, including home enthusiasts. Unlike traditional gaming scenarios, where the motion of a user must be kept in sync with a pre-recorded avatar that is displayed on screen, the technique described in this paper targets online interactive scenarios in which dance choreographies can be set, altered, practiced and refined by users. In this work, we address some areas of this application scenario. In particular, a set of appropriate signal processing and soft computing methodologies is proposed for temporally aligning dance movements from two different users and quantitatively evaluating one performance against another.
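
The abstract does not spell out the alignment method; a common choice for temporally aligning two motion sequences of different lengths is dynamic time warping. The sketch below aligns two sequences of stacked joint positions and reports the mean aligned distance as a simple score; it is purely illustrative and not the authors' scoring function.

```python
import numpy as np

def dtw_align(seq_a, seq_b):
    """Dynamic time warping between two (frames x features) sequences.

    Returns the accumulated cost and the warping path as (i, j) frame index pairs.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Hypothetical usage: compare a student's skeleton stream to a gold-standard one.
# gold, student = np.load("gold.npy"), np.load("student.npy")   # (frames, joints*3)
# total_cost, path = dtw_align(gold, student)
# score = total_cost / len(path)   # lower means closer to the reference performance
```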


Computers & Graphics | 2010

An evaluation of descriptors for large-scale image retrieval from sketched feature lines

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We address the problem of fast, large-scale sketch-based image retrieval, searching in a database of over one million images. We show that current retrieval methods do not scale well to large databases in the context of interactively supervised search, and we propose two different approaches that we objectively show to significantly outperform existing approaches. The proposed descriptors are constructed such that both the full color image and the sketch undergo exactly the same preprocessing steps. We first search for an image with similar structure, analyzing gradient orientations. Then, the best matching images are clustered based on dominant color distributions, to offset the lack of color-based decisions during the initial search. Overall, the query results demonstrate that the system offers intuitive access to large image databases using a user-friendly sketch-and-browse interface.
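
The "similar structure via gradient orientations" step can be pictured as a grid of gradient-orientation histograms computed identically for the rasterized sketch and for the image's edge map. The cell layout and bin count below are illustrative assumptions, not the paper's descriptors:

```python
import numpy as np

def orientation_histogram_descriptor(gray, cells=8, bins=6):
    """Grid of gradient-orientation histograms, computed the same way for a
    binary sketch and for a full-color image converted to grayscale."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    # Fold orientations into [0, pi): a stroke has no preferred gradient sign.
    orientation = np.mod(np.arctan2(gy, gx), np.pi)
    h, w = gray.shape
    descriptor = []
    for cy in range(cells):
        for cx in range(cells):
            ys = slice(cy * h // cells, (cy + 1) * h // cells)
            xs = slice(cx * w // cells, (cx + 1) * w // cells)
            hist, _ = np.histogram(orientation[ys, xs], bins=bins,
                                   range=(0.0, np.pi),
                                   weights=magnitude[ys, xs])
            descriptor.append(hist)
    d = np.concatenate(descriptor).astype(float)
    return d / (np.linalg.norm(d) + 1e-8)
```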


Sketch-Based Interfaces and Modeling | 2009

A descriptor for large scale image retrieval based on sketched feature lines

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We address the problem of large-scale sketch-based image retrieval, searching in a database of over a million images. The search is based on a descriptor that elegantly addresses the asymmetry between the binary user sketch on the one hand and the full color image on the other hand. The proposed descriptor is constructed such that both the full color image and the sketch undergo exactly the same preprocessing steps. We also design an adapted version of the descriptor proposed for MPEG-7 and compare their performance on a database of 1.5 million images. Best matching images are clustered based on color histograms, to offset the lack of color in the query. Overall, the query results demonstrate that the system gives users intuitive access to large image databases.
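
The post-processing step of grouping the best matches by color can be illustrated with coarse RGB histograms; the bin count and the grouping rule below are assumptions for illustration, not the paper's clustering.

```python
import numpy as np

def coarse_color_histogram(rgb_image, bins_per_channel=4):
    """Joint RGB histogram with a few bins per channel, L1-normalized."""
    pixels = rgb_image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins_per_channel, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)

def group_results_by_color(result_images):
    """Group retrieved images by their dominant coarse color bin, so that
    similarly colored results are presented together to the user."""
    groups = {}
    for idx, image in enumerate(result_images):
        dominant_bin = int(coarse_color_histogram(image).argmax())
        groups.setdefault(dominant_bin, []).append(idx)
    return groups
```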


SIGGRAPH/Eurographics Conference on Graphics Hardware | 2005

Generic mesh refinement on GPU

Tamy Boubekeur; Christophe Schlick

Many recent publications have shown that a large variety of computations involved in computer graphics can be moved from the CPU to the GPU by a clever use of vertex or fragment shaders. Nonetheless, one kind of algorithm remains hard to translate from CPU to GPU: mesh refinement techniques. The main reason is that vertex shaders available on current graphics hardware do not allow the generation of additional vertices on a mesh stored in graphics hardware. In this paper, we propose a general solution for mesh refinement on the GPU. The main idea is to define a generic refinement pattern that is used to virtually create additional inner vertices for a given polygon. These vertices are then translated according to some procedural displacement map defining the underlying geometry (similarly, the normal vectors may be transformed according to some procedural normal map). For illustration purposes, we use a tessellated triangular pattern, but many other refinement patterns may be employed. To show its flexibility, the technique has been applied to a large variety of refinement techniques: procedural displacement mapping, as well as more complex techniques such as curved PN-triangles or ST-meshes.
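
The generic refinement pattern can be mimicked on the CPU as a sketch: precompute the barycentric coordinates of a tessellated triangle once, instantiate it for every coarse triangle, and displace the generated vertices procedurally. On the GPU this instantiation happens in the vertex shader; the displacement lambda below is a placeholder, not the paper's displacement map.

```python
import numpy as np

def triangle_refinement_pattern(depth):
    """Barycentric coordinates of a uniformly tessellated triangle.

    depth = number of subdivisions per edge; depth = 1 returns only the corners.
    """
    coords = []
    for i in range(depth + 1):
        for j in range(depth + 1 - i):
            k = depth - i - j
            coords.append((i / depth, j / depth, k / depth))
    return np.asarray(coords)                      # (num_pattern_vertices, 3)

def refine_triangle(v0, v1, v2, pattern, displace=None):
    """Instantiate the pattern on one coarse triangle and optionally displace
    the generated vertices along the flat face normal."""
    verts = pattern @ np.stack([v0, v1, v2])       # barycentric interpolation
    if displace is not None:
        normal = np.cross(v1 - v0, v2 - v0)
        normal /= np.linalg.norm(normal) + 1e-12
        offsets = np.array([displace(p) for p in verts])
        verts = verts + offsets[:, None] * normal
    return verts

# Usage with a sine-bump "procedural displacement map" (placeholder function).
pattern = triangle_refinement_pattern(depth=8)
fine_verts = refine_triangle(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                             np.array([0., 1., 0.]), pattern,
                             displace=lambda p: 0.05 * np.sin(10.0 * p[0]))
```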


Computer Graphics Forum | 2008

A Flexible Kernel for Adaptive Mesh Refinement on GPU

Tamy Boubekeur; Christophe Schlick

We present a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement is controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern, which maps a refined connectivity onto the input coarse polygon. Finally, the refined patch produced for each triangle can be displaced by the vertex shader using any kind of geometric refinement, such as Bézier patch smoothing, scalar-valued displacement, procedural geometry synthesis or subdivision surfaces. This refinement engine requires neither multipass rendering, nor fragment processing, nor any special preprocessing of the input mesh structure, and it can be implemented on any GPU with vertex shading capabilities.
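
A per-vertex depth tag of this kind might, for example, be derived from view distance and then used to pick a refinement pattern per triangle. The sketch below shows only that selection logic as a simplification (the actual kernel stores adaptive, crack-free patterns in GPU memory and remaps connectivity on the fly, which is not reproduced here); the distance-based tagging policy is an assumption.

```python
import numpy as np

def vertex_depth_tags(vertices, camera_pos, max_depth=5, near=1.0, far=50.0):
    """Assign each vertex a refinement depth tag: closer vertices get finer patterns."""
    dist = np.linalg.norm(vertices - camera_pos, axis=1)
    t = np.clip((dist - near) / (far - near), 0.0, 1.0)
    return np.round((1.0 - t) * max_depth).astype(int)

def pattern_for_triangle(tags, triangle, pattern_cache, build_pattern):
    """Select (and lazily build) the refinement pattern for one coarse triangle.

    Simplified policy: refine to the maximum depth tag of the three corners;
    build_pattern(depth) is any pattern generator, e.g. a tessellated triangle.
    """
    depth = max(1, int(tags[list(triangle)].max()))
    if depth not in pattern_cache:
        pattern_cache[depth] = build_pattern(depth)
    return pattern_cache[depth]
```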


International Conference on Computer Graphics and Interactive Techniques | 2009

PhotoSketch: a sketch based image query and compositing system

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We introduce a system for progressively creating images through a simple sketching and compositing interface. A large database of over 1.5 million images is searched for matches to a user's binary outline sketch; the results of this search can be combined interactively to synthesize the desired image. We introduce image descriptors for the task of estimating the difference between images and binary outline sketches. The compositing part is based on graph cuts and Poisson blending. We demonstrate that the resulting system allows complex images to be generated in an intuitive way.
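
As a rough illustration of the Poisson-blending half of the compositing step only (no graph cut, single channel, a simple iterative solver rather than a production one), the sketch below matches the gradients of the pasted region while keeping the background values around its boundary:

```python
import numpy as np

def poisson_blend(source, target, mask, iterations=2000):
    """Seamlessly paste `source` into `target` where `mask` is True.

    Iteratively solves the discrete Poisson equation: strictly inside the mask
    the result reproduces the Laplacian (gradients) of the source, while pixels
    on and outside the mask boundary keep the target values. All inputs are
    2D float arrays of equal shape; `mask` is boolean.
    """
    result = target.astype(float).copy()
    src = source.astype(float)
    # Pixels whose four neighbours are also inside the mask are the unknowns.
    inside = mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0) \
                  & np.roll(mask, 1, 1) & np.roll(mask, -1, 1)
    ys, xs = np.nonzero(inside)
    for _ in range(iterations):
        neighbours = (result[ys - 1, xs] + result[ys + 1, xs] +
                      result[ys, xs - 1] + result[ys, xs + 1])
        laplacian = (4 * src[ys, xs] - src[ys - 1, xs] - src[ys + 1, xs]
                     - src[ys, xs - 1] - src[ys, xs + 1])
        result[ys, xs] = (neighbours + laplacian) / 4.0
    return result
```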


Eurographics | 2012

SHREC'12 track: sketch-based 3D shape retrieval

Bo Li; Tobias Schreck; Afzal Godil; Marc Alexa; Tamy Boubekeur; Benjamin Bustos; Jipeng Chen; Mathias Eitz; Takahiko Furuya; Kristian Hildebrand; Songhua Huang; Henry Johan; Arjan Kuijper; Ryutarou Ohbuchi; Ronald Richter; Jose M. Saavedra; Maximilian Scherer; Tomohiro Yanagimachi; Gang Joon Yoon; Sang Min Yoon

Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. The aim of this track is to measure and compare the performance of sketch-based 3D shape retrieval methods implemented by different participants around the world. The track is based on a new sketch-based 3D shape benchmark, which contains two types of sketch queries and two versions of target 3D models. In this track, 7 runs were submitted by 5 groups and their retrieval accuracies were evaluated using 7 commonly used retrieval performance metrics. We hope that the benchmark, its corresponding evaluation code, and the comparative evaluation results of state-of-the-art sketch-based 3D model retrieval algorithms will contribute to the progress of this research direction for the 3D model retrieval community.
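
The abstract does not list the seven metrics, but Nearest Neighbor, First Tier, and Second Tier are among the metrics commonly used in shape-retrieval benchmarks; the sketch below computes just these three for a single query, purely as an illustration of how such evaluation code typically works.

```python
import numpy as np

def nn_ft_st(ranked_labels, query_label, class_size):
    """Nearest Neighbor, First Tier and Second Tier scores for one query.

    ranked_labels: class labels of the database models sorted by retrieval rank
                   (the query itself excluded);
    class_size:    number of other members of the query's class.
    """
    ranked_labels = np.asarray(ranked_labels)
    relevant = ranked_labels == query_label
    nn = float(relevant[0])                            # is the top result correct?
    ft = relevant[:class_size].sum() / class_size      # recall within the first |C|-1
    st = relevant[:2 * class_size].sum() / class_size  # recall within the first 2(|C|-1)
    return nn, ft, st
```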


Eurographics | 2011

ManyLoDs: parallel many-view level-of-detail selection for real-time global illumination

Matthias Holländer; Tobias Ritschel; Elmar Eisemann; Tamy Boubekeur

Level‐of‐Detail structures are a key component for scalable rendering. Built from raw 3D data, these structures are often defined as Bounding Volume Hierarchies, providing coarse‐to‐fine adaptive approximations that are well‐adapted for many‐view rasterization. Here, the total number of pixels in each view is usually low, while the cost of choosing the appropriate LoD for each view is high. This task represents a challenge for existing GPU algorithms. We propose ManyLoDs, a new GPU algorithm to efficiently compute many LoDs from a Bounding Volume Hierarchy in parallel by balancing the workload within and among LoDs. Our approach is not specific to a particular rendering technique, can be used on lazy representations such as polygon soups, and can handle dynamic scenes. We apply our method to various many‐view rasterization applications, including Instant Radiosity, Point‐Based Global Illumination, and reflection/refraction mapping. For each of these, we achieve real‐time performance in complex scenes at high resolutions.
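
Selecting an LoD from a Bounding Volume Hierarchy for one view can be pictured as finding a cut of the tree where each node's projected size falls below a pixel threshold. The sequential sketch below shows only that cut selection under an assumed node layout; the paper's contribution, computing many such cuts in parallel on the GPU with balanced workloads, is not captured here.

```python
import numpy as np

def select_lod_cut(bvh, root, view_pos, pixel_threshold, focal_px=800.0):
    """Return the node ids forming the LoD cut of a BVH for a single view.

    bvh: dict node_id -> {"center": (3,), "radius": float, "children": [ids]}.
    A node joins the cut when its projected size is small enough or it is a leaf;
    otherwise the traversal descends into its children.
    """
    cut, stack = [], [root]
    while stack:
        node_id = stack.pop()
        node = bvh[node_id]
        dist = np.linalg.norm(np.asarray(node["center"]) - view_pos)
        projected_px = focal_px * node["radius"] / max(dist, 1e-6)
        if projected_px <= pixel_threshold or not node["children"]:
            cut.append(node_id)                 # coarse enough for this view
        else:
            stack.extend(node["children"])      # refine further
    return cut

# For many views (e.g. one per virtual point light), the same traversal is simply
# repeated per view; it is exactly this step that the paper parallelizes on the GPU.
# cuts = [select_lod_cut(bvh, 0, v, pixel_threshold=2.0) for v in view_positions]
```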

Collaboration


Dive into Tamy Boubekeur's collaborations.

Top Co-Authors

Marc Alexa
Technical University of Berlin

Mathias Eitz
Technical University of Berlin

Kristian Hildebrand
Technical University of Berlin

Isabelle Bloch
Université Paris-Saclay

Elmar Eisemann
Delft University of Technology