Publication


Featured research published by Kristian Hildebrand.


International Conference on Computer Graphics and Interactive Techniques | 2012

Sketch-based shape retrieval

Mathias Eitz; Ronald Richter; Tamy Boubekeur; Kristian Hildebrand; Marc Alexa

We develop a system for 3D object retrieval that takes sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing database of objects. The sketches turn out to be generally quite abstract, with large local and global deviations from the original shape. Based on this observation, we use a bag-of-features approach over computer-generated line drawings of the objects and develop a targeted feature transform based on Gabor filters for this system. We show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our approach, as well as of other approaches, based on the gathered sketches. In the resulting comparison, our approach performs significantly better than any other system described so far.
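
As a rough illustration of the kind of Gabor-based local feature described above, the following Python sketch convolves a line drawing with a small bank of oriented Gabor kernels and pools the responses over a local patch. All names, kernel sizes, and pooling parameters are illustrative choices, not the paper's; it assumes the drawing is a 2D NumPy array and that the patch center lies at least half a patch away from the image border.

```python
# A minimal sketch (not the authors' implementation) of a Gabor-based
# local feature over a binary line drawing stored as a 2D float array.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_patch_descriptor(image, center, patch=32, n_orient=4,
                           wavelength=8.0, sigma=4.0, bins=4):
    """Concatenate per-orientation energy maps pooled over a local patch.

    `center` is assumed to lie at least patch/2 pixels from the border."""
    responses = [np.abs(fftconvolve(image,
                                    gabor_kernel(31, wavelength, t, sigma),
                                    mode="same"))
                 for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    cy, cx = center
    half = patch // 2
    desc = []
    for r in responses:
        window = r[cy - half:cy + half, cx - half:cx + half]
        # pool responses in a coarse bins x bins grid (cell means)
        cells = window.reshape(bins, patch // bins,
                               bins, patch // bins).mean(axis=(1, 3))
        desc.append(cells.ravel())
    d = np.concatenate(desc)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d
```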


Computers & Graphics | 2010

Technical Section: An evaluation of descriptors for large-scale image retrieval from sketched feature lines

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We address the problem of fast, large-scale sketch-based image retrieval, searching a database of over one million images. We show that current retrieval methods do not scale well to large databases in the context of interactively supervised search, and we propose two different approaches that we objectively show to significantly outperform existing methods. The proposed descriptors are constructed such that both the full color image and the sketch undergo exactly the same preprocessing steps. We first search for images with similar structure by analyzing gradient orientations; the best-matching images are then clustered based on dominant color distributions to offset the lack of color information in the initial search. Overall, the query results demonstrate that the system offers intuitive access to large image databases through a user-friendly sketch-and-browse interface.
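
A minimal sketch of the first stage described above: a cell-wise histogram of gradient orientations that can be applied identically to a photograph's intensity image and to a rasterized sketch. The function name, cell grid, and thresholds are illustrative assumptions, not the paper's actual descriptor.

```python
# Illustrative orientation-histogram descriptor; identical preprocessing can
# be applied to a photo's grayscale image and to a rendered sketch.
import numpy as np

def orientation_histogram(gray, n_bins=8, cells=4, mag_threshold=0.1):
    """Cell-wise gradient-orientation histograms over an image in [0, 1]."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    # orientations are taken modulo pi: a stroke has no preferred polarity
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    h, w = gray.shape
    desc = np.zeros((cells, cells, n_bins))
    ys = np.linspace(0, h, cells + 1, dtype=int)
    xs = np.linspace(0, w, cells + 1, dtype=int)
    for i in range(cells):
        for j in range(cells):
            m = mag[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            a = ang[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            keep = m > mag_threshold
            hist, _ = np.histogram(a[keep], bins=n_bins, range=(0, np.pi),
                                   weights=m[keep])
            desc[i, j] = hist
    desc = desc.ravel()
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```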


Sketch-Based Interfaces and Modeling | 2009

A descriptor for large scale image retrieval based on sketched feature lines

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We address the problem of large-scale sketch-based image retrieval, searching a database of over a million images. The search is based on a descriptor that elegantly addresses the asymmetry between the binary user sketch on the one hand and the full color image on the other. The proposed descriptor is constructed such that both the full color image and the sketch undergo exactly the same preprocessing steps. We also design an adapted version of the descriptor proposed for MPEG-7 and compare the performance of both on a database of 1.5 million images. The best-matching images are clustered based on color histograms to offset the lack of color in the query. Overall, the query results demonstrate that the system gives users intuitive access to large image databases.
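
The color-based grouping of results can be illustrated with a short sketch: compute a joint RGB histogram per retrieved image and cluster the images with k-means. This is an assumed reimplementation of the general idea, not the paper's code; `color_histogram` and `cluster_results_by_color` are hypothetical helper names, and at least `n_clusters` result images are assumed.

```python
# Hedged sketch: group retrieved images by dominant colour so a user can
# browse colour variants of structurally similar results.
import numpy as np
from scipy.cluster.vq import kmeans2

def color_histogram(rgb_image, bins_per_channel=4):
    """Normalized joint RGB histogram for an HxWx3 image with values in [0, 255]."""
    pixels = rgb_image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins_per_channel,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def cluster_results_by_color(result_images, n_clusters=5, seed=0):
    """Cluster retrieved images (list of HxWx3 arrays) by colour histogram."""
    feats = np.stack([color_histogram(img) for img in result_images])
    _, labels = kmeans2(feats, n_clusters, minit="++", seed=seed)
    groups = {}
    for idx, lab in enumerate(labels):
        groups.setdefault(int(lab), []).append(idx)
    return groups  # cluster id -> indices into result_images
```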


Computer Graphics Forum | 2012

crdbrd: Shape Fabrication by Sliding Planar Slices

Kristian Hildebrand; Bernd Bickel; Marc Alexa

We introduce an algorithm and representation for fabricating 3D shape abstractions using mutually intersecting planar cut-outs. The planes have prefabricated slits at their intersections and are assembled by sliding them together. Such abstractions are often used as a sculptural art form or in architecture and are colloquially called 'cardboard sculptures'. Based on an analysis of construction rules, we propose an extended binary space partitioning tree as an efficient representation of such cardboard models, which allows us to quickly evaluate the feasibility of newly added planar elements. The complexity of the insertion order quickly increases with the number of planar elements, and manual analysis becomes intractable. We therefore provide tools for generating cardboard sculptures with guaranteed constructibility. In combination with a simple optimization and sampling strategy for new elements, planar shape abstraction models can be designed by iteratively adding elements. As output, we obtain a fabrication plan that can be printed or sent to a laser cutter. We demonstrate the complete process by designing and fabricating cardboard models of various well-known 3D shapes.
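
As a small geometric aside grounded in the construction described above (slits are cut where two planar pieces meet), the following sketch computes the intersection line of two planes given in the form dot(n, x) = d. It is illustrative only and says nothing about the paper's extended BSP representation or constructibility analysis.

```python
# Illustrative geometry helper: the line along which two planar pieces meet,
# i.e. where a slit would be cut. Plane i is dot(n_i, x) = d_i.
import numpy as np

def plane_intersection_line(n1, d1, n2, d2, eps=1e-9):
    """Return (point, unit direction) of the intersection line of two planes,
    or None if the planes are (nearly) parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < eps:
        return None  # parallel planes: no single intersection line
    # Solve for a point satisfying both plane equations plus dot(dir, x) = 0
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

# Example: two perpendicular pieces meet along the z-axis through the origin.
print(plane_intersection_line([1, 0, 0], 0.0, [0, 1, 0], 0.0))
```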


Computers & Graphics | 2013

SMI 2013: Orthogonal slicing for additive manufacturing

Kristian Hildebrand; Bernd Bickel; Marc Alexa

Most additive manufacturing technologies work by layering, i.e. slicing the shape and then generating each slice independently. This introduces an anisotropy into the process, often as different accuracies in the tangential and normal directions, but also in terms of other parameters such as build speed or tensile strength and strain. We model this as an anisotropic cubic element. Our approach then finds a compromise between slicing each part of the shape in its individually best direction and using a single direction for the whole shape. In particular, we compute an orthogonal basis and consider only the three basis vectors as slice normals (i.e. fabrication directions). We then optimize a decomposition of the shape along this basis so that each part can be consistently sliced along one of the basis vectors. In simulation, we show that this approach is superior to slicing the whole shape in a single direction only. It also has clear benefits when the shape is larger than the build volume of the available equipment.
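
A hedged sketch of the per-part direction choice: score each of the three orthogonal build directions with the classic cusp-height proxy (layer thickness times |cos| of the angle between surface normal and build direction), summed over surface samples, and pick the cheapest. The cost model and function are illustrative assumptions, not the paper's formulation.

```python
# Illustrative per-part choice among three orthogonal build directions using
# the standard cusp-height error proxy, summed over surface samples.
import numpy as np

def best_axis_for_part(surface_normals, areas, basis, layer_thickness=0.1):
    """Pick the column of the 3x3 orthonormal `basis` that minimizes the
    summed cusp-height error over the part's surface samples."""
    normals = np.asarray(surface_normals, float)      # (N, 3), unit length
    areas = np.asarray(areas, float)                  # (N,) sample areas
    costs = []
    for k in range(3):
        d = basis[:, k]
        cusp = layer_thickness * np.abs(normals @ d)  # per-sample cusp height
        costs.append(float(np.sum(cusp * areas)))
    return int(np.argmin(costs)), costs

# Example: a mostly vertical wall (normals in the xy-plane) incurs no cusp
# cost when built along z, so axis 2 is chosen.
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
areas = np.array([1.0, 1.0])
print(best_axis_for_part(normals, areas, np.eye(3)))
```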


International Conference on Computer Graphics and Interactive Techniques | 2009

PhotoSketch: a sketch based image query and compositing system

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

We introduce a system for progressively creating images through a simple sketching and compositing interface. A large database of over 1.5 million images is searched for matches to a user's binary outline sketch; the results of this search can be combined interactively to synthesize the desired image. We introduce image descriptors for the task of estimating the difference between images and binary outline sketches. The compositing part is based on graph cut and Poisson blending. We demonstrate that the resulting system allows complex images to be generated in an intuitive way.
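
Poisson blending, named above as part of the compositing step, can be sketched for a single channel as a sparse linear solve: the source gradients act as the guidance field and the target image supplies Dirichlet boundary values. This is a generic textbook-style implementation, not the PhotoSketch code; it assumes float images of equal size and a mask region that does not touch the image border.

```python
# Minimal single-channel Poisson blending (seamless cloning) sketch.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_blend(src, dst, mask):
    """Blend the masked region of `src` into `dst` by solving the discrete
    Poisson equation with the source gradient as guidance field."""
    h, w = dst.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)
    A = sp.lil_matrix((n, n))
    b = np.zeros(n)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in offsets:
            qy, qx = y + dy, x + dx
            b[k] += src[y, x] - src[qy, qx]   # guidance: source gradient
            if mask[qy, qx]:
                A[k, idx[qy, qx]] = -1.0      # neighbor is also unknown
            else:
                b[k] += dst[qy, qx]           # Dirichlet boundary from target
    f = spsolve(A.tocsr(), b)
    out = dst.copy()
    out[ys, xs] = f
    return out
```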


Eurographics | 2012

SHREC'12 track: sketch-based 3D shape retrieval

Bo Li; Tobias Schreck; Afzal Godil; Marc Alexa; Tamy Boubekeur; Benjamin Bustos; Jipeng Chen; Mathias Eitz; Takahiko Furuya; Kristian Hildebrand; Songhua Huang; Henry Johan; Arjan Kuijper; Ryutarou Ohbuchi; Ronald Richter; Jose M. Saavedra; Maximilian Scherer; Tomohiro Yanagimachi; Gang Joon Yoon; Sang Min Yoon

Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. The aim of this track is to measure and compare the performance of sketch-based 3D shape retrieval methods implemented by different participants around the world. The track is based on a new sketch-based 3D shape benchmark, which contains two types of sketch queries and two versions of target 3D models. In this track, 7 runs were submitted by 5 groups, and their retrieval accuracy was evaluated using 7 commonly used retrieval performance metrics. We hope that the benchmark, its corresponding evaluation code, and the comparative evaluation results of state-of-the-art sketch-based 3D model retrieval algorithms will contribute to the progress of this research direction in the 3D model retrieval community.
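
For illustration, three retrieval measures commonly used in shape-retrieval benchmarks (nearest neighbor, first tier, second tier) can be computed as below. The track's exact set of 7 metrics is not reproduced here; this is only a generic sketch.

```python
# Generic nearest-neighbor / first-tier / second-tier computation for one query.
import numpy as np

def nn_ft_st(ranked_labels, query_label, class_size):
    """ranked_labels: class labels of retrieved models, best match first
    (the query itself excluded). class_size: number of models in the
    query's class, including the query itself (must be >= 2)."""
    relevant = class_size - 1
    hits = np.asarray(ranked_labels) == query_label
    nn = float(hits[0])                          # top-1 correct?
    ft = hits[:relevant].sum() / relevant        # recall in top (C-1)
    st = hits[:2 * relevant].sum() / relevant    # recall in top 2(C-1)
    return nn, float(ft), float(st)
```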


International Conference on Computer Graphics and Interactive Techniques | 2011

Throwable panoramic ball camera

Jonas Pfeil; Kristian Hildebrand; Carsten Gremzow; Bernd Bickel; Marc Alexa

Acquiring panoramic images by stitching takes a lot of time, and moving objects may cause ghosting. It is also difficult to obtain a full spherical panorama, because the downward-facing picture cannot be captured while the camera is mounted on a tripod.


International Conference on Computer Graphics and Interactive Techniques | 2010

Sketch-based 3D shape retrieval

Mathias Eitz; Kristian Hildebrand; Tamy Boubekeur; Marc Alexa

As large collections of 3D models become as common as public image collections, the need arises to quickly locate models in such collections. Models are often insufficiently annotated, so keyword-based search is not promising. Our approach to content-based search of 3D models relies entirely on visual analysis and is based on the observation that a large part of our perception of shapes stems from their salient features, usually captured by the dominant lines in their display. Recent research on such feature lines has shown that 1) people mostly draw the same lines when asked to depict a certain model and 2) the shape of an object is well represented by the set of feature lines generated by recent NPR line drawing algorithms [Cole et al. 2009]. Consequently, we suggest an image-based approach to 3D shape retrieval that exploits the similarity between human sketches and the results of current line drawing algorithms. Our search engine takes as input a sketch of the desired model drawn by a user and compares this sketch to a set of line drawings automatically generated for each of the models in the collection.
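
The query loop described above can be sketched generically: each model is represented by descriptors of several automatically generated line-drawing views, and models are ranked by the distance of their best view to the sketch descriptor. `rank_models` and the descriptor it consumes are assumed placeholders, not the paper's pipeline.

```python
# Illustrative ranking loop: sketch descriptor vs. per-view line-drawing
# descriptors precomputed for every model in the collection.
import numpy as np

def rank_models(sketch_descriptor, model_view_descriptors):
    """model_view_descriptors: dict model_id -> (n_views, d) array of
    descriptors of that model's generated line-drawing views."""
    scores = {}
    for model_id, views in model_view_descriptors.items():
        dists = np.linalg.norm(views - sketch_descriptor[None, :], axis=1)
        scores[model_id] = float(dists.min())   # best-matching view decides
    return sorted(scores, key=scores.get)       # model ids, best first
```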


IEEE Symposium on Information Visualization | 2005

PRISAD: a partitioned rendering infrastructure for scalable accordion drawing

James Slack; Kristian Hildebrand; Tamara Munzner

We present PRISAD, the first generic rendering infrastructure for information visualization applications that use the accordion drawing technique: rubber-sheet navigation with guaranteed visibility for marked areas of interest. Our new rendering algorithms are based on a partitioning of screen space, which allows us to handle dense dataset regions correctly. The algorithms in previous work led to incorrect visual representations because of overculling, and to inefficiencies due to overdrawing multiple items in the same region. Our pixel-based drawing infrastructure guarantees correctness by eliminating overculling and improves rendering performance with tight bounds on overdrawing. PRITree and PRISeq are applications built on PRISAD, with the feature sets of TreeJuxtaposer and SequenceJuxtaposer, respectively. We describe our PRITree and PRISeq dataset traversal algorithms, which are used for efficient rendering, culling, and layout of datasets within the PRISAD framework. We also discuss PRITree node marking techniques, which offer order-of-magnitude improvements in both memory and time performance over previous range storage and retrieval techniques. Our PRITree implementation features a fivefold increase in rendering speed for nontrivial tree structures and reduces memory requirements on some real-world datasets by up to eight times, so we are able to handle trees of several million nodes. PRISeq renders fifteen times faster and handles datasets twenty times larger than previous work.
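
The pixel-based idea of bounding overdraw while guaranteeing visibility of marked items can be caricatured in a few lines: bin items into pixel columns, draw at most one item per pixel, and let marked items win their pixel. This toy sketch ignores the harder parts of PRISAD (a real system must also keep distinct marked items in distinct pixels, e.g. by stretching the rubber-sheet layout), and all names are illustrative.

```python
# Toy illustration of per-pixel item selection with priority for marked items.
def pick_items_per_pixel(items, screen_width, marked):
    """items: iterable of (item_id, x_position in [0, 1)).
    marked: set of item ids that must stay visible.
    Returns a dict pixel_column -> item_id to draw."""
    chosen = {}
    for item_id, x in items:
        px = min(int(x * screen_width), screen_width - 1)
        if item_id in marked:
            chosen[px] = item_id        # marked items win their pixel
        elif px not in chosen:
            chosen[px] = item_id        # otherwise draw only the first item
    return chosen
```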

Collaboration

Kristian Hildebrand's most frequent co-authors and their affiliations.

Top Co-Authors

Marc Alexa, Technical University of Berlin

Mathias Eitz, Technical University of Berlin

Bernd Bickel, Institute of Science and Technology Austria

Marcus A. Magnor, Braunschweig University of Technology

Ronald Richter, Technical University of Berlin

James Slack, University of British Columbia

Tamara Munzner, University of British Columbia

Przemyslaw Musialski, Vienna University of Technology