Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christian Hentschel is active.

Publications


Featured research published by Christian Hentschel.


adaptive multimedia retrieval | 2007

Automatic Image Annotation Using a Visual Dictionary Based on Reliable Image Segmentation

Christian Hentschel; Sebastian Stober; Andreas Nürnberger; Marcin Detyniecki

Recent approaches in Automatic Image Annotation (AIA) try to combine the expressiveness of natural language queries with approaches to minimize the manual effort for image annotation. The main idea is to infer the annotations of unseen images using a small set of manually annotated training examples. However, typically these approaches suffer from low correlation between the globally assigned annotations and the local features used to obtain annotations automatically. In this paper we propose a framework to support image annotations based on a visual dictionary that is created automatically using a set of locally annotated training images. We designed a segmentation and annotation interface to allow for easy annotation of the training data. In order to provide a framework that is easily extendable and reusable, we make broad use of the MPEG-7 standard.
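
To make the idea concrete, here is a minimal sketch of a visual dictionary built by clustering locally annotated region features; the data, feature dimensions and helper names are hypothetical, and the paper itself works with MPEG-7 descriptors and a dedicated segmentation interface rather than this simplification.

```python
# Illustrative sketch only: cluster annotated region features into visual
# words, then label unseen regions via their nearest word (not the paper's
# actual MPEG-7-based implementation).
import numpy as np
from sklearn.cluster import KMeans

def build_visual_dictionary(region_features, region_labels, n_words=10):
    """Cluster region features and record the labels seen in each cluster."""
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(region_features)
    word_labels = [[] for _ in range(n_words)]
    for word, label in zip(kmeans.labels_, region_labels):
        word_labels[word].append(label)
    return kmeans, word_labels

def annotate(kmeans, word_labels, new_region_features):
    """Label unseen regions with the majority label of their nearest word."""
    words = kmeans.predict(new_region_features)
    return [max(set(word_labels[w]), key=word_labels[w].count) if word_labels[w] else None
            for w in words]

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))                        # stand-in region features
labels = list(rng.choice(["sky", "grass", "water"], size=200))
km, wl = build_visual_dictionary(feats, labels)
print(annotate(km, wl, rng.normal(size=(5, 16))))
```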


adaptive multimedia retrieval | 2006

SAFIRE: towards standardized semantic rich image annotation

Christian Hentschel; Andreas Nürnberger; Ingo Schmitt; Sebastian Stober

Most of the currently existing image retrieval systems make use of either low-level features or semantic (textual) annotations. A combined usage during annotation and retrieval is rarely attempted. In this paper, we propose a standardized annotation framework that integrates semantic and feature-based information about the content of images. The presented approach is based on the MPEG-7 standard with some minor extensions. The proposed annotation system SAFIRE (Semantic Annotation Framework for Image REtrieval) enables the combined use of low-level features and annotations that can be assigned to arbitrary, hierarchically organized image segments. Besides the framework itself, we discuss the query formalisms required for this unified retrieval approach.
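
A rough impression of what such a combined annotation could look like, sketched with Python's ElementTree; the element names below are simplified stand-ins, not the exact MPEG-7 schema used by SAFIRE.

```python
# Hypothetical sketch of a combined semantic + low-level annotation for a
# hierarchically organized image segment (element names are simplified,
# not the actual MPEG-7/SAFIRE schema).
import xml.etree.ElementTree as ET

region = ET.Element("StillRegion", id="region-1")
ET.SubElement(region, "Label").text = "church tower"            # semantic annotation
ET.SubElement(region, "DominantColor").text = "0.82 0.79 0.70"  # low-level feature
child = ET.SubElement(region, "StillRegion", id="region-1-1")   # nested sub-segment
ET.SubElement(child, "Label").text = "clock face"

print(ET.tostring(region, encoding="unicode"))
```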


nordic conference on human-computer interaction | 2010

Evaluation of adaptive SpringLens: a multi-focus interface for exploring multimedia collections

Sebastian Stober; Christian Hentschel; Andreas Nürnberger

Sometimes users of a multimedia retrieval system are not able to explicitly state their information need. They rather want to browse a collection in order to get an overview and to discover interesting content. In previous work, we have presented a novel interface implementing a fish-eye-based approach for browsing high-dimensional multimedia data that has been projected onto display space. The impact of projection errors is alleviated by introducing an adaptive nonlinear multi-focus zoom lens. This work describes the evaluation of this approach in a user study where participants are asked to solve an exploratory image retrieval task using the SpringLens interface. As a baseline, the usability of the interface is compared to a common pan-and-zoom-based interface. The results of a survey and the analysis of recorded screencasts and eye tracking data are presented.
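
For intuition, a single-focus fish-eye distortion can be sketched in a few lines; this is an illustrative simplification, not the adaptive multi-focus mesh that SpringLens actually uses.

```python
# Minimal single-focus fish-eye: points near the focus are spread apart,
# and the distortion decays smoothly with distance (illustrative only).
import numpy as np

def fisheye(points, focus, magnification=3.0, radius=0.3):
    """Magnify the display-space neighborhood of a 2D focus point."""
    offsets = points - focus
    dist = np.linalg.norm(offsets, axis=1, keepdims=True)
    scale = 1.0 + (magnification - 1.0) * np.exp(-(dist / radius) ** 2)
    return focus + offsets * scale

pts = np.random.rand(100, 2)                  # projected display coordinates
magnified = fisheye(pts, focus=np.array([0.5, 0.5]))
```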


international symposium on neural networks | 2010

Multi-facet exploration of image collections with an adaptive multi-focus zoomable interface

Sebastian Stober; Christian Hentschel; Andreas Nürnberger

Sometimes it is not possible for a user to state a retrieval goal explicitly a priori. One common way to support such exploratory retrieval scenarios is to give an overview using a neighborhood-preserving projection of the collection onto two dimensions. However, neighborhood cannot always be preserved in the projection because of the dimensionality reduction. Further, there is usually more than one way to look at a collection of images, and diversity grows with the number of features that can be extracted. We describe an adaptive zoomable interface for exploration that addresses both problems: it makes use of a complex non-linear multi-focal zoom lens that exploits the distorted neighborhood relations introduced by the projection. We further introduce the concept of facet distances representing different aspects of image similarity. Given user-specific weightings of these aspects, the system can adapt to the user's way of exploring the collection by manipulating the neighborhoods as well as the projection.
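
The facet-distance idea can be illustrated as a user-weighted sum of per-facet distances; the facet names, features and weights below are hypothetical examples, not the system's actual configuration.

```python
# Sketch: overall dissimilarity as a user-weighted combination of
# per-facet distances (facets and weights are made-up examples).
import numpy as np

def combined_distance(a, b, facets, weights):
    """facets maps a facet name to a distance function d(a, b)."""
    return sum(weights[name] * d(a, b) for name, d in facets.items())

facets = {
    "color":   lambda a, b: np.linalg.norm(a["color_hist"] - b["color_hist"]),
    "texture": lambda a, b: np.linalg.norm(a["gabor"] - b["gabor"]),
}
weights = {"color": 0.7, "texture": 0.3}      # user-specific emphasis

a = {"color_hist": np.ones(8), "gabor": np.zeros(4)}
b = {"color_hist": np.zeros(8), "gabor": np.ones(4)}
print(combined_distance(a, b, facets, weights))
```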


international conference on image processing | 2016

Fine tuning CNNs with scarce training data — Adapting ImageNet to art epoch classification

Christian Hentschel; Timur Pratama Wiradarma; Harald Sack

Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state-of-the-art approaches for image classification. Their success must in part be attributed to the availability of large labeled training sets such as provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise suggest Fisher Encodings in scenarios where training data is limited.
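
The pre-train-then-fine-tune recipe the abstract refers to looks roughly like the following PyTorch sketch; the architecture, class count and hyperparameters are placeholders, and the paper's own setup differs.

```python
# Minimal "pre-train on ImageNet, fine-tune on the target domain" sketch
# (illustrative; not the paper's actual architecture or framework).
import torch
import torch.nn as nn
from torchvision import models

NUM_ART_EPOCHS = 8  # hypothetical number of art-epoch classes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_ART_EPOCHS)  # replace ImageNet head

# Small learning rate so the pre-trained features are adapted, not destroyed.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```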


conference on multimedia modeling | 2015

What Image Classifiers Really See – Visualizing Bag-of-Visual Words Models

Christian Hentschel; Harald Sack

Bag-of-Visual-Words (BoVW) features, which quantize and count local gradient distributions in images similar to counting words in texts, have proven to be powerful image representations. In combination with supervised machine learning approaches, models for nearly every visual concept can be learned. BoVW feature extraction, however, is performed by cascading multiple stages of local feature detection and extraction, vector quantization and nearest neighbor assignment, which makes interpretation of the obtained image features, and thus the overall classification results, very difficult. In this work, we present an approach for providing an intuitive heat map-like visualization of the influence each image pixel has on the overall classification result. We compare three different classifiers (AdaBoost, Random Forest and linear SVM) that were trained on the Caltech-101 benchmark dataset based on their individual classification performance and the generated model visualizations. The obtained visualizations not only allow for intuitive interpretation of the classification results but also help to identify sources of misclassification due to badly chosen training examples.
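
The BoVW cascade the abstract describes (local descriptors, vector quantization, word counting, supervised classifier) can be outlined as below; the random stand-in descriptors and the vocabulary size are illustrative, and the paper's visualization step itself is omitted.

```python
# Outline of the BoVW cascade with random stand-in descriptors
# (the paper's heat-map visualization step is not shown).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
train_sets = [rng.normal(size=(80, 128)) for _ in range(20)]  # SIFT-like descriptors
train_labels = rng.integers(0, 2, size=20)

# Vocabulary: cluster descriptors pooled over all training images.
vocabulary = KMeans(n_clusters=32, n_init=3).fit(np.vstack(train_sets))

def bovw_histogram(descriptors):
    """Assign each local descriptor to its nearest visual word and count."""
    words = vocabulary.predict(descriptors)
    return np.bincount(words, minlength=vocabulary.n_clusters)

X = np.array([bovw_histogram(d) for d in train_sets], dtype=float)
clf = LinearSVC().fit(X, train_labels)
```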


nordic conference on human-computer interaction | 2014

Visual berrypicking in large image collections

Thomas Low; Christian Hentschel; Sebastian Stober; Harald Sack; Andreas Nürnberger

Exploring image collections using similarity-based two-dimensional maps is an ongoing research area that faces two main challenges: with increasing collection size and similarity-metric complexity, projection accuracy rapidly degrades, and computational costs prevent online map generation. We propose a prototype that creates the impression of panning a large (global) map by aligning inexpensive small maps showing local neighborhoods. By directed hopping from one neighborhood to the next, the user is able to explore the whole image collection. Additionally, the similarity metric can be adapted by weighting image features, so users benefit from a more informed navigation.
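
The "inexpensive small map" idea can be approximated by projecting only the current neighborhood, e.g. with MDS; the sketch below uses random stand-in features and omits the alignment of overlapping maps that the prototype performs.

```python
# Sketch: lay out only the k nearest neighbors of an anchor image with MDS,
# which is cheap compared to projecting the whole collection.
import numpy as np
from sklearn.manifold import MDS

def local_map(distances, anchor, k=15):
    """2D layout of the k nearest neighbors of an anchor image."""
    neighbors = np.argsort(distances[anchor])[:k]
    sub = distances[np.ix_(neighbors, neighbors)]
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(sub)
    return neighbors, coords

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 8))               # stand-in image feature vectors
distances = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
neighbors, coords = local_map(distances, anchor=0)
```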


international conference on semantic systems | 2013

Generating a linked soccer dataset

Tanja Bergmann; Stefan Bunk; Johannes Eschrig; Christian Hentschel; Magnus Knuth; Harald Sack; Ricarda Schüler

The provision of Linked Data about sporting results enables extensive statistics, while connections to further datasets allow enhanced and sophisticated analyses. Moreover, providing sports data as Linked Open Data may promote new applications, which are currently impossible due to the locked nature of today's proprietary sports databases. However, the sports domain is strongly underrepresented in the Linked Open Data Cloud. In this paper we present a dataset containing information about soccer entities crawled from heterogeneous sources and linked to related entities from the LOD cloud. To enable easy exploration and to illustrate the capabilities of the dataset, a web interface provides a structured overview and extensive statistics.
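
In spirit, publishing a crawled soccer entity as Linked Data and linking it to the LOD cloud could look like this rdflib sketch; the namespace, properties and DBpedia URI are illustrative, not the dataset's actual vocabulary.

```python
# Hedged sketch of a Linked Data triple set for a soccer entity
# (the example.org namespace and properties are made up for illustration).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/soccer/")
g = Graph()

team = EX["team/ExampleFC"]
g.add((team, RDF.type, EX.Team))
g.add((team, EX.name, Literal("Example FC")))
# Link the crawled entity to a related resource in the LOD cloud:
g.add((team, OWL.sameAs, URIRef("http://dbpedia.org/resource/Example_FC")))

print(g.serialize(format="turtle"))
```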


adaptive multimedia retrieval | 2011

Classifying images at scene level: comparing global and local descriptors

Christian Hentschel; Sebastian Gerke; Eugene Mbanya

In this paper we compare two state-of-the-art approaches for image classification. The first approach follows the Bag-of-Keypoints method for classifying images based on local image pattern frequency distributions. The second approach computes the gist of an image from global image statistics. Both approaches are explained in detail and their performance is compared using a subset of images taken from the ImageCLEF 2011 PhotoAnnotation task. The images were selected based on the assumption that they could be better described using global features. Results show that, while Bag-of-Keypoints-like classification performs better even for global concepts, the classification accuracy of the global descriptor remains acceptable at a much smaller computational footprint.
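
As a rough intuition for the global approach, a gist-like descriptor summarizes oriented-gradient energy on a coarse grid; the sketch below is a strong simplification of the actual gist feature.

```python
# Simplified gist-like global descriptor: mean gradient energy per grid cell
# (a much reduced stand-in for the real multi-scale, multi-orientation gist).
import numpy as np
from scipy.ndimage import sobel

def global_descriptor(gray_image, grid=4):
    gx, gy = sobel(gray_image, axis=1), sobel(gray_image, axis=0)
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    cells = [energy[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(cells)    # one compact signature for the whole image

print(global_descriptor(np.random.rand(64, 64)))
```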


business information systems | 2016

Quantitative Analysis of Art Market Using Ontologies, Named Entity Recognition and Machine Learning: A Case Study

Dominik Filipiak; Henning Agt-Rickauer; Christian Hentschel; Agata Filipowska; Harald Sack

In this paper we investigate new approaches to quantitative art market research, such as statistical analysis and the building of market indices. An ontology has been designed to describe art market data in a unified way. To ensure the quality of information in the ontology's knowledge base, data enrichment techniques such as named entity recognition (NER) and data linking are also involved. Using techniques from computer vision and machine learning, we predict the style of a painting. The paper includes a case study that provides a detailed validation of our approach.
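
The NER-based enrichment step could, for instance, be done with spaCy; the paper does not specify its tooling, so the snippet below is just one plausible option (it requires the en_core_web_sm model to be installed).

```python
# One possible NER setup for enriching art market records (illustrative;
# the paper's actual NER tooling is not specified).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded
doc = nlp("Untitled oil painting by Gerhard Richter, sold at Sotheby's London.")
for ent in doc.ents:
    print(ent.text, ent.label_)     # entities (PERSON, ORG, GPE, ...) to link
```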

Collaboration


Dive into Christian Hentschel's collaborations.

Top Co-Authors

Harald Sack
Hasso Plattner Institute

Andreas Nürnberger
Otto-von-Guericke University Magdeburg

Sebastian Stober
Otto-von-Guericke University Magdeburg

Magnus Knuth
Hasso Plattner Institute

Stefan Bunk
Hasso Plattner Institute

Tanja Bergmann
Hasso Plattner Institute