
Publication


Featured research published by Jan-Mark Geusebroek.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Visual Word Ambiguity

Jan C. van Gemert; Cor J. Veenman; Arnold W. M. Smeulders; Jan-Mark Geusebroek

This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the continuous nature of image features, the approach has been applied successfully for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method on five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large vocabulary sizes severely degrade the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and benefits more as the number of image categories increases.
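The contrast between hard and soft assignment can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses one possible soft-assignment kernel (a Gaussian over feature-to-codeword distances) and randomly generated stand-ins for the codebook and features.

```python
import numpy as np

def hard_assign_histogram(features, codebook):
    """Traditional codebook: each feature votes only for its nearest word."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return hist / hist.sum()

def soft_assign_histogram(features, codebook, sigma=1.0):
    """Soft assignment: each feature spreads its vote over all words with
    Gaussian kernel weights, modeling visual word ambiguity."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    w = np.exp(-d**2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)   # each feature's votes sum to one
    return w.sum(axis=0) / len(features)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))     # 8 visual words in a 16-D feature space
features = rng.normal(size=(100, 16))   # 100 image features
h_hard = hard_assign_histogram(features, codebook)
h_soft = soft_assign_histogram(features, codebook, sigma=2.0)
```

With hard assignment, a feature lying between two words contributes to only one bin; the soft histogram spreads that vote, which is the ambiguity the paper models explicitly.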


International Journal of Computer Vision | 2005

The Amsterdam Library of Object Images

Jan-Mark Geusebroek; Gertjan J. Burghouts; Arnold W. M. Smeulders

We present the ALOI collection of 1,000 objects recorded under various imaging circumstances. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. These images are made publicly available for scientific research purposes.


ACM Multimedia | 2006

The challenge problem for automated detection of 101 semantic concepts in multimedia

Cees G. M. Snoek; Marcel Worring; Jan C. van Gemert; Jan-Mark Geusebroek; Arnold W. M. Smeulders

We introduce the challenge problem for generic video indexing to gain insight into the intermediate steps that affect the performance of multimedia analysis methods, while at the same time fostering repeatability of experiments. To arrive at a challenge problem, we provide a general scheme for the systematic examination of automated concept detection methods, by decomposing the generic video indexing problem into 2 unimodal analysis experiments, 2 multimodal analysis experiments, and 1 combined analysis experiment. For each experiment, we evaluate generic video indexing performance on 85 hours of international broadcast news data, from the TRECVID 2005/2006 benchmark, using a lexicon of 101 semantic concepts. By establishing a minimum performance on each experiment, the challenge problem allows for component-based optimization of the generic indexing problem, while simultaneously offering other researchers a reference for comparison during indexing methodology development. To stimulate further investigation of the intermediate analysis steps that influence video indexing performance, the challenge offers the research community a manually annotated concept lexicon, pre-computed low-level multimedia features, trained classifier models, and five experiments together with baseline performance, all available at http://www.mediamill.nl/challenge/.


European Conference on Computer Vision | 2008

Kernel Codebooks for Scene Categorization

Jan C. van Gemert; Jan-Mark Geusebroek; Cor J. Veenman; Arnold W. M. Smeulders

This paper introduces a method for scene categorization by modeling ambiguity in the popular codebook approach. The codebook approach describes an image as a bag of discrete visual codewords, where the frequency distributions of these words are used for image categorization. The traditional codebook model has two drawbacks, codeword uncertainty and codeword plausibility, both of which stem from the hard assignment of visual features to a single codeword. We show that allowing a degree of ambiguity in assigning codewords improves categorization performance on three benchmark datasets.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Color invariance

Jan-Mark Geusebroek; R. van den Boomgaard; Arnold W. M. Smeulders; Hugo Geerts

This paper presents the measurement of colored object reflectance, under different, general assumptions regarding the imaging conditions. We exploit the Gaussian scale-space paradigm for color images to define a framework for the robust measurement of object reflectance from color images. Object reflectance is derived from a physical reflectance model based on the Kubelka-Munk theory for colorant layers. Illumination- and geometry-invariant properties are derived from the reflectance model. Invariance and discriminative power of the color invariants are experimentally investigated, showing the invariants to be successful in discounting shadow, illumination, highlights, and noise. Extensive experiments show the different invariants to be highly discriminative while maintaining their invariance properties. The presented framework for color measurement is well-founded in the physics of color as well as in measurement science. Hence, the proposed invariants are considered more adequate for the measurement of invariant color features than existing methods.


Computer Vision and Image Understanding | 2009

Performance evaluation of local colour invariants

Gertjan J. Burghouts; Jan-Mark Geusebroek

In this paper, we compare local colour descriptors to grey-value descriptors, adopting the evaluation framework of Mikolajczyk and Schmid. We modify the framework in several ways: we decompose it to the level of the local grey-value invariants on which common region descriptors are based, we compare the discriminative power and invariance of grey-value invariants to those of colour invariants, and we additionally evaluate the invariance of colour descriptors to photometric events such as shadow and highlights. We measure performance over an extended range of common recording conditions, including significant photometric variation. We demonstrate the intensity-normalized colour invariants and the shadow invariants to be highly distinctive, with the shadow invariants being more robust to changes of the illumination colour as well as to changes of shading and shadows. Overall, the shadow invariants perform best: they are most robust to various imaging conditions while maintaining discriminative power. When plugged into the SIFT descriptor, they are shown to outperform other methods that combine colour information with SIFT. The usefulness of C-colour-SIFT for realistic computer vision applications is illustrated by the classification of object categories from the VOC challenge, for which a significant improvement is reported.


IEEE Transactions on Image Processing | 2003

Fast anisotropic Gauss filtering

Jan-Mark Geusebroek; Arnold W. M. Smeulders; Joost van de Weijer

We derive the decomposition of the anisotropic Gaussian into a one-dimensional (1-D) Gauss filter in the x-direction followed by a 1-D filter in a nonorthogonal direction phi. Thus, the anisotropic Gaussian, too, can be decomposed by dimension, which turns out to be extremely efficient from a computing perspective. An implementation scheme for normal convolution and for recursive filtering is proposed, and directed derivative filters are also demonstrated. For the recursive implementation, filtering a 512 x 512 image is performed within 40 msec on a current state-of-the-art PC, a more than threefold performance gain for a typical filter, independent of the standard deviations and orientation of the filter. Accuracy of the filters remains reasonable when compared to the truncation error or recursive approximation error. The anisotropic Gaussian filtering method allows fast calculation of edge and ridge maps with high spatial and angular accuracy. For tracking applications, the normal anisotropic convolution scheme is more advantageous, with applications in the detection of dashed lines in engineering drawings. The recursive implementation is more attractive in feature detection applications, for instance in affine-invariant edge and ridge detection in computer vision. The proposed computational filtering method enables the practical applicability of orientation scale-space analysis.
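The dimension decomposition rests on the fact that convolving Gaussians adds their covariance matrices, so the anisotropic covariance splits exactly into an x-aligned 1-D part plus a 1-D part along the nonorthogonal direction phi. A minimal numeric check of that identity (variable names and example values are mine, not the paper's code):

```python
import numpy as np

# Anisotropic Gaussian: standard deviations (su, sv) along axes rotated by theta.
su, sv, theta = 3.0, 1.0, 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
C = R @ np.diag([su**2, sv**2]) @ R.T        # covariance of the 2-D Gaussian

# Split C = a * ex exT + b * d dT, with ex the x-axis, d = (cos phi, sin phi):
phi = np.arctan2(C[1, 1], C[0, 1])           # direction of the second 1-D filter
b = C[1, 1] / np.sin(phi)**2                 # variance of the filter along phi
a = C[0, 0] - C[0, 1]**2 / C[1, 1]           # variance of the x-direction filter

ex = np.array([1.0, 0.0])
d = np.array([np.cos(phi), np.sin(phi)])
C_rebuilt = a * np.outer(ex, ex) + b * np.outer(d, d)
assert np.allclose(C_rebuilt, C)             # the two 1-D filters compose exactly
```

Because the second filter runs along a non-grid direction, the paper's normal and recursive implementations handle the off-axis sampling; the check above only verifies the underlying algebra.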


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Edge and corner detection by photometric quasi-invariants

Joost van de Weijer; Theo Gevers; Jan-Mark Geusebroek

Feature detection is used in many computer vision applications, such as image segmentation, object recognition, and image retrieval. For these applications, robustness with respect to shadows, shading, and specularities is desired. Features based on derivatives of photometric invariants, which we call full invariants, provide the desired robustness. However, because the computation of photometric invariants involves nonlinear transformations, these features are unstable and, therefore, impractical for many applications. We propose a new class of derivatives, which we refer to as quasi-invariants. These quasi-invariants are derivatives which share with full photometric invariants the property that they are insensitive to certain photometric edges, such as shadow or specular edges, but without the inherent instabilities of full photometric invariants. Experiments show that the quasi-invariant derivatives are less sensitive to noise and introduce less edge displacement than full invariant derivatives. Moreover, quasi-invariants significantly outperform the full invariant derivatives in terms of discriminative power.


Cytometry | 2000

Robust autofocusing in microscopy

Jan-Mark Geusebroek; Arnold W. M. Smeulders; Hugo Geerts

BACKGROUND: A critical step in automatic microscopy is focusing. This report describes a robust and fast autofocus approach useful for a wide range of microscopic modalities and preparations. METHODS: The focus curve is measured over the complete focal range, reducing the chance that the best focus position is determined by dust or optical artifacts. Convolution with the derivative of a Gaussian smoothing function reduces the effect of noise on the focus curve. The influence of mechanical tolerance is accounted for. RESULTS: The method is shown to be robust in fluorescence, bright-field, and phase contrast microscopy, in fixed and living cells, as well as in fixed tissue. The algorithm was able to focus accurately within 2 or 3 s, even under extremely noisy and low-contrast imaging conditions. CONCLUSIONS: The proposed method is generally applicable in light microscopy whenever the image information content is sufficient. The reliability of the autofocus method allows for unattended operation on a large scale.
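The core of the approach, sweeping the full focal range and smoothing the resulting curve before picking the maximum, can be sketched as follows. This is a simplified stand-in (plain Gaussian smoothing of a simulated focus curve, rather than the paper's derivative-of-Gaussian convolution on real focus measurements):

```python
import numpy as np

def best_focus(z_positions, focus_scores, sigma=2.0):
    """Pick the best focus position from a full focal sweep.

    The entire curve is measured rather than hill-climbed, then smoothed by
    convolution with a Gaussian so that an isolated spike from dust or an
    optical artifact cannot win. Kernel size and sigma are illustrative."""
    t = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smooth = np.convolve(focus_scores, kernel, mode="same")
    return z_positions[int(np.argmax(smooth))]

# Simulated sweep: a broad true focus peak at z = 10 plus a narrow
# artifact spike at z = 3 that dominates the raw curve.
z = np.arange(21)
curve = np.exp(-(z - 10.0)**2 / 20.0)
curve[3] += 1.5
```

On this curve the raw maximum sits at the artifact (z = 3), while the smoothed maximum recovers the true focus at z = 10.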


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

The Semantic Pathfinder: Using an Authoring Metaphor for Generic Multimedia Indexing

Cees G. M. Snoek; Marcel Worring; Jan-Mark Geusebroek; Dennis Koelma; Frank J. Seinstra; Arnold W. M. Smeulders

This paper presents the semantic pathfinder architecture for generic indexing of multimedia archives. The semantic pathfinder extracts semantic concepts from video by exploring different paths through three consecutive analysis steps, which we derive from the observation that produced video is the result of an authoring-driven process. We exploit this authoring metaphor for machine-driven understanding. The pathfinder starts with the content analysis step, in which we follow a data-driven approach to indexing semantics. The second step, style analysis, tackles the indexing problem by viewing a video from the perspective of production. Finally, in the context analysis step, we view semantics in context. The virtue of the semantic pathfinder is its ability to learn the best path of analysis steps on a per-concept basis. To show the generality of this novel indexing approach, we develop detectors for a lexicon of 32 concepts and evaluate the semantic pathfinder against the 2004 NIST TRECVID video retrieval benchmark, using a news archive of 64 hours. Top-ranking performance in the semantic concept detection task indicates the merit of the semantic pathfinder for generic indexing of multimedia archives.
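The per-concept path selection can be sketched in a few lines; the concept names and validation scores below are hypothetical, not taken from the paper:

```python
# Each concept keeps whichever analysis path -- content, style, or
# context -- scored best on held-out validation data.
validation_scores = {                     # hypothetical average-precision values
    "anchor":   {"content": 0.31, "style": 0.52, "context": 0.48},
    "aircraft": {"content": 0.44, "style": 0.40, "context": 0.41},
    "sports":   {"content": 0.28, "style": 0.33, "context": 0.39},
}

best_path = {concept: max(scores, key=scores.get)
             for concept, scores in validation_scores.items()}
# best_path == {'anchor': 'style', 'aircraft': 'content', 'sports': 'context'}
```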

Collaboration


Dive into Jan-Mark Geusebroek's collaboration.

Top Co-Authors

Theo Gevers

University of Amsterdam


Joost van de Weijer

Autonomous University of Barcelona
