
Publication


Featured research published by Anders Lindbjerg Dahl.


International Journal of Computer Vision | 2012

Interesting Interest Points

Henrik Aanæs; Anders Lindbjerg Dahl; Kim Steenstrup Pedersen

Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed, but a measure of this kind depends on the chosen vision application. We propose a more general performance measure based on the spatial invariance of interest points under changing acquisition parameters, obtained by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of well-established interest point detection methods. Automatic performance evaluation of interest points is hard because the true correspondence is generally unknown. We overcome this by providing an extensive data set with known spatial correspondence. The data is acquired with a camera mounted on a 6-axis industrial robot providing very accurate camera positioning. Furthermore, the scene is scanned with a structured light scanner, resulting in precise 3D surface information. In total, 60 scenes are depicted, ranging from model houses and building materials to fruit and vegetables, fabric, and printed media. Each scene is depicted from 119 camera positions, and 19 individual LED illuminations are used at each position. The LED illumination provides the option of artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed-scale Harris corner detector performs best overall, followed by the Hessian-based detectors and the difference of Gaussian (DoG). Methods based on scale-space features perform better overall than other methods, especially when the distance to the scene varies; here the FAST corner detector, Edge Based Regions (EBR), and Intensity Based Regions (IBR) in particular perform poorly. The performance of Maximally Stable Extremal Regions (MSER) is moderate. We observe a relatively large decline in performance with changes in both viewpoint and light direction. Some of our observations support previous findings, while others contradict them.
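The spatial recall rate can be made concrete with a small sketch: reference detections are projected into a test view using the known scene geometry, and recall is the fraction of projections that have a detection nearby. This is a minimal NumPy illustration; the function name and match radius are ours, not taken from the paper.

```python
# Minimal sketch of a spatial recall rate, assuming ground-truth point
# correspondences are available (names and epsilon are illustrative).
import numpy as np

def spatial_recall(projected_pts, detected_pts, eps=2.5):
    """Fraction of ground-truth projections matched by a detection.

    projected_pts : (N, 2) reference detections projected into the test
                    view using the known scene geometry.
    detected_pts  : (M, 2) detections in the test view.
    eps           : match radius in pixels.
    """
    if len(projected_pts) == 0 or len(detected_pts) == 0:
        return 0.0
    # Pairwise distances between projections and detections.
    d = np.linalg.norm(projected_pts[:, None, :] - detected_pts[None, :, :], axis=2)
    # A projection is recalled if any detection falls within eps pixels.
    return float(np.mean(d.min(axis=1) <= eps))

# Toy usage: three projected points, two of them re-detected.
proj = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 15.0]])
det = np.array([[10.5, 12.2], [39.0, 42.0], [200.0, 200.0]])
print(spatial_recall(proj, det))  # -> 0.666...
```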


Computer Vision and Pattern Recognition | 2014

Large Scale Multi-view Stereopsis Evaluation

Rasmus Ramsbøl Jensen; Anders Lindbjerg Dahl; George Vogiatzis; Engin Tola; Henrik Aanæs

The seminal multiple view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis methodology. Although seminal, these benchmark datasets are limited in scope, with few reference scenes. Here, we take these works a step further by proposing a new multi-view stereo dataset that is an order of magnitude larger in the number of scenes and significantly more diverse. Specifically, we propose a dataset containing 80 scenes of large variability. Each scene consists of 49 or 64 accurate camera positions and reference structured light scans, all acquired by a 6-axis industrial robot. To apply this dataset, we propose an extension of the evaluation protocol from the Middlebury evaluation, reflecting the more complex geometry of some of our scenes. The proposed dataset is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al., and Furukawa et al. We hereby demonstrate the usability of the dataset and gain insight into the workings and challenges of multi-view stereopsis. Through these experiments we empirically validate some of the central hypotheses of multi-view stereopsis, as well as determine and reaffirm some of the central challenges.
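A Middlebury-style protocol scores a reconstruction by accuracy (distance from reconstructed points to the reference scan) and completeness (how much of the reference is covered). Below is a minimal sketch assuming both surfaces are given as 3D point clouds; the threshold and summary statistics are illustrative, not the paper's exact protocol.

```python
# Sketch of Middlebury-style accuracy/completeness evaluation against a
# structured-light reference scan (function and parameter names are ours).
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon_pts, ref_pts, tau=2.0):
    """recon_pts: (N, 3), ref_pts: (M, 3) point clouds (e.g. millimetres)."""
    ref_tree = cKDTree(ref_pts)
    recon_tree = cKDTree(recon_pts)
    # Accuracy: how close reconstructed points lie to the reference surface.
    d_acc, _ = ref_tree.query(recon_pts)
    # Completeness: how much of the reference the reconstruction covers.
    d_comp, _ = recon_tree.query(ref_pts)
    return {
        "accuracy_median": float(np.median(d_acc)),
        "completeness_frac": float(np.mean(d_comp <= tau)),
    }
```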


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2011

Finding the Best Feature Detector-Descriptor Combination

Anders Lindbjerg Dahl; Henrik Aanæs; Kim Steenstrup Pedersen

Addressing the image correspondence problem by feature matching is a central part of computer vision and 3D inference from images. Consequently, there is a substantial amount of work on evaluating feature detection and feature description methodology. However, the performance of feature matching is an interplay of both detector and descriptor methodology. Our main contribution is to evaluate the performance of some of the most popular descriptor and detector combinations on the DTU Robot dataset, a very large dataset with massive amounts of systematic data aimed at two-view matching. The size of the dataset implies that we can also reasonably make deductions about the statistical significance of our results. We conclude that the MSER and Difference of Gaussian (DoG) detectors with a SIFT or DAISY descriptor are the top performers. This performance is, however, not statistically significantly better than that of some other methods. As a byproduct of this investigation, we have also tested various DAISY-type descriptors and found that the differences in their performance are statistically insignificant on this dataset. Furthermore, we have not been able to produce results corroborating that affine-invariant feature detectors carry a statistically significant advantage on general scene types.
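Pairing a detector with a separate descriptor, as evaluated here, is straightforward in OpenCV via the detect/compute split. A generic sketch (not the paper's evaluation code) using the top-performing MSER + SIFT combination; the file names are hypothetical, and opencv-python >= 4.4 is assumed for SIFT.

```python
import cv2

# Hypothetical input files; any two overlapping grayscale views will do.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.MSER_create()    # MSER is a Feature2D, so detect() yields keypoints
descriptor = cv2.SIFT_create()  # SIFT descriptors computed at those points

kp1 = detector.detect(img1)
kp2 = detector.detect(img2)
kp1, des1 = descriptor.compute(img1, kp1)
kp2, des2 = descriptor.compute(img2, kp2)

# Two-view matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} putative correspondences")
```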


British Machine Vision Conference | 2011

Learning Dictionaries of Discriminative Image Patches

Anders Lindbjerg Dahl; Rasmus Larsen

Remarkable results have been obtained using image models based on image patches, for example sparse generative models for image inpainting, noise reduction and super-resolution, sparse texture segmentation, and texton models. In this paper we propose a powerful yet simple approach to segmentation using dictionaries of image patches with associated label data. The approach is based on ideas from sparse generative image models and texton-based texture modeling. The intensity and label dictionaries are learned from training images with associated label information for (a subset of) the pixels, using a modified vector quantization approach. For new images, the intensity dictionary is used to encode the image data and the label dictionary is used to build a segmentation of the image. We demonstrate the algorithm on composite and real texture images and show how successful training is possible even for noisy images and low-quality label training data. In our experimental evaluation we achieve state-of-the-art segmentation performance.
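The intensity/label dictionary idea can be sketched as follows: cluster training patches (plain k-means below, standing in for the paper's modified vector quantization), attach the mean label patch to each atom, then segment a new image by nearest-atom lookup with overlap averaging. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

def train_dictionaries(image, labels, patch=5, n_atoms=64):
    X = extract_patches_2d(image, (patch, patch)).reshape(-1, patch * patch)
    Y = extract_patches_2d(labels.astype(float), (patch, patch)).reshape(-1, patch * patch)
    km = KMeans(n_clusters=n_atoms, n_init=4, random_state=0).fit(X)
    # Label dictionary: mean label patch of the training patches in each atom.
    label_dict = np.stack([Y[km.labels_ == k].mean(axis=0) for k in range(n_atoms)])
    return km, label_dict

def segment(image, km, label_dict, patch=5):
    h, w = image.shape
    X = extract_patches_2d(image, (patch, patch)).reshape(-1, patch * patch)
    atoms = km.predict(X)                 # nearest intensity atom per patch
    votes = np.zeros((h, w))
    counts = np.zeros((h, w))
    idx = 0
    for i in range(h - patch + 1):        # overlap-add the label patches
        for j in range(w - patch + 1):
            votes[i:i+patch, j:j+patch] += label_dict[atoms[idx]].reshape(patch, patch)
            counts[i:i+patch, j:j+patch] += 1
            idx += 1
    return (votes / np.maximum(counts, 1)) > 0.5   # binary segmentation
```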


Journal of Pathology Informatics | 2011

Learning histopathological patterns

Andreas Kårsnäs; Anders Lindbjerg Dahl; Rasmus Larsen

Aims: The aim was to demonstrate a method for automated image analysis of immunohistochemically stained tissue samples for extracting features that correlate with patient disease. We address the problem of quantifying tumor tissue and of segmenting and counting cell nuclei.

Materials and Methods: Our method utilizes a flexible segmentation method based on sparse coding trained from representative image samples. Nuclei counting is based on a nucleus model that takes size, shape, and nucleus probability into account. Nuclei clusters and overlaps are resolved using a gray-weighted distance transform. We obtain a probability measure for pixels belonging to a nucleus from our segmentation procedure. Experiments are carried out on two sets of immunohistochemically stained images, one based on the estrogen receptor (ER) and the other on antigen KI-67. For the nuclei separation we have selected 207 ER image samples from 58 tissue microarray cores corresponding to 58 patients, and 136 KI-67 image samples, also from 58 cores. The images are hand-annotated by marking the center position of each nucleus. For the ER data we have a total of 1006 nuclei, and for the KI-67 data we have 796 nuclei. Segmentation performance was evaluated in terms of missing nuclei, falsely detected nuclei, and multiple detections. The proposed method is compared to state-of-the-art Bayesian classification.

Statistical analysis used: The performance of the proposed method and of a state-of-the-art algorithm, including variations thereof, is compared using the Wilcoxon rank sum test.

Results: For both the ER and the KI-67 experiments the proposed method exhibits lower error rates than the state-of-the-art method. Total error rates were 4.8% and 7.7% in the two experiments, corresponding to an average of 0.23 and 0.45 errors per image, respectively. The Wilcoxon rank sum tests show statistically significant improvements over the state-of-the-art method.

Conclusions: We have demonstrated a method that obtains good performance compared to state-of-the-art nuclei separation. The segmentation procedure is simple and highly flexible, and we demonstrate how, in addition to nuclei separation, it can perform precise segmentation of cancerous tissue. The complexity of the segmentation procedure is linear in the image size, and the nuclei separation is linear in the number of nuclei. Additionally, the method can be parallelized to obtain high-speed computations.
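To illustrate the nucleus-separation step: a common way to split touching nuclei from a per-pixel nucleus probability map is a distance transform followed by marker-based watershed. The sketch below uses a plain Euclidean distance transform in place of the paper's gray-weighted one, so it shows the overall pipeline rather than the exact method; thresholds and names are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_nuclei(prob_map, prob_thresh=0.5, min_sep=5):
    mask = prob_map > prob_thresh            # pixels likely inside a nucleus
    dist = ndi.distance_transform_edt(mask)  # distance to background
    # One marker per local distance maximum, at least min_sep pixels apart.
    peaks = peak_local_max(dist, min_distance=min_sep, labels=mask)
    markers = np.zeros_like(dist, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)
    return labels.max(), labels              # nucleus count and label image
```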


Meat Science | 2014

Vision-based method for tracking meat cuts in slaughterhouses.

Anders Boesen Lindbo Larsen; Marchen S. Hviid; Mikkel Engbo Jørgensen; Rasmus Larsen; Anders Lindbjerg Dahl

Meat traceability is important for linking process and quality parameters of individual meat cuts back to the production data of the farmer who raised the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment. In this article, we demonstrate a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. More specifically, we show that 211 pig loins can be identified correctly between two photo sessions. The pig loins undergo various perturbation scenarios (hanging, rough treatment, and incorrect trimming), and our method handles these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available.
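Recognition between two photo sessions can be framed as nearest-match retrieval. The sketch below is a generic local-feature baseline for that framing, not necessarily the article's actual pipeline: each query image is assigned to the gallery image with the most ratio-test matches.

```python
import cv2

def identify(query_img, gallery_imgs, ratio=0.8):
    """Return the index of the gallery image best matching the query."""
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)
    _, q_des = sift.detectAndCompute(query_img, None)
    scores = []
    for g in gallery_imgs:
        _, g_des = sift.detectAndCompute(g, None)
        good = 0
        for pair in bf.knnMatch(q_des, g_des, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good += 1
        scores.append(good)
    return int(max(range(len(scores)), key=scores.__getitem__))
```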


European Conference on Computer Vision | 2012

Jet-Based Local Image Descriptors

Anders Boesen Lindbo Larsen; Sune Darkner; Anders Lindbjerg Dahl; Kim Steenstrup Pedersen

We present a novel, general image descriptor based on higher-order differential geometry and investigate the effect of common descriptor choices. Our investigation is twofold: we develop a jet-based descriptor and perform a comparative evaluation against current state-of-the-art descriptors on the recently released DTU Robot dataset. We demonstrate how the use of higher-order image structures enables us to reduce the descriptor dimensionality while still achieving very good performance. The descriptors are tested in a variety of scenarios, including large changes in scale, viewing angle, and lighting. We show that the proposed jet-based descriptor is superior to the state of the art for DoG interest points and shows competitive performance for the other tested interest points.
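The building block of a jet-based descriptor is the local jet: the responses of Gaussian derivative filters at a point. A minimal sketch up to second order at a single scale (the paper's descriptor uses higher orders, multiple scales, and further normalization):

```python
import numpy as np
from scipy import ndimage as ndi

def local_jet(img, sigma=2.0):
    """Per-pixel jet features: (L, Lx, Ly, Lxx, Lxy, Lyy)."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]  # (dy, dx)
    responses = [ndi.gaussian_filter(img.astype(float), sigma, order=o)
                 for o in orders]
    return np.stack(responses, axis=-1)  # shape (H, W, 6)

# Descriptor at an interest point (x, y): the jet vector at that pixel,
# e.g. jet = local_jet(image)[y, x]
```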


Scandinavian Conference on Image Analysis | 2011

Supercontinuum light sources for hyperspectral subsurface laser scattering: applications for food inspection

Otto Højager Attermann Nielsen; Anders Lindbjerg Dahl; Rasmus Larsen; Flemming Møller; Frederik Donbæk Nielsen; Carsten L. Thomsen; Henrik Aanæs; Jens Michael Carstensen

A material's structural and chemical composition influences its optical scattering properties. In this paper we investigate the use of subsurface laser scattering (SLS) for inferring structural and chemical information about food products. We have constructed a computer vision system based on a supercontinuum laser light source and an Acousto-Optic Tunable Filter (AOTF) to provide a collimated light source that can be tuned to any wavelength in the range from 480 to 900 nm. We present the newly developed hyperspectral vision system together with a proof-of-principle study of its ability to discriminate between dairy products with either similar chemical or similar structural composition. The combined vision system offers a new approach to industrial food inspection, allowing non-intrusive online inspection of process parameters that are hard to measure with existing technology.
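As a rough illustration of how SLS measurements might be turned into discriminative features, one simple choice (ours, not necessarily the paper's analysis) is the log-intensity fall-off away from the laser entry point, computed per wavelength, followed by nearest-class-mean classification:

```python
import numpy as np

def sls_feature(profile):
    """profile: (n_wavelengths, n_radii) mean intensity vs. distance."""
    radii = np.arange(1, profile.shape[1] + 1)
    # Slope of log-intensity vs. radius approximates the scattering fall-off.
    return np.array([np.polyfit(radii, np.log(p + 1e-9), 1)[0] for p in profile])

def classify(feature, class_means):
    """Nearest class mean; class_means: dict mapping name -> feature vector."""
    return min(class_means, key=lambda c: np.linalg.norm(feature - class_means[c]))
```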


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2012

Multiple View Stereo by Reflectance Modeling

Sujung Kim; Seong Dae Kim; Anders Lindbjerg Dahl; Knut Conradsen; Rasmus Ramsbøl Jensen; Henrik Aanæs

Multiple view stereo is typically formulated as an optimization problem over a data term and a prior term. The data term is based on the consistency of images projected onto a hypothesized surface; this consistency is based on a measure denoted a visual metric, e.g. normalized cross correlation. Here we argue that a visual metric based on a surface reflectance model should be founded on more observations than the degrees of freedom (dof) of the reflectance model. If (partly) specular surfaces are to be handled, this implies a model with at least two dof. In this paper, we propose to construct visual metrics of more than one dof using the DAISY methodology, which compares favorably to the state of the art in the experiments carried out. These experiments are based on a novel data set of eight scenes with diffuse and specular surfaces and accompanying ground truth. The performance of six different visual metrics based on the DAISY framework is investigated experimentally, addressing whether a visual metric should be aggregated from a set of minimal images, which dof is best, and whether a combination of one and two dof should be used. Which metric performs best depends on the viewed scene, although there is a clear tendency for the two-dof minimal metric to be preferred.
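For reference, the one-dof baseline visual metric mentioned above, normalized cross correlation between two projected patches, takes only a few lines of NumPy (an illustrative sketch, not the paper's DAISY-based metrics):

```python
import numpy as np

def ncc(a, b, eps=1e-9):
    """Normalized cross correlation of two equally sized image patches."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```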


Proceedings of SPIE, the International Society for Optical Engineering | 2012

Classification of polarimetric SAR data using dictionary learning

Jacob Schack Vestergaard; Anders Lindbjerg Dahl; Rasmus Larsen; Allan Aasbjerg Nielsen

This contribution deals with the classification of multilook fully polarimetric synthetic aperture radar (SAR) data by learning a dictionary of the crop types present in the Foulum test site. The Foulum test site contains a large number of agricultural fields, as well as lakes, wooded areas, natural vegetation, grasslands, and urban areas, which makes it ideally suited for the evaluation of classification algorithms. Dictionary learning centers on building a collection of image patches typical of the classification problem at hand. This requires initial manual labeling of the classes present in the data and is thus a method for supervised classification. The method aims to maintain a sufficient number of typical patches and associated labels. Data are subsequently classified by a nearest-neighbor search over the dictionary elements and labeled with per-class probabilities. Each dictionary element consists of one or more features, such as spectral measurements, in a neighborhood around each pixel. For polarimetric SAR data, these features are the elements of the complex covariance matrix for each pixel. We quantitatively compare the effect of using different representations of the covariance matrix as the dictionary element features. Furthermore, we compare dictionary learning, in the context of classifying polarimetric SAR data, with standard classification methods based on single-pixel measurements.
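The nearest-neighbor dictionary classification described above can be sketched as follows: dictionary elements are labeled feature vectors (e.g. covariance-matrix features in a pixel neighborhood), and each pixel takes class probabilities from its nearest atoms. Names and the neighbor count are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def classify_pixels(pixel_feats, dict_feats, dict_labels, n_classes, k=5):
    """pixel_feats: (N, d); dict_feats: (M, d); dict_labels: (M,) int classes.

    Returns an (N, n_classes) array of per-pixel class probabilities.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(dict_feats)
    _, idx = nn.kneighbors(pixel_feats)   # (N, k) indices of nearest atoms
    probs = np.zeros((len(pixel_feats), n_classes))
    for c in range(n_classes):
        # Probability of class c = fraction of the k nearest atoms labeled c.
        probs[:, c] = (dict_labels[idx] == c).mean(axis=1)
    return probs
```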

Collaboration


Dive into Anders Lindbjerg Dahl's collaborations.

Top Co-Authors

Rasmus Larsen
Technical University of Denmark

Henrik Aanæs
Technical University of Denmark

Jens Michael Carstensen
Technical University of Denmark

Jacob Lercke Skytte
Technical University of Denmark

Jacob Schack Vestergaard
Technical University of Denmark

Peter E. Holm
University of Copenhagen