Publications


Featured research published by Henrik Aanæs.


symposium on geometry processing | 2009

Shape analysis using the auto diffusion function

Katarzyna Gebal; Jakob Andreas Bærentzen; Henrik Aanæs; Rasmus Larsen

Scalar functions defined on manifold triangle meshes are a starting point for many geometry processing algorithms, such as mesh parametrization, skeletonization, and segmentation. In this paper, we propose the Auto Diffusion Function (ADF), a linear combination of the eigenfunctions of the Laplace‐Beltrami operator chosen so that it has a simple physical interpretation. The ADF of a given 3D object has a number of further desirable properties: its extrema are generally at the tips of features of the object, its gradients and level sets respectively follow or encircle features, it is controlled by a single parameter which can be interpreted as feature scale, and, finally, it is invariant to rigid and isometric deformations.
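As a rough illustration of the construction described above (not the authors' implementation), the sketch below evaluates a sum of exponentially damped, squared Laplacian eigenfunctions. A small path-graph Laplacian stands in for the Laplace-Beltrami operator of a real mesh, and the function name and parameter choice are ours.

```python
import numpy as np

def auto_diffusion_function(L, t):
    """Evaluate an ADF-style function at every vertex: a weighted sum of
    squared Laplacian eigenfunctions, exp(-t * lam_i) * phi_i(x)**2.
    L is a dense graph Laplacian standing in for Laplace-Beltrami."""
    lam, phi = np.linalg.eigh(L)       # lam[0] ~ 0 is the constant mode
    w = np.exp(-t * lam[1:])           # skip the constant eigenfunction;
    return (phi[:, 1:] ** 2) @ w       # t acts as a feature-scale knob

# toy "mesh": Laplacian of a 6-vertex path graph
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
adf = auto_diffusion_function(L, t=0.5)
# the extrema land at the two path endpoints, the graph analogue of the
# paper's observation that ADF extrema mark the tips of features
print(adf)
```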


IEEE Transactions on Visualization and Computer Graphics | 2005

Signed distance computation using the angle weighted pseudonormal

Jakob Andreas Bærentzen; Henrik Aanæs

The normals of closed, smooth surfaces have long been used to determine whether a point is inside or outside such a surface. It is tempting to also use this method for polyhedra represented as triangle meshes. Unfortunately, this is not possible since, at the vertices and edges of a triangle mesh, the surface is not C¹ continuous; hence, the normal is undefined at these loci. In this paper, we undertake to show that the angle weighted pseudonormal (originally proposed by Thürmer and Wüthrich and independently by Séquin) has the important property that it allows us to discriminate between points that are inside and points that are outside a mesh, regardless of whether a mesh vertex, edge, or face is the closest feature. This inside-outside information is usually represented as the sign of the signed distance to the mesh. In effect, our result shows that this sign can be computed as an integral part of the distance computation. Moreover, it provides an additional argument in favor of the angle weighted pseudonormal being the natural extension of the face normals. Apart from the theoretical results, we also propose a simple and efficient algorithm for computing the signed distance to a closed C⁰ mesh. Experiments indicate that the sign computation overhead when running this algorithm is almost negligible.
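A minimal sketch of the sign test the abstract describes, under the assumption that the closest mesh feature to the query point is a vertex: the pseudonormal at the vertex is the sum of incident face normals weighted by their interior angles, and the sign is that of the dot product with the offset vector. The function names and the tetrahedron example are ours, not the paper's code.

```python
import numpy as np

def angle_weighted_pseudonormal(verts, faces, vi):
    """Sum of incident face normals, each weighted by the face's
    interior angle at vertex vi."""
    n = np.zeros(3)
    for f in faces:
        if vi not in f:
            continue
        i = list(f).index(vi)
        a, b, c = verts[f[i]], verts[f[(i + 1) % 3]], verts[f[(i + 2) % 3]]
        e1, e2 = b - a, c - a
        cosang = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
        fn = np.cross(e1, e2)                  # outward for CCW faces
        n += np.arccos(np.clip(cosang, -1, 1)) * fn / np.linalg.norm(fn)
    return n

def sign_at_vertex(p, verts, faces, vi):
    """Sign of the signed distance when vertex vi is the closest feature:
    positive (outside) iff (p - v) points along the pseudonormal."""
    return np.sign((p - verts[vi]) @ angle_weighted_pseudonormal(verts, faces, vi))

# tetrahedron with consistently outward-oriented faces
V = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
F = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(sign_at_vertex(np.array([-1., -1, -1]), V, F, 0))   # 1.0 (outside)
print(sign_at_vertex(np.array([0.05, 0.05, 0.05]), V, F, 0))  # -1.0 (inside)
```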


International Journal of Computer Vision | 2012

Interesting Interest Points

Henrik Aanæs; Anders Lindbjerg Dahl; Kim Steenstrup Pedersen

Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed, but such a measure depends on the chosen vision application. We propose a more general performance measure based on the spatial invariance of interest points under changing acquisition parameters, obtained by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing, well-established interest point detection methods. Automatic performance evaluation of interest points is hard because the true correspondence is generally unknown. We overcome this by providing an extensive data set with known spatial correspondence. The data is acquired with a camera mounted on a 6-axis industrial robot providing very accurate camera positioning. Furthermore, the scene is scanned with a structured light scanner, resulting in precise 3D surface information. In total, 60 scenes are depicted, including model houses, building materials, fruit and vegetables, fabric, printed media, and more. Each scene is depicted from 119 camera positions, and 19 individual LED illuminations are used for each position. The LED illumination provides the option of artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed-scale Harris corner detector performs best overall, followed by the Hessian-based detectors and the difference of Gaussians (DoG). The methods based on scale space features have an overall better performance than other methods, especially when varying the distance to the scene, where the FAST corner detector, Edge Based Regions (EBR), and Intensity Based Regions (IBR) in particular perform poorly. The performance of Maximally Stable Extremal Regions (MSER) is moderate. We observe a relatively large decline in performance with changes in both viewpoint and light direction. Some of our observations support previous findings while others contradict them.
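A spatial recall rate of the kind described can be sketched as follows. This is our own toy interpretation, not the paper's evaluation code: reference detections are assumed to be already mapped into the new view via the known ground-truth geometry, and the pixel tolerance `eps` is an arbitrary choice.

```python
import numpy as np

def spatial_recall(pts_ref_mapped, pts_new, eps=2.0):
    """Fraction of reference interest points (mapped into the new view
    using known ground-truth correspondence) that have a detection in
    the new view within eps pixels."""
    if len(pts_ref_mapped) == 0:
        return 0.0
    # pairwise distances between mapped reference points and new detections
    d = np.linalg.norm(pts_ref_mapped[:, None, :] - pts_new[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))

ref = np.array([[10., 10], [50, 50], [90, 20]])
new = np.array([[11., 10], [49, 51], [200, 200]])
print(spatial_recall(ref, new))  # 2 of 3 points redetected -> 0.666...
```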


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

Robust factorization

Henrik Aanæs; Rune Fisker; Kalle Åström; Jens Michael Carstensen

Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is generally not available in practical applications. There is thus a need to make factorization algorithms deal effectively with errors in the tracked features. We propose a new and computationally efficient algorithm for applying an arbitrary error function in the factorization scheme. This algorithm enables the use of robust statistical techniques and arbitrary noise models for the individual features. These techniques and models enable the factorization scheme to deal effectively with mismatched features, missing features, and noise on the individual features. The proposed approach further includes a new method for Euclidean reconstruction that significantly improves the convergence of the factorization algorithms. The proposed algorithm has been implemented as a modification of the Christy-Horaud factorization scheme, which yields a perspective reconstruction. Based on this implementation, a considerable increase in error tolerance is demonstrated on real and synthetic data. The proposed scheme can, however, be applied to most other factorization algorithms.
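The paper modifies the Christy-Horaud scheme; the sketch below shows only the generic idea of robustness in factorization, for the simpler affine (rank-3) case: alternate a weighted low-rank fit with Huber-style downweighting of large residuals. All names, the IRLS formulation, and the weighting constant are our assumptions, not the paper's algorithm.

```python
import numpy as np

def robust_rank3(W, iters=20):
    """IRLS sketch of robust affine factorization: fit M @ S (rank 3) to
    the measurement matrix W while downweighting entries with large
    residuals, so mismatched features stop dominating the fit."""
    F2, P = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M, S = U[:, :3] * s[:3], Vt[:3]          # initial plain rank-3 fit
    Wt = np.ones_like(W)                     # per-entry robust weights
    for _ in range(iters):
        # weighted alternating least squares, row by row / column by column
        for i in range(F2):
            D = np.diag(Wt[i])
            M[i] = np.linalg.lstsq(D @ S.T, D @ W[i], rcond=None)[0]
        for j in range(P):
            D = np.diag(Wt[:, j])
            S[:, j] = np.linalg.lstsq(D @ M, D @ W[:, j], rcond=None)[0]
        R = np.abs(W - M @ S)
        k = 1.345 * (np.median(R) + 1e-12)   # Huber-style threshold
        Wt = np.minimum(1.0, k / (R + 1e-12))
    return M, S

rng = np.random.default_rng(0)
M0, S0 = rng.normal(size=(10, 3)), rng.normal(size=(3, 8))
W = M0 @ S0
W[0, 0] += 50.0                  # one grossly mismatched feature track
M, S = robust_rank3(W)
print(np.median(np.abs(M @ S - M0 @ S0)))  # stays small despite the outlier
```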


International Journal of Intelligent Systems Technologies and Applications | 2008

Fusion of stereo vision and Time-Of-Flight imaging for improved 3D estimation

Sigurjon Arni Gudmundsson; Henrik Aanæs; Rasmus Larsen

This paper suggests an approach to fusing two 3D estimation techniques: stereo vision and Time-Of-Flight (TOF) imaging. By mapping the TOF depth measurements to stereo disparities, the correspondence between the images from a fast TOF camera and a standard high-resolution camera pair is found, so the TOF depth measurements can be linked to the image pairs. In the same framework, a method is developed to initialise and constrain a hierarchical stereo matching algorithm. It is shown that in this way, higher spatial resolution is obtained than by only using the TOF camera, and higher quality dense stereo disparity maps result from this data fusion.
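For a rectified stereo pair, the depth-to-disparity mapping that makes this linking possible is the standard relation d = f·B/Z. A minimal sketch (our naming, not the paper's code):

```python
def tof_depth_to_disparity(Z, focal_px, baseline_m):
    """Map a TOF depth measurement Z (metres) to a stereo disparity
    (pixels) for a rectified pair: d = f * B / Z. Sparse low-resolution
    TOF depth can then seed and constrain a hierarchical stereo matcher."""
    return focal_px * baseline_m / Z

# e.g. focal length 1000 px, baseline 0.1 m, depth 2 m
print(tof_depth_to_disparity(2.0, 1000.0, 0.1))  # 50.0 px
```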


IEEE Transactions on Geoscience and Remote Sensing | 2008

Model-Based Satellite Image Fusion

Henrik Aanæs; Johannes R. Sveinsson; Allan Aasbjerg Nielsen; Thomas Bøvith; Jon Atli Benediktsson

A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into the implications that spectral consistency has for an image fusion method.
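Spectral consistency can be stated operationally: degrading each fused band back to the multispectral resolution should reproduce the original MS band. The check below assumes a simple sensor model in which each MS pixel is the mean of the underlying high-resolution pixels; this toy model and the function name are ours, not the paper's.

```python
import numpy as np

def spectrally_consistent(fused, ms, ratio=4, tol=1e-6):
    """Check spectral consistency: block-averaging each pansharpened band
    (fused: bands x H x W) down by `ratio` should reproduce the original
    multispectral image (ms: bands x H/ratio x W/ratio)."""
    h, w = ms.shape[1:]
    down = fused.reshape(fused.shape[0], h, ratio, w, ratio).mean(axis=(2, 4))
    return bool(np.max(np.abs(down - ms)) < tol)

rng = np.random.default_rng(1)
fused = rng.random((3, 8, 8))                       # toy pansharpened image
ms = fused.reshape(3, 2, 4, 2, 4).mean(axis=(2, 4)) # consistent MS image
print(spectrally_consistent(fused, ms))  # True
```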


international symposium on signals, circuits and systems | 2007

Environmental Effects on Measurement Uncertainties of Time-of-Flight Cameras

S.A. Gudmundsson; Henrik Aanæs; Rasmus Larsen

In this paper, the effect the environment has on the SwissRanger SR3000 time-of-flight camera is investigated. The accuracy of this camera is highly affected by the scene it is pointed at, including its reflective properties, color, and gloss. The complexity of the scene also has considerable effects on the accuracy, for example the angle of the objects to the emitted light and the scattering effects of nearby objects. A general overview of such known inaccuracy factors is given, followed by experiments illustrating additional uncertainty factors. Specifically, we give a better description of how surface color intensity influences the depth measurement and illustrate how multiple reflections influence the resulting depth measurement.


computer vision and pattern recognition | 2014

Large Scale Multi-view Stereopsis Evaluation

Rasmus Ramsbøl Jensen; Anders Lindbjerg Dahl; George Vogiatzis; Engin Tola; Henrik Aanæs

The seminal multiple view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis methodology. Although seminal, these benchmark datasets are limited in scope, with few reference scenes. Here, we take these works a step further by proposing a new multi-view stereo dataset, which is an order of magnitude larger in number of scenes and with a significant increase in diversity. Specifically, we propose a dataset containing 80 scenes of large variability. Each scene consists of 49 or 64 accurate camera positions and reference structured light scans, all acquired by a 6-axis industrial robot. To apply this dataset, we propose an extension of the evaluation protocol from the Middlebury evaluation, reflecting the more complex geometry of some of our scenes. The proposed dataset is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al., and Furukawa et al. We thereby demonstrate the usability of the dataset and gain insight into the workings and challenges of multi-view stereopsis. Through these experiments, we empirically validate some of the central hypotheses of multi-view stereopsis, and determine and reaffirm some of its central challenges.
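Middlebury-style evaluation protocols of this kind typically report two point-cloud statistics: accuracy (reconstruction-to-reference distances) and completeness (reference-to-reconstruction distances). A brute-force sketch, with our own naming and toy data rather than the paper's protocol:

```python
import numpy as np

def mvs_accuracy_completeness(recon, ref):
    """Accuracy: distance from each reconstructed point to its nearest
    reference (structured light) point. Completeness: distance from each
    reference point to its nearest reconstructed point. Returns medians."""
    d = np.linalg.norm(recon[:, None, :] - ref[None, :, :], axis=2)
    accuracy = d.min(axis=1)       # one value per reconstructed point
    completeness = d.min(axis=0)   # one value per reference point
    return float(np.median(accuracy)), float(np.median(completeness))

ref = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])     # reference scan
recon = np.array([[0., 0, 0], [1, 0, 0], [5, 5, 5]])   # one stray point
print(mvs_accuracy_completeness(recon, ref))  # medians: (0.0, 0.0)
```

Medians (rather than means) keep a single stray reconstruction or an unscanned region from dominating the score, which matters for scenes with complex geometry.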


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Classification of Pansharpened Urban Satellite Images

Frosti Palsson; Johannes R. Sveinsson; Jon Atli Benediktsson; Henrik Aanæs

The classification of high-resolution urban remote sensing imagery is addressed, with a focus on the classification of imagery that has been pansharpened by a number of different pansharpening methods. The pansharpening process introduces spectral and spatial distortions in the resulting fused multispectral image, the amount of which varies greatly depending on which pansharpening technique is used. In the majority of the pansharpening techniques that have been proposed, there is a compromise between spatial enhancement and spectral consistency. Here, we study the effects of the spectral and spatial distortions on the accuracy of classification of pansharpened imagery. We also study the performance, in terms of accuracy, of the various pansharpening techniques during classification with spatial information obtained using mathematical morphology (MM). MM is used to derive local spatial information from the panchromatic data. Random Forests (RF) and Support Vector Machines (SVM) are used as classifiers. Experiments are done on three different datasets obtained by two different imaging sensors, IKONOS and QuickBird. These sensors deliver multispectral images with four bands: R, G, B, and near infrared (NIR). To further study the contribution of the NIR band, experiments are done using the RGB bands alone and all four bands, respectively.


computer vision and pattern recognition | 2008

TOF imaging in Smart room environments towards improved people tracking

Sigurjon Arni Gudmundsson; Rasmus Larsen; Henrik Aanæs; Montse Pardàs; Josep R. Casas

In this paper, we present the use of time-of-flight (TOF) cameras in smart rooms and show how this leads to improved results in segmenting the people in the room from the background and, consequently, to better 3D reconstruction of the people. A calibrated rig of one SwissRanger SR3100 time-of-flight range camera and a high-resolution standard camera is set up in a smart room containing 5 other standard cameras. A probabilistic background model is used to segment each view, and a shape-from-silhouette 3D volume is constructed. It is shown that the presence of the range camera provides ways of eliminating regional artifacts and therefore a more robust input for higher-level applications such as people tracking or human motion analysis.
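Shape from silhouette amounts to keeping a voxel only if it projects inside the foreground mask in every calibrated view. The sketch below uses two toy orthographic views and our own function names; it illustrates the visual-hull idea, not the paper's pipeline.

```python
import numpy as np

def shape_from_silhouette(masks_and_projectors, grid):
    """Visual-hull sketch: a voxel survives only if it projects into the
    foreground silhouette in every view. Each view is a pair of a binary
    mask and a function mapping a 3D point to integer pixel coords (u, v)."""
    keep = np.ones(len(grid), dtype=bool)
    for mask, project in masks_and_projectors:
        for i, X in enumerate(grid):
            u, v = project(X)
            inside = 0 <= u < mask.shape[1] and 0 <= v < mask.shape[0]
            keep[i] &= inside and bool(mask[v, u])
    return grid[keep]

# 2x2x2 voxel grid, two orthographic "cameras" (top: x,y; front: x,z),
# each silhouette marking only pixel (0, 0) as foreground
grid = np.array([(x, y, z) for x in range(2) for y in range(2) for z in range(2)])
top = (np.zeros((2, 2), dtype=bool), lambda X: (int(X[0]), int(X[1])))
front = (np.zeros((2, 2), dtype=bool), lambda X: (int(X[0]), int(X[2])))
top[0][0, 0] = front[0][0, 0] = True
hull = shape_from_silhouette([top, front], grid)
print(hull)  # only the voxel at the origin survives both silhouettes
```

A TOF camera in the rig supplies depth directly, which is what lets regional segmentation artifacts in individual silhouettes be rejected before carving.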

Collaboration


Dive into Henrik Aanæs's collaborations.

Top Co-Authors

Jakob Andreas Bærentzen, Technical University of Denmark
Rasmus Larsen, Technical University of Denmark
Jannik Boll Nielsen, Technical University of Denmark
Jeppe Revall Frisvad, Technical University of Denmark
Anders Lindbjerg Dahl, Technical University of Denmark
François Anton, Technical University of Denmark
David Bue Pedersen, Technical University of Denmark
Jens Gravesen, Technical University of Denmark
Jakob Wilm, Technical University of Denmark