Publications


Featured research published by Henrik Haggrén.


Scandinavian Journal of Forest Research | 2004

Calibration of Laser-derived Tree Height Estimates by Means of Photogrammetric Techniques

Petri Rönnholm; Juha Hyyppä; Hannu Hyyppä; Henrik Haggrén; Xiaowei Yu; Harri Kaartinen

Techniques based on laser point clouds and digital terrestrial images were demonstrated for calibrating tree-height estimation. Individual tree heights can be roughly estimated from laser scanning data by using the approximated ground level and the highest hit on the treetop. However, laser-derived measurements often underestimate tree heights, and this underestimation can arise from various error sources. Digital terrestrial images can be used to verify and understand the behaviour of laser point clouds: when laser data are backprojected onto a close-range image, it is possible to show where each laser beam was reflected. This, however, requires proper orientation of the images. In this study an interactive orientation method was used to derive image orientations, using one laser strip at a time as the reference data. The backprojection of laser point clouds confirmed the height underestimations found by comparing the tacheometer reference measurements with the laser-derived tree heights. In addition, the described procedure made it possible to explain the causes of the tree-height underestimation.
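
The calibration above hinges on backprojecting laser points into close-range images whose orientation is known. As a purely illustrative sketch (not code from the paper), the following function projects 3-D points into an image with a simple pinhole/collinearity model; the function name, pixel-based camera parameters, and sign conventions are assumptions, and real photogrammetric software would additionally model lens distortion and the photogrammetric axis conventions.

```python
import numpy as np

def backproject_points(points_xyz, camera_center, R, focal_px, principal_point):
    """Project 3-D laser points into an oriented image with a pinhole model.
    R rotates object coordinates into the camera frame; focal_px and
    principal_point are in pixels. Returns image coordinates and a mask of
    points in front of the camera."""
    # Translate to the projection centre and rotate into the camera frame.
    cam = (points_xyz - camera_center) @ R.T
    # Keep only points in front of the camera to avoid division by ~0 depth.
    in_front = cam[:, 2] > 1e-6
    cam = cam[in_front]
    # Perspective division and shift by the principal point.
    uv = focal_px * cam[:, :2] / cam[:, 2:3] + principal_point
    return uv, in_front

# Example with made-up values: a camera at the origin looking along +Z.
pts = np.array([[1.0, 2.0, 20.0], [0.5, -1.0, 15.0]])
uv, mask = backproject_points(pts, np.zeros(3), np.eye(3),
                              focal_px=2000.0,
                              principal_point=np.array([1024.0, 768.0]))
print(uv)
```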


Laser Radar Technology and Applications (conference) | 2000

Accuracy of laser scanning for DTM generation in forested areas

Juha M. Hyyppä; Ulla Pyysalo; Hannu Hyyppä; Henrik Haggrén; Georg S. Ruppert

This paper evaluates and discusses the accuracy of laser scanning for DTM (digital terrain model) generation in forested and suburban areas. Special emphasis is placed on optimizing the selection of ground hits used to create the DTM with future high-pulse-rate laser scanners. A novel DTM algorithm is described in detail. The algorithm is based on five phases: (1) calculation of the original reference surface, (2) classification of vegetation and removal of the vegetation from the reference surface, (3) classification of the original point cloud using the reference surface, (4) calculation of the DTM based on the classified ground hits, and (5) interpolation of the missing points. A standard error of 15 cm was obtained for flat forest areas, and the error increased with terrain slope to approximately 40 cm at a slope of 40%. The average standard error for the forest area was slightly better than 25 cm. The laser-derived DTM of the forest road deviated only 8.5 cm from the true height. Optimum performance for DTM generation was obtained by averaging the ground hits located at most 60 cm above the minimum terrain values. A simplified algorithm based on first-pulse data was suggested for more operational use. Special cases of the suburban-area DTM were verified, including terrain heights below buildings and bridges, terrain heights of roads, and terrain heights below large outdoor light fixtures, to name but a few. About 100 such special cases were collected in the suburban/urban environment for DTM verification; the corresponding standard error between the laser-derived values and the reference data was 45 cm.
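
The five-phase algorithm is only outlined above. A greatly simplified, hypothetical sketch of its core idea, taking per-cell terrain minima as a reference surface and averaging the hits within roughly 60 cm of it (the optimum band reported in the paper), could look like the following; the grid-based simplification, cell size, and function name are assumptions, and the vegetation-classification and interpolation phases are only indicated in comments.

```python
import numpy as np

def simple_grid_dtm(points, cell=2.0, ground_band=0.60):
    """Toy minimum-surface DTM filter: for each grid cell take the lowest hit
    as a reference surface, keep hits within `ground_band` metres of it as
    ground, and average them into the cell height. Empty cells stay NaN and
    would be interpolated in a further step."""
    xy, z = points[:, :2], points[:, 2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    ni, nj = ij.max(axis=0) + 1
    dtm = np.full((ni, nj), np.nan)
    for i in range(ni):
        for j in range(nj):
            zc = z[(ij[:, 0] == i) & (ij[:, 1] == j)]
            if zc.size == 0:
                continue
            zmin = zc.min()                        # phase 1: reference surface
            ground = zc[zc <= zmin + ground_band]  # phases 2-3: ground hits only
            dtm[i, j] = ground.mean()              # phase 4: averaged cell height
    return dtm  # phase 5 (interpolating the NaN cells) is omitted here

# points: N x 3 array of (x, y, z) laser hits; random data just for the demo.
dtm = simple_grid_dtm(np.random.rand(1000, 3) * [100, 100, 5])
```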


ISPRS Journal of Photogrammetry and Remote Sensing | 1998

Statistical analysis of two 3-D registration and modeling strategies

Olli Jokinen; Henrik Haggrén

The paper deals with the registration and modeling of multiple 3-D profile maps acquired from different viewpoints by light striping. We analyze the propagation of measurement and calibration errors to the registration parameters and further to the reconstructed model of the scene, which consists of planar patches. The analysis is performed for two strategies. In the first strategy, the maps are registered simultaneously, using the Levenberg–Marquardt method to update the registration parameters, and the model is computed afterwards; in the second, the maps are registered sequentially against the model reconstructed so far, using the method of unit quaternions to update the registration parameters. Our statistical analysis thus combines the registration and modeling steps. In registration, we determine the corresponding points either on the parametric domains of the maps or as closest points in 3-D, using the information from the parametric domain to restrict the search for the closest points. We also show how to estimate precisely the covariance matrix of the solution given by the Levenberg–Marquardt method. We illustrate the results of our analysis with real data from a scale model of an urban area.
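
The second strategy updates the registration parameters with the method of unit quaternions. As background rather than the authors' implementation (their error-propagation analysis is not reproduced here), the closed-form unit-quaternion solution of Horn (1987) for already-matched point pairs can be sketched as follows; the function name and the row-vector convention are assumptions.

```python
import numpy as np

def quaternion_registration(src, dst):
    """Closed-form rigid registration between corresponding 3-D points using
    the unit-quaternion method (Horn 1987): returns R and t such that
    dst ~= src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    S = src_c.T @ dst_c  # 3x3 cross-covariance matrix of the centred points
    # Symmetric 4x4 matrix whose dominant eigenvector is the rotation quaternion.
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],         S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],         S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],        -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],         -S[0,0]-S[1,1]+S[2,2]],
    ])
    w, V = np.linalg.eigh(N)
    q = V[:, -1]                      # eigenvector of the largest eigenvalue
    w0, x, y, z = q
    R = np.array([
        [1-2*(y*y+z*z), 2*(x*y-w0*z),  2*(x*z+w0*y)],
        [2*(x*y+w0*z),  1-2*(x*x+z*z), 2*(y*z-w0*x)],
        [2*(x*z-w0*y),  2*(y*z+w0*x),  1-2*(x*x+y*y)],
    ])
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t
```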


Sensors | 2009

Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks

Petri Rönnholm; Hannu Hyyppä; Juha Hyyppä; Henrik Haggrén

Comprehensive 3D modeling of our environment requires the integration of terrestrial and airborne data, preferably collected using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method registers an airborne laser point cloud to a 3D model by minimizing the distances between them; the 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation: having completed registration in object space, the relative orientation between the images and the laser point cloud is known. The second method uses an interactive orientation method between a multi-scale image block and a laser point cloud; the multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations from the first and second methods were compared. The comparison showed that correct rotations were the most difficult to determine accurately with the interactive method. Because the interactive method forces the laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in the image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
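
The first method registers the airborne laser point cloud to the photogrammetrically derived 3-D model by minimizing distances in object space. A generic, much simplified illustration of one such minimization step is the linearized point-to-plane adjustment below; it assumes the closest model points and surface normals have already been found, it is not taken from the article, and the function name and small-angle parameterization are assumptions.

```python
import numpy as np

def point_to_plane_step(laser_pts, model_pts, model_normals):
    """One linearized least-squares step registering laser points onto a 3-D
    model, given each laser point's closest model point and the local surface
    normal. Returns small rotation angles (rad) and a translation vector."""
    # Signed distance of each laser point to its tangent plane on the model.
    d = np.einsum('ij,ij->i', laser_pts - model_pts, model_normals)
    # Jacobian of that distance w.r.t. (omega, t) under a small-angle rotation.
    A = np.hstack([np.cross(laser_pts, model_normals), model_normals])
    x, *_ = np.linalg.lstsq(A, -d, rcond=None)
    omega, t = x[:3], x[3:]
    return omega, t   # apply as R(omega) p + t, re-match points, and iterate
```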


Electronic Imaging | 1998

Concentric image capture for photogrammetric triangulation and mapping and for panoramic visualization

Henrik Haggrén; Petteri Pöntinen; Jyrki Mononen

The paper deals with concentric image capture and its use for mapping and visualization purposes. The work is based on a photogrammetric approach to composing hemispheric images from concentric image sequences.
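
Composing hemispheric images from concentric sequences exploits the fact that images taken about a single projection centre are related by rotation-only homographies. A minimal illustration of that relation (not the paper's implementation; the calibration matrix and rotation angle below are invented) is:

```python
import numpy as np

def rotation_homography(K, R):
    """Homography mapping pixels of a rotated view into the reference view when
    all images share one projection centre (concentric capture). R rotates ray
    directions from the rotated camera's frame into the reference frame:
    x_ref ~ K @ R @ inv(K) @ x_rot (homogeneous pixel coordinates)."""
    return K @ R @ np.linalg.inv(K)

# Toy example: a camera panned 10 degrees about the vertical axis.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
H = rotation_homography(K, R)
x = H @ np.array([960.0, 540.0, 1.0])  # map the rotated image's centre pixel
print(x[:2] / x[2])                    # its position in the reference image
```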


Urban Remote Sensing Joint Event | 2009

3D city model for mobile phone using MMS data

Lingli Zhu; Juha Hyyppä; Antero Kukko; Anttoni Jaakkola; Matti Lehtomäki; Harri Kaartinen; Ruizhi Chen; Ling Pei; Yuwei Chen; Hannu Hyyppä; Petri Rönnholm; Henrik Haggrén

Recently, research on using 3D city models for personal navigation has been increasing rapidly. In this paper, an approach to 3D city model reconstruction for mobile phone-based navigation is presented, based on data collected with a vehicle-based mobile mapping system (MMS). Our method is driven by three objectives: small model size, controlled accuracy, and good visual quality. Small model size is achieved through simplified object geometry and reduced texture resolution. Model accuracy is controlled by extracting building outlines from the classified point cloud and overlaying them on the final 3D model, and model completeness is checked by comparing the resulting model with the original images. Good visual quality is achieved by applying photorealistic textures, which provide rich information for the reconstructed 3D scene. Using this approach, a 3D city model of the test area was successfully reconstructed.
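
As a rough illustration of the outline-based accuracy control (not the authors' algorithm), one could project the points classified as a single building onto the ground plane and derive a simple outline; the convex hull used below is a crude stand-in, since real footprints are often concave, and the function name is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def building_outline(building_points_xyz):
    """Crude stand-in for outline extraction: project the points classified as
    one building onto the XY plane and take their 2-D convex hull as the
    outline. Only illustrative, since building footprints are often concave."""
    xy = np.asarray(building_points_xyz)[:, :2]
    hull = ConvexHull(xy)
    return xy[hull.vertices]  # outline vertices in counter-clockwise order

# The resulting outline could then be overlaid on the simplified 3-D model to
# check its planimetric accuracy, in the spirit of the control described above.
```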


Electronic Imaging | 2005

Stereoscopy application of spherical imaging

Henrik Haggrén

Spherical images are linear images, which are exact in central projection. They are explicitly determined by their projection centre. The technical approach consists of collecting the scenery through a single perspective and combining the images like panoramic mosaics. A general application of spherical imaging is hemispheric visualisation of space, in which we distinguish between horizontal, half-hemispheric, and full-hemispheric imaging. The photogrammetric applications of spherical imaging aim at the acquisition of 3D environmental or terrain models; there the base-to-distance ratio is typically large. We assume, nevertheless, that the primary advantage of spherical imaging will be in stereoscopy applications. We aim at full-scale stereoscopy in which spherical images are projected at a scale of 1:1. In full-scale stereoscopy the stereoscopic plasticity has a value of 1 and the base is typically short; natural viewing would correspond to the base length of the human eyes, i.e. 65 mm. We present in the paper the Stereodrome, a physical realisation of full-scale stereo viewing. It consists of a photogrammetric workstation, a high-resolution stereo projector, the necessary stereo eyewear, and a back-projection screen. Our original motivation for building the Stereodrome was that it is the only means of really seeing the behaviour of 3D point clouds in detail. In the paper we also discuss how the full-scale stereo display has been used for validating the quality of existing 3D geoinformation.
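
The statement that full-scale projection gives a plasticity of 1 can be made concrete with the usual textbook expression (not taken from this paper) for stereoscopic plasticity, i.e. depth exaggeration:

    q = (B / Z) / (b_e / D)

where B is the imaging base, Z the object distance at capture, b_e ≈ 65 mm the human eye base, and D the viewing distance. With 1:1 projection the viewing distance matches the object distance (D = Z), and with a capture base B = b_e the ratio reduces to q = 1, i.e. a natural depth impression.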


Digital Photogrammetry and Remote Sensing '95 | 1995

Airborne 3D profilometer

Henrik Haggrén; Terhikki Manninen; Ilkka Peralainen; Jukka Pesonen; Petteri Pöntinen; Markku Rantasuo

Measuring the dimensions of inaccessible objects remains a challenge in spite of the new technologies available. The authors of this article approach the problem with a photogrammetric method using a laser-camera combination installed in a helicopter. The method was applied to the topographic measurement of ice fields.


Applications in Optical Science and Engineering | 1993

Vision system for 3D car-body orientation

Henrik Haggrén

Automated photogrammetric vision systems, called photogrammetric stations, are used for industrial on-line control. The stations are on-site calibrated camera setups with necessary image processing in order to provide the manufacturing process with three-dimensional control data. One of the first operative industrial applications is the car body orientation within a seam sealing cell in automotive manufacturing.


Optics and Lasers in Engineering | 1989

Photogrammetric machine vision

Henrik Haggrén

Photogrammetry is the art, science and technology of obtaining reliable three-dimensional information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of electromagnetic radiant energy and other phenomena. In real-time photogrammetry, and specifically when applied to machine vision, solid-state video cameras act as dynamic two-dimensional records of scenes containing all the actual information for the continuous gathering of three-dimensional object-space data. Both passive and active real-time photogrammetric systems are discussed.

Collaboration


Dive into Henrik Haggrén's collaborations.

Top Co-Authors

Hannu Hyyppä (Finnish Geodetic Institute)
Juha Hyyppä (National Land Survey of Finland)
Antero Kukko (Finnish Geodetic Institute)
Olli Jokinen (Helsinki University of Technology)
Petteri Pöntinen (Helsinki University of Technology)
Harri Kaartinen (Helsinki University of Technology)