
Publication


Featured research published by Henrik Malm.


International Conference on Computer Vision | 2007

Adaptive enhancement and noise reduction in very low light-level video

Henrik Malm; Magnus Oskarsson; Eric J. Warrant; Petrik Clarberg; Jon Hasselgren; Calle Lejdfors

A general methodology for noise reduction and contrast enhancement in very noisy image data with low dynamic range is presented. Video footage recorded in very dim light is especially targeted. Smoothing kernels that automatically adapt to the local spatio-temporal intensity structure in the image sequences are constructed in order to preserve and enhance fine spatial detail and prevent motion blur. In color image data, the chromaticity is restored and demosaicing of raw RGB input data is performed simultaneously with the noise reduction. The method is very general, contains few user-defined parameters and has been developed for efficient parallel computation using a GPU. The technique has been applied to image sequences with various degrees of darkness and noise levels, and results from some of these tests, and comparisons to other methods, are presented. The present work has been inspired by research on vision in nocturnal animals, particularly the spatial and temporal visual summation that allows these animals to see in dim light.
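The central idea, temporal summation that backs off where intensity changes suggest motion, can be illustrated with a minimal sketch. This is not the authors' GPU implementation; the recursive blend and the parameter `sigma` are illustrative stand-ins for the adaptive spatio-temporal kernels.

```python
import numpy as np

def adaptive_temporal_smooth(frames, sigma=10.0):
    """Denoise a video stack by temporal averaging that backs off
    where the temporal difference suggests motion.

    frames: (T, H, W) float array.
    sigma:  noise scale controlling how quickly averaging is
            suppressed by temporal change (illustrative parameter).
    """
    frames = np.asarray(frames, dtype=float)
    out = frames.copy()
    for t in range(1, frames.shape[0]):
        # Temporal difference between the running result and the new frame.
        diff = frames[t] - out[t - 1]
        # Blend weight: near 1 (keep averaging) where change is small,
        # near 0 (trust the new frame) where change is large, i.e. motion.
        w = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
        out[t] = w * (0.5 * (out[t - 1] + frames[t])) + (1.0 - w) * frames[t]
    return out
```

On a static noisy sequence this behaves like a running average; at moving edges the weight collapses and the new frame dominates, which is the summation/motion trade-off the paper automates.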


International Conference on Robotics and Automation | 2002

Force control and visual servoing using planar surface identification

Tomas Olsson; Johan Bengtsson; Rolf Johansson; Henrik Malm

When designing flexible multi-sensor based robot systems, one important problem is how to combine the measurements from disparate sensors such as cameras and force sensors. In this paper, we present a method for combining direct force control and visual servoing in the presence of unknown planar surfaces. The control algorithm involves a force feedback control loop and a vision-based reference trajectory as a feed-forward signal. The vision system is based on a constrained image-based visual servoing algorithm designed for surface following, where the location and orientation of the planar constraint surface are estimated online using position, force, and visual data. We show how data from a simple and efficient camera calibration method can be used in combination with force and position data to improve the estimation and reference trajectories. The method is validated through experiments involving force-controlled drawing on an unknown surface: the robot grasps a pen and uses it to draw lines between a number of markers on a whiteboard while the contact force is kept constant. Despite its simplicity, the performance of the method is satisfactory.
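The structure of such a controller, a force feedback loop layered on a vision-based feed-forward reference, can be sketched in one dimension along the surface normal. All names and gains below are hypothetical; the paper's image-based servoing and online surface estimation are replaced by an assumed vision estimate `s_hat`.

```python
def force_vision_controller(f_meas, f_ref, s_hat, state, kp=1e-4, ki=5e-5):
    """One step of a 1-D hybrid controller (illustrative only).

    f_meas : measured contact force
    f_ref  : desired contact force
    s_hat  : surface position predicted by the vision system (feed-forward)
    state  : dict carrying the force-error integrator between calls
    Returns the commanded tool position along the surface normal.
    """
    err = f_ref - f_meas
    state["i"] = state.get("i", 0.0) + err
    # The PI force loop produces a small penetration offset that is
    # added on top of the vision-based surface estimate.
    penetration = kp * err + ki * state["i"]
    return s_hat + penetration
```

Even with a slightly wrong vision estimate, the integrator absorbs the bias and the contact force settles at the reference, which mirrors the role division in the paper: vision supplies the trajectory, force feedback corrects the residual contact error.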


Intelligent Robots and Systems | 2003

Simplified intrinsic camera calibration and hand-eye calibration for robot vision

Henrik Malm; Anders Heyden

In this paper we investigate how intrinsic camera calibration and hand-eye calibration can be performed on a robot vision system using the simplest possible motions and a planar calibration object. The standard methods for plane-based camera calibration are extended with theory on how to use pure translational motions for the intrinsic calibration, and we show how hand-eye calibration can be performed within the same framework. The calibration of two cameras in a stereo head configuration is shown to be an interesting application of the developed theory. Results of experiments on a real robot vision system are presented.


Computer Vision and Pattern Recognition | 2001

Stereo head calibration from a planar object

Henrik Malm; Anders Heyden

A technique for stereo camera calibration from a known planar calibration object is proposed. Both the intrinsic parameters and the relative orientation are calculated. The proposed algorithm uses the two homogeneous linear constraints on the image of the absolute conic arising from the plane homographies for each camera and position. In addition to these, linear constraints on the relation between the images of the absolute conic for the two cameras in the stereo pair are calculated. The derivation of these constraints is made possible by considering the rigidity of the stereo pair. Using all these constraints simultaneously in a linear system gives a stereo calibration technique that is less sensitive to noise, compared to just using the single camera constraints. As a part of the algorithm, a simple way to calculate the infinite homography from two stereo views of a plane is presented. The performance of the algorithm is shown in experiments on both simulated and real data.
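The single-camera constraints referred to above are the classical plane-based calibration equations: each homography H = [h1 h2 h3] from the calibration plane gives h1'*omega*h2 = 0 and h1'*omega*h1 = h2'*omega*h2 on the image of the absolute conic omega = K^-T K^-1. A minimal single-camera sketch follows; the paper's additional stereo rigidity constraints are omitted.

```python
import numpy as np

def rot(ax, ay, az):
    """Euler-angle rotation matrix (helper for the synthetic example)."""
    cx, sx, cy, sy, cz, sz = (np.cos(ax), np.sin(ax), np.cos(ay),
                              np.sin(ay), np.cos(az), np.sin(az))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def _row(hi, hj):
    # Linear-system row for the bilinear form hi^T * omega * hj,
    # with symmetric omega packed as [w11, w12, w22, w13, w23, w33].
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[0] * hj[2] + hi[2] * hj[0],
                     hi[1] * hj[2] + hi[2] * hj[1],
                     hi[2] * hj[2]])

def calibrate_from_homographies(Hs):
    """Recover K from >= 3 plane-to-image homographies using the two
    linear constraints each homography puts on omega = K^-T K^-1."""
    rows = []
    for H in Hs:
        h1, h2 = H[:, 0], H[:, 1]
        rows.append(_row(h1, h2))                  # orthogonality
        rows.append(_row(h1, h1) - _row(h2, h2))   # equal norms
    _, _, Vt = np.linalg.svd(np.array(rows))
    w = Vt[-1]                                      # null vector
    omega = np.array([[w[0], w[1], w[3]],
                      [w[1], w[2], w[4]],
                      [w[3], w[4], w[5]]])
    if omega[0, 0] < 0:        # fix the arbitrary sign of the scale
        omega = -omega
    L = np.linalg.cholesky(omega)   # omega = (K^-1)^T (K^-1) = L L^T
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```

Since H = K [r1 r2 t] for a plane view, h1'*omega*h2 reduces to r1'*r2 = 0 and the two norm terms to |r1| = |r2|, which is what makes the system linear in omega.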


Proceedings of the IEEE, 102(10), pp. 1411-1426 | 2014

The Remarkable Visual Abilities of Nocturnal Insects: Neural Principles and Bioinspired Night-Vision Algorithms

Eric J. Warrant; Magnus Oskarsson; Henrik Malm

Despite their tiny eyes and brains, nocturnal insects have remarkable visual abilities. Recent work - particularly on fast-flying moths and bees and on ball-rolling dung beetles - has shown that nocturnal insects are able to distinguish colors, to detect faint movements, to learn visual landmarks, to orient to the faint pattern of polarized light produced by the moon, and to navigate using the stars. These impressive visual abilities are the result of exquisitely adapted eyes and visual systems, the product of millions of years of evolution. Even though we are only at the threshold of understanding the neural mechanisms responsible for reliable nocturnal vision, growing evidence suggests that the neural summation of photons in space and time is critically important: even though vision in dim light becomes necessarily coarser and slower, those details that are preserved are seen clearly. These benefits of spatio-temporal summation have obvious implications for dim-light video technologies. In addition to reviewing the visual adaptations of nocturnal insects, we here describe an algorithm inspired by nocturnal visual processing strategies - from amplification of primary image signals to optimized spatio-temporal summation to reduce noise - that dramatically increases the reliability of video collected in dim light, including the preservation of color.


International Conference on Pattern Recognition | 2006

Motion Dependent Spatiotemporal Smoothing for Noise Reduction in Very Dim Light Image Sequences

Henrik Malm; Eric J. Warrant

A new method for noise reduction using spatiotemporal smoothing is presented in this paper. The method is developed especially for reducing the noise that arises when acquiring video sequences with a camera under very dim light conditions. The work is inspired by research on the vision of nocturnal animals and the adaptive spatial and temporal summation that is prevalent in the visual systems of these animals. Based on analysis of the so-called structure tensor in the three-dimensional spatiotemporal space, together with motion segmentation and global ego-motion estimation, Gaussian-shaped smoothing kernels are oriented mainly along the direction of motion and along spatially homogeneous directions. In static areas, smoothing along the temporal dimension is favoured for maximum preservation of structure. The technique has been applied to various dim light image sequences and results of these experiments are presented here.
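The orientation analysis can be illustrated by the structure tensor itself: averaging outer products of spatio-temporal gradients and taking the eigenvector of the smallest eigenvalue gives the direction of least intensity change, which is where smoothing does the least damage. A minimal sketch follows; the paper orients full Gaussian kernels locally, whereas here a single tensor is averaged over the whole block for brevity.

```python
import numpy as np

def structure_tensor_orientation(seq):
    """Dominant smoothing direction of an image sequence from the
    3-D (x, y, t) structure tensor, averaged over the whole block.

    Returns the unit eigenvector of the smallest eigenvalue: the
    direction of least intensity change, i.e. the direction along
    which a smoothing kernel should be oriented.
    """
    seq = np.asarray(seq, dtype=float)
    # Spatio-temporal gradients; np.gradient returns (t, y, x) order
    # for a (T, H, W) array.
    gt, gy, gx = np.gradient(seq)
    grads = np.stack([gx.ravel(), gy.ravel(), gt.ravel()])
    J = grads @ grads.T / grads.shape[1]   # averaged outer products
    vals, vecs = np.linalg.eigh(J)          # eigenvalues ascending
    return vecs[:, 0]                       # least-change direction
```

For a static scene the least-change direction is the time axis, recovering the paper's observation that purely temporal smoothing is best in static areas; under translation the direction tilts along the motion path.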


IEEE Transactions on Robotics | 2006

Extensions of Plane-Based Calibration to the Case of Translational Motion in a Robot Vision Setting

Henrik Malm; Anders Heyden

In this paper, a technique for calibrating a camera using a planar calibration object with known metric structure, when the camera (or the calibration plane) undergoes pure translational motion, is presented. The study is an extension of the standard formulation of plane-based camera calibration, where the translational case is considered as degenerate. We derive a flexible and straightforward way of using different amounts of knowledge of the translational motion for the calibration task. The theory is mainly applicable in a robot vision setting, and the calculation of the hand-eye orientation and the special case of stereo head calibration are also addressed. Results of experiments on both computer-generated and real image data are presented. The paper covers the most useful instances of applying the technique to a real system and discusses the degenerate cases that need to be considered. The paper also presents a method for calculating the infinite homography between the two image planes in a stereo head, using the homographies estimated between the calibration plane and the image planes. Its possible usage and usefulness for simultaneous calibration of the two cameras in the stereo head are discussed and illustrated using experiments.


International Conference on Pattern Recognition | 2000

A new approach to hand-eye calibration

Henrik Malm; Anders Heyden

Traditionally, hand-eye calibration has been done using point correspondences, reducing the problem to a matrix equation. This approach requires reliably detected and tracked points between images taken from fairly widespread locations. We present a new approach to performing hand-eye calibration. The novelty of the proposed method lies in the fact that instead of point correspondences, normal derivatives of the image flow field are used. First, two different small translational motions are made, enabling the direction of the optical axis to be computed from image derivatives only. Next, at least two different rotational motions are made, enabling also the translational part of the hand-eye transformation to be estimated. It is also shown how to compute a depth reconstruction from the information obtained in the hand-eye calibration algorithm. Finally, we discuss how to calculate the derivatives and present some experiments on synthetic data.
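Once the translational motions have yielded corresponding direction vectors in the hand and camera frames, the hand-eye rotation follows from a standard alignment step. The sketch below uses the orthogonal Procrustes (Kabsch) solution as a stand-in; extracting the camera-frame directions from image derivatives, the actual contribution of the paper, is assumed already done.

```python
import numpy as np

def handeye_rotation_from_directions(d_hand, d_cam):
    """Solve for the rotation R with d_cam[i] ~ R @ d_hand[i] from
    corresponding unit translation directions (N x 3 arrays, N >= 2
    and not all parallel), via the SVD-based Kabsch solution."""
    # 3x3 cross-correlation of the two direction sets.
    A = np.asarray(d_cam).T @ np.asarray(d_hand)
    U, _, Vt = np.linalg.svd(A)
    # Force a proper rotation (det = +1) rather than a reflection.
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt
```

With noise-free directions the recovery is exact; with noisy flow-derived directions the same formula gives the least-squares rotation, which is why this alignment step is a reasonable hedged substitute here.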


International Conference on Control, Automation, Robotics and Vision | 2002

Self-calibration from image derivatives for active vision systems

Henrik Malm; Anders Heyden

In this paper we show how to calibrate a camera, mounted on a robot, with respect to the intrinsic camera parameters when the so-called hand-eye transformation between the robot hand and the camera is unknown. The calibration is based directly on the spatial and temporal derivatives in an image sequence and does not require any matching or tracking of features, or a reference object. The calibration is to be performed on an active robot vision system where the motion of the robot hand can be controlled. A minimum of three non-coplanar translations of the robot hand are needed for the calculation. In conjunction with the intrinsic camera calibration, the orientation of the camera with respect to the robot hand is calculated. The position of the camera can then also be obtained. At each stage only the image derivatives and the known motion of the robot hand are used. For the full, intrinsic and extrinsic, calibration a total of five distinct motions are used. The algorithm has been tested in extensive experiments with respect to, e.g., noise sensitivity.


European Conference on Computer Vision | 2000

Hand-Eye Calibration from Image Derivatives

Henrik Malm; Anders Heyden

In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed method comprises a simple way to calculate the hand-eye calibration when a camera is mounted on a robot. Firstly, it is shown how the orientation of the optical axis can be estimated from at least two different translational motions of the robot. Secondly, it is shown how the other parameters can be obtained using at least two different motions containing also a rotational part. In both stages, only image gradients are used, i.e. no point matches are needed. As a by-product, both the motion field and the depth of the scene can be obtained. The proposed method is illustrated in experiments using both simulated and real data.
