Hans Jørgen Andersen
Aalborg University
Publication
Featured research published by Hans Jørgen Andersen.
Robotics and Autonomous Systems | 2001
Moritz Störring; Hans Jørgen Andersen; Erik Granum
Skin colour is an often used feature in human face and motion tracking. It has the advantages of being orientation and size invariant, and it is fast to process. The major disadvantage is that it becomes unreliable if the illumination changes. In this paper, skin colour is modelled based on a reflectance model of the skin and on the parameters of the camera and light sources. In particular, the location of the skin colour area in the chromaticity plane is modelled for different and mixed light sources. The model is empirically validated. It has applications in adaptive segmentation of skin colour and in the estimation of the current illumination in camera images containing skin colour.
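The chromaticity-plane representation the model operates in can be illustrated with a short computation. The sketch below is not the paper's model; it only shows the standard projection of RGB values onto (r, g) chromaticities, with function and variable names chosen for illustration.

```python
# Minimal sketch: projecting RGB pixels onto the r-g chromaticity plane,
# the colour space in which the skin locus is modelled. Illustrative only.
import numpy as np

def rg_chromaticities(rgb):
    """Convert an (N, 3) array of RGB values to (r, g) chromaticities."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero for black pixels
    return rgb[:, :2] / total        # r = R/(R+G+B), g = G/(R+G+B)

pixels = np.array([[180, 120, 90], [200, 150, 130]])   # example skin-like pixels
print(rg_chromaticities(pixels))
```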
intelligent robots and systems | 2010
Mikael Svenstrup; Thomas Bak; Hans Jørgen Andersen
This paper presents a trajectory planning algorithm for a robot operating in dynamic human environments such as pedestrian streets, hospital corridors, train stations, or airports. We formulate the problem as planning a minimal-cost trajectory through a potential field defined from the perceived position and motion of persons in the environment.
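As an illustration of the kind of cost being minimised, the sketch below sums a person-centred potential along a candidate trajectory. The Gaussian potential, its parameters, and the path-length penalty are assumptions made for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: cost of a 2D trajectory through person-centred potentials.
import numpy as np

def person_potential(point, person_pos, sigma=1.0, weight=10.0):
    """Cost contribution of one person at a query point (assumed Gaussian form)."""
    d2 = np.sum((np.asarray(point, dtype=float) - np.asarray(person_pos, dtype=float)) ** 2)
    return weight * np.exp(-d2 / (2.0 * sigma ** 2))

def trajectory_cost(waypoints, persons, step_cost=0.1):
    """Sum of person potentials along the path plus a path-length penalty."""
    waypoints = np.asarray(waypoints, dtype=float)
    cost = sum(person_potential(p, q) for p in waypoints for q in persons)
    # penalise longer paths so the planner prefers short, low-potential routes
    cost += step_cost * np.sum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))
    return cost

persons = [(2.0, 1.0), (4.0, -0.5)]
path = [(0, 0), (1, 0.5), (2.5, 1.8), (4, 1.5), (5, 0)]
print(trajectory_cost(path, persons))
```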
international conference on robotics and automation | 2009
Mikael Svenstrup; Søren Tranberg; Hans Jørgen Andersen; Thomas Bak
This paper introduces a new method to determine a person's pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot uses observed information from a laser range finder to detect persons and their position relative to the robot. This information, together with the motion of the robot itself, is fed through a Kalman filter, which utilizes a model of the human kinematic movement to produce an estimate of the person's pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The robot's behaviour is based on adaptive potential functions that are adjusted so that the person's social spaces are respected. The method is tested in experiments that demonstrate the potential of the combined pose estimation and adaptive behaviour approach.
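For illustration, the sketch below implements a generic constant-velocity Kalman filter fed with planar position measurements of the kind a laser-based person detector could provide. The kinematic model, sample period, and noise covariances are assumed values, not those used in the paper.

```python
# Minimal constant-velocity Kalman filter sketch for tracking a person's
# planar position from position-only measurements. All parameters are assumptions.
import numpy as np

dt = 0.1                                        # assumed sample period [s]
F = np.array([[1, 0, dt, 0],                    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                     # the laser yields position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)                            # assumed process noise
R = 0.10 * np.eye(2)                            # assumed measurement noise

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with position measurement z = [x_meas, y_meas]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)                   # initial estimate and covariance
x, P = kalman_step(x, P, np.array([1.0, 0.5]))
print(x)                                        # position and velocity estimate
```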
robot and human interactive communication | 2009
Søren Tranberg Hansen; Mikael Svenstrup; Hans Jørgen Andersen; Thomas Bak
Respecting people's social spaces is an important prerequisite for acceptable and natural robot navigation in human environments. In this paper, we describe an adaptive system for mobile robot navigation based on estimates of whether a person seeks to interact with the robot or not. The estimates are based on run-time motion pattern analysis compared against experience stored in a database. Using a potential field centered around the person, the robot positions itself at the most appropriate place relative to the person and the interaction status. The system is validated through qualitative tests in a real-world setting. The results demonstrate that the system is able to learn to navigate based on past interaction experiences and to adapt to different behaviors over time.
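One simple way to picture "positioning itself at the most appropriate place relative to the person" is to compute an approach point at a chosen distance and angle from the person, depending on whether interaction is sought. The distances and angles below are illustrative proxemics-style assumptions, not the paper's learned behaviour.

```python
# Illustrative sketch: pick an approach point relative to a person's pose,
# keeping a larger stand-off when the person does not seek interaction.
# Distances and angles are assumptions, not values from the paper.
import numpy as np

def approach_point(person_xy, person_heading, seeks_interaction):
    """Return a 2D goal position for the robot relative to the person."""
    distance = 1.2 if seeks_interaction else 3.0     # assumed comfort distances [m]
    angle = person_heading + np.radians(45.0)        # approach from the front-left
    offset = distance * np.array([np.cos(angle), np.sin(angle)])
    return np.asarray(person_xy, dtype=float) + offset

print(approach_point((2.0, 1.0), person_heading=0.0, seeks_interaction=True))
```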
Pattern Recognition Letters | 2003
Moritz Störring; Tomas Kocka; Hans Jørgen Andersen; Erik Granum
New human-computer interfaces use computer vision systems to track faces and hands. A critical task in such systems is segmentation. An often used approach is colour-based segmentation, approximating the skin chromaticities with a statistical model, e.g. with a mean value and covariance matrix. The advantage of this approach is that it is invariant to size and orientation and fast to compute. A disadvantage is that it is sensitive to changes of the illumination, and in particular to changes in the illumination colour. This paper investigates (1) how accurately the covariance matrix of skin chromaticities can be modelled for different illumination colours using a physics-based approach, and (2) how this may be used as a feature to classify between skin and other materials. Results are presented using real image data taken under different illumination colours and from subjects with different shades of skin. The eigenvectors of the modelled and measured covariances deviate in orientation by about 4°. The feature to distinguish skin from other surfaces is tested on sequences with changing illumination conditions containing hands and other materials. In most cases it is possible to distinguish between skin and other objects.
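The reported deviation of about 4° compares the orientation of the dominant eigenvectors of the modelled and measured chromaticity covariances. The sketch below shows how such an angle can be computed; the covariance values are invented for illustration.

```python
# Sketch: angle between the principal eigenvectors of a modelled and a
# measured 2x2 skin-chromaticity covariance. Example values are made up.
import numpy as np

def principal_axis_angle(cov_a, cov_b):
    """Angle in degrees between the dominant eigenvectors of two 2x2 covariances."""
    va = np.linalg.eigh(cov_a)[1][:, -1]      # eigenvector of the largest eigenvalue
    vb = np.linalg.eigh(cov_b)[1][:, -1]
    cos = abs(np.dot(va, vb))                 # eigenvector sign is arbitrary
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

modelled = np.array([[2.0e-4, 1.2e-4], [1.2e-4, 1.0e-4]])
measured = np.array([[2.1e-4, 1.1e-4], [1.1e-4, 0.9e-4]])
print(principal_axis_angle(modelled, measured))
```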
Computers and Electronics in Agriculture | 2001
John A. Marchant; Hans Jørgen Andersen; Christine M. Onyango
This paper uses data collected from a previously reported imaging sensor to investigate the classification of vegetation from background. The sensor uses three wavebands: red, green, and near infra-red (NIR). A classification method (the alpha method) is introduced which is based on a model of the light source and the reflecting surface. The alpha method is compared with two ratio methods of classification (red/NIR and red/green) and two single-waveband methods of classification (NIR and green intensity). The Receiver Operating Characteristic (ROC) curve is used to evaluate the classifications on realistic test images. ROCs plot the ‘true positive ratio’ against the ‘false positive ratio’ as the classification parameter varies. The area under the ROC gives a measure of how well an algorithm performs. Measurements on the ROC show that the alpha and ratio methods all perform reasonably well, with the red/green ratio giving slightly poorer performance than the alpha method and the red/NIR ratio. The single-waveband methods perform significantly less well, with green intensity easily the worst. The alpha and ratio methods have ‘best’ thresholds that correspond with detectable histogram features when there is a significant amount of vegetation in the image. The physical basis for the alpha method means that there is a detectable mode in the histogram that corresponds with the ‘best’ threshold even when there is only a small amount of vegetation. The single-waveband methods do not produce histograms that can easily be analysed, and so their use should be confined to simple images.
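The evaluation hinges on the area under the ROC curve as a threshold-free performance measure. The sketch below scores a simple vegetation index with an ROC area on synthetic data; the score is NIR/red, the inverse of the paper's red/NIR ratio, so that larger values indicate vegetation, and all numbers are invented.

```python
# Sketch: area under the ROC curve for a thresholded vegetation index on
# synthetic pixels. Data, distributions and the NIR/red score are assumptions.
import numpy as np

def roc_auc(scores, labels):
    """ROC area; labels are 1 for vegetation, 0 for background."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels, dtype=float)[order]
    tpr = np.cumsum(labels) / max(labels.sum(), 1.0)
    fpr = np.cumsum(1.0 - labels) / max((1.0 - labels).sum(), 1.0)
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

rng = np.random.default_rng(0)
nir = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.3, 0.1, 500)])
red = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.3, 0.05, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])   # first half is vegetation

print(roc_auc(nir / np.clip(red, 1e-6, None), labels))
```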
human-robot interaction | 2010
Søren Tranberg Hansen; Hans Jørgen Andersen; Thomas Bak
Robots for the elderly have drawn a great deal of attention; the topic is controversial, yet it is pushed forward by the prospect of a dramatic increase in the number of elderly people in most Western countries. Within the field of HRI, much research has been conducted on robots interacting with the elderly, and a number of commercial products have been introduced to the market. Since 2006, several projects have been launched in Denmark in order to evaluate robot technology in elder-care practice. This paper gives a brief overview of a selection of these projects and outlines their characteristics and results. Finally, it is discussed how HRI can benefit from these experiences.
british machine vision conference | 2006
Kristian Kirk; Hans Jørgen Andersen
A method for capturing scenes with a high intensity dynamic range using a low dynamic range camera consists in taking a series of images with different exposure settings and combining these into a single high dynamic range image. The combined image values are found by weighted averaging of values from the differently exposed images on a per-pixel basis. This paper reviews existing weighting schemes and considers their noise properties. Furthermore, a minimum-variance solution is introduced which exploits a camera noise model. Special emphasis is on the case where the camera is linear. A method is given for estimating the uncertainty of the combined image values. The results are validated experimentally.
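The core of the weighted-averaging idea in the linear case can be sketched as inverse-variance weighting of exposure-normalised images. The noise model below (signal-dependent shot noise plus read noise) is a generic assumption for illustration, not the camera model estimated in the paper.

```python
# Sketch: per-pixel inverse-variance combination of differently exposed linear
# images into one radiance estimate, with an uncertainty map.
import numpy as np

def combine_exposures(images, exposure_times, gain=1.0, read_noise=1.0):
    """images: list of (H, W) linear raw images; returns (radiance, variance)."""
    estimates, weights = [], []
    for img, t in zip(images, exposure_times):
        img = np.asarray(img, dtype=float)
        radiance = img / t                            # scale to common radiance units
        var = (gain * img + read_noise ** 2) / t ** 2 # assumed shot + read noise model
        estimates.append(radiance)
        weights.append(1.0 / np.maximum(var, 1e-12))
    estimates, weights = np.array(estimates), np.array(weights)
    hdr = np.sum(weights * estimates, axis=0) / np.sum(weights, axis=0)
    var_hdr = 1.0 / np.sum(weights, axis=0)           # uncertainty of combined values
    return hdr, var_hdr

imgs = [np.full((2, 2), v) for v in (50.0, 400.0, 3200.0)]
hdr, var = combine_exposures(imgs, exposure_times=[0.01, 0.08, 0.64])
print(hdr, var)
```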
Computers and Electronics in Agriculture | 2015
Wajahat Kazmi; Francisco Garcia-Ruiz; Jon Nielsen; Jesper Rasmussen; Hans Jørgen Andersen
In this article, we address the problem of thistle detection in sugar beet fields under natural, outdoor conditions. In our experiments, we used a commercial color camera and extracted vegetation indices from the images. A total of 474 field images of sugar beet and thistles were collected and divided into six different groups based on illumination, scale and age. The feature set was made up of 14 indices. Mahalanobis Distance (MD) and Linear Discriminant Analysis (LDA) were used to classify the species. Among the features, excess green (ExG), green minus blue (GB) and the color index for vegetation extraction (CIVE) offered the highest average accuracy, above 90%. The feature set was reduced to four important indices following principal component analysis (PCA), but the classification accuracy was similar to that obtained by combining only ExG and GB, which was around 95% and still better than any individual index. Stepwise linear regression selected nine out of the 14 features and offered the highest accuracy of 97%. The results of LDA and MD were fairly close, making them both equally preferable. Finally, the results were validated by annotating images containing both sugar beet and thistles using the trained classifiers. The validation experiments showed that sunlight, followed by the size of the plant, which is related to its growth stage, are the two most important factors affecting the classification. In this study, the best results were achieved for images of young sugar beet (in the seventh week) in the shade.
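For reference, the three indices reported as most discriminative can be computed directly from the RGB channels as below. The CIVE coefficients follow the commonly cited formulation and should be taken as an assumption rather than the article's exact implementation.

```python
# Sketch: excess green (ExG), green minus blue (GB) and CIVE from RGB data.
# The input range convention and CIVE constants are assumptions for illustration.
import numpy as np

def vegetation_indices(rgb):
    """rgb: array with the colour channels in the last dimension."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                                   # excess green
    gb = g - b                                            # green minus blue
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745   # commonly cited CIVE form
    return exg, gb, cive

print(vegetation_indices(np.array([60.0, 140.0, 50.0])))  # example plant-like pixel
```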
Computers and Electronics in Agriculture | 2015
Wajahat Kazmi; Francisco Garcia-Ruiz; Jon Nielsen; Jesper Rasmussen; Hans Jørgen Andersen
Highlights: We exploit affine invariant regions and leaf edge shapes for weed detection. The data contains field images of sugar beet and thistle. A new local vegetation color descriptor is also introduced. A Bag of Visual Words approach is used with an SVM classifier. Fusion of leaf color and edge signatures yields 99% accuracy.
In this article, local features extracted from field images are evaluated for weed detection. Several scale and affine invariant detectors from the computer vision literature were applied along with high performance descriptors. The field dataset contained a total of 474 plant images of sugar beet and creeping thistle, divided into six groups based on illumination, age, and camera-to-plant distance. To establish a performance baseline, the leaf image retrieval potential of the selected features was first assessed on a publicly available leaf database containing flatbed scanned images of 15 tree species. A comparison with retrieval on the field data then highlighted the trade-off due to the field challenges. Adopting a comprehensive approach, edge shape detectors and affine invariant regions detecting homogeneous surfaces were fused. In order to integrate vegetation indices as local features, a new local vegetation color descriptor was introduced which used various combinations of color indices and offered a very high precision. Retrieval in the field data was evaluated group-wise. The impact of sunlight on the shape features was found to be very low, but relatively higher precisions were obtained for younger plants in the shade (overall more than 80%). The weed detection accuracy was assessed using the Bag-of-Visual-Words scheme with KNN and SVM classifiers. The assessment showed that with an SVM classifier, a fusion of surface color and edge shapes boosted the overall classification accuracy to as high as 99.07% with a very low false negative rate (2%).
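The classification stage can be pictured with a minimal Bag-of-Visual-Words pipeline over precomputed local descriptors, followed by an SVM. The vocabulary size, kernel, and parameters below are illustrative choices, not those tuned in the article, and descriptor extraction is assumed to have been done elsewhere.

```python
# Sketch: Bag-of-Visual-Words histograms over precomputed descriptors plus an
# SVM classifier. Vocabulary size and SVM settings are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(descriptor_sets, k=100):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(descriptors, vocabulary):
    """Normalised histogram of visual-word assignments for one image."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_and_classify(train_desc, train_labels, test_desc, k=100):
    """train_desc/test_desc: lists of (n_i, d) descriptor arrays per image;
    train_labels: e.g. 0 = sugar beet, 1 = thistle (hypothetical encoding)."""
    vocab = build_vocabulary(train_desc, k)
    X_train = np.array([bovw_histogram(d, vocab) for d in train_desc])
    X_test = np.array([bovw_histogram(d, vocab) for d in test_desc])
    clf = SVC(kernel="rbf", C=10.0).fit(X_train, train_labels)
    return clf.predict(X_test)
```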