Henrik Karstoft
Aarhus University
Publications
Featured research published by Henrik Karstoft.
Sensors | 2014
Peter Christiansen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft
In agricultural mowing operations, thousands of animals are injured or killed each year, due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3–10 m and an accuracy of 75.2% for an altitude range of 10–20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3–10 m and 77.7% in an altitude range of 10–20 m.
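A minimal sketch of the signature-and-DCT pipeline described above, assuming SciPy and scikit-learn. The erosion-based signature extraction, the 8-coefficient cut-off and the k = 5 neighbourhood are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.ndimage import binary_erosion
from sklearn.neighbors import KNeighborsClassifier

def thermal_signature(patch, steps=16):
    """Mean object temperature under successive erosions: a curve describing
    heat from the object's rim toward its core (assumed signature form)."""
    mask = patch > patch.mean()            # hypothetical per-object threshold
    curve = np.zeros(steps)
    for i in range(steps):
        if mask.any():
            curve[i] = patch[mask].mean()
            mask = binary_erosion(mask)
    return curve

def dct_features(signature, n_coeffs=8):
    """Keep the first DCT coefficients as a compact parameterization."""
    return dct(signature, norm='ortho')[:n_coeffs]

# Stand-in data: random crops and labels in place of real thermal detections.
rng = np.random.default_rng(0)
patches = rng.random((40, 32, 32))
labels = rng.integers(0, 2, 40)            # 1 = animal, 0 = non-animal
X = np.stack([dct_features(thermal_signature(p)) for p in patches])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(clf.predict(X[:3]))
```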
Sensors | 2012
Simon Lind Kappel; Michael Skovdal Rathleff; Dan Hermann; Ole Simonsen; Henrik Karstoft; Peter Ahrendt
Analysis of foot movement is essential in the treatment and prevention of foot-related disorders. Measuring in-shoe foot movement during everyday activities, such as sports, has the potential to become an important diagnostic tool in clinical practice. The current paper describes the development of a thin, flexible and robust capacitive strain sensor for the in-shoe measurement of the navicular drop, a well-recognized measure of foot movement. The position of the strain sensor on the foot was analyzed to determine the optimal points of attachment. The sensor was evaluated against a state-of-the-art video-based system that tracks reflective markers on the bare foot. Preliminary experimental results show that the developed strain sensor is able to measure navicular drop on the bare foot with an accuracy on par with the video-based system and with high reproducibility. Temporal comparison of video-based, barefoot and in-shoe measurements indicates that the developed sensor measures the navicular drop accurately in shoes and can be used without any discomfort for the user.
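As an illustration of how such a sensor might be calibrated against the video-based reference, a minimal least-squares sketch follows; the linear capacitance-to-drop model and every value below are invented placeholders, not the paper's data or method.

```python
# Hypothetical calibration: fit raw capacitance readings to navicular drop
# (mm) measured by the video-based reference system on the bare foot.
import numpy as np

cap = np.array([1.02, 1.10, 1.21, 1.33, 1.41])  # capacitance (pF), illustrative
ref = np.array([2.0, 4.1, 6.3, 8.8, 10.5])      # video-based drop (mm), illustrative

slope, offset = np.polyfit(cap, ref, 1)         # assume drop ~ linear in capacitance

def navicular_drop(c):
    return slope * c + offset

print(f"estimated drop at 1.25 pF: {navicular_drop(1.25):.1f} mm")
```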
Sensors | 2012
Kim Arild Steen; Ole Roland Therkildsen; Henrik Karstoft; Ole Green
Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified using this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of task, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of wildlife species causing conflict and, as such, may be used as an integrated part of a wildlife management system.
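A simplified GFCC-plus-SVM sketch of the kind of pipeline described above. The Greenwood constants are generic placeholders (species-specific values would be substituted), and the triangular filter bank is a standard cepstral design, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.fftpack import dct
from sklearn.svm import SVC

def greenwood(x, A=165.4, a=2.1, k=0.88):
    """Greenwood function: cochlear position x in [0, 1] -> frequency (Hz)."""
    return A * (10 ** (a * x) - k)

def gfcc(signal, fs, n_filters=20, n_coeffs=12):
    f, _, S = spectrogram(signal, fs)
    centers = greenwood(np.linspace(0, 1, n_filters + 2))
    energies = []
    for i in range(1, n_filters + 1):
        lo, mid, hi = centers[i - 1], centers[i], centers[i + 1]
        w = np.interp(f, [lo, mid, hi], [0, 1, 0])   # triangular filter
        energies.append((w[:, None] * S).sum(axis=0))
    log_e = np.log(np.array(energies) + 1e-10)
    return dct(log_e, axis=0, norm='ortho')[:n_coeffs].mean(axis=1)

# Stand-in recordings and labels (0/1/2 for foraging/flushing/landing).
rng = np.random.default_rng(1)
X = np.stack([gfcc(rng.standard_normal(16000), 16000) for _ in range(30)])
y = rng.integers(0, 3, 30)
clf = SVC(kernel='rbf').fit(X, y)
```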
Sensors | 2016
Peter Christiansen; Lars N. Nielsen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded or unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN, while RCNN has a similar performance at short range (0–30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (25 ms vs. 182 ms). Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (graphics processing unit).
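A minimal sketch of the anomaly-detection idea behind such a system: model the per-cell statistics of CNN feature maps over background-only frames, then flag cells that deviate strongly. Random arrays stand in for the feature maps a real backbone layer would produce; the z-score test and thresholds are assumptions, not DeepAnomaly's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
bg_feats = rng.normal(0.0, 1.0, (200, 14, 14, 64))  # background frames' feature maps

mu = bg_feats.mean(axis=0)                           # per-cell, per-channel mean
sd = bg_feats.std(axis=0) + 1e-6                     # ... and standard deviation

def anomaly_map(feat, thresh=2.0):
    """Aggregate per-channel z-scores; high values mark unusual content."""
    z = np.abs((feat - mu) / sd)
    score = z.mean(axis=-1)                          # (14, 14) anomaly heat map
    return score, score > thresh

frame = rng.normal(0.0, 1.0, (14, 14, 64))
frame[6:9, 6:9] += 5.0                               # inject a synthetic "obstacle"
score, mask = anomaly_map(frame)
print(mask.sum(), "anomalous cells")
```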
Journal of Imaging | 2016
Kim Arild Steen; Peter Christiansen; Henrik Karstoft; Rasmus Nyholm Jørgensen
In this paper, an algorithm for obstacle detection in agricultural fields is presented. The algorithm is based on an existing deep convolutional neural network, which is fine-tuned for the detection of a specific obstacle. ISO/DIS 18497, an emerging standard for the safety of highly automated machinery in agriculture, defines a barrel-shaped obstacle that must be robustly detected to comply with the standard. We show that our fine-tuned deep convolutional network is capable of detecting this obstacle with a precision of 99.9% in row crops and 90.8% in grass mowing, while simultaneously not detecting people and other very distinct obstacles in the image frame. As such, this short note argues that robust detection of the obstacle defined in the emerging standard is not sufficient to ensure safe operation when imaging sensors are part of the safety system.
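A hedged sketch of fine-tuning a pretrained CNN for a single obstacle class, in the spirit of the paper. ResNet-18 and the binary head are stand-ins: the authors fine-tuned an existing deep net, not necessarily this one, and the pretrained weights are downloaded by torchvision on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                        # freeze the backbone features
model.fc = nn.Linear(model.fc.in_features, 2)      # barrel vs. background head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```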
Sensors | 2015
Gang Jun Tu; Henrik Karstoft; Lene Juul Pedersen; Erik Jørgensen
In this paper, we introduce a novel approach to estimating the illumination and reflectance components of an image. The approach is based on the illumination-reflectance model and wavelet theory. We use a homomorphic wavelet filter (HWF) and define a wavelet quotient image (WQI) model based on the dyadic wavelet transform. The illumination and reflectance components are estimated using the HWF and WQI, respectively. Based on this estimation, we develop an algorithm to segment sows in grayscale video recordings captured in complex farrowing pens. Experimental results demonstrate that the algorithm can detect domestic animals in complex environments with lighting changes, motionless foreground objects and dynamic backgrounds.
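A sketch of the homomorphic-wavelet idea, assuming PyWavelets: take the log of the image so illumination and reflectance become additive, then attribute the coarse wavelet approximation to illumination and the detail bands to reflectance. The wavelet choice and single decomposition level are assumptions, not the paper's exact filter design.

```python
import numpy as np
import pywt

def illumination_reflectance(img, wavelet='db2'):
    log_i = np.log1p(img.astype(float))            # multiplicative -> additive
    cA, details = pywt.dwt2(log_i, wavelet)
    zero_details = tuple(np.zeros_like(d) for d in details)
    illum = pywt.idwt2((cA, zero_details), wavelet)            # low-pass part
    refl = pywt.idwt2((np.zeros_like(cA), details), wavelet)   # detail part
    return np.expm1(illum), refl                   # illumination back in intensity

img = np.random.default_rng(0).random((64, 64))
L, R = illumination_reflectance(img)
```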
Applied Soft Computing | 2015
Gang Jun Tu; Henrik Karstoft
Highlights: We introduce a logarithmic dyadic wavelet transform, which can be used in edge detection, 1D signal reconstruction and 2D image reconstruction. In this paper, based on the logarithmic image processing (LIP) model and the dyadic wavelet transform (DWT), we introduce a logarithmic DWT (LDWT). It can be used in image edge detection and in signal and image reconstruction. The proposed LDWT-based method is compared with the Canny and Sobel edge detection methods using Pratt's Figure of Merit, and the comparative results show that the LDWT-based method is better and more robust at detecting low-contrast edges than the other two methods. Gradient maps of images are computed using the DWT- and LDWT-based methods, and the experimental results demonstrate that the gradient maps obtained by the LDWT-based method are more adequate and more precisely located. Finally, we use the DWT- and LDWT-based methods to reconstruct one-dimensional signals and two-dimensional images, and the reconstruction results show that the LDWT-based reconstruction method is more effective.
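One way to realize a logarithmic wavelet transform is to pass pixel values through the classical LIP isomorphism, run an ordinary wavelet transform in that domain, and map back. The sketch below does exactly that; using PyWavelets' stationary transform as the dyadic DWT is an assumption about the paper's exact transform.

```python
import numpy as np
import pywt

M = 256.0                                         # gray-level range of the LIP model
phi = lambda x: -M * np.log(1.0 - x / M)          # LIP isomorphism
phi_inv = lambda y: M * (1.0 - np.exp(-y / M))    # ... and its inverse

def ldwt_edges(img, wavelet='haar'):
    """Gradient-magnitude map from one level of an undecimated DWT in the
    LIP domain, usable for low-contrast edge detection."""
    coeffs = pywt.swt2(phi(img.astype(float)), wavelet, level=1)
    cA, (cH, cV, cD) = coeffs[0]
    return np.hypot(cH, cV)

img = np.clip(np.random.default_rng(0).random((64, 64)) * 255, 0, 254)
edges = ldwt_edges(img)
```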
Precision Agriculture | 2017
Peter Christiansen; Mikkel Kragh; Kim Arild Steen; Henrik Karstoft; Rasmus Nyholm Jørgensen
The concept of autonomous farming concerns automatic agricultural machines operating safely and efficiently without human intervention. To ensure safe autonomous operation, real-time risk detection and avoidance must be undertaken. This paper presents a flexible vehicle-mounted sensor system for recording positional and imaging data with a total of six sensors, together with a full procedure for calibrating and registering all sensors. Authentic data were recorded for a case study on grass harvesting and human safety. The paper incorporates parts of ISO 18497 (an emerging standard for the safety of highly automated machinery in agriculture) related to human detection and safety. The case study investigates four different sensing technologies and is intended as a dataset for validating human safety or a human detection system in grass harvesting. The study presents common algorithms that are able to detect humans, but struggle with humans who are lying down or occluded by high grass.
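An illustrative registration step of the kind such a procedure produces: with known extrinsics and intrinsics, lidar points are projected into the camera image so detections from both sensors can be compared. The calibration matrices below are made-up placeholders, not the paper's calibration results.

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # camera intrinsics
R = np.eye(3)                                                # lidar->camera rotation
t = np.array([0.1, 0.0, 0.2])                                # lidar->camera translation

def project(points_lidar):
    cam = points_lidar @ R.T + t            # transform into camera coordinates
    cam = cam[cam[:, 2] > 0]                # keep points in front of the camera
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]           # perspective divide -> pixel coords

pts = np.random.default_rng(0).uniform([-5, -1, 2], [5, 1, 30], (100, 3))
print(project(pts)[:3])
```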
Sensors | 2012
Alfredo Chávez; Henrik Karstoft
To enhance sensor capabilities, sensor data readings from different modalities must be fused. The main contribution of this paper is to present a sensor data fusion approach that can reduce Kinect™ sensor limitations. This approach involves combining laser with Kinect™ sensors. Sensor data is modelled in a 3D environment based on octrees using probabilistic occupancy estimation. The Bayesian method, which takes into account the uncertainty inherent in the sensor measurements, is used to fuse the sensor information and update the 3D octree map. The sensor fusion yields a significant increase in the field of view of the Kinect™ sensor that can be used for robot tasks.
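A minimal log-odds occupancy update of the kind used for probabilistic octree maps: each sensor hit or miss nudges a voxel's occupancy belief, so Kinect™ and laser readings can be fused in the same map. A grid-indexed dict stands in for a real octree, and the increments are tuning assumptions.

```python
import numpy as np

L_HIT, L_MISS = 0.85, -0.4       # log-odds increments (illustrative tuning)
L_MIN, L_MAX = -2.0, 3.5         # clamping keeps the map responsive

logodds = {}                     # voxel index -> log-odds occupancy

def update(voxel, hit):
    l = logodds.get(voxel, 0.0) + (L_HIT if hit else L_MISS)
    logodds[voxel] = float(np.clip(l, L_MIN, L_MAX))

def occupancy(voxel):
    return 1.0 / (1.0 + np.exp(-logodds.get(voxel, 0.0)))

update((3, 4, 1), hit=True)      # e.g. a Kinect return
update((3, 4, 1), hit=True)      # corroborated by the laser
print(round(occupancy((3, 4, 1)), 3))
```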
Sensors | 2015
Kim Arild Steen; Ole Roland Therkildsen; Ole Green; Henrik Karstoft
Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower pesticide use and an increased number of pesticide-resistant weeds. Ground-nesting birds are highly susceptible to farming operations, like mechanical weeding, which may destroy nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, the destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, as it detects 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75.
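A hedged sketch of fusing visual saliency with background subtraction, using OpenCV stand-ins (spectral-residual saliency from opencv-contrib and MOG2); the paper's incremental background model for moving cameras is more involved than this, and the fusion weights are assumptions.

```python
import cv2
import numpy as np

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
bg = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

def nest_candidates(frame, thresh=0.5):
    ok, sal = saliency.computeSaliency(frame)     # saliency map in [0, 1]
    fg = bg.apply(frame).astype(float) / 255.0    # foreground mask in [0, 1]
    fused = 0.5 * sal + 0.5 * fg                  # simple average fusion
    return (fused > thresh).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 255, (240, 320, 3), dtype=np.uint8)
mask = nest_candidates(frame)
```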