

Publication


Featured research published by Kim Arild Steen.


Sensors | 2014

Automated Detection and Recognition of Wildlife Using Thermal Cameras

Peter Christiansen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft

In agricultural mowing operations, thousands of animals are injured or killed each year due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields are important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes the heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3–10 m and an accuracy of 75.2% for an altitude range of 10–20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3–10 m and 77.7% in an altitude range of 10–20 m.
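The pipeline described above (morphological operations to a thermal signature, DCT to a feature vector, kNN for classification) can be sketched roughly as follows. The erosion-based signature, the coefficient count and the kNN details are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.fftpack import dct
from scipy.ndimage import binary_erosion

def thermal_signature(patch, threshold, steps=8):
    """Heat profile of a detected object at successive erosion depths.

    Illustrative reading of the paper's idea: repeatedly erode the
    object mask and record the mean intensity of the remaining pixels,
    giving a coarse core-to-edge heat profile that is largely invariant
    to translation, rotation and scale.
    """
    mask = patch > threshold
    sig = []
    for _ in range(steps):
        sig.append(float(patch[mask].mean()) if mask.any() else 0.0)
        mask = binary_erosion(mask)
    return np.array(sig)

def dct_features(signature, n_coeffs=4):
    # Parameterize the signature with its first DCT coefficients.
    return dct(signature, norm='ortho')[:n_coeffs]

def knn_predict(x, X_train, y_train, k=3):
    # Plain kNN majority vote over Euclidean distances in feature space.
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

For an object with a warm core, the signature rises from edge to core, which is the shape information the DCT compresses into a few coefficients.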


Sensors | 2012

Automatic detection of animals in mowing operations using thermal cameras.

Kim Arild Steen; Andrés Villa-Henriksen; Ole Roland Therkildsen; Ole Green

During the last decades, high-efficiency farming equipment has been developed in the agricultural sector. This has also included efficiency improvements of mowing techniques, which include increased working speeds and widths. Therefore, the risk of wild animals being accidentally injured or killed during routine farming operations has increased dramatically over the years. In particular, the nests of ground-nesting bird species like grey partridge (Perdix perdix) or pheasant (Phasianus colchicus) are vulnerable to farming operations in their breeding habitat, whereas in mammals, the natural instinct of, e.g., leverets of brown hare (Lepus europaeus) and fawns of roe deer (Capreolus capreolus) to lie low and still in the vegetation to avoid predators increases their risk of being killed or injured in farming operations. Various methods and approaches have been used to reduce wildlife mortality resulting from farming operations. However, since wildlife-friendly farming often results in lower efficiency, attempts have been made to develop automatic systems capable of detecting wild animals in the crop. Here we assessed the suitability of thermal imaging in combination with digital image processing to automatically detect a chicken (Gallus domesticus) and a rabbit (Oryctolagus cuniculus) in a grassland habitat. Throughout the different test scenarios, our study animals were detected with high precision, although the densest grass cover reduced the detection rate. We conclude that thermal imaging and digital image processing may be an important tool for the improvement of wildlife-friendly farming practices in the future.
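A minimal sketch of the thermal-detection step might look like the following; the mean-plus-k-standard-deviations threshold and the minimum-size filter are assumptions for illustration, not the paper's exact detection rule:

```python
import numpy as np
from scipy import ndimage

def detect_hot_objects(frame, k=3.0, min_pixels=5):
    # Per-frame adaptive threshold: assumed here to be the frame mean
    # plus k standard deviations (the paper's exact rule may differ).
    thresh = frame.mean() + k * frame.std()
    labels, n = ndimage.label(frame > thresh)
    boxes = []
    # Keep connected components above a minimum size as detections.
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if (labels[sl] == i).sum() >= min_pixels:
            boxes.append(sl)
    return boxes
```

On a thermal frame, a warm animal against cooler vegetation stands several standard deviations above the frame statistics, so it survives the threshold while uniform grass does not.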


Sensors | 2012

A Vocal-Based Analytical Method for Goose Behaviour Recognition

Kim Arild Steen; Ole Roland Therkildsen; Henrik Karstoft; Ole Green

Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system.
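The species-specific tuning of GFCC features comes from the Greenwood function, which maps a normalized cochlear position to a frequency; filter-bank center frequencies can be placed with it. The constants in the test below are the commonly used human values (A = 165.4, a = 2.1, k = 0.88); values for geese would differ, and the rest of the GFCC pipeline (filtering, log, DCT) is omitted here:

```python
import numpy as np

def greenwood(x, A, a, k):
    """Greenwood function: maps normalized cochlear position x in [0, 1]
    to a frequency in Hz, with species-specific constants A, a and k."""
    return A * (10.0 ** (a * x) - k)

def greenwood_centers(n_filters, A, a, k):
    # Equally spaced cochlear positions give the center frequencies for
    # a GFCC-style filter bank tuned to the target species' hearing.
    x = np.linspace(0.0, 1.0, n_filters)
    return greenwood(x, A, a, k)
```

Swapping in goose-appropriate constants shifts the filter bank toward the frequency range the species actually hears, which is the adjustability the abstract refers to.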


Sensors | 2016

DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

Peter Christiansen; Lars N. Nielsen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft

Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at a short range (0–30 m). However, DeepAnomaly has much fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
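The core idea, modeling the feature statistics of an obstacle-free field and flagging cells that deviate, can be sketched as below. The per-cell mean/variance model and the threshold are simplifications; the paper's actual background model and CNN feature extraction are not reproduced:

```python
import numpy as np

def fit_background(features):
    """Fit a per-cell background model from CNN feature maps of
    obstacle-free field footage, shape (n_frames, h, w, c).
    A per-cell, per-channel mean/variance is a simplified stand-in
    for the paper's background model."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6   # avoid division by zero
    return mu, var

def anomaly_map(feature_map, mu, var, thresh=9.0):
    # Squared normalized deviation summed over channels; cells far from
    # the background statistics are flagged as anomalous (obstacles).
    d2 = ((feature_map - mu) ** 2 / var).sum(axis=-1)
    return d2 > thresh * feature_map.shape[-1]
```

Because the model describes "field-ness" rather than a fixed list of object classes, anything sufficiently unlike the field, including unknown object types, can trigger a detection.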


Journal of Imaging | 2016

Using Deep Learning to Challenge Safety Standard for Highly Autonomous Machines in Agriculture

Kim Arild Steen; Peter Christiansen; Henrik Karstoft; Rasmus Nyholm Jørgensen

In this paper, an algorithm for obstacle detection in agricultural fields is presented. The algorithm is based on an existing deep convolutional neural net, which is fine-tuned for detection of a specific obstacle. In ISO/DIS 18497, which is an emerging standard for safety of highly automated machinery in agriculture, a barrel-shaped obstacle is defined as the obstacle which should be robustly detected to comply with the standard. We show that our fine-tuned deep convolutional net is capable of detecting this obstacle with a precision of 99.9% in row crops and 90.8% in grass mowing, while simultaneously not detecting people and other very distinct obstacles in the image frame. As such, this short note argues that the obstacle defined in the emerging standard is not capable of ensuring safe operations when imaging sensors are part of the safety system.


Precision Agriculture | 2017

Platform for evaluating sensors and human detection in autonomous mowing operations

Peter Christiansen; Mikkel Kragh; Kim Arild Steen; Henrik Karstoft; Rasmus Nyholm Jørgensen

The concept of autonomous farming concerns automatic agricultural machines operating safely and efficiently without human intervention. In order to ensure safe autonomous operation, real-time risk detection and avoidance must be undertaken. This paper presents a flexible vehicle-mounted sensor system for recording positional and imaging data with a total of six sensors, and a full procedure for calibrating and registering all sensors. Authentic data were recorded for a case study on grass-harvesting and human safety. The paper incorporates parts of ISO 18497 (an emerging standard for safety of highly automated machinery in agriculture) related to human detection and safety. The case study investigates four different sensing technologies and is intended as a dataset to validate human safety or a human detection system in grass-harvesting. The study presents common algorithms that are able to detect humans, but struggle to handle lying or occluded humans in high grass.


Sensors | 2015

Detection of Bird Nests during Mechanical Weeding by Incremental Background Modeling and Visual Saliency

Kim Arild Steen; Ole Roland Therkildsen; Ole Green; Henrik Karstoft

Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground-nesting birds are highly susceptible to farming operations, like mechanical weeding, which may destroy the nests and reduce the survival of chicks and incubating females. This problem has received limited focus within agricultural engineering. However, as the number of machines increases, destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, as it detects 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75.
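The fusion of incremental background modeling with a saliency cue can be sketched as follows; the running-average model and the center-surround contrast used here are simplified stand-ins for the paper's incremental model and visual-saliency measure:

```python
import numpy as np
from scipy.ndimage import uniform_filter

class IncrementalBackground:
    """Running-average background model, updated every frame so it can
    follow a slowly changing scene seen from a moving camera."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.mean = None

    def foreground(self, frame, thresh=0.2):
        frame = frame.astype(float)
        if self.mean is None:
            self.mean = frame.copy()
        fg = np.abs(frame - self.mean) > thresh
        # Blend the new frame into the model (incremental update).
        self.mean = (1 - self.alpha) * self.mean + self.alpha * frame
        return fg

def center_surround_saliency(frame, size=3):
    # Crude attention proxy: deviation of each pixel from its local
    # mean, normalized to [0, 1].
    local = uniform_filter(frame.astype(float), size)
    s = np.abs(frame - local)
    return s / (s.max() + 1e-9)
```

A detection could then be the conjunction of the two cues, e.g. `fg & (saliency > 0.5)`, so that only regions that are both novel and visually conspicuous are reported as candidate nests.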


International Journal of Pattern Recognition and Artificial Intelligence | 2013

Audio-Visual Recognition of Goose Flocking Behavior

Kim Arild Steen; Ole Roland Therkildsen; Ole Green; Henrik Karstoft

Every year, agriculture experiences significant economic losses due to wild geese, rooks and other flocks of birds. A wide range of devices to detect and deter animals causing conflict is used to prevent this, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviors could form a critical component of a system capable of altering the disruptive stimulus to avoid habituation. This paper presents an audio-visual-based approach for recognition of goose flocking behavior. The vocal communication and movement of the flock are used for the audio-visual recognition, which is accomplished through classifier fusion of an acoustic and a video-based classifier. Acoustic behavior recognition is based on generalized perceptual features and support vector machines, and visual behavior recognition is based on optical flow estimation and a Bayesian rule-based scheme. Classifier fusion is implemented using the product rule on the soft outputs from both classifiers. The algorithm has been used to recognize goose flocking behaviors (landing, foraging and flushing) and has improved performance compared to using the audio- or video-based classifiers alone. The improvement from classifier fusion is most evident in the case of flushing and landing behavior recognition, where it was possible to combine the advantages of both the audio- and video-based classifiers.
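Product-rule fusion of soft classifier outputs is straightforward; a minimal sketch (the class ordering and probabilities in the example are made up for illustration):

```python
import numpy as np

def product_rule_fusion(p_audio, p_video):
    """Fuse the soft outputs (per-class posterior probabilities) of an
    acoustic and a video-based classifier with the product rule."""
    fused = np.asarray(p_audio, float) * np.asarray(p_video, float)
    fused /= fused.sum()             # renormalize to a distribution
    return fused, int(np.argmax(fused))
```

For instance, with class order (landing, foraging, flushing), audio posteriors [0.6, 0.3, 0.1] and video posteriors [0.3, 0.4, 0.3] fuse to the landing class, since 0.6 · 0.3 is the largest product; a class only wins when neither modality strongly contradicts it.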


Sensors | 2017

FieldSAFE: Dataset for Obstacle Detection in Agriculture

Mikkel Kragh; Peter Christiansen; Morten Stigaard Laursen; Morten Larsen; Kim Arild Steen; Ole Green; Henrik Karstoft; Rasmus Nyholm Jørgensen

In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.


Computers and Electronics in Agriculture | 2018

In row cultivation controlled by plant patterns

Henrik Skov Midtiby; Kim Arild Steen; Ole Green

Information about a regular crop seeding pattern is used to locate individual crop plants seeded with a precision seeder. The amount of vegetation along each of the crop rows is monitored using a bispectral line-scanning camera, which generates a vegetation coverage signal. Convolution of the vegetation coverage signal with a damped harmonic oscillation, tuned to the crop plant spacing used in the field, gives a signal with distinct peaks near the real crop plant locations. The algorithm was tested on real field data, consisting of precision-seeded maize. The seeding pattern was locked, such that a crop plant in one row will be next to a crop plant in the adjacent rows. The average absolute position error from the precision seeder is estimated to be around 15.5 mm. Compared to manually annotated ground truth plant positions, the system locates individual crop plants with an average absolute position error of 20.72 mm when using information from a single crop row and 14.79 mm when utilising information from five adjacent crop rows.
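The convolution step can be sketched as follows; the kernel length and damping constant are illustrative choices, and the plant spacing is given in samples along the row:

```python
import numpy as np

def plant_position_signal(coverage, spacing, damping=0.05):
    """Convolve a vegetation-coverage signal with a damped harmonic
    oscillation tuned to the expected plant spacing (in samples along
    the row); peaks in the result mark likely crop plant positions."""
    n = int(4 * spacing)
    t = np.arange(-n, n + 1)
    kernel = np.exp(-damping * np.abs(t)) * np.cos(2.0 * np.pi * t / spacing)
    return np.convolve(coverage, kernel, mode='same')
```

Because the kernel oscillates at the seeding period, coverage that repeats at the expected spacing reinforces constructively, so the response peaks at on-grid plant positions and goes negative between them, while isolated weeds contribute little.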

