Peter Christiansen
Aarhus University
Publication
Featured research published by Peter Christiansen.
Sensors | 2014
Peter Christiansen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft
In agricultural mowing operations, thousands of animals are injured or killed each year due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes the heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3–10 m and an accuracy of 75.2% in an altitude range of 10–20 m. To incorporate temporal information into the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3–10 m and 77.7% in an altitude range of 10–20 m.
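As a rough illustration of the pipeline described above, the sketch below computes an erosion-based thermal signature, parameterizes it with the DCT and classifies with kNN. All function names and the random stand-in data are invented for illustration; the paper's exact signature definition and feature dimensions may differ.

```python
import numpy as np
from scipy import ndimage
from scipy.fft import dct
from sklearn.neighbors import KNeighborsClassifier

def thermal_signature(crop, mask, steps=16):
    """Mean intensity of the object as its mask is repeatedly eroded.

    Sweeping from the object's rim towards its core yields a 1-D curve
    that is largely independent of position, orientation and posture.
    `crop` is a 2-D thermal image patch, `mask` its binary object mask.
    """
    signature = []
    current = mask.astype(bool)
    for _ in range(steps):
        if not current.any():
            signature.append(0.0)
            continue
        signature.append(float(crop[current].mean()))
        current = ndimage.binary_erosion(current)
    return np.asarray(signature)

def dct_features(signature, n_coeffs=8):
    """Parameterize the signature with its first DCT coefficients."""
    return dct(signature, norm="ortho")[:n_coeffs]

# Hypothetical training data: (crop, mask, label) triples, label 1 = animal.
rng = np.random.default_rng(0)
crops = [rng.random((32, 32)) for _ in range(20)]
masks = [np.ones((32, 32), dtype=bool) for _ in range(20)]
labels = rng.integers(0, 2, 20)

X = np.stack([dct_features(thermal_signature(c, m)) for c, m in zip(crops, masks)])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(X[:3]))
```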
Sensors | 2016
Peter Christiansen; Lars N. Nielsen; Kim Arild Steen; Rasmus Nyholm Jørgensen; Henrik Karstoft
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are distinct in appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded or unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN, while RCNN has similar performance at short range (0–30 m). However, DeepAnomaly has much fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (graphics processing unit).
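DeepAnomaly's exact architecture is not reproduced here, but the general recipe — model the homogeneous field background as a per-channel Gaussian over CNN feature maps and flag cells that deviate — can be sketched in a few lines. The feature tensors below are random stand-ins for activations from an intermediate layer of a pretrained CNN.

```python
import numpy as np

def fit_background_model(feature_maps):
    """Per-channel mean and variance of features from obstacle-free frames.

    `feature_maps` has shape (n_frames, H, W, C) and is assumed to come
    from an intermediate CNN layer applied to clear-field images.
    """
    flat = feature_maps.reshape(-1, feature_maps.shape[-1])
    return flat.mean(axis=0), flat.var(axis=0) + 1e-6

def anomaly_map(feature_map, mean, var):
    """Squared Mahalanobis distance (diagonal covariance) per spatial cell.

    High values mark cells whose appearance deviates from the homogeneous
    field background, i.e. candidate obstacles of any (even unknown) type.
    """
    return ((feature_map - mean) ** 2 / var).sum(axis=-1)

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(50, 16, 20, 64))   # stand-in features
mean, var = fit_background_model(background)

test = rng.normal(0.0, 1.0, size=(16, 20, 64))
test[8, 10] += 5.0                                          # injected anomaly
scores = anomaly_map(test, mean, var)
print(np.unravel_index(scores.argmax(), scores.shape))      # -> (8, 10)
```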
Journal of Imaging | 2016
Kim Arild Steen; Peter Christiansen; Henrik Karstoft; Rasmus Nyholm Jørgensen
In this paper, an algorithm for obstacle detection in agricultural fields is presented. The algorithm is based on an existing deep convolutional neural net, which is fine-tuned for detection of a specific obstacle. In ISO/DIS 18497, which is an emerging standard for safety of highly automated machinery in agriculture, a barrel-shaped obstacle is defined as the obstacle which should be robustly detected to comply with the standard. We show that our fine-tuned deep convolutional net is capable of detecting this obstacle with a precision of 99.9% in row crops and 90.8% in grass mowing, while simultaneously not detecting people and other very distinct obstacles in the image frame. As such, this short note argues that the obstacle defined in the emerging standard is not capable of ensuring safe operations when imaging sensors are part of the safety system.
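The paper's specific network and training setup are not given here; the following is a generic PyTorch sketch of the fine-tuning pattern it relies on — freeze a pretrained backbone and retrain a small head for one obstacle class versus background. The backbone choice, data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a standard backbone; in practice you would start from pretrained
# weights (e.g. weights=models.ResNet18_Weights.DEFAULT). weights=None
# keeps this sketch runnable offline.
net = models.resnet18(weights=None)

# Freeze the convolutional backbone so only the new head is trained.
for p in net.parameters():
    p.requires_grad = False

# Replace the classifier with a two-class head: barrel obstacle vs. background.
net.fc = nn.Linear(net.fc.in_features, 2)

optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random stand-in data.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
loss = criterion(net(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```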
Precision Agriculture | 2017
Peter Christiansen; Mikkel Kragh; Kim Arild Steen; Henrik Karstoft; Rasmus Nyholm Jørgensen
The concept of autonomous farming concerns automatic agricultural machines operating safely and efficiently without human intervention. In order to ensure safe autonomous operation, real-time risk detection and avoidance must be undertaken. This paper presents a flexible vehicle-mounted sensor system for recording positional and imaging data with a total of six sensors, and a full procedure for calibrating and registering all sensors. Authentic data were recorded for a case study on grass harvesting and human safety. The paper incorporates the parts of ISO 18497 (an emerging standard for safety of highly automated machinery in agriculture) related to human detection and safety. The case study investigates four different sensing technologies and is intended as a dataset for validating human safety or a human detection system in grass harvesting. The study presents common algorithms that are able to detect humans but struggle to handle humans who are lying down or occluded in high grass.
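To make the registration step concrete, here is a minimal sketch of how points from one sensor (e.g., a lidar) are mapped into a camera image once the extrinsic transform and camera intrinsics have been calibrated. The calibration values below are stand-ins, not those of the described system.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Map 3-D lidar points into a camera image via extrinsics and intrinsics.

    R (3x3) and t (3,) form the rigid transform from the lidar frame to
    the camera frame, obtained during extrinsic calibration; K is the
    camera's 3x3 intrinsic matrix. Returns pixel coordinates for points
    in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of camera
    uvw = pts_cam @ K.T                       # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]

# Stand-in calibration values; a real system estimates these from
# calibration targets visible to both sensors.
R = np.eye(3)
t = np.array([0.1, -0.2, 0.0])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

points = np.array([[1.0, 0.5, 5.0], [-2.0, 0.0, 10.0]])
print(project_lidar_to_image(points, R, t, K))
```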
Sensors | 2017
Mikkel Kragh; Peter Christiansen; Morten Stigaard Laursen; Morten Larsen; Kim Arild Steen; Ole Green; Henrik Karstoft; Rasmus Nyholm Jørgensen
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
10th European Conference on Precision Agriculture | 2015
Rasmus Nyholm Jørgensen; M. B. Brandt; T. Schmidt; Morten Stigaard Laursen; R. Larsen; M. Nørremark; Henrik Skov Midtiby; Peter Christiansen
The aim was to evaluate a labor-reducing, semi-automated field trial design, assessed through a case study estimating the effect of row spacing, seeding density and seeding pattern on maize yield and feed quality. The trial consisted of 70 different treatment combinations and 560 parcels in total. The drone-based orthophoto proved to be a valuable tool for pinpointing parcels with experimental errors, exemplified by rows not seeded or patches of bare soil. The results of the field trial showed no interaction between row spacing and plant density. The yield increased with decreasing row spacing and increasing plant density.
Journal of Field Robotics | 2018
Mads Dyrmann; Peter Christiansen; Henrik Skov Midtiby
Information on which weed species are present within agricultural fields is a prerequisite when using robots for site-specific weed management. This study proposes a method for improving the robustness of shape-based seedling classification against the natural shape variations within each plant species. To do so, leaves are separated from plants and classified individually, alongside the classification of the whole plant. The classification is based on common, rotation-invariant features. Based on previous classifications of leaves and plants, a confidence in correct assignment is established for the plants and leaves, and this confidence is used to determine the species of the plant. Using this approach, the classification accuracy for eight plant species at early growth stages is increased from 93.9% to 96.3%.
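One simple way to realize the described leaf-and-plant fusion is a confidence-weighted combination of class posteriors, sketched below. The weighting rule and all numbers are illustrative assumptions; the paper's exact combination scheme may differ.

```python
import numpy as np

def fuse_plant_and_leaves(p_plant, p_leaves, leaf_conf):
    """Combine a whole-plant posterior with per-leaf posteriors.

    p_plant: (n_species,) probabilities from the whole-plant classifier.
    p_leaves: (n_leaves, n_species) probabilities for individually
        segmented leaves. leaf_conf: (n_leaves,) confidence weights,
        e.g. derived from past per-leaf classification accuracy.
    A confidence-weighted average is one plausible fusion rule.
    """
    total = np.sum(leaf_conf) + 1.0               # plant gets unit weight
    fused = np.asarray(p_plant) / total
    w = np.asarray(leaf_conf) / total
    fused = fused + (np.asarray(p_leaves) * w[:, None]).sum(axis=0)
    return fused / fused.sum()

p_plant = np.array([0.5, 0.3, 0.2])               # whole-plant posterior
p_leaves = np.array([[0.8, 0.1, 0.1],             # two segmented leaves
                     [0.6, 0.3, 0.1]])
leaf_conf = np.array([0.9, 0.5])                  # per-leaf confidences
print(fuse_plant_and_leaves(p_plant, p_leaves, leaf_conf))
```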
Frontiers in Robotics and AI | 2018
Timo Korthals; Mikkel Kragh; Peter Christiansen; Henrik Karstoft; Rasmus Nyholm Jørgensen; Ulrich Rückert
Today, agricultural vehicles are available that can automatically perform tasks such as weed detection and spraying, mowing, and sowing while being steered automatically. However, for such systems to be fully autonomous and self-driven, not only must their specific agricultural tasks be automated; an accurate and robust perception system that automatically detects and avoids all obstacles must also be realized to ensure the safety of humans, animals, and other surroundings. In this paper, we present a multi-modal obstacle and environment detection and recognition approach for process evaluation in agricultural fields. The proposed pipeline detects and maps static and dynamic obstacles globally, while providing process-relevant information along the traversed trajectory. Detection algorithms are introduced for a variety of sensor technologies, including range sensors (lidar and radar) and cameras (stereo and thermal). Detection information is mapped globally into semantic occupancy grid maps and fused across all sensors with late fusion, resulting in accurate traversability assessment and semantic mapping of process-relevant categories (e.g., crop, ground, and obstacles). Finally, a decoding step uses a hidden Markov model to extract relevant process-specific parameters along the trajectory of the vehicle, thus informing a potential control system of unexpected structures in the planned path. The method is evaluated on a public dataset for multi-modal obstacle detection in agricultural fields. Results show that a combination of multiple sensor modalities increases detection performance and that different fusion strategies must be applied between algorithms detecting similar and dissimilar classes.
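The late-fusion step can be illustrated with the standard log-odds update for occupancy grids, in which independent per-sensor occupancy probabilities are summed in log-odds space. This is a generic sketch of that classic update, not the paper's full semantic fusion pipeline.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fuse_detector_grids(prob_grids, prior=0.5):
    """Late fusion of per-sensor occupancy grids in log-odds space.

    Each grid in `prob_grids` holds one detector's occupancy probability
    per cell. Summing log-odds (minus the prior) is the standard
    independent-sensor update for occupancy grid maps.
    """
    fused = np.full_like(prob_grids[0], logit(prior))
    for grid in prob_grids:
        fused += logit(np.clip(grid, 1e-3, 1 - 1e-3)) - logit(prior)
    return 1.0 / (1.0 + np.exp(-fused))       # back to probabilities

lidar = np.array([[0.5, 0.9], [0.2, 0.5]])     # stand-in detector outputs
camera = np.array([[0.5, 0.8], [0.3, 0.5]])
print(fuse_detector_grids([lidar, camera]))
```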
International Conference on Image Analysis and Recognition | 2016
Stefan-Daniel Suvei; Leon Bodenhagen; Lilita Kiforenko; Peter Christiansen; Rasmus Nyholm Jørgensen; Anders Buch; Norbert Krüger
This paper proposes an algorithm that uses the depth information acquired from an active sensor as guidance for a block matching stereo algorithm. In the proposed implementation, the disparity search interval used for the block matching is narrowed around the depth values obtained from the active sensor, which leads to improved matching quality and denser disparity maps and point clouds. The performance of the proposed method is evaluated through a series of experiments on three different datasets obtained from different robotic systems. The experimental results demonstrate that the disparity estimation is improved and that denser disparity maps are generated.
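A minimal sketch of the guided search: convert the active sensor's depth prior to a disparity via d = f·B/Z, then run sum-of-absolute-differences block matching only within a small window around it instead of the full disparity range. Parameters and image data below are synthetic placeholders.

```python
import numpy as np

def guided_block_match(left, right, depth_guess, fx, baseline,
                       block=5, radius=4):
    """Block matching with the disparity search window centred on a guess.

    `depth_guess` is a per-pixel depth prior from an active sensor; it is
    converted to a disparity d = fx * baseline / Z, and the SAD search is
    restricted to +/- `radius` around it.
    """
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            d0 = int(round(fx * baseline / max(depth_guess[y, x], 1e-6)))
            best, best_d = np.inf, 0
            for d in range(max(0, d0 - radius), d0 + radius + 1):
                if x - d - half < 0:
                    continue
                patch_l = left[y-half:y+half+1, x-half:x+half+1]
                patch_r = right[y-half:y+half+1, x-d-half:x-d+half+1]
                cost = np.abs(patch_l - patch_r).sum()   # SAD cost
                if cost < best:
                    best, best_d = cost, d
            disparity[y, x] = best_d
    return disparity

rng = np.random.default_rng(2)
right = rng.random((40, 60))
left = np.roll(right, 6, axis=1)                 # true disparity ~ 6 px
depth = np.full((40, 60), 100.0 * 0.1 / 6.0)     # Z = fx*B/d, fx=100, B=0.1
print(guided_block_match(left, right, depth, fx=100.0, baseline=0.1)[20, 30])
```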
Voluntas | 2010
Peter Christiansen; Asbjørn Sonne Nørgaard; Hilmar Rommetvedt; Torsten Svensson; Gunnar Thesen; PerOla Öberg