Mads Dyrmann
Maersk
Publications
Featured research published by Mads Dyrmann.
Advances in Animal Biosciences | 2017
Mads Dyrmann; Rasmus Nyholm Jørgensen; Henrik Skov Midtiby
This paper presents a method for automating weed detection in colour images despite heavy leaf occlusion. A fully convolutional neural network is used to detect the weeds. The network is trained and validated on more than 17,000 annotations of weeds in images from winter wheat fields, collected using a camera mounted on an all-terrain vehicle. The network is thereby able to automatically detect individual weed instances in cereal fields despite heavy leaf occlusion.
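Detection maps from a fully convolutional network are typically post-processed into individual instances. As an illustrative sketch (not the paper's exact pipeline; the threshold and minimum-size values are assumptions), a per-pixel weed probability map can be turned into per-instance bounding boxes by thresholding and connected-component labeling:

```python
import numpy as np
from scipy import ndimage

def extract_weed_instances(heatmap, threshold=0.5, min_pixels=4):
    """Threshold a per-pixel weed probability map and return one bounding
    box (ymin, xmin, ymax, xmax) per connected component, discarding
    components smaller than min_pixels."""
    mask = heatmap >= threshold
    labels, n = ndimage.label(mask)  # label 8-/4-connected regions
    instances = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        if ys.size >= min_pixels:
            instances.append((ys.min(), xs.min(), ys.max(), xs.max()))
    return instances
```

The minimum-size filter suppresses isolated noisy pixels that survive the threshold.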
Advances in Animal Biosciences | 2017
Per Rydahl Nielsen; Niels-Peter Jensen; Mads Dyrmann; Poul-Henning Nielsen; Rasmus Nyholm Jørgensen
Decision Support Systems (DSS) have documented a potential 20–40% reduction in herbicide use, but the manual field inspections they require constitute a major obstacle. To exploit this potential, large numbers of digital pictures of weed infestations were collected and analysed manually by crop advisors. The results were transferred to: 1) a DSS, which determined the need for control and returned optimised control options, and 2) convolutional neural networks, which were thereby trained to analyse future pictures automatically, supporting both field- and site-specific integrated weed management.
British Machine Vision Conference | 2015
Mads Dyrmann
An important element in weed control using machine vision is the ability to identify plant species based on shape. For this to be done, it is often necessary to segment the plants from the soil. This may cause problems if the colour of a plant is not consistent, since the plant is then at risk of being separated into several objects. This study presents a plant segmentation method based on fuzzy c-means and a distance transform. The method is compared with four other plant segmentation methods on various parameters, including the ability to maintain plants as whole, connected components. The method presented here is found to be better at preserving plants as connected objects, while keeping the false positive rate low compared with commonly used segmentation techniques.
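A minimal sketch of the fuzzy c-means step, here run on a 1-D per-pixel feature such as an excess-green index (the paper additionally uses a distance transform to keep plants connected, which is omitted here; the cluster count, fuzzifier and initialisation are standard defaults, not values from the paper):

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=30):
    """Minimal fuzzy c-means on a 1-D feature vector.
    Returns cluster centres and the (n x c) membership matrix."""
    values = np.asarray(values, dtype=float)
    # initialise centres at evenly spaced quantiles of the data
    centres = np.quantile(values, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        d = np.abs(values[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # centre update: fuzzily weighted mean of the data
        w = u ** m
        centres = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
    return centres, u
```

Soft memberships, rather than a hard threshold, are what let pixels of inconsistent plant colour still contribute to the plant cluster.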
Sensors | 2018
Nima Teimouri; Mads Dyrmann; Per Rydahl Nielsen; Solvejg Kopp Mathiassen; Gayle J. Somerville; Rasmus Nyholm Jørgensen
This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used for training, with the weeds automatically divided into nine growth classes. The performance of the proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average accuracy of 70% in estimating the number of leaves, rising to 96% when accepting a deviation of up to two leaves. These results show that deep convolutional neural networks have a relatively high ability to estimate early growth stages across a wide variety of weed species.
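The two leaf-count figures (70% exact, 96% within two leaves) correspond to a tolerance-based accuracy. A small sketch of that metric (illustrative, not the paper's evaluation code):

```python
import numpy as np

def leaf_count_accuracy(pred, true, tolerance=0):
    """Fraction of predicted leaf counts within `tolerance` leaves of the
    reference count; tolerance=0 gives exact-match accuracy."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean(np.abs(pred - true) <= tolerance))
```

Reporting both tolerances is useful because adjacent growth stages are often visually ambiguous even to human annotators.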
Sensors | 2018
Hadi Karimi; Søren Skovsen; Mads Dyrmann; Rasmus Nyholm Jørgensen
Determining the location of individual plants would, besides enabling evaluation of sowing performance, make subsequent per-plant treatment across a field possible. In this study, a system for locating cereal plant stem emerging points (PSEPs) has been developed. In total, 5719 images were gathered from several cereal fields. In 212 of these images, the PSEPs of the cereal plants were marked manually and used to train a fully convolutional neural network. For training, a cost function was designed that incorporates predefined penalty regions as well as the PSEPs themselves; the penalty regions were defined from the false predictions of a model trained without them. Adding penalty regions to the training significantly enhanced the network's ability to locate emergence points of the cereal plants precisely. A coefficient of determination of about 0.87 between the predicted and the manually marked PSEP count per image indicates the system's ability to count PSEPs. It was concluded that the developed model can give a reliable clue about the quality of the PSEP distribution and the performance of seed drills in the field.
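One plausible reading of such a cost function is a pixel-wise loss in which pixels inside the predefined penalty regions are up-weighted, so false PSEP responses there cost more. The sketch below is an assumption-laden illustration (the binary-cross-entropy form and the weight value are not taken from the paper):

```python
import numpy as np

def weighted_bce(pred, target, penalty_mask, penalty_weight=5.0, eps=1e-7):
    """Pixel-wise binary cross-entropy with pixels inside `penalty_mask`
    up-weighted by `penalty_weight`; all other pixels get weight 1."""
    pred = np.clip(np.asarray(pred, float), eps, 1 - eps)
    target = np.asarray(target, float)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = np.where(penalty_mask, penalty_weight, 1.0)
    return float((weights * bce).mean())
```

The effect is that errors in regions where the unpenalised model was known to fail dominate the gradient during retraining.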
Journal of Field Robotics | 2018
Mads Dyrmann; Peter Christiansen; Henrik Skov Midtiby
Information on which weed species are present within agricultural fields is a prerequisite when using robots for site-specific weed management. This study proposes a method for making shape-based classification of seedlings more robust to the natural shape variation within each plant species. To do so, leaves are separated from the plants and classified individually, alongside the classification of the whole plant. The classification is based on common, rotation-invariant features. From previous classifications of leaves and plants, a confidence in correct assignment is derived for both, and this confidence is used to determine the species of the plant. Using this approach, the classification accuracy for eight plant species at early growth stages is increased from 93.9% to 96.3%.
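A simplified stand-in for combining whole-plant and per-leaf classifier outputs is a weighted average of their class distributions (the paper's actual confidence scheme differs; the fusion rule and weight here are assumptions):

```python
import numpy as np

def combine_confidences(plant_probs, leaf_probs_list, plant_weight=0.5):
    """Fuse a whole-plant class distribution with per-leaf distributions
    by a weighted average, then return (predicted class index, fused
    distribution)."""
    leaf_mean = np.mean(leaf_probs_list, axis=0)   # average over leaves
    combined = (plant_weight * np.asarray(plant_probs)
                + (1 - plant_weight) * leaf_mean)
    return int(np.argmax(combined)), combined
```

The point of any such fusion is that a plant whose overall silhouette is ambiguous can still be identified from leaves whose shapes are individually distinctive.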
Sensors | 2017
Søren Skovsen; Mads Dyrmann; Anders Krogh Mortensen; Kim Arild Steen; Ole Green; Jørgen Eriksen; René Gislum; Rasmus Nyholm Jørgensen; Henrik Karstoft
Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how this knowledge can be obtained by automatically analyzing images collected in the field. A fully convolutional neural network was trained to create a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The clover fractions of the dry matter estimated from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields, yet it is able to distinguish clover, grass, and weed pixels in real images. Using simulated images for training reduces the manual labor to a few hours, compared with more than 3000 h if all the real images were annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry matter clover fraction with a standard deviation of 7.8%.
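Mapping an image-derived clover pixel fraction to a dry matter clover fraction can be sketched as a simple calibration fit (the paper reports a high correlation; the linear form and this helper function are illustrative assumptions):

```python
import numpy as np

def fit_clover_dm_model(pixel_fracs, dm_fracs):
    """Fit a linear map from clover pixel fraction (from the segmentation)
    to dry matter clover fraction (from destructive sampling), and return
    it as a callable predictor."""
    slope, intercept = np.polyfit(pixel_fracs, dm_fracs, 1)
    return lambda f: slope * f + intercept
```

With such a calibration, new fields can be monitored from images alone, without further destructive sampling.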
Biosystems Engineering | 2016
Mads Dyrmann; Henrik Karstoft; Henrik Skov Midtiby
International Conference on Agricultural Engineering: Automation, Environment and Food Safety | 2016
Anders Krogh Mortensen; Mads Dyrmann; Henrik Karstoft; Rasmus Nyholm Jørgensen; René Gislum
Archive | 2014
Peter Christiansen; Mads Dyrmann