
Publication


Featured research published by Massimo Minervini.


Ecological Informatics | 2014

Image-based plant phenotyping with incremental learning and active contours

Massimo Minervini; Mohammed M. Abdelsamea; Sotirios A. Tsaftaris

Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of a plant (phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely of low throughput. Current solutions for automated image-based plant phenotyping rely either on semi-automated or manual analysis of the imaging data, or on expensive and proprietary software which accompanies costly hardware infrastructure. While some attempts have been made to create software applications that enable the analysis of such images in an automated fashion, most solutions are tailored to particular acquisition scenarios and restrictions on experimental design. In this paper we propose and test a method for the segmentation and automated analysis of time-lapse plant images from phenotyping experiments in a general laboratory setting that can adapt to scene variability. The method involves minimal user interaction, necessary to establish the statistical experiments that may follow. At every time instance (i.e., a digital photograph), it segments the plants in images that contain many specimens of the same species. For accurate plant segmentation we propose a vector-valued level set formulation that incorporates features of color intensity, local texture, and prior knowledge. Prior knowledge is incorporated using a plant appearance model, implemented with Gaussian mixture models, which incrementally utilizes information from previously segmented instances. The proposed approach is tested on Arabidopsis plant images acquired with a static camera capturing many subjects at the same time. Our validation with ground truth segmentations and comparisons with state-of-the-art methods in the literature shows that the proposed method is able to handle images with complicated and changing backgrounds in an automated fashion. An accuracy of 96.7% (Dice similarity coefficient) was observed, which was higher than the other methods used for comparison. While here it was tested on a single plant species, the fact that we do not employ shape-driven models and do not rely on fully supervised classification (trained on a large dataset) increases the ease of deployment of the proposed solution for the study of different plant species in a variety of laboratory settings. Our solution will be accompanied by an easy-to-use graphical user interface and, to facilitate adoption, we will make the software available to the scientific community.
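The reported accuracy uses the Dice similarity coefficient. A minimal sketch of that metric on binary masks (illustrative code, not the paper's implementation):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / total

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[:3, :] = True   # 12 foreground pixels
print(dice(a, b))  # overlap is 8 pixels -> 2*8/(8+12) = 0.8
```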


IEEE Signal Processing Magazine | 2015

Image Analysis: The New Bottleneck in Plant Phenotyping [Applications Corner]

Massimo Minervini; Hanno Scharr; Sotirios A. Tsaftaris

Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and performance) as a result of genotype differences (i.e., differences in the genetic code) and the environmental conditions to which a plant has been exposed [1]–[3]. According to the Food and Agriculture Organization of the United Nations, large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future to feed the world and provide biomass for energy, while using less water, land, and fertilizer under a constantly evolving environment due to climate change. Working on model plants (such as Arabidopsis), combined with remarkable advances in genotyping, has revolutionized our understanding of biology but has accelerated the need for precision and automation in phenotyping, favoring approaches that provide quantifiable phenotypic information that could be better used to link and find associations in the genotype [4]. While early on the collection of phenotypes was manual, noninvasive, imaging-based methods are now increasingly utilized [5], [6]. However, the rate at which phenotypes are extracted in the field or in the lab is not matching the speed of genotyping and is creating a bottleneck [1].


Pattern Recognition Letters | 2016

Finely-grained annotated datasets for image-based plant phenotyping

Massimo Minervini; Andreas Fischbach; Hanno Scharr; Sotirios A. Tsaftaris

Highlights:
- First comprehensive annotated datasets for computer vision tasks in plant phenotyping.
- Publicly available data and evaluation criteria for eight challenging tasks.
- Tasks include fine-grained categorization of age, developmental stage, and cultivars.
- Example test cases and results on plant- and leaf-wise segmentation and leaf counting.

In this paper we present a collection of benchmark datasets for the development and evaluation of computer vision and machine learning algorithms in the context of plant phenotyping. We provide annotated imaging data and suggest suitable evaluation criteria for plant/leaf segmentation, detection, and tracking, as well as classification and regression problems. [Graphical abstract omitted: the figure symbolically depicts the data available together with ground truth segmentations, further annotations, and metadata.] Image-based approaches to plant phenotyping are gaining momentum, providing fertile ground for several interesting vision tasks where fine-grained categorization is necessary, such as leaf segmentation among a variety of cultivars, and cultivar (or mutant) identification. However, benchmark data focusing on typical imaging situations and vision tasks are still lacking, making it difficult to compare existing methodologies. This paper describes a collection of benchmark datasets of raw and annotated top-view color images of rosette plants. We briefly describe plant material, imaging setup, and procedures for different experiments: one with various cultivars of Arabidopsis and one with tobacco undergoing different treatments. We proceed to define a set of computer vision and classification tasks and provide accompanying datasets and annotations based on our raw data. We describe the annotation process performed by experts and discuss appropriate evaluation criteria. We also offer exemplary use cases and results on some tasks obtained with parts of these data. We hope with the release of this rigorous dataset collection to invigorate the development of algorithms in the context of plant phenotyping, but also to provide new interesting datasets for the general computer vision community to experiment on. Data are publicly available at http://www.plant-phenotyping.org/datasets.


Proceedings of the Computer Vision Problems in Plant Phenotyping Workshop 2015 | 2015

Learning to Count Leaves in Rosette Plants

Mario Valerio Giuffrida; Massimo Minervini; Sotirios A. Tsaftaris

Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves, and also to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches on the codebook using triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce absolute counting error by 40% w.r.t. the winner of the 2014 edition of the challenge (a counting-via-segmentation method). When compared to state-of-the-art density-based approaches to counting, approximately 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
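The pipeline above (unsupervised K-means codebook, triangle encoding, pooling, support vector regression) can be sketched roughly as follows; all data and names are synthetic and illustrative, not the authors' code:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-ins for patches extracted from log-polar plant images
patches = rng.random((500, 64))
km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(patches)

def triangle_encode(X, centers):
    """Triangle (soft) encoding: activation max(0, mean distance - distance)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

def describe(img_patches):
    """Global per-plant descriptor: here simply mean-pooled feature codes."""
    return triangle_encode(img_patches, km.cluster_centers_).mean(axis=0)

X = np.stack([describe(rng.random((50, 64))) for _ in range(40)])
y = rng.integers(4, 15, size=40)   # fake per-image leaf counts
reg = SVR().fit(X, y)              # leaf counting treated as regression
print(reg.predict(X[:1]))          # predicted (continuous) leaf count
```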


Trends in Plant Science | 2016

Machine Learning for Plant Phenotyping Needs Image Processing

Sotirios A. Tsaftaris; Massimo Minervini; Hanno Scharr

We found the article by Singh et al. [1] extremely interesting because it introduces and showcases the utility of machine learning for high-throughput data-driven plant phenotyping. With this letter we aim to emphasize the role that image analysis and processing have in the phenotyping pipeline beyond what is suggested in [1], both in analyzing phenotyping data (e.g., to measure growth) and when providing effective feature extraction to be used by machine learning. Key recent reviews have shown that it is image analysis itself (what the authors of [1] consider as part of pre-processing) that has brought a renaissance in phenotyping [2].


Plant Journal | 2017

Phenotiki: An open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants.

Massimo Minervini; Mario Valerio Giuffrida; Pierdomenico Perata; Sotirios A. Tsaftaris

Phenotyping is important to understand plant biology, but current solutions are costly, not versatile, or are difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides a platform that is easy to install and maintain, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need for depth (3D) information. Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available ( http://phenotiki.com).


Proceedings of the Computer Vision Problems in Plant Phenotyping Workshop 2015 | 2015

An interactive tool for semi-automated leaf annotation

Massimo Minervini; Mario Valerio Giuffrida; Sotirios A. Tsaftaris

High throughput plant phenotyping is emerging as a necessary step towards meeting agricultural demands of the future. Central to its success is the development of robust computer vision algorithms that analyze images and extract phenotyping information to be associated with genotypes and environmental conditions for identifying traits suitable for further development. Obtaining leaf-level quantitative data is important towards better understanding this interaction. While certain efforts have been made to obtain such information in an automated fashion, further innovations are necessary. In this paper we present an annotation tool that can be used to semi-automatically segment leaves in images of rosette plants. This tool, which is designed to run both stand-alone and in cloud-based environments, can be used to annotate data directly for the study of plant and leaf growth, or to provide annotated datasets for learning-based approaches to extracting phenotypes from images. It relies on an interactive graph-based segmentation algorithm to propagate expert-provided priors (in the form of pixels) to the rest of the image, using the random walk formulation to find a good per-leaf segmentation. To evaluate the tool we use standardized datasets available from the LSC and LCC 2015 challenges, achieving an average leaf segmentation accuracy of almost 97% using scribbles as annotations. The tool and source code are publicly available at http://www.phenotiki.com and as a GitHub repository at https://github.com/phenotiki/LeafAnnotationTool.
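The random walk formulation the tool builds on is available in scikit-image. A toy sketch of scribble propagation (illustrative, not the tool's code):

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy image: a bright "leaf" on a dark background, plus mild noise
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

labels = np.zeros(img.shape, dtype=int)   # 0 = unlabeled pixels
labels[10, 10] = 1                        # expert scribble inside the leaf
labels[0, 0] = 2                          # expert scribble on the background

seg = random_walker(img, labels, beta=50)  # propagate priors to all pixels
print(seg[10, 10], seg[0, 0])
```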


International Conference on Digital Signal Processing | 2013

Application-aware image compression for low cost and distributed plant phenotyping

Massimo Minervini; Sotirios A. Tsaftaris

Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of a plant (phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely of low throughput, since high throughput solutions are costly and not widespread. A distributed approach could provide a low cost solution, offering high accuracy and throughput. A sensor of low computational power acquires time-lapse images of plants and sends them to an analysis system with higher computational and storage capacity (e.g., a service running on a cloud infrastructure). However, such a system requires the transmission of imaging data from sensor to receiver, which necessitates their lossy compression to reduce bandwidth requirements. In this paper, we propose an application-aware image compression approach where the sensor is aware of its context (i.e., imaging plants) and takes advantage of the feedback from the receiver to focus bitrate on regions of interest (ROI). We use JPEG 2000 with ROI coding, and thus remain standard compliant, and offer a solution that is low cost and has low computational requirements. We evaluate our solution on several images of Arabidopsis thaliana phenotyping experiments, and we show that, both for traditional metrics (such as PSNR) and application-aware metrics, the proposed solution provides a 70% reduction in bitrate for equivalent performance.
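The evaluation compares compressed images using traditional metrics such as PSNR. A minimal PSNR helper for 8-bit images (illustrative, not the paper's code):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110                  # one corrupted pixel, MSE = 100/64
print(round(psnr(ref, noisy), 1))  # about 46.2 dB
```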


Soft Computing | 2012

Nonnegative matrix factorizations performing object detection and localization

Gabriella Casalino; N. Del Buono; Massimo Minervini

We study the problem of detecting and localizing objects in still, gray-scale images, making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization represents an emerging example of subspace methods, which is able to extract interpretable parts from a set of template image objects and then to additively use them for describing individual objects. In this paper, we present a prototype system based on several nonnegative factorization algorithms, which differ in the additional properties imposed on the nonnegative representation of data, in order to investigate whether any additional constraint produces better results in general object detection via nonnegative matrix factorizations.
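The additive, part-based decomposition at the heart of this approach can be sketched with scikit-learn's NMF on synthetic data (illustrative only; not the paper's system):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((30, 16))          # 30 tiny "images" of 16 pixels, nonnegative

model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(V)        # per-image activations (30 x 4)
H = model.components_             # additive "parts" shared by all images (4 x 16)

# Both factors are nonnegative, so each image is an additive mix of parts
print(W.shape, H.shape, float(np.linalg.norm(V - W @ H)))
```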


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies

Massimo Minervini; Cristian Rusu; Mario Damiano; Valter Tucci; Angelo Bifone; Alessandro Gozzi; Sotirios A. Tsaftaris

The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as 'mouse brain phenotyping'. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analysis on local clusters is possible, not all users have access to such infrastructure, and even for those that do, having additional computational capacity can be beneficial (e.g. to meet sudden high throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We achieve a registration-based multi-atlas, multi-template anatomical segmentation, normally a lengthy effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile identifying strategies that can allocate resources intelligently. In our context a critical aspect is the identification of how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content. We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy, and could serve as an alternative for laboratories that may require instant access to large high-performance-computing infrastructures. To facilitate adoption from the community we publicly release the source code.
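The content-aware allocation idea, predicting a registration job's completion time from cheap statistics of the image content, can be sketched as follows; the data, features, and names here are synthetic and illustrative, not the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def moments(img):
    """Cheap content descriptors: mean, standard deviation, skewness."""
    m, s = img.mean(), img.std()
    skew = ((img - m) ** 3).mean() / (s ** 3 + 1e-12)
    return np.array([m, s, skew])

# Synthetic "brain images" with varying contrast
imgs = [rng.random((32, 32)) * rng.uniform(0.5, 2.0) for _ in range(50)]
X = np.stack([moments(im) for im in imgs])

# Pretend a registration's run time grows with image contrast (fake ground truth)
t = 60.0 + 40.0 * X[:, 1] + rng.normal(0.0, 1.0, size=50)

model = LinearRegression().fit(X, t)
print(round(model.score(X, t), 2))   # in-sample fit quality (R^2)
```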

Collaboration


Dive into Massimo Minervini's collaborations.

Top Co-Authors

Hanno Scharr
Forschungszentrum Jülich

Alessandro Gozzi
Istituto Italiano di Tecnologia

Angelo Bifone
Istituto Italiano di Tecnologia

Mario Damiano
Istituto Italiano di Tecnologia

Valter Tucci
Istituto Italiano di Tecnologia