Hervé Goëau
French Institute for Research in Computer Science and Automation
Publication
Featured research published by Hervé Goëau.
cross language evaluation forum | 2013
Barbara Caputo; Henning Müller; Bart Thomee; Mauricio Villegas; Roberto Paredes; David Zellhöfer; Hervé Goëau; Alexis Joly; Pierre Bonnet; Jesús Martínez Gómez; Ismael García Varea; Miguel Cazorla
This paper presents an overview of the ImageCLEF 2013 lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the cross-language annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and botanical collections. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the multilingual image annotation and retrieval research landscape. The 2013 edition consisted of three tasks: the photo annotation and retrieval task, the plant identification task and the robot vision task. Furthermore, the medical annotation task, which has traditionally been under the ImageCLEF umbrella and which celebrated its tenth anniversary this year, was organized in conjunction with AMIA for the first time. The paper describes the tasks and the 2013 competition, giving a unifying perspective on the present activities of the lab while discussing future challenges and opportunities.
acm multimedia | 2013
Hervé Goëau; Pierre Bonnet; Alexis Joly; Vera Bakić; Julien Barbe; Itheri Yahiaoui; Souheil Selmi; Jennifer Carré; Daniel Barthélémy; Nozha Boujemaa; Jean-François Molino; Grégoire Duché; Aurélien Péronnet
Pl@ntNet is an image sharing and retrieval application for the identification of plants, available on iPhone and iPad devices. Contrary to previous content-based identification applications, it can work with several parts of the plant, including flowers, leaves, fruits and bark. It also allows users' observations to be integrated into the database through a collaborative workflow involving the members of a social network specialized in plants. The data collected so far make it one of the largest mobile plant identification tools.
acm multimedia | 2011
Hervé Goëau; Alexis Joly; Souheil Selmi; Pierre Bonnet; Elise Mouysset; Laurent Joyeux; Jean-François Molino; Philippe Birnbaum; Daniel Bathelemy; Nozha Boujemaa
This demo presents a crowdsourcing web application dedicated to the access of botanical knowledge through automated identification of plant species by visual content. Inspired by citizen science, our aim is to speed up the collection and integration of raw botanical observation data, while providing potential users with an easy and efficient access to this botanical knowledge. The result presented during the demo is an enjoyable application in which anyone can photograph fresh-cut leaves and observe the relevance of the suggested species, even for visually difficult queries.
Multimedia Systems | 2016
Alexis Joly; Pierre Bonnet; Hervé Goëau; Julien Barbe; Souheil Selmi; Julien Champ; Samuel Dufour-Kowalski; Antoine Affouard; Jennifer Carré; Jean-François Molino; Nozha Boujemaa; Daniel Barthélémy
Pl@ntNet is an innovative participatory sensing platform relying on image-based plant identification as a means to enlist non-expert contributors and facilitate the production of botanical observation data. One year after the public launch of the mobile application, we carry out a self-critical evaluation of the experience with regard to the requirements of a sustainable and effective ecological surveillance tool. We first demonstrate the attractiveness of the developed multimedia system (with more than 90K end-users) and the self-improving capacity of the whole collaborative workflow. We then point out the current limitations of the approach towards producing timely and accurate distribution maps of plants at a very large scale. We discuss in particular two main issues: the bias and the incompleteness of the produced data. We finally open new perspectives and describe upcoming realizations aimed at bridging these gaps.
acm multimedia | 2013
Hervé Goëau; Alexis Joly; Pierre Bonnet; Vera Bakić; Daniel Barthélémy; Nozha Boujemaa; Jean-François Molino
This paper presents a synthesis of the ImageCLEF 2013 plant identification task, a system-oriented testbed dedicated to the evaluation of image-based plant identification technologies. With 12 participating groups coming from over 9 countries and 33 submitted runs, the 2013 campaign confirmed the increasing interest of the multimedia community in ecology-related challenges (respectively 10 and 11 groups crossed the finish line in 2011 and 2012). Contrary to the two previous years, which were exclusively focused on leaf images, the coverage of the 2013 task was extended to six different types of view of the plant (flower, bark, fruit, entire plant, …) and to significantly more plant species. This synthesis presents the resources and assessments of the task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results.
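Evaluation campaigns of this kind typically normalize scores per contributor and per plant, so that prolific photographers do not dominate the result. The sketch below illustrates that averaging structure; the exact official metric is not given in the abstract, so the nesting (author → plant → image) and the use of inverse rank as a per-image score are assumptions for illustration.

```python
from collections import defaultdict

def plant_task_score(predictions):
    """Hierarchically averaged identification score (illustrative sketch).

    `predictions` maps (author, plant, image) -> per-image score, e.g. the
    inverse rank of the correct species (1.0 if top-1, 0.5 if rank 2, ...).
    Scores are averaged per plant, then per author, then over authors.
    """
    by_author = defaultdict(lambda: defaultdict(list))
    for (author, plant, _image), s in predictions.items():
        by_author[author][plant].append(s)
    author_scores = []
    for plants in by_author.values():
        plant_scores = [sum(v) / len(v) for v in plants.values()]
        author_scores.append(sum(plant_scores) / len(plant_scores))
    return sum(author_scores) / len(author_scores)

# Hypothetical run: author u1 photographed plant p1 twice, u2 plant p2 once.
score = plant_task_score({
    ("u1", "p1", "i1"): 1.0,
    ("u1", "p1", "i2"): 0.5,
    ("u2", "p2", "i1"): 1.0,
})  # u1 -> 0.75, u2 -> 1.0, overall 0.875
```

With this weighting, u1's two images of the same plant count once at the author level, so a single contributor cannot inflate the score by submitting many pictures of one specimen.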
acm multimedia | 2012
Hervé Goëau; Pierre Bonnet; Julien Barbe; Vera Bakić; Alexis Joly; Jean-François Molino; Daniel Barthélémy; Nozha Boujemaa
This paper presents a new interactive web application for the visual identification of plants based on collaborative pictures. Contrary to previous content-based identification methods and systems developed for plants, which mainly relied on leaves or, in a few other cases, on flowers, it makes use of five different organs and plant views, including habit, flowers, fruits, leaves and bark. Thanks to an interactive and visual query widget, the tagging process of the different organs and views is as simple as a drag-and-drop operation and does not require any expertise in botany. All training pictures used by the system were collected continuously during one year through a crowdsourcing application that was set up in the scope of a citizen science initiative. System-oriented and human-centered evaluations of the application show that the results are already satisfactory and therefore very promising for identifying a richer flora in the long term.
BMC Evolutionary Biology | 2017
Jose Carranza-Rojas; Hervé Goëau; Pierre Bonnet; Erick Mata-Montero; Alexis Joly
Background: Hundreds of herbarium collections have accumulated a valuable heritage and knowledge of plants over several centuries. Recent initiatives started ambitious preservation plans to digitize this information and make it available to botanists and the general public through web portals. However, thousands of sheets are still unidentified at the species level, while numerous sheets should be reviewed and updated following more recent taxonomic knowledge. These annotations and revisions require an unrealistic amount of work for botanists to carry out in a reasonable time. Computer vision and machine learning approaches applied to herbarium sheets are promising but are still not well studied compared to automated species identification from leaf scans or pictures of plants in the field.
Results: In this work, we propose to study and evaluate the accuracy with which herbarium images can be potentially exploited for species identification with deep learning technology. In addition, we propose to study whether the combination of herbarium sheets with photos of plants in the field is relevant in terms of accuracy, and finally, we explore whether herbarium images from one region with a specific flora can be used for transfer learning to another region with different species, for example a region under-represented in terms of collected data.
Conclusions: This is, to our knowledge, the first study that uses deep learning to analyze a large dataset with thousands of species from herbaria. Results show the potential of deep learning for herbarium species identification, particularly by training and testing across different datasets from different herbaria. This could potentially lead to the creation of a semi- or even fully automated system to help taxonomists and experts with their annotation, classification, and revision work.
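The transfer-learning setup described in the Results section amounts to reusing a feature extractor trained on one region's herbarium and retraining only the classifier for the target region's species. The NumPy sketch below illustrates that last step under stated assumptions: the "features" are synthetic stand-ins for CNN activations from a hypothetical source-region model, and the head is a plain softmax classifier trained by gradient descent (the paper's actual architecture and training procedure are not reproduced here).

```python
import numpy as np

def train_softmax_head(features, labels, n_classes, lr=0.5, epochs=300):
    """Retrain only a linear softmax head on frozen features, as in a
    transfer-learning setting where the feature extractor was trained on a
    different (source-region) herbarium. Illustrative sketch only."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy target-region data: 2 species; features pretend to come from a CNN
# pretrained on a source-region herbarium (entirely synthetic).
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (20, 8)) + 2,
                   rng.normal(0, 1, (20, 8)) - 2])
labels = np.array([0] * 20 + [1] * 20)
W, b = train_softmax_head(feats, labels, n_classes=2)
accuracy = ((feats @ W + b).argmax(axis=1) == labels).mean()
```

Because only the small head is retrained, the target region needs far fewer labeled sheets than training a full network from scratch would require, which is exactly the appeal for under-represented floras.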
international conference on multimedia retrieval | 2014
Hervé Goëau; Pierre Bonnet; Alexis Joly; Antoine Affouard; Vera Bakić; Julien Barbe; Samuel Dufour; Souheil Selmi; Itheri Yahiaoui; Christel Vignau; Daniel Barthélémy; Nozha Boujemaa
This paper presents several improvements of Pl@ntNet, an image sharing and retrieval application for identifying plants [6]: (i) a port to most Android platforms; (ii) three times more data; (iii) the exploitation of metadata as well as visual content in the identification process; (iv) a new multi-plant-organ, multi-image and multi-feature merging strategy with separate indexes for each visual feature; (v) the integration of cross-language functions. This paper also presents the new results achieved by our system in the ImageCLEF 2013 plant identification task and in real-world user trials.
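The merging strategy in point (iv) queries a separate index per visual feature and fuses the resulting per-species scores. A minimal late-fusion sketch follows; the exact fusion rule used by the system is not stated in the abstract, so the weighted mean below is an assumption, and the feature and species names are invented for the example.

```python
def merge_scores(per_feature_scores, weights=None):
    """Late fusion of per-feature similarity scores, one separate index per
    visual feature. `per_feature_scores`: {feature_name: {species: score}}.
    Fusion rule (weighted mean) is an illustrative assumption."""
    weights = weights or {f: 1.0 for f in per_feature_scores}
    total_w = sum(weights.values())
    merged = {}
    for feature, scores in per_feature_scores.items():
        w = weights[feature]
        for species, s in scores.items():
            merged[species] = merged.get(species, 0.0) + w * s
    return {sp: s / total_w for sp, s in merged.items()}

# Hypothetical query: one flower picture and one leaf picture of the same
# plant, each matched against its own feature index.
merged = merge_scores({
    "flower": {"Rosa canina": 0.9, "Quercus robur": 0.1},
    "leaf":   {"Rosa canina": 0.6, "Quercus robur": 0.7},
})
best = max(merged, key=merged.get)
```

Keeping one index per feature lets each descriptor be searched with its own similarity measure and re-weighted at fusion time, instead of concatenating heterogeneous descriptors into a single index.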
international conference on multimedia retrieval | 2013
Sofiène Mouine; Itheri Yahiaoui; Anne Verroust-Blondet; Laurent Joyeux; Souheil Selmi; Hervé Goëau
This paper presents an Android application for plant identification. The system relies on the observation of leaf images. Unlike other mobile plant identification applications, the user may choose the leaf characters that will guide the identification process. For this purpose, two kinds of descriptors are proposed to the user: a shape descriptor based on a multiscale triangular representation of the leaf margin and a descriptor of the salient points of the leaf. The application achieves good identification accuracy and provides Android users with a useful system for plant identification.
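The core idea of a multiscale triangular margin representation can be sketched briefly: for each point on the closed leaf contour, form a triangle with its neighbors at offset s along the margin and record a triangle-derived quantity at several scales s. The sketch below uses the signed triangle area as that quantity; this is a simplified illustration of the idea, not the exact descriptor defined in the paper.

```python
import numpy as np

def multiscale_triangle_descriptor(contour, scales=(2, 4, 8)):
    """For each contour point p_i and each scale s, take the triangle
    (p_{i-s}, p_i, p_{i+s}) along the closed margin and record its signed
    area, which captures local convexity/concavity at that scale.
    Simplified sketch of a multiscale triangular margin representation."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    idx = np.arange(n)
    desc = np.empty((n, len(scales)))
    for j, s in enumerate(scales):
        a = pts[(idx - s) % n]   # neighbor s steps back (wraps around)
        b = pts
        c = pts[(idx + s) % n]   # neighbor s steps forward
        # signed area of triangle (a, b, c) via the 2D cross product
        desc[:, j] = 0.5 * ((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                            - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))
    return desc

# Sanity check on a circle: every margin point looks the same, so each
# scale column of the descriptor is constant.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
desc = multiscale_triangle_descriptor(circle)
```

Small scales react to fine serrations of the margin while large scales capture the overall lobation, which is why combining several scales makes the representation discriminative across leaf shapes.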
Multimedia Tools and Applications | 2016
Pierre Bonnet; Alexis Joly; Hervé Goëau; Julien Champ; Christel Vignau; Jean-François Molino; Daniel Barthélémy; Nozha Boujemaa
This paper reports a large-scale experiment aimed at evaluating how state-of-the-art computer vision systems perform in identifying plants compared to human expertise. A subset of the evaluation dataset used within the LifeCLEF 2014 plant identification challenge was therefore shared with volunteers of diverse expertise, ranging from the leading experts of the targeted flora to inexperienced test subjects. In total, 16 human runs were collected and evaluated comparatively to the 27 machine-based runs of the LifeCLEF challenge. One of the main outcomes of the experiment is that machines are still far from outperforming the best expert botanists in the image-based plant identification competition. On the other hand, the best machine runs compete with experienced botanists and clearly outperform beginners and inexperienced test subjects. This shows that the performance of automated plant identification systems is very promising and may open the door to a new generation of ecological surveillance systems.