
Publication


Featured research published by Luca Piras.


Cross-Language Evaluation Forum | 2015

General Overview of ImageCLEF at the CLEF 2015 Labs

Mauricio Villegas; Henning Müller; Andrew Gilbert; Luca Piras; Josiah Wang; Krystian Mikolajczyk; Alba Garcia Seco de Herrera; Stefano Bromuri; M. Ashraful Amin; Mahmood Kazi Mohammed; Burak Acar; Suzan Uskudarli; Neda Barzegar Marvasti; José F. Aldana; María del Mar Roldán García

This paper presents an overview of the ImageCLEF 2015 evaluation campaign, an event that was organized as part of the CLEF labs 2015. ImageCLEF is an ongoing initiative that promotes the evaluation of technologies for annotation, indexing and retrieval for providing information access to databases of images in various usage scenarios and domains. In 2015, the 13th edition of ImageCLEF, four main tasks were proposed: (1) automatic concept annotation, localization and sentence description generation for general images; (2) identification, multi-label classification and separation of compound figures from biomedical literature; (3) clustering of x-rays from all over the body; and (4) prediction of missing radiological annotations in reports of liver CT images. The x-ray task was the only fully novel task this year, although the other three tasks introduced modifications to keep the proposed challenges relevant. Participation in this edition of the lab was considerably strong, with almost twice the number of submitted working-notes papers compared to previous years.


Information Fusion | 2017

Information fusion in content based image retrieval

Luca Piras; Giorgio Giacinto

Highlights: an overview of information fusion in Content-Based Image Retrieval (CBIR); analysis of each component in the fusion processing pipeline; classification of the main categories into which fusion approaches can be grouped; details of some representative methods for each fusion category.

An ever-increasing part of communication between persons involves the use of pictures, due to the cheap availability of powerful cameras on smartphones and of storage space. The rising popularity of social networking applications such as Facebook, Twitter, and Instagram, and of instant messaging applications such as WhatsApp and WeChat, is clear evidence of this phenomenon, driven by the opportunity of sharing in real time a pictorial representation of the context each individual is living in. The media rapidly exploited this phenomenon, using the same channels either to publish their reports or to gather additional information on an event through the community of users. While the real-time use of images is managed through the metadata associated with the image (i.e., the timestamp, the geolocation, tags, etc.), their retrieval from an archive might be far from trivial, as an image bears a rich semantic content that goes beyond the description provided by its metadata. It turns out that, after more than 20 years of research on Content-Based Image Retrieval (CBIR), the giant increase in the number and variety of images available in digital format is challenging the research community. It is quite easy to see that any approach aiming at facing such challenges must rely on different image representations that need to be conveniently fused in order to adapt to the subjectivity of image semantics. This paper offers a journey through the main information fusion ingredients that a recipe for the design of a CBIR system should include to meet the demanding needs of users.


Cross-Language Evaluation Forum | 2017

Overview of ImageCLEF 2017: information extraction from images

Bogdan Ionescu; Henning Müller; Mauricio Villegas; Helbert Arenas; Giulia Boato; Duc-Tien Dang-Nguyen; Yashin Dicente Cid; Carsten Eickhoff; Alba Garcia Seco de Herrera; Cathal Gurrin; Bayzidul Islam; Vassili Kovalev; Vitali Liauchuk; Josiane Mothe; Luca Piras; Michael Riegler; Immanuel Schwall

This paper presents an overview of the ImageCLEF 2017 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) labs 2017. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval for providing information access to collections of images in various usage scenarios and domains. In 2017, the 15th edition of ImageCLEF, three main tasks and one pilot task were proposed: (1) a LifeLog task about searching in LifeLog data, i.e., videos, images and other sources; (2) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based on the figure alone; (3) a tuberculosis task that aims at detecting the tuberculosis type from CT (Computed Tomography) volumes of the lung, and also the drug resistance of the tuberculosis; and (4) a remote sensing pilot task that aims at predicting population density based on satellite images. The strong participation, with over 150 research groups registering for the four tasks and 27 groups submitting results, shows the interest in this benchmarking campaign despite the fact that all four tasks were new and had to create their own communities.


International Conference on Multimedia and Expo | 2015

A hybrid approach for retrieving diverse social images of landmarks

Duc-Tien Dang-Nguyen; Luca Piras; Giorgio Giacinto; Giulia Boato; Francesco G. B. De Natale

In this paper, we present a novel method that can produce a visual description of a landmark by choosing, from community-contributed datasets, the most diverse pictures that best describe all the details of the queried location. The main idea is to filter out non-relevant images in a first stage and then cluster the remaining images, first according to textual descriptors and then according to visual descriptors. Extracting images from different clusters according to a measure of user credibility then yields a reliable set of diverse and relevant images. Experimental results on the MediaEval 2014 “Retrieving Diverse Social Images” dataset show that the proposed approach achieves very good performance, outperforming state-of-the-art techniques.
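The selection stage described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: clustering (textual first, then visual) and the credibility measure are assumed to have been computed upstream, and the round-robin pick over clusters is my own simplification of "extracting images from different clusters".

```python
from collections import defaultdict

def diversify(images, top_k):
    """Pick a diverse, reliable subset: within each cluster, prefer images
    from higher-credibility uploaders, then round-robin across clusters so
    consecutive results come from different clusters.

    `images` is a list of dicts with 'id', 'cluster', and 'credibility'
    keys (hypothetical field names for this sketch)."""
    clusters = defaultdict(list)
    for img in images:
        clusters[img["cluster"]].append(img)
    for members in clusters.values():
        members.sort(key=lambda m: m["credibility"], reverse=True)
    picked, depth = [], 0
    # depth-th best image of each cluster, one cluster at a time
    while len(picked) < top_k and any(depth < len(m) for m in clusters.values()):
        for members in clusters.values():
            if depth < len(members) and len(picked) < top_k:
                picked.append(members[depth]["id"])
        depth += 1
    return picked

photos = [
    {"id": "a", "cluster": 0, "credibility": 0.9},
    {"id": "b", "cluster": 0, "credibility": 0.4},
    {"id": "c", "cluster": 1, "credibility": 0.7},
    {"id": "d", "cluster": 2, "credibility": 0.8},
]
result = diversify(photos, top_k=3)  # one image per cluster before any repeat
```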


Workshop on Image Analysis for Multimedia Interactive Services | 2009

Neighborhood-based feature weighting for relevance feedback in content-based retrieval

Luca Piras; Giorgio Giacinto

High retrieval precision in content-based image retrieval can be attained by adopting relevance feedback mechanisms. In this paper we propose a weighted similarity measure based on the nearest-neighbor relevance feedback technique previously proposed by the authors. Each image is ranked according to a relevance score depending on nearest-neighbor distances from relevant and non-relevant images. Distances are computed by a weighted measure, the weights being related to the capability of each feature space to represent relevant images as nearest neighbors. This approach is used to weight individual features, feature subsets, and also the relevance scores computed from different feature spaces. Reported results show that the proposed weighting scheme improves performance with respect to unweighted distances and to other weighting schemes.
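A nearest-neighbor relevance score of the kind the abstract describes can be sketched as below. The exact weighting scheme is defined in the paper; this sketch assumes the common formulation in which an image's score combines its distance to the nearest relevant example with its distance to the nearest non-relevant one, and the optional per-feature weights are purely illustrative.

```python
import math

def nn_relevance_score(x, relevant, non_relevant, weights=None):
    """Relevance of feature vector x from nearest-neighbor distances.

    The score is high when x lies close to a relevant image and far from
    non-relevant ones. `weights` optionally scales each feature dimension,
    mirroring the per-feature weighting idea (illustrative formulation,
    not the authors' exact scheme)."""
    def dist(a, b):
        w = weights or [1.0] * len(a)
        return math.sqrt(sum(wi * (ai - bi) ** 2
                             for wi, ai, bi in zip(w, a, b)))

    d_rel = min(dist(x, r) for r in relevant)      # nearest relevant image
    d_non = min(dist(x, n) for n in non_relevant)  # nearest non-relevant image
    return d_non / (d_rel + d_non)  # in (0, 1); > 0.5 means nearer to relevant

# Toy example: x sits next to the relevant point, far from the non-relevant one.
score = nn_relevance_score([1.0, 1.0], [[0.9, 1.1]], [[5.0, 5.0]])
```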


Studies in Computational Intelligence | 2013

ImageHunter: A Novel Tool for Relevance Feedback in Content Based Image Retrieval

Roberto Tronci; Gabriele Murgia; Maurizio Pili; Luca Piras; Giorgio Giacinto

Nowadays, very large digital image archives are easily produced thanks to the wide diffusion of personal digital cameras and mobile devices with embedded cameras. Thus, personal computers, personal storage units, as well as photo-sharing and social-network websites, are rapidly becoming repositories for thousands, or even billions, of images (e.g., more than 100 million photos are uploaded every day to Facebook). As a consequence, there is an increasing need for tools enabling the semantic search, classification, and retrieval of images. The use of metadata associated with images solves the problem only partially, as the process of assigning reliable metadata to images is not trivial, is slow, and is closely tied to whoever performed the task. One solution for effective image search and retrieval is to combine content-based analysis with feedback from users. In this chapter we present ImageHunter, a tool that implements a Content-Based Image Retrieval (CBIR) engine with a Relevance Feedback mechanism. Thanks to a user-friendly interface, the tool is especially suited to unskilled users. In addition, its modular structure permits the use of the same core in both web-based and stand-alone applications.


International Conference on Image Analysis and Processing | 2013

Diversity in Ensembles of Codebooks for Visual Concept Detection

Luca Piras; Roberto Tronci; Giorgio Giacinto

Visual codebooks generated by the quantization of local descriptors allow building effective feature vectors for image archives. Codebooks are usually constructed by clustering a subset of image descriptors from a set of training images. In this paper we investigate the effect of combining an ensemble of different codebooks, each created by using a different pseudo-random technique for subsampling the set of local descriptors. Despite the claims in the literature on the gain attained by combining different codebook representations, reported results on different visual detection tasks show that the diversity is quite small, allowing only a modest improvement in performance w.r.t. the standard random subsampling procedure and calling for further investigation on the use of ensemble approaches in this context.
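The ensemble-of-codebooks idea can be sketched as follows. For brevity this sketch draws each "codebook" as a pseudo-random subsample of descriptors standing in for k-means cluster centres (the paper clusters the subsample); all function names are mine, not the paper's.

```python
import random

def build_codebook(descriptors, k, seed):
    """A codebook here is k descriptors drawn pseudo-randomly from the pool,
    standing in for the cluster centres a real pipeline would obtain by
    running k-means on the subsample."""
    return random.Random(seed).sample(descriptors, k)

def bow_histogram(image_descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword and count,
    yielding a bag-of-visual-words histogram."""
    hist = [0] * len(codebook)
    for d in image_descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda i: sum((di - ci) ** 2
                                        for di, ci in zip(d, codebook[i])))
        hist[nearest] += 1
    return hist

def ensemble_features(image_descriptors, pool, k=4, n_codebooks=3):
    """Concatenate histograms from several codebooks, each built on a
    different pseudo-random subsample (a different seed)."""
    feats = []
    for seed in range(n_codebooks):
        feats.extend(bow_histogram(image_descriptors,
                                   build_codebook(pool, k, seed)))
    return feats

# Toy pool of 2-D "descriptors" and one image with 10 of them.
pool = [[random.Random(i).random(), random.Random(i + 100).random()]
        for i in range(50)]
vec = ensemble_features(pool[:10], pool, k=4, n_codebooks=3)
```

Each histogram sums to the number of descriptors in the image, so the concatenated vector has length `k * n_codebooks` and total mass `10 * n_codebooks` here.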


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

Multimodal Retrieval with Diversification and Relevance Feedback for Tourist Attraction Images

Duc-Tien Dang-Nguyen; Luca Piras; Giorgio Giacinto; Giulia Boato; Francesco G. B. De Natale

In this article, we present a novel framework that can produce a visual description of a tourist attraction by choosing the most diverse pictures from community-contributed datasets, which describe different details of the queried location. The main strength of the proposed approach is its flexibility, which permits us to filter out non-relevant images and to obtain a reliable set of diverse and relevant images by first clustering similar images according to their textual descriptions and their visual content and then extracting images from different clusters according to a measure of the user's credibility. Clustering is based on a two-step process, where textual descriptions are used first and the clusters are then refined according to the visual features. The degree of diversification can be further increased by exploiting users' judgments on the results produced by the proposed algorithm through a novel approach, where users provide not only relevance feedback but also diversity feedback. Experimental results performed on the MediaEval 2015 “Retrieving Diverse Social Images” dataset show that the proposed framework can achieve very good performance both in the case of automatic retrieval of diverse images and in the case of the exploitation of the users' feedback. The effectiveness of the proposed approach has also been confirmed by a small case study involving a number of real users.


Machine Learning and Data Mining in Pattern Recognition | 2012

Enhancing image retrieval by an exploration-exploitation approach

Luca Piras; Giorgio Giacinto; Roberto Paredes

In this paper, the Relevance Feedback procedure for Content-Based Image Retrieval is considered as an Exploration-Exploitation approach. The proposed method exploits, in the exploitation step, the information obtained from the relevance score as computed by a Nearest Neighbor approach. The idea behind Nearest Neighbor relevance feedback is to retrieve the immediate neighborhood of the area of the feature space where relevant images are found. The exploitation step aims at returning to the user the maximum number of relevant images in a local region of the feature space. The exploration step, on the other hand, aims at driving the search towards different areas of the feature space in order to discover not only relevant images but also informative images. Similar ideas have been proposed with Support Vector Machines, where the choice of the informative images is driven by their closeness to the decision boundary. Here, we propose a rather simple method to explore the representation space in order to present the user with a wider variety of images. Reported results show that the proposed technique improves performance in terms of average precision, and that the improvements are larger than those obtained with techniques that use an SVM approach.
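One feedback round split into exploitation and exploration can be sketched as below. The farthest-first exploration criterion is my illustrative choice, not necessarily the paper's exact rule; `score` is assumed to be a nearest-neighbor relevance score supplied by the caller.

```python
def select_round(candidates, score, k_exploit, k_explore):
    """One relevance-feedback round as exploration-exploitation.

    Exploit: take the k_exploit highest-scored images (likely relevant).
    Explore: among the rest, repeatedly take the image farthest from
    everything already selected, to probe unseen regions of the feature
    space (an illustrative criterion, not the paper's exact one)."""
    ranked = sorted(candidates, key=score, reverse=True)
    chosen = ranked[:k_exploit]
    rest = ranked[k_exploit:]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(k_explore):
        # farthest-first: maximize distance to the nearest chosen image
        far = max(rest, key=lambda c: min(dist2(c, s) for s in chosen))
        chosen.append(far)
        rest.remove(far)
    return chosen

# Toy 2-D points; images near the origin are "relevant" (higher score).
candidates = [[0.1, 0.0], [0.2, 0.1], [5.0, 5.0], [0.3, 0.2], [4.0, -4.0]]
score = lambda p: -(p[0] ** 2 + p[1] ** 2)
result = select_round(candidates, score, k_exploit=2, k_explore=1)
```

The exploit picks cluster around the origin, while the explore pick jumps to a far-away region the user has not judged yet.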


International Conference on Multimedia and Expo | 2010

Unbalanced learning in content-based image classification and retrieval

Luca Piras; Giorgio Giacinto

Nowadays, very large archives of digital images can be easily produced thanks to the availability of digital cameras as standalone devices or embedded into a number of portable devices. Each personal computer is typically a repository for thousands of images, while the Internet can be seen as a very large repository. One of the most severe problems in the classification and retrieval of images from very large repositories is the very limited number of elements belonging to each semantic class compared to the number of images in the repository. As a consequence, an even smaller fraction of images per semantic class can be used as the training set in a classification problem, or as the query in a content-based image retrieval problem. In this paper we propose a technique aimed at artificially increasing the number of examples in the training set in order to improve the learning capabilities, reducing the imbalance between the semantic class of interest and all other images. The proposed approach is tailored to classification and relevance feedback techniques based on the Nearest-Neighbor paradigm. A number of new points in the feature space are created from the available training patterns, so that they better represent the distribution of the semantic class of interest. These new points are created according to the k-NN paradigm, taking into account both relevant and non-relevant images with respect to the semantic class of interest. The proposed approach increases the generalization capability of NN techniques and mitigates the risk of over-training the classifier on few patterns. Reported experiments show the effectiveness of the proposed technique in Content-Based Image Retrieval tasks, where the Nearest-Neighbor approach is used to exploit users' relevance feedback. The improvement in precision and recall gained in one feature space also outperforms the improvement attained by combining different feature spaces.
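The generation of synthetic training points can be sketched SMOTE-style, as below. This interpolation rule is my illustrative stand-in: the paper's generation scheme also accounts for non-relevant neighbors, which this sketch omits.

```python
import random

def synthesize(relevant, n_new, seed=0):
    """Create synthetic relevant points by interpolating between each
    picked relevant image and its nearest relevant neighbor (brute-force
    Euclidean), a SMOTE-like simplification of the paper's k-NN rule."""
    rng = random.Random(seed)
    new_points = []
    for _ in range(n_new):
        a = rng.choice(relevant)
        # nearest other relevant point
        b = min((r for r in relevant if r is not a),
                key=lambda r: sum((ai - ri) ** 2 for ai, ri in zip(a, r)))
        t = rng.random()  # random position along the segment a -> b
        new_points.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return new_points

# Three relevant examples in a 2-D feature space; grow them to eight.
pts = synthesize([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], n_new=5)
```

Every synthetic point lies on a segment between two relevant examples, so the augmented set stays inside the region the relevant class already occupies.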

Collaboration


Dive into Luca Piras's collaborations.

Top Co-Authors

Liting Zhou

Dublin City University


Mauricio Villegas

Polytechnic University of Valencia


Gianni Fenu

University of Cagliari
