Publication


Featured research published by Giovanni Maria Farinella.


Journal of Plastic, Reconstructive & Aesthetic Surgery | 2008

Experimental methodology for digital breast shape analysis and objective surgical outcome evaluation

Giuseppe Catanuto; A. Spano; Angela Pennati; Egidio Riggio; Giovanni Maria Farinella; Gaetano Impoco; Salvatore Spoto; Giovanni Gallo; Maurizio B. Nava

Outcome evaluation in cosmetic and reconstructive surgery of the breast is commonly performed visually or with two-dimensional photography. The reconstructive process in the era of anatomical implants requires excellent survey capabilities that rely mainly on surgeon experience. In this paper we present a set of parameters to unambiguously estimate the shape of natural and reconstructed breasts. A digital laser scanner was employed on seven female volunteers. A graphic depiction of the curvature of the thoracic surface was the most interesting result. Further work is required to provide clinical and instrumental validation of our technique.


IEEE Transactions on Information Forensics and Security | 2012

Robust Image Alignment for Tampering Detection

Sebastiano Battiato; Giovanni Maria Farinella; Enrico Messina; Giovanni Puglisi

The widespread use of classic and newer technologies available on the Internet (e.g., email, social networks, digital repositories) has induced a growing interest in systems able to protect visual content against malicious manipulations that could be performed during transmission. One of the main problems addressed in this context is the authentication of the image received in a communication. This task is usually performed by localizing the regions of the image that have been tampered with. To this aim, the received image should first be registered with the one at the sender by exploiting the information provided by a specific component of the forensic hash associated with the image. In this paper we propose a robust alignment method which makes use of an image hash component based on the Bag of Features paradigm. The proposed signature is attached to the image before transmission and then analyzed at the destination to recover the geometric transformations which have been applied to the received image. The estimator is based on a voting procedure in the parameter space of the model used to recover the geometric transformation applied to the manipulated image. The proposed image hash encodes the spatial distribution of the image features to deal with highly textured and contrasted tampering patterns. A block-wise tampering detection scheme which exploits a histogram-of-oriented-gradients representation is also proposed. A non-uniform quantization of the histogram-of-oriented-gradients space is used to build the signature of each image block for tampering-detection purposes. Experiments show that the proposed approach obtains good margins of performance with respect to state-of-the-art methods.
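The block-signature idea in the abstract (orientation histograms with non-uniform quantization) can be sketched as follows. This is a minimal illustration with parameters of our own choosing (bin count, quantization thresholds), not the authors' actual signature:

```python
import numpy as np

def block_signature(block, n_bins=8, quant_levels=(0.0, 0.05, 0.15, 0.35, 1.0)):
    """Toy sketch: a compact signature for one image block built from a
    histogram of gradient orientations, followed by a non-uniform
    quantization of the normalized bin values (finer levels near zero)."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-9)                 # normalize to a distribution
    # Non-uniform quantization: each bin value becomes a small integer code.
    return np.digitize(hist, quant_levels[1:-1])

rng = np.random.default_rng(0)
sig = block_signature(rng.random((16, 16)))           # one code per orientation bin
```

Comparing such signatures block by block between the sender's and the received image would flag blocks whose codes disagree as candidate tampered regions.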


Computer Graphics Forum | 2007

Digital Mosaic Frameworks - An Overview

Sebastiano Battiato; G. Di Blasi; Giovanni Maria Farinella; Giovanni Gallo

Art often provides valuable hints for technological innovation, especially in the fields of Image Processing and Computer Graphics. In this paper we survey, in a unified framework, several methods to transform raster input images into good-quality mosaics. For each of the major approaches in the literature, the paper reports a short description and a discussion of the most relevant issues. To complete the survey, comparisons among the different techniques, both in terms of visual quality and computational complexity, are provided.


EURASIP Journal on Image and Video Processing | 2015

Special issue on animal and insect behaviour understanding in image sequences

Concetto Spampinato; Giovanni Maria Farinella; Bastiaan Johannes Boom; Vasileios Mezaris; Margrit Betke; Robert B. Fisher

Imaging systems are nowadays used increasingly in a range of ecological monitoring applications, in particular for biological, fishery, geological and physical surveys. These technologies have radically improved the ability to capture high-resolution images in challenging environments and, consequently, to manage natural resources effectively. Unfortunately, advances in imaging devices have not been matched by improvements in automated analysis systems, which are necessary because of the need for time-consuming and expensive input from human observers. This analytical 'bottleneck' greatly limits the potential of these technologies and increases the demand for automatic content analysis approaches that enable proactive provision of analytical information.

On the other hand, the study of behaviour by processing visual data has become an active research area in computer vision. The visual information gathered from image sequences is extremely useful for understanding the behaviour of the different objects in the scene, as well as how they interact with each other or with the surrounding environment. However, whilst a large number of video analysis techniques have been developed specifically for investigating events and behaviour in human-centred applications, very little attention has been paid to the understanding of other living organisms, such as animals and insects, although a huge amount of video data is routinely recorded: for example, the Fish4Knowledge project (www.fish4knowledge.eu) and a wide range of nest cams (http://watch.birds.cornell.edu/nestcams/home/index) continuously monitor, respectively, underwater reefs and bird nests (there are also variants focusing on wolves, badgers, foxes, etc.). The automated analysis of visual data in real-life environments for animal and insect behaviour understanding poses several challenges for computer vision researchers.


European Conference on Computer Vision | 2014

A Benchmark Dataset to Study the Representation of Food Images

Giovanni Maria Farinella; Dario Allegra; Filippo Stanco

It is well known that people love food. However, an unhealthy diet can cause general health problems. Since health is strictly linked to diet, advanced computer vision tools to recognize food images (e.g., acquired with mobile/wearable cameras), as well as their properties (e.g., calories), can support diet monitoring by providing useful information to experts (e.g., nutritionists) assessing the food intake of patients (e.g., to combat obesity). Food recognition is a challenging task since food is intrinsically deformable and presents high variability in appearance, so image representation plays a fundamental role. To properly study the peculiarities of image representation in the food application context, a benchmark dataset is needed. These facts motivate the work presented in this paper. We introduce the UNICT-FD889 dataset, the first food image dataset composed of over 800 distinct plates of food, which can be used as a benchmark to design and compare representation models of food images. We exploit the UNICT-FD889 dataset for Near Duplicate Image Retrieval (NDIR) purposes by comparing three standard state-of-the-art image descriptors: Bag of Textons, PRICoLBP and SIFT. Results confirm that both textures and colors are fundamental properties in food representation. Moreover, the experiments point out that the Bag of Textons representation obtained in the color domain is more accurate than the other two approaches for NDIR.


Journal of Dairy Science | 2011

Objective Estimation of Body Condition Score by Modeling Cow Body Shape from Digital Images

G. Azzaro; Margherita Caccamo; James D. Ferguson; Sebastiano Battiato; Giovanni Maria Farinella; Giuseppe Claudio Guarnera; Giovanni Puglisi; R. Petriglieri; G. Licitra

Body condition score (BCS) is considered an important tool for the management of dairy cattle. The feasibility of estimating BCS from digital images has been demonstrated in recent work. Regression machines have been successfully employed for automatic BCS estimation, taking into account information on the overall shape or information extracted at anatomical points of the shape. Despite the progress in this research area, such studies have not addressed the problem of modeling the shape of cows to build a robust descriptor for automatic BCS estimation. Moreover, a benchmark dataset of images meant as a point of reference for quantitative evaluation and comparison of different automatic BCS estimation methods is lacking. The main objective of this study was to develop a technique able to describe the body shape of cows in a reconstructive way. Images, used to build a benchmark dataset for developing an automatic system for BCS, were taken using a camera placed above an exit gate from the milking robot. The camera was positioned 3 m from the ground, in such a position as to capture images of the rear, dorsal pelvic, and loin area of cows. The BCS of each cow was estimated on site by 2 technicians and associated with the cow images. The benchmark dataset contained 286 images with associated BCS, anatomical points, and shapes, and was used for quantitative evaluation. A set of example cow body shapes was created. Linear and polynomial kernel principal component analysis was used to reconstruct cow shapes as a linear combination of basic shapes constructed from the example database. In this manner, a cow's body shape was described by its variability from the average shape. The method produced a compact description of the shape to be used for automatic estimation of BCS. Model validation showed that the polynomial model proposed in this study performs better (error = 0.31) than other state-of-the-art methods in estimating BCS, even at the extreme values of the BCS scale.
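The "linear combination of basic shapes" idea can be illustrated with plain (linear) PCA on flattened shape vectors. This is a toy sketch on synthetic data, not the paper's kernel PCA model or its benchmark:

```python
import numpy as np

# Toy example database: 50 flattened shapes (20 landmark points, x and y).
rng = np.random.default_rng(42)
shapes = rng.normal(size=(50, 40))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# Principal shape modes ("basic shapes") via SVD of the centered examples.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:5]                                 # keep the 5 strongest modes

def describe(shape):
    """Compact descriptor: coefficients of the deviation from the mean shape."""
    return (shape - mean_shape) @ modes.T

def reconstruct(coeffs):
    """Rebuild a shape as mean + linear combination of the basic shapes."""
    return mean_shape + coeffs @ modes

coeffs = describe(shapes[0])                   # 5 numbers describe one shape
approx = reconstruct(coeffs)
err = np.linalg.norm(approx - shapes[0]) / np.linalg.norm(shapes[0])
```

The low-dimensional coefficient vector is what a regression machine would then map to a BCS value; the paper's polynomial-kernel variant replaces the linear projection with a kernel one.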


International Conference on Image Processing | 2014

Classifying food images represented as Bag of Textons

Giovanni Maria Farinella; Marco Moltisanti; Sebastiano Battiato

The classification of food images is an interesting and challenging problem because of the high variability of the image content, which makes the task difficult for current state-of-the-art classification methods. The image representation employed in the classification engine plays an important role. We believe that texture features have not been properly considered in this application domain. This paper points out, through a set of experiments, that textures are fundamental to properly recognize different food items. For this purpose the bag of visual words model (BoW) is employed. Images are processed with a bank of rotation- and scale-invariant filters, and a small codebook of Textons is built for each food class. The learned class-based Textons are then collected in a single visual dictionary. Food images are represented as distributions of visual words (Bag of Textons), and a Support Vector Machine is used for the classification stage. The experiments demonstrate that the image representation based on Bag of Textons is more accurate than existing (and more complex) approaches in classifying the 61 classes of the Pittsburgh Fast-Food Image Dataset.
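The Bag of Textons representation described above can be sketched in a few lines: each pixel's filter-bank responses are assigned to the nearest Texton in a learned dictionary, and the image becomes a normalized histogram of Texton counts. The dictionary and response dimensions below are placeholders of our own, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed learned dictionary: 16 Textons in an 8-dimensional filter-response space.
textons = rng.normal(size=(16, 8))

def bag_of_textons(responses, textons):
    """responses: (n_pixels, n_filters) filter-bank outputs for one image.
    Returns the normalized histogram of nearest-Texton assignments."""
    d = np.linalg.norm(responses[:, None, :] - textons[None, :, :], axis=2)
    labels = d.argmin(axis=1)                  # nearest Texton per pixel
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()

image_responses = rng.normal(size=(500, 8))    # stand-in for real filter outputs
h = bag_of_textons(image_responses, textons)
```

These histograms would then be fed to a Support Vector Machine for the actual classification stage.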


EURASIP Journal on Image and Video Processing | 2010

Exploiting Textons distributions on spatial hierarchy for scene classification

Sebastiano Battiato; Giovanni Maria Farinella; Giovanni Gallo; Daniele Ravì

This paper proposes a method to recognize scene categories using bags of visual words obtained by hierarchically partitioning the input images into subregions. Specifically, for each subregion the Textons distribution and the extent of the corresponding subregion are taken into account. The bags of visual words computed on the subregions are weighted and used to represent the whole scene. The classification of scenes is carried out by discriminative methods (i.e., SVM, KNN). A similarity measure based on the Bhattacharyya coefficient is proposed to establish similarities between images represented as hierarchies of bags of visual words. Experimental tests using fifteen different scene categories show that the proposed approach achieves good performance with respect to state-of-the-art methods.


Conference on Multimedia Modeling | 2009

Spatial Hierarchy of Textons Distributions for Scene Classification

Sebastiano Battiato; Giovanni Maria Farinella; Giovanni Gallo; Daniele Ravì

This paper proposes a method to recognize scene categories using bags of visual words obtained by hierarchically partitioning the input images into subregions. Specifically, for each subregion the Textons distribution and the extent of the corresponding subregion are taken into account. The bags of visual words computed on the subregions are weighted and used to represent the whole scene. The classification of scenes is carried out by a Support Vector Machine. A k-nearest-neighbour algorithm and a similarity measure based on the Bhattacharyya coefficient are used to retrieve from the scene database those scenes whose visual content is similar to a given query scene. Experimental tests using fifteen different scene categories show that the proposed approach achieves good performance with respect to state-of-the-art methods.


Expert Systems with Applications | 2015

An integrated system for vehicle tracking and classification

Sebastiano Battiato; Giovanni Maria Farinella; Antonino Furnari; Giovanni Puglisi; Anique Snijders; Jelmer Spiekstra

We present a unified system for vehicle tracking and classification, developed with a data-driven approach on real-world data. The main purpose of the system is the tracking of vehicles to understand lane changes, gate transits and other behaviours useful for traffic analysis. The discrimination of vehicles into two classes (cars vs. trucks) is also required for electronic truck tolling. Both tracking and classification are performed online by a system made up of two components (a tracker and a classifier) plus a controller which automatically adapts the configuration of the system to the observed conditions. Experiments show that the proposed system outperforms state-of-the-art algorithms on the considered data.

Collaboration


Dive into Giovanni Maria Farinella's collaborations.
