Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspots


Dive into the research topics where Marcella Cornia is active.

Publications


Featured research published by Marcella Cornia.


International Conference on Pattern Recognition | 2016

A deep multi-level network for saliency prediction

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara

This paper presents a novel deep architecture for saliency prediction. Current state-of-the-art models for saliency prediction employ Fully Convolutional Networks that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps. We propose an architecture which, instead, combines features extracted at different levels of a Convolutional Neural Network (CNN). Our model is composed of three main blocks: a feature extraction CNN, a feature encoding network that weights low- and high-level feature maps, and a prior learning network. We compare our solution with state-of-the-art saliency models on two public benchmark datasets. Results show that our model outperforms the state of the art under all evaluation metrics on the SALICON dataset, which is currently the largest public dataset for saliency prediction, and achieves competitive results on the MIT300 benchmark. Code is available at https://github.com/marcellacornia/mlnet.
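
The three-block pipeline lends itself to a compact sketch. Below is a minimal PyTorch rendering of the multi-level idea, assuming a VGG-16 backbone with illustrative split points, channel widths, and prior size; the repository linked above is the authoritative implementation.

```python
# Minimal sketch of the multi-level fusion idea; layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class MLNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        feats = vgg16(weights=None).features
        # Split VGG-16 so we can tap low-, mid-, and high-level maps.
        self.block3 = feats[:17]    # through pool3 (256 channels)
        self.block4 = feats[17:24]  # through pool4 (512 channels)
        self.block5 = feats[24:31]  # through pool5 (512 channels)
        # Feature encoding network: weight and fuse the concatenated maps.
        self.encode = nn.Sequential(
            nn.Conv2d(256 + 512 + 512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )
        # Learned center-bias prior, upsampled to the map size at inference.
        self.prior = nn.Parameter(torch.ones(1, 1, 6, 8))

    def forward(self, x):
        f3 = self.block3(x)
        f4 = self.block4(f3)
        f5 = self.block5(f4)
        size = f3.shape[-2:]
        fused = torch.cat([
            f3,
            F.interpolate(f4, size=size, mode='bilinear', align_corners=False),
            F.interpolate(f5, size=size, mode='bilinear', align_corners=False),
        ], dim=1)
        sal = torch.relu(self.encode(fused))
        prior = F.interpolate(self.prior, size=size, mode='bilinear',
                              align_corners=False)
        return sal * prior  # element-wise modulation by the learned prior
```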


European Conference on Computer Vision | 2016

Multi-level Net: A Visual Saliency Prediction Model

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara

State-of-the-art approaches for saliency prediction are based on Fully Convolutional Networks, in which saliency maps are built from the last layer alone. In contrast, we present a novel model that predicts saliency maps by exploiting a non-linear combination of features coming from different layers of the network. We also present a new loss function to deal with the strong imbalance between salient and non-salient pixels in saliency masks. Extensive results on three public datasets demonstrate the robustness of our solution. Our model outperforms the state of the art on SALICON, the largest unconstrained dataset available, and obtains competitive results on the MIT300 and CAT2000 benchmarks.
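
Plain MSE on saliency maps is dominated by the many near-zero pixels, so upweighting errors where the ground truth is high is one way to address the imbalance mentioned above. The sketch below illustrates that rebalancing idea only; the loss actually proposed in the paper may differ in form and normalization.

```python
# Hedged sketch of a rebalanced saliency loss; alpha is an assumed constant.
import torch

def balanced_saliency_loss(pred, target, alpha=1.1, eps=1e-8):
    # Normalize both maps so scale differences do not dominate the error.
    pred = pred / (pred.amax(dim=(-2, -1), keepdim=True) + eps)
    target = target / (target.amax(dim=(-2, -1), keepdim=True) + eps)
    # Pixels with high ground-truth saliency receive a larger weight.
    weights = 1.0 / (alpha - target)
    return (weights * (pred - target) ** 2).mean()
```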


International Conference on Multimodal Interfaces | 2017

Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild

Stefano Pini; Olfa Ben Ahmed; Marcella Cornia; Lorenzo Baraldi; Rita Cucchiara; Benoit Huet

In this paper, we propose a multimodal deep learning architecture for emotion recognition in video, developed for our participation in the audio-video sub-challenge of the Emotion Recognition in the Wild 2017 challenge. Our model combines cues from multiple video modalities, including static facial features, motion patterns related to the evolution of the human expression over time, and audio information. Specifically, it is composed of three sub-networks trained separately: the first and second extract static visual features and dynamic patterns through 2D and 3D Convolutional Neural Networks (CNNs), while the third consists of a pretrained audio network used to extract deep acoustic features from the video. In the audio branch, we also apply Long Short-Term Memory (LSTM) networks to capture the temporal evolution of the audio features. To identify and exploit possible relationships among the different modalities, we propose a fusion network that merges the cues from the different modalities into a single representation. The proposed architecture outperforms the challenge baselines (38.81% and 40.47%): we achieve accuracies of 50.39% and 49.92% on the validation and test data, respectively.
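
As a rough illustration of the three-branch design, the sketch below assumes each pretrained branch has already been reduced to a fixed-length embedding; the dimensions, the single-layer LSTM, and the 7-class output are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the audio LSTM branch and the late-fusion classifier.
import torch
import torch.nn as nn

class AudioTemporalBranch(nn.Module):
    """Summarizes a sequence of per-frame acoustic features with an LSTM."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)

    def forward(self, audio_seq):            # (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(audio_seq)
        return h_n[-1]                       # last hidden state as embedding

class FusionNet(nn.Module):
    """Merges static-face, motion, and acoustic cues into one representation."""
    def __init__(self, dim_2d=512, dim_3d=512, dim_audio=256, n_classes=7):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(dim_2d + dim_3d + dim_audio, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, feat_2d, feat_3d, feat_audio):
        fused = torch.cat([feat_2d, feat_3d, feat_audio], dim=1)
        return self.classifier(fused)        # emotion logits
```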


ACM Transactions on Multimedia Computing, Communications, and Applications | 2018

Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara

Image captioning has recently gained a lot of attention thanks to the impressive achievements of deep captioning architectures, which combine Convolutional Neural Networks to extract image representations with Recurrent Neural Networks to generate the corresponding captions. At the same time, a significant research effort has been dedicated to the development of saliency prediction models, which can predict human eye fixations. Even though saliency information could be useful for conditioning an image captioning architecture, by indicating what is salient and what is not, research is still struggling to effectively combine the two techniques. In this work, we propose an image captioning approach in which a generative recurrent neural network can focus on different parts of the input image during caption generation, exploiting the conditioning given by a saliency prediction model on which parts of the image are salient and which are contextual. We show, through extensive quantitative and qualitative experiments on large-scale datasets, that our model achieves superior performance with respect to captioning baselines with and without saliency, as well as to different state-of-the-art approaches combining saliency and captioning.
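
One way to read the salient-versus-contextual conditioning is as two attention paths mixed by a learned gate. The sketch below is an illustrative PyTorch rendering of that reading, with assumed dimensions and gating; it is not the paper's exact formulation.

```python
# Hedged sketch: attend separately to salient and contextual regions,
# then mix the two context vectors with a gate computed from the decoder state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyContextAttention(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=512):
        super().__init__()
        self.att = nn.Linear(feat_dim + hidden_dim, 1)  # additive attention score
        self.gate = nn.Linear(hidden_dim, 1)            # salient-vs-context mixer

    def attend(self, feats, mask, h):
        # feats: (B, R, D) region features; mask: (B, R) soft saliency weights;
        # h: (B, H) current decoder hidden state.
        scores = self.att(torch.cat(
            [feats, h.unsqueeze(1).expand(-1, feats.size(1), -1)],
            dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=1) * mask
        alpha = alpha / (alpha.sum(dim=1, keepdim=True) + 1e-8)
        return (alpha.unsqueeze(-1) * feats).sum(dim=1)

    def forward(self, feats, saliency, h):
        ctx_sal = self.attend(feats, saliency, h)        # salient path
        ctx_bg = self.attend(feats, 1.0 - saliency, h)   # contextual path
        g = torch.sigmoid(self.gate(h))                  # per-step mixing weight
        return g * ctx_sal + (1.0 - g) * ctx_bg          # fed to the language RNN
```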


International Conference on Multimedia and Expo | 2017

Visual Saliency for Image Captioning in New Multimedia Services

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara

Image and video captioning are important tasks in visual data analytics, as they concern the ability to describe visual content in natural language. They are the pillars of query-answering systems, improve indexing and search, and enable a natural form of human-machine interaction. Even though promising deep learning strategies are becoming popular, the heterogeneity of large image archives makes this task far from solved. In this paper we explore how visual saliency prediction can support image captioning. Recently, some forms of unsupervised machine attention mechanisms have been spreading, but the role of human attention prediction has never been examined extensively for captioning. We propose a machine attention model driven by saliency prediction to generate captions for images, which can be exploited in many cloud and multimedia services. Experimental evaluations are conducted on the SALICON dataset, which provides ground truth for both saliency and captioning, and on the large Microsoft COCO dataset, the most widely used benchmark for image captioning.
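
A minimal way to let a predicted saliency map drive machine attention is to bias the attention logits before the softmax. The function below is a hedged sketch of that idea, not the model proposed in the paper.

```python
# Hedged sketch: saliency acts as a log-space prior on the attention weights.
import torch
import torch.nn.functional as F

def saliency_biased_attention(att_logits, saliency, feats):
    # att_logits, saliency: (B, R) per-region scores; feats: (B, R, D).
    alpha = F.softmax(att_logits + torch.log(saliency + 1e-8), dim=1)
    return (alpha.unsqueeze(-1) * feats).sum(dim=1)  # context vector for the decoder
```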


International Conference on Image Analysis and Processing | 2017

Towards Video Captioning with Naming: A Novel Dataset and a Multi-modal Approach

Stefano Pini; Marcella Cornia; Lorenzo Baraldi; Rita Cucchiara

Current approaches to movie description lack the ability to name characters with their proper names and can only refer to people with a generic "someone" tag. In this paper we present two contributions towards the development of video description architectures with naming capabilities: first, we collect and release an extension of the popular Montreal Video Annotation Dataset in which the visual appearance of each character is linked both through time and to the textual mentions in captions. We annotate, in a semi-automatic manner, a total of 53k face tracks and 29k textual mentions across 92 movies. Moreover, to highlight and quantify the challenges of generating captions with names, we present different multi-modal approaches that solve the naming problem on already generated captions.
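
One conceivable post-hoc naming strategy on already generated captions is nearest-neighbor matching between face-track embeddings and a gallery of named characters. The sketch below is purely illustrative; the embeddings, the matching rule, and the replacement order are assumptions, not the approaches evaluated in the paper.

```python
# Hedged sketch: replace "someone" tags using cosine similarity between
# face-track embeddings and hypothetical per-character reference embeddings.
import numpy as np

def name_someone_tags(caption, track_embs, gallery):
    """caption: text with 'someone' placeholders;
    track_embs: (n_tracks, d) face-track embeddings from the clip;
    gallery: dict mapping character name -> (d,) reference embedding."""
    names, refs = list(gallery), np.stack(list(gallery.values()))
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    for emb in track_embs:
        if "someone" not in caption:
            break
        sims = refs @ (emb / np.linalg.norm(emb))     # cosine similarities
        caption = caption.replace("someone", names[int(sims.argmax())], 1)
    return caption
```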


Italian Research Conference on Digital Library Management Systems | 2018

Automatic Image Cropping and Selection Using Saliency: An Application to Historical Manuscripts

Marcella Cornia; Stefano Pini; Lorenzo Baraldi; Rita Cucchiara

Automatic image cropping techniques are particularly important for improving the visual quality of cropped images and can be applied to a wide range of applications such as photo editing, image compression, and thumbnail selection. In this paper, we propose a saliency-based image cropping method which produces meaningful crops by relying only on the corresponding saliency maps. Experiments on standard image cropping datasets demonstrate the benefit of the proposed solution with respect to other cropping methods. Moreover, we present an image selection method that can be applied to automatically select the most representative pages of historical manuscripts, thus improving the navigation of historical digital libraries.
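
Relying only on the saliency map reduces cropping to finding the window with the greatest saliency mass. The sketch below shows one plausible realization using an integral image for O(1) window sums; the window size and stride are assumptions, not the paper's method.

```python
# Hedged sketch: exhaustive search for the fixed-size crop that captures
# the most saliency, with rectangle sums from an integral image.
import numpy as np

def best_crop(saliency, crop_h, crop_w, stride=8):
    # Integral image with a zero border so rectangle sums need no special cases.
    ii = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best, best_xy = -np.inf, (0, 0)
    for y in range(0, saliency.shape[0] - crop_h + 1, stride):
        for x in range(0, saliency.shape[1] - crop_w + 1, stride):
            mass = (ii[y + crop_h, x + crop_w] - ii[y, x + crop_w]
                    - ii[y + crop_h, x] + ii[y, x])
            if mass > best:
                best, best_xy = mass, (y, x)
    return best_xy  # top-left corner of the chosen crop
```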


Conference of the Italian Association for Artificial Intelligence | 2017

Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era

Marcella Cornia; Davide Abati; Lorenzo Baraldi; Andrea Palazzi; Simone Calderara; Rita Cucchiara

Estimating the focus of attention of a person looking at an image or a video is a crucial step which can enhance many vision-based inference mechanisms: image segmentation and annotation, video captioning, and autonomous driving are some examples. The early stages of attentive behavior are typically bottom-up; reproducing the same mechanism means finding the saliency embodied in images, i.e., which parts of a visual scene pop out. This process has been studied for decades in neuroscience and through computational models that reproduce the human cortical process. In the last few years, early models have been replaced by deep learning architectures that outperform every earlier approach on public datasets. In this paper, we discuss why convolutional neural networks (CNNs) are so accurate at saliency prediction. We present our deep architectures, which combine bottom-up cues with higher-level semantics and incorporate the concept of time into the attentional process through LSTM recurrent architectures. Finally, we present a video-specific architecture based on the C3D network, which extracts spatio-temporal features by means of 3D convolutions to model task-driven attentive behaviors. The merit of this work is to show that these deep networks are not mere brute-force methods tuned on massive amounts of data, but well-defined architectures which closely recall early saliency models, improved with the semantics learned from human ground truth.
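
To make the 3D-convolution point concrete, the sketch below shows a toy C3D-style encoder whose kernels span time as well as space, so motion is encoded directly; the layer sizes are illustrative and not the actual C3D configuration.

```python
# Hedged sketch of spatio-temporal feature extraction with 3D convolutions.
import torch
import torch.nn as nn

class Tiny3DEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),   # kernels span (T, H, W)
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool space, keep time
            nn.Conv3d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),                  # now pool time as well
        )

    def forward(self, clip):            # clip: (B, 3, T, H, W)
        return self.net(clip)           # spatio-temporal feature volume
```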


IEEE Transactions on Image Processing | 2016

Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara


Computer Vision and Pattern Recognition | 2018

SAM: Pushing the Limits of Saliency Prediction Models

Marcella Cornia; Lorenzo Baraldi; Giuseppe Serra; Rita Cucchiara

Collaboration


Dive into Marcella Cornia's collaborations.

Top Co-Authors

Lorenzo Baraldi, University of Modena and Reggio Emilia
Rita Cucchiara, University of Modena and Reggio Emilia
Giuseppe Serra, University of Modena and Reggio Emilia
Stefano Pini, University of Modena and Reggio Emilia
Andrea Palazzi, University of Modena and Reggio Emilia
Davide Abati, University of Modena and Reggio Emilia
Simone Calderara, University of Modena and Reggio Emilia
Stefano Alletto, University of Modena and Reggio Emilia