Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maria Maistro is active.

Publication


Featured research published by Maria Maistro.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

Injecting user models and time into precision via Markov chains

Marco Ferrante; Nicola Ferro; Maria Maistro

We propose a family of new evaluation measures, called Markov Precision (MP), which exploits continuous-time and discrete-time Markov chains in order to inject user models into precision. Continuous-time MP behaves like time-calibrated measures, bringing the time spent by the user into the evaluation of a system; discrete-time MP behaves like traditional evaluation measures. Being part of the same Markovian framework, the time-based and rank-based versions of MP produce values that are directly comparable. We show that it is possible to re-create average precision using specific user models and this helps in providing an explanation of Average Precision (AP) in terms of user models more realistic than the ones currently used to justify it. We also propose several alternative models that take into account different possible behaviors in scanning a ranked result list. Finally, we conduct a thorough experimental evaluation of MP on standard TREC collections in order to show that MP is as reliable as other measures and we provide an example of calibration of its time parameters based on click logs from Yandex.
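The discrete-time idea can be pictured with a minimal sketch, assuming a much simpler user model than the paper's: a hypothetical user scans the ranking top-down, continues to the next rank with a fixed probability, and precision is averaged over the random rank at which the user stops.

```python
# Illustrative sketch only: a one-parameter forward-scan Markov chain,
# not the paper's Markov Precision formulation.

def expected_precision(relevance, p_forward=0.8):
    """Expected precision at the (random) rank where the user stops.

    relevance: list of 0/1 judgements in rank order.  The user examines
    rank 1, then moves to the next rank with probability p_forward or
    stops with probability 1 - p_forward; at the last rank she stops.
    """
    score, rel_so_far, p_reach = 0.0, 0, 1.0
    for i, rel in enumerate(relevance, start=1):
        rel_so_far += rel
        p_stop = p_reach * ((1 - p_forward) if i < len(relevance) else 1.0)
        score += p_stop * rel_so_far / i
        p_reach *= p_forward
    return score

print(round(expected_precision([1, 0, 1, 1, 0]), 4))  # → 0.6879
```

Because the stopping probabilities sum to one, a ranking with only relevant documents scores exactly 1, and lowering `p_forward` (an impatient user) shifts the weight toward early ranks.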


International Conference on the Theory of Information Retrieval | 2015

Towards a Formal Framework for Utility-oriented Measurements of Retrieval Effectiveness

Marco Ferrante; Nicola Ferro; Maria Maistro

In this paper we present a formal framework to define and study the properties of utility-oriented measurements of retrieval effectiveness, like AP, RBP, ERR, and many other popular IR evaluation measures. The proposed framework builds on the representational theory of measurement, which provides the foundations of the modern theory of measurement in both the physical and social sciences, thus explicitly linking IR evaluation to a broader context. The proposed framework is minimal, in the sense that it relies on just one axiom, from which the other properties are derived. Finally, it contributes to a better understanding and a clear separation of which issues stem from the inherent difficulty of comparing systems in terms of retrieval effectiveness and which stem from the expected numerical properties of a measurement.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

On Including the User Dynamic in Learning to Rank

Nicola Ferro; Claudio Lucchese; Maria Maistro; Raffaele Perego

Ranking query results effectively by considering users' past behaviour and preferences is a primary concern for IR researchers both in academia and in industry. In this context, Learning to Rank (LtR) is widely believed to be the most effective approach to designing ranking models that account for user-interaction features, which have proved to remarkably impact IR effectiveness. In this paper, we explore the possibility of integrating the user dynamic directly into LtR algorithms. Specifically, we model with Markov chains the behaviour of users in scanning a ranked result list, and we modify LambdaMART, a state-of-the-art LtR algorithm, to exploit a new discount loss function calibrated on the proposed Markovian model of user dynamic. We evaluate the performance of the proposed approach on publicly available LtR datasets, finding that the improvements measured over the standard algorithm are statistically significant.
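As a rough illustration of a Markovian discount (the transition model below is hypothetical, not the one calibrated in the paper), one can derive an NDCG-style discount from the stationary distribution of a toy chain over rank positions instead of the usual logarithmic one:

```python
def stationary_discount(k, p_next=0.6, iters=200):
    """Stationary distribution of a toy user-scan chain over ranks 1..k:
    move to the next rank with probability p_next, otherwise restart at
    rank 1; from the last rank, always return to the top."""
    pi = [1.0 / k] * k
    for _ in range(iters):               # power iteration
        new = [0.0] * k
        for i in range(k):
            if i + 1 < k:
                new[i + 1] += pi[i] * p_next
                new[0] += pi[i] * (1 - p_next)
            else:
                new[0] += pi[i]
        pi = new
    return pi

def markov_dcg(gains, discount):
    """Gain-discount sum, with the Markovian discount replacing 1/log2."""
    return sum(g * d for g, d in zip(gains, discount))
```

The stationary mass concentrates on early ranks (here it is a truncated geometric distribution), so the induced discount decreases with rank and rewards placing high-gain documents at the top, which is the property a LambdaMART-style loss needs.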


Cross Language Evaluation Forum | 2014

Rethinking How to Extend Average Precision to Graded Relevance

Marco Ferrante; Nicola Ferro; Maria Maistro

We present two new measures of retrieval effectiveness, inspired by Graded Average Precision (GAP), which extends Average Precision (AP) to graded relevance judgements. Starting from the random choice of a user, we define Extended Graded Average Precision (xGAP) and Expected Graded Average Precision (eGAP), which are more accurate than GAP in the case of a small number of highly relevant documents with a high probability of being considered relevant by users. The proposed measures are then evaluated on the TREC 10, TREC 14, and TREC 21 collections, showing that they actually grasp a different angle from GAP and that they are robust to incomplete judgements and shallow pools.


Cross Language Evaluation Forum | 2018

Overview of CENTRE@CLEF 2018: A First Tale in the Systematic Reproducibility Realm

Nicola Ferro; Maria Maistro; Tetsuya Sakai; Ian Soboroff

Reproducibility has become increasingly important in many research areas, and IR is no exception: the community has started to investigate reproducibility and its impact on research results. This paper describes our first attempt at a lab on reproducibility, named CENTRE, held during CLEF 2018. The aim of CENTRE is to run a reproducibility challenge across all the major IR evaluation campaigns and to provide the IR community with a venue where previous research results can be explored and discussed. This paper reports the participants' results and preliminary considerations on the first edition of CENTRE@CLEF 2018, as well as some suggestions for future editions.


International Conference on the Theory of Information Retrieval | 2017

LEARning Next gEneration Rankers (LEARNER 2017)

Nicola Ferro; Claudio Lucchese; Maria Maistro; Raffaele Perego

The aim of LEARNER@ICTIR2017 is to investigate new solutions for LtR. In detail, we identify some research areas related to LtR which are of current interest and which have not been fully explored yet. We solicit the submission of position papers on novel LtR algorithms, on the evaluation of LtR algorithms, on dataset creation and curation, and on domain-specific applications of LtR. LEARNER@ICTIR2017 will be a gathering of academics interested in IR, ML, and related application areas. We believe the proposed workshop is relevant to ICTIR since we look for novel contributions to LtR focused on foundational and conceptual aspects, which need to be properly framed and modelled.


ACM Transactions on Information Systems | 2017

AWARE: Exploiting Evaluation Measures to Combine Multiple Assessors

Marco Ferrante; Nicola Ferro; Maria Maistro

We propose the Assessor-driven Weighted Averages for Retrieval Evaluation (AWARE) probabilistic framework, a novel methodology for dealing with multiple crowd assessors that may be contradictory and/or noisy. By modeling relevance judgements and crowd assessors as sources of uncertainty, AWARE takes the expectation of a generic performance measure, like Average Precision, composed with these random variables. In this way, it approaches the problem of aggregating different crowd assessors from a new perspective, that is, directly combining the performance measures computed on the ground truths generated by the crowd assessors instead of adopting some classification technique to merge the labels they produce. We propose several unsupervised estimators that instantiate the AWARE framework and compare them with state-of-the-art approaches, that is, Majority Vote and Expectation Maximization, on TREC collections. We found that AWARE approaches improve the capability of correctly ranking systems and predicting their actual performance scores.
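The core idea — averaging the measure values computed against each assessor's ground truth rather than merging the labels first — can be sketched as follows; uniform weights are just one simple unsupervised instantiation among the several the paper studies.

```python
def average_precision(ranking, qrels):
    """Binary AP of a ranked list against one assessor's judgements."""
    hits, score = 0, 0.0
    n_rel = sum(qrels.values()) or 1
    for i, doc in enumerate(ranking, start=1):
        if qrels.get(doc, 0):
            hits += 1
            score += hits / i
    return score / n_rel

def aware_score(ranking, assessors, weights=None):
    """Weighted average of AP computed on each assessor's ground truth."""
    weights = weights or [1.0 / len(assessors)] * len(assessors)
    return sum(w * average_precision(ranking, qrels)
               for w, qrels in zip(weights, assessors))

# two crowd assessors who disagree on which documents are relevant
assessors = [{"d1": 1, "d3": 1}, {"d2": 1}]
print(round(aware_score(["d1", "d2", "d3"], assessors), 4))  # → 0.6667
```

Note that no single merged label set is ever produced: disagreement between assessors survives as a spread of per-assessor scores, which the weights can then exploit.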


Italian Research Conference on Digital Library Management Systems | 2018

Thirty Years of Digital Libraries Research at the University of Padua: The User Side

Maristella Agosti; Giorgio Maria Di Nunzio; Nicola Ferro; Maria Maistro; Stefano Marchesin; Nicola Orio; Chiara Ponchia; Gianmaria Silvello

For the 30th anniversary of the Information Management Systems (IMS) research group of the University of Padua, we report the main and more recent contributions of the group that focus on the users in the field of Digital Library (DL). In particular, we describe a dynamic and adaptive environment for user engagement with cultural heritage collections, the role of log analysis for studying the interaction between users and DL, and how to model user behaviour.


Conference on Information and Knowledge Management | 2018

Continuation Methods and Curriculum Learning for Learning to Rank

Nicola Ferro; Claudio Lucchese; Maria Maistro; Raffaele Perego

In this paper we explore the use of Continuation Methods and Curriculum Learning techniques in the area of Learning to Rank. The basic idea is to design the training process as a learning path across increasingly complex training instances and objective functions. We propose to instantiate continuation methods in Learning to Rank by changing the IR measure to optimize during training, and we present two different curriculum learning strategies to identify easy training examples. Experimental results show that simple continuation methods are more promising than curriculum learning ones, since they slightly improve the performance of state-of-the-art λ-MART models while providing faster convergence.
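An "easy examples first" curriculum schedule can be sketched as below; the easiness proxy and the stage count are hypothetical choices for illustration, not the strategies evaluated in the paper.

```python
def curriculum_batches(queries, easiness, n_stages=3):
    """Yield training sets of growing size, ordered easy-to-hard.

    queries:  list of query identifiers
    easiness: function mapping a query to a score (higher = easier)
    """
    ordered = sorted(queries, key=easiness, reverse=True)
    for stage in range(1, n_stages + 1):
        yield ordered[: max(1, stage * len(ordered) // n_stages)]

ease = {"q1": 0.2, "q2": 0.9, "q3": 0.5}
stages = list(curriculum_batches(["q1", "q2", "q3"], ease.get))
# the first stage holds only the easiest query; the last covers them all
```

Each stage would then be fed to the ranker's training loop, so early boosting rounds see only instances the model can fit quickly, and harder queries are introduced once a reasonable model exists.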


Companion Proceedings of The Web Conference 2018 (WWW '18) | 2018

A Gamified Approach to Naïve Bayes Classification: A Case Study for Newswires and Systematic Medical Reviews

Giorgio Maria Di Nunzio; Maria Maistro; Federica Vezzani

Supervised machine learning algorithms require a set of labelled examples to be trained; however, labelling is a costly and time-consuming task, carried out by domain experts who label the dataset by means of an iterative process to filter out non-relevant objects. In this paper, we describe a set of experiments that use gamification techniques to transform this labelling task into an interactive learning process where users can cooperate to achieve a common goal. To this end, we first use a geometrical interpretation of Naïve Bayes (NB) classifiers to create an intuitive visualization of the current state of the system and let the user change some of its parameters directly, as part of a game. We apply this visualization technique to the classification of newswires and report the results of experiments conducted with different groups of people: PhD students, Master's students, and the general public. Then, we present a preliminary experiment on query rewriting for systematic reviews in a medical scenario, which uses gamification techniques to collect different formulations of the same query. Both experiments show how gamification approaches help to engage users in abstract tasks that might be hard to understand and/or boring to perform.
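One way to picture such a geometrical interpretation (a sketch under assumed toy word probabilities, not the paper's data): each document becomes a two-dimensional point of class log-likelihoods, and the decision boundary is a line whose slope and intercept a player could move as part of the game.

```python
import math

def log_likelihood(doc, word_probs):
    """Sum of log word probabilities under one class, with a tiny floor
    for unseen words (a hypothetical smoothing choice)."""
    return sum(math.log(word_probs.get(w, 1e-6)) for w in doc)

def classify(doc, probs_rel, probs_nonrel, m=1.0, q=0.0):
    """Place the document at (x, y) = (non-relevant, relevant)
    log-likelihoods and compare it against the line y = m*x + q,
    whose parameters the player adjusts."""
    x = log_likelihood(doc, probs_nonrel)
    y = log_likelihood(doc, probs_rel)
    return "relevant" if y > m * x + q else "non-relevant"

print(classify(["markov", "chain"],
               {"markov": 0.9, "chain": 0.8},
               {"markov": 0.1, "chain": 0.2}))  # → relevant
```

With m = 1 and q = 0 this reduces to the usual NB likelihood-ratio test with equal priors; letting users drag the line is what turns parameter tuning into a visible, game-like action.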

Collaboration


Dive into Maria Maistro's collaborations.

Top Co-Authors

Claudio Lucchese
Istituto di Scienza e Tecnologie dell'Informazione

Raffaele Perego
Istituto di Scienza e Tecnologie dell'Informazione

Ian Soboroff
National Institute of Standards and Technology