
Publication


Featured research published by Stefano Mizzaro.


Journal of the Association for Information Science and Technology | 1997

Relevance: the whole history

Stefano Mizzaro

Relevance is a fundamental, though not completely understood, concept for documentation, information science, and information retrieval. This article presents the history of relevance through an exhaustive review of the literature. Since this history is very complex (about 160 papers are discussed), it is not simple to describe in a comprehensible way. Thus, a framework for establishing a common ground is defined first, and the history itself is then illustrated by presenting the papers on relevance in chronological order. The history is divided into three periods (“Before 1958,” “1959–1976,” and “1977–present”) and, within each period, the papers on relevance are analyzed under seven different aspects (methodological foundations, different kinds of relevance, beyond-topical criteria adopted by users, modes for expression of the relevance judgment, dynamic nature of relevance, types of document representation, and agreement among different judges).


Interacting with Computers | 1998

How many relevances in information retrieval?

Stefano Mizzaro

The aim of an information retrieval system is to find relevant documents; thus relevance is a (if not ‘the’) central concept of information retrieval. Notwithstanding its importance, and the huge amount of past research on this topic, relevance is not yet a well-understood concept, in part because of inconsistently used terminology. In this paper, I try to clarify this issue by classifying the various kinds of relevance. I show that: (i) there are many kinds of relevance, not just one; (ii) these kinds can be classified in a formally defined four-dimensional space; and (iii) such a classification helps us to understand the nature of relevance and relevance judgement. Finally, the consequences of this classification for the design and evaluation of information retrieval systems are analysed.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 1996

Evaluating user interfaces to information retrieval systems: a case study on user support

Giorgio Brajnik; Stefano Mizzaro; Carlo Tasso

Designing good user interfaces to information retrieval systems is a complex activity. The design space is large and evaluation methodologies that go beyond the classical precision and recall figures are not well established. In this paper we present an evaluation of an intelligent interface that covers also the user-system interaction and measures users satisfaction. More specifically, we describe an experiment that evaluates: (i) the added value of the semiautomatic query reformulation implemented in a prototype system; (ii) the importance of technical, terminological, and strategic supports and (iii) the best way to provide them. The interpretation of results leads to guidelines for the design of user interfaces to information retrieval systems and to some observations on the evaluation issue.


Information Processing and Management | 2012

Using crowdsourcing for TREC relevance assessment

Omar Alonso; Stefano Mizzaro

Crowdsourcing has recently gained a lot of attention as a tool for conducting different kinds of relevance evaluations. At a very high level, crowdsourcing describes outsourcing of tasks to a large group of people instead of assigning such tasks to an in-house employee. This crowdsourcing approach makes it possible to conduct information retrieval experiments extremely fast, with good results at a low cost. This paper reports on the first attempts to combine crowdsourcing and TREC: our aim is to validate the use of crowdsourcing for relevance assessment. To this aim, we use the Amazon Mechanical Turk crowdsourcing platform to run experiments on TREC data, evaluate the outcomes, and discuss the results. We place emphasis on the experiment design, execution, and quality control needed to gather useful results, with particular attention to the issue of agreement among assessors. Our position, supported by the experimental results, is that crowdsourcing is a cheap, quick, and reliable alternative for relevance assessment.
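The agreement issue the authors stress can be made concrete with a small illustration (this is a generic sketch with invented labels, not the paper's actual analysis): simple percentage agreement and Cohen's kappa between two hypothetical sets of binary relevance labels, such as crowd labels versus official TREC qrels.

```python
from collections import Counter

def percentage_agreement(a, b):
    """Fraction of documents on which two assessors give the same label."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two assessors' label lists."""
    n = len(a)
    p_o = percentage_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if the two assessors labelled independently at random
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: 1 = relevant, 0 = non-relevant
turk = [1, 1, 0, 1, 0, 0, 1, 0]   # crowd worker
trec = [1, 0, 0, 1, 0, 1, 1, 0]   # official assessor
print(percentage_agreement(turk, trec))       # 0.75
print(round(cohens_kappa(turk, trec), 3))     # 0.5
```

Kappa corrects raw agreement for chance, which matters when one label (e.g. non-relevant) dominates the pool.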


Journal of the Association for Information Science and Technology | 2003

Quality control in scholarly publishing: A new proposal

Stefano Mizzaro

The Internet has fostered a faster, more interactive and effective model of scholarly publishing. However, as the quantity of information available is constantly increasing, its quality is threatened, since the traditional quality control mechanism of peer review is often not used (e.g., in online repositories of preprints, and by people publishing whatever they want on their Web pages). This paper describes a new kind of electronic scholarly journal, in which the standard submission-review-publication process is replaced by a more sophisticated approach, based on judgments expressed by the readers: in this way, each reader is, potentially, a peer reviewer. New ingredients, not found in similar approaches, are that each reader's judgment is weighted on the basis of the reader's skills as a reviewer, and that readers are encouraged to express correct judgments by a feedback mechanism that estimates their own quality. The new electronic scholarly journal is described in both intuitive and formal ways. Its effectiveness is tested by several laboratory experiments that simulate what might happen if the system were deployed and used.
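The weighting-plus-feedback idea can be caricatured in a few lines. This is a toy sketch only: the aggregation and the skill-update rule below are invented for illustration and are not the paper's formal model.

```python
def paper_score(judgments, skills):
    """Aggregate reader judgments in [0, 1], weighting each reader by
    their estimated reviewing skill (toy version of the idea)."""
    total = sum(skills[r] for r in judgments)
    return sum(skills[r] * j for r, j in judgments.items()) / total

def update_skills(judgments, skills, consensus, rate=0.1):
    """Feedback step (hypothetical rule): nudge a reader's skill up when
    their judgment is close to the aggregate, down when it is far off."""
    for r, j in judgments.items():
        skills[r] += rate * (0.5 - abs(j - consensus))
    return skills

skills = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
judgments = {"alice": 0.9, "bob": 0.8, "carol": 0.1}
score = paper_score(judgments, skills)
skills = update_skills(judgments, skills, score)
# Carol's outlying judgment lowers her weight relative to Alice's
print(round(score, 2), skills["carol"] < skills["alice"])  # 0.6 True
```

Over repeated rounds, such a rule gives more influence to readers whose judgments track the consensus, which is the intuition behind weighting each reader as a reviewer.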


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2007

HITS hits TREC: exploring IR evaluation results with network analysis

Stefano Mizzaro; Stephen E. Robertson

We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, which we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well-known, and somewhat generalized, indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures a system's performance, that the hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case.
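The kind of computation the abstract describes can be sketched as weighted HITS run by power iteration on a topics-by-systems matrix of normalized average precision. The matrix below is invented for illustration (it is not TREC 8 data, and the normalization details of the paper are omitted).

```python
import numpy as np

# Hypothetical normalized-AP matrix: rows = topics, columns = systems.
# W[t, s] is how well system s did on topic t (values invented).
W = np.array([
    [0.9, 0.8, 0.7],
    [0.6, 0.2, 0.5],
    [0.3, 0.1, 0.4],
])

def hits(W, iters=100):
    """Power iteration for HITS on a weighted bipartite graph.
    Returns (authority scores of systems, hub scores of topics)."""
    n_topics, n_systems = W.shape
    a = np.ones(n_systems)   # authority (systems)
    h = np.ones(n_topics)    # hubness (topics)
    for _ in range(iters):
        a = W.T @ h          # a system is authoritative if good topics endorse it
        a /= np.linalg.norm(a)
        h = W @ a            # a topic is a good hub if it rewards authoritative systems
        h /= np.linalg.norm(h)
    return a, h

authority, hubness = hits(W)
# Systems ranked by authority; topics ranked by hubness
print(np.argsort(-authority), np.argsort(-hubness))  # [0 2 1] [0 1 2]
```

In this framing, authority plays the role of an effectiveness score for systems, while hubness of a topic reflects how well it separates stronger from weaker systems.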


Journal of the Association for Information Science and Technology | 2002

Strategic help in user interfaces for information retrieval

Giorgio Brajnik; Stefano Mizzaro; Carlo Tasso

Brajnik et al. describe their view of an effective retrieval interface: one that coaches the searcher using stored knowledge not only of database structure, but also of strategic situations likely to occur, such as repeating failed tactics in a low-return search or failing to try relevance feedback techniques. The emphasis is on the system suggesting search strategy improvements by relating them to an analysis of the work entered so far and selecting and ranking those found relevant. FIRE is an interface utilizing these techniques. It allows the user to assign documents to useful, topical, and trash folders; maintains thesaurus files automatically searchable on query terms; and, using user entries and a rule system, builds a picture of the retrieval situation from which it generates suggestions.


ACM Transactions on Information Systems | 2009

A few good topics: Experiments in topic set reduction for retrieval evaluation

John Guiver; Stefano Mizzaro; Stephen E. Robertson

We consider the issue of evaluating information retrieval systems on the basis of a limited number of topics. In contrast to statistically-based work on sample sizes, we hypothesize that some topics or topic sets are better than others at predicting true system effectiveness, and that with the right choice of topics, accurate predictions can be obtained from small topics sets. Using a variety of effectiveness metrics and measures of goodness of prediction, a study of a set of TREC and NTCIR results confirms this hypothesis, and provides evidence that the value of a topic set for this purpose does generalize.
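The hypothesis can be illustrated with a toy computation (the AP scores below are invented, not the TREC/NTCIR data studied in the paper): rank systems by mean AP over a small topic subset and compare that ranking with the full-set ranking via Kendall's tau.

```python
from itertools import combinations

def mean_scores(ap, topics):
    """Mean average precision of each system over the given topic subset."""
    return [sum(row[t] for t in topics) / len(topics) for row in ap]

def kendall_tau(x, y):
    """Rank correlation between two score lists (no tie handling)."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(
        1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1 for i, j in pairs
    )
    return concordant / len(pairs)

# Hypothetical AP scores: rows = systems, columns = topics.
ap = [
    [0.9, 0.4, 0.8, 0.3],
    [0.7, 0.5, 0.6, 0.4],
    [0.2, 0.3, 0.1, 0.2],
]
full = mean_scores(ap, range(4))
subset = mean_scores(ap, [0, 2])  # a hypothetical "good" two-topic subset
print(kendall_tau(full, subset))  # 1.0 -> subset preserves the full ranking
```

Searching over subsets for the one maximizing such a correlation is the intuition behind finding "a few good topics" that predict full-set system effectiveness.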


IEEE Intelligent Systems | 2010

The Context-Aware Browser

Paolo Coppola; V. Della Mea; L. Di Gaspero; Davide Menegon; Danny Mischis; Stefano Mizzaro; Ivan Scagnetto; Luca Vassena

The typical scenario of a user seeking information on the Web requires significant effort to get the desired information. In a world where information is essential, it can be crucial for users to get the desired information quickly, even when they are away from their desktop computers. The Context-Aware Browser for mobile devices senses the surrounding environment, infers the user's current context, and proactively searches for and activates relevant Web documents and applications.


Journal of the Association for Information Science and Technology | 2004

Measuring retrieval effectiveness: a new proposal and a first experimental validation

Vincenzo Della Mea; Stefano Mizzaro

Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text Retrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems with precision and recall are also highlighted and discussed.
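A minimal sketch of the idea behind ADM, assuming graded relevance scores normalized to [0, 1] (the paper's formal definition and its TREC validation are richer than this): effectiveness is one minus the average distance between the system's relevance estimates and the user's.

```python
def adm(system_scores, user_scores):
    """Average distance measure: 1 minus the mean absolute difference
    between the system's and the user's relevance estimates in [0, 1]."""
    assert len(system_scores) == len(user_scores)
    n = len(system_scores)
    return 1 - sum(abs(s - u) for s, u in zip(system_scores, user_scores)) / n

# Hypothetical graded judgments for five documents
system = [0.9, 0.7, 0.2, 0.5, 0.0]
user   = [1.0, 0.5, 0.0, 0.5, 0.1]
print(round(adm(system, user), 2))  # 0.88
```

Unlike binary precision and recall, this rewards a system for being close to the user's graded judgment even when the two do not agree exactly.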
