Gheorghe Muresan
Rutgers University
Publications
Featured research published by Gheorghe Muresan.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2003
Nicholas J. Belkin; Diane Kelly; Gwui Cheol Kim; Ja-Young Kim; Hyuk-Jin Lee; Gheorghe Muresan; Muh-Chyun Tang; Xiaojun Yuan; Colleen Cool
Query length in best-match information retrieval (IR) systems is well known to be positively related to effectiveness in the IR task, when measured in experimental, non-interactive environments. However, in operational, interactive IR systems, query length is typically very short, on the order of two to three words. We report on a study which tested the effectiveness of a particular query elicitation technique in increasing initial searcher query length, the effectiveness of queries elicited using this technique, and the relationship in general between query length and search effectiveness in interactive IR. Results show that the specific technique results in longer queries than a standard query elicitation technique, that this technique is indeed usable, that the technique results in increased user satisfaction with the search, and that query length is positively correlated with user satisfaction with the search.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2006
Ryen W. White; Gheorghe Muresan; Gary Marchionini
Exploratory search systems (ESS) are designed to help users move beyond simply finding information toward using that information to support learning, analysis, and decision-making. The evaluation of the interactive systems designed specifically to help exploratory searchers is a challenging area, worthy of further discussion in the research community. In this article we report on a workshop conducted in conjunction with the ACM SIGIR Conference in Seattle, USA, in August 2006. The workshop involved researchers, academics, and practitioners discussing the formative and summative evaluation of ESS.
Information Processing & Management | 2008
Ryen W. White; Gary Marchionini; Gheorghe Muresan
Online search has become an increasingly important part of the everyday lives of most computer users. Generally, popular search tools support users well; however, in situations where the search problem is poorly defined, the information seeker is unfamiliar with the problem domain, or the search task requires some exploration or the consideration of multiple perspectives, such tools may not operate as effectively. To address situations where technology may not meet their needs, users have developed coping strategies involving the submission of multiple queries and the interactive exploration of the retrieved document space, selectively following links and passively obtaining cues about where their next steps lie. This is an example of exploratory search behavior, and comprises a mixture of serendipity, learning, and investigation [7].
Hawaii International Conference on System Sciences | 2006
Gheorghe Muresan; Catherine L. Smith; Michael J. Cole; Lu Liu; Nicholas J. Belkin
We report on the effectiveness of language models for personalization of retrieval results based on a searcher’s preference for document genre. In principle, such preferences can be obtained via implicit relevance feedback through the observation of the searcher’s actions and behavior during search sessions. While our approach did not produce significant improvement to retrieval effectiveness, the methodology and experimental setting can and are being used for further work on exploring genre-based personalization.
Hawaii International Conference on System Sciences | 2006
Gheorghe Muresan; Michael J. Cole; Catherine L. Smith; Lu Liu; Nicholas J. Belkin
We report on an evaluation of the effectiveness of considering a user's familiarity with a topic in improving information retrieval performance. This approach to personalization is based on previous results indicating differences in user search behavior and judgments according to the user's familiarity with the topic explored, and on research on using implicit sources of evidence to determine the user's context and preferences. Our attempt was to relate a topic-dependent concept and measure, familiarity with the topic, to topic-independent measures of documents such as readability, concreteness/abstractness, and specificity/generality. Contrary to our expectations, a user's familiarity with a topic has no effect on the utility of readability or concrete/abstract scoring. We are encouraged, however, to find that high readability had a positive effect on search results, regardless of a user's familiarity with a topic.
Proceedings of the ASIST Annual Meeting | 2006
Edie Rasmussen; Elaine G. Toms; Bernard J. Jansen; Gheorghe Muresan
The focus of this panel is on methodologies and measures for evaluating information retrieval (IR) systems from a human-centred perspective. Current research especially with regard to search engines is challenged by “Internet time” – the need for near instantaneous results that are also reliable and valid. The session will begin with an assessment of the current status of IR evaluation, followed by presentations on emerging methods used in recent evaluations, as well as on types of data collected and the measures used for analysis. A discussion among the panelists and the audience will critique current methods, and suggest how those methods may be enhanced. The outcome from this panel will be a fresh critical examination of IR evaluation methods.
Proceedings of the ASIST Annual Meeting | 2007
Gheorghe Muresan; Dmitri Roussinov
This paper describes a framework for investigating the quality of different query expansion approaches, and applies it in the HARD TREC experimental setting. The intuition behind our approach is that each topic has an optimal term-based representation, i.e. a set of terms that best describe it, and that the effectiveness of any other representation is correlated with the overlap it has with the optimal representation. Indeed, we find that, for a large number of candidate topic representations obtained through various query-expansion approaches, there is a high correlation between standard effectiveness measures (R-P, P@10, MAP) and term overlap with what is estimated to be the optimal representation. An important conclusion of comparing different query expansion approaches is that machines are better than humans at doing statistical calculations and at estimating which query terms are more likely to discriminate documents relevant to a given topic. This explains why, in the HARD track of TREC 2005, the overall conclusion was that interaction with the searcher and elicitation of additional information could not outperform automatic procedures for query improvement. However, the best results are obtained from hybrid approaches, in which human relevance judgments are used by algorithms for deriving term representations. This result suggests that the best approach to improving retrieval performance is probably to focus on implicit relevance feedback and novel interaction models based on ostension or mediation, which have shown great potential.
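The term-overlap measure at the heart of this framework can be sketched as follows. The Jaccard coefficient and the example term sets are illustrative assumptions, since the abstract does not specify the exact overlap measure:

```python
def term_overlap(candidate, optimal):
    """Jaccard overlap between two term sets: |A & B| / |A | B|."""
    candidate, optimal = set(candidate), set(optimal)
    if not candidate | optimal:
        return 0.0
    return len(candidate & optimal) / len(candidate | optimal)

# Hypothetical "optimal" representation of a topic, and two candidate
# expansions (one automatic, one human-elicited) -- invented examples:
optimal = {"jaguar", "cat", "habitat", "predator", "rainforest"}
prf_terms = {"jaguar", "cat", "predator", "car"}   # pseudo-relevance feedback
human_terms = {"jaguar", "animal", "wildlife"}     # searcher elicitation

print(term_overlap(prf_terms, optimal))    # 0.5
print(term_overlap(human_terms, optimal))  # ~0.14
```

In the paper's framework, a candidate set with higher overlap against the estimated optimal set would be expected to yield better retrieval effectiveness.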
Proceedings of the ASIST Annual Meeting | 2006
Gheorghe Muresan; Lu Liu; Michael J. Cole; Catherine L. Smith; Nicholas J. Belkin
We report on an evaluation of the relationship between document readability, an objective measure related to the length and complexity of words and sentences, and the subjective perception of document relevance of users with a certain level of familiarity with a topic. The research reported here, follow-up work to our TREC 2004 effort, tries to explain why all our TREC hypotheses were rejected. While trying to understand what was wrong with our intuition, we propose and test new hypotheses. The main conclusion is that readability may improve the chances that a document is judged relevant, which suggests the use of "blind readability feedback", i.e. boosting the ranking of highly readable documents when performing a search in order to improve retrieval performance.
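A readability measure based on word and sentence length, in the spirit of the objective measure described above, can be sketched with the classic Flesch Reading Ease formula. The crude vowel-group syllable heuristic is an assumption for illustration; production readability tools are more careful:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels; real tools use pronunciation
    # dictionaries. Every word counts as at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

score = flesch_reading_ease("The cat sat. The dog ran.")
```

"Blind readability feedback" as described above would then boost the rank of documents with high scores on a measure like this, without consulting the user.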
Proceedings of the ASIST Annual Meeting | 2008
Dmitri Roussinov; Gheorghe Muresan
We report an investigation of techniques for mining the World Wide Web in order to identify terms (single words or phrases) that are highly related to a topic (query) described by a short (one-sentence or paragraph-long) interest statement. These terms are subsequently used to improve automated document retrieval. Following a standard testing methodology, we established that our technique improves the effectiveness of retrieval by up to 8% over BM25 combined with pseudo-relevance feedback, which is currently known to be one of the best ranking functions, and was indeed the strongest baseline in our studies.
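For reference, the BM25 baseline mentioned above scores a document for a query roughly as follows. This is a minimal sketch of Okapi BM25 (with the common +1 smoothing inside the idf log), not the authors' implementation, and the toy corpus is invented:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, docs, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (a list of terms) for a query,
    against a small in-memory corpus `docs` (list of term lists)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in docs if t in d)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # +1 keeps idf positive
        denom = tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf[t] * (k1 + 1) / denom
    return score

# Invented toy corpus and query:
docs = [["jaguar", "cat", "habitat"],
        ["car", "engine", "speed"],
        ["cat", "pet", "food"]]
query = ["jaguar", "cat"]
matching = bm25_score(query, docs[0], docs)
non_matching = bm25_score(query, docs[1], docs)
```

Pseudo-relevance feedback, as in the baseline above, would then add highly weighted terms from the top-ranked documents to the query and rescore.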
Proceedings of the ASIST Annual Meeting | 2007
Gheorghe Muresan
This poster describes a framework for investigating the effectiveness of query expansion term sets and reports the results of an investigation on the quality of query expansion terms coming from different sources: pseudo-relevance feedback, web-based expansion, interactive elicitations from human searchers, and expansion approaches based on query clarity. The conclusion regarding the experimental framework is that certain different evaluation approaches show a substantial level of correlation, and can therefore be used interchangeably according to convenience considerations. With regard to the actual comparison of different sources of expansion terms, the conclusion is that machines are better than humans at doing statistical calculations and at estimating which query terms are more likely to discriminate documents relevant to a given topic. One consequence is a recommendation for research into implicit relevance feedback approaches and novel interaction models based on ostension or mediation, which have shown great potential.
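The query clarity measure mentioned as one source of expansion evidence is commonly computed as the KL divergence between a query (feedback) language model and the collection language model. The sketch below assumes that standard formulation and invented toy data, since the poster does not give its exact formulation:

```python
import math
from collections import Counter

def clarity(feedback_docs, collection):
    """Query clarity: KL divergence (in bits) between the language model of
    the feedback documents and the collection language model. Assumes every
    term in the feedback docs also occurs in the collection."""
    q_counts = Counter(t for d in feedback_docs for t in d)
    c_counts = Counter(t for d in collection for t in d)
    q_total, c_total = sum(q_counts.values()), sum(c_counts.values())
    score = 0.0
    for t, n in q_counts.items():
        p_q = n / q_total
        p_c = c_counts[t] / c_total
        score += p_q * math.log2(p_q / p_c)
    return score

# Invented toy collection; the feedback set is drawn from it:
collection = [["cat", "dog", "cat"], ["dog", "fish"], ["cat", "bird"]]
focused_clarity = clarity([collection[0]], collection)
```

A sharply focused feedback set diverges from the collection model and yields a high clarity score; a diffuse one scores near zero, signalling an ambiguous query.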