Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Seyyed Hadi Hashemi is active.

Publication


Featured research published by Seyyed Hadi Hashemi.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2015

On the Reusability of Open Test Collections

Seyyed Hadi Hashemi; Charles L. A. Clarke; Adriel Dean-Hall; Jaap Kamps; Julia Kiseleva

Creating test collections for modern search tasks is increasingly challenging due to the growing scale and dynamic nature of content, and the need for richer contextualization of statements of request. To address these issues, the TREC Contextual Suggestion Track explored an open test collection, where participants were allowed to submit any web page as a result for a personalized venue recommendation task. This prompts questions about the reusability of the resulting test collection: How does the open nature affect the pooling process? Can participants reliably evaluate variant runs with the resulting qrels? Can other teams reliably evaluate new runs? In short, does the set of pooled and judged documents effectively produce a post hoc test collection? Our main findings are as follows. First, while there is a strongly significant rank correlation, the effect of pooling is notable and results in underestimation of performance, implying that non-pooled systems should be evaluated with great care. Second, we extensively analyze the impact of the open corpus on the fraction of judged documents, explaining how low recall affects reusability, and how personalization and low pooling depth aggravate the problem. Third, we outline a potential solution by deriving a fixed corpus from the open web submissions.
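The pooling bias the abstract describes can be illustrated with a small toy example. Everything here is invented for illustration (run names, document ids, pooling depth of 3): runs that contributed to the pool keep their scores, while a new, non-pooled run is underestimated because its unjudged documents are treated as non-relevant.

```python
# Toy illustration of pooling bias: unjudged documents count as non-relevant,
# so a run that did not contribute to the pool is underestimated.
# All run names, document ids, and the pooling depth are hypothetical.
runs = {
    "sysA": ["d1", "d2", "d3", "d4", "d5"],   # contributed to the pool
    "sysB": ["d2", "d1", "d6", "d3", "d7"],   # contributed to the pool
    "newS": ["d8", "d1", "d9", "d2", "d10"],  # evaluated after the fact
}
relevant = {"d1", "d2", "d3", "d8", "d9"}     # "true" relevance, for comparison

# Pool the top-3 documents of the pooling systems only.
pool = set()
for name in ("sysA", "sysB"):
    pool.update(runs[name][:3])
pooled_qrels = relevant & pool  # only pooled documents ever get judged

def p_at_5(run, qrels):
    """Precision@5, treating any unjudged document as non-relevant."""
    return sum(1 for d in run[:5] if d in qrels) / 5

full = {name: p_at_5(run, relevant) for name, run in runs.items()}
pooled = {name: p_at_5(run, pooled_qrels) for name, run in runs.items()}

# Pooled systems are unaffected, but the non-pooled run loses half its score.
print(full)    # {'sysA': 0.6, 'sysB': 0.6, 'newS': 0.8}
print(pooled)  # {'sysA': 0.6, 'sysB': 0.6, 'newS': 0.4}
```

The rank correlation between the two rankings can still be significant even while the non-pooled system's absolute score drops, which is exactly the underestimation effect noted in the findings.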


Conference on Information and Knowledge Management | 2018

Measuring User Satisfaction on Smart Speaker Intelligent Assistants Using Intent Sensitive Query Embeddings

Seyyed Hadi Hashemi; Kyle Williams; Ahmed El Kholy; Imed Zitouni; Paul A. Crook

Intelligent assistants are increasingly being used on smart speaker devices, such as Amazon Echo, Google Home, Apple HomePod, and the Harman Kardon Invoke with Cortana. Typically, user satisfaction measurement relies on user interaction signals, such as clicks and scroll movements, to determine whether a user was satisfied. However, these signals do not exist for smart speakers, which creates a challenge for user satisfaction evaluation on these devices. In this paper, we propose a new signal, user intent, as a means to measure user satisfaction. We use this signal to model user satisfaction in two ways: 1) by developing intent-sensitive word embeddings and using sequences of these intent-sensitive query representations to measure user satisfaction; 2) by representing a user's interactions with a smart speaker as a sequence of user intents and using this sequence to identify user satisfaction. Our experimental results indicate that our proposed user satisfaction models based on intent-sensitive query representations yield statistically significant improvements over several baselines in terms of common classification evaluation metrics. In particular, our task satisfaction prediction model based on intent-sensitive word embeddings achieves an 11.81% improvement over a generative model baseline and a 6.63% improvement over a user satisfaction prediction model based on Skip-gram word embeddings in terms of the F1 metric. Our findings have implications for the evaluation of Intelligent Assistant systems.
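A minimal sketch of the second modeling idea, representing an interaction as a sequence of user intents. The intent names below are hypothetical and the representation is a simple bag of intents; the paper's actual models are learned from interaction logs and keep sequence order.

```python
import numpy as np

# Hypothetical intent inventory; a real taxonomy comes from the IA's logs.
INTENTS = ["play_music", "set_alarm", "get_weather", "repeat", "cancel"]
IDX = {name: i for i, name in enumerate(INTENTS)}

def session_vector(intent_sequence):
    """Normalized bag-of-intents representation of one smart-speaker session.

    A sequence model (e.g. an RNN over intent embeddings) would preserve
    order; this sketch only keeps relative intent frequencies.
    """
    v = np.zeros(len(INTENTS))
    for intent in intent_sequence:
        v[IDX[intent]] += 1.0
    return v / max(len(intent_sequence), 1)

# A session dominated by "repeat"/"cancel" intents plausibly signals
# dissatisfaction, which a downstream classifier could learn to pick up.
happy = session_vector(["play_music", "set_alarm"])
unhappy = session_vector(["play_music", "repeat", "repeat", "cancel"])
print(unhappy[IDX["repeat"]])  # 0.5
```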


Conference on Information and Knowledge Management | 2018

Impact of Domain and User's Learning Phase on Task and Session Identification in Smart Speaker Intelligent Assistants

Seyyed Hadi Hashemi; Kyle Williams; Ahmed El Kholy; Imed Zitouni; Paul A. Crook

Task and session identification is a key element of system evaluation and user behavior modeling in Intelligent Assistant (IA) systems. However, identifying tasks and sessions for IAs is challenging due to the multi-task nature of IAs and the differences in the ways they are used on different platforms, such as smartphones, cars, and smart speakers. Furthermore, usage behavior may differ among users depending on their expertise with the system and the tasks they are interested in performing. In this study, we investigate how to identify tasks and sessions in IAs given these differences. To do this, we analyze data from the interaction logs of two IAs integrated with smart speakers. We fit Gaussian Mixture Models to estimate task and session boundaries and show that a model with 3 components fits user inter-activity time better than a model with 2 components. We then show how session boundaries differ for users depending on whether or not they are in a learning phase. Finally, we study how user inter-activity times differ depending on the task the user is trying to perform. Our findings show that there is no single task or session boundary that can be used for IA evaluation. Instead, these boundaries are influenced by the experience of the user and the task they are trying to perform. Our findings have implications for the study and evaluation of Intelligent Assistant systems.
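The model-selection step described above (2 vs. 3 mixture components for inter-activity times) can be sketched as follows. This is a minimal 1-D EM implementation compared via BIC on synthetic data, not the paper's pipeline; the simulated time scales and cluster labels are made up.

```python
import numpy as np

def gmm_bic_1d(x, k, n_iter=200):
    """Fit a 1-D Gaussian mixture with k components via EM; return its BIC."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # deterministic init
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    loglik = np.log(dens.sum(axis=1)).sum()
    n_params = 3 * k - 1  # k means, k variances, k-1 free weights
    return n_params * np.log(n) - 2 * loglik

# Synthetic log inter-activity times with three modes (e.g. within-task
# pauses, within-session gaps, between-session gaps); the scales are invented.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.5, 300),
                    rng.normal(4, 0.5, 300),
                    rng.normal(8, 0.5, 300)])
print(gmm_bic_1d(x, 3) < gmm_bic_1d(x, 2))  # True: 3 components fit better
```

BIC penalizes the extra parameters of the 3-component model, so it is preferred only when the third mode genuinely improves the likelihood, mirroring the abstract's comparison.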


International Conference on User Modeling, Adaptation and Personalization | 2017

On the Reusability of Personalized Test Collections

Seyyed Hadi Hashemi; Jaap Kamps

Test collections for offline evaluation remain crucial for information retrieval research and industrial practice, yet their reusability is threatened by factors such as the dynamic nature of data collections and new trends in building retrieval systems. Specifically, building reusable test collections that last over years is very challenging, as retrieval approaches change considerably from year to year with new trends among Information Retrieval researchers. We experiment with a novel temporal reusability test that evaluates the reusability of test collections over a year by retaining shared topics across years: we borrow some judged topics from previous years and include them in the set of topics used in the current year. In effect, we test whether a new set of retrieval systems can be evaluated and comparatively ranked based on an old test collection. Our experiments are based on two sets of runs from the Text REtrieval Conference (TREC) 2015 and 2016 Contextual Suggestion Track, a personalized venue recommendation task. Our experiments show that the TREC 2015 test collection is not temporally reusable: it should be used with extreme care for early precision metrics and with slightly less care for the NDCG, bpref, and MAP metrics. Our approach offers a precise experiment for testing the temporal reusability of test collections over a year, and it is well suited to tracks that run a setup similar to that of previous years.
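The ranking comparison at the heart of such a reusability test can be sketched with a plain Kendall's tau over per-system scores. The score lists below are invented for illustration; the paper uses actual TREC runs and multiple metrics.

```python
def kendall_tau(a, b):
    """Kendall rank correlation of two paired score lists (no tie handling)."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical per-system scores: on the borrowed (old) topics vs. on the
# current year's own judgments. A low tau means the old collection does not
# rank the new systems reliably.
scores_old = [0.31, 0.27, 0.42, 0.18, 0.25]
scores_new = [0.29, 0.33, 0.40, 0.20, 0.35]
print(round(kendall_tau(scores_old, scores_new), 2))  # 0.4
```

In practice one would also test whether the correlation is statistically significant before trusting the old judgments for a new year's runs.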


International Conference on User Modeling, Adaptation and Personalization | 2017

Where To Go Next?: Exploiting Behavioral User Models in Smart Environments

Seyyed Hadi Hashemi; Jaap Kamps


Text REtrieval Conference | 2014

Venue Recommendation and Web Search Based on Anchor Text

Seyyed Hadi Hashemi; Jaap Kamps


Conference on Human Information Interaction and Retrieval | 2016

Effects of Position and Time Bias on Understanding Onsite Users' Behavior

Seyyed Hadi Hashemi; W. Hupperetz; Jaap Kamps; Merel van der Vaart


Conference on Human Information Interaction and Retrieval | 2017

Skip or Stay: Users' Behavior in Dealing with Onsite Information Interaction Crowd-Bias

Seyyed Hadi Hashemi; Jaap Kamps


Text REtrieval Conference | 2016

Neural Endorsement Based Contextual Suggestion

Seyyed Hadi Hashemi; N.O. Amer; Jaap Kamps; Ellen M. Voorhees; A. Ellis


EVIA@NTCIR | 2016

An Easter Egg Hunting Approach to Test Collection Building in Dynamic Domains

Seyyed Hadi Hashemi; Charles L. A. Clarke; Adriel Dean-Hall; Jaap Kamps; Julia Kiseleva

Collaboration


Dive into Seyyed Hadi Hashemi's collaborations.

Top Co-Authors

Jaap Kamps (University of Amsterdam)

Julia Kiseleva (Eindhoven University of Technology)

W. Hupperetz (University of Amsterdam)

Kyle Williams (Pennsylvania State University)