Publication


Featured research published by Jiepu Jiang.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

Searching, browsing, and clicking in a search session: changes in user behavior by task and over time

Jiepu Jiang; Daqing He; James Allan

There are many existing studies of user behavior in simple tasks (e.g., navigational and informational search) within a short duration of 1--2 queries. However, we know relatively little about user behavior, especially browsing and clicking behavior, in longer search sessions solving complex search tasks. In this paper, we characterize and compare user behavior in relatively long search sessions (10 minutes; about 5 queries) for search tasks of four different types. The tasks differ in two dimensions: (1) the user is locating facts or is pursuing intellectual understanding of a topic; (2) the user has a specific task goal or has an ill-defined and undeveloped goal. We analyze how search behavior as well as browsing and clicking patterns change during a search session in these different tasks. Our results indicate that user behavior in the four types of tasks differs in various aspects, including search activeness, browsing style, clicking strategy, and query reformulation. As a search session progresses, we note that users shift their interest in browsing, focusing less on the top results and more on results ranked at lower positions. We also find that results eventually become less and less attractive to users. The reasons vary and include downgraded search performance of queries, decreased novelty of search results, and decaying persistence of users in browsing. Our study highlights the lack of long-session support in existing search engines and suggests different strategies for supporting longer sessions according to task type.


International World Wide Web Conference | 2015

Automatic Online Evaluation of Intelligent Assistants

Jiepu Jiang; Ahmed Hassan Awadallah; Rosie Jones; Umut Ozertem; Imed Zitouni; Ranjitha Gurunath Kulkarni; Omar Zia Khan

Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants according to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.


IEEE Computer | 2014

Influences on Query Reformulation in Collaborative Web Search

Zhen Yue; Shuguang Han; Daqing He; Jiepu Jiang

Past analysis has considered query reformulation primarily from the perspective of individual Web searches. Findings from a recent study suggest ways that collaboration during the search process influences how users generate new terms for query reformulation.


ACM/IEEE Joint Conference on Digital Libraries | 2013

Mendeley group as a new source of interdisciplinarity study: how do disciplines interact on Mendeley?

Jiepu Jiang; Chaoqun Ni; Daqing He; Wei Jeng

In this paper, we studied interdisciplinary structures by looking into how online academic groups of different disciplines share members and followers. Results based on Mendeley online groups show clear interdisciplinary structures, indicating that Mendeley online groups are a promising data source and offer a new perspective for disciplinarity and interdisciplinarity studies.


Conference on Information and Knowledge Management | 2012

Where do the query terms come from?: an analysis of query reformulation in collaborative web search

Zhen Yue; Jiepu Jiang; Shuguang Han; Daqing He

This paper presents a user study investigating query reformulation in collaborative Web search. Seven pairs of participants were recruited, and each pair worked as a team on two collaborative exploratory Web search tasks. Through log analysis, we compared the possible sources from which participants drew query terms. The results show that both search and collaborative actions are possible sources of new query terms. Traditional resources for query expansion, such as previous search histories and relevant documents, remain important sources of new query terms. The content in chat and workspace generated by participants themselves seems more likely to be a source of new query terms than that generated by their partners. Task type also influences query reformulation: for the academic task, previously saved relevant documents were the most important source of new query terms, while chat histories were the most important for the leisure task.


European Conference on Information Retrieval | 2016

Adaptive Effort for Search Evaluation Metrics

Jiepu Jiang; James Allan

We explain a wide range of search evaluation metrics as the ratio of users’ gain to effort for interacting with a ranked list of results. According to this explanation, many existing metrics measure users’ effort as linear to the (expected) number of examined results. This implicitly assumes that users spend the same effort to examine different results. We adapt current metrics to account for different effort on relevant and non-relevant documents. Results show that such adaptive effort metrics better correlate with and predict user perceptions of search quality.
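As a minimal illustration of this gain-to-effort view (a sketch, not the paper's exact formulation; the effort weights `e_rel` and `e_nonrel` are hypothetical), a precision-style metric can be adapted so that examining relevant and non-relevant results costs different amounts of effort:

```python
def adaptive_effort_metric(rels, k, e_rel=1.0, e_nonrel=0.5):
    """Gain-to-effort ratio over the top-k results.

    rels: binary relevance labels for a ranked list.
    Gain is one unit per relevant result examined; effort is the sum of
    per-document costs, which differ for relevant and non-relevant results.
    With e_rel == e_nonrel == 1 this reduces to precision@k.
    """
    top = rels[:k]
    gain = sum(top)
    effort = sum(e_rel if r else e_nonrel for r in top)
    return gain / effort if effort else 0.0

# Equal effort weights recover precision@k:
# adaptive_effort_metric([1, 0, 1, 0], 4, 1.0, 1.0) -> 0.5
```

Lowering the effort charged for non-relevant results (which users typically dismiss quickly from the summary) raises the score of the same ranked list, reflecting that skimming past a bad result costs less than reading a document.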


Web Search and Data Mining | 2016

Reducing Click and Skip Errors in Search Result Ranking

Jiepu Jiang; James Allan

Search engines provide result summaries to help users quickly identify whether it is worthwhile to click on a result and read it in detail. However, users may visit non-relevant results and/or skip relevant ones. These actions are usually harmful to the user experience, but few studies have considered this problem in search result ranking. This paper optimizes the relevance of results and user click and skip activities at the same time. Comparing two equally relevant results, our approach learns to rank the one that users are more likely to click on at a higher position. Similarly, it demotes non-relevant web pages with high click probabilities. Experimental results show this approach reduces about 10%-20% of the click and skip errors with a trade-off of a 2.1% decline in nDCG@10.
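As an illustrative sketch only (the paper uses a learning-to-rank formulation; this simple linear re-scoring with a hypothetical `weight` parameter is a stand-in), the described behavior can be mimicked by treating a high click probability as a bonus for relevant results and a penalty for non-relevant ones:

```python
def rerank(results, weight=0.3):
    """Re-order (relevance, click_prob) pairs.

    Among equally relevant results, the one users are more likely to
    click ranks higher; non-relevant results with high click probability
    (attractive but misleading summaries) are demoted.
    """
    def score(r):
        relevance, click_prob = r
        adjust = weight * click_prob if relevance > 0 else -weight * click_prob
        return relevance + adjust
    return sorted(results, key=score, reverse=True)

# Two equally relevant results: the more clickable one is ranked first.
# rerank([(1, 0.2), (1, 0.9)]) -> [(1, 0.9), (1, 0.2)]
```

The `weight` parameter controls the trade-off the abstract mentions: larger values reduce more click/skip errors at a greater cost to pure relevance ordering.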


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Comparing In Situ and Multidimensional Relevance Judgments

Jiepu Jiang; Daqing He; James Allan

To address concerns of TREC-style relevance judgments, we explore two improvements. The first one seeks to make relevance judgments contextual, collecting in situ feedback of users in an interactive search session and embracing usefulness as the primary judgment criterion. The second one collects multidimensional assessments to complement relevance or usefulness judgments, with four distinct alternative aspects examined in this paper - novelty, understandability, reliability, and effort. We evaluate different types of judgments by correlating them with six user experience measures collected from a lab user study. Results show that switching from TREC-style relevance criteria to usefulness is fruitful, but in situ judgments do not exhibit clear benefits over the judgments collected without context. In contrast, combining relevance or usefulness with the four alternative judgments consistently improves the correlation with user experience measures, suggesting future IR systems should adopt multi-aspect search result judgments in development and evaluation. We further examine implicit feedback techniques for predicting these judgments. We find that click dwell time, a popular indicator of search result quality, is able to predict some but not all dimensions of the judgments. We enrich the current implicit feedback methods using post-click user interaction in a search session and achieve better prediction for all six dimensions of judgments.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

Necessary and frequent terms in queries

Jiepu Jiang; James Allan

Vocabulary mismatch has long been recognized as one of the major issues affecting search effectiveness. Ineffective queries usually fail to incorporate important terms and/or incorrectly include inappropriate keywords. However, in this paper we show another cause of reduced search performance: sometimes users issue reasonable query terms, but systems cannot identify the correct properties of those terms and take advantage of them. Specifically, we study two distinct types of terms that exist in all search queries: (1) necessary terms, for which term occurrence alone is indicative of document relevance; and (2) frequent terms, for which the relative term frequency is indicative of document relevance within the set of documents where the term appears. We evaluate these two properties of query terms in a dataset. Results show that only 1/3 of the terms are both necessary and frequent, while another 1/3 hold only one of the properties and the final third hold neither. However, existing retrieval models do not clearly distinguish terms with these two properties or treat them differently. We further show the great potential of improving retrieval models by treating terms with distinct properties differently.
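As a rough, hedged illustration of the two properties (the paper's actual estimators are not reproduced here; this diagnostic simply contrasts relevant and non-relevant documents), occurrence rate captures "necessary" and relative term frequency among term-bearing documents captures "frequent":

```python
def term_properties(term, docs):
    """Rough diagnostic scores for a query term's two properties.

    docs: list of (tokens, is_relevant) pairs.
    Returns (necessary, frequent):
      necessary ~ occurrence rate in relevant minus non-relevant docs
      frequent  ~ mean relative term frequency in relevant minus
                  non-relevant docs, among docs containing the term
    """
    def occ_rate(group):
        return sum(term in toks for toks, _ in group) / len(group) if group else 0.0

    def mean_rel_tf(group):
        tfs = [toks.count(term) / len(toks) for toks, _ in group if term in toks]
        return sum(tfs) / len(tfs) if tfs else 0.0

    rel = [(t, r) for t, r in docs if r]
    non = [(t, r) for t, r in docs if not r]
    necessary = occ_rate(rel) - occ_rate(non)
    frequent = mean_rel_tf(rel) - mean_rel_tf(non)
    return necessary, frequent
```

A term scoring high on `necessary` but low on `frequent` is one whose mere presence signals relevance, which is exactly the kind of distinction the abstract argues retrieval models should exploit.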


Archive | 2013

Is the Article Crucial to My Research? Evaluating Task-Oriented Impacts of Scientific Articles in Information Seeking

Jiepu Jiang; Daqing He; Shuguang Han; Wei Jeng

We propose a new aspect of evaluating scientific articles: crucialness, which refers to the state of articles being not only useful, but also scarce (difficult for scientists to find or identify). Compared with popularity-based metrics, crucialness may be a better metric for supporting scientists’ information seeking and use because it identifies scientists’ difficulties in information seeking and reveals the crucial articles that may help scientists succeed in research. Some preliminary results are presented and discussed.

Collaboration


Dive into Jiepu Jiang's collaboration.

Top Co-Authors

Daqing He
University of Pittsburgh

Shuguang Han
University of Pittsburgh

James Allan
University of Massachusetts Amherst

Wei Jeng
University of Pittsburgh

Zhen Yue
University of Pittsburgh

Chaoqun Ni
Indiana University Bloomington