
Publication


Featured research published by Chang Liu.


ACM/IEEE Joint Conference on Digital Libraries | 2010

Search behaviors in different task types

Jingjing Liu; Michael J. Cole; Chang Liu; Ralf Bierig; Jacek Gwizdka; Nicholas J. Belkin; Jun Zhang; Xiangmin Zhang

Personalization of information retrieval tailors search towards individual users to meet their particular information needs by taking into account information about users and their contexts, often through implicit sources of evidence such as user behaviors. Task types have been shown to influence search behaviors, including usefulness judgments. This paper reports on an investigation of user behaviors associated with different task types. Twenty-two undergraduate journalism students participated in a controlled lab experiment, each searching on four tasks which varied on four dimensions: complexity, task product, task goal and task level. Results indicate regular differences associated with different task characteristics in several search behaviors, including task completion time, decision time (the time taken to decide whether a document is useful or not), and eye fixations. We suggest these behaviors can be used as implicit indicators of the user's task type.


Interacting with Computers | 2011

Task and user effects on reading patterns in information search

Michael J. Cole; Jacek Gwizdka; Chang Liu; Ralf Bierig; Nicholas J. Belkin; Xiangmin Zhang

We report on an investigation into people's behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent the information needs of these two different user groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically, we show that transitions between scanning and reading behavior in eye movement patterns, and the amount of text processed, may be implicit indicators of the current task type facets. This may be useful in building user and task models for the personalization of information systems, and so address design demands driven by increasingly complex user interactions with information systems. One contribution of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks.


Information Processing and Management | 2013

Inferring user knowledge level from eye movement patterns

Michael J. Cole; Jacek Gwizdka; Chang Liu; Nicholas J. Belkin; Xiangmin Zhang

The acquisition of information and the search interaction process is strongly influenced by a person's use of their knowledge of the domain and the task. In this paper we show that a user's level of domain knowledge can be inferred from their interactive search behaviors without considering the content of queries or documents. A technique is presented to model a user's information acquisition process during search using only measurements of eye movement patterns. In a user study (n=40) of search in the domain of genomics, a representation of each participant's domain knowledge was constructed using self-ratings of knowledge of genomics-related terms (n=409). Cognitive effort features associated with reading eye movement patterns were calculated for each reading instance during the search tasks. The results show correlations between the cognitive effort due to reading and an individual's level of domain knowledge. We construct exploratory regression models that suggest it is possible to make predictions of the user's level of knowledge based on real-time measurements of eye movement patterns during a task session.
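As a rough illustration of the kind of exploratory regression described above, the sketch below fits a one-predictor least-squares model relating a reading-effort feature to a knowledge self-rating. The single-feature setup and all numbers are hypothetical stand-ins, not data or models from the study.

```python
# Minimal sketch: fit a one-feature linear model relating a reading-effort
# measure (e.g. mean fixation duration per reading instance) to a user's
# self-rated domain knowledge. All values below are made up for illustration.

def fit_ols(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

# Hypothetical per-participant pairs: (reading-effort feature, knowledge rating)
effort = [0.21, 0.35, 0.44, 0.52, 0.63]
knowledge = [1.0, 2.1, 2.9, 3.8, 5.2]

a, b = fit_ols(effort, knowledge)

def predict_knowledge(x):
    """Predict a knowledge rating from a real-time reading-effort measurement."""
    return a + b * x
```

A positive slope `b` would correspond to the correlation the paper reports between reading effort and domain knowledge; a real model would of course use the study's full feature set rather than a single predictor.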


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2012

Personalization of search results using interaction behaviors in search sessions

Chang Liu; Nicholas J. Belkin; Michael J. Cole

Personalization of search results offers the potential for significant improvement in information retrieval performance. User interactions with the system and documents during information-seeking sessions provide a wealth of information about user preferences and task goals. In this paper, we propose methods for analyzing and modeling user search behavior in search sessions to predict document usefulness, and then using this information to personalize search results. We generate prediction models of document usefulness from behavior data collected in a controlled lab experiment with 32 participants, each completing uncontrolled searches for 4 tasks on the Web. The generated models are then tested with another data set of user search sessions in radically different search tasks and constraints. The documents predicted useful and not useful by the models are used to modify the queries in each search session using a standard relevance feedback technique. The results show that applying the models led to consistently improved performance over a baseline that did not take user interaction information into account. These findings have implications for designing systems for personalized search and improving the user search experience.
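The query-modification step uses a standard relevance feedback technique; a minimal Rocchio-style sketch is given below, assuming simple term-weight dictionaries for the query and documents. The vectors and the alpha/beta/gamma weights are conventional illustrative defaults, not parameters from the paper.

```python
# Sketch of Rocchio-style relevance feedback: move the query vector toward
# documents predicted useful and away from those predicted not useful.
# alpha/beta/gamma are conventional defaults, not values from the paper.

def rocchio(query, useful_docs, nonuseful_docs, alpha=1.0, beta=0.75, gamma=0.15):
    terms = set(query)
    for d in useful_docs + nonuseful_docs:
        terms |= set(d)
    new_q = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if useful_docs:
            w += beta * sum(d.get(t, 0.0) for d in useful_docs) / len(useful_docs)
        if nonuseful_docs:
            w -= gamma * sum(d.get(t, 0.0) for d in nonuseful_docs) / len(nonuseful_docs)
        new_q[t] = max(w, 0.0)  # clip negative weights to zero
    return new_q

# Hypothetical term-weight vectors for one search session
q = {"genomics": 1.0}
useful = [{"genomics": 0.5, "sequencing": 0.8}]
nonuseful = [{"genomics": 0.2, "recipes": 0.9}]
expanded = rocchio(q, useful, nonuseful)
```

Terms from predicted-useful documents (here the hypothetical "sequencing") gain weight in the expanded query, while terms only in predicted-not-useful documents are suppressed.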


Conference on Information and Knowledge Management | 2012

Exploring and predicting search task difficulty

Jingjing Liu; Chang Liu; Michael J. Cole; Nicholas J. Belkin; Xiangmin Zhang

We report on an investigation of behavioral differences between users in difficult and easy search tasks, and identify behavioral factors that can be used in real time to predict task difficulty. User data was collected in a controlled lab experiment (n=38) in which each participant completed four search tasks in the genomics domain. We looked at user behaviors that systems can obtain at three levels, distinguished by the time point when the measurements can be made: 1) first-round level, at the beginning of the search; 2) accumulated level, during the search; and 3) whole-session level, by the end of the search. Results show that a number of user behaviors at all three levels differed between easy and difficult tasks. Models predicting task difficulty at all three levels were developed and evaluated. A real-time model incorporating first-round and accumulated levels of behaviors (FA) had fairly good prediction performance (accuracy 83%; precision 88%), comparable with the model using whole-session level behaviors, which are not available in real time (accuracy 75%; precision 92%). We also found that, for efficiency purposes, using only a limited number of significant variables (FC_FA) can obtain a prediction accuracy of 75%, with a precision of 88%. Our findings can help search systems predict task difficulty and adapt search results to users.
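The three measurement levels can be illustrated with a small sketch that computes behavioral features from a session log up to a chosen time point. The event format and the feature set are assumptions for illustration, not the study's instrumentation.

```python
# Sketch of the three measurement levels for behavioral features.
# Each event is (elapsed_seconds, kind) with kind in {"query", "view"};
# the log below is made up for illustration.

def features(events, now=None):
    """Behavioral counts up to time `now` (None = whole session)."""
    window = [e for e in events if now is None or e[0] <= now]
    queries = [e for e in window if e[1] == "query"]
    views = [e for e in window if e[1] == "view"]
    n_q = len(queries)
    return {
        "queries": n_q,
        "views_per_query": len(views) / n_q if n_q else 0.0,
    }

session = [(0, "query"), (12, "view"), (30, "view"),
           (60, "query"), (75, "view"), (140, "query"), (150, "view")]

first_round = features(session, now=59)  # first-round level: before query 2
whole = features(session)                # whole-session level: end of search
```

An accumulated-level measurement would simply call `features` with `now` set to the current moment as the search proceeds; a difficulty predictor would then be trained on these feature vectors.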


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2010

Can search systems detect users' task difficulty?: some behavioral signals

Jingjing Liu; Chang Liu; Jacek Gwizdka; Nicholas J. Belkin

In this paper, we report findings on how user behaviors vary in tasks with different difficulty levels as well as of different types. Two behavioral signals, document dwell time and number of content pages viewed per query, were found to help the system detect when users are working on difficult tasks.


Information Interaction in Context | 2010

Helping identify when users find useful documents: examination of query reformulation intervals

Chang Liu; Jacek Gwizdka; Jingjing Liu

We explore search behaviors during a new kind of search unit -- the query reformulation interval (QRI). The QRI is defined as an interval between two consecutive queries in one search session that contains at least two queries. Our controlled, web-based study focused on examining behaviors associated with querying and useful document saving. We compared behavioral variables that characterized QRIs during which useful pages were found with those during which no useful pages were found. Our results demonstrated that the QRI duration and the total time spent on content pages during QRIs with useful pages were significantly longer than during QRIs with no useful pages. Users viewed more content pages and spent more time on content pages than on search result pages during QRIs with useful pages. The findings suggest that user behavior during QRIs can be used as an indicator of QRIs containing useful documents.
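A minimal sketch of QRI segmentation, assuming a simple event-log format (the timestamps, page kinds, and dwell times below are made up): it splits a session at consecutive queries and computes each QRI's duration and time on content pages.

```python
# Sketch: segment a search session into query reformulation intervals (QRIs)
# and measure each QRI's duration and content-page activity.
# Event: (timestamp_sec, kind, dwell_sec), kind in {"query", "serp", "content"}.

def qris(events):
    """Return one feature dict per interval between consecutive queries."""
    query_times = [t for t, kind, _ in events if kind == "query"]
    intervals = []
    for start, end in zip(query_times, query_times[1:]):
        span = [e for e in events if start <= e[0] < end]
        content = [e for e in span if e[1] == "content"]
        intervals.append({
            "duration": end - start,
            "content_pages": len(content),
            "content_time": sum(dwell for _, _, dwell in content),
        })
    return intervals

# Hypothetical session log with three queries, hence two QRIs
log = [(0, "query", 0), (5, "serp", 4), (10, "content", 20),
       (40, "content", 15), (60, "query", 0), (65, "serp", 3),
       (70, "query", 0)]
intervals = qris(log)
```

Under the paper's findings, a QRI like the first one here (long duration, substantial content-page time) would be the kind more likely to contain useful documents than the short second QRI.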


Proceedings of the American Society for Information Science and Technology | 2011

Dynamic assessment of information acquisition effort during interactive search

Michael J. Cole; Jacek Gwizdka; Chang Liu; Nicholas J. Belkin

We present a method to measure some aspects of the cognitive effort expended by a user while reading during a search session. We measured reading eye movement properties and patterns of eye movement in a user study (n=32) of participants carrying out realistic journalism IR work tasks. The results show that the cognitive effort measures correlate positively with the information task characteristics that, by hypothesis, contribute to task difficulty. They also correlate well with participants' self-assessed task difficulty. Our methodology can be applied to eye tracking data in any (textual) information setting and used to give dynamic estimates of these aspects of cognitive processing during search.


Conference on Information and Knowledge Management | 2014

Predicting Search Task Difficulty at Different Search Stages

Chang Liu; Jingjing Liu; Nicholas J. Belkin

Knowing, in real time, whether the current searcher in an information retrieval system finds the search task difficult can be valuable for tailoring the system's support for that searcher. This study investigated searchers' behaviors at three stages of the search process: 1) the first-round point at the beginning of the search, right before searchers issued their second query; 2) the middle point, when searchers proceeded to the middle of the search process; and 3) the end point, when searchers finished the whole task. We compared how the behavioral features calculated at these three points differed between difficult and easy search tasks, and identified behavioral features during search sessions that can be used in real time to predict perceived task difficulty. In addition, we compared the prediction performance at the different stages of the search process. Our results show that a number of user behavioral measures at all three points differed between easy and difficult tasks. Query interval time, dwell time on viewed documents, and number of viewed documents per query were important predictors of task difficulty. The results also indicate that it is possible to make a relatively accurate prediction of task difficulty at the first query round of a search. Our findings can help search systems predict task difficulty, which is necessary for personalizing support for the individual searcher.


European Conference on Cognitive Ergonomics | 2010

Linking search tasks with low-level eye movement patterns

Michael J. Cole; Jacek Gwizdka; Ralf Bierig; Nicholas J. Belkin; Jingjing Liu; Chang Liu; Xiangmin Zhang

Motivation -- On-the-task detection of the task type and task attributes can benefit personalization and adaptation of information systems. Research approach -- A web-based information search experiment was conducted with 32 participants using a multi-stream logging system. The realistic tasks were related directly to the backgrounds of the participants and were of distinct task types. Findings/Design -- We report on a relationship between task and individual reading behaviour. Specifically we show that transitions between scanning and reading behaviour in eye movement patterns are an implicit indicator of the current task. Research limitations/Implications -- This work suggests it is plausible to infer the type of information task from eye movement patterns. One limitation is a lack of knowledge about the general reading model differences across different types of tasks in the population. Although this is an experimental study we argue it can be generalized to real world text-oriented information search tasks. Originality/Value -- This research presents a new methodology to model user information search task behaviour. It suggests promise for detection of information task type based on patterns of eye movements. Take away message -- With increasingly complex computer interaction, knowledge about the type of information task can be valuable for system personalization. Modelling the reading/scanning patterns of eye movements can allow inference about the task type and task attributes.

Collaboration


Dive into Chang Liu's collaborations.

Top Co-Authors

Jacek Gwizdka
University of Texas at Austin

Ralf Bierig
Vienna University of Technology

Xiaojun Yuan
State University of New York System