Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joyce Y. Chai is active.

Publication


Featured research published by Joyce Y. Chai.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2004

An automatic weighting scheme for collaborative filtering

Rong Jin; Joyce Y. Chai; Luo Si

Collaborative filtering identifies the information interests of a particular user based on information provided by other, similar users. Memory-based approaches to collaborative filtering (e.g., the Pearson correlation coefficient approach) identify the similarity between two users by comparing their ratings on a set of items. In these approaches, different items are weighted either equally or by some predefined function, and the impact of rating discrepancies among different users is not taken into consideration. For example, an item that is highly favored by most users should have a smaller impact on user similarity than an item for which different types of users tend to give different ratings. Although simple weighting methods such as variance weighting try to address this problem, empirical studies have shown that they are ineffective at improving the performance of collaborative filtering. In this paper, we present an optimization algorithm that automatically computes weights for different items based on their ratings from training users. More specifically, the new weighting scheme creates a clustered distribution of user vectors in the item space by bringing users of similar interests closer together and pushing users of different interests further apart. Empirical studies over two datasets show that our new weighting scheme substantially improves the performance of the Pearson correlation coefficient method for collaborative filtering.
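
As a rough illustration of where learned item weights enter the similarity computation, the sketch below implements a weighted Pearson correlation between two users' rating vectors. The weights here are hand-made placeholders standing in for the weights the paper learns via optimization; only the weighting mechanics, not the learning, are shown.

```python
import numpy as np

def weighted_pearson(u, v, w, mask):
    """Weighted Pearson correlation between two users' rating vectors.

    u, v : rating vectors over the item space
    w    : per-item weights (in the paper, learned from training users)
    mask : boolean array marking items rated by both users
    """
    u, v, w = u[mask], v[mask], w[mask]
    wsum = w.sum()
    mu_u = (w * u).sum() / wsum          # weighted means
    mu_v = (w * v).sum() / wsum
    du, dv = u - mu_u, v - mu_v
    num = (w * du * dv).sum()
    den = np.sqrt((w * du**2).sum() * (w * dv**2).sum())
    return num / den if den > 0 else 0.0

# Toy example: two users, four items. The item weights are placeholders
# for the weights the paper learns by optimizing cluster structure.
u = np.array([5.0, 3.0, 4.0, 1.0])
v = np.array([4.0, 2.0, 5.0, 1.0])
w = np.array([0.9, 0.4, 1.2, 0.7])
mask = np.array([True, True, True, True])
print(weighted_pearson(u, v, w, mask))
```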


ACM Multimedia | 2004

Effective automatic image annotation via a coherent language model and active learning

Rong Jin; Joyce Y. Chai; Luo Si

Image annotations allow users to access a large image database with textual queries. There have been several studies on automatic image annotation using machine learning techniques, which automatically learn statistical models from annotated images and apply them to generate annotations for unseen images. One common problem shared by most previous learning approaches to automatic image annotation is that each annotated word is predicted for an image independently of the other annotated words. In this paper, we propose a coherent language model for automatic image annotation that takes word-to-word correlations into account by estimating a coherent language model for an image. This new approach has two important advantages: 1) it is able to automatically determine the annotation length, improving the accuracy of retrieval results, and 2) it can be used with active learning to significantly reduce the required number of annotated image examples. Empirical studies on the Corel dataset show the effectiveness of the coherent language model for automatic image annotation.
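
The toy sketch below illustrates the flavor of coherence-aware annotation: candidate words are scored by an image-relevance term times their coherence with words already chosen, and selection stops when no candidate clears a threshold, so the annotation length adapts per image. All probabilities and co-occurrence values are invented placeholders, and the greedy selection is a simplification of the paper's coherent language model.

```python
# Illustrative per-word relevance scores P(w | image) and a pairwise
# co-occurrence table; in the paper both come from statistical models
# trained on annotated images (here they are hand-made placeholders).
p_word = {"sky": 0.30, "clouds": 0.25, "tiger": 0.20, "grass": 0.15}
coherence = {("sky", "clouds"): 0.9, ("tiger", "grass"): 0.8,
             ("sky", "tiger"): 0.1, ("clouds", "tiger"): 0.1,
             ("sky", "grass"): 0.3, ("clouds", "grass"): 0.2}

def coh(a, b):
    return coherence.get((a, b), coherence.get((b, a), 0.0))

def annotate(candidates, max_len=3, threshold=0.05):
    """Greedily build an annotation, scoring each candidate by its image
    relevance times its average coherence with words already chosen.
    Stopping when no candidate clears the threshold lets the annotation
    length adapt to the image, one advantage the paper highlights."""
    chosen = []
    while len(chosen) < max_len:
        best, best_score = None, threshold
        for w in candidates:
            if w in chosen:
                continue
            pair = (sum(coh(w, c) for c in chosen) / len(chosen)) if chosen else 1.0
            score = p_word[w] * pair
            if score > best_score:
                best, best_score = w, score
        if best is None:
            break
        chosen.append(best)
    return chosen

print(annotate(list(p_word)))  # -> ['sky', 'clouds']
```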


Intelligent User Interfaces | 2004

A probabilistic approach to reference resolution in multimodal user interfaces

Joyce Y. Chai; Pengyu Hong; Michelle X. Zhou

Multimodal user interfaces allow users to interact with computers through multiple modalities, such as speech, gesture, and gaze. To be effective, multimodal user interfaces must correctly identify all objects that users refer to in their inputs. To systematically resolve different types of references, we have developed a probabilistic approach that uses a graph-matching algorithm. Our approach identifies the most probable referents by simultaneously optimizing the satisfaction of semantic, temporal, and contextual constraints. Our preliminary user study results indicate that our approach can successfully resolve a wide variety of referring expressions, ranging from simple to complex and from precise to ambiguous.
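
A minimal sketch of the underlying idea, with invented compatibility scores: each referring expression is scored against each candidate object on semantic, temporal, and contextual compatibility, and the assignment maximizing their combination wins. The brute-force search below stands in for the paper's far more efficient graph-matching algorithm.

```python
import itertools

# Hypothetical compatibility scores between each referring expression and
# each candidate object on three constraint types; in the paper these come
# from semantic, temporal, and contextual models over multimodal input.
referents = ["house_1", "house_2", "lake_1"]
expressions = {
    "this red house": {"house_1": (0.9, 0.8, 0.6), "house_2": (0.7, 0.2, 0.5),
                       "lake_1": (0.0, 0.5, 0.1)},
    "the lake":       {"house_1": (0.0, 0.4, 0.2), "house_2": (0.0, 0.3, 0.2),
                       "lake_1": (0.9, 0.6, 0.7)},
}

def score(assignment, weights=(0.5, 0.3, 0.2)):
    """Overall score of an assignment: a weighted combination of the
    semantic, temporal, and contextual compatibilities, multiplied across
    expressions (a simplification of the paper's matching objective)."""
    total = 1.0
    for expr, obj in assignment.items():
        sem, tmp, ctx = expressions[expr][obj]
        total *= weights[0] * sem + weights[1] * tmp + weights[2] * ctx
    return total

# Brute-force search over all one-to-one assignments; the graph-matching
# algorithm in the paper explores this space far more efficiently.
best = max(
    (dict(zip(expressions, combo))
     for combo in itertools.permutations(referents, len(expressions))),
    key=score,
)
print(best, score(best))
```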


Intelligent User Interfaces | 2008

What's in a gaze?: the role of eye-gaze in reference resolution in multimodal conversational interfaces

Zahar Prasov; Joyce Y. Chai

Multimodal conversational interfaces allow users to carry on a dialog with a graphical display using speech to accomplish a particular task. Motivated by previous psycholinguistic findings, we examine how eye-gaze contributes to reference resolution in such a setting. Specifically, we present an integrated probabilistic framework that combines speech and eye-gaze for reference resolution. We further examine the relationship between eye-gaze and increased domain modeling with corresponding linguistic processing. Our empirical results show that incorporating eye-gaze significantly improves reference resolution performance. This improvement is most dramatic when a simple domain model is used. Our results also show that minimal domain modeling combined with eye-gaze significantly outperforms complex domain modeling without eye-gaze, which indicates that eye-gaze can potentially compensate for a lack of domain modeling in reference resolution.
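
The sketch below shows one simple way speech and gaze evidence could be fused for reference resolution: a speech-derived score is linearly interpolated with a fixation-derived salience score. The scores, fixation durations, and interpolation weight are all illustrative placeholders rather than the paper's actual model.

```python
# A minimal sketch of fusing speech and gaze evidence for reference
# resolution; all values below are made-up placeholders.
candidates = ["red_vase", "blue_vase", "lamp"]

speech_score = {"red_vase": 0.6, "blue_vase": 0.3, "lamp": 0.1}  # P(obj | words)
fixation_ms  = {"red_vase": 420, "blue_vase": 900, "lamp": 50}   # gaze near utterance

def gaze_score(obj):
    # Normalize fixation time into a probability-like salience score.
    total = sum(fixation_ms.values())
    return fixation_ms[obj] / total

def fused(obj, lam=0.5):
    # Linear interpolation of the two evidence sources; lam would be tuned.
    return lam * speech_score[obj] + (1 - lam) * gaze_score(obj)

best = max(candidates, key=fused)
print({o: round(fused(o), 3) for o in candidates}, "->", best)
```

Note how the gaze evidence can flip the decision: speech alone prefers red_vase, but the long fixation on blue_vase wins under fusion.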


Computational Linguistics | 2012

Semantic role labeling of implicit arguments for nominal predicates

Matthew S. Gerber; Joyce Y. Chai

Nominal predicates often carry implicit arguments. Recent work on semantic role labeling has focused on identifying arguments within the local context of a predicate; implicit arguments, however, have not been systematically examined. To address this limitation, we have manually annotated a corpus of implicit arguments for ten predicates from NomBank. Through analysis of this corpus, we find that implicit arguments add 71% to the argument structures that are present in NomBank. Using the corpus, we train a discriminative model that is able to identify implicit arguments with an F1 score of 50%, significantly outperforming an informed baseline model. This article describes our investigation, explores a wide variety of features important for the task, and discusses future directions for work on implicit argument identification.
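
As a toy illustration of discriminative implicit-argument identification, the sketch below scores candidate constituents for a nominal predicate with a linear model over a few hand-picked features. Feature names, weights, and candidates are invented; the paper's model uses a much richer feature set learned from the annotated corpus.

```python
def features(cand):
    """Illustrative features: distance in sentences from the predicate,
    whether the candidate filled a role of a related verbal predicate,
    and the candidate's head word."""
    return {
        "sent_dist=%d" % cand["sent_dist"]: 1.0,
        "filled_verbal_role": 1.0 if cand["filled_verbal_role"] else 0.0,
        "head=" + cand["head"]: 1.0,
    }

# Hand-set weights standing in for the learned parameters of the
# discriminative model.
weights = {"sent_dist=0": 0.5, "sent_dist=1": 0.2, "sent_dist=2": -0.1,
           "filled_verbal_role": 1.5, "head=company": 0.3}

def score(cand):
    return sum(weights.get(k, 0.0) * v for k, v in features(cand).items())

# Candidate fillers for the implicit seller argument of the nominal
# predicate "sale" (all data invented for illustration).
candidates = [
    {"head": "company", "sent_dist": 1, "filled_verbal_role": True},
    {"head": "price",   "sent_dist": 0, "filled_verbal_role": False},
]
best = max(candidates, key=score)
print(best["head"])  # -> company
```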


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2005

User term feedback in interactive text-based image retrieval

Chen Zhang; Joyce Y. Chai; Rong Jin

To alleviate the vocabulary problem, this paper investigates the role of user term feedback in interactive text-based image retrieval. Term feedback refers to feedback from a user on specific terms regarding their relevance to a target image. Previous studies have indicated the effectiveness of term feedback in interactive text retrieval [14]. However, term feedback did not prove effective in our experiments on text-based image retrieval. Our results indicate that, although term feedback has a positive effect by allowing users to identify more relevant terms, it also has a strong negative effect by providing more opportunities for users to specify irrelevant terms. To understand these opposing effects and their implications for the potential of term feedback, this paper further analyzes the important factors that contribute to the utility of term feedback and discusses the outlook for term feedback in interactive text-based image retrieval.
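
A minimal sketch of the mechanism, assuming a Rocchio-style vector update (the paper's retrieval model may differ): terms the user marks relevant are boosted and terms marked irrelevant are penalized, which also makes visible how mistaken negative feedback can hurt.

```python
# Query vector over annotation terms; vocabulary and weights are invented.
query = {"sunset": 1.0, "beach": 1.0}

def apply_term_feedback(query, relevant, irrelevant, beta=0.75, gamma=0.5):
    """Boost terms the user marked relevant, penalize the ones marked
    irrelevant. The negative effect the paper reports arises exactly here:
    a wrongly chosen 'irrelevant' term can push away relevant images."""
    updated = dict(query)
    for t in relevant:
        updated[t] = updated.get(t, 0.0) + beta
    for t in irrelevant:
        updated[t] = updated.get(t, 0.0) - gamma
    return updated

print(apply_term_feedback(query, relevant=["ocean"], irrelevant=["people"]))
# {'sunset': 1.0, 'beach': 1.0, 'ocean': 0.75, 'people': -0.5}
```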


AI Magazine | 2002

Natural Language Assistant: A Dialog System for Online Product Recommendation

Joyce Y. Chai; Veronika Horvath; Nicolas Nicolov; Margo Stys; Nanda Kambhatla; Wlodek Zadrozny; Prem Melville

With the emergence of electronic-commerce systems, successful information access on electronic-commerce web sites becomes essential. The menu-driven navigation and keyword search currently provided by most commercial sites have considerable limitations because they tend to overwhelm and frustrate users with lengthy, rigid, and ineffective interactions. To provide an efficient solution for information access, we have built the Natural Language Assistant (NLA), a web-based natural language dialog system that helps users find relevant products on electronic-commerce sites. The system brings together technologies in natural language processing and human-computer interaction to create a faster and more intuitive way of interacting with web sites. By combining statistical parsing techniques with traditional AI rule-based technology, we have created a dialog system that accommodates both customer needs and business requirements. The system is currently embedded in an application for recommending laptops and was deployed as a pilot on IBM's web site.
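
A toy sketch of the hybrid design, with invented product data and rules: a stand-in for the statistical parser maps free text to constraints, and rule-based business logic then filters and ranks products.

```python
# Invented catalog; in the deployed system this would be product data.
PRODUCTS = [
    {"name": "Laptop A", "weight_kg": 1.2, "price": 2100},
    {"name": "Laptop B", "weight_kg": 2.8, "price": 1200},
    {"name": "Laptop C", "weight_kg": 1.6, "price": 1500},
]

def extract_constraints(utterance):
    """Keyword stand-in for NLA's statistical parsing step: map phrases
    in the user's request to concrete product constraints."""
    c = {}
    text = utterance.lower()
    if "light" in text or "travel" in text:
        c["max_weight_kg"] = 2.0
    if "cheap" in text or "budget" in text:
        c["max_price"] = 1600
    return c

def recommend(utterance):
    """Rule-based business logic applied on top of extracted constraints."""
    c = extract_constraints(utterance)
    hits = [p for p in PRODUCTS
            if p["weight_kg"] <= c.get("max_weight_kg", float("inf"))
            and p["price"] <= c.get("max_price", float("inf"))]
    return sorted(hits, key=lambda p: p["price"])  # rule: cheapest first

print(recommend("I need a light laptop for travel on a budget"))
# [{'name': 'Laptop C', 'weight_kg': 1.6, 'price': 1500}]
```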


Human-Robot Interaction | 2014

Collaborative effort towards common ground in situated human-robot dialogue

Joyce Y. Chai; Lanbo She; Rui Fang; Spencer Ottarson; Cody Littley; Changsong Liu; Kenneth Hanson

In situated human-robot dialogue, although humans and robots are co-present in a shared environment, they have significantly mismatched capabilities in perceiving that environment, so their representations of the shared world are misaligned. For humans and robots to communicate successfully using language, it is important for them to mediate such differences and establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines the role of the robot's collaborative effort and the performance of natural language processing modules in dialogue grounding. Our empirical results indicate that in situated human-robot dialogue, a low collaborative effort from the robot may lead its human partner to believe that common ground has been established, but such beliefs may not reflect true mutual understanding. To support truly grounded dialogues, the robot should make an extra effort to make its partner aware of its internal representation of the shared world.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue

Lanbo She; Shaohua Yang; Yu Cheng; Yunyi Jia; Joyce Y. Chai; Ning Xi

This paper describes an approach for a robotic arm to learn new actions through dialogue in a simplified blocks world. In particular, we have developed a three-tier action knowledge representation that, on the one hand, supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot, and, on the other hand, supports the application of existing planning algorithms to address novel situations. Our empirical studies have shown that, based on this representation, the robot was able to learn and execute basic actions in the blocks world. When a human is engaged in a dialogue to teach the robot new actions, step-by-step instructions lead to better learning performance than one-shot instructions.
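
The sketch below shows one plausible shape for such a three-tier representation (names and fields are illustrative, not the paper's schema): a language-facing action name, planner-facing preconditions and effects, and a sequence of sensorimotor primitives.

```python
from dataclasses import dataclass, field

# A toy three-tier action representation for a blocks-world arm:
# (1) a language tier naming the action for dialogue,
# (2) a symbolic tier usable by a planner (preconditions/effects),
# (3) a sensorimotor tier grounding it in arm primitives.

@dataclass
class Action:
    name: str                                       # language tier
    preconds: set = field(default_factory=set)      # symbolic tier
    effects: set = field(default_factory=set)
    primitives: list = field(default_factory=list)  # sensorimotor tier

stack = Action(
    name="stack x on y",
    preconds={"holding(x)", "clear(y)"},
    effects={"on(x,y)", "handempty"},
    primitives=["move_above(y)", "lower_until_contact()", "open_gripper()"],
)

# A newly taught action can be defined by composing known ones step by
# step, which is one reading of why step-by-step instruction helps: each
# step maps onto an operator whose grounding is already known.
print(stack.preconds, "->", stack.effects)
```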


Intelligent User Interfaces | 2005

Linguistic theories in efficient multimodal reference resolution: an empirical investigation

Joyce Y. Chai; Zahar Prasov; Joseph Blaim; Rong Jin

Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech, gesture, and gaze. To build effective multimodal interfaces, it is important to understand users' multimodal inputs. Previous linguistic and cognitive studies indicate that user language behavior does not occur randomly, but rather follows certain linguistic and cognitive principles. This paper therefore investigates the use of linguistic theories in multimodal interpretation. In particular, we present a greedy algorithm that incorporates Conversational Implicature and the Givenness Hierarchy for efficient multimodal reference resolution. Empirical studies indicate that this algorithm significantly reduces the complexity of multimodal reference resolution compared to a previous graph-matching approach. One major advantage of this greedy algorithm is that prior linguistic and cognitive knowledge can be used to guide the search and significantly prune the search space. Because of its simplicity and generality, this approach has the potential to improve the robustness of interpretation and provide a more practical solution to multimodal input interpretation.
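
As a rough illustration, the sketch below resolves referring expressions greedily in decreasing order of givenness, matching each to the most salient compatible candidate. The givenness ranking, salience values, and compatibility test are crude placeholders for the paper's use of Conversational Implicature and the Givenness Hierarchy.

```python
# Crude givenness ranking keyed on the first word of the expression.
GIVENNESS = {"it": 3, "this": 2, "that": 2, "the": 1, "a": 0}

def givenness(expr):
    return GIVENNESS.get(expr.split()[0].lower(), 0)

def compatible(expr, obj_type):
    # Crude semantic check: a full NP must mention the object's type;
    # pronouns and demonstratives are compatible with anything.
    return givenness(expr) >= 2 or obj_type in expr

def resolve(expressions, candidates):
    """Resolve expressions in decreasing order of givenness, each to the
    most salient unused compatible candidate. Committing greedily uses the
    linguistic ordering to prune the space a graph matcher would search."""
    assignment, used = {}, set()
    for expr in sorted(expressions, key=givenness, reverse=True):
        for obj, obj_type, salience in sorted(candidates, key=lambda c: -c[2]):
            if obj not in used and compatible(expr, obj_type):
                assignment[expr] = obj
                used.add(obj)
                break
    return assignment

exprs = ["the blue house", "this", "it"]
cands = [("house_7", "house", 0.4), ("house_2", "house", 0.9),
         ("lake_1", "lake", 0.7)]  # (object, type, salience)
print(resolve(exprs, cands))
# {'it': 'house_2', 'this': 'lake_1', 'the blue house': 'house_7'}
```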

Collaboration


Dive into Joyce Y. Chai's collaborations.

Top Co-Authors

Lanbo She, Michigan State University
Changsong Liu, Michigan State University
Rui Fang, Michigan State University
Chen Zhang, Michigan State University
Tyler Baldwin, Michigan State University
Shaolin Qu, Michigan State University
Shaohua Yang, Michigan State University
Zahar Prasov, Michigan State University
Qiaozi Gao, Michigan State University