
Publication


Featured research published by Johanne R. Trippas.


Conference on Human Information Interaction and Retrieval | 2017

How Do People Interact in Conversational Speech-Only Search Tasks: A Preliminary Analysis

Johanne R. Trippas; Damiano Spina; Lawrence Cavedon; Mark Sanderson

We present preliminary findings from a study of mixed initiative conversational behaviour for informational search in an acoustic setting. The aim of the observational study is to reveal insights into how users would conduct searches over voice where a screen is absent but where users are able to converse interactively with the search system. We conducted a laboratory-based observational study of 13 pairs of participants, each completing three search tasks with different cognitive complexity levels. The communication between the pairs was analyzed for interaction patterns used in the search process. This setup mimics the situation of a user interacting with a search system via a speech-only interface.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2015

Towards Understanding the Impact of Length in Web Search Result Summaries over a Speech-only Communication Channel

Johanne R. Trippas; Damiano Spina; Mark Sanderson; Lawrence Cavedon

Presenting search results over a speech-only communication channel involves a number of challenges for users due to cognitive limitations and the serial nature of speech. We investigated the impact of search result summary length in speech-based web search, and compared our results to a text baseline. Based on crowdsourced workers, we found that users preferred longer, more informative summaries for text presentation. For audio, user preferences depended on the style of query. For single-facet queries, shortened audio summaries were preferred; additionally, users were found to judge relevance with accuracy similar to that of text-based summaries. For multi-facet queries, user preferences were not as clear, suggesting that more sophisticated techniques are required to handle such queries.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Modelling Information Needs in Collaborative Search Conversations

Sosuke Shiga; Hideo Joho; Roi Blanco; Johanne R. Trippas; Mark Sanderson

The increase of voice-based interaction has changed the way people seek information, making search more conversational. Development of effective conversational approaches to search requires better understanding of how people express information needs in dialogue. This paper describes the creation and examination of over 32K spoken utterances collected during 34 hours of collaborative search tasks. The contribution of this work is three-fold. First, we propose a model of conversational information needs (CINs) based on a synthesis of relevant theories in Information Seeking and Retrieval. Second, we show several behavioural patterns of CINs based on the proposed model. Third, we identify effective feature groups that may be useful for detecting CIN categories from conversations. This paper concludes with a discussion of how these findings can facilitate the advancement of conversational search applications.


Journal of the Association for Information Science and Technology | 2017

Extracting audio summaries to support effective spoken document search

Damiano Spina; Johanne R. Trippas; Lawrence Cavedon; Mark Sanderson

We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.


Conference on Information and Knowledge Management | 2015

Results Presentation Methods for a Spoken Conversational Search System

Johanne R. Trippas; Damiano Spina; Mark Sanderson; Lawrence Cavedon

We propose research to investigate a new paradigm for Interactive Information Retrieval (IIR) where all input and output is mediated via speech. Our aim is to develop a new framework for effective and efficient IIR over a speech-only channel: a Spoken Conversational Search System (SCSS). This SCSS will provide an interactive conversational approach to determining user information needs, presenting results, and enabling search reformulations. We have thus far investigated the format of results summaries for both audio and text, including features such as summary length and whether summaries are generated from noisy speech-recognition transcripts or clean transcripts of spoken documents. In this paper we discuss future directions regarding a novel spoken interface targeted at search result presentation, query intent detection, and interaction patterns for audio search.


Conference on Human Information Interaction and Retrieval | 2018

Informing the Design of Spoken Conversational Search: Perspective Paper

Johanne R. Trippas; Damiano Spina; Lawrence Cavedon; Hideo Joho; Mark Sanderson

We conducted a laboratory-based observational study where pairs of people performed search tasks communicating verbally. Examination of the discourse allowed commonly used interactions to be identified for Spoken Conversational Search (SCS). We compared the interactions to existing models of search behaviour. We find that SCS is more complex and interactive than traditional search. This work enhances our understanding of different search behaviours and proposes research opportunities for an audio-only search system. Future work will focus on creating models of search behaviour for SCS and evaluating these against actual SCS systems.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2018

Analyzing and Characterizing User Intent in Information-seeking Conversations

Chen Qu; Liu Yang; W. Bruce Croft; Johanne R. Trippas; Yongfeng Zhang; Minghui Qiu

Understanding and characterizing how people interact in information-seeking conversations is crucial in developing conversational search systems. In this paper, we introduce a new dataset designed for this purpose and use it to analyze information-seeking conversations by user intent distribution, co-occurrence, and flow patterns. The MSDialog dataset is a labeled dialog dataset of question answering (QA) interactions between information seekers and providers from an online forum on Microsoft products. The dataset contains more than 2,000 multi-turn QA dialogs with 10,000 utterances that are annotated with user intent on the utterance level. Annotations were done using crowdsourcing. With MSDialog, we find some highly recurring patterns in user intent during an information-seeking process. They could be useful for designing conversational search systems. We will make our dataset freely available to encourage exploration of information-seeking conversation models.


SIGIR Forum | 2018

ACM SIGIR Student Liaison Program

Mohammad Aliannejadi; Maram Hasanain; Jiaxin Mao; Jaspreet Singh; Johanne R. Trippas; Hamed Zamani; Laura Dietz

ACM SIGIR has recently created the Student Liaison Program, a means to connect and stay connected with the student body of the information retrieval (IR) community. This report provides more information about the program, introduces the founding ACM SIGIR student liaisons, and explains past, ongoing, and future activities. We seek suggestions and recommendations on the current plans, as well as new ideas that fit our mission.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2015

Spoken Conversational Search: Information Retrieval over a Speech-only Communication Channel

Johanne R. Trippas

This research is investigating a new interaction paradigm for Interactive Information Retrieval (IIR), where all input and output is mediated via speech. While such information systems have been important for the visually impaired for many years, a renewed focus on speech is driven by the growing sales of internet-enabled mobile devices. Presenting search results over a speech-only communication channel involves a number of challenges for users due to cognitive limitations and the serial nature of the audio channel [2]. Other research has shown that one cannot just ‘bolt on’ speech recognizers and screen readers to an existing system [5]. Therefore the aim of this research is to develop a new framework for effective and efficient IIR over a speech-only channel: a Spoken Conversational Search System (SCSS), which provides a conversational approach to determining user information needs, presenting results, and enabling search reformulations. This research will go beyond current Voice Search approaches by aiming for a greater integration between document search and conversational dialogue processes in order to provide a more efficient and effective search experience when using a SCSS. We will also investigate an information seeking model for audio and language models.

Presenting a Search Engine Result Page (SERP) over a speech-only communication channel presents a number of challenges; for example, the textual component of a standard search results list has been shown to be ineffectual [4]. The transient nature of speech poses problems due to memory constraints, and makes the possibility of “skimming” back and forth over a list of results (a standard process in browsing a visual list) difficult. These issues are greatly exacerbated when the result being sought is further down the list.

This research will advance the knowledge base by: providing an understanding of which strategies and IIR techniques for SCSS are best for users; defining novel technologies for contextual conversational interaction with a large collection of unstructured documents that support effective search over a speech-only communication channel (audio); and determining new methods for providing summary-based result presentation for unstructured documents.


ACM Multimedia | 2015

SpeakerLDA: Discovering Topics in Transcribed Multi-Speaker Audio Contents

Damiano Spina; Johanne R. Trippas; Lawrence Cavedon; Mark Sanderson

Topic models such as Latent Dirichlet Allocation (LDA) have been extensively used for characterizing text collections according to the topics discussed in documents. Organizing documents according to topic can be applied to different information access tasks such as document clustering, content-based recommendation or summarization. Spoken documents such as podcasts typically involve more than one speaker (e.g., meetings, interviews, chat shows or news with reporters). This paper presents a work-in-progress based on a variation of LDA that includes in the model the different speakers participating in conversational audio transcripts. Intuitively, each speaker has her own background knowledge which generates different topic and word distributions. We believe that informing a topic model with speaker segmentation (e.g., using existing speaker diarization techniques) may enhance discovery of topics in multi-speaker audio content.
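The speaker-aware model described above is work in progress, but the standard LDA baseline it extends is straightforward to run. Below is a minimal sketch (not the paper's SpeakerLDA) using scikit-learn's `LatentDirichletAllocation` on a few toy utterances standing in for transcript segments; the utterances and topic count are hypothetical choices for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "transcript segments" standing in for podcast utterances (hypothetical data).
utterances = [
    "search results spoken over an audio channel",
    "audio summaries support spoken document search",
    "topic models cluster documents by discussed themes",
    "latent dirichlet allocation discovers topics in text",
]

# Build a bag-of-words document-term matrix.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(utterances)

# Fit plain LDA with two latent topics; SpeakerLDA would additionally
# condition these distributions on speaker segmentation.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # one topic distribution per utterance

# Each row of doc_topics is a probability distribution over the two topics.
```

In a speaker-aware variant, the per-document topic proportions would be informed by which speaker produced each segment (e.g., from a diarization step), rather than treated identically across the transcript.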

Collaboration


Dive into Johanne R. Trippas's collaborations.

Top Co-Authors

Chen Qu (University of Massachusetts Amherst)
Hamed Zamani (University of Massachusetts Amherst)
Laura Dietz (University of New Hampshire)
Liu Yang (University of Massachusetts Amherst)