Xiangming Mu
University of North Carolina at Chapel Hill
Publications
Featured research published by Xiangming Mu.
Proceedings of the ASIST Annual Meeting | 2005
Xiangming Mu; Gary Marchionini
An enriched video metadata framework including video authorization using VAST, metadata integration, and user level applications is presented. The Video Annotation and Summarization Tool (VAST) is a novel video metadata authorization system that integrates both semantic and visual metadata. Balance between accuracy and efficiency is achieved by adopting a semi-automatic authorization approach. Semantic information such as video annotation is integrated with the visual information under an XML schema. Results from user studies and field experiments using the VAST metadata demonstrated that the enriched metadata were seamlessly incorporated into application level programs such as the Interactive Shared Educational Environment (ISEE). Given the VAST metadata, a new video surrogate called the smartlink video surrogate was proposed and deployed in the ISEE. VAST has become a key component of the emerging Open Video Digital Library toolkit.
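The abstract describes integrating semantic annotations and visual metadata under a shared XML schema. A minimal sketch of what such a combined record might look like, built with Python's standard library (the element names and attributes here are illustrative assumptions, not taken from the actual VAST schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical VAST-style segment record: one element combining
# semantic metadata (a textual annotation) with visual metadata
# (a keyframe surrogate). Names are illustrative only.
segment = ET.Element("segment", start="00:01:20", end="00:02:05")

semantic = ET.SubElement(segment, "semantic")
ET.SubElement(semantic, "annotation").text = "Instructor introduces the course syllabus"

visual = ET.SubElement(segment, "visual")
ET.SubElement(visual, "keyframe", src="keyframes/seg03.jpg")

print(ET.tostring(segment, encoding="unicode"))
```

Keeping both metadata types under one schema is what lets downstream applications such as ISEE consume a single record per segment rather than reconciling separate annotation and keyframe stores.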
Information Processing and Management | 2003
Gary Marchionini; Xiangming Mu
In this paper, we describe a series of user studies that were used to advance understanding of how people use electronic tables (E-tables) and inform the development of a web-based statistical table browser for use by non-specialists. Interviews and focus groups with providers, intermediaries, and non-specialist end users; transaction log analysis; and e-mail content analysis were used to develop a user-task taxonomy for government statistical data. These studies were the basis for a prototype web-based interface for browsing statistical tables. Two usability studies with 23 subjects and two eye-tracking studies with 21 subjects were conducted with this interface and paper, PDF, and spreadsheet interfaces. The results of the needs assessment, prototype development, and user studies provide a foundation for understanding E-tables in general and guiding continued design of interfaces for E-tables.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2003
Xiangming Mu; Gary Marchionini
Four statistical visual feature indexes are proposed: SLM (Shot Length Mean), the average length of each shot in a video; SLD (Shot Length Deviation), the standard deviation of shot lengths for a video; ONM (Object Number Mean), the average number of objects per frame of the video; and OND (Object Number Deviation), the standard deviation of the number of objects per frame across the video. Each of these indexes provides a unique perspective on video content. A novel video retrieval interface has been developed as a platform to examine our assumption that the new indexes facilitate some video retrieval tasks. Initial feedback is promising and formal experiments are planned.
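The four indexes are plain statistics over two per-video series: shot lengths and per-frame object counts. A minimal sketch of how they might be computed (the function name and input format are assumptions for illustration; the paper does not specify an implementation):

```python
import statistics

def video_feature_indexes(shot_lengths, objects_per_frame):
    """Compute the four statistical visual feature indexes.

    shot_lengths: duration of each shot in the video (seconds or frames)
    objects_per_frame: detected object count for each sampled frame
    """
    return {
        "SLM": statistics.mean(shot_lengths),       # Shot Length Mean
        "SLD": statistics.pstdev(shot_lengths),     # Shot Length Deviation
        "ONM": statistics.mean(objects_per_frame),  # Object Number Mean
        "OND": statistics.pstdev(objects_per_frame) # Object Number Deviation
    }

# Example: a video with three shots and five sampled frames
indexes = video_feature_indexes([4.0, 6.0, 8.0], [2, 3, 3, 4, 3])
```

Intuitively, SLM/SLD capture editing pace and its variability (e.g. a music video versus a lecture recording), while ONM/OND capture visual density and how much it fluctuates.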
Proceedings of the ASIST Annual Meeting | 2007
Xin Fu; John C. Schaefer; Gary Marchionini; Xiangming Mu
Numerous studies have demonstrated that annotation is an important part of human reading behavior in both printed and electronic environments. Annotation in the electronic environment requires special support due to limited media affordances. We have witnessed continuous improvement of annotation functions in some electronic reading environments, such as text documents in Microsoft Word or Adobe Acrobat and images in Flickr. However, comparatively little research has been conducted to understand people's needs for making annotations when they watch videos, let alone work to develop tools to support those needs. With the increasing use of videos in many aspects of our lives, from professional activities to personal entertainment, by not only specialists but also general consumers, there is a need for more effort in designing annotation facilities for video navigation and manipulation devices. This study focuses on video annotation in a learning environment. We studied how people in a teaching assistant training class annotated videotaped instructional presentations. We attempted to understand the value of annotation in achieving their learning objectives and how video annotation functions helped in supporting their tasks. The results of this study provide implications for video annotation system design.
ACM/IEEE Joint Conference on Digital Libraries | 2003
Meng Yang; Xiangming Mu; Gary Marchionini
Traditional video libraries only catalog and index videos at the piece level. Digital videos need to be cataloged and indexed both at multiple levels (e.g., video, segment, and frame) and through multiple modalities (e.g., textual description and visual surrogate). VIVO (Video Indexing and Visualization Organizer) is a prototype tool we developed to help digital video librarians input, edit, and manage video metadata elements at different levels.
European Conference on Research and Advanced Technology for Digital Libraries | 2002
Barbara M. Wildemuth; Gary Marchionini; Todd Wilkens; Meng Yang; Gary Geisler; Beth Fowler; Anthony Hughes; Xiangming Mu
ACM/IEEE Joint Conference on Digital Libraries | 2003
Xiangming Mu; Gary Marchionini; Amy Pattee
Proceedings of the ASIST Annual Meeting | 2001
Xiangming Mu; Gary Marchionini
Archive | 2004
Gary Marchionini; Xiangming Mu
Information Processing and Management | 2005
Xiangming Mu