
Publication


Featured research published by David House.


Conference of the European Chapter of the Association for Computational Linguistics | 1999

The TIPSTER SUMMAC Text Summarization Evaluation

Inderjeet Mani; David House; Gary L. Klein; Lynette Hirschman; Therese Firmin; Beth Sundheim

The TIPSTER Text Summarization Evaluation (SUMMAC) has established definitively that automatic text summarization is very effective in relevance assessment tasks. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in F-score accuracy. SUMMAC has also introduced a new intrinsic method for automated evaluation of informative summaries.


Natural Language Engineering | 2002

SUMMAC: a text summarization evaluation

Inderjeet Mani; Gary L. Klein; David House; Lynette Hirschman; Therese Firmin; Beth Sundheim

The TIPSTER Text Summarization Evaluation (SUMMAC) has developed several new extrinsic and intrinsic methods for evaluating summaries. It has established definitively that automatic text summarization is very effective in relevance assessment tasks on news articles. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy. Analysis of feedback forms filled in after each decision indicated that the intelligibility of present-day machine-generated summaries is high. Systems that performed most accurately in the production of indicative and informative topic-related summaries used term frequency and co-occurrence statistics, and vocabulary overlap comparisons between text passages. However, in the absence of a topic, these statistical methods do not appear to provide any additional leverage: in the case of generic summaries, the systems were indistinguishable in accuracy. The paper discusses some of the tradeoffs and challenges faced by the evaluation, and also lists some of the lessons learned, impacts, and possible future directions. The evaluation methods used in the SUMMAC evaluation are of interest to both summarization evaluation as well as evaluation of other ‘output-related’ NLP technologies, where there may be many potentially acceptable outputs, with no automatic way to compare them.
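
The better-performing topic-related systems above relied on term frequency, co-occurrence statistics, and vocabulary overlap. As a rough illustration of that style of extractive summarization, the sketch below scores sentences by average term frequency boosted by overlap with a topic description and keeps the top-ranked sentences up to a length budget (e.g. 17% of the original). It is a hypothetical, minimal example; the tokenizer, weighting, and cutoff are assumptions, not a reconstruction of any SUMMAC participant's system.

    # Minimal sketch of topic-related extractive summarization using
    # term frequency and vocabulary overlap (illustrative, not a SUMMAC system).
    import re
    from collections import Counter

    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())

    def summarize(document, topic, ratio=0.17):
        sentences = re.split(r"(?<=[.!?])\s+", document.strip())
        tf = Counter(tokenize(document))              # document-wide term frequencies
        topic_terms = set(tokenize(topic))
        scored = []
        for i, sent in enumerate(sentences):
            terms = tokenize(sent)
            if not terms:
                continue
            freq = sum(tf[t] for t in terms) / len(terms)                    # term-frequency evidence
            overlap = len(set(terms) & topic_terms) / (len(topic_terms) or 1)  # topic overlap
            scored.append((freq * (1.0 + overlap), i, sent))
        budget = int(len(document) * ratio)           # e.g. 17% of full-text length
        chosen, used = [], 0
        for score, i, sent in sorted(scored, reverse=True):
            if used + len(sent) > budget and chosen:
                break
            chosen.append((i, sent))
            used += len(sent)
        return " ".join(s for _, s in sorted(chosen))  # restore original sentence order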


Communications of The ACM | 2001

Expert Finding for Collaborative Virtual Environments

Mark T. Maybury; Raymond J. D'Amore; David House

While computer-supported collaborative virtual environments have been successfully applied to revolutionize distance learning, distributed design, and collaborative analysis and planning (see Ragusa and Bochenek’s introduction to this section), a fundamental challenge of these systems is establishing the right teams of individuals during interactive problem solving for consultation, coordination, or collaboration. Motivated by our use of place-based collaborative environments for analysis and decision support [3], we created Expert Finder and XperNet, two software programs that automatically profile topics of interest and expertise associated with employees based on employees’ tool use, publications, project roles, and written communication with others. Figure 1 illustrates Expert Finder in action. In this case a user types in the keywords "data mining" and Expert Finder replies with a rank-ordered list of employees whose expertise profile, inferred from a variety of evidence sources, best matches this query. Evidence includes the frequency of documents published by an employee on this topic, contents of any published resume, and documents that mention employees in conjunction with a particular topic (for example, corporate newsletters). In the latter case, information extraction technology is used to detect names within unstructured documents. These names are then correlated with topic areas in the documents. Despite low human inter-subject agreement, empirical evaluations [1] comparing 10 technical resource managers’ performances with Expert Finder on five specialty areas (data mining, chemicals, human-computer interaction, network security, and collaboration) demonstrated that Expert Finder performed at 60% precision and 40% recall when appro…
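
The rank-ordered list described above can be pictured as a weighted combination of evidence counts per employee. The sketch below is a hypothetical illustration in that spirit; the evidence types, weights, and the rank_experts helper are assumptions for illustration, not the actual Expert Finder implementation.

    # Hypothetical sketch of evidence-based expert ranking in the spirit of
    # Expert Finder: each evidence source contributes a weighted count per employee.
    from collections import defaultdict

    # Assumed evidence weights (illustrative only).
    WEIGHTS = {"authored_doc": 3.0, "resume_match": 2.0, "newsletter_mention": 1.0}

    def rank_experts(query_terms, evidence):
        """evidence: iterable of (employee, source, text) records."""
        query = {t.lower() for t in query_terms}
        scores = defaultdict(float)
        for employee, source, text in evidence:
            terms = set(text.lower().split())
            hits = len(query & terms)                 # crude topical match
            if hits:
                scores[employee] += WEIGHTS.get(source, 1.0) * hits
        # Return employees ordered by how well their profile matches the query.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Example: a "data mining" query over a toy evidence set.
    records = [
        ("A. Smith", "authored_doc", "Scalable data mining for intrusion detection"),
        ("A. Smith", "newsletter_mention", "Smith briefed sponsors on data mining results"),
        ("B. Jones", "resume_match", "network security and collaboration tools"),
    ]
    print(rank_experts(["data", "mining"], records))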


International Journal of Human-computer Interaction | 2002

Awareness of Organizational Expertise

Mark T. Maybury; Raymond J. D'Amore; David House

This article describes automated tools for increasing organizational awareness within a global enterprise. The MITRE Corporation is the context for this work; however, the tools and techniques are general and should apply to a wide variety of distributed, heterogeneous organizations. These tools provide awareness of team members and materials in virtual collaboration environments as well as support for automated discovery of distributed experts. The results are embodied in three systems: MITRE's Collaborative Virtual Workspace (CVW), Expert Finder, and XpertNet. CVW is a place-based collaboration environment that enables team members to find one another and work together. Expert Finder exploits the intellectual products created within an organization to support automated expertise identification. XpertNet addresses the problem of detecting extant or emerging classes of expertise without a priori knowledge of their existence. Expert Finder and XpertNet combine to detect and track experts and expert communities within a complex work environment. After describing the background of knowledge management at MITRE, this article describes the architecture and use of the collaboration and expert finder systems to enhance organizational awareness, provides some principles of expertise, and concludes with an outline of future research directions.
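
XpertNet's goal of detecting expertise classes without knowing them in advance suggests an unsupervised grouping of employees by the vocabulary of their work products. The sketch below clusters toy employee term profiles with k-means as one way to surface such groupings; the features, library choice, and cluster count are illustrative assumptions, not the XpertNet algorithm.

    # Hypothetical sketch: discover expertise groupings by clustering employees
    # on the vocabulary of their documents (not the actual XpertNet method).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    employees = ["A. Smith", "B. Jones", "C. Lee", "D. Patel"]
    docs = [
        "data mining association rules classification",
        "network security intrusion detection firewalls",
        "data mining clustering large databases",
        "network security cryptographic protocols",
    ]

    vectors = TfidfVectorizer().fit_transform(docs)   # one term profile per employee
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Group employees by discovered cluster; each cluster is a candidate expertise community.
    for cluster in sorted(set(labels)):
        members = [e for e, l in zip(employees, labels) if l == cluster]
        print(f"cluster {cluster}: {members}")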


Language Resources and Evaluation | 2000

How to Evaluate Your Question Answering System Every Day … and Still Get Real Work Done

Eric Breck; John D. Burger; Lisa Ferro; Lynette Hirschman; David House; Marc Light; Inderjeet Mani


Archive | 1998

The TIPSTER SUMMAC Text Summarization Evaluation: Final Report

Inderjeet Mani; David House; Gary A. Klein; Lynette Hirschman; Leo Obrst; Therese Firmin; M Chrzanowski; Beth Sundheim


Multimedia Information Retrieval | 1997

Towards content-based browsing of broadcast news video

Inderjeet Mani; David House; Mark T. Maybury


Text Retrieval Conference (TREC) | 1999

A Sys Called Qanda

Eric Breck; John D. Burger; Lisa Ferro; David House; Marc Light; Inderjeet Mani


Research-Technology Management | 2000

Managers at Work: Automating the Finding of Experts

Mark T. Maybury; Ray D'Amore; David House
