
Publication


Featured research published by Emile L. Morse.


International Conference on Supporting Group Work | 2001

A comparison of usage evaluation and inspection methods for assessing groupware usability

Michelle Potts Steves; Emile L. Morse; Carl Gutwin; Saul Greenberg

Many researchers believe that groupware can only be evaluated by studying real collaborators in their real contexts, a process that tends to be expensive and time-consuming. Others believe that it is more practical to evaluate groupware through usability inspection methods. Deciding between these two approaches is difficult, because it is unclear how they compare in a real evaluation situation. To address this problem, we carried out a dual evaluation of a groupware system, with one evaluation applying user-based techniques and the other using inspection methods. We compared the results from the two evaluations and concluded that, while the two methods have their own strengths, weaknesses, and trade-offs, they are complementary. Because the two methods found overlapping problems, we expect that they can be used in tandem to good effect: for example, applying the discount inspection method before a field study means the system deployed in the more expensive field study has a better chance of doing well, because some pertinent usability problems will already have been addressed.


International Conference on Human-Computer Interaction | 2011

A field study of user behavior and perceptions in smartcard authentication

Celeste Lyn Paul; Emile L. Morse; Aiping Zhang; Yee-Yin Choong; Mary F. Theofanos

A field study of 24 participants over 10 weeks explored user behavior and perceptions in a smartcard authentication system. Ethnographic methods used to collect data included diaries, surveys, interviews, and field observations. We observed a number of issues users experienced while integrating smartcards into their work processes, including forgetting smartcards in readers, forgetting to use smartcards to authenticate, and difficulty understanding digital signatures and encryption. The greatest perceived benefit was the use of an easy-to-remember PIN as a replacement for complicated passwords. The greatest perceived drawback was the lack of smartcard-supported applications. Overall, most participants had a positive experience using smartcards for authentication. Perceptions were influenced more by the personal benefits participants experienced than by any increase in security.


Human Factors in Computing Systems | 2006

Does habituation affect fingerprint quality?

Mary F. Theofanos; Ross J. Micheals; Jean Scholtz; Emile L. Morse; Peter May

Interest in the environmental factors that affect biometric image quality is increasing as biometric technologies are implemented in various business applications. This study aims to determine, through repeated trials, the effects of various external factors on the image quality and usability of prints collected by an electronic reader. These factors include age and gender, but also the absence or presence of immediate feedback. A key factor in biometric systems that will be used daily or routinely is habituation. Users' behavior could change as a result of acclimatization: one's input might increase in quality as one learns to use the system better, or decrease in quality as comfort with the system translates into carelessness.


North American Chapter of the Association for Computational Linguistics | 2006

User-Centered Evaluation of Interactive Question Answering Systems

Diane Kelly; Paul B. Kantor; Emile L. Morse; Jean Scholtz; Ying Sun

We describe a large-scale evaluation of four interactive question answering systems with real users. The purpose of the evaluation was to develop evaluation methods and metrics for interactive QA systems. We present our evaluation method as a case study, and discuss the design and administration of the evaluation components and the effectiveness of several evaluation techniques with respect to their validity and discriminatory power. Our goal is to provide a roadmap for others conducting evaluations of their own systems, and to put forward a research agenda for interactive QA evaluation.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Quantifying Usability: The Industry Usability Reporting Project

Jean Scholtz; Anna Wichansky; Keith Butler; Emile L. Morse; Sharon J. Laskowski

The Industry USability Reporting (IUSR) Project seeks to help potential corporate consumers of software obtain information about the usability of supplier products, to measure the benefit of more usable software, and to increase communication about usability needs between consumers and suppliers. Human factors and software engineers have developed a Common Industry Format (ANSI/NCITS 354-2001) for sharing usability information. Four industry pilot studies verified the format's usefulness in procurement and assessed the costs and benefits of including usability test results in the software purchase process. Use of the Common Industry Format can increase communication across corporate boundaries and help improve the usability of software for consumers. The standard may also be applicable to setting usability requirements and to measuring the usability of websites, hardware, and universal access.
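
To make the standard's purpose concrete, here is a minimal, hypothetical sketch of the kind of summative data a CIF-style report carries between supplier and purchaser. The field names merely paraphrase the usual effectiveness, efficiency, and satisfaction measures; they are illustrative assumptions, not the normative ANSI/NCITS 354-2001 schema.

# Illustrative sketch only: these fields approximate the kinds of summative
# measures a CIF-style usability report communicates; they are NOT the
# normative ANSI/NCITS 354-2001 schema.
from dataclasses import dataclass

@dataclass
class UsabilityTestSummary:
    product: str
    n_participants: int
    task_completion_rate: float  # effectiveness: fraction of tasks completed
    mean_task_time_s: float      # efficiency: average time per task, in seconds
    satisfaction_score: float    # e.g. a questionnaire score on a 0-100 scale

# Hypothetical report a supplier might hand to a purchasing organization.
report = UsabilityTestSummary(
    product="ExampleSuite 2.0",
    n_participants=12,
    task_completion_rate=0.85,
    mean_task_time_s=210.0,
    satisfaction_score=72.5,
)
print(report)

A purchaser comparing candidate products could weigh such figures alongside price and feature checklists, which is precisely the communication gap the project aims to close.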


Journal of the Association for Information Science and Technology | 2011

Using cross-evaluation to evaluate interactive QA systems

Ying Sun; Paul B. Kantor; Emile L. Morse

In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks and context, and of the users of the systems; statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality or relevance of materials. We conclude that the method is indeed effective for comparing end-to-end interactive QA systems, and that it does so with high efficiency.
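
A minimal sketch of the statistical idea, under my own assumptions rather than the authors' actual analysis: treat task and user as nuisance factors in an additive linear model, so that the remaining system terms estimate the system effect. The column names and scores below are invented for illustration.

# Minimal sketch (not the authors' code): isolating a system effect by
# modeling task and user as nuisance factors, in the spirit of cross-evaluation.
# Data layout and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format results: one row per (user, task, system) trial.
scores = pd.DataFrame({
    "user":   ["u1", "u1", "u2", "u2", "u3", "u3", "u4", "u4"],
    "task":   ["t1", "t2", "t1", "t2", "t2", "t1", "t2", "t1"],
    "system": ["A",  "B",  "B",  "A",  "A",  "B",  "B",  "A"],
    "score":  [0.72, 0.61, 0.58, 0.69, 0.75, 0.55, 0.60, 0.71],
})

# Fit an additive model: the task and user terms absorb their effects,
# leaving the system coefficients as adjusted estimates of the systems'
# impact on task performance.
model = smf.ols("score ~ C(system) + C(task) + C(user)", data=scores).fit()
print(anova_lm(model))                      # does 'system' explain variance beyond task/user?
print(model.params.filter(like="system"))   # adjusted system-effect estimates

With a roughly balanced assignment of users and tasks to systems, the adjusted system coefficients remain estimable even from small samples, which is consistent with the paper's point about needing surprisingly few subjects.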


Natural Language Engineering | 2009

Questionnaires for eliciting evaluation data from users of interactive question answering systems

Diane Kelly; Paul B. Kantor; Emile L. Morse; Jean Scholtz; Ying Sun

Evaluating interactive question answering (QA) systems with real users can be challenging because traditional evaluation measures based on the relevance of returned items are difficult to employ: relevance judgments can be unstable in multi-user evaluations. The work reported in this paper evaluates the effectiveness of three questionnaires in distinguishing among a set of interactive QA systems: a Cognitive Workload Questionnaire (NASA TLX), and Task and System Questionnaires customized to a specific interactive QA application. The questionnaires were evaluated with four systems, seven analysts, and eight scenarios during a 2-week workshop. Overall, results demonstrate that all three questionnaires are effective at distinguishing among systems, with the Task Questionnaire being the most sensitive. Results also provide initial support for the validity and reliability of the questionnaires.
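
As a rough illustration of what "distinguishing among systems" can mean operationally (my construction, not the paper's analysis), the sketch below averages hypothetical NASA TLX responses into workload scores and asks, via a one-way ANOVA, whether the questionnaire separates the systems; a more sensitive instrument would separate them more reliably.

# Minimal sketch under stated assumptions: testing whether a questionnaire
# discriminates among systems by comparing per-system score distributions.
# Ratings below are invented, and the simple unweighted average is an
# illustrative simplification of TLX scoring.
from scipy.stats import f_oneway

# Hypothetical raw NASA TLX ratings (six subscales, 0-100) collected after
# sessions with each of three systems; each inner list is one response.
tlx = {
    "sysA": [[55, 60, 40, 50, 65, 45], [60, 70, 35, 55, 60, 50]],
    "sysB": [[30, 35, 25, 40, 30, 20], [25, 40, 30, 35, 45, 30]],
    "sysC": [[50, 55, 45, 60, 50, 40], [45, 50, 55, 50, 55, 45]],
}
per_system = {s: [sum(r) / len(r) for r in rs] for s, rs in tlx.items()}

# A small p-value suggests the questionnaire separates the systems; the
# paper's sensitivity question is essentially which instrument does this
# most reliably.
f_stat, p_value = f_oneway(*per_system.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")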


ACM Sigchi Bulletin - A Supplement To Interactions | 2002

A new usability standard and what it means to you

Jean Scholtz; Emile L. Morse

The Common Industry Format (CIF) was approved on December 12, 2001 as an ANSI standard (ANSI/NCITS 354-2001). The CIF was developed under the auspices of the Industry USability Reporting (IUSR) Project (www.nist.gov/iusr), which was organized by the National Institute of Standards and Technology in 1997. The IUSR participants include representatives from prominent suppliers of software and from large consumer organizations, usability consultants, and academics. The goal is to raise the visibility of software usability so that it can be used as a factor when companies make procurement decisions.


Collaboration Technologies and Systems | 2006

A Longitudinal Study of the Use of a Collaboration Tool: A Breadth and Depth Analysis

Jean Scholtz; Emile L. Morse; Michelle Potts Steves

In this paper we present both a broad and a deep look at the use of a collaboration tool in the intelligence community. Through an experimental program, intelligence analysts are given the opportunity to explore and use tools to determine if the tools provide sufficient value to be certified and moved into the analytic work environment. The goal of this program is to bring advanced technologies to the intelligence community through research and experimentation. New tools are evaluated using a metrics-based assessment. Tools that successfully pass these evaluations are then introduced on an experimental network. Analysts employed by the experimental program work alongside analysts in the intelligence community and look for opportunities where the experimental tools could be useful in current analytic processes. These uses are also evaluated to determine the value of the tools in the analytic environment.


Software Process: Improvement and Practice | 2003

Using consumer demands to bridge the gap between software engineering and usability engineering

Jean Scholtz; Emile L. Morse

A number of efforts are being undertaken to integrate usability engineering and software engineering in the software-development process. The majority of these integration efforts focus on software developers, usability engineers, or defining new processes. In this article, we report on an effort to involve the consumer of software by providing a mechanism, namely the Common Industry Format (CIF), to formally request usability information on the software to be purchased.

Collaboration


Dive into Emile L. Morse's collaborations.

Top Co-Authors

Jean Scholtz, Pacific Northwest National Laboratory
Michelle Potts Steves, National Institute of Standards and Technology
Mary F. Theofanos, National Institute of Standards and Technology
Yee-Yin Choong, National Institute of Standards and Technology
Sharon J. Laskowski, National Institute of Standards and Technology
Ying Sun, State University of New York System
Brian A. Weiss, National Institute of Standards and Technology
Craig I. Schlenoff, National Institute of Standards and Technology