
Publication


Featured research published by Daniel L. Chester.


Diagrams '10: Proceedings of the 6th International Conference on Diagrammatic Representation and Inference | 2010

Recognizing the intended message of line graphs

Peng Wu; Sandra Carberry; Stephanie Elzer; Daniel L. Chester

Information graphics (line graphs, bar charts, etc.) that appear in popular media, such as newspapers and magazines, generally have a message that they are intended to convey. We contend that this message captures the high-level knowledge conveyed by the graphic and can serve as a brief summary of the graphic's content. This paper presents a system for recognizing the intended message of a line graph. Our methodology relies on (1) segmenting the line graph into visually distinguishable trends, which are used to suggest possible messages, and (2) extracting communicative signals from the graphic and using them as evidence in a Bayesian network to identify the best hypothesis about the graphic's intended message. Our system has been implemented and its performance has been evaluated on a corpus of line graphs.
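The second step can be illustrated with a toy evidence-combination scorer. Everything below (the candidate messages, signal names, and probabilities) is invented for illustration; the paper's actual Bayesian network structure and parameters are not specified in this abstract:

```python
# Illustrative sketch only: combines independent "communicative signals"
# as evidence for candidate message hypotheses, naive-Bayes style.
# Messages, signals, and probabilities are invented, not from the paper.

def best_hypothesis(priors, likelihoods, observed_signals):
    """Return the candidate message with the highest posterior score.

    priors: {message: P(message)}
    likelihoods: {message: {signal: P(signal | message)}}
    observed_signals: iterable of signal names seen in the graphic
    """
    scores = {}
    for msg, prior in priors.items():
        score = prior
        for sig in observed_signals:
            # Unseen signals get a small default likelihood.
            score *= likelihoods[msg].get(sig, 0.01)
        scores[msg] = score
    return max(scores, key=scores.get)

priors = {"rising-trend": 0.5, "change-trend": 0.5}
likelihoods = {
    "rising-trend": {"single-segment": 0.8, "annotated-endpoint": 0.6},
    "change-trend": {"single-segment": 0.1, "annotated-endpoint": 0.4},
}
print(best_hypothesis(priors, likelihoods, ["single-segment"]))
# prints rising-trend
```

A real Bayesian network would model dependencies between signals rather than multiplying them independently; the sketch only conveys the evidence-combination idea.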


International Symposium on Methodologies for Intelligent Systems | 2005

Getting computers to see information graphics so users do not have to

Daniel L. Chester; Stephanie Elzer

Information graphics such as bar, line and pie charts appear frequently in electronic media and often contain information that is not found elsewhere in documents. Unfortunately, sight-impaired users have difficulty accessing and assimilating information graphics. Our goal is an interactive natural language system that provides effective access to information graphics for sight-impaired individuals. This paper describes how image processing has been applied to transform an information graphic into an XML representation that captures all aspects of the graphic that might be relevant to extracting knowledge from it. It discusses the problems that were encountered in analyzing and categorizing components of the graphic, and the algorithms and heuristics that were successfully applied. The resulting XML representation serves as input to an evidential reasoning component that hypothesizes the message that the graphic was intended to convey.
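As a rough sketch of what such an XML representation might look like, here is a toy serializer; the element and attribute names are assumptions for illustration, not the paper's actual schema:

```python
# Illustrative sketch: serializing extracted components of a bar chart
# into an XML representation. Element and attribute names are invented.
import xml.etree.ElementTree as ET

def chart_to_xml(title, bars):
    """Build an XML string for a bar chart from (label, height) pairs."""
    root = ET.Element("graphic", type="bar-chart")
    ET.SubElement(root, "caption").text = title
    for label, height in bars:
        ET.SubElement(root, "bar", label=label, height=str(height))
    return ET.tostring(root, encoding="unicode")

xml = chart_to_xml("CD sales by year", [("2003", 120), ("2004", 95)])
print(xml)
```

The real system must first recover bars, axes, and text from pixels via image processing; the sketch starts where that recovery ends.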


Meeting of the Association for Computational Linguistics | 2005

Exploring and Exploiting the Limited Utility of Captions in Recognizing Intention in Information Graphics

Stephanie Elzer; Sandra Carberry; Daniel L. Chester; Seniz Demir; Nancy L. Green; Ingrid Zukerman; Keith Trnka

This paper presents a corpus study that explores the extent to which captions contribute to recognizing the intended message of an information graphic. It then presents an implemented graphic interpretation system that takes into account a variety of communicative signals, and an evaluation study showing that evidence obtained from shallow processing of the graphic's caption has a significant impact on the system's success. This work is part of a larger project whose goal is to provide sight-impaired users with effective access to information graphics.
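Shallow processing of a caption can be pictured as simple cue-word matching against candidate message categories; the categories and word lists below are invented for illustration:

```python
# Illustrative sketch of "shallow" caption processing: match words in
# the caption against cue words associated with message categories.
# The categories and cue-word lists are invented, not from the paper.

CUE_WORDS = {
    "rising-trend": {"rises", "soars", "climbs", "growth"},
    "falling-trend": {"falls", "drops", "declines", "slump"},
}

def caption_evidence(caption):
    """Return message categories whose cue words appear in the caption."""
    words = set(caption.lower().replace(",", "").split())
    return [msg for msg, cues in CUE_WORDS.items() if words & cues]

print(caption_evidence("Stock soars"))
# prints ['rising-trend']
```

In the full system such caption evidence is only one signal among several, combined with evidence drawn from the graphic itself.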


International Symposium on Intelligent Control | 1990

Dynamic fault detection and diagnosis using neural networks

R. Li; J.H. Olson; Daniel L. Chester

A neural network methodology for dynamic fault diagnosis is proposed. Moving windows cut the dynamic data into overlapping pieces. Then the segmented data are presented to the networks for training and generalization purposes. Some unique features associated with this methodology, namely the length of the moving window, the sampling rate, and the construction of the training data set, are studied. The proposed method has been successfully applied to a binary distillation process and shows superiority over networks trained on steady-state data.
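The moving-window segmentation can be sketched in a few lines; the window length and step below are arbitrary choices, whereas the paper studies how such parameters should be set:

```python
# Illustrative sketch of the moving-window idea: cut a dynamic
# (time-series) signal into overlapping fixed-length segments that
# can be fed to a neural network. Length and step are assumptions.

def moving_windows(signal, length, step):
    """Return overlapping windows of `length` samples, advancing by `step`."""
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, step)]

data = [0.1, 0.2, 0.4, 0.8, 1.0, 0.9]
print(moving_windows(data, length=4, step=1))
# prints [[0.1, 0.2, 0.4, 0.8], [0.2, 0.4, 0.8, 1.0], [0.4, 0.8, 1.0, 0.9]]
```

With step smaller than length, consecutive windows overlap, so transient fault signatures that straddle a window boundary still appear whole in some window.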


KSII Transactions on Internet and Information Systems | 2012

Access to multimodal articles for individuals with sight impairments

Sandra Carberry; Stephanie Elzer Schwartz; Kathleen F. McCoy; Seniz Demir; Peng Wu; Charles F. Greenbacker; Daniel L. Chester; Edward J. Schwartz; David Oliver; Priscilla S. Moraes

Although intelligent interactive systems have been the focus of many research efforts, very few have addressed systems for individuals with disabilities. This article presents our methodology for an intelligent interactive system that provides individuals with sight impairments with access to the content of information graphics (such as bar charts and line graphs) in popular media. The article describes the methodology underlying the system's intelligent behavior, its interface for interacting with users, examples processed by the implemented system, and evaluation studies both of the methodology and the effectiveness of the overall system. This research advances universal access to electronic documents.


Diagrams '12: Proceedings of the 7th International Conference on Diagrammatic Representation and Inference | 2012

Automatically recognizing intended messages in grouped bar charts

Richard Burns; Sandra Carberry; Stephanie Elzer; Daniel L. Chester

Information graphics (bar charts, line graphs, grouped bar charts, etc.) often appear in popular media such as newspapers and magazines. In most cases, the information graphic is intended to convey a high-level message; this message plays a role in understanding the document but is seldom repeated in the document's text. This paper presents our methodology for recognizing the intended message of a grouped bar chart. We discuss the types of messages communicated in grouped bar charts, the communicative signals that serve as evidence for the message, and the design and evaluation of our implemented system.


SoutheastCon | 1996

Gesture-speech based HMI for a rehabilitation robot

Shoupu Chen; Zunaid Kazi; Matthew Beitler; Marcos Salganicoff; Daniel L. Chester; Richard A. Foulds

One of the most challenging problems in rehabilitation robotics is the design of an efficient human-machine interface (HMI) allowing the user with a disability considerable freedom and flexibility. A multimodal user direction approach combining command and control methods is a very promising way to achieve this goal. This multimodal design is motivated by the idea of minimizing the user's burden of operating a robot manipulator while utilizing the user's intelligence and available mobilities. With this design, the user with a physical disability simply uses gesture (pointing with a laser pointer) to indicate a location or a desired object and uses speech to activate the system. Recognition of the spoken input is also used to supplant the need for general-purpose object recognition and to perform the critical function of disambiguation. The robot system is designed to operate in an unstructured environment containing objects that are reasonably predictable. A novel reactive planning mechanism, of which the user is an active integral component, in conjunction with a stereo-vision system and an object-oriented knowledge base, provides the robot system with the 3D information of the surrounding world as well as the motion strategies.


Lecture Notes in Computer Science | 1998

Speech and Gesture Mediated Intelligent Teleoperation

Zunaid Kazi; Shoupu Chen; Matthew Beitler; Daniel L. Chester; Richard A. Foulds

The Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project addresses the issue of telemanipulation of everyday objects in an unstructured environment. Telerobot control by individuals with physical limitations poses a set of challenging problems that need to be resolved. MUSIIC addresses these problems by integrating a speech and gesture driven human-machine interface with a knowledge driven planner and a 3-D vision system. The resultant system offers the opportunity to study unstructured world telemanipulation by people with physical disabilities and provides the means for generalizing to effective manipulation techniques for real-world unstructured tasks in domains where direct physical control may be limited due to time delay, lack of sensation, and coordination.


The New Review of Hypermedia and Multimedia | 2010

Interactive SIGHT: textual access to simple bar charts

Seniz Demir; David Oliver; Edward J. Schwartz; Stephanie Elzer; Sandra Carberry; Kathleen F. McCoy; Daniel L. Chester

Information graphics, such as bar charts and line graphs, are an important component of many articles from popular media. The majority of such graphics have an intention (a high-level message) to communicate to the graph viewer. Since the intended message of a graphic is often not repeated in the accompanying text, graphics together with the textual segments contribute to the overall purpose of an article and cannot be ignored. Unfortunately, these visual displays are provided in a format which is not readily accessible to everyone. For example, individuals with sight impairments who use screen readers to listen to documents have limited access to the graphics. This article presents a new accessibility tool, the Interactive SIGHT (Summarizing Information GrapHics Textually) system, that is intended to enable visually impaired users to access the knowledge that one would gain from viewing information graphics found on the web. The current system, which is implemented as a browser extension that works on simple bar charts, can be invoked by a user via a keystroke combination while navigating the web. Once launched, Interactive SIGHT first provides a brief summary that conveys the underlying intention of a bar chart along with the chart's most significant and salient features, and then produces history-aware follow-up responses to provide further information about the chart upon request from the user. We present two user studies that were conducted with sighted and visually impaired users to determine how effective the initial summary and follow-up responses are in conveying the informational content of bar charts, and to evaluate how easy it is to use the system interface. The evaluation results are promising and indicate that the system responses are well-structured and enable visually impaired users to answer key questions about bar charts in an easy-to-use manner.
Post-experimental interviews revealed that visually impaired participants were very satisfied with the system offering different options to access the content of a chart to meet their specific needs, and that they would use Interactive SIGHT if it were publicly available so as not to have to ignore graphics on the web. Being a language-based assistive technology designed to compensate for the lack of sight, our work paves the way for a stronger acceptance of natural language interfaces to graph interpretation that we believe will be of great benefit to the visually impaired community.
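A history-aware follow-up dialogue can be caricatured in a few lines; the request names, response wording, and chart data below are invented, and the real Interactive SIGHT interface is far richer:

```python
# Illustrative sketch of history-aware follow-up responses: the system
# remembers which bars it has already described and phrases repeat
# answers accordingly. Requests and data are invented for illustration.

class FollowUp:
    def __init__(self, bars):
        self.bars = dict(bars)       # label -> value
        self.mentioned = set()       # labels already described

    def answer(self, request):
        if request == "maximum":
            label = max(self.bars, key=self.bars.get)
        elif request == "minimum":
            label = min(self.bars, key=self.bars.get)
        else:
            return "Sorry, I cannot answer that."
        prefix = "As mentioned, " if label in self.mentioned else ""
        self.mentioned.add(label)
        return f"{prefix}{label} has value {self.bars[label]}."

s = FollowUp([("2003", 120), ("2004", 95)])
print(s.answer("maximum"))   # prints 2003 has value 120.
print(s.answer("maximum"))   # prints As mentioned, 2003 has value 120.
```

Tracking what has already been said is what makes the responses "history-aware": the second request for the same fact is acknowledged rather than repeated verbatim.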


Image and Vision Computing | 1998

Color and three-dimensional vision-based assistive telemanipulation

Shoupu Chen; Zunaid Kazi; Richard A. Foulds; Daniel L. Chester

This paper presents a general-purpose assistive tele-robot system that is designed to manipulate objects in a three-dimensional environment by using color and stereo vision. The incorporated vision system allows the user to operate the robot remotely simply by gesturing (pointing with a laser pointer). In this vision-based tele-manipulation system, the user is in the control loop actively interacting with the reactive robot motion planner through a multimodal interface (vision and speech). Because of this unique feature, the robot system has been simplified and relieved of performing the complex object recognition tasks that are often required by autonomous systems.

Collaboration


Daniel L. Chester's top co-authors:

Stephanie Elzer, Millersville University of Pennsylvania
Shoupu Chen, University of Delaware
Zunaid Kazi, University of Delaware
Richard A. Foulds, New Jersey Institute of Technology
Seniz Demir, University of Delaware
Peng Wu, University of Delaware