Songsak Channarukul
University of Wisconsin–Milwaukee
Publications
Featured research published by Songsak Channarukul.
Natural Language Engineering | 2003
Susan Weber McRoy; Songsak Channarukul; Syed S. Ali
We present an Augmented Template-Based approach to text realization that addresses the requirements of real-time, interactive systems such as dialog systems and intelligent tutoring systems. Template-based approaches are easier to implement and use than traditional approaches to text realization, and they can generate texts more quickly. However, traditional template-based approaches with rigid templates are inflexible and difficult to reuse. Our approach augments them with several types of declarative control expressions and an attribute grammar-based mechanism for processing missing or inconsistent slot fillers. Augmented templates can therefore be made more general than traditional ones, yielding templates that are more flexible and reusable across applications.
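The abstract above describes templates whose slots carry declarative control information and defaults for missing fillers. The following is a minimal sketch of that idea, assuming a hypothetical template format with `optional` and `default` keys; it is an illustration of the technique, not YAG's actual API.

```python
# A minimal sketch of template-based realization with declarative
# control and defaults for missing slot fillers. All names here
# (realize, the "optional"/"default" keys) are hypothetical.

def realize(template, slots):
    """Fill a template, applying per-slot defaults when a filler is missing."""
    out = []
    for part in template:
        if isinstance(part, str):           # literal text
            out.append(part)
        else:                               # a slot specification
            value = slots.get(part["slot"], part.get("default"))
            if value is None:
                if part.get("optional"):    # declarative control: skip slot
                    continue
                raise ValueError(f"missing required slot: {part['slot']}")
            out.append(str(value))
    return " ".join(out)

# A rigid template would hard-code every word; here, control keys make
# the same template reusable with partial input.
greeting = [
    {"slot": "title", "optional": True},
    {"slot": "name"},
    "scored",
    {"slot": "score", "default": "an unknown number of"},
    "points.",
]

print(realize(greeting, {"name": "Alice", "score": 42}))
# -> "Alice scored 42 points."
print(realize(greeting, {"title": "Dr.", "name": "Bob"}))
# -> "Dr. Bob scored an unknown number of points."
```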
International Conference on Natural Language Generation | 2000
Songsak Channarukul; Susan Weber McRoy; Syed S. Ali
We present a new approach to enriching under-specified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End-applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden on the application. Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text-realizer, YAG.
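To make the propagation idea concrete, here is a minimal sketch of attribute-grammar-style enrichment of an underspecified tree, in which features stated once flow down to distant dependents and fill only the gaps. The node layout and rule format are hypothetical assumptions, not the paper's actual representation.

```python
# A minimal sketch of enriching an underspecified realization tree
# before generation: inherited features propagate downward, filling
# only missing values, so application input can stay compact.

def enrich(node, inherited=None):
    """Propagate inherited features down the tree, filling gaps only."""
    inherited = dict(inherited or {})
    feats = node.setdefault("feats", {})
    for key, value in inherited.items():
        feats.setdefault(key, value)        # fill only if missing
    # Features set on this node flow to its children, letting a value
    # stated once (e.g. number on the clause) reach a distant
    # dependent (e.g. a reflexive pronoun).
    for child in node.get("children", []):
        enrich(child, feats)
    return node

sentence = {
    "cat": "clause",
    "feats": {"number": "plural", "person": 3},
    "children": [
        {"cat": "np", "feats": {"head": "dogs"}},
        {"cat": "vp", "children": [{"cat": "verb", "feats": {"head": "see"}}]},
        {"cat": "reflexive"},   # no features supplied by the application
    ],
}

enrich(sentence)
print(sentence["children"][2]["feats"])
# -> {'number': 'plural', 'person': 3}: enough to realize "themselves"
```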
Intelligence | 1999
Susan Weber McRoy; Syed S. Ali; Angelo Restificar; Songsak Channarukul
We overview our recent work in specifying and building intelligent dialog systems that collaborate with users on a task. As part of this work, we have specified and built systems for: giving medical students an opportunity to practice their decision-making skills in English (B2); performing template-based natural language generation (YAG); detecting and rebutting arguments (ARGUER); recognizing and repairing misunderstandings (RRM); and assessing and augmenting patients' health knowledge (PEAS). All of these systems make use of rich models of dialog for human-computer communication.
North American Chapter of the Association for Computational Linguistics | 2003
Songsak Channarukul; Susan Weber McRoy; Syed S. Ali
This paper describes DOGHED (Dialog Output Generator for HEterogeneous Devices), a multimodal generation component, part of a dialog system, that adapts multimodal content based on user preferences and the user's current device. Existing dialog systems focus on generating output for a single device, which may not be suitable when users access the system from different devices. Multimedia presentation systems can be built that support several device types; however, most content presentation and layout is done offline and defined at the document level.
International Conference on Natural Language Generation | 2000
Susan Weber McRoy; Songsak Channarukul; Syed S. Ali
YAG (Yet Another Generator) is a real-time, general-purpose, template-based generation system that enables interactive applications to adapt natural language output to the interactive context without requiring developers to write all possible output strings ahead of time or to embed extensive knowledge of the grammar of the target language in the application. Currently, designers of interactive systems who might wish to include dynamically generated text face a number of barriers; for example, designers must decide: (1) How hard will it be to link the application to the generator? (2) Will the generator be fast enough? (3) How much linguistic information will the application need to provide in order to get reasonable quality output? (4) How much effort will be required to write a generation grammar that covers all the potential outputs of the application? The design and implementation of YAG is intended to address each of these concerns. In particular, YAG offers the following benefits to applications and application designers. Support for underspecified inputs: YAG supports knowledge-based systems by accepting two types of inputs: applications can either provide a feature structure (a set of feature-value pairs) or a syntactically underspecified semantic structure that YAG will map onto a feature-based representation for realization. YAG also provides an opportunity for an application to add syntactic constraints, such as whether to express a proposition as a question rather than a statement, as a noun phrase rather than as a sentence, or as a pronoun rather than a full noun phrase.
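As a concrete illustration of the two input styles described above, the sketch below shows a hand-written feature structure alongside an underspecified proposition plus syntactic constraints that a hypothetical mapping function turns into a realizable structure. The dictionaries and the function are assumptions for illustration, not YAG's actual input syntax.

```python
# Style 1: the application supplies a feature structure directly
# (a set of feature-value pairs; the keys here are invented).
feature_structure = {
    "template": "clause",
    "agent": {"template": "np", "head": "nurse", "def": True},
    "verb": "check",
    "patient": {"template": "pronoun", "person": 2},
    "mood": "declarative",
}

# Style 2: the application supplies a bare proposition and constraints;
# the generator maps them onto a feature structure like the one above.
proposition = ("check", {"agent": "nurse", "patient": "hearer"})
constraints = {"mood": "interrogative", "form": "sentence"}

def to_feature_structure(prop, constraints):
    """Hypothetical mapping from a proposition to a realizable structure."""
    pred, roles = prop
    fs = {
        "template": "clause",
        "verb": pred,
        "mood": constraints.get("mood", "declarative"),
    }
    for role, filler in roles.items():
        fs[role] = {"template": "np", "head": filler}
    return fs

print(to_feature_structure(proposition, constraints))
# The same proposition can surface as a question, a statement, or a
# noun phrase, depending only on the constraints supplied.
```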
International Conference on Multimodal Interfaces | 2004
Songsak Channarukul; Susan Weber McRoy; Syed S. Ali
We are interested in applying and extending existing frameworks for combining output modalities to adapt multimodal content on heterogeneous devices based on user and device models. In this paper, we present Multiface, a multimodal dialog system that allows users to interact using different devices such as desktop computers, PDAs, and mobile phones. The presented content and its modality are customized to the individual user and the device they are using.
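The kind of adaptation described here can be pictured as a selection rule driven by a device model combined with a user preference. The sketch below uses an invented capability table and rule; it illustrates the general idea under those assumptions, not Multiface's actual implementation.

```python
# A minimal sketch of selecting output modalities from a device model
# and a user preference. The capability table and the selection rule
# are invented for illustration.

DEVICE_MODELS = {
    "desktop": {"screen": "large", "audio": True, "video": True},
    "pda":     {"screen": "small", "audio": True, "video": False},
    "phone":   {"screen": "tiny",  "audio": True, "video": False},
}

def select_modalities(device, prefers_text=False):
    """Pick modalities the device supports, respecting a user preference."""
    caps = DEVICE_MODELS[device]
    modalities = ["text"]                    # text assumed always available
    if caps["video"] and not prefers_text:
        modalities.append("video")
    elif caps["audio"] and caps["screen"] != "large":
        modalities.append("audio")           # fall back to speech on small screens
    return modalities

print(select_modalities("desktop"))   # -> ['text', 'video']
print(select_modalities("phone"))     # -> ['text', 'audio']
```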
International Conference on Multimodal Interfaces | 2004
Songsak Channarukul
Dialog systems that adapt appropriately to different user needs and preferences have been shown to achieve higher levels of user satisfaction [4]. However, it is also important that dialog systems be able to adapt to the user's computing environment, because people access computer systems using different kinds of devices, such as desktop computers, personal digital assistants, and cellular telephones. Each of these devices has a distinct set of physical capabilities, as well as a distinct set of functions for which it is typically used. Existing research on adaptation in both hypermedia and dialog systems has focused on how to customize content based on user models [2, 4] and interaction history. Some researchers have also investigated device-centered adaptations, ranging from low-level adaptations such as conversion of multimedia objects [6] (e.g., video to images, audio to text, image size reduction) to higher-level adaptations based on multimedia document models [1] and frameworks for combining output modalities [3, 5]. However, to my knowledge, no work has been done on integrating and coordinating both types of adaptation interdependently. The primary problem I would like to address in this thesis is how multimodal dialog systems can adapt their content and style of interaction, taking the user, the device, and the dependency between them into account. The two main aspects of adaptability that my thesis considers are: (1) adaptability in content presentation and communication, and (2) adaptability in the computational strategies used to achieve the system's and users' goals. Besides general user-modeling questions, such as how to acquire information about the user and construct a user model, this thesis also considers issues in device modeling: (1) How can the system employ user and device models to adapt the content and determine the right combination of modalities effectively? (2) How can the system determine the combination of multimodal content that best suits the device? (3) How can one model the characteristics and constraints of devices? (4) Is it possible to generalize device models based on modalities rather than on their typical categories or physical appearance?
Archive | 2000
Susan Weber McRoy; Songsak Channarukul; Syed S. Ali
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2001
Songsak Channarukul; Susan Weber McRoy; Syed S. Ali
Intelligence | 2001
Susan Weber McRoy; Songsak Channarukul; Syed S. Ali