Piyawadee Noi Sukaviriya
Georgia Institute of Technology
Publications
Featured research published by Piyawadee Noi Sukaviriya.
Proceedings of the IFIP TC2/WG2.7 Working Conference on Engineering for Human-Computer Interaction | 1995
Pedro A. Szekely; Piyawadee Noi Sukaviriya; Pablo Castells; Jeyakumar Muthukumarasamy; Ewald Salcher
Currently, building a user interface involves creating a large procedural program. Model-based programming provides an alternative paradigm. In the model-based paradigm, developers create a declarative model that describes the tasks that users are expected to accomplish with a system, the functional capabilities of a system, the style and requirements of the interface, the characteristics and preferences of the users, and the I/O techniques supported by the delivery platform. Based on the model, a much smaller procedural program then determines the behavior of the system.
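The model-based paradigm described above can be sketched as follows. This is an illustrative toy, not code from the paper: the model fields and the `build_menu` interpreter are hypothetical, but they show the split between a declarative model and a small generic procedural program that derives interface behavior from it.

```python
# Hypothetical sketch of the model-based idea: the interface is described
# declaratively as data, and a small generic interpreter derives behavior
# from it instead of hard-coding each widget.

MODEL = {
    "tasks": ["open_document", "save_document"],
    "capabilities": {                 # functional capabilities of the system
        "open_document": {"label": "Open...", "needs_selection": False},
        "save_document": {"label": "Save",    "needs_selection": True},
    },
    "style": {"menu": "File"},        # style/requirements of the interface
}

def build_menu(model, has_selection):
    """Generic interpreter: derive an enabled/disabled menu from the model."""
    items = []
    for task in model["tasks"]:
        cap = model["capabilities"][task]
        enabled = has_selection or not cap["needs_selection"]
        items.append((cap["label"], enabled))
    return model["style"]["menu"], items
```

Changing the declarative model (say, adding a task) changes the generated menu with no edits to the interpreter, which is the point of the paradigm.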
Human Factors in Computing Systems | 1993
Piyawadee Noi Sukaviriya; James D. Foley; T. W. Griffith
Several obstacles in the user interface design process distract a developer from designing a good user interface. One problem is the lack of an application model to keep the designer in perspective with the application. Another is having to deal with massive user interface programming to achieve a desired interface and to provide users with correct help information on the interface. In this paper, we discuss an application model which captures information about the application and relates it to specifications of a desired interface. The application model is then used to control the dialogues at runtime and can be used by a help component to automatically generate animated and textual help. Specification changes in the application model will automatically result in behavioral changes in the interface.
User Interface Software and Technology | 1990
Piyawadee Noi Sukaviriya; James D. Foley
Animated help can assist users in understanding how to use computer application interfaces. An animated help facility integrated into a runtime user interface support tool requires information pertaining to user interfaces, the applications being supported, the relationships between interface and application, and precise detailed information sufficient for accurate illustrations of interface components. This paper presents a knowledge model developed to support such an animated help facility. Continuing our research efforts towards automatic generation of user interfaces from specifications, a framework has been developed to utilize one knowledge model to automatically generate animated help at runtime and to assist the management of user interfaces. Cartoonist is a system implemented based on the framework. Without the help facility, Cartoonist functions as a knowledge-driven user interface. With the help facility added to Cartoonist’s user interface architecture, we demonstrate how animation of a user’s actions can be simulated by superimposing animation on the actual interface. The animation sequences imitate user actions, and Cartoonist’s user interface dialogue controller responds to animation “inputs” exactly as if they were from a user. The user interface runtime information managed by Cartoonist is shared with the help facility to furnish animation scenarios and to vary scenarios to suit the current user context. The Animator and the UI controller are modeled so that the Animator incorporates what is essential to the animation task and the UI controller assumes responsibility for the rest of the interactions, an approach which maintains consistency between help animation and the actual user interface.
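The key architectural point above is that the dialogue controller cannot tell animation “inputs” from real user input. A minimal sketch of that idea, with all class and event names hypothetical:

```python
# Sketch of the Cartoonist replay idea: both the live user path and the
# Animator deliver identical event objects to the controller's single
# entry point, so help animation exercises the real interface logic.

class DialogueController:
    def __init__(self):
        self.log = []

    def handle(self, event):              # single entry point for ALL input
        self.log.append(event)            # (a real controller would dispatch)

class Animator:
    def __init__(self, controller):
        self.controller = controller

    def play(self, script):
        for event in script:
            # a real system would also draw the moving cursor here,
            # superimposed on the actual interface
            self.controller.handle(event)  # same call path as a live user

ctrl = DialogueController()
Animator(ctrl).play(["move:(10,20)", "click:OK"])
```

Because the controller's state advances through the same code path either way, the animated scenario stays consistent with what the interface would actually do.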
Human Factors in Computing Systems | 1994
Michael D. Byrne; D. Wood; Piyawadee Noi Sukaviriya; James D. Foley; David E. Kieras
One method for user interface analysis that has proven successful is formal interface analysis, such as GOMS-based analysis. Such methods are often criticized for being difficult to learn, or at the very least an additional burden for the system designer. However, if the process of constructing and using formal models could be automated as part of the interface design environment, such models could be of even greater value. This paper describes an early version of such a system, called USAGE (the UIDE System for semi-Automated GOMS Evaluation). Given the application model necessary to drive the UIDE system, USAGE generates an NGOMSL model of the interface which can be “run” on a typical set of user tasks and provide execution and learning time estimates.
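The estimates USAGE produces come down to counting operators and method statements in the NGOMSL model. A rough illustrative sketch follows; the constants are placeholders for exposition, not the calibrated values from the NGOMSL literature, and the function names are not from USAGE itself:

```python
# Illustrative NGOMSL-style cost model: execution time is a per-statement
# cost plus primitive operator times; learning time scales with the number
# of new statements the user must learn.  All constants are assumptions.

STEP_TIME = 0.1                                     # sec per statement executed
OPERATOR_TIMES = {"K": 0.28, "P": 1.1, "H": 0.4}    # keystroke, point, home
LEARN_PER_STATEMENT = 17.0                          # sec to learn a statement

def execution_time(operators, statements_executed):
    """Estimate task execution time from operator trace + statement count."""
    return statements_executed * STEP_TIME + sum(OPERATOR_TIMES[o] for o in operators)

def learning_time(new_statements):
    """Estimate time to learn the method's new statements."""
    return new_statements * LEARN_PER_STATEMENT
```

Running the model on a benchmark task then means walking the generated methods, accumulating these costs.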
DSV-IS | 1995
James D. Foley; Piyawadee Noi Sukaviriya
UIDE, the User Interface Design Environment, was conceived in 1987 as a next-generation user interface design and implementation tool to embed application semantics into the earlier generation of User Interface Management Systems (UIMSs), which focused more on the look and feel of an interface. UIDE models an application’s data objects, actions, and attributes, and the pre- and post-conditions associated with the actions. The model is then used for a variety of design-time and run-time services, such as to: automatically create windows, menus, and control panels; critique the design for consistency and completeness; control execution of the application; enable and disable menu items and other controls and make windows visible and invisible; generate context-sensitive animated help, in which a mouse moves on the screen to show the user how to accomplish a task; generate text help explaining why commands are disabled and what must be done to enable them; and support a set of transformations on the model which change certain user interface characteristics in a correctness-preserving way.
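The pre- and postcondition machinery above admits a compact sketch. All names here are hypothetical; the point is that enabling a control is a precondition check over state facts, and the same unmet preconditions can feed generated “why is this disabled” text help:

```python
# Minimal sketch of UIDE-style actions: each action has preconditions over
# a set of state facts, plus postconditions that add/remove facts.

ACTIONS = {
    "select_object": {"pre": {"object_exists"},
                      "post_add": {"object_selected"}, "post_del": set()},
    "delete_object": {"pre": {"object_selected"},
                      "post_add": set(), "post_del": {"object_selected"}},
}

def is_enabled(action, state):
    """A control is enabled iff its action's preconditions all hold."""
    return ACTIONS[action]["pre"] <= state

def why_disabled(action, state):
    """Unmet preconditions -- raw material for generated text help."""
    return ACTIONS[action]["pre"] - state

def execute(action, state):
    """Apply the action's postconditions to produce the new state."""
    assert is_enabled(action, state)
    return (state | ACTIONS[action]["post_add"]) - ACTIONS[action]["post_del"]
```

Because the same model drives enabling, execution, and help generation, the three can never drift out of sync.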
Intelligent User Interfaces | 1993
Robert Neches; James D. Foley; Pedro A. Szekely; Piyawadee Noi Sukaviriya; Ping Luo; Srdjan Kovacevic; Scott E. Hudson
We describe MASTERMIND, a step toward our vision of a knowledge-based design-time and run-time environment where human-computer interface development is centered around an all-encompassing design model. The MASTERMIND approach is intended to provide integration and continuity across the entire life cycle of the user interface. In addition, it facilitates higher quality work within each phase of the life cycle. MASTERMIND is an open framework, in which the design knowledge base allows multiple tools to come into play and makes knowledge created by each tool accessible to the others.
International Conference on Computer Graphics and Interactive Techniques | 1988
Piyawadee Noi Sukaviriya
Help provided as traditional text descriptions has become incompatible with graphical interfaces. Animation suggests a better association between help and a graphical interface. This paper describes a prototype system implemented to demonstrate the use of dynamic scenarios as help. A scenario animates the execution of a task as a sequence of steps in the actual interface and work context. Each scenario is dynamically generated depending on the current work context of the user. The system reasons from the user’s request for help, as well as from the context, what and how much to animate. In addition to the animation driving mechanism, construction of animated help requires knowledge about application semantics, user interface semantics, user interface syntax, and application context. The application semantics determines the steps needed to satisfy the help request. The user interface semantics determines whether the current state of the graphical interface will support the appropriate animated help scenarios. The user interface syntax gives detailed information on how each step will actually be performed. Preconditions are used in both application and user interface semantics for reasoning in help construction. Restoring of context is performed using help session history data to return to the original work context after an animation session. The implemented example uses a directory tree program where the graphical interface is kept simple. In future research the concept will be applied to more complicated applications.
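The precondition-based reasoning above can be sketched as simple back-chaining: if the requested action's precondition is unmet in the current work context, first animate an action whose postcondition establishes it. The action names and data layout below are illustrative, not from the prototype:

```python
# Sketch of precondition-driven help planning: derive the sequence of
# steps to animate by back-chaining from the action the user asked about.

ACTIONS = {
    "select_file": {"pre": set(),              "adds": {"file_selected"}},
    "delete_file": {"pre": {"file_selected"},  "adds": set()},
}

def plan_help(goal_action, state):
    """Return the ordered list of actions to animate for this help request."""
    steps = []
    for fact in ACTIONS[goal_action]["pre"] - state:
        # find an action whose postcondition supplies the missing fact
        provider = next(a for a, d in ACTIONS.items() if fact in d["adds"])
        steps += plan_help(provider, state)      # satisfy its preconditions too
        state = state | ACTIONS[provider]["adds"]
    return steps + [goal_action]
```

Because the plan is recomputed from the current context, a user who has already selected a file gets a shorter scenario than one who has not.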
Intelligent User Interfaces | 1993
Piyawadee Noi Sukaviriya; James D. Foley
Developing an adaptive interface requires a user interface that can be adapted, a user model, and an adaptation strategy. Research on adaptive interfaces in the past suffered from a lack of supporting tools which allow an interface to be easily created and modified. Adding adaptivity to a user interface has, so far, not been supported by any user interface systems or environments. In this paper, we present an overview of a knowledge base model of the User Interface Design Environment (UIDE). UIDE uses the knowledge of an application to support the run-time execution of the application’s interface and provides various kinds of automatic help. We present how the knowledge model can be used as a basic construct of a user model. Finally, we present adaptive interface and adaptive help behaviors that can be added to the current UIDE architecture utilizing the user model. These behaviors are options from which an application designer can choose for an application interface.
User Interface Software and Technology | 1993
Krishna Bharat; Piyawadee Noi Sukaviriya
Animated demonstration systems such as MacroMind Director [6] are becoming popular. Our notion of animation is more restricted. We define user interface animation as the process of emulating the interaction of a user with the interface. The interaction should be real, in the sense that it should engage the actual application. Such systems are powerful because of their ability to invoke actions, and expressive because they can be used to demonstrate interaction techniques. The effect of the presentation may be enhanced by displaying the behavior of input devices audio-visually.
Human Factors in Computing Systems | 1992
Piyawadee Noi Sukaviriya; Ellen Isaacs; Krishna Bharat
On-line help systems have not paralleled recent advances in user interface technology. In particular, traditional textual help does not support visualization of the interaction processes needed to complete tasks, especially in graphical interfaces. In this demonstration, we present an experimental prototype which is capable of presenting help information in text, audio, static graphics, video, and context-sensitive animation. The prototype is used in a study on how multimedia technology enhances user performance.