Publication


Featured research published by Ephraim P. Glinert.


ACM Computing Surveys | 1996

Strategic directions in human-computer interaction

Brad A. Myers; James D. Hollan; Isabel F. Cruz; Steve Bryson; Dick C. A. Bulterman; Tiziana Catarci; Wayne Citrin; Ephraim P. Glinert; Jonathan Grudin; Yannis E. Ioannidis

Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer’s actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces. This report describes the historical and intellectual foundations of HCI and then summarizes selected strategic directions in human-computer interaction research. Previous important reports on HCI directions include the results of the 1991 [Sibert and Marchionini 1993] and 1994 [Strong 1994] NSF studies on HCI in general, and the 1994 NSF study on the World-Wide Web [Foley and Pitkow 1994].


Human Factors in Computing Systems | 1995

Improving GUI accessibility for people with low vision

Richard L. Kline; Ephraim P. Glinert

We present UnWindows V1, a set of tools designed to assist low vision users of X Windows in effectively accomplishing two mundane yet critical interaction tasks: selectively magnifying areas of the screen so that the contents can be seen comfortably, and keeping track of the location of the mouse pointer. We describe our software from both the end users' and implementors' points of view, with particular emphasis on issues related to screen magnification techniques. We conclude with details regarding software availability and plans for future extensions.


IEEE MultiMedia | 1996

Multimodal integration

Meera M. Blattner; Ephraim P. Glinert

Advances in multimedia, virtual reality, and immersive environments have expanded human-computer interaction beyond text and vision to include touch, gestures, voice, and 3D sound. Although well-developed single modalities for communication already exist, we do not really understand the general problem of designing integrated multimodal systems. We explore this issue and the diverse approaches to it, with emphasis on a generic platform to support multimodal interaction.


ACM SIGCAPH Computers and the Physically Handicapped | 1994

UnWindows 1.0: X Windows tools for low vision users

Richard L. Kline; Ephraim P. Glinert

UnWindows 1.0 is a collection of programs written for the X Window System designed to assist visually-impaired users who are not blind in working with a window-based workstation interface. The utilities are designed to provide assistance in two common tasks: locating the mouse pointer on the screen, and selectively magnifying portions of the screen. Mouse pointer location is given through both visual and aural feedback to the user. A modified version of twm, along with the border, changesounds, and coloreyes programs, accomplish this task. Screen magnification is accomplished with the dynamag program. The individual programs that make up UnWindows 1.0 are each described in their own sections below.
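
The selective magnification that dynamag provides can be pictured as copying a small square of pixels around a point of interest and scaling it up before display. The sketch below does this with nearest-neighbour sampling over a plain 2-D array; the function name and parameters are invented for illustration and are not UnWindows code.

    def magnify(pixels, cx, cy, radius, factor):
        """Return an enlarged copy of the square region centred on (cx, cy).

        pixels is a 2-D list indexed as pixels[y][x]; out-of-range samples are clamped.
        """
        h, w = len(pixels), len(pixels[0])
        size = 2 * radius * factor
        out = []
        for oy in range(size):
            row = []
            for ox in range(size):
                sx = min(max(cx - radius + ox // factor, 0), w - 1)
                sy = min(max(cy - radius + oy // factor, 0), h - 1)
                row.append(pixels[sy][sx])
            out.append(row)
        return out

    screen = [[(x + y) % 2 for x in range(8)] for y in range(8)]   # toy 8x8 "screen"
    zoomed = magnify(screen, cx=4, cy=4, radius=2, factor=3)       # 12x12 blow-up of a 4x4 patch
    print(len(zoomed), len(zoomed[0]))                             # -> 12 12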


Human Factors in Computing Systems | 1989

An experiment into the use of auditory cues to reduce visual workload

Megan L. Brown; Sandra L. Newsome; Ephraim P. Glinert

The potential utility of dividing the information flowing from computer to human among several sensory modalities is investigated by means of a rigorous experiment which compares the effectiveness of auditory and visual cues in the performance of a visual search task. The results indicate that a complex auditory cue can be used to replace cues traditionally presented in the visual modality. Implications for the design of multimodal workstations are discussed.


IEEE Symposium on Visual Languages | 1988

C^2: a mixed textual/graphical environment for C

Mark E. Kopache; Ephraim P. Glinert

A visual programming environment for a subset of the C language is described. The C^2 environment, as it is called, runs on a personal workstation with a high-resolution graphics display. Both conventional textual code entry and editing, and program composition by means of an experimental hybrid textual/graphical method, are supported and coexist side by side on the screen at all times. The built-in text editor incorporates selected Unix vi commands in conjunction with a C syntax interpreter. Hybrid textual/graphical program composition is facilitated by a BLOX-type environment in which graphical icons represent program structures and text in the icons represents user-supplied parameters attached to those structures. The two representations are coupled, so that modifications entered using either one automatically generate the appropriate update in the other. Although not all of the C language is yet supported, C^2 is not a toy system. Textual files that contain C programs serve as input and output. Graphical representations serve merely as internally generated aids to the programmer and are not stored between runs.
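
The coupling described above, where an edit in either the textual or the graphical view updates the other, can be illustrated with a small observer-style sketch. This is not the C^2 implementation; the classes and method names below are hypothetical.

    class ProgramModel:
        """Single underlying program; notifies every attached view of each change."""
        def __init__(self):
            self.structures = []     # e.g. ("while", "i < n")
            self.views = []

        def attach(self, view):
            self.views.append(view)

        def add_structure(self, kind, params, source=None):
            self.structures.append((kind, params))
            for view in self.views:
                if view is not source:              # don't echo back to the originating view
                    view.refresh(self.structures)

    class TextView:
        def refresh(self, structures):
            print("text :", "; ".join(f"{k} {p}" for k, p in structures))

    class IconView:
        def refresh(self, structures):
            print("icons:", " ".join(f"[{k}]" for k, _ in structures))

    model = ProgramModel()
    text_view, icon_view = TextView(), IconView()
    model.attach(text_view)
    model.attach(icon_view)
    model.add_structure("while", "i < n", source=icon_view)       # graphical edit -> text view updates
    model.add_structure("assign", "i = i + 1", source=text_view)  # textual edit -> icon view updates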


IEEE Symposium on Visual Languages | 1995

Online parsing of visual languages using adjacency grammars

Joaquim A. Jorge; Ephraim P. Glinert

Visual computing environments continue to grow in importance, yet fast, general parsing algorithms for visual languages remain elusive. In this paper, we present an incremental parsing algorithm for a broad class of visual languages which do not contain overlapping elements. Our algorithm is based on the concept of adjacency grammars, where adjacencies are defined so as to encompass both spatial and logical constraints. Our approach combines bottom-up and top-down methods to support incremental parsing of visual input, allowing for measurably efficient online parsing of diagram-like visual languages, with observed linear run-times for large visual sentences.
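
As a rough illustration of the adjacency-grammar idea, the toy sketch below repeatedly reduces pairs of picture elements whenever a production's spatial-adjacency predicate holds. It is a simplification under assumed data structures and productions, not the incremental algorithm from the paper.

    from dataclasses import dataclass

    @dataclass
    class Element:
        kind: str
        x: float
        y: float

    def right_of(a, b, gap=20):
        """Spatial adjacency test: b sits immediately to the right of a."""
        return abs(a.y - b.y) < 5 and 0 < b.x - a.x <= gap

    # Each production: (left kind, right kind, adjacency test, resulting kind).
    PRODUCTIONS = [
        ("box", "arrow", right_of, "box_arrow"),
        ("box_arrow", "box", right_of, "link"),
    ]

    def parse(elements):
        """Bottom-up: keep reducing adjacent pairs until no production applies."""
        items = list(elements)
        changed = True
        while changed:
            changed = False
            for left, right, adjacent, result in PRODUCTIONS:
                for a in items:
                    for b in items:
                        if a is not b and a.kind == left and b.kind == right and adjacent(a, b):
                            items.remove(a)
                            items.remove(b)
                            items.append(Element(result, b.x, b.y))  # reduced element takes b's place
                            changed = True
                            break
                    if changed:
                        break
                if changed:
                    break
        return items

    diagram = [Element("box", 0, 0), Element("arrow", 15, 0), Element("box", 30, 0)]
    print([e.kind for e in parse(diagram)])   # -> ['link']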


Computer Software and Applications Conference | 1992

Metawidgets: towards a theory of multimodal interface design

Meera M. Blattner; Ephraim P. Glinert; Joaquim A. Jorge; Gary R. Ormsby

The authors analyze two intertwined and fundamental issues concerning computer-to-human communication in the multimodal interface: the interplay between sound and graphics, and the role of object persistence. These observations lead to metawidgets, abstract entities capable of manifesting themselves to users as image, as sound, or as various combinations and/or sequences of the two media. The authors show examples of metawidgets in action and discuss mechanisms for choosing among alternative media for metawidget instantiation. Two experimental microworlds implemented to explore these ideas are described.
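
One way to read the metawidget idea is as a presentation-agnostic object that chooses among visual and auditory renderings when it is instantiated. The sketch below is a loose interpretation with invented names, not the authors' design.

    class Metawidget:
        """Presentation-agnostic widget that can manifest as image, sound, or both."""
        def __init__(self, label):
            self.label = label

        def render_visual(self):
            print(f"[icon: {self.label}]")        # stand-in for drawing an image

        def render_audio(self):
            print(f"(earcon for {self.label})")   # stand-in for playing an earcon

        def manifest(self, modalities):
            # Choose media at instantiation time, depending on what is available or appropriate.
            if "visual" in modalities:
                self.render_visual()
            if "audio" in modalities:
                self.render_audio()

    alarm = Metawidget("low battery")
    alarm.manifest({"visual", "audio"})   # sound and graphics together
    alarm.manifest({"audio"})             # audio only, e.g. when the display is occluded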


IEEE Symposium on Visual Languages | 1991

Visual tools and languages: directions for the '90s

Ephraim P. Glinert; Meera M. Blattner; Christopher J. Frerking

The authors identify and discuss three domains where innovative application of visual programming languages is likely to make a significant impact in the near term: concurrent computing, computer-based assistance for people with disabilities, and the multimedia/multimodal environments of tomorrow in which it will be possible to hear and physically interact with information as well as see it.


Journal of Visual Languages and Computing | 1990

Exploring the general-purpose visual alternative

Ephraim P. Glinert; Mark E. Kopache; David W. McIntyre

Although it is now universally accepted that graphics should be an integral part of the human-computer interface, the proper role for graphics in programming, if any, remains controversial. Some impressive visual programming systems have been developed for novices, and for specific application domains. But visual environments that support larger-scale general-purpose programming, in the sense of main-line textual languages such as Pascal or C, are not yet available. In this paper, we report on two experiments involving the design and implementation of general-purpose visual programming environments: SunPict and C^2. In each case, we explain the motivation for the project, provide an overview of system capabilities, and discuss and evaluate system advantages and drawbacks. We then draw conclusions, based on our experiences, as to where future efforts in this field should probably be directed.

Collaboration


Dive into Ephraim P. Glinert's collaborations.

Top Co-Authors

Meera M. Blattner (Lawrence Livermore National Laboratory)
David W. McIntyre (Rensselaer Polytechnic Institute)
Richard L. Kline (Rensselaer Polytechnic Institute)
Mark E. Kopache (Rensselaer Polytechnic Institute)
Avram Vener (Rensselaer Polytechnic Institute)
Charles D. Norton (California Institute of Technology)