
Publications


Featured research published by Paul U. Lee.


Conference on Spatial Information Theory | 1999

Pictorial and Verbal Tools for Conveying Routes

Barbara Tversky; Paul U. Lee

Traditionally, depictions and descriptions have been seen as complementary; depictions have been preferred to convey iconic or metaphorically iconic information whereas descriptions have been preferred for abstract information. Both are external representations designed to complement human memory and information processing. We have found the same underlying structure and semantics for route maps and route directions. Here we find that limited schematic map and direction toolkits are sufficient for constructing directions, supporting the possibility of automatic translation between them.


Lecture Notes in Computer Science | 2000

Lines, Blobs, Crosses and Arrows: Diagrammatic Communication with Schematic Figures

Barbara Tversky; Jeffrey M. Zacks; Paul U. Lee; Julie Heiser

In producing diagrams for a variety of contexts, people use a small set of schematic figures to convey certain context-specific concepts, where the forms themselves suggest meanings. These same schematic figures are interpreted appropriately in context. Three examples support these conclusions: lines, crosses, and blobs in sketch maps; bars and lines in graphs; and arrows in diagrams of complex systems.


Spatial Cognition and Computation | 1999

Why do speakers mix perspectives?

Barbara Tversky; Paul U. Lee; Scott D. Mainwaring

Although considerations of discourse coherence and cognitive processing suggest that communicators should adopt consistent perspectives when describing spatial scenes, in many cases they switch perspectives. Ongoing research examining cognitive costs indicates that these are small and exacted in establishing a mental model of a scene but not in retrieving information from a well-known scene. A perspective entails a point of view, a referent object, and terms of reference. These may change within a perspective, exacting cognitive costs, so that the costs of switching perspective may not be greater than the costs of maintaining the same perspective. Another project investigating perspective choice for self and other demonstrates effects of salience of referent object and ease of terms of reference. Perspective is mixed not just in verbal communications but also in pictorial ones, suggesting that at times, switching perspective is more effective than maintaining a consistent one.


Journal of Visual Languages and Computing | 2005

Wayfinding choremes: a language for modeling conceptual route knowledge

Alexander Klippel; Heike Tappe; Lars Kulik; Paul U. Lee

The emergent interest in ontological and conceptual approaches to modeling route information results from new information technologies as well as from a multidisciplinary interest in spatial cognition. Linguistics investigates verbal route directions; cartography carries out research on route maps and on the information needs of map users; and computer science develops formal representations of routes with the aim of building new wayfinding applications. In concert with geomatics, ontologies of spatial domain knowledge are assembled while sensing technologies for location-aware wayfinding aids are developed simultaneously (e.g. cell phones, GPS-enabled devices or PDAs). These joint multidisciplinary efforts have enhanced cognitive approaches for route directions. In this article, we propose an interdisciplinary approach to modeling route information, the wayfinding choreme theory. Wayfinding choremes are mental conceptualizations of functional wayfinding and route direction elements. With the wayfinding choreme theory, we propose a formal treatment of (mental) conceptual route knowledge that is based on qualitative calculi and refined by behavioral experimental research. This contribution has three parts: First, we introduce the theory of wayfinding choremes. Second, we present term rewriting rules that are grounded in cognitive principles and can tailor route directions to different user requirements. Third, we exemplify various application scenarios for our approach.
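The abstract's notion of term rewriting over discrete route-direction elements can be illustrated with a toy sketch. This is a hypothetical example, not the authors' formalism: the element names (`straight`, `right`, `u-turn`) and the chunking rules are invented for illustration only.

```python
# Hypothetical sketch: a route as a sequence of discrete direction concepts,
# with a term-rewriting pass that chunks consecutive elements into
# higher-order units (in the spirit of the paper's rewriting rules).

route = ["straight", "straight", "right", "straight", "left", "left"]

def rewrite(elements):
    """Apply simple chunking rules repeatedly until a fixed point is reached."""
    rules = [
        # Merge consecutive 'straight' segments into one.
        (["straight", "straight"], ["straight"]),
        # Chunk two successive left turns into a single higher-order unit.
        (["left", "left"], ["u-turn"]),
    ]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            for i in range(len(elements) - len(lhs) + 1):
                if elements[i:i + len(lhs)] == lhs:
                    # Replace the matched subsequence with its chunked form.
                    elements = elements[:i] + rhs + elements[i + len(lhs):]
                    changed = True
                    break
            if changed:
                break
    return elements

print(rewrite(route))  # ['straight', 'right', 'straight', 'u-turn']
```

Running to a fixed point mirrors how such rule systems tailor a verbose element sequence into a more compact direction, though the real theory grounds its rules in behavioral data rather than ad hoc patterns like these.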


Spatial Cognition and Computation | 2005

Interplay Between Visual and Spatial: The Effect of Landmark Descriptions on Comprehension of Route/Survey Spatial Descriptions

Paul U. Lee; Barbara Tversky

Successful wayfinding requires accurate encoding of two types of information: landmarks and the spatial relations between them (e.g., landmark X is left/north of Y). Although both types of information are crucial to wayfinding, behavioral and neurological evidence suggest that they have different substrates. In this paper, we consider the nature of the difference by examining comprehension times of spatial information (i.e., route and survey descriptions) and landmark descriptions. In two studies, participants learned simple environments by reading descriptions from route or survey perspectives, half with a single perspective switch. On half of the switch trials, a landmark description was introduced just prior to the perspective switch. In the first study, landmarks were embellished with descriptions of visual details, while in the second study, landmarks were embellished with descriptions of historic or other factual information. The presence of landmark descriptions did not increase the comprehension time of either route or survey descriptions, suggesting that landmark descriptions are perspective-neutral. Furthermore, visual landmark descriptions speeded comprehension time when the perspective was switched, whereas factual landmark descriptions had no effect on perspective switching costs. Taken together, the findings support separate processes for landmark and spatial information in construction of spatial mental models, and point to the importance of visual details of landmarks in facilitating mental model construction.


International Conference on Robotics and Automation | 1994

Dynamic simulation of interactive robotic environment

Paul U. Lee; Diego C. Ruspini; Oussama Khatib

A dynamic simulation package, which can accurately model the interactions between robots and their environment, has been developed. This package creates a virtual environment where various controllers and workcells may be tested. The simulator is divided into two parts: local objects that compute their own dynamic equations of motion, and a global coordinator that resolves interactive forces between objects. This simulator builds upon previous work on dynamic simulation of simple rigid bodies and extends it to correctly model and efficiently compute the dynamics of multi-link robots.
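The two-part architecture described in the abstract can be sketched in miniature. This is an illustrative 1-D toy with invented names (`Body`, `resolve_contacts`) and a simple penalty-force contact model; it is not the paper's implementation, which handles multi-link robot dynamics.

```python
# Illustrative sketch of the architecture: each object integrates its own
# dynamics locally, while a global coordinator resolves interaction forces.

class Body:
    """A simulated object (1-D) that computes its own equation of motion."""
    def __init__(self, mass, pos, vel, radius=0.5):
        self.mass, self.pos, self.vel, self.radius = mass, pos, vel, radius
        self.force = 0.0  # net force accumulated for the current step

    def step(self, dt):
        # Local dynamics: semi-implicit Euler on F = m * a.
        acc = self.vel_update = self.force / self.mass
        self.vel += acc * dt
        self.pos += self.vel * dt
        self.force = 0.0

def resolve_contacts(bodies, stiffness=1000.0):
    """Global coordinator: applies equal-and-opposite penalty forces
    wherever two bodies overlap."""
    for i, a in enumerate(bodies):
        for b in bodies[i + 1:]:
            penetration = (a.radius + b.radius) - abs(b.pos - a.pos)
            if penetration > 0:
                direction = 1.0 if b.pos >= a.pos else -1.0
                f = stiffness * penetration
                a.force -= direction * f
                b.force += direction * f

# Two bodies approach head-on; the coordinator makes them bounce apart.
bodies = [Body(1.0, 0.0, 1.0), Body(1.0, 2.0, -1.0)]
for _ in range(100):
    resolve_contacts(bodies)       # global phase: interaction forces
    for body in bodies:
        body.step(0.01)            # local phase: each body integrates itself
```

The separation of a local integration phase from a global force-resolution phase is the design point the abstract describes; the penalty-spring contact here is only one simple way to fill in the coordinator's role.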


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2004

Top-down learning strategies: can they facilitate stylus keyboard learning?

Paul U. Lee; Shumin Zhai

Learning a new stylus keyboard layout is time-consuming yet potentially rewarding, as optimized virtual keyboards can substantially increase performance for expert users. This paper explores whether the learning curve can be accelerated using top-down learning strategies. In an experiment, one group of participants learned a stylus keyboard layout with top-down methods, such as visuo-spatial grouping of letters and mnemonic techniques, to build familiarity with a stylus keyboard. The other (control) group learned the keyboard by typing sentences. The top-down learning group liked the stylus keyboard better and perceived it to be more effective than the control group. They also had better memory recall performance. Typing performance after the top-down learning process was faster than the initial performance of the control group, but not different from the performance of the control group after they had spent an equivalent amount of time typing. Therefore, top-down learning strategies improved explicit recall as expected, but the improved memory of the keyboard did not result in quicker typing speeds. These results suggest that quicker acquisition of declarative knowledge does not improve the acquisition speed of procedural knowledge, even during the initial cognitive stage of virtual keyboard learning. They also suggest that top-down learning strategies can motivate users to learn a new keyboard more than repetitive rehearsal, without any loss in typing performance.


Spatial Cognition and Computation | 2004

Events by Hands and Feet

Barbara Tversky; Jeffrey M. Zacks; Paul U. Lee

The human mind carves time into events much as it carves space into objects. Events are activities that are perceived to have beginnings, middles, and ends, such as going to work and making a bed. Events performed by humans can be enacted by feet, as in getting to work, or by hands, as in making a bed. Although continuous, events are perceived to have discrete parts. Events by feet are segmented into actions at nodes, or turns at landmarks, as revealed in spontaneously produced route maps and route directions. In contrast, events by hands are segmented hierarchically. At the coarse level, the segments are punctuated by objects or object parts, sheets, pillowcases, and blanket in the case of making the bed. At the fine level, segments are punctuated by articulated actions on the same object, spreading the sheet, tucking in the corners, smoothing it out. For both events by feet and events by hands, the segments correspond to changes in goals and subgoals, signaled by perceptually salient changes in physical activity.


Smart Graphics | 2003

The effect of motion in graphical user interfaces

Paul U. Lee; Alexander Klippel; Heike Tappe

Motion can be an effective tool to focus users' attention and to support the parsing of complex information in graphical user interfaces. Despite the ubiquitous use of motion in animated displays, its effectiveness has been marginal at best. The ineffectiveness of many animated displays may be due to a mismatch between the attributes of motion and the nature of the task at hand. To test this hypothesis, we examined different modes of route presentation that are commonly used today (e.g. internet maps, GPS maps, etc.) and their effects on subsequent route memory. Participants learned a route from a map of a fictitious town. The route was presented to them either as a solid line (static) or as a moving dot (dynamic). In a subsequent memory task, participants recalled fewer pertinent landmarks (i.e. landmarks at the turns) in the dynamic condition, likely because the moving dot focused attention equally on critical and less important parts of the route. A second study included a combined (i.e. both static and dynamic) presentation mode, which potentially yields better recall than either presentation mode alone. Additionally, verbalization data confirmed that the static presentation mode allocated attention to the task-relevant information better than the dynamic mode. These findings support the hypothesis that animated tasks are conceived of as sequences of discrete steps, and that the motion in animated displays inhibits the discretization process. The results also suggest that a combined presentation mode can unite the benefits of both static and dynamic modes.


Archive | 2002

Costs of switching perspectives in route and survey descriptions

Paul U. Lee; Barbara Tversky

Collaboration


An overview of Paul U. Lee's collaborators.

Top Co-Authors

Alexander Klippel

Pennsylvania State University

Jeffrey M. Zacks

Washington University in St. Louis


Scott D. Mainwaring

Interval Research Corporation

Lars Kulik

University of Melbourne
