Scott S. Snibbe
Interval Research Corporation
Publication
Featured research published by Scott S. Snibbe.
user interface software and technology | 1993
Manojit Sarkar; Scott S. Snibbe; Oren J. Tversky; Steven P. Reiss
We propose the metaphor of rubber sheet stretching for viewing large and complex layouts within small display areas. Imagine the original 2D layout on a rubber sheet. Users can select and enlarge different areas of the sheet by holding and stretching it with a set of special tools called handles. As the user stretches an area, a greater level of detail is displayed there. The technique has some additional desirable features such as areas specified as arbitrary closed polygons, multiple regions of interest, and uniform scaling inside the stretched regions.
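The stretching metaphor can be illustrated with a minimal one-dimensional sketch (a hypothetical illustration, not the paper's implementation): the selected interval is magnified by a stretch factor while the surrounding context is compressed so that the overall layout bounds are preserved.

```python
def rubber_stretch(x, a, b, s):
    """Piecewise-linear 1D rubber-sheet transform: magnify the focus
    region [a, b] within [0, 1] by factor s, compressing the context
    outside it so the total extent stays [0, 1].
    (Illustrative sketch only; the paper uses 2D regions and handles.)"""
    inner = b - a
    outer = 1.0 - inner
    new_inner = min(s * inner, 0.999)      # magnified width of the focus region
    scale_out = (1.0 - new_inner) / outer  # compression factor for the context
    new_a = a * scale_out                  # where the focus region now begins
    if x < a:
        return x * scale_out
    if x > b:
        return 1.0 - (1.0 - x) * scale_out
    return new_a + (x - a) * (new_inner / inner)
```

Applying the same mapping independently to each axis gives a simple rectangular stretch; the paper generalizes this to arbitrary closed polygons and multiple simultaneous regions of interest.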
interactive 3d graphics and games | 1992
Brookshire D. Conner; Scott S. Snibbe; Kenneth P. Herndon; Daniel C. Robbins; Robert C. Zeleznik; Andries van Dam
The 3D components of today’s user interfaces are still underdeveloped. Direct interaction with 3D objects has been limited thus far to gestural picking, manipulation with linear transformations, and simple camera motion. Further, there are no toolkits for building 3D user interfaces. We present a system which allows experimentation with 3D widgets, encapsulated 3D geometry and behavior. Our widgets are first-class objects in the same 3D environment used to develop the application. This integration of widgets and application objects provides a higher bandwidth between interface and application than exists in more traditional UI toolkit-based interfaces. We hope to allow user-interface designers to build highly interactive 3D environments more easily than is possible with today’s tools.
user interface software and technology | 1992
Kenneth P. Herndon; Robert C. Zeleznik; Daniel C. Robbins; D. Brookshire Conner; Scott S. Snibbe; Andries van Dam
It is often difficult in computer graphics applications to understand spatial relationships between objects in a 3D scene or effect changes to those objects without specialized visualization and manipulation techniques. We present a set of three-dimensional tools (widgets) called “shadows” that not only provide valuable perceptual cues about the spatial relationships between objects, but also provide a direct manipulation interface to constrained transformation techniques. These shadow widgets provide two advances over previous techniques. First, they provide high correlation between their own geometric feedback and their effects on the objects they control. Second, unlike some other 3D widgets, they do not obscure the objects they control.
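The constrained-transformation idea can be sketched in a few lines (hypothetical names; not the authors' code): dragging an object's shadow on the ground plane moves the object only within that plane, leaving its height fixed.

```python
def drag_shadow(obj_pos, shadow_drag):
    """Shadow-widget constraint sketch: dragging the shadow on the
    ground plane (y = 0) translates the object in x and z only,
    so its height above the plane is preserved."""
    dx, dz = shadow_drag
    x, y, z = obj_pos
    return (x + dx, y, z + dz)
```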
Archive | 2002
Mark Scheeff; John Pinto; Kris Rahardja; Scott S. Snibbe; Robert Tow
In an effort to explore human response to a socially competent embodied agent, we have built a life-like teleoperated robot. Our robot uses motion, gesture and sound to be social with people in its immediate vicinity. We explored human-robot interaction in both private and public settings. Our users enjoyed interacting with Sparky and treated it as a living thing. Children showed more engagement than adults, though both groups touched, mimicked and spoke to the robot and often wondered openly about its intentions and capabilities. Evidence from our experiences with a teleoperated robot showed a need for next-generation autonomous social robots to develop more sophisticated sensory modalities that are better able to pay attention to people.
user interface software and technology | 2001
Scott S. Snibbe; Karon E. MacLean; Robert S. Shaw; Jayne B. Roderick; William L. Verplank; Mark Scheeff
We introduce a set of techniques for haptically manipulating digital media such as video, audio, voicemail and computer graphics, utilizing virtual mediating dynamic models based on intuitive physical metaphors. For example, a video sequence can be modeled by linking its motion to a heavy spinning virtual wheel: the user browses by grasping a physical force-feedback knob and engaging the virtual wheel through a simulated clutch to spin or brake it, while feeling the passage of individual frames. These systems were implemented on a collection of single axis actuated displays (knobs and sliders), equipped with orthogonal force sensing to enhance their expressive potential. We demonstrate how continuous interaction through a haptically actuated device rather than discrete button and key presses can produce simple yet powerful tools that leverage physical intuition.
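The spinning-wheel model described above can be sketched as a small dynamics simulation (class name and constants are illustrative assumptions, not the paper's implementation): a virtual flywheel carries the video position, friction slows it, and an engaged clutch drags its speed toward the knob's speed.

```python
class MediaWheel:
    """Minimal dynamics sketch of the 'heavy spinning wheel' metaphor
    for video browsing. A virtual flywheel holds the playback position;
    a simulated clutch couples the user's knob velocity to the wheel."""
    def __init__(self, inertia=0.5, friction=0.2, clutch_gain=4.0):
        self.inertia = inertia
        self.friction = friction
        self.clutch_gain = clutch_gain
        self.velocity = 0.0   # frames per second
        self.position = 0.0   # fractional frame index

    def step(self, knob_velocity, clutch_engaged, dt):
        # friction always opposes the wheel's motion
        torque = -self.friction * self.velocity
        if clutch_engaged:
            # clutch torque drags wheel speed toward the knob speed
            torque += self.clutch_gain * (knob_velocity - self.velocity)
        self.velocity += (torque / self.inertia) * dt
        self.position += self.velocity * dt
        return int(self.position)  # current video frame to display
```

Spinning the knob with the clutch engaged accelerates the wheel; releasing the clutch lets it coast, slowly losing speed to friction, which is what gives browsing its physical feel.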
international conference on computer graphics and interactive techniques | 1992
Scott S. Snibbe; Kenneth P. Herndon; Daniel C. Robbins; D. Brookshire Conner; Andries van Dam
We are developing a framework for creating interactive 3D environments for applications in design, education, and the communication of information and ideas [3]. Our most recent work focuses on providing a useful and powerful interface to such a complex environment. To this end we have developed 3D widgets, objects that encapsulate 3D geometry and behavior, to control other objects in the scene [2]. We build 3D widgets as first-class objects in our real-time animation system. Because our system allows rapid prototyping of objects, we hope to enlarge today’s surprisingly small vocabulary of 3D widgets that includes menus floating in 3D, gestural picking, translation and rotation, cone trees, and perspective walls. As a way to focus on issues of 3D widget design, we have developed widgets to perform a particular task: applying high-level deformations to 3D objects [1]. The complexity of these operations makes numerical specification or panels of sliders difficult to use, and yet direct manipulation interfaces cannot provide meaningful feedback without fixing most parameters. In this video paper, we show a set of new 3D widgets to control deformations called racks. A simple rack consists of a bar specifying the axis of deformation and some number of handles attached to the bar specifying additional deformation parameters. For example, a taper rack has two additional handles. Moving the ends of the handles towards or away from the axis bar changes the amount of taper of the deformed object; changing the distance between the handles changes the region over which the deformation is applied. A more complex rack can have multiple handles specifying different deformations. The racks in Figures 1–3 all have handles for twisting (purple), tapering (blue), and bending (red) an object. The deformation range is the region between the twist and taper handles.
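The taper rack's effect can be sketched as a Barr-style taper deformation (a hypothetical illustration; parameter names are assumptions): the two handle positions along the axis bound the deformation region, and the handle offsets set the taper amount.

```python
def taper(points, z0, z1, amount):
    """Barr-style taper along the z axis, as a taper rack's two handles
    might control it: full scale below z0, scale (1 - amount) above z1,
    and a linear blend in between (the rack's deformation region)."""
    out = []
    for x, y, z in points:
        if z <= z0:
            t = 0.0
        elif z >= z1:
            t = 1.0
        else:
            t = (z - z0) / (z1 - z0)  # position within the deformation region
        s = 1.0 - amount * t          # cross-sectional scale at this height
        out.append((x * s, y * s, z))
    return out
```

Dragging a handle toward the axis bar corresponds to increasing `amount`; sliding the handles along the bar corresponds to changing `z0` and `z1`.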
non-photorealistic animation and rendering | 2000
Scott S. Snibbe; Golan Levin
The history of abstract animation and light performance points towards an aesthetic of temporal abstraction which digital computer graphics can ideally explore. Computer graphics has leapt forward to embrace three-dimensional texture mapped imagery, but stepped over the broad aesthetic terrain of two-dimensional interactive dynamic
human factors in computing systems | 2000
Karon E. MacLean; Scott S. Snibbe; Golan Levin
Discrete and continuous modes of manual control are fundamentally different: buttons select or change state, while handles persistently modulate an analog parameter. User interfaces for many electronically aided tasks afford only one of these modes when both are needed. We describe an integration of two kinds of physical interfaces (tagged objects and force feedback) that enables seamless execution of such multimodal tasks while applying the benefits of physicality; and demonstrate application scenarios with conceptual and engineering prototypes. Our emphasis is on sharing insights gained in a design case study, including expert user reactions.
Computer Graphics Forum | 1995
Scott S. Snibbe
We present a new set of interface techniques for visualizing and editing animation directly in a single three‐dimensional scene. Motion is edited using direct‐manipulation tools which satisfy high‐level goals such as “reach this point at this time” or “go faster at this moment”. These tools can be applied over an arbitrary temporal range and maintain arbitrary degrees of spatial and temporal continuity.
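A goal like "reach this point at this time" can be sketched as a displacement of the sampled motion with a smooth falloff (a hypothetical one-dimensional illustration, not the paper's method): the offset needed at the grabbed time is blended to zero over a chosen temporal range, preserving continuity outside the edit.

```python
def edit_motion(times, positions, t_goal, p_goal, radius):
    """Sketch of a 'reach this point at this time' edit on a sampled
    1D motion: displace the samples so the motion passes through
    p_goal at t_goal, fading the offset to zero over +/- radius
    with a C1-continuous falloff."""
    # offset required at the sample nearest the grabbed time
    i = min(range(len(times)), key=lambda k: abs(times[k] - t_goal))
    delta = p_goal - positions[i]
    edited = []
    for t, p in zip(times, positions):
        u = min(abs(t - t_goal) / radius, 1.0)
        w = 2.0 * u**3 - 3.0 * u**2 + 1.0  # 1 at the grab, 0 at the edge
        edited.append(p + w * delta)
    return edited
```

The `radius` parameter plays the role of the arbitrary temporal range mentioned in the abstract; a higher-order falloff would give correspondingly higher degrees of continuity.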
human factors in computing systems | 2009
Scott S. Snibbe; Hayes Solos Raffle