Publications


Featured research published by Ken Perlin.


International Conference on Computer Graphics and Interactive Techniques | 1993

Pad: an alternative approach to the computer interface

Ken Perlin; David Strickman Fox

We believe that navigation in information spaces is best supported by tapping into our natural spatial and geographic ways of thinking. To this end, we are developing a new computer interface model called Pad. The ongoing Pad project uses a spatial metaphor for computer interface design. It provides an intuitive base for the support of such applications as electronic marketplaces, information services, and on-line collaboration. Pad is an infinite two-dimensional information plane that is shared among users, much as a network file system is shared. Objects are organized geographically; every object occupies a well-defined region on the Pad surface. For navigation, Pad uses “portals”: magnifying glasses that can peer into and roam over different parts of this single infinite shared desktop; links to specific items are established and broken continually as the portal’s view changes. Portals can recursively look onto other portals. This paradigm enables the sort of peripheral activity generally found in real physical working environments. The apparent size of an object to any user determines the amount of detail it presents. Different users can share and view multiple applications while assigning each a desired degree of interaction. Documents can be visually nested and zoomed as they move back and forth between primary and secondary working attention. Things can be peripherally accessible. In this paper we describe the Pad interface. We discuss how to efficiently implement its graphical aspects, and we illustrate some of our initial applications.
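
The two mechanisms the abstract leans on are the portal view transform and semantic zooming (detail tied to apparent size). The sketch below is a rough illustration only, not the Pad implementation; the Portal class, its fields, and the detail thresholds are all hypothetical.

```python
# Rough illustration (not the Pad implementation): a portal as a pan/zoom view
# onto a shared 2D surface, with apparent on-screen size driving how much
# detail an object presents. All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Portal:
    cx: float     # surface x coordinate at the portal's center
    cy: float     # surface y coordinate at the portal's center
    zoom: float   # surface-to-screen scale factor

    def surface_to_screen(self, x, y):
        """Map a point on the shared surface into this portal's view."""
        return (x - self.cx) * self.zoom, (y - self.cy) * self.zoom

    def detail_level(self, object_size):
        """Semantic zooming: apparent size decides how much detail to draw."""
        apparent = object_size * self.zoom
        if apparent < 4:
            return "placeholder"   # too small to matter
        if apparent < 100:
            return "summary"       # iconic or abbreviated view
        return "full"              # full content

overview = Portal(cx=0.0, cy=0.0, zoom=0.05)
closeup = Portal(cx=120.0, cy=40.0, zoom=20.0)
print(overview.detail_level(10))   # placeholder
print(closeup.detail_level(10))    # full
```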


Journal of Visual Languages and Computing | 1996

Pad++: A Zoomable Graphical Sketchpad For Exploring Alternate Interface Physics

Benjamin B. Bederson; James D. Hollan; Ken Perlin; Jonathan Meyer; David Bacon; George W. Furnas

We describe Pad++, a zoomable graphical sketchpad that we are exploring as an alternative to traditional window and icon-based interfaces. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly contrast it with current design strategies. We envision a rich world of dynamic persistent informational entities that operate according to multiple physics specifically designed to provide cognitively facile access and serve as the basis for design of new computationally-based work materials.


International Conference on Computer Graphics and Interactive Techniques | 2002

Improving noise

Ken Perlin

Two deficiencies in the original Noise algorithm are corrected: second order interpolation discontinuity and unoptimal gradient computation. With these defects corrected, Noise both looks better and runs faster. The latter change also makes it easier to define a uniform mathematical reference standard.
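
The interpolant fix is the well-known part of this paper: the original Noise blends lattice gradients with the Hermite cubic 3t^2 - 2t^3, whose second derivative is nonzero at cell boundaries, and Improved Noise replaces it with the quintic 6t^5 - 15t^4 + 10t^3, whose first and second derivatives both vanish at the endpoints (the gradient fix swaps the pseudo-random gradient directions for a small fixed set). The sketch below is not the paper's reference implementation; it only compares the two fade curves numerically.

```python
# Sketch of the interpolant change (not the paper's reference implementation).

def fade_original(t):
    return t * t * (3.0 - 2.0 * t)                      # 3t^2 - 2t^3

def fade_improved(t):
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)    # 6t^5 - 15t^4 + 10t^3

def second_derivative(f, t, h=1e-4):
    """Central-difference estimate of f''(t)."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

for t in (0.0, 1.0):
    print(t,
          round(second_derivative(fade_original, t), 3),   # +-6: discontinuous join
          round(second_derivative(fade_improved, t), 3))   # ~0: smooth join
```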


User Interface Software and Technology | 1998

Quikwriting: continuous stylus-based text entry

Ken Perlin

We present a “heads-up” shorthand for entering text on a stylus-based computer very rapidly. The innovations are that (i) the stylus need never be lifted from the surface, and that (ii) the user need never stop moving the stylus. Continuous multi-word text of arbitrary length can be written fluidly, even as a single continuous gesture if desired.
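
The abstract does not spell out the decoding scheme, so the following is only a toy sketch of a zone-based continuous-entry decoder in that spirit: the zone labels and the character table are invented for illustration and are not Quikwriting's actual layout.

```python
# Toy sketch of a zone-sequence decoder (illustration only; the zone layout and
# character table are made up, not the Quikwriting mapping).
# The surface is treated as a center resting zone surrounded by outer zones.
# A character is emitted each time the stylus leaves the center and returns,
# keyed by the first outer zone entered and the last outer zone visited.

# Hypothetical table: (first_zone, last_zone) -> character.
CHAR_TABLE = {
    ("N", "N"): "e", ("N", "NE"): "t", ("N", "NW"): "a",
    ("E", "E"): "o", ("E", "NE"): "i", ("E", "SE"): "n",
    # ... a real table would cover the full alphabet
}

def decode(zone_sequence):
    """Decode a stream of zone labels ('C' = center) into text."""
    text, stroke = [], []
    for zone in zone_sequence:
        if zone == "C":
            if stroke:
                text.append(CHAR_TABLE.get((stroke[0], stroke[-1]), "?"))
                stroke = []
        else:
            stroke.append(zone)
    return "".join(text)

# One continuous gesture: center -> N -> center -> E -> NE -> center
print(decode(["C", "N", "C", "E", "NE", "C"]))  # -> "ei"
```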


ACM Transactions on Graphics | 2014

Real-Time Continuous Pose Recovery of Human Hands Using Convolutional Networks

Jonathan Tompson; Murphy Stein; Yann LeCun; Ken Perlin

We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.
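
As a structural illustration of the runtime pipeline (the labeled-dataset-generation stage is for training only), the sketch below wires hypothetical stand-in stages together. It is not the authors' implementation; every stage body is a placeholder that only preserves the interfaces between stages.

```python
# Structural sketch of the runtime pipeline described above.
# Stage interfaces and bodies are hypothetical stand-ins, not the authors' code.

import numpy as np

def segment_hand(depth_image):
    """Stage 1 stand-in: a randomized decision forest would label hand pixels;
    here we just threshold the nearest depth values."""
    return depth_image < np.percentile(depth_image, 10)

def extract_features(depth_image, mask):
    """Stage 2 stand-in: a convolutional network would produce a heat map per
    hand joint; here each joint gets a uniform map over the segmented region."""
    num_joints = 14
    heat = np.repeat(mask[None, :, :].astype(float), num_joints, axis=0)
    return heat / max(heat.sum(axis=(1, 2), keepdims=True).max(), 1e-9)

def recover_pose(heat_maps):
    """Stage 3 stand-in: inverse kinematics would fit a hand skeleton to the
    joint estimates; here we just take each heat map's argmax."""
    return [np.unravel_index(np.argmax(h), h.shape) for h in heat_maps]

depth = np.random.rand(120, 160).astype(np.float32)
pose = recover_pose(extract_features(depth, segment_hand(depth)))
print(len(pose), "joint estimates")
```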


Non-Photorealistic Animation and Rendering | 2000

Painterly rendering for video and interaction

Aaron Hertzmann; Ken Perlin

We present new methods for painterly video processing. Based on our earlier still image processing technique, we “paint over” successive frames of animation, applying paint only in regions where the source video is changing. Image regions with minimal changes, such as due to video noise, are also left alone, using a simple difference masking technique. Optionally, brush strokes may be warped between frames using computed or procedural optical flow. These methods produce video with a novel visual style distinct from previously demonstrated algorithms. Without optical flow, the video gives the effect of a painting that has been repeatedly updated and photographed, similar to paint-on-glass animation. We feel that this gives a subjective impression of the work of a human hand. With optical flow, the painting surface flows and deforms to follow the shape of the world. We have constructed an interactive painting exhibit, in which a painting is continually updated. Viewers have found this to be a compelling experience, suggesting the promise of non-photorealistic rendering for creating compelling interactive visual experiences.
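
A minimal sketch of the difference-masking step, assuming NumPy arrays for frames and using simple pixel copying as a stand-in for actual brush-stroke placement (this is not the authors' code):

```python
# Minimal sketch of difference masking: repaint only where the video changed,
# ignoring small per-pixel changes such as video noise. Pixel copying stands in
# for brush-stroke placement.

import numpy as np

def changed_regions(prev_frame, curr_frame, threshold=15):
    """Boolean mask of pixels whose change exceeds the noise threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.max(axis=-1) > threshold   # any channel changed noticeably

def paint_over(canvas, curr_frame, mask):
    """'Paint over' only inside the mask; elsewhere the old paint remains."""
    canvas[mask] = curr_frame[mask]
    return canvas

prev = np.zeros((90, 160, 3), dtype=np.uint8)
curr = prev.copy()
curr[20:40, 50:80] = 200                   # a region where the video changes
canvas = paint_over(prev.copy(), curr, changed_regions(prev, curr))
```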


International Conference on Computer Graphics and Interactive Techniques | 2000

An autostereoscopic display

Ken Perlin; Salvatore Paxia; Joel S. Kollin

We present a display device which solves a long-standing problem: to give a true stereoscopic view of simulated objects, without artifacts, to a single unencumbered observer, while allowing the observer to freely change position and head rotation. Based on a novel combination of temporal and spatial multiplexing, this technique will enable artifact-free stereo to become a standard feature of display screens, without requiring the use of special eyewear. The availability of this technology may significantly impact CAD and CHI applications, as well as entertainment graphics. The underlying algorithms and system architecture are described, as well as hardware and software aspects of the implementation.


International Conference on Computer Graphics and Interactive Techniques | 2009

The UnMousePad: an interpolating multi-touch force-sensing input pad

Ilya D. Rosenberg; Ken Perlin

Recently, there has been great interest in multi-touch interfaces. Such devices have taken the form of camera-based systems such as Microsoft Surface [de los Reyes et al. 2007] and Perceptive Pixel's FTIR display [Han 2005] as well as hand-held devices using capacitive sensors such as the Apple iPhone [Jobs et al. 2008]. However, optical systems are inherently bulky while most capacitive systems are only practical in small form factors and are limited in their application since they respond only to human touch and are insensitive to variations in pressure [Westerman 1999]. We have created the UnMousePad, a flexible and inexpensive multi-touch input device based on a newly developed pressure-sensing principle called Interpolating Force Sensitive Resistance. IFSR sensors can acquire high-quality anti-aliased pressure images at high frame rates. They can be paper-thin, flexible, and transparent and can easily be scaled to fit on a portable device or to cover an entire table, floor or wall. The UnMousePad can sense three orders of magnitude of pressure variation, can be used to distinguish multiple fingertip touches while simultaneously tracking pens and styli with a positional accuracy of 87 dpi, and can sense the pressure distributions of objects placed on its surface. In addition to supporting multi-touch interaction, IFSR is a general pressure imaging technology that can be incorporated into shoes, tennis racquets, hospital beds, factory assembly lines and many other applications. The ability to measure high-quality pressure images at low cost has the potential to dramatically improve the way that people interact with machines and the way that machines interact with the world.
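
The interpolation itself happens in the sensor, but the payoff of an anti-aliased pressure image can be illustrated in software: a pressure-weighted centroid resolves touch position finer than the sensor-element pitch. The sketch below uses made-up numbers and a hypothetical element pitch; it is not the UnMousePad firmware.

```python
# Illustration only: sub-element touch localization from a coarse pressure image.
# Because an interpolating sensor spreads a press smoothly across neighboring
# elements, the pressure-weighted centroid is not snapped to the element grid.

import numpy as np

def touch_centroid(pressure, element_pitch_mm=2.5):
    """Pressure-weighted centroid of a pressure image, in millimetres."""
    total = pressure.sum()
    if total == 0:
        return None
    ys, xs = np.mgrid[0:pressure.shape[0], 0:pressure.shape[1]]
    cy = (ys * pressure).sum() / total
    cx = (xs * pressure).sum() / total
    return cx * element_pitch_mm, cy * element_pitch_mm

# A fingertip pressing between sensor rows spreads pressure across them.
p = np.array([[0, 1, 2, 1, 0],
              [0, 2, 5, 2, 0],
              [0, 1, 3, 1, 0]], dtype=float)
print(touch_centroid(p))
```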


Computer Graphics Forum | 2010

A Survey of Procedural Noise Functions

Ares Lagae; Sylvain Lefebvre; Robert L. Cook; Tony DeRose; George Drettakis; David S. Ebert; John P. Lewis; Ken Perlin; Matthias Zwicker

Procedural noise functions are widely used in computer graphics, from off‐line rendering in movie production to interactive video games. The ability to add complex and intricate details at low memory and authoring cost is one of its main attractions. This survey is motivated by the inherent importance of noise in graphics, the widespread use of noise in industry and the fact that many recent research developments justify the need for an up‐to‐date survey. Our goal is to provide both a valuable entry point into the field of procedural noise functions, as well as a comprehensive view of the field to the informed reader. In this report, we cover procedural noise functions in all their aspects. We outline recent advances in research on this topic, discussing and comparing recent and well‐established methods. We first formally define procedural noise functions based on stochastic processes and then classify and review existing procedural noise functions. We discuss how procedural noise functions are used for modelling and how they are applied to surfaces. We then introduce analysis tools and apply them to evaluate and compare the major approaches to noise generation. We finally identify several directions for future work.
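
As a concrete example of the kind of construction such surveys cover, the sketch below builds the standard fractal sum (fBm): several octaves of a band-limited noise summed with geometrically decreasing amplitude. The base noise here is a trivial value-noise stand-in, not one of the surveyed functions.

```python
# Sketch of a fractal sum (fBm) over a toy 1D value noise; illustration only.

import math
import random

def value_noise_1d(x, seed=0):
    """Toy band-limited noise: smooth blend of per-integer pseudo-random values."""
    def lattice(i):
        rng = random.Random(i * 1000003 + seed)   # deterministic value per lattice point
        return rng.uniform(-1.0, 1.0)
    i = math.floor(x)
    t = x - i
    s = t * t * (3.0 - 2.0 * t)                   # smoothstep between lattice values
    return (1.0 - s) * lattice(i) + s * lattice(i + 1)

def fbm(x, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractal sum: each octave adds detail at higher frequency, lower amplitude."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise_1d(x * frequency)
        amplitude *= gain
        frequency *= lacunarity
    return total

print([round(fbm(x * 0.1), 3) for x in range(5)])
```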


International Conference on Computer Graphics and Interactive Techniques | 1997

Layered compositing of facial expression

Ken Perlin

How does one make an embodied agent react with appropriate facial expression, without resorting to repetitive prebuilt animations? How does one mix and transition between facial expressions to visually represent shifting moods and attitudes? How can authors of these agents relate lower level facial movements to higher level moods and intentions? We introduce a computational engine which addresses these questions with a stratified approach. We first define a low level movement model having a discrete number of degrees of freedom. Animators can combine and layer these degrees of freedom to create elements of autonomous facial motion. Animators can then recursively build on this movement model to construct higher level models.
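
As a hypothetical illustration of layering a small set of low-level degrees of freedom under higher-level moods, the sketch below blends weighted mood layers over a neutral pose. The DOF names, mood tables, and blend rule are invented for illustration and are not the paper's engine.

```python
# Hypothetical sketch: composite (mood, weight) layers over a base facial pose.

# Low-level model: a small set of facial degrees of freedom.
NEUTRAL = {"brow_raise": 0.0, "eye_open": 0.5, "smile": 0.0, "jaw_open": 0.0}

# Higher-level layers map a mood onto target DOF values (made-up numbers).
MOODS = {
    "happy":    {"smile": 0.8, "eye_open": 0.7, "brow_raise": 0.2},
    "surprise": {"brow_raise": 1.0, "eye_open": 1.0, "jaw_open": 0.6},
}

def blend_layers(base, layers):
    """Apply each (mood, weight) layer in order; later layers can shift DOFs
    already touched by earlier ones, so moods mix and transition as their
    weights change over time."""
    pose = dict(base)
    for mood, weight in layers:
        for dof, target in MOODS[mood].items():
            pose[dof] += weight * (target - pose[dof])
    return pose

# Mid-transition between a fading smile and rising surprise.
print(blend_layers(NEUTRAL, [("happy", 0.4), ("surprise", 0.6)]))
```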

