Okan Arikan
University of California, Berkeley
Publications
Featured research published by Okan Arikan.
ACM Transactions on Graphics | 2009
Leslie Ikemoto; Okan Arikan; David A. Forsyth
One way that artists create compelling character animations is by manipulating details of a character's motion. This process is expensive and repetitive. We show that we can make such motion editing more efficient by generalizing the edits an animator makes on short sequences of motion to other sequences. Our method predicts frames for the motion using Gaussian process models of kinematics and dynamics, and combines these estimates with probabilistic inference. Our method can be used to propagate edits from examples to an entire sequence for an existing character, and it can also be used to map a motion from a control character to a very different target character. The technique generalizes well: for example, we show that an estimator, learned from a few seconds of edited example animation using our methods, generalizes well enough to edit minutes of character animation in a high-quality fashion. Learning is interactive: an animator who wants to improve the output can provide small, correcting examples, and the system will produce improved estimates of motion. We make this interactive learning process efficient and natural with a fast, full-body IK system with novel features. Finally, we present data from interviews with professional character animators indicating that generalizing and propagating animator edits can save artists significant time and work.
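The regression idea at the core of the paper can be sketched compactly. The snippet below is a minimal illustration, not the paper's model: it uses a plain Gaussian process regressor with an RBF kernel (a stand-in for the paper's kinematics and dynamics models) to propagate an animator's edits from a few example frames to a whole sequence; the 6-DOF toy pose representation and kernel parameters are invented for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between two sets of pose vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, Y_train, X_test, noise=1e-4):
    # Standard GP posterior mean: K* (K + noise*I)^-1 Y.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_test, X_train) @ np.linalg.solve(K, Y_train)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((20, 6))        # original poses (toy 6-DOF skeleton)
Y_train = 0.1 * np.sin(X_train)               # the animator's edits (synthetic)
X_new = rng.standard_normal((100, 6))         # a long unedited sequence
edits = gp_predict(X_train, Y_train, X_new)   # edits propagated to every frame
print(edits.shape)                            # (100, 6)
```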
International Conference on Computer Graphics and Interactive Techniques | 2002
Okan Arikan; David A. Forsyth
There are many applications that demand large quantities of natural-looking motion. It is difficult to synthesize motion that looks natural, particularly when it is people who must move. In this paper, we present a framework that generates human motions by cutting and pasting motion capture data. Selecting a collection of clips that yields an acceptable motion is a combinatorial problem that we manage as a randomized search of a hierarchy of graphs. This approach can generate motion sequences that satisfy a variety of constraints automatically. The motions are smooth and human-looking. They are generated in real time so that we can author complex motions interactively. The algorithm generates multiple motions that satisfy a given set of constraints, allowing a variety of choices for the animator. It can easily synthesize multiple motions that interact with each other using constraints. This framework allows the extensive re-use of motion capture data for new purposes.
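To make the combinatorial search concrete, here is a toy sketch of the cut-and-paste idea: clips are nodes, allowed splices are edges, and a randomized hill-climbing search mutates a candidate clip sequence to reduce constraint error. The single flat graph, the scalar "displacement" per clip, and the end-near-a-goal constraint are all simplifying assumptions; the paper searches a hierarchy of graphs with richer constraints.

```python
import random
random.seed(0)

clips = {f"c{i}": random.uniform(0.5, 2.0) for i in range(8)}  # clip -> displacement
edges = {c: random.sample(list(clips), 3) for c in clips}      # allowed splices

def error(path, goal=10.0):
    # Constraint violation: distance between where the motion ends and the goal.
    return abs(sum(clips[c] for c in path) - goal)

def random_path(length=8):
    path = [random.choice(list(clips))]
    while len(path) < length:
        path.append(random.choice(edges[path[-1]]))
    return path

def mutate(path):
    # Resample everything after a random splice point, keeping edges valid.
    i = random.randrange(1, len(path))
    new = path[:i]
    while len(new) < len(path):
        new.append(random.choice(edges[new[-1]]))
    return new

best = random_path()
for _ in range(5000):                          # randomized search
    cand = mutate(best)
    if error(cand) < error(best):
        best = cand
print(best, "error:", round(error(best), 3))
```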
Foundations and Trends in Computer Graphics and Vision | 2005
David A. Forsyth; Okan Arikan; Leslie Ikemoto; James F. O'Brien; Deva Ramanan
We review methods for kinematic tracking of the human body in video. The review is part of a projected book that is intended to cross-fertilize ideas about motion representation between the animation and computer vision communities. The review confines itself to the earlier stages of motion, focusing on tracking and motion synthesis; future material will cover activity representation and motion generation. In general, we take the position that tracking does not necessarily involve (as is usually thought) complex multimodal inference problems. Instead, there are two key problems, both easy to state. The first is lifting, where one must infer the configuration of the body in three dimensions from image data. Ambiguities in lifting can result in a multimodal inference problem, and we review what little is known about the extent to which a lift is ambiguous. The second is data association, where one must determine which pixels in an image belong to the body being tracked. Full text available at http://dx.doi.org/10.1561/0600000005.
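The lifting ambiguity has a simple geometric source that a few lines can illustrate. Under orthographic projection (an assumption made here for simplicity), a limb of known length L that projects to a 2-D offset (dx, dy) is consistent with two depths, dz = ±sqrt(L² − dx² − dy²), so a chain of k segments admits up to 2^k geometrically valid lifts:

```python
import itertools, math

def lift_segment(dx, dy, L=1.0):
    # Two depths are consistent with the observed 2-D projection of a segment.
    dz2 = L**2 - dx**2 - dy**2
    if dz2 < 0:
        return []                      # projection inconsistent with the length
    dz = math.sqrt(dz2)
    return [dz, -dz] if dz > 0 else [0.0]

observed = [(0.6, 0.3), (0.2, 0.8), (0.5, 0.5)]   # 2-D offsets of a 3-segment chain
per_segment = [lift_segment(dx, dy) for dx, dy in observed]
lifts = list(itertools.product(*per_segment))
print(len(lifts), "candidate 3-D interpretations")  # up to 2^3 = 8
```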
Interactive 3D Graphics and Games | 2007
Perumaal Shanmugam; Okan Arikan
We introduce a visually pleasing ambient occlusion approximation running on real-time graphics hardware. Our method is a multi-pass algorithm that separates the ambient occlusion problem into high-frequency, detailed ambient occlusion and low-frequency, distant ambient occlusion domains, both capable of running independently and in parallel. The high-frequency, detailed approach uses an image-space method to approximate the ambient occlusion due to nearby occluders caused by high surface detail. The low-frequency approach uses the intrinsic properties of a modern GPU to greatly reduce the search area for large and distant occluders with the help of a low-detail approximated version of the occluder geometry. Our method utilizes highly parallel stream processors (GPUs) to compute visually pleasing ambient occlusion in real time. We show that our ambient occlusion approximation works on a wide variety of applications such as molecular data visualization, dynamic deformable animated models, and highly detailed geometry. Our algorithm is scalable and well-suited to current and upcoming graphics hardware.
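A rough sketch of the image-space, high-frequency half of the method: for each pixel, sample nearby depth-buffer texels and accumulate occlusion from samples that sit closer to the camera than the shaded point. The sampling pattern, falloff term, and CPU/numpy formulation are stand-in assumptions; the actual technique runs as GPU shader passes and adds the low-frequency, distant-occluder component.

```python
import numpy as np

def image_space_ao(depth, radius=3, samples=16, falloff=0.5):
    # depth: (h, w) depth buffer; returns a per-pixel ambient term in [0, 1].
    rng = np.random.default_rng(0)
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    ao = np.zeros_like(depth)
    for dy, dx in offsets:
        neighbor = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        dz = np.clip(depth - neighbor, 0.0, None)  # neighbor closer -> occludes
        ao += dz / (dz + falloff)                  # distance-attenuated occlusion
    return 1.0 - ao / samples                      # 1 = open, 0 = occluded

depth = np.random.rand(64, 64)                     # stand-in for a real depth buffer
print(image_space_ao(depth).mean())
```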
International Conference on Computer Graphics and Interactive Techniques | 2006
Okan Arikan
We present a lossy compression algorithm for large databases of motion capture data. We approximate short clips of motion using Bezier curves and clustered principal component analysis. This approximation has a smoothing effect on the motion. Contacts with the environment (such as foot strikes) have important detail that needs to be maintained. We compress these environmental contacts using a separate, JPEG-like compression algorithm and ensure that they are maintained during decompression. Our method can compress 6 hours 34 minutes of human motion capture from 1080 MB of data to 35.5 MB with little visible degradation. Compression and decompression are fast: our research implementation can decompress at about 1.2 milliseconds/frame, 7 times faster than real time (for animation at 120 frames per second). Our method also yields a smaller compressed representation for the same error, or a smaller error for the same compressed size.
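The curve-approximation step can be sketched as a least-squares cubic Bezier fit per degree of freedom, storing 4 control points instead of N frame samples. The clustered-PCA stage and the separate contact compression are omitted, and the clip size below is an arbitrary assumption.

```python
import numpy as np

def bezier_basis(n):
    # Cubic Bernstein basis evaluated at n uniformly spaced parameter values.
    t = np.linspace(0.0, 1.0, n)[:, None]
    return np.hstack([(1 - t)**3, 3 * (1 - t)**2 * t, 3 * (1 - t) * t**2, t**3])

def compress_clip(clip):
    # clip: (frames, dofs) -> (4, dofs) Bezier control points per DOF curve.
    ctrl, *_ = np.linalg.lstsq(bezier_basis(len(clip)), clip, rcond=None)
    return ctrl

def decompress_clip(ctrl, frames):
    return bezier_basis(frames) @ ctrl

clip = np.cumsum(np.random.default_rng(0).normal(0, 0.01, (32, 60)), axis=0)
ctrl = compress_clip(clip)                        # 32x60 samples -> 4x60 floats
err = np.abs(decompress_clip(ctrl, 32) - clip).max()
print(f"8:1 per-clip compression, max error {err:.4f}")
```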
Symposium on Computer Animation | 2005
Okan Arikan; David A. Forsyth; James F. O'Brien
We present an algorithm for animating characters being pushed by an external source such as a user or a game environment. We start with a collection of motions of a real person responding to being pushed. When a character is pushed, we synthesize new motions by picking a motion from the recorded collection and modifying it so that the character responds to the push from the desired direction and location on its body. Determining the deformation parameters that realistically modify a recorded response motion is difficult. Choosing the response motion that will look best when modified is also non-trivial, especially in real time. To estimate the envelope of deformation parameters that yield visually plausible modifications of a given motion, and to find the best motion to modify, we introduce an oracle. The oracle is trained using a set of synthesized response motions that are labeled by a user as good or bad. Once trained, the oracle can, in real time, estimate the visual quality of every motion in the collection and the deformation parameters required to serve a desired push. Our method performs better than a baseline algorithm that picks the closest response motion in configuration space, because our method can find visually plausible transitions that do not necessarily correspond to similar motions in terms of configuration. Our method can also start with a limited set of recorded motions and modify them so that they can serve different pushes on the upper body.
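The oracle can be sketched as any classifier trained on the user's good/bad labels and queried at run time. The logistic-regression learner, feature layout, and synthetic labels below are all stand-in assumptions; the point is only that a learned score over (motion, deformation-parameter) pairs replaces nearest-neighbor lookup in configuration space.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))   # features: motion descriptor + deformation params
y = (X[:, 0] - 0.5 * X[:, 3] > 0).astype(float)   # synthetic good/bad labels

w = np.zeros(8)
for _ in range(500):                # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

def oracle(features):
    # Estimated probability that this modified response motion looks plausible.
    return 1.0 / (1.0 + np.exp(-features @ w))

print(oracle(rng.standard_normal(8)))
```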
International Conference on Computer Graphics and Interactive Techniques | 2005
Okan Arikan; David A. Forsyth; James F. O'Brien
In this paper we present an approximate method for accelerating the final gathering step in a global illumination algorithm. Our method operates by decomposing the radiance field close to surfaces into separate far- and near-field components that can be approximated individually. By computing surface shading using these approximations, instead of directly querying the global illumination solution, we obtain rendering-time speedups on the order of 10x compared to previous acceleration methods. Our approximation schemes rely mainly on the assumptions that radiance due to distant objects will exhibit low spatial and angular variation, and that the visibility between a surface and nearby surfaces can be reasonably predicted by simple location- and orientation-based heuristics. Motivated by these assumptions, our far-field scheme uses scattered-data interpolation with spherical harmonics to represent spatial and angular variation, and our near-field scheme employs an aggressively simple visibility heuristic. For our test scenes, the errors introduced when our assumptions fail do not result in visually objectionable artifacts or easily noticeable deviation from a ground-truth solution. We also discuss how our near-field approximation can be used with standard local illumination algorithms to produce significantly improved images at only negligible additional cost.
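The far-field representation can be illustrated by projecting sampled distant radiance onto a low-order real spherical-harmonic basis and reconstructing a smooth angular approximation. Bands 0-1 (four coefficients) and the toy "light from above" radiance are assumptions made for brevity; the paper combines this with scattered-data interpolation over space.

```python
import numpy as np

def sh_basis(dirs):
    # Real spherical harmonics, bands 0 and 1, for unit directions (n, 3).
    x, y, z = dirs.T
    return np.stack([np.full_like(x, 0.282095),
                     0.488603 * y, 0.488603 * z, 0.488603 * x], axis=1)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(4096, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform sphere samples
radiance = np.clip(dirs[:, 2], 0.0, None)            # toy far field: light from above

# Monte Carlo projection: c_i = (4*pi / N) * sum_j radiance(w_j) * Y_i(w_j).
coeffs = (4 * np.pi / len(dirs)) * sh_basis(dirs).T @ radiance

def far_field(d):
    # Smooth reconstruction queried during shading instead of the full GI solution.
    return sh_basis(np.atleast_2d(d)) @ coeffs

print(far_field(np.array([0.0, 0.0, 1.0])))
```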
Interactive 3D Graphics and Games | 2006
Leslie Ikemoto; Okan Arikan; David A. Forsyth
Footskate, where a character's foot slides on the ground when it should be planted firmly, is a common artifact resulting from almost any attempt to modify motion capture data. We describe an online method for fixing footskate that requires no manual clean-up. An important part of fixing footskate is determining when the feet should be planted. We introduce an oracle that can automatically detect when foot plants should occur. Our method is more accurate than baseline methods that check the height or speed of the feet. These baseline methods perform especially poorly on noisy or imperfect data, requiring manual fixing. Once trained, our oracle is robust and can be used without manual clean-up, making it suitable for large databases of motion. After the foot plants are detected, we use an off-the-shelf inverse-kinematics-based method to maintain ground contact during each foot plant. Our foot plant detection mechanism coupled with an IK-based fixer can be treated as a black box that produces natural-looking motion of the feet, making it suitable for interactive systems. We demonstrate several applications that would produce unrealistic motion without our method.
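The contrast with height/speed thresholds can be sketched by training a tiny classifier on a window of foot features instead of thresholding a single quantity. The perceptron learner, the three window features, and the synthetic foot-height track below are illustrative assumptions, not the paper's oracle.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(heights, t, w=2):
    # Mean height, mean speed, and height range over a short window, plus bias.
    win = heights[t - w:t + w + 1]
    return np.array([win.mean(), np.abs(np.diff(win)).mean(),
                     win.max() - win.min(), 1.0])

# Synthetic track: 200 planted frames (low, still), then 200 swinging frames.
heights = np.concatenate([rng.normal(0.02, 0.01, 200),
                          np.abs(np.sin(np.linspace(0, 6, 200))) * 0.3
                          + rng.normal(0, 0.02, 200)])
labels = np.array([1] * 200 + [0] * 200)

wts = np.zeros(4)
idx = rng.permutation(np.arange(2, len(heights) - 2))
for t in idx:                                   # one perceptron training pass
    x = features(heights, t)
    if (x @ wts > 0) != labels[t]:
        wts += (2 * labels[t] - 1) * x

acc = np.mean([(features(heights, t) @ wts > 0) == labels[t] for t in idx])
print(f"training accuracy: {acc:.2f}")
```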
Interactive 3D Graphics and Games | 2007
Leslie Ikemoto; Okan Arikan; David A. Forsyth
We describe a discriminative method for distinguishing natural-looking from unnatural-looking motion. Our method is based on physical and data-driven features of motion to which humans seem sensitive. We demonstrate that our technique is significantly more accurate than current alternatives. We use this technique as the testing part of a hypothesize-and-test motion synthesis procedure. The mechanism we build using this procedure can quickly provide an application with a transition of user-specified duration from any frame in a motion collection to any other frame in the collection. During pre-processing, we search all possible 2-, 3-, and 4-way blends between representative samples of motion obtained using clustering. The blends are automatically evaluated, and the recipe (i.e., the representatives and the set of weighting functions) that created the best blend is cached. At run time, we build a transition between motions by matching a future window of the source motion to a representative, matching the past of the target motion to a representative, and then applying the blend recipe recovered from the cache to the source and target motion. People seem sensitive to poor contact with the environment, such as sliding foot plants. We determine appropriate temporal and positional constraints for each foot plant using a novel technique, then apply an off-the-shelf inverse kinematics technique to enforce the constraints. This synthesis procedure yields good-looking transitions between distinct motions at very low online cost.
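The recipe cache can be sketched with two candidate weighting functions scored by a toy smoothness measure standing in for the learned naturalness classifier; the clip dimensions and scoring below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20
t = np.linspace(0.0, 1.0, T)
recipes = {"linear": t, "ease": 3 * t**2 - 2 * t**3}   # candidate weighting functions

def blend(src_tail, tgt_head, w):
    # Weighted interpolation from the end of the source to the start of the target.
    return (1 - w)[:, None] * src_tail + w[:, None] * tgt_head

def roughness(motion):
    # Toy stand-in for the naturalness classifier: total second-difference energy.
    return np.abs(np.diff(motion, n=2, axis=0)).sum()

src = np.cumsum(rng.normal(0, 0.05, (T, 10)), axis=0)        # source representative
tgt = np.cumsum(rng.normal(0, 0.05, (T, 10)), axis=0) + 0.5  # target representative

best = min(recipes, key=lambda r: roughness(blend(src, tgt, recipes[r])))
print("cached recipe for this pair:", best)   # reused at run time for matched pairs
```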
International Conference on Computer Graphics and Interactive Techniques | 2005
Leslie Ikemoto; Okan Arikan; David A. Forsyth
Figure 1: This sequence was recorded from a live, interactive demo in which the user controls the crate and can move it anywhere at any time. A virtual agent is tasked with traveling from the left of the scene to the target on the right. While the agent is running toward the target, the user moves the crate into the position shown on the left, blocking the agent's intended path. The local motion planner selects frames that avoid hitting the object but still make progress toward the goal (A). The user again moves the object into the agent's path, and again the system successfully copes (B). Altogether, the user tries to block the agent by moving the crate three times, but the agent still dodges it and arrives at the target position seamlessly.
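A minimal sketch of the planning loop the caption describes: at every step, score each candidate next clip by its progress toward the goal minus a penalty for passing near the movable obstacle, and take the best. The 2-D point agent, fixed clip set, and scoring weights are toy assumptions standing in for frame selection over real motion data.

```python
import numpy as np

clips = np.array([[1.0, 0.0], [0.7, 0.7], [0.7, -0.7], [0.0, 1.0], [0.0, -1.0]])

def plan(pos, goal, obstacle, steps=40, clearance=1.0):
    path = [pos.copy()]
    for _ in range(steps):
        best, best_score = pos, -np.inf
        for step in clips:                     # candidate next motion frames
            nxt = pos + step
            progress = np.linalg.norm(goal - pos) - np.linalg.norm(goal - nxt)
            penalty = 5.0 * max(0.0, clearance - np.linalg.norm(nxt - obstacle))
            if progress - penalty > best_score:
                best, best_score = nxt, progress - penalty
        pos = best
        path.append(pos.copy())
        if np.linalg.norm(goal - pos) < 1.0:   # reached the target
            break
    return np.array(path)

path = plan(np.array([0.0, 0.0]), np.array([20.0, 0.0]), np.array([10.0, 0.0]))
print(len(path), "frames; final position:", path[-1])
```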