Publications


Featured research published by Russell Turner.


Proceedings of the National Academy of Sciences of the United States of America | 2004

Whole-genome shotgun assembly and comparison of human genome assemblies

Sorin Istrail; Granger Sutton; Liliana Florea; Aaron L. Halpern; Clark M. Mobarry; Ross A. Lippert; Brian Walenz; Hagit Shatkay; Ian M. Dew; Jason R. Miller; Michael Flanigan; Nathan Edwards; Randall Bolanos; Daniel Fasulo; Bjarni V. Halldórsson; Sridhar Hannenhalli; Russell Turner; Shibu Yooseph; Fu Lu; Deborah Nusskern; Bixiong Shue; Xiangqun Holly Zheng; Fei Zhong; Arthur L. Delcher; Daniel H. Huson; Saul Kravitz; Laurent Mouchard; Knut Reinert; Karin A. Remington; Andrew G. Clark

We report a whole-genome shotgun assembly (called WGSA) of the human genome generated at Celera in 2001. The Celera-generated shotgun data set consisted of 27 million sequencing reads organized in pairs by virtue of end-sequencing 2-kbp, 10-kbp, and 50-kbp inserts from shotgun clone libraries. The quality-trimmed reads covered the genome 5.3 times, and the inserts from which pairs of reads were obtained covered the genome 39 times. With the nearly complete human DNA sequence [National Center for Biotechnology Information (NCBI) Build 34] now available, it is possible to directly assess the quality, accuracy, and completeness of WGSA and of the first reconstructions of the human genome reported in two landmark papers in February 2001 [Venter, J. C., Adams, M. D., Myers, E. W., Li, P. W., Mural, R. J., Sutton, G. G., Smith, H. O., Yandell, M., Evans, C. A., Holt, R. A., et al. (2001) Science 291, 1304–1351; International Human Genome Sequencing Consortium (2001) Nature 409, 860–921]. The analysis of WGSA shows 97% order and orientation agreement with NCBI Build 34, where most of the 3% of sequence out of order is due to scaffold placement problems as opposed to assembly errors within the scaffolds themselves. In addition, WGSA fills some of the remaining gaps in NCBI Build 34. The early genome sequences all covered about the same amount of the genome, but they did so in different ways. The Celera results provide more order and orientation, and the consortium sequence provides better coverage of exact and nearly exact repeats.
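As a quick sanity check on the coverage figures quoted above, the short sketch below derives the implied mean trimmed read length and mean insert length from the numbers in the abstract. The genome size of roughly 2.9 Gbp is an assumption not stated in the abstract.

```python
# Back-of-the-envelope check of the WGSA coverage figures quoted above.
# Assumption: a haploid human genome size of roughly 2.9 Gbp (not given in the abstract).

GENOME_SIZE_BP = 2.9e9      # assumed genome size
NUM_READS = 27e6            # sequencing reads (from the abstract)
SEQUENCE_COVERAGE = 5.3     # quality-trimmed read coverage (from the abstract)
CLONE_COVERAGE = 39         # insert ("clone") coverage (from the abstract)

# Implied average trimmed read length: coverage * genome size / number of reads.
avg_read_len = SEQUENCE_COVERAGE * GENOME_SIZE_BP / NUM_READS
print(f"implied mean trimmed read length: {avg_read_len:.0f} bp")    # ~570 bp

# Implied average insert length, treating each read pair as spanning one insert.
num_pairs = NUM_READS / 2
avg_insert_len = CLONE_COVERAGE * GENOME_SIZE_BP / num_pairs
print(f"implied mean insert length: {avg_insert_len / 1000:.1f} kbp")  # ~8.4 kbp
```

The ~8.4 kbp mean insert length is consistent with a library mix dominated by the 2-kbp and 10-kbp clones mentioned in the abstract.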


Journal of Visualization and Computer Animation | 1991

Rendering hair using pixel blending and shadow buffers

André M. LeBlanc; Russell Turner; Daniel Thalmann

A technique is described for adding natural-looking hair to standard rendering algorithms. Using an explicit hair model, in which each individual hair is represented by a three-dimensional curve, the technique uses pixel blending combined with Z-buffer and shadow buffer information from the scene to yield a final anti-aliased image with soft shadows. Although developed for rendering human hair, this technique can also be used to render any model consisting of long filaments of sub-pixel width. The technique can be adapted to any rendering method that outputs Z-buffer and shadow buffer information and is amenable to hardware implementation.
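The minimal sketch below illustrates the kind of per-fragment compositing the abstract describes: a hair fragment is depth-tested against the scene Z-buffer, attenuated using a shadow (depth) buffer, and alpha-blended into the frame buffer using its sub-pixel coverage. It is an illustration under simplifying assumptions (for brevity the shadow buffer is indexed in screen space; a real implementation would project the fragment into light space), not the paper's exact algorithm.

```python
import numpy as np

def shade_hair_fragment(frame, zbuf, shadow_buf, x, y,
                        frag_depth, frag_light_depth, frag_color, coverage,
                        shadow_softness=0.05):
    """Blend one anti-aliased hair fragment into pixel (x, y) of an already-rendered scene."""
    # Hidden-surface test against the scene Z-buffer (smaller depth = closer).
    if frag_depth > zbuf[y, x]:
        return

    # Soft shadow test: compare the fragment's depth from the light with the stored
    # shadow-buffer depth; the linear ramp stands in for a filtered shadow-buffer lookup.
    occluder_depth = shadow_buf[y, x]          # simplification: indexed in screen space here
    lit = np.clip((occluder_depth + shadow_softness - frag_light_depth) / shadow_softness,
                  0.0, 1.0)

    # Pixel blending: the hair's sub-pixel coverage acts as the alpha value.
    alpha = coverage
    frame[y, x] = alpha * lit * np.asarray(frag_color) + (1.0 - alpha) * frame[y, x]
```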


Proc. Communicating with Virtual Worlds | 1993

The Elastic Surface Layer Model for Animated Character Construction

Russell Turner; Daniel Thalmann

A model is described for creating three-dimensional animated characters. In this new type of layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a traditional kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers constrained by reaction forces which push the surface out and spring forces which pull the surface in to the underlying layers. By tuning the parameters of the physically-based model, a variety of surface shapes and behaviors can be obtained such as more realistic-looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash-and-stretch and follow-through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. A reasonably complex character at low surface resolution can be simulated at interactive speeds so that an animator can both design the character and animate it in a completely interactive, direct-manipulation environment. Once a motion sequence has been specified, the entire simulation can be recalculated at a higher surface resolution for better visual results. An implementation on a Silicon Graphics Iris workstation is described.
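A toy per-vertex version of the force balance described above is sketched below: a spring pulls each skin vertex toward its closest point on the underlying layer, while a reaction force pushes it back out when it penetrates. The parameter values and the explicit integration step are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def step_skin_vertex(skin_pos, skin_vel, surface_point, surface_normal,
                     k_spring=50.0, k_reaction=500.0, damping=5.0, mass=1.0, dt=0.01):
    """Advance one skin vertex one time step against its closest point on the underlying layer."""
    to_surface = surface_point - skin_pos
    force = k_spring * to_surface                         # spring force pulling the skin in
    penetration = np.dot(to_surface, surface_normal)      # > 0 when the vertex is below the surface
    if penetration > 0.0:
        force += k_reaction * penetration * surface_normal  # reaction force pushing the skin out
    force -= damping * skin_vel                           # viscous damping for stability
    skin_vel = skin_vel + dt * force / mass               # semi-implicit Euler step
    skin_pos = skin_pos + dt * skin_vel
    return skin_pos, skin_vel

# Example: a vertex starting 0.05 units below the surface settles back onto it.
pos, vel = np.array([0.0, -0.05, 0.0]), np.zeros(3)
for _ in range(300):
    pos, vel = step_skin_vertex(pos, vel, np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(pos)
```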


Computer Graphics Forum | 1998

Interactive Construction and Animation of Layered Elastically Deformable Characters

Russell Turner; Enrico Gobbetti

An interactive system is described for creating and animating deformable 3D characters. By using a hybrid layered model of kinematic and physics‐based components together with an immersive 3D direct manipulation interface, it is possible to quickly construct characters that deform naturally when animated and whose behavior can be controlled interactively using intuitive parameters. In this layered construction technique, called the elastic surface layer model, a simulated elastically deformable skin surface is wrapped around a kinematic articulated figure. Unlike previous layered models, the skin is free to slide along the underlying surface layers constrained by geometric constraints which push the surface out and spring forces which pull the surface in to the underlying layers. By tuning the parameters of the physics‐based model, a variety of surface shapes and behaviors can be obtained such as more realistic‐looking skin deformation at the joints, skin sliding over muscles, and dynamic effects such as squash‐and‐stretch and follow‐through. Since the elastic model derives all of its input forces from the underlying articulated figure, the animator may specify all of the physical properties of the character once, during the initial character design process, after which a complete animation sequence can be created using a traditional skeleton animation technique. Character construction and animation are done using a 3D user interface based on two‐handed manipulation registered with head‐tracked stereo viewing. In our configuration, a six degree‐of‐freedom head‐tracker and CrystalEyes shutter glasses are used to display stereo images on a workstation monitor that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. Hand‐eye coordination is made possible by registering virtual space to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques.
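As a small illustration of the two-handed split described above, the sketch below treats left-hand Spaceball input as relative motion integrated into the viewpoint each frame, and right-hand 3D-mouse input as an absolute pose mapped directly to the virtual tool. The gains, the yaw-only rotation, and the offset are assumptions for illustration, not the system's actual device handling.

```python
import numpy as np

def apply_spaceball(view_pos, view_yaw, sb_translation, sb_twist, dt,
                    translate_gain=0.5, rotate_gain=1.0):
    """Left hand (Spaceball): incremental input, integrated into the viewpoint each frame."""
    view_pos = np.asarray(view_pos, dtype=float) + translate_gain * dt * np.asarray(sb_translation)
    view_yaw = view_yaw + rotate_gain * dt * sb_twist       # yaw only, for brevity
    return view_pos, view_yaw

def place_virtual_tool(mouse_pos, tracker_to_world_offset):
    """Right hand (3D mouse): absolute input, its tracked position maps directly to the tool."""
    return np.asarray(mouse_pos, dtype=float) + np.asarray(tracker_to_world_offset)

# One frame of interaction with both hands active.
view_pos, view_yaw = apply_spaceball((0, 0, 2), 0.0,
                                     sb_translation=(0.1, 0, 0), sb_twist=0.2, dt=1 / 60)
tool_pos = place_virtual_tool(mouse_pos=(0.15, 0.05, 0.40), tracker_to_world_offset=(0, 0, -0.3))
print(view_pos, view_yaw, tool_pos)
```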


Computer Graphics International | 1991

Physically-based interactive camera motion control using 3D input devices

Russell Turner; Francis Balaguer; Enrico Gobbetti; Daniel Thalmann

The newest three-dimensional input devices, together with high-speed graphics workstations, make it possible to interactively specify virtual camera motions for animation in real time. The authors describe how naturalistic interaction and realistic-looking motion can be achieved by using a physically-based model of the camera's behavior. The approach is to create an abstract physical model of the camera, using the laws of classical mechanics, which is used to simulate the virtual camera motion in real time in response to force data from the various 3D input devices (e.g. the Spaceball, Polhemus and DataGlove). The behavior of the model is determined by several physical parameters such as mass, moment of inertia, and various friction coefficients, which can all be varied interactively, and by constraints on the camera's degrees of freedom, which can be simulated by setting certain friction parameters to very high values. This allows one to explore a continuous range of physically-based metaphors for controlling the camera motion. They present the results of experiments with several of these metaphors and contrast them with existing ones.
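A toy version of such a physically-based camera model is sketched below: the camera is treated as a rigid body with mass and isotropic inertia, device input is interpreted as force and torque, friction damps the motion, and setting one friction coefficient very high effectively locks that degree of freedom. The constants, the Euler-angle orientation, and the integration scheme are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def step_camera(pos, vel, euler, ang_vel, force, torque,
                mass=1.0, inertia=1.0,
                lin_friction=np.array([0.5, 0.5, 0.5]),
                ang_friction=np.array([0.5, 0.5, 1e6]),   # very high roll friction: roll is locked
                dt=1 / 60):
    """Advance the camera one frame from 3D-device force/torque input (orientation as Euler angles)."""
    # Friction is handled implicitly so that very large coefficients remain numerically stable.
    vel = (vel + dt * np.asarray(force) / mass) / (1.0 + dt * lin_friction / mass)
    ang_vel = (ang_vel + dt * np.asarray(torque) / inertia) / (1.0 + dt * ang_friction / inertia)
    pos = pos + dt * vel                       # translation
    euler = euler + dt * ang_vel               # small-angle orientation update
    return pos, vel, euler, ang_vel

# Example: a steady push forward with a slight pan; the roll torque has almost no effect.
pos, vel, euler, ang_vel = np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(60):
    pos, vel, euler, ang_vel = step_camera(pos, vel, euler, ang_vel,
                                           force=(0, 0, -1), torque=(0, 0.2, 0.2))
print(pos, euler)
```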


International Symposium on Visual Computing | 1992

Animation based on the interaction of L-systems with vector force fields

Hansrudi Noser; Daniel Thalmann; Russell Turner

This paper discusses the use of rewriting systems for animation purposes. In particular, it describes the design of timed parameterized L-systems with conditional and pseudo-stochastic productions. It proposes a formulation to integrate the various features of L-systems into a unique L-system. It also introduces into the symbolism a way of using vector force fields to simulate interaction with the environment. The implementation, based on an object-oriented methodology, is described and animation examples are presented.
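The small, self-contained example below is in the spirit of the formalism described above: each module carries a numeric parameter, a production fires only when its condition holds (here, when a segment is old enough), and the turtle interpretation bends each drawn segment toward an external vector force field. The grammar, constants, and field are invented for illustration and are not the paper's notation.

```python
import math

def rewrite(word, steps):
    """F(a): a segment of age a.  Production: F(a) -> F(a+1) [ F(0) ] F(0) when a >= 2, else F(a) -> F(a+1)."""
    for _ in range(steps):
        out = []
        for symbol, age in word:
            if symbol == "F" and age >= 2:                   # conditional production
                out += [("F", age + 1), ("[", 0), ("F", 0), ("]", 0), ("F", 0)]
            elif symbol == "F":                              # otherwise the segment just ages
                out += [("F", age + 1)]
            else:
                out += [(symbol, age)]
        word = out
    return word

def interpret(word, wind=(0.3, 0.0), step_len=1.0, branch_angle=math.radians(30)):
    """Turtle interpretation in 2D: every F draws a segment whose direction is bent by the wind field."""
    x, y, heading = 0.0, 0.0, math.pi / 2                    # start at the origin, pointing up
    stack, segments = [], []
    for symbol, _ in word:
        if symbol == "F":
            dx = math.cos(heading) + wind[0]                 # the force field perturbs the direction
            dy = math.sin(heading) + wind[1]
            norm = math.hypot(dx, dy)
            nx, ny = x + step_len * dx / norm, y + step_len * dy / norm
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif symbol == "[":                                  # start a side branch, turned by branch_angle
            stack.append((x, y, heading))
            heading += branch_angle
        elif symbol == "]":                                  # end the branch, restore the saved state
            x, y, heading = stack.pop()
    return segments

print(len(interpret(rewrite([("F", 0)], 6))))                # segments drawn after 6 rewriting steps
```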


Computer Graphics | 1995

LEMAN: a system for constructing and animating layered elastic characters

Russell Turner

This chapter describes LEMAN, a system for constructing and animating layered elastic characters. Layered construction techniques which model anatomical features have shown promise in creating character models that deform automatically around an articulated skeleton. But purely geometric models, although they can be very expressive, usually require too much user intervention to achieve realistic-looking results. A hybrid approach in which layered models are constructed using a combination of geometric, kinematic and physically based techniques is the most promising one. The ideal 3D character model should provide a good compromise between interactive speed and realism, and between animator control and physically realistic behavior. The exact details of such a model are no more important, however, than the types of interactive techniques used to construct and animate it. High-performance 3D graphics workstations and a variety of multidimensional input devices have begun to make highly interactive, direct-manipulation environments practical. This chapter describes the LEMAN system, originally developed at the Computer Graphics Lab of the Swiss Federal Institute of Technology, which can be used to construct and animate 3D characters based on the elastic surface layer model in such an interactive, direct-manipulation environment.


Proceedings of the IEEE 2001 Symposium on Parallel and Large-Data Visualization and Graphics | 2001

Visualization challenges for a new cyber-pharmaceutical computing paradigm

Russell Turner; Kabir Chaturvedi; Nathan Edwards; Daniel Fasulo; Aaron L. Halpern; Daniel H. Huson; Oliver Kohlbacher; Jason R. Miller; Knut Reinert; Karin A. Remington; Russell Schwartz; Brian Walenz; Shibu Yooseph; Sorin Istrail

Celera has encountered a number of visualization problems in the course of developing tools for bioinformatics research, applying them to our data generation efforts, and making that data available to our customers. This paper presents several examples from Celera's experience. In the area of genomics, challenging visualization problems have come up in assembling genomes, studying variations between individuals, and comparing different genomes to one another. The emerging area of proteomics has created new visualization challenges in interpreting protein expression data, studying protein regulatory networks, and examining protein structure. These examples illustrate how the field of bioinformatics is posing new challenges concerning the communication of data that are often very different from those that have heretofore dominated scientific computing. Addressing the level of detail, the degree of complexity, and the interdisciplinary barriers that characterize bioinformatic problems can be expected to be a sizable but rewarding task for the field of scientific visualization.


Eurographics | 1999

Metis: An Object-Oriented Toolkit for Constructing Virtual Reality Applications

Russell Turner; Song Li; Enrico Gobbetti

Virtual reality systems provide realistic look and feel by seamlessly integrating three‐dimensional input and output devices. One software architecture approach to constructing such systems is to distribute the application between a computation‐intensive simulator back‐end and a graphics‐intensive viewer front‐end which implements user interaction. In this paper we discuss Metis, a toolkit we have been developing based on such a software architecture, which can be used for building interactive immersive virtual reality systems with computationally intensive components. The Metis toolkit defines an application programming interface on the simulator side, which communicates via a network with a standalone viewer program that handles all immersive display and interactivity. Network bandwidth and interaction latency are minimized by the use of a constraint network on the viewer side that declaratively defines much of the dynamic and interactive behavior of the application.
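The sketch below illustrates the idea behind the viewer-side constraint network as we read it from the abstract: the simulator streams only a few time-varying values, and one-way constraints on the viewer derive dependent scene state locally, so per-frame interaction avoids a network round trip. The tiny dataflow engine and the example constraint are assumptions for illustration, not the Metis API.

```python
class ConstraintNetwork:
    """Minimal one-way dataflow network standing in for the viewer-side constraint network."""

    def __init__(self):
        self.values = {}        # name -> current value
        self.constraints = []   # (output name, input names, function)

    def set(self, name, value):
        """Called when a network update from the simulator (or local user input) arrives."""
        self.values[name] = value
        self._propagate()

    def constrain(self, output, inputs, fn):
        """Declare that `output` is always computed from `inputs` by `fn`."""
        self.constraints.append((output, inputs, fn))
        self._propagate()

    def _propagate(self):
        for output, inputs, fn in self.constraints:
            if all(i in self.values for i in inputs):
                self.values[output] = fn(*(self.values[i] for i in inputs))

net = ConstraintNetwork()
# The simulator only streams the joint angle; the viewer derives the hand position itself.
net.constrain("hand_pos", ["elbow_angle"], lambda a: (0.3 * a, 1.0))
net.set("elbow_angle", 0.5)          # one incoming network update...
print(net.values["hand_pos"])        # ...updates the dependent viewer state locally
```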


Computer Graphics Forum | 1996

Head‐Tracked Stereo Viewing with Two‐Handed 3D Interaction for Animated Character Construction

Russell Turner; Enrico Gobbetti; Ian Soboroff

In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two‐handed 3D direct manipulation registered with head‐tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six degree‐of‐freedom head‐tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand‐eye coordination is made possible by registering virtual space exactly to physical space, allowing a variety of complex 3D tasks necessary for constructing 3D animated characters to be performed more easily and more rapidly than is possible using traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.
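Registering virtual space to physical space as described above typically amounts to computing an off-axis (asymmetric) viewing frustum per eye from the tracked head position, with the physical monitor treated as a fixed window into the virtual scene. The sketch below shows that computation for a screen centered at the origin and parallel to the XY plane; the screen size, head position, and eye separation are illustrative assumptions, not values from the paper.

```python
def off_axis_frustum(eye, screen_center, screen_w, screen_h, near, far):
    """Frustum (l, r, b, t, n, f) for an eye at `eye`, viewing a screen parallel to the XY plane."""
    ex, ey, ez = eye
    cx, cy, cz = screen_center
    dist = ez - cz                      # eye-to-screen distance along the view axis
    scale = near / dist                 # project the screen edges onto the near plane
    left = (cx - screen_w / 2 - ex) * scale
    right = (cx + screen_w / 2 - ex) * scale
    bottom = (cy - screen_h / 2 - ey) * scale
    top = (cy + screen_h / 2 - ey) * scale
    return left, right, bottom, top, near, far

# One frustum per eye: offset the tracked head position by half the interocular distance.
head = (0.02, 0.05, 0.60)               # tracked head position in metres (assumed)
iod = 0.065                             # interocular distance (assumed)
for dx in (-iod / 2, +iod / 2):
    eye = (head[0] + dx, head[1], head[2])
    print(off_axis_frustum(eye, (0, 0, 0), 0.40, 0.30, near=0.1, far=10.0))
```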

Collaboration


Dive into Russell Turner's collaborations.

Top Co-Authors

Enrico Gobbetti | CRS4 (Center for Advanced Studies, Research and Development in Sardinia)

Daniel Thalmann | École Polytechnique Fédérale de Lausanne

Brian Walenz | J. Craig Venter Institute

Jason R. Miller | J. Craig Venter Institute

Angelo Mangili | École Polytechnique Fédérale de Lausanne

Shibu Yooseph | J. Craig Venter Institute