Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jonathan Starck is active.

Publication


Featured research published by Jonathan Starck.


IEEE Computer Graphics and Applications | 2007

Surface Capture for Performance-Based Animation

Jonathan Starck; Adrian Hilton

Creating realistic animated models of people is a central task in digital content production. Traditionally, highly skilled artists and animators construct shape and appearance models for digital characters. They then define the character's motion at each time frame, or at specific key-frames in a motion sequence, to create a digital performance. Increasingly, producers are using motion capture technology to record animations from an actor's performance. This technology reduces animation production time and captures natural movements to create a more believable production. However, motion capture requires specialist suits and markers and only records skeletal motion. It lacks the detailed secondary surface dynamics of cloth and hair that provide the visual realism of a live performance. Over the last decade, we have investigated studio capture technology with the objective of creating models of real people that accurately reflect the time-varying shape and appearance of the whole body with clothing. Surface capture is a fully automated system for capturing a human's shape, appearance, and motion from multiple video cameras to create highly realistic animated content from an actor's performance in full wardrobe. Our system solves two key problems in performance capture: scene capture from a limited number of camera views and efficient scene representation for visualization.


International Conference on Computer Vision | 2007

Correspondence labelling for wide-timeframe free-form surface matching

Jonathan Starck; Adrian Hilton

This paper addresses the problem of estimating dense correspondence between arbitrary frames from captured sequences of shape and appearance for surfaces undergoing free-form deformation. Previous techniques require either a prior model, limiting the range of surface deformations, or frame-to-frame surface tracking, which suffers from stabilisation problems over complete motion sequences and does not provide correspondence between sequences. The primary contribution of this paper is a system for wide-timeframe surface matching without the requirement for a prior model or tracking. Deformation-invariant surface matching is formulated as a locally isometric mapping at a discrete set of surface points. Feature descriptors are presented that are invariant to isometric deformations, and a novel MAP-MRF framework is introduced to label sparse-to-dense surface correspondence, preserving the relative distribution of surface features while allowing for changes in surface topology. Performance is evaluated on challenging data from a moving person with loose clothing. Ground-truth feature correspondences are manually marked and the recall-accuracy characteristic of matching is quantified. Results demonstrate improved performance compared to non-rigid point-pattern matching using robust matching and graph matching using relaxation labelling, with successful matching achieved across wide variations in human body pose and surface topology.
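The role of isometry-invariant descriptors can be sketched with geodesic distance histograms: because geodesics depend only on surface (edge) lengths, such a descriptor is unchanged by bending. The toy version below, over an edge-length matrix, is illustrative only; it is not the descriptor set used in the paper, and all names and parameters are assumptions.

```python
import numpy as np

def geodesic_descriptor(dist_matrix, source, bins=4, r_max=3.0):
    """Histogram of geodesic distances from `source` to every vertex,
    computed with Floyd-Warshall over edge lengths (np.inf marks
    non-edges). Because geodesics depend only on lengths, the descriptor
    is invariant to isometric (bending, non-stretching) deformation."""
    d = dist_matrix.astype(float).copy()
    n = d.shape[0]
    for k in range(n):  # Floyd-Warshall relaxation through vertex k
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    hist, _ = np.histogram(d[source], bins=bins, range=(0.0, r_max))
    return hist / hist.sum()
```

Since any isometric deformation of the surface leaves the edge-length matrix unchanged, two poses of the same surface yield identical descriptors by construction, which is the property the matching framework exploits.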


International Journal of Computer Vision | 2010

Shape Similarity for 3D Video Sequences of People

Peng Huang; Adrian Hilton; Jonathan Starck

This paper presents a performance evaluation of shape similarity metrics for 3D video sequences of people with unknown temporal correspondence. Performance of similarity measures is compared by evaluating Receiver Operating Characteristics for classification against ground truth for a comprehensive database of synthetic 3D video sequences comprising animations of fourteen people performing twenty-eight motions. Static shape similarity metrics (shape distribution, spin image, shape histogram, and spherical harmonics) are evaluated using optimal parameter settings for each approach. Shape histograms with volume sampling are found to consistently give the best performance for different people and motions. Static shape similarity is extended over time to eliminate temporal ambiguity. Time-filtering of the static shape similarity, together with two novel shape-flow descriptors, is evaluated against temporal ground truth. This evaluation demonstrates that shape-flow with a multi-frame alignment of motion sequences achieves the best performance, is stable for different people and motions, and overcomes the ambiguity in static shape similarity. Time-filtering of the static shape histogram similarity measure with a fixed window size achieves marginally lower performance for linear motions at the same computational cost as static shape descriptors. Performance of the temporal shape descriptors is validated on real 3D video sequences of nine actors performing a variety of movements. Time-filtered shape histograms are shown to reliably identify frames from 3D video sequences with similar shape and motion for people with loose clothing and complex motion.
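As a rough illustration of a static shape descriptor and its temporal filtering, the sketch below builds a radial shape histogram (a simplified stand-in for the paper's volume-sampled shape histogram) and averages similarity over a fixed window. The function names and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shape_histogram(points, bins=16):
    """Radial shape histogram: distances from the centroid, normalised
    so the descriptor is invariant to translation and overall scale."""
    centred = points - points.mean(axis=0)
    radii = np.linalg.norm(centred, axis=1)
    radii = radii / radii.max()                      # scale-normalise
    hist, _ = np.histogram(radii, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()                         # probability histogram

def static_similarity(points_a, points_b, bins=16):
    """Similarity in [0, 1] from the L1 distance between histograms."""
    ha = shape_histogram(points_a, bins)
    hb = shape_histogram(points_b, bins)
    return 1.0 - 0.5 * np.abs(ha - hb).sum()

def time_filtered_similarity(seq_a, seq_b, window=3):
    """Average static similarity over a fixed temporal window, comparing
    corresponding frames of two sequences -- the fixed-window filtering
    idea evaluated in the paper, in miniature."""
    pairs = zip(seq_a[:window], seq_b[:window])
    return float(np.mean([static_similarity(a, b) for a, b in pairs]))
```

Identical surfaces score exactly 1.0, while shapes with different radial mass distributions score lower; the temporal average suppresses accidental single-frame matches between different motions.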


Symposium on Computer Animation | 2005

Video-based character animation

Jonathan Starck; Gregor Miller; Adrian Hilton

In this paper we introduce a video-based representation for free viewpoint visualization and motion control of 3D character models created from multiple view video sequences of real people. Previous approaches to video-based rendering provide no control of scene dynamics to manipulate, retarget, and create new 3D content from captured scenes. Here we contribute a new approach, combining image-based reconstruction and video-based animation to allow controlled animation of people from captured multiple view video sequences. We represent a character as a motion graph of free viewpoint video motions for animation control. We introduce the use of geometry videos to represent reconstructed scenes of people for free viewpoint video rendering. We describe a novel spherical matching algorithm to derive global surface-to-surface correspondence in spherical geometry images for motion blending and the construction of seamless transitions between motion sequences. Finally, we demonstrate interactive video-based character animation with real-time rendering and free viewpoint visualization. This approach synthesizes highly realistic character animations with dynamic surface shape and appearance captured from multiple view video of people.


International Conference on Computer Vision | 2005

Spherical matching for temporal correspondence of non-rigid surfaces

Jonathan Starck; Adrian Hilton

This paper introduces spherical matching to estimate dense temporal correspondence of non-rigid surfaces with genus-zero topology. The spherical domain gives a consistent 2D parameterization of non-rigid surfaces for matching. Non-rigid 3D surface correspondence is formulated as the recovery of a bijective mapping between two surfaces in the 2D domain. Formulating matching as a 2D bijection guarantees a continuous one-to-one surface correspondence without overfolding. This overcomes limitations of direct estimation of non-rigid surface correspondence in the 3D domain. A multiple-resolution coarse-to-fine algorithm is introduced to robustly estimate the dense correspondence that minimizes the disparity in shape and appearance between two surfaces. Spherical matching is applied to derive the temporal correspondence between non-rigid surfaces reconstructed at successive frames from multiple view video sequences of people. Dense surface correspondence is recovered across complete motion sequences for both textured and uniform regions, without the requirement for a prior model of human shape or kinematic structure for tracking.
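The mapping idea can be shown with a deliberately simplified sketch: radially projecting two (roughly star-shaped) genus-zero surfaces onto the unit sphere about their centroids and matching points by spherical proximity. The paper's coarse-to-fine optimisation of a bijective mapping is far more robust; this is only a minimal stand-in with assumed function names.

```python
import numpy as np

def project_to_sphere(points):
    """Radially project a (roughly star-shaped) genus-zero surface onto
    the unit sphere about its centroid -- a crude stand-in for a full
    spherical parameterisation."""
    centred = points - points.mean(axis=0)
    return centred / np.linalg.norm(centred, axis=1, keepdims=True)

def spherical_correspondence(points_a, points_b):
    """For each point of surface A, pick the point of surface B whose
    spherical image is closest (largest dot product on the sphere)."""
    sa = project_to_sphere(points_a)
    sb = project_to_sphere(points_b)
    return np.argmax(sa @ sb.T, axis=1)   # index into B for each point of A
```

For a surface and a uniformly scaled copy of itself, the spherical images coincide, so the recovered correspondence is the identity; real deformations perturb the spherical images, which is what the paper's disparity minimisation handles.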


International Symposium on 3D Data Processing, Visualization and Transmission | 2002

From 3D Shape Capture to Animated Models

Adrian Hilton; Jonathan Starck; Gordon Collins

This paper presents a framework for construction of animated models from captured surface shape of real objects. Algorithms are introduced to transform the captured surface shape into a layered model. The layered model comprises an articulation structure, generic control model and a displacement map to represent the high-resolution surface detail. Novel methods are presented for automatic control model generation, shape constrained fitting and displacement mapping of the captured data. Results are demonstrated for surface shape captured using both multiple view images and active surface measurement. The framework enables rapid transformation of captured data into a structured representation suitable for realistic animation.


British Machine Vision Conference | 2006

Volumetric stereo with silhouette and feature constraints

Jonathan Starck; Gregor Miller; Adrian Hilton

This paper presents a novel volumetric reconstruction technique that combines shape-from-silhouette with stereo photo-consistency in a global optimisation that enforces feature constraints across multiple views. Human shape reconstruction is considered where extended regions of uniform appearance, complex self-occlusions and sparse feature cues represent a challenging problem for conventional reconstruction techniques. A unified approach is introduced to first reconstruct the occluding contours and left-right consistent edge contours in a scene and then incorporate these contour constraints in a global surface optimisation using graph-cuts. The proposed technique maximises photo-consistency on the surface, while satisfying silhouette constraints to provide shape in the presence of uniform surface appearance and edge feature constraints to align key image features across views.


IEEE Transactions on Circuits and Systems for Video Technology | 2009

The Multiple-Camera 3-D Production Studio

Jonathan Starck; Atsuto Maki; Shohei Nobuhara; Adrian Hilton; Takashi Matsuyama

Multiple-camera systems are currently widely used in research and development as a means of capturing and synthesizing realistic 3-D video content. Studio systems for 3-D production of human performance are reviewed from the literature, and the practical experience gained in developing prototype studios across two research laboratories is reported. System design should consider the studio backdrop for foreground matting, lighting for ambient illumination, camera acquisition hardware, the camera configuration for scene capture, and accurate geometric and photometric camera calibration. A ground-truth evaluation is performed to quantify the effect of different constraints on the multiple-camera system in terms of geometric accuracy and the requirement for high-quality view synthesis. Because changing camera height has only a limited influence on surface visibility, multiple camera sets or an active vision system may be required for wide-area capture. Accurate reconstruction requires a camera baseline of 25 degrees, and the achievable accuracy is 5-10 mm at current camera resolutions. Accuracy is inherently limited, and view-dependent rendering is required for view synthesis with sub-pixel accuracy where display resolutions match camera resolutions. The two prototype studios are contrasted and state-of-the-art techniques for 3-D content production are demonstrated.
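The reported millimetre-scale accuracy is consistent with a first-order stereo error model, dZ ~ Z^2 * dd / (f * B). The numbers below (4 m working distance, 1500 px focal length, half-pixel disparity error) are assumptions chosen for illustration, not figures from the paper.

```python
import math

def depth_error(depth_m, baseline_m, focal_px, disparity_err_px=0.5):
    """First-order stereo depth uncertainty: dZ ~ Z^2 * dd / (f * B).
    A generic error model, not the paper's exact evaluation protocol."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Hypothetical studio: actor 4 m from the cameras. A 25-degree baseline
# at that distance spans roughly B = 2 * 4 * sin(12.5 deg) ~ 1.7 m.
baseline = 2 * 4.0 * math.sin(math.radians(12.5))
err = depth_error(depth_m=4.0, baseline_m=baseline, focal_px=1500.0)
```

Under these assumed numbers the model predicts an error of a few millimetres, the same order as the 5-10 mm reported; the quadratic depth term shows why accuracy degrades quickly for wider-area capture.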


Computer Vision and Pattern Recognition | 2009

Human motion synthesis from 3D video

Peng Huang; Adrian Hilton; Jonathan Starck

Multiple view 3D video reconstruction of actor performance captures a level of detail for body and clothing movement which is time-consuming to produce using existing animation tools. In this paper we present a framework for concatenative synthesis from multiple 3D video sequences according to user constraints on movement, position and timing. Multiple 3D video sequences of an actor performing different movements are automatically constructed into a surface motion graph which represents the possible transitions with similar shape and motion between sequences without unnatural movement artifacts. Shape similarity over an adaptive temporal window is used to identify transitions between 3D video sequences. Novel 3D video sequences are synthesized by finding the optimal path in the surface motion graph between user-specified key-frames for control of movement, location and timing. The optimal path which satisfies the user constraints whilst minimizing the total transition cost between 3D video sequences is found using integer linear programming. Results demonstrate that this framework allows flexible production of novel 3D video sequences which preserve the detailed dynamics of the captured movement for an actress with loose clothing and long hair without visible artifacts.
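The optimal-path search can be illustrated on a toy motion graph. The paper uses integer linear programming so that timing constraints can be satisfied; the sketch below substitutes a plain Dijkstra shortest path over hypothetical transition costs, which captures only the minimum-cost aspect.

```python
import heapq

def min_cost_path(graph, start, goal):
    """Dijkstra over a motion graph: nodes are 3D video clips, edge
    weights are transition costs (shape/motion dissimilarity between
    clip boundaries). Returns (total cost, clip sequence)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical clips with made-up transition costs between segments.
graph = {
    "walk": {"turn": 0.2, "run": 1.0},
    "turn": {"run": 0.3},
    "run": {},
}
```

Here the direct walk-to-run transition is expensive (dissimilar shape and motion), so the cheapest synthesis passes through the turn clip, which is the kind of natural intermediate the surface motion graph encodes.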


Graphical Models | 2005

Virtual view synthesis of people from multiple view video sequences

Jonathan Starck; Adrian Hilton

This paper addresses the synthesis of novel views of people from multiple view video. We consider the target area of the multiple camera 3D Virtual Studio for broadcast production with the requirement for free-viewpoint video synthesis for a virtual camera with the same quality as captured video. A framework is introduced for view-dependent optimisation of reconstructed surface shape to align multiple captured images with sub-pixel accuracy for rendering novel views. View-dependent shape optimisation combines multiple view stereo and silhouette constraints to robustly estimate correspondence between images in the presence of visual ambiguities such as uniform surface regions, self-occlusion, and camera calibration error. Free-viewpoint rendering of video sequences of people achieves a visual quality comparable to the captured video images. Experimental evaluation demonstrates that this approach overcomes limitations of previous stereo- and silhouette-based approaches to rendering novel views of moving people.

Collaboration


Dive into Jonathan Starck's collaborations.

Top Co-Authors

Atsuto Maki

Royal Institute of Technology
