P. M. Hillman
University of Edinburgh
Publications
Featured research published by P. M. Hillman.
computer vision and pattern recognition | 2001
P. M. Hillman; John Hannah; D. Renshaw
For motion picture special effects, it is often necessary to take a source image of an actor, segment the actor from the unwanted background, and then composite the actor over a new background. The standard approach requires the unwanted background to be a blue screen. While this technique is capable of handling areas where the foreground blends into the background, the physical requirements present many practical problems. This paper presents an algorithm that requires minimal human interaction to segment motion-picture-resolution images and image sequences. We show that it can be used not only to segment badly lit or noisy blue-screen images, but also to segment actors where the background is more varied.
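The compositing step mentioned in this abstract is, in its standard formulation, alpha compositing: each output pixel blends foreground and background colours weighted by a matte value α. A minimal NumPy sketch of that standard operation (array names and the example data are illustrative; this is not the segmentation algorithm from the paper itself):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha-composite a segmented foreground over a new background.

    foreground, background: float arrays of shape (H, W, 3), values in [0, 1]
    alpha: float matte of shape (H, W); 1.0 = fully foreground
    """
    a = alpha[..., np.newaxis]           # broadcast matte across colour channels
    return a * foreground + (1.0 - a) * background

# A tiny example: left column fully foreground, right column fully background
fg = np.ones((2, 2, 3))                  # white foreground
bg = np.zeros((2, 2, 3))                 # black background
alpha = np.array([[1.0, 0.0],
                  [1.0, 0.0]])
out = composite(fg, bg, alpha)
```

Fractional α values handle the blended-edge regions (hair, motion blur) that the abstract notes blue-screen techniques must cope with.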
international conference on image processing | 2010
P. M. Hillman; John P. Lewis; Sebastian Sylwan; Erik Winquist
Many published machine vision algorithms are designed to be real-time and fully automatic with low computational complexity. These attributes are essential for applications such as stereo robotic vision. Motion picture digital visual effects facilities, however, have massive computational resources available and can afford human interaction to initialise algorithms and to guide them towards a good solution. On the other hand, motion pictures have significantly higher accuracy requirements and other unique challenges. Not all machine vision algorithms can readily be adapted to this environment. In this paper we outline the requirements of visual effects and indicate several challenges involved in using image processing and machine vision algorithms for stereo motion picture visual effects.
Archive | 2014
Andrew Chalmers; John P. Lewis; P. M. Hillman; Charlie Tait; Taehyun Rhee
In a visual effects studio for movie production, sky maps play an important role as the sky backdrop to a scene. The backdrop is often represented using a high-resolution sky map, which motivates the need for a large collection of sky maps to match various moods and lighting conditions. A comprehensive collection of images is not useful, however, without a method of searching for desired images within it. In this paper we define a feature space that supports an interactive search function for HDR sky maps, allowing users to find ideal images based on their appearance. The set of features is automatically extracted from the sky maps in an offline pre-processing step, and is queried in real time for progressive browsing. The system uses unsupervised learning techniques, discarding the need to label a large set of existing sky maps.
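The offline-extraction-plus-real-time-query pipeline described above can be sketched as precomputing a small feature vector per sky map, then answering a query by ranking database images by distance in that feature space. The feature choice below (log-mean luminance plus a coarse luminance histogram) is purely illustrative and is not the feature set from the paper:

```python
import numpy as np

def extract_features(image):
    """Offline step: reduce an HDR sky map (H, W, 3 float array) to a
    small feature vector. Uses log-mean luminance plus an 8-bin log-luminance
    histogram -- an illustrative feature set, not the paper's."""
    lum = image.mean(axis=-1)                    # crude luminance proxy
    log_lum = np.log1p(lum)                      # compress HDR dynamic range
    hist, _ = np.histogram(log_lum, bins=8,
                           range=(0.0, float(log_lum.max()) + 1e-6),
                           density=True)
    return np.concatenate([[log_lum.mean()], hist])

def query(db_features, probe):
    """Real-time step: rank database images by Euclidean distance to the
    probe's feature vector; returns indices, best match first."""
    dists = np.linalg.norm(db_features - probe, axis=1)
    return np.argsort(dists)

# Build a toy database of three "sky maps" with different brightness levels
rng = np.random.default_rng(0)
maps = [rng.random((16, 16, 3)) * scale for scale in (1.0, 5.0, 10.0)]
db = np.stack([extract_features(m) for m in maps])
order = query(db, extract_features(maps[1]))
```

Because features are computed once offline, the interactive query reduces to cheap vector distances, which is what makes progressive, real-time browsing of a large collection feasible.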
international conference on image processing | 2006
P. M. Hillman; Paul Kuo; John Hannah
Model-based coding techniques require a very accurate initial fit of a facial model to an image sequence in order to achieve realistic results, particularly around the eyes and mouth. This paper presents a facial model-fitting technique which combines texture analysis using an active appearance model with feature position location to give accurate initial fits. Results, based on the XM2VTS database, show this hybrid technique to be superior to using active appearance models alone, especially in its ability to locate the eyes and mouth.
Archive | 2005
P. M. Hillman; John Hannah
conference on visual media production | 2005
P. M. Hillman; John Hannah; D. Renshaw
IEE International Conference on Visual Information Engineering (VIE 2005) | 2005
Paul Kuo; P. M. Hillman; John Hannah
Archive | 2004
P. M. Hillman; John Hannah; D. Renshaw
Archive | 2001
P. M. Hillman; John Hannah; D. Renshaw
conference on visual media production | 2004
P. M. Hillman; John Hannah; D. Renshaw