Tom E. Bishop
Heriot-Watt University
Publications
Featured research published by Tom E. Bishop.
International Conference on Acoustics, Speech, and Signal Processing | 2006
Tom E. Bishop; James R. Hopgood
We present a novel method for blind image restoration which is a multidimensional extension of an approach used successfully for audio restoration. A nonstationary image model is used to increase the reliability of blur estimates. This source model consists of a separate autoregressive model in each region of the image. A hierarchical Bayesian model for the observations is used, and a maximum marginalised a posteriori (MMAP) blur estimate is obtained by optimising the resulting probability density function.
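A hedged sketch of the marginalised-posterior estimate described above; the symbols (y for the degraded observation, x for the latent image, h for the blur, and θ for the per-region AR parameters) are generic notation, not taken from the paper:

```latex
\hat{h}_{\mathrm{MMAP}}
  = \arg\max_{h}\, p(h \mid y)
  = \arg\max_{h} \int\!\!\int p(y \mid x, h)\, p(x \mid \theta)\, p(\theta)\, p(h)\;\mathrm{d}x\,\mathrm{d}\theta .
```

The latent image and the AR parameters are integrated out of the hierarchical model, and only the resulting marginal density over the blur is optimised.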
International Conference on Image Processing | 2007
Tom E. Bishop; Rafael Molina; James R. Hopgood
The variational Bayesian approach has recently been proposed to tackle the blind image restoration (BIR) problem. We consider extending the procedures to include realistic boundary modelling and non-stationary image restoration. Correctly modelling the boundaries is essential for achieving accurate blind restorations of photographic images, whilst non-stationary models allow for better adaptation to local image features, and therefore improvements in quality.
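As a rough illustration of the variational Bayesian machinery the abstract refers to (generic notation, not the paper's exact model: y is the observation, x the image, h the blur, Ω the hyperparameters), the intractable posterior is approximated by a factorised distribution chosen to minimise a Kullback-Leibler divergence:

```latex
q(x)\, q(h)\, q(\Omega) \;\approx\; p(x, h, \Omega \mid y),
\qquad
q = \arg\min_{q}\, \mathrm{KL}\!\left( q(x)\,q(h)\,q(\Omega) \,\middle\|\, p(x, h, \Omega \mid y) \right).
```

Each factor is updated in turn while the others are held fixed.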
International Conference on Image Processing | 2008
Tom E. Bishop; Rafael Molina; James R. Hopgood
We propose a new image and blur prior model, based on non-stationary autoregressive (AR) models, and use it to blindly deconvolve blurred photographic images using the Gibbs sampler. As far as we are aware, this is the first attempt to tackle a real-world blind image deconvolution (BID) problem using Markov chain Monte Carlo (MCMC) methods. We give examples with simulated and real out-of-focus images, which show the state-of-the-art results that the proposed approach provides.
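As a rough, hedged illustration of Gibbs sampling for blind deconvolution, the toy 1-D sketch below alternates between the two Gaussian full conditionals of a simple linear model with ridge priors; it is not the paper's non-stationary AR model, and the kernel, noise level, and prior weights are arbitrary assumptions:

```python
import numpy as np
from scipy.linalg import convolution_matrix

rng = np.random.default_rng(0)

# Toy 1-D ground truth: a blocky signal blurred by a small kernel plus noise.
N, K = 64, 5
x_true = np.repeat(rng.normal(size=8), N // 8)
h_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
sigma = 0.02
y = convolution_matrix(h_true, N, mode='same') @ x_true + sigma * rng.normal(size=N)

def sample_gaussian(precision, b, rng):
    """Draw one sample from N(precision^{-1} b, precision^{-1})."""
    mean = np.linalg.solve(precision, b)
    L = np.linalg.cholesky(precision)
    return mean + np.linalg.solve(L.T, rng.normal(size=b.shape))

delta, gamma = 1e-2, 1e-2            # ridge prior weights on signal and blur (arbitrary)
x, h = y.copy(), np.full(K, 1.0 / K)
for it in range(500):
    # p(x | h, y) is Gaussian because y = H x + n is linear in x.
    H = convolution_matrix(h, N, mode='same')
    x = sample_gaussian(H.T @ H / sigma**2 + delta * np.eye(N), H.T @ y / sigma**2, rng)
    # p(h | x, y) is Gaussian because y = X h + n is also linear in h.
    X = convolution_matrix(x, K, mode='same')
    h = sample_gaussian(X.T @ X / sigma**2 + gamma * np.eye(K), X.T @ y / sigma**2, rng)

# Blind deconvolution has inherent scale/shift ambiguities; inspect the normalised kernel.
print("final blur sample (normalised):", np.round(h / h.sum(), 3))
```

Posterior summaries (e.g. the mean of samples retained after burn-in) would normally be used rather than the final draw; the paper works with images and spatially varying AR priors, which changes the conditionals but not the alternating structure.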
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Tom E. Bishop; Paolo Favaro
Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.
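As a hedged illustration of the spatial/angular trade-off mentioned above, the sketch below rearranges a lenslet-style raw image into its low-resolution sub-aperture views (the contiguous u-by-v pixel blocks and the synthetic sizes are simplifying assumptions, not the paper's calibration):

```python
import numpy as np

def extract_views(raw, u, v):
    """Split a plenoptic (lenslet) image into u*v low-resolution sub-aperture views.

    raw is assumed to be (Ny*u, Nx*v), with a contiguous u-by-v block of pixels
    behind each microlens. Returns an array of shape (u, v, Ny, Nx): many views,
    each only Ny-by-Nx pixels, i.e. spatial resolution traded for angular resolution.
    """
    Ny, Nx = raw.shape[0] // u, raw.shape[1] // v
    blocks = raw.reshape(Ny, u, Nx, v)   # isolate the per-microlens pixel blocks
    return blocks.transpose(1, 3, 0, 2)  # index first by position under the microlens

# Example with a synthetic 300x400 sensor and 5x5 pixels per microlens.
raw = np.random.rand(300, 400)
views = extract_views(raw, 5, 5)
print(views.shape)  # (5, 5, 60, 80): 25 views, each only 60x80 pixels
```

These extracted views are what the abstract warns against feeding directly into multiview stereo, since they are both low-resolution and aliased.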
International Conference on Computational Photography | 2009
Tom E. Bishop; Sara Zanetti; Paolo Favaro
Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.
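In the same hedged, generic notation as before (y the raw plenoptic data, x the superresolved texture, d the depth map, H_d a depth-dependent sampling and blur operator; not the paper's exact operators), the reconstruction can be read as inference in a linear observation model with Gaussian noise:

```latex
y = H_{d}\, x + n, \quad n \sim \mathcal{N}(0, \sigma^2 I),
\qquad
q(x)\, q(d) \;\approx\; p(x, d \mid y),
```

with the factors of q updated alternately, so that depth estimation and texture superresolution inform each other.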
Asian Conference on Computer Vision | 2010
Tom E. Bishop; Paolo Favaro
In this paper we show how to obtain full-resolution depth maps from a single image obtained from a plenoptic camera. Previous work showed that the estimation of a low-resolution depth map with a plenoptic camera differs substantially from that of a camera array and, in particular, requires appropriate depth-varying antialiasing filtering. In this paper we show a quite striking result: One can instead recover a depth map at the same full-resolution of the input data. We propose a novel algorithm which exploits a photoconsistency constraint specific to light fields captured with plenoptic cameras. Key to our approach is handling missing data in the photoconsistency constraint and the introduction of novel boundary conditions that impose texture consistency in the reconstructed full-resolution images. These ideas are combined with an efficient regularization scheme to give depth maps at a higher resolution than in any previous method. We provide results on both synthetic and real data.
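The sketch below is a generic multi-view photoconsistency cost, included only to illustrate the idea of testing candidate depths by checking whether the views agree; it works at view resolution and ignores missing data, so it is much simpler than the paper's full-resolution, plenoptic-specific constraint (the view array shape, integer shifts, and disparity range are assumptions):

```python
import numpy as np

def photoconsistency_cost(views, disparities):
    """Per-pixel cost of each candidate disparity (a proxy for depth).

    views       : (u, v, H, W) stack of sub-aperture views.
    disparities : candidate per-view shifts to test.
    Returns (len(disparities), H, W): variance across views after shifting each
    view by its angular offset times the candidate disparity. At the correct
    depth the shifted views agree (photoconsistency), so the variance is low.
    """
    u, v, H, W = views.shape
    cu, cv = (u - 1) / 2.0, (v - 1) / 2.0
    costs = np.empty((len(disparities), H, W))
    for k, d in enumerate(disparities):
        warped = np.empty_like(views)
        for p in range(u):
            for q in range(v):
                dy = int(round((p - cu) * d))   # integer shifts keep the sketch short;
                dx = int(round((q - cv) * d))   # real methods interpolate sub-pixel shifts
                warped[p, q] = np.roll(views[p, q], (dy, dx), axis=(0, 1))
        costs[k] = warped.reshape(u * v, H, W).var(axis=0)
    return costs

views = np.random.rand(5, 5, 60, 80)                       # stand-in sub-aperture views
cost = photoconsistency_cost(views, np.linspace(-1, 1, 9))
depth_labels = cost.argmin(axis=0)                          # winner-take-all depth per pixel
```

The winner-take-all selection in the last line would normally be replaced by a regularised labeling, which is where boundary conditions and smoothness terms enter.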
International Conference on Image Processing | 2010
Manuel Martinello; Tom E. Bishop; Paolo Favaro
In this paper we present an analysis and a novel algorithm to estimate depth from a single image captured by a coded aperture camera. This is a challenging problem that requires new tools and investigation compared with multi-view reconstruction. Unlike previous approaches, which need to recover both the sharp image and depth, we consider directly estimating only the depth, whilst still accounting for the statistics of the sharp image. The problem is formulated in a Bayesian framework, which enables us to reduce the estimation of the original sharp image to the local space-varying statistics of the texture. This yields an algorithm that can be solved via graph cuts (without user interaction). Performance and results on both synthetic and real data are reported and compared with previous methods.
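For reference, the graph-cut step mentioned above minimises a standard Markov random field labeling energy of the following general form (this is the generic energy such solvers handle, not the paper's specific data term):

```latex
E(d) \;=\; \sum_{i} D_i(d_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} V(d_i, d_j),
```

where d_i is the depth label at pixel i, D_i is the per-pixel data cost (presumably derived here from the local space-varying texture statistics the abstract mentions), and V penalises label differences between neighbouring pixels; energies of this form can be minimised exactly or approximately with graph-cut algorithms such as alpha-expansion.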
International Conference on Image Processing | 2009
Javier Mateos; Tom E. Bishop; Rafael Molina; Aggelos K. Katsaggelos
In this paper we present a new Bayesian methodology for the restoration of blurred and noisy images. Bayesian methods rely on image priors that encapsulate prior image knowledge and avoid the ill-posedness of image restoration problems. We use a spatially varying image prior utilizing a Gamma-Normal hyperprior distribution on the local precision parameters. This kind of hyperprior distribution, which to our knowledge has not been used before in image restoration, allows for the incorporation of information on local as well as global image variability, models the correlation of the local precision parameters, and is a conjugate hyperprior to the image model used in the paper. The proposed restoration technique is compared with other image restoration approaches, demonstrating its improved performance.
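A simplified sketch of the kind of two-level prior the abstract describes, in generic notation (x the image, (Dx)_i a local difference at pixel i, α_i the local precision); the paper's actual Gamma-Normal hyperprior additionally correlates neighbouring precisions, which this sketch omits:

```latex
p(x \mid \boldsymbol{\alpha}) \;\propto\; \prod_i \alpha_i^{1/2}
  \exp\!\left( -\tfrac{1}{2}\, \alpha_i\, (Dx)_i^{2} \right),
\qquad
\alpha_i \sim \mathrm{Gamma}(a, b).
```

Conjugacy between the Gamma hyperprior and the Gaussian image term is what keeps the Bayesian updates tractable.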
Blind Image Deconvolution: Theory and Applications | 2007
Tony F. Chan; Tom E. Bishop; S.D. Babacan; Bruno Amizic; Aggelos K. Katsaggelos; Rafael Molina
International Conference on Computer Vision | 2013
Stephen Milborrow; Tom E. Bishop; Fred Nicolls