Publications


Featured research published by Patrick Courtney.


Optics Express | 2007

High speed optically sectioned fluorescence lifetime imaging permits study of live cell signaling events

David M. Grant; J. McGinty; Ewan J. McGhee; Tom D. Bunney; Dylan M. Owen; Clifford Talbot; Wei Zhang; Sunil Kumar; Ian Munro; Peter M. P. Lanigan; Gordon T. Kennedy; Christopher Dunsby; Anthony I. Magee; Patrick Courtney; M. Katan; Mark A. A. Neil; Paul M. W. French

We present a time-domain optically sectioned fluorescence lifetime imaging (FLIM) microscope developed for high-speed live cell imaging. This single-photon excited system combines wide-field parallel pixel detection with confocal sectioning, utilizing spinning Nipkow disc microscopy. It can acquire fluorescence lifetime images of live cells at up to 10 frames per second (fps), permitting high-speed FLIM of cell dynamics and protein interactions, with potential for high-throughput cell imaging and screening applications. We demonstrate the application of this FLIM microscope to real-time monitoring of changes in lipid order in cell membranes following cholesterol depletion using cyclodextrin, and to the activation of the small GTPase Ras in live cells using FRET.
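
At the heart of any FLIM analysis is recovering a fluorescence lifetime from time-resolved intensity data at each pixel. As a generic illustration only (not the authors' acquisition or analysis pipeline), the sketch below fits a mono-exponential decay to simulated time-gated photon counts for a single pixel; all names and numbers are assumptions made for the example.

# Illustrative sketch: mono-exponential lifetime fit for one pixel.
# This is a generic FLIM-style calculation, not the pipeline from the paper.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    """Mono-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau) + background

# Simulated time-gated data (gate positions in nanoseconds).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 64)
true_signal = decay(t, amplitude=1000.0, tau=2.5, background=20.0)
counts = rng.poisson(true_signal).astype(float)   # photon-counting noise

# Fit the model; tau is the recovered fluorescence lifetime.
p0 = (counts.max(), 2.0, counts.min())
(amplitude, tau, background), _ = curve_fit(decay, t, counts, p0=p0)
print(f"estimated lifetime: {tau:.2f} ns")

Repeating such a fit over every pixel of a time-gated image stack yields the lifetime map on which readouts such as FRET-based activation measurements are built.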


Computer Vision and Image Understanding | 2008

Performance characterization in computer vision: A guide to best practices

Neil A. Thacker; Adrian F. Clark; John L. Barron; J. Ross Beveridge; Patrick Courtney; William R. Crum; Visvanathan Ramesh; Christine Clark

It is frequently remarked that designers of computer vision algorithms and systems cannot reliably predict how algorithms will respond to new problems. A variety of reasons have been given for this situation, and a variety of remedies prescribed in the literature. Most of these involve, in some way, paying greater attention to the domain of the problem and performing detailed empirical analysis. The goal of this paper is to review what we see as current best practices in these areas and to suggest refinements that may benefit the field of computer vision. A distinction is made between the historical emphasis on algorithmic novelty and the increasing importance of validation on particular data sets and problems.
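
One concrete habit that such empirical analysis implies is reporting performance figures with an uncertainty estimate rather than as a bare number. The sketch below is a generic example of this practice, not material from the paper: a bootstrap confidence interval on a mean per-image error, computed on made-up data.

# Generic illustration of reporting an error statistic with uncertainty.
# The data and the metric are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(loc=1.2, scale=0.4, size=500)   # per-image errors (pixels)

# Bootstrap a 95% confidence interval on the mean error.
boot_means = [rng.choice(errors, size=errors.size, replace=True).mean()
              for _ in range(2000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean error {errors.mean():.2f} px, 95% CI [{low:.2f}, {high:.2f}]")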


Optics Letters | 2005

Optically sectioned fluorescence lifetime imaging using a Nipkow disk microscope and a tunable ultrafast continuum excitation source

D. M. Grant; D. S. Elson; D. Schimpf; Christopher Dunsby; Jose Requejo-Isidro; Egidijus Auksorius; Ian Munro; Mark A. A. Neil; P. M. W. French; E. Nye; Gordon Stamp; Patrick Courtney

We demonstrate an optically sectioned fluorescence lifetime imaging microscope with a wide-field detector, using a convenient, continuously tunable (435-1150 nm) ultrafast source for fluorescence imaging applications that is derived from a visible supercontinuum generated in a microstructured fiber.


Machine Vision and Applications | 1997

Algorithmic modelling for performance evaluation

Patrick Courtney; Neil A. Thacker; Adrian F. Clark

Many of the machine vision algorithms described in the literature are tested on a very small number of images. It is generally agreed that algorithms need to be tested on much larger numbers if any statistically meaningful measure of performance is to be obtained. However, such tests are rarely performed; in our opinion this is normally due to two reasons. Firstly, the scale of the testing problem is daunting when high levels of reliability are sought, since it is the proportion of failure cases that allows the reliability to be assessed, and a large number of failure cases is needed to form an accurate estimate of reliability. For reliable and robust algorithms, this requires an inordinate number of test cases. Secondly, there is the difficulty of selecting test images to ensure that they are representative. This is aggravated by the fact that the assumptions made may be valid in one application domain but not in another. Hence, it is very difficult to relate the results of one evaluation to other users' requirements.

While it is true that published papers in the vision area must contain some evidence of the successful application of the suggested technique, a whole host of reasons have been put forward as to why researchers do not attempt to evaluate their algorithms more rigorously. These objections are valid only within a closely defined context and do not stand up to critical examination [13]. The real problem seems to be the time required for the various stages of algorithm development: the ratio theory : implementation : evaluation seems to scale according to the rule of thumb 1 : 10 : 100 [13]. The effort required to get a new idea published is thus far less than that required for an extensive empirical evaluation, which is a considerable disincentive for researchers to do evaluation work, particularly as evaluation is not much valued as publishable material in either conferences or journals.

However, the truth of the matter is that unless an algorithm is evaluated – and in a manner that can be used to predict its capabilities on an unseen data set – it is unlikely to be re-implemented and used. Moreover, the subject cannot advance without a well-founded scientific methodology, which it will not have without an acknowledged system for evaluation.
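
The scale argument in the first point can be made concrete with a back-of-envelope calculation (an illustration in the spirit of the argument, not a figure from the paper): with zero observed failures, the number of failure-free trials needed merely to rule out a given failure rate at 95% confidence grows roughly as 3 divided by that rate.

# Back-of-envelope sketch of the testing-scale point above: bounding a small
# failure rate needs many failure-free trials. Numbers are illustrative.
from math import log

def trials_for_zero_failures(p_fail, confidence=0.95):
    """Trials needed so that observing zero failures rules out a failure
    rate of p_fail or worse at the given confidence (the "rule of three")."""
    return log(1.0 - confidence) / log(1.0 - p_fail)

for p in (0.01, 0.001, 0.0001):
    print(f"failure rate {p:g}: about {trials_for_zero_failures(p):,.0f} failure-free trials")

Estimating the failure rate accurately, rather than merely bounding it, multiplies these numbers further, which is exactly the daunting scale the passage describes.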


Neural Networks | 1997

Supervised learning extensions to the CLAM network

Neil A. Thacker; Ian Abraham; Patrick Courtney

The contextual layered associative memory (CLAM) has been developed as a self-generating structure which implements a probabilistic encoding scheme. The training algorithms are geared towards the unsupervised generation of a layerable associative mapping (Thacker and Mayhew, 1989). We show here that the resulting structure will support layers which can be trained to produce outputs that approximate conditional probabilities of classification. Unsupervised and supervised learning algorithms operate independently, permitting the unsupervised representational layer to be developed before supervision is available. The system thus supports learning which is inherently more flexible than conventional node labelling schemes.
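
The general idea of attaching a supervised readout that estimates conditional class probabilities on top of a frozen unsupervised representation can be sketched very simply. The code below is a generic counting-based readout over a discrete unsupervised code; it illustrates the concept only and is not the CLAM training algorithm.

# Generic sketch of the concept, not the CLAM algorithm: keep the
# unsupervised encoding fixed and estimate P(class | code) by counting
# label co-occurrences whenever supervision becomes available.
from collections import defaultdict

class CountingReadout:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, code, label):
        """Supervised pass; can be run long after the encoding was learned."""
        self.counts[code][label] += 1

    def predict_proba(self, code):
        """Approximate conditional class probabilities for one code."""
        labels = self.counts[code]
        total = sum(labels.values())
        return {lab: n / total for lab, n in labels.items()} if total else {}

# Toy usage: the codes would come from the frozen unsupervised layer.
readout = CountingReadout()
for code, label in [(3, "a"), (3, "a"), (3, "b"), (7, "b")]:
    readout.update(code, label)
print(readout.predict_proba(3))   # roughly {'a': 0.67, 'b': 0.33}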


British Machine Vision Conference | 1992

Statistical Analysis of a Stereo Matching Algorithm

Neil A. Thacker; Patrick Courtney

This paper discusses the problems of image processing algorithm design and comparison and suggests that a suitable approach may be to model algorithms. We introduce the corner matching algorithm which we have used to provide reliable data for 3D computation modules [5][6]. The development of a simple model of the matching process permits an understanding of the influence of the various parameters in the matching algorithm. This model also allows optimisation of the algorithm using data distributions obtained from representative scenes.
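
To give a flavour of what such a model can look like (a generic example, not the model derived in the paper), one can treat corner features as a spatial Poisson process and ask how likely a disparity search window is to contain a false candidate as the window size or the feature density grows.

# Illustrative statistical model, not the one from the paper: if corner
# features are scattered roughly as a Poisson process, the chance that a
# search window contains at least one false candidate grows with
# window area and feature density.
import math

def p_false_candidate(feature_density, window_w, window_h):
    """Probability of >=1 distractor corner inside the search window,
    assuming a spatial Poisson process with the given density (per pixel^2)."""
    expected = feature_density * window_w * window_h
    return 1.0 - math.exp(-expected)

for window in (5, 15, 45):
    print(window, round(p_false_candidate(2e-3, window, window), 3))

A model of this kind makes the trade-off explicit: widening the search window tolerates more error in the camera geometry but raises the chance of ambiguous matches.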


International Conference on Pattern Recognition | 1992

A hardware architecture for image rectification and ground plane obstacle detection

Patrick Courtney; Neil A. Thacker; Chris R. Brown

This paper describes image rectification, explains why it is necessary and how it is performed, and proposes a hardware implementation for real-time operation. It also outlines a number of important additional applications that such a module may support, including ground plane obstacle detection.
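
In software terms, the rectification step amounts to resampling each image through a rectifying homography so that epipolar lines become horizontal; the paper's contribution is performing this warp in dedicated hardware at frame rate. The sketch below shows the equivalent operation with OpenCV; the homography is assumed to come from calibration and is chosen arbitrarily here.

# Minimal software sketch of rectification: warp an image through a 3x3
# homography. The hardware described in the paper does this in real time.
import cv2
import numpy as np

def rectify(image, H):
    """Warp `image` with homography `H`, keeping the original size."""
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h), flags=cv2.INTER_LINEAR)

# Toy usage with an identity-plus-shear homography standing in for a real one.
img = np.zeros((480, 640), dtype=np.uint8)
H = np.array([[1.0, 0.05, 0.0],
              [0.0, 1.00, 0.0],
              [0.0, 0.00, 1.0]])
rectified = rectify(img, H)

Once both views are rectified, points lying on the ground plane relate between them by a fixed transformation, so deviations from that prediction flag potential obstacles, which is the basis of ground plane obstacle detection.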


Theoretical Foundations of Computer Vision (TFCV): Performance Characterization in Computer Vision | 1998

Databases for Performance Characterization

Adrian F. Clark; Patrick Courtney

The principal aim of machine vision is, naturally, to develop techniques and systems that allow computers to be aware of their surroundings and take actions consequent on what is seen. Many man-years of effort have been expended on this goal; although significant progress has been made, we are arguably not much closer to solving the basic problem. While it is undoubtedly an extremely difficult goal, we are all hampered to some extent by the fact that we do not necessarily know which technique works best in which situation. The realization that this problem needs to be addressed in order for vision to become an engineering discipline rather than purely a research area has been long in coming; and this idea of knowing what to use and when is the underlying tenet of performance characterization.


International Conference on Pattern Recognition | 1994

Specification and design of a general purpose image processing chip

Neil A. Thacker; Patrick Courtney; Simon N. Walker; Stephen J. Evans; R. B. Yates

The computational and communication requirements of a wide range of image processing algorithms were analysed and used to specify the design of a general-purpose image processing device. The algorithms considered are restricted to those that take an image as input and produce an image or features as output. These algorithms are categorised by the demands that they would place on a hardware architecture. We conclude that the major consideration is the off-chip input/output limitation, which means the architecture must be able to exploit redundancy in both the data and the processor instructions. This paper summarises the main conclusions of this work, presents an architecture which we have designed in an attempt to meet the identified constraints, and explains how it differs from previous attempts.
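
A rough calculation shows why off-chip input/output dominates such a design (the figures below are assumptions chosen for illustration, not numbers from the paper): even the raw pixel traffic of an image-in/image-out device at video rate is substantial before any intermediate results are considered.

# Back-of-envelope check of the off-chip I/O point. All numbers are
# assumptions for illustration, not figures from the paper.
width, height = 512, 512          # pixels
bytes_per_pixel = 1               # 8-bit greyscale
frame_rate = 25                   # frames per second
streams = 2                       # one image in, one image out

bandwidth = width * height * bytes_per_pixel * frame_rate * streams
print(f"{bandwidth / 1e6:.1f} MB/s of off-chip traffic before any intermediate data")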


British Machine Vision Conference | 1992

Online Calibration of a 4 DOF Stereo Head

Neil A. Thacker; Patrick Courtney

This paper addresses the problem of recovering accurate 3D geometry from a 4-degree-of-freedom stereo robot head. We argue that the successful implementation of stereo vision in a real-world application will require a self-tuning system. This paper describes a statistical framework for combining many sources of information to calibrate a stereo camera system, allowing continual recalibration during normal use of the cameras. The calibration is maintained using modules at three levels: fixed verge, variable verge, and pan/tilt/verge calibration. Together these modules provide the means to fuse data obtained at various head positions into a single coordinate frame.
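
The statistical flavour of such continual recalibration can be illustrated with a minimal sketch (a generic inverse-variance fusion, assumed for illustration rather than taken from the paper): independent, noisy estimates of a calibration parameter are combined so that the fused value tightens as the head is used.

# Hedged sketch of the general idea, not the paper's exact framework:
# fuse repeated, independent estimates of a calibration parameter by
# inverse-variance weighting, refining it during normal use.
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted mean and its variance."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    return fused, 1.0 / np.sum(weights)

# Toy usage: three noisy measurements of a verge-angle offset (degrees).
value, var = fuse([0.41, 0.35, 0.44], [0.02, 0.05, 0.01])
print(f"fused offset {value:.3f} deg, variance {var:.4f}")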

Collaboration


Dive into Patrick Courtney's collaborations.

Top Co-Authors

Ian Munro

Imperial College London