
Publication


Featured research published by Paul Zammit.


Optica | 2014

Extended depth-of-field imaging and ranging in a snapshot

Paul Zammit; Andrew R. Harvey; Guillem Carles

Traditional approaches to imaging require that an increase in depth of field is associated with a reduction in numerical aperture, and hence with a reduction in resolution and optical throughput. In their seminal work, Dowski and Cathey reported how the asymmetric point-spread function generated by a cubic-phase aberration encodes the detected image such that digital recovery can yield images with an extended depth of field without sacrificing resolution [Appl. Opt. 34, 1859 (1995), doi:10.1364/AO.34.001859]. Unfortunately, recovered images are generally visibly degraded by artifacts arising from subtle variations in the point-spread function with defocus. We report a technique that measures the spatially variant translation of image components that accompanies defocus, from which the spatially variant defocus itself can be determined. This in turn enables recovery of artifact-free, extended depth-of-field images together with a two-dimensional defocus and range map of the imaged scene. We demonstrate the technique for high-quality macroscopic and microscopic imaging of scenes presenting an extended defocus of up to two waves, and for generation of defocus maps with an uncertainty of 0.036 waves.
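The cubic-phase encoding behind this line of work can be illustrated with a small Fourier-optics simulation. This is a minimal sketch under assumed scalar, paraxial conditions; the phase strength, grid sizes, and defocus values are illustrative choices, not parameters from the paper:

```python
import numpy as np

def simulate_psf(defocus_waves, alpha_waves=0.0, n=128, pad=256):
    """Incoherent PSF of a circular pupil with an optional cubic phase
    mask (alpha_waves) and defocus (defocus_waves), both in waves.
    Sampling and padding are illustrative, not physically calibrated."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1).astype(float)
    # Cubic phase alpha*(x^3 + y^3) plus quadratic defocus, in waves.
    phase = 2 * np.pi * (alpha_waves * (X**3 + Y**3)
                         + defocus_waves * (X**2 + Y**2))
    field = np.zeros((pad, pad), dtype=complex)
    field[:n, :n] = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(field))**2
    return psf / psf.sum()

def defocus_sensitivity(alpha_waves):
    """L1 distance between the in-focus PSF and the PSF at 2 waves of
    defocus: small values mean the PSF is nearly defocus-invariant."""
    return np.abs(simulate_psf(2.0, alpha_waves)
                  - simulate_psf(0.0, alpha_waves)).sum()
```

With a strong cubic phase the PSF varies far less with defocus than the conventional (zero-alpha) PSF does, which is the invariance that makes a single digital deconvolution kernel usable across the extended depth of field.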


Optics Express | 2018

Computational localization microscopy with extended axial range

Yongzhuang Zhou; Paul Zammit; Guillem Carles; Andrew R. Harvey

A new single-aperture 3D particle-localization and tracking technique is presented that demonstrates an increase in depth range by more than an order of magnitude without compromising optical resolution and throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, ranging from super-resolution microscopy through to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range-to-precision reported to date.
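The depth-from-translation idea can be sketched as follows, assuming a linear calibration between lateral PSF shift and depth. The calibration constant `K_PX_PER_UM` and the Gaussian test spot are illustrative stand-ins, not values or models from the paper:

```python
import numpy as np

# Hypothetical linear calibration: lateral PSF shift (pixels) per
# micrometre of depth. Illustrative value, not taken from the paper.
K_PX_PER_UM = 0.45

def centroid_x(img):
    """Intensity-weighted x-centroid of a particle image, in pixels."""
    xs = np.arange(img.shape[1])
    return (img * xs[None, :]).sum() / img.sum()

def depth_from_shift(img, x_ref):
    """Axial position inferred from the lateral translation of the PSF
    relative to its in-focus reference position x_ref."""
    return (centroid_x(img) - x_ref) / K_PX_PER_UM

def gaussian_spot(x0, n=64, sigma=2.0):
    """Synthetic particle image: a Gaussian spot centred at (n/2, x0)."""
    xs = np.arange(n)
    gx = np.exp(-((xs - x0) ** 2) / (2 * sigma ** 2))
    gy = np.exp(-((xs - n / 2) ** 2) / (2 * sigma ** 2))
    return np.outer(gy, gx)

# A bead at z = 40 um appears shifted by K_PX_PER_UM * 40 pixels.
x_ref = centroid_x(gaussian_spot(32.0))
z_est = depth_from_shift(gaussian_spot(32.0 + K_PX_PER_UM * 40.0), x_ref)
```

In practice the papers' engineered (Airy-beam) PSF is more structured than a Gaussian and the calibration is measured from beads at known depths, but the inversion step has this simple shift-to-depth form.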


Optical Systems Design 2015: Computational Optics | 2015

3D imaging and ranging in a snapshot

Paul Zammit; Guillem Carles; Andrew R. Harvey

Imaging samples with a depth in excess of the depth of field of the objective poses a serious challenge in microscopy. The available techniques such as focus-stacking accomplish the task; however, besides necessitating complicated optical and mechanical arrangements, these techniques often exhibit very long acquisition times. As a result, their applicability is limited to static samples. We describe a simple and practical hybrid 3D imaging technique which permits the acquisition of 3D images in a single snapshot. Additionally, the proposed method solves the post-recovery artefact formation problem which plagues hybrid imaging systems; thus, enabling high-quality, artefact-free images to be obtained. Experimental results indicate that this method can yield an image quality comparable to that given by a focus-stack (which can require up to a few hundred snapshots) from a single snapshot.


SPIE Newsroom | 2017

Computational imaging for 3D micrographs with 10-fold depth-of-field enhancement

Guillem Carles; Paul Zammit; Andrew R. Harvey

The state-of-the-art microscopes that are found in today’s scientific and industrial facilities benefit from a century of scientific innovations, underpinned by cutting-edge technologies such as ultrasensitive cameras and intense, agile light sources. The fundamental approach to imaging, however, is unchanged from the principles that were used in the 17th century—e.g., by Galileo, Hooke, and Leeuwenhoek—in the construction of instruments that eventually became the compound microscope. The focus today remains on engineering optical elements to optimally focus light from the sample to form a sharp image at a single plane. In this traditional approach to imaging, a specific sample plane (which is imaged to the camera plane) is inherently defined so that objects not located exactly in the sample plane are out of focus. In high-resolution microscopy, the depth of field (DOF) over which a sharp image is recorded is typically a micrometer or less, which offers the benefit of optical sectioning (i.e., the ability to produce clear images of focal planes within a thick sample). However, samples that exceed the DOF of a microscope are the norm, which means it is necessary to refocus the image throughout its depth to build a clear picture. In the first computational solution to this dilemma (developed by Gerd Häusler), a single image was recorded as the sample was swept through the plane of best focus, and a time-consuming coherent optical processor was subsequently used to recover a sharp image.1 In a modern approach commonly used by microscopists, a ‘Z-stack’ of up to 100 images is recorded and combined computationally into a single sharp image with an extended DOF. Other more sophisticated techniques have been developed for high-resolution 3D microscopy, such as light-sheet fluorescence microscopy and confocal/multiphoton microscopy.

Figure 1. Images obtained with (a) our computational complementary kernel matching (CKM) technique, compared with (b) conventional microscopy, acquired in a single snapshot. Left: fluorescence images of lily flower pollen grains. Right: images of an irregular brass surface. Both CKM samples exceed the depth of field (DOF) of a conventional microscope.
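The computational Z-stack merge described above can be sketched as picking, per pixel, the slice with the strongest focus measure. This is a minimal illustration of the general focus-stacking idea, not the CKM method of the article; the discrete-Laplacian focus measure is a common, simple choice:

```python
import numpy as np

def laplacian_energy(im):
    """Cheap per-pixel focus measure: magnitude of a discrete Laplacian
    (sharp, in-focus detail has strong high-frequency response)."""
    return np.abs(np.roll(im, 1, 0) + np.roll(im, -1, 0)
                  + np.roll(im, 1, 1) + np.roll(im, -1, 1) - 4 * im)

def focus_stack(stack):
    """Merge a Z-stack with shape (z, y, x) into one extended-DOF image
    by selecting, at every pixel, the slice that is sharpest there."""
    sharpness = np.stack([laplacian_energy(im) for im in stack])
    best = sharpness.argmax(axis=0)           # index of sharpest slice
    ys, xs = np.indices(best.shape)
    return stack[best, ys, xs]                # per-pixel slice selection
```

A sanity check: if one slice is a blurred copy of the other, the merged image should be drawn almost entirely from the sharp slice. The snapshot techniques in the surrounding papers avoid acquiring such a stack at all, which is their main advantage for dynamic samples.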


Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP) | 2017

Video-rate 3D Particle Tracking with Extended Depth-of-field in Thick Biological Samples

Yongzhuang Zhou; Paul Zammit; Vytautas Zickus; Guillem Carles; Jonathan M. Taylor; Andrew R. Harvey

We present a single-aperture 3D particle localisation and tracking technique with a vastly increased depth-of-field without compromising optical resolution and throughput. Flow measurements in a FEP capillary and a zebrafish blood vessel are demonstrated experimentally.


Imaging and Applied Optics 2016 (2016), paper CW2D.1 | 2016

Three-Dimensional Imaging and Ranging in a Snapshot with an Extended Depth-of-Field

Paul Zammit; Guillem Carles; Andrew R. Harvey

We describe a unique single-snapshot three-dimensional imaging technique and experimentally demonstrate that it yields image quality and depth information comparable to that obtained from a focus stack of tens of snapshots.


3D Image Acquisition and Display: Technology, Perception and Applications | 2016

3D microfluidic particle image velocimetry with extended depth-of-field and a single camera

Yongzhuang Zhou; Paul Zammit; Andrew R. Harvey

We describe a novel 3D particle image velocimetry (PIV) technique in microscopy using the wave-front coding method. Unlike conventional stereoscopic PIV, this technique has an extended depth-of-field (DOF) and requires only a single lens and detector.
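The in-plane part of particle image velocimetry can be sketched as locating the peak of the cross-correlation between two interrogation windows. This is a generic PIV building block, not the paper's wavefront-coded method (which additionally recovers depth from the engineered PSF):

```python
import numpy as np

def piv_shift(frame_a, frame_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via the peak of their FFT-based cross-correlation."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Wrap peak coordinates to signed shifts (FFT output is periodic).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Dividing the measured displacement by the inter-frame time gives the in-plane velocity; real PIV implementations add sub-pixel peak interpolation and window overlap on top of this core step.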


Imaging and Applied Optics 2015 (2015), paper CT2E.2 | 2015

Practical Single Snapshot 3D Imaging Method with an Extended Depth of Field

Paul Zammit; Andrew R. Harvey; Guillem Carles

We propose a new, practical method of capturing extended depth-of-field, 3D images in a snapshot, based on hybrid-imaging and Complementary Kernel Matching. We demonstrate significantly improved image quality in microscopy compared to conventional hybrid-imaging.


Encyclopedia of Materials: Science and Technology (Second Edition) | 2007

Wavefront-coded, Hybrid Imaging for the Alleviation of Optical Aberrations

Andrew R. Harvey; Paul Zammit; Guillem Carles; Gonzalo Muyo; Samir Mezouari

Hybrid optical–digital imaging involves a combination of optical coding during image formation and digital decoding and image reconstruction. This process offers enhanced functionality beyond what is possible using conventional purely optical imaging, for example, in the fields of miniaturization, cost reduction, extended depth of field and three-dimensional imaging. This article focuses on one of the most important hybrid imaging techniques, namely wavefront coding, which was proposed principally as a means to extend the depth of field of an imaging system. A review of the theory behind wavefront coding, the manufacturing of wavefront-coding elements and various applications for which it has been proposed is provided.


arXiv: Optics | 2018

High-speed extended-volume blood flow measurement using engineered point-spread function

Yongzhuang Zhou; Vytautas Zickus; Paul Zammit; Jonathan M. Taylor; Andrew R. Harvey
