Publication


Featured research published by Chensheng Wu.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2015

Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor

Chensheng Wu; Jonathan Ko; Christopher C. Davis

We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π,π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensor can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its objective lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.
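The reconstructed phase can exceed (-π,π) because local slopes are integrated rather than unwrapped from an interferogram. The sketch below is a minimal illustration of that idea only (simple cumulative-sum integration of a synthetic gradient grid, not the authors' reconstruction algorithm):

```python
import numpy as np

def integrate_gradients(gx, gy, pitch=1.0):
    """Path-integrate x/y phase slopes (rad per sample) into a phase map.

    Integrates the first column along y, then every row along x, and
    removes the mean (piston). Toy scheme; real reconstructors use
    least-squares or zonal methods.
    """
    phase = np.zeros_like(gx)
    phase[:, 0] = np.cumsum(gy[:, 0]) * pitch
    phase += np.cumsum(gx, axis=1) * pitch
    return phase - phase.mean()

# Synthetic test: a constant tilt spanning many multiples of 2*pi.
n = 32
gx = np.full((n, n), 1.5)   # 1.5 rad per sample in x
gy = np.zeros((n, n))
phi = integrate_gradients(gx, gy)
print(phi.max() - phi.min())  # 46.5 rad, far beyond one 2*pi cycle
```

A tilt of 1.5 rad per sample over 32 samples gives a 46.5 rad range, which a wrapped interferometric measurement could not represent directly.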


Proceedings of SPIE | 2013

Modified plenoptic camera for phase and amplitude wavefront sensing

Chensheng Wu; Christopher C. Davis

Shack-Hartmann sensors have been widely applied in wavefront sensing. However, they are limited to measuring slightly distorted wavefronts whose local tilt does not exceed the numerical aperture of the micro-lens array, and cross talk of incident waves on the micro-lens array must be strictly avoided. In medium-to-strong turbulence cases of optical communication, where large jitter in angle of arrival and local interference caused by break-up of the beam are common phenomena, Shack-Hartmann sensors no longer serve as effective tools for revealing distortions in a signal wave. Our design of a modified plenoptic camera shows great potential for observing and extracting useful information from severely disturbed wavefronts. Furthermore, by separating complex interference patterns into several minor interference cases, it may also be capable of revealing regional phase differences of coherently illuminated objects.


Optics Letters | 2016

Using a plenoptic sensor to reconstruct vortex phase structures

Chensheng Wu; Jonathan Ko; Christopher C. Davis

A branch point problem and its solution commonly involve recognizing and reconstructing a vortex phase structure around a singular point. In laser beam propagation through random media, the destructive phase contributions from various parts of a vortex phase structure will cause a dark area in the center of the beam's intensity profile. This null of intensity can, in turn, prevent the vortex phase structure from being recognized. In this Letter, we show how to use a plenoptic sensor to transform the light field of a vortex beam so that a simple and direct reconstruction algorithm can be applied to reveal the vortex phase structure. As a result, we show that the plenoptic sensor is effective in detecting branch points and can be used to reconstruct phase distortion in a beam in a general sense.
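The vortex structure described above carries a 2πm phase winding around its singular point. As a hedged illustration (a generic topological-charge check on a sampled loop, not the plenoptic transformation or reconstruction in the Letter):

```python
import numpy as np

def topological_charge(phase_loop):
    """Sum wrapped phase steps around a closed loop, divided by 2*pi.

    For a vortex of charge m, the accumulated phase around any loop
    enclosing the singular point is 2*pi*m.
    """
    steps = np.diff(np.append(phase_loop, phase_loop[0]))
    wrapped = (steps + np.pi) % (2 * np.pi) - np.pi  # map steps to (-pi, pi]
    return int(round(wrapped.sum() / (2 * np.pi)))

# Sample exp(i*m*theta) on a circle around the singularity.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
m = 2
print(topological_charge((m * theta) % (2 * np.pi)))  # 2
```

Near the intensity null this loop integral is exactly what fails for intensity-starved sensors, which motivates transforming the light field first.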


Proceedings of SPIE | 2014

Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

Chensheng Wu; Jonathan Ko; William Nelson; Christopher C. Davis

A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera and allows for multiple reconstruction functions, such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems to make intelligent analysis and corrections.


Optics Express | 2016

Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam

Chensheng Wu; Jonathan Ko; Christopher C. Davis

The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the lower atmosphere (deep turbulence or strong turbulence) with high-density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of the plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction processes, respectively. As a result, we show that plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof-of-concept experiment, we show that adaptive optics (AO) phase correction can be achieved instantaneously, without going through a phase reconstruction process, under the concept of plenoptic mapping. The plenoptic mapping technique has high potential for applications in imaging, free-space optical (FSO) communication, and directed energy (DE), where atmospheric turbulence distortion needs to be compensated.
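The "each illuminated pixel is a data point" idea can be sketched with small-angle geometry. The function below is a deliberate simplification with made-up parameters (pixel pitch, MLA focal length) and a linear pixel-offset-to-angle mapping; it is not the paper's calibrated plenoptic mapping:

```python
import numpy as np

def pixel_to_gradient(px, py, cx, cy, pixel_pitch, f_mla, wavelength):
    """Map an illuminated pixel under a lenslet centered at (cx, cy) to a
    local phase gradient in rad/m (small-angle approximation):
    angle ~ offset * pitch / f, gradient = k * angle."""
    k = 2 * np.pi / wavelength
    gx = k * (px - cx) * pixel_pitch / f_mla
    gy = k * (py - cy) * pixel_pitch / f_mla
    return gx, gy

# Hypothetical values: 5 um pixels, 1 mm lenslet focal length, HeNe beam.
gx1, _ = pixel_to_gradient(12, 10, 10, 10, 5e-6, 1e-3, 633e-9)
gx2, _ = pixel_to_gradient(14, 10, 10, 10, 5e-6, 1e-3, 633e-9)
print(gx2 / gx1)  # 2.0: the gradient estimate scales linearly with offset
```

The contrast with Shack-Hartmann is that every pixel with significant value yields such a sample, rather than one centroid per lenslet.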


Proceedings of SPIE | 2015

Object recognition through turbulence with a modified plenoptic camera

Chensheng Wu; Jonathan Ko; Christopher C. Davis

Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device's resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or the use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as “superimposed” turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects can be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera at the hardware layer, and a reconstructed “lucky image” can help the viewer identify the object even when a “lucky image” from ordinary cameras is not achievable.
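The lucky-image selection that the abstract contrasts with can be sketched with a simple sharpness score. The gradient-energy metric below is one common choice and is assumed here; it is not necessarily the criterion used by the authors:

```python
import numpy as np

def lucky_frame(frames):
    """Pick the sharpest frame from a sequence by gradient-energy score
    (a common 'lucky imaging' criterion)."""
    def sharpness(img):
        gy, gx = np.gradient(img.astype(float))
        return float((gx**2 + gy**2).sum())
    return max(frames, key=sharpness)

# Synthetic demo: a sharp checkerboard vs a fully blurred (flat) frame.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
blurred = np.full((32, 32), sharp.mean())
best = lucky_frame([blurred, sharp])
print(np.array_equal(best, sharp))  # True
```

The point made in the abstract is that the plenoptic camera's redundant sub-images let this kind of selection work with fewer captured frames.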


Proceedings of SPIE | 2013

Geometrical optics analysis of atmospheric turbulence

Chensheng Wu; Christopher C. Davis

2D phase screen methods have been frequently applied to estimate atmospheric turbulence in free-space optical communication and imaging systems. In situations where turbulence is “strong” enough to cause severe discontinuity of the wavefront (small Fried coherence length), the transmitted optical signal behaves more like “rays” than “waves”. However, achieving accurate simulation results through ray modeling requires both a high density of rays and a large number of eddies. Moreover, their complicated interactions require significant computational resources. Thus, we introduce a 3D ray model based on simple characteristics of turbulent eddies regardless of their particular geometry. The observed breakup of a beam wave into patches at a receiver and the theoretical description indicate that rays passing through the same sequence of turbulent eddies show “group” behavior whose wavefront can still be regarded as continuous. Thus, in our approach, we have divided the curved trajectory of rays into finite line segments and intuitively related their redirections to the refractive properties of large turbulent eddies. As a result, our proposed treatment gives a quick and effective high-density ray simulation of a turbulent channel that only requires knowledge of the magnitude of the refractive index deviations. Our method also points out a potential correction for reducing the equivalent Cn2 by applying adaptive optics. This treatment also shows the possibility of extending 2D phase screen simulations into more general 3D treatments.
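The segmented-trajectory idea can be toy-modeled: a ray travels in straight segments and picks up a small random bend at each eddy boundary, of the order of the refractive-index deviation. The sketch below is a 2D toy with invented parameters, not the paper's 3D model:

```python
import numpy as np

def trace_ray(n_segments, seg_len, dn_rms, rng):
    """Propagate one ray as straight line segments in the (z, x) plane,
    applying a small random angular kick ~ dn_rms at each eddy."""
    pos = np.zeros(2)   # (z, x)
    angle = 0.0         # small angle with respect to the z axis
    for _ in range(n_segments):
        angle += rng.normal(0.0, dn_rms)  # refraction at an eddy ~ dn
        pos += seg_len * np.array([np.cos(angle), np.sin(angle)])
    return pos

# Transverse spread of many rays through the same statistical channel.
rng = np.random.default_rng(1)
ends = np.array([trace_ray(100, 1.0, 1e-5, rng)[1] for _ in range(500)])
print(ends.std())  # spread grows with dn_rms and path length
```

Only the magnitude of the index deviations enters, echoing the claim that the model does not need the eddies' particular geometry.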


Proceedings of SPIE | 2012

Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence

Mohammed Eslami; Chensheng Wu; John Rzasa; Christopher C. Davis

Ideally, as planar wave fronts travel through an imaging system, all rays, or vectors pointing in the direction of the propagation of energy, are parallel, and thus the wave front is focused to a particular point. If the wave front arrives at an imaging system with energy vectors that point in different directions, each part of the wave front will be focused at a slightly different point on the sensor plane and result in a distorted image. The Hartmann test, which involves the insertion of a series of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the distortion angles at different points in the wave front. An adaptive optics system, such as a deformable mirror, is then used to correct for these distortions and allow the planar wave front to focus at the desired point on the sensor plane, thereby correcting the distorted image. The apertures of a pinhole array limit the amount of light that reaches the sensor plane. By replacing the pinholes with a microlens array, each bundle of rays is focused to brighten the image. Microlens arrays are making their way into newer imaging technologies, such as “light field” or “plenoptic” cameras. In these cameras, the microlens array is used to recover the ray information of the incoming light by using post-processing techniques to focus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover the distortions in wavefronts. Taking advantage of the microlens array within the plenoptic camera, CODE V simulations show that its performance can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system provides information about distortions in the arriving wavefront.
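For reference, the Shack-Hartmann measurement that this comparison starts from reduces to a centroid calculation: the focal-spot displacement under a lenslet, divided by the lenslet focal length, gives the local wavefront tilt. A minimal sketch with made-up pixel and focal-length values:

```python
import numpy as np

def spot_tilt(subimage, pixel_size, f_lenslet):
    """Local wavefront tilt (rad) from the centroid displacement of one
    lenslet's focal spot relative to the sub-image center."""
    ny, nx = subimage.shape
    total = subimage.sum()
    ys, xs = np.mgrid[0:ny, 0:nx]
    cx = (xs * subimage).sum() / total - (nx - 1) / 2
    cy = (ys * subimage).sum() / total - (ny - 1) / 2
    return cx * pixel_size / f_lenslet, cy * pixel_size / f_lenslet

# A single bright spot shifted +2 pixels in x; 5 um pixels, 1 mm lenslet.
img = np.zeros((9, 9))
img[4, 6] = 1.0
tx, ty = spot_tilt(img, 5e-6, 1e-3)
print(tx, ty)  # tx ~ 0.01 rad (2 px * 5 um / 1 mm), ty = 0
```

One such tilt per lenslet is all a Shack-Hartmann sensor yields, which is the limitation the plenoptic approach aims to overcome.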


Applied Optics | 2018

Using turbulence scintillation to assist object ranging from a single camera viewpoint

Chensheng Wu; Jonathan Ko; Joseph T. Coffaro; Daniel A. Paulson; John Rzasa; Larry C. Andrews; Ronald L. Phillips; Robert Crabbs; Christopher C. Davis

Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded on a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera is confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence-induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence causes random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems that would otherwise be difficult.
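The ranging cue can be sketched as a per-pixel temporal variance. The monotonic variance-versus-distance behavior assumed below is only illustrative synthetic data; the paper's quantitative analysis and calibration are not reproduced:

```python
import numpy as np

def scintillation_score(pixel_series):
    """Temporal variance of one edge/texture pixel's intensity over a
    video clip; larger scores suggest greater distance under the
    assumed model."""
    return float(np.asarray(pixel_series, dtype=float).var())

# Synthetic pixel time series: the distant object scintillates more.
rng = np.random.default_rng(2)
near = 100 + rng.normal(0, 1.0, 300)   # weak scintillation
far = 100 + rng.normal(0, 4.0, 300)    # strong scintillation
print(scintillation_score(far) > scintillation_score(near))  # True
```

Comparing such scores across objects in the same clip is what breaks the large-object-far-away versus small-object-close-by ambiguity.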


Optics Letters | 2016

Using an incoherent target return to adaptively focus through atmospheric turbulence

William Nelson; J. P. Palastro; Chensheng Wu; Christopher C. Davis

A laser beam propagating to a remote target through atmospheric turbulence acquires intensity fluctuations. If the target is cooperative and provides a coherent return beam, the phase measured near the beam transmitter and adaptive optics, in principle, can correct these fluctuations. Generally, however, the target is uncooperative. In this case, we show that an incoherent return from the target can be used instead. Using the principle of reciprocity, we derive a novel relation between the field at the target and the returned field at a detector. We simulate an adaptive optics system that utilizes this relation to focus a beam through atmospheric turbulence onto a rough surface.

Collaboration


Dive into Chensheng Wu's collaborations.

Top Co-Authors

Joseph T. Coffaro
University of Central Florida

Robert Crabbs
University of Central Florida

J. P. Palastro
United States Naval Research Laboratory

Christopher A. Smith
University of Central Florida

Larry C. Andrews
University of Central Florida

Ronald L. Phillips
University of Central Florida

Jonathan Spychalsky
University of Central Florida

Melissa Beason
University of Central Florida

P. Sprangle
United States Naval Research Laboratory