
Publication


Featured research published by Jason D. Schmidt.


Archive | 2010

Simple Computations Using Fourier Transforms

Jason D. Schmidt

There are many useful computations, such as correlations and convolutions, that can be implemented using FTs. In fact, implementations that take advantage of computationally efficient DFT algorithms such as the FFT often execute much faster than more straightforward approaches. Subsequent chapters reuse these tools in an optical context. For example, convolution is used in Ch. 5 to simulate the effects of diffraction and aberrations on image quality, and structure functions are used in Ch. 9 to validate the statistics of turbulent phase screens. Three of these tools, namely convolution, correlation, and structure functions, are closely related and have similar mathematical definitions. Furthermore, they are all written in terms of FTs in this chapter. However, their uses are quite different, and each common use is explained in the upcoming sections. These different uses cause the implementations of each to be quite different. For example, correlations and structure functions are usually performed on data that pass through an aperture. Consequently, their computations are modified to remove the effects of the aperture. The last computation discussed in this chapter is the derivative. Like the other computations in this chapter, the method presented is based on FTs to allow for efficient computation. The method is then generalized to computing gradients of two-dimensional functions. While derivatives and gradients are not used again in later chapters, they are discussed because some readers might want to compute derivatives for topics related to optical turbulence, like simulating the operation of wavefront sensors.

3.1 Convolution

We begin this discussion of FT-based computations with convolution for a couple of reasons. First, convolution plays a central role in linear-systems theory.14 The output of a linear system is the convolution of the input signal with the system's impulse response. In the context of simulating optical wave propagation, the linear-systems formalism applies to coherent and incoherent imaging, analog optical image processing, and free-space propagation.
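The FT-based convolution described in this chapter can be sketched in a few lines. The book's own listings are in MATLAB (see Appendix B); the following NumPy version is an illustrative equivalent, with the function name `ft_convolve2` and the centered-grid `fftshift` convention chosen here for illustration, not taken from the book:

```python
import numpy as np

def ft_convolve2(a, b, delta):
    """Two-dimensional convolution evaluated with FFTs (sketch).

    Both arrays are assumed to be sampled on the same centered
    grid with spacing `delta`; multiplying by delta**2 scales the
    discrete sum so that it approximates the continuous convolution
    integral. fftshift/ifftshift handle the centered-grid origin.
    """
    A = np.fft.fft2(np.fft.fftshift(a))
    B = np.fft.fft2(np.fft.fftshift(b))
    return np.fft.ifftshift(np.fft.ifft2(A * B)) * delta**2
```

Note that DFT-based convolution is circular: the grids must be padded, or the signals must decay to zero near the edges, to avoid wraparound effects.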


Archive | 2010

Sampling Requirements for Fresnel Diffraction

Jason D. Schmidt

The primary reason to use simulations is to tackle problems that are analytically intractable. As a result, any computer code that simulates optical-wave propagation needs to handle almost any type of source field. Wave-optics simulations are based on DFTs, and we saw in Ch. 2 that aliasing poses a challenge to DFTs. When the waveform to be transformed is bandlimited, we just need to sample it finely enough to avoid aliasing altogether (satisfying the Nyquist criterion). However, most optical sources are not spatially bandlimited, and the quadratic phase term inside the Fresnel diffraction integral certainly is not bandlimited. These issues have been explored by many authors.30,31,35,37,42,54,55 Because an optical field's spatial-frequency spectrum maps directly to its plane-wave spectrum,5 the propagation geometry places a limit on how much spatial-frequency content from the source can be seen within the observing aperture. Note that this limit is physical; it is not caused by sampling. This principle is the foundation of Coy's approach to sampling, and it guides most of our discussion of sampling requirements in this chapter.

7.1 Imposing a Band Limit

The optical field at each point in the source plane emits a bundle of rays that propagate toward the observation plane. Each ray represents a plane wave propagating in that direction. Let us start by examining the propagation geometry to determine the maximum plane-wave direction, relative to the reference normal, from the source that is incident upon the region of interest in the observation plane. Clearly, it is critical to pick the grid spacing and number of grid points to ensure an accurate simulation. The following development uses the propagation geometry to place limits on the necessary spatial-frequency bandwidth and, consequently, on the number of sample points and grid spacing. This determines the size and spacing of both the source-plane grid and the observation-plane grid.
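Geometric arguments of this kind lead to combined sampling constraints that tie the region-of-interest diameters, grid spacings, wavelength, and propagation distance together. The NumPy sketch below uses one commonly cited form of such a bound, N >= D1/(2*d1) + D2/(2*d2) + wvl*dz/(2*d1*d2); this exact inequality and the function name are assumptions written from the geometric argument above, not quoted from the chapter, so check them against the book's own derivation:

```python
import numpy as np

def min_grid_points(wvl, dz, D1, D2, d1, d2):
    """Minimum grid points per side for a Fresnel propagation,
    from a geometric band limit (illustrative form, an assumption).

    wvl    : wavelength [m],  dz : propagation distance [m]
    D1, D2 : source- and observation-plane regions of interest [m]
    d1, d2 : source- and observation-plane grid spacings [m]

    The bound combines the extent of both regions of interest with
    the bandwidth of the quadratic Fresnel phase, then rounds up to
    a power of two for efficient FFTs.
    """
    n = D1 / (2 * d1) + D2 / (2 * d2) + wvl * dz / (2 * d1 * d2)
    return int(2 ** np.ceil(np.log2(n)))
```

For example, a 10-cm source region observed over 10 cm after 1 km at a 1-µm wavelength with 1-mm spacings in both planes gives n = 50 + 50 + 500 = 600, which rounds up to a 1024-point grid.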


Archive | 2010

Fresnel Diffraction in Vacuum

Jason D. Schmidt

The goal of this chapter is to develop methods for modeling near-field optical-wave propagation with high fidelity and some flexibility, which is considerably more challenging than for far-field propagation. This chapter uses the same coordinate convention as in Fig. 1.2. It begins with a discussion of different forms of the Fresnel diffraction integral. These different forms can be numerically evaluated in different ways, each with benefits and drawbacks. Then, to emphasize the different mathematical operations in the notation, operators are introduced that are used throughout Chs. 6-8. The rest of this chapter develops basic algorithms for wave propagation in vacuum and other simulation details. The quadratic phase factor inside the Fresnel diffraction integral is not bandlimited, so it poses some challenges related to sampling. There are two different ways to evaluate the integral: as a single FT or as a convolution. This chapter develops both basic methods as well as more sophisticated versions that provide some flexibility. There are different types of flexibility that one might need. For example, Delen and Hooker present a method that is particularly useful for simulating propagation in integrated optical components. Because the interfaces in these components are often slanted or offset and the angles are not always paraxial, they developed a Rayleigh-Sommerfeld propagation method that can handle propagation between arbitrarily oriented planes with good accuracy.28,29 In contrast, the applications discussed in this book involve parallel source and observation planes, and the paraxial approximation is a very good one. When long propagation distances are involved, beams can spread to be much larger than their original size. Accordingly, some algorithms discussed in this chapter provide the user with the flexibility to choose the scaling between the observation- and source-plane grid spacings.
Many authors have presented algorithms with this ability, including Tyler and Fried,30 Roberts,31 Coles,32 Rubio,33 Deng et al.,34 Coy,35 Rydberg and Bengtsson,36 and Voelz and Roggemann.37 Most of these methods are mathematically equivalent to each other. However, one unique algorithm was presented by Coles32 and later augmented by Rubio33 in which a diverging spherical coordinate system is used, with an angular grid of constant angular spacing. This was done specifically because the source was a point source, which naturally diverges spherically.
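Of the two evaluation strategies mentioned above, the single-FT form is the simpler to sketch: multiply the source field by a quadratic phase, take one FT, and apply a second quadratic phase outside the integral. The NumPy version below is an illustrative sketch, not the book's MATLAB listing; note that in this form the observation-plane spacing is fixed at d2 = wvl*dz/(N*d1) rather than freely chosen:

```python
import numpy as np

def one_step_fresnel(Uin, wvl, d1, dz):
    """Fresnel diffraction evaluated as a single FT (sketch).

    Uin : square complex field sampled with spacing d1 [m]
    wvl : wavelength [m],  dz : propagation distance [m]
    Returns the observation-plane field and its grid spacing
    d2 = wvl*dz/(N*d1), which is dictated by the single-FT form.
    """
    N = Uin.shape[0]
    k = 2 * np.pi / wvl
    x = np.arange(-N / 2, N / 2)
    X1, Y1 = np.meshgrid(x * d1, x * d1)
    d2 = wvl * dz / (N * d1)              # observation-plane spacing
    X2, Y2 = np.meshgrid(x * d2, x * d2)
    # quadratic phase factors inside and outside the integral
    Q1 = np.exp(1j * k / (2 * dz) * (X1**2 + Y1**2))
    Q2 = np.exp(1j * k / (2 * dz) * (X2**2 + Y2**2))
    F = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(Uin * Q1))) * d1**2
    return np.exp(1j * k * dz) / (1j * wvl * dz) * Q2 * F, d2
```

A useful sanity check on any such routine is energy conservation: the total power summed over the observation grid should equal the power summed over the source grid.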


Archive | 2010

Appendix B: MATLAB Code Listings

Jason D. Schmidt

Below are MATLAB code listings for several functions used throughout the book. They are provided here so that the reader knows exactly how to generate samples of these signals.


Archive | 2010

Imaging Systems and Aberrations

Jason D. Schmidt

On the surface, numerically evaluating imaging systems with monochromatic light is a simple extension of two-dimensional discrete convolution, as discussed in Sec. 3.1. This is because the response of an imaging system to light, whether the light is coherent or incoherent, can be modeled as a linear system. Determining the impulse response of an imaging system is more complicated, particularly when the system does not perfectly focus the image. This happens because of aberrations present in the imaging system. In this chapter, aberrations are treated first. Then, we show how aberrations affect the impulse response of imaging systems. Finally, the chapter finishes with a discussion of imaging-system performance.

5.1 Aberrations

The light from an extended object can be treated as a continuum of point sources. Each point source emits rays in all directions, as shown in Fig. 5.1. In geometric optics, the rays from a given object point that pass all the way through an ideal imaging system are focused to another point. Each point of the object emits (or reflects) an optical field that becomes a diverging spherical wave at the entrance pupil of the imaging system. To focus the light to a point in the image plane, the imaging system must apply a spherical phase delay that converts the diverging spherical wavefront into a converging spherical wavefront. Aberrations are deviations from this spherical phase delay that cause the rays from a given object point to misfocus and form a finite-sized spot. When the image is viewed as a whole, the aberration manifests itself as blur. Light from different object points can experience different aberrations in the image plane depending on their distance from the optical axis. However, for the purposes of this book, we are not concerned with these field-angle-dependent aberrations but assume that they are constant.
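The link between pupil-plane aberrations and image blur can be sketched directly: Fourier-transform the generalized pupil function to get the coherent impulse response, and square its magnitude for the incoherent point spread function. The NumPy sketch below expresses the aberration directly as a pupil phase map; the Zernike machinery of the chapter is omitted, and the function name and the defocus form in the test are illustrative assumptions:

```python
import numpy as np

def incoherent_psf(phase, aperture):
    """Point spread function of an incoherent imaging system (sketch).

    The generalized pupil P = aperture * exp(1j*phase) is Fourier
    transformed to get the coherent impulse response h; the
    incoherent PSF is |h|**2, normalized here to unit volume.
    """
    P = aperture * np.exp(1j * phase)
    h = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P)))
    psf = np.abs(h) ** 2
    return psf / psf.sum()
```

With zero phase this yields the diffraction-limited PSF; adding, say, a quadratic (defocus-like) phase across the pupil lowers the on-axis peak, consistent with the Strehl-ratio picture of aberration-induced blur.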


Archive | 2010

Propagation through Atmospheric Turbulence

Jason D. Schmidt


Archive | 2010

Foundations of Scalar Diffraction Theory

Jason D. Schmidt


Archive | 2010

Appendix A: Function Definitions

Jason D. Schmidt


Archive | 2010

Relaxed Sampling Constraints with Partial Propagations

Jason D. Schmidt


Archive | 2010

Digital Fourier Transforms

Jason D. Schmidt
