Phase-Retrieval-Based Wavefront Metrology for High Contrast Coronagraphy
Gregory R. Brady^a, Christopher Moriarty^a, Peter Petrone^a, Iva Laginja^a, Keira Brooks^a, Tom Comeau^a, Lucie Leboulleux^{a,b,c}, Rémi Soummer^a

^a Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
^b Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, 13388 Marseille, France
^c Office National d'Études et de Recherches Aérospatiales, 29 Avenue de la Division Leclerc, 92320 Châtillon, France
ABSTRACT
We discuss the use of parametric phase-diverse phase retrieval as an in-situ high-fidelity wavefront measurement method to characterize and optimize the transmitted wavefront of a high-contrast coronagraphic instrument. We apply our method to correct the transmitted wavefront of the HiCAT (High-contrast imager for Complex Aperture Telescopes) coronagraphic testbed. This correction requires a series of calibration steps, which we describe. The correction improves the system wavefront from 16 nm RMS to 3.0 nm RMS.
Keywords:
Phase retrieval, coronagraph, wavefront sensing, deformable mirror, high-contrast imaging
1. INTRODUCTION
The next generation of space telescopes for direct imaging and spectroscopy of exoplanets will require coronagraphs with high contrasts, on the order of 10⁻¹⁰. To achieve this, iterative speckle nulling procedures will be used with science data to eliminate residual light passing through the coronagraph by perturbing a deformable mirror (DM) and correcting the end-to-end system wavefront on the scale of a few picometers. However, most of these procedures assume a linear approximation when relating the measured intensity to the wavefront phase, which becomes invalid for phase distributions with an amplitude larger than a small fraction of the wavelength. Thus we desire a method for decreasing the coronagraph system's wavefront error so that it is within the valid range of the linear approximation.

In this work we describe a procedure for measuring the wavefront of a well-aligned system using a phase retrieval algorithm based on an explicit optimization of parameters describing the phase and other pertinent system characteristics. Phase retrieval algorithms accurately model the propagation from the pupil plane to the image plane and do not make use of a linear approximation. The wavefront measured using phase retrieval is used to derive a correction that is applied to the DM. Phase retrieval is not sufficiently sensitive to perform measurements on the picometer scale and is instead used to pre-correct the system wavefront error to the order of a few nanometers prior to speckle nulling.

In addition, we describe a series of phase-retrieval-based calibration steps that are necessary to ensure that the wavefront correction is properly registered to the DM. Finally, we describe our final measurement of the system wavefront, demonstrating dramatic improvement and bringing the wavefront error well within the valid range of the linear phase approximation.
Further author information: send correspondence to Gregory Brady, E-mail: [email protected], Telephone: +1 410 338 6504.

2. PARAMETRIC PHASE RETRIEVAL ALGORITHM

Phase retrieval algorithms typically infer the phase in the exit pupil of an optical system using one or more measurements of the intensity patterns that the system produces in a plane near its nominal focus. A parametric phase retrieval algorithm specifies the system to be considered using a set of parameters and uses an optimization algorithm to find the values of these parameters that give a minimum value for an objective function (also termed an error metric). The parameters used typically specify the phase in the exit pupil using a polynomial expansion, such as Zernike polynomials, or another basis set, such as a family of delta functions specifying the phase at individual pixel locations [5, 6]. Other parameters can make the algorithm more robust by allowing it to correct for inexactly known system quantities, such as the distance from the exit pupil to the measurement plane or planes [7, 8], the centration of the measured spots on the detector array [8, 9], the amplitude distribution in the pupil plane [10, 11], or the sampling rate of the image data [12]. Further, these algorithms have the ability to jointly optimize over multiple images, allowing for a more robust estimate of the wavefront. This ability to compensate for systematic uncertainties makes a parametric, optimization-based algorithm an attractive choice when compared to iterative-transform-type algorithms such as the Gerchberg-Saxton-Misell algorithm [5, 13-15]. Iterative transform algorithms are less flexible and less robust (i.e., more prone to stagnating in local minima) and rely on properties of the algorithm alone for their convergence characteristics.
2.1 Objective Function

Here we employ a bias- and gain-insensitive objective function to characterize the agreement between our modeled and measured intensity patterns [8, 16],

Φ = 1 − (1/K) Σ_{k=1}^{K} [Σ_{p,q} W_k(p,q) G_k(p,q) Ĝ_k(p,q)]² / ([Σ_{p,q} W_k(p,q) G_k²(p,q)] [Σ_{p,q} W_k(p,q) Ĝ_k²(p,q)]),  (1)

where p and q are sample indices, G_k(p,q) is one of K measured intensity patterns, Ĝ_k(p,q) is the corresponding calculated intensity pattern, and W_k(p,q) is a weighting function used to ignore the contribution of bad pixels or to favorably weight certain portions of the intensity pattern with, for example, better signal-to-noise ratio (SNR). Throughout this paper we use the convention that variables topped with a circumflex, such as Ĝ_k(p,q), are being estimated, while other values are known through measurement or convention. If G_k(p,q) and Ĝ_k(p,q) are identical, the value of the objective function is zero.

2.2 Pupil Field Parameterization

We typically model the field Ê_p(m,n) in the exit pupil using one of two methods. The first is simply to use sampled representations of the amplitude Â(m,n) and phase φ̂(m,n), so that we have

Ê_p(m,n) = Â(m,n) exp[i φ̂(m,n)]  (2)

at sample indices (m,n). We term this representation point-by-point.

The second method is to represent the phase by the coefficients α̂_j of a basis set such as the Zernike polynomials Z_j(m,n), so that we have

Ê_p(m,n) = Â(m,n) exp[i Σ_j α̂_j Z_j(m,n)].  (3)

It is common practice to initially optimize over a few low-order Zernike terms before continuing the optimization with higher-order Zernikes or point-by-point phase values.

2.3 Propagation Model

We employ a two-step propagation to calculate the estimated field in each of the measurement planes from the field in the pupil plane.
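As an illustration, the objective function of Equation 1 can be evaluated in a few lines. The following is a sketch in NumPy using our own array convention (a stack of K images), not the authors' actual code:

```python
import numpy as np

def objective(G, G_hat, W):
    """Bias- and gain-insensitive objective of Eq. 1: zero for a perfect match.

    G, G_hat, W: arrays of shape (K, N, N) holding the K measured intensity
    patterns, the corresponding modeled patterns, and the data weights.
    """
    # Numerator: squared weighted inner product of data and model, per image.
    num = np.sum(W * G * G_hat, axis=(1, 2)) ** 2
    # Denominator: product of weighted energies of data and model, per image.
    den = np.sum(W * G**2, axis=(1, 2)) * np.sum(W * G_hat**2, axis=(1, 2))
    # Average the normalized agreement over the K images.
    return 1.0 - np.mean(num / den)
```

By the Cauchy-Schwarz inequality the ratio is at most one, so Φ ≥ 0, and multiplying the model by an overall gain leaves Φ unchanged, which is what makes the metric gain-insensitive.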
First, Fresnel diffraction is used to calculate the field Ê_f(p,q) in the paraxial focal plane,

Ê_f(p,q) = (1/N) exp[iπ(p²Δ_p² + q²Δ_q²)/(λf)] Σ_{m,n} Ê_p(m,n) exp[−i(2π/N)(mp + nq)],  (4)

where λ is the wavelength, f is the distance between the exit pupil and the paraxial focal plane, N is the size of the square arrays representing the pupil and focal-plane fields, and (Δ_p, Δ_q) are the spacings between samples (in units of length) in the paraxial focal plane. This is typically equivalent to the pixel spacing of the detector used to record the image data. The second step uses angular spectrum propagation to propagate the small distance between the paraxial focal plane and each of the slightly defocused measurement planes. We calculate the angular spectrum of plane waves Û_f(m,n) in the paraxial focal plane using a Fourier transform,

Û_f(m,n) = (1/N) Σ_{p,q} Ê_f(p,q) exp[−i(2π/N)(pm + qn)].  (5)

The angular spectrum is then propagated to each of the K measurement planes,

Û_k(m,n) = Û_f(m,n) exp(i2π ẑ_k √(λ⁻² − m²Δ_m² − n²Δ_n²)),  (6)

where (Δ_m, Δ_n) are frequency-domain sample spacings (in units of inverse length) given by

Δ_{m,n} = 1/(N Δ_{p,q}),  (7)

and ẑ_k is the distance to the k-th measurement plane. Note that we have included a circumflex on ẑ_k, as we typically include this as a free parameter in the optimization to compensate for possible positioning errors. The propagated field in the spatial domain is computed from the angular spectrum using an inverse Fourier transform,

Ê_k(p,q) = (1/N) Σ_{m,n} Û_k(m,n) exp[i(2π/N)(mp + nq)].  (8)

The intensity in the measurement plane is then determined from the field,

Î_k(p,q) = |Ê_k(p,q)|².  (9)
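The two-step model of Equations 4-9 maps naturally onto FFTs. Below is a minimal NumPy sketch under assumed conventions (FFT-ordered integer sample indices, all lengths in the same units); it illustrates the structure of the forward model and is not the authors' implementation:

```python
import numpy as np

def propagate(E_p, lam, f, dp, z_list):
    """Two-step propagation sketch (Eqs. 4-9).

    E_p: N x N complex pupil field; lam: wavelength; f: pupil-to-focus
    distance; dp: focal-plane sample spacing; z_list: defocus distances.
    Returns a list of N x N intensity arrays, one per measurement plane.
    """
    N = E_p.shape[0]
    idx = np.fft.fftfreq(N, d=1.0 / N)          # FFT-ordered sample indices
    P, Q = np.meshgrid(idx, idx, indexing="ij")
    # Step 1: Fresnel propagation to the paraxial focal plane (Eq. 4).
    quad = np.exp(1j * np.pi * (P**2 + Q**2) * dp**2 / (lam * f))
    E_f = quad * np.fft.fft2(E_p) / N
    # Step 2: angular spectrum propagation to each measurement plane.
    dm = 1.0 / (N * dp)                          # frequency spacing (Eq. 7)
    U_f = np.fft.fft2(E_f) / N                   # Eq. 5
    arg = 1.0 / lam**2 - (P * dm) ** 2 - (Q * dm) ** 2
    kz = np.sqrt(np.maximum(arg, 0.0))           # clip evanescent components
    out = []
    for z in z_list:
        U_k = U_f * np.exp(2j * np.pi * z * kz)  # Eq. 6
        E_k = np.fft.ifft2(U_k) * N              # Eq. 8
        out.append(np.abs(E_k) ** 2)             # Eq. 9
    return out
```

Because the propagation kernel in Equation 6 is a pure phase factor, the total power in the computed intensity is the same in every measurement plane, which is a convenient sanity check on an implementation.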
A final step is often performed on this intensity image to shift the intensity spot laterally, modeling the effect of a detector array not centered at the origin. This is done on a sub-pixel scale using the Fourier shift theorem. First, we perform the forward transform,

f̂_k(m,n) = (1/N) Σ_{p,q} Î_k(p,q) exp[−i(2π/N)(mp + nq)],  (10)

then we multiply by the linear-phase Fourier filter,

ĝ_k(m,n) = exp[−i(2π/N)(m x̂_k + n ŷ_k)] f̂_k(m,n),  (11)

where (x̂_k, ŷ_k) is the amount by which the image is shifted, in pixel units. Note that these are also typically optimized for. Inverse transforming gives our final modeled intensity distribution,

Ĝ_k(p,q) = (1/N) Σ_{m,n} ĝ_k(m,n) exp[i(2π/N)(mp + nq)],  (12)

which, along with the measured intensity data G_k(p,q), is used to calculate the error metric given in Equation 1.

2.4 Optimization

We typically employ gradient-based algorithms to minimize the objective function, such as the conjugate gradient [17] or limited-memory BFGS [18-20] algorithms. These algorithms are most useful when efficient analytic expressions for the derivatives of the objective function with respect to the system parameters are available, rather than computing the derivatives using finite differences. This is particularly true in the case of point-by-point estimates of the phase, where N × N derivatives must be calculated and N is typically a few hundred to a few thousand. Efficient expressions for the derivatives with respect to all of the parameters discussed have been derived and are compiled in the appendix of Reference 8.
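The sub-pixel shift of Equations 10-12 is itself a standard pair of FFTs. A hedged NumPy sketch (our function and variable names):

```python
import numpy as np

def subpixel_shift(I, dx, dy):
    """Shift image I by (dx, dy) pixels via the Fourier shift theorem
    (Eqs. 10-12). Non-integer shifts interpolate in the Fourier domain
    and can introduce slight ringing; that is inherent to the method.
    """
    N = I.shape[0]
    idx = np.fft.fftfreq(N, d=1.0 / N)            # FFT-ordered indices
    mi, ni = np.meshgrid(idx, idx, indexing="ij")
    F = np.fft.fft2(I)                            # Eq. 10 (up to scaling)
    F = F * np.exp(-2j * np.pi * (mi * dx + ni * dy) / N)   # Eq. 11
    return np.fft.ifft2(F).real                   # Eq. 12
```

For integer shifts the result reduces exactly to a circular roll of the image, which makes the sign convention easy to verify.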
To reiterate, our phase retrieval code has the capability to optimize over:

• Lateral spot position in each phase-diverse image, x̂_k, ŷ_k
• Axial location of each phase-diverse image, ẑ_k
• Pupil amplitude distribution, Â(m,n)
• Pupil phase expressed as point-by-point phase values, φ̂(m,n)
• Pupil phase expressed as coefficients of Zernike polynomials (or another basis), α̂_j

These can be used jointly or individually, with the output of a preceding optimization serving as the input for subsequent optimizations, as is necessary for the problem at hand. In the code used in this work, the following sequence is typically used:

1. Preprocess the image data to calculate the data weight masks, W_k(p,q), and initial guesses for the lateral positions from image centroids.
2. Optimize over the lateral positions alone, using the guesses from the centroids.
3. Optimize jointly over lateral position (using the result from step 2), axial position, and Zernike coefficients.
4. Fix the lateral and axial positions at the values determined in step 3, and optimize over point-by-point phase values using the Zernike-derived phase as the initial guess.
5. Optimize over lateral and axial positions and point-by-point phase values using the results from step 4.
6. Optimize over lateral and axial positions, point-by-point phase (using the results from step 5 as initial values), and pupil amplitude (using the assumed pupil as the initial guess).

This multiple-step process allows us to determine the unknown parameters of the system robustly. If we were to proceed directly to step 6, it is likely that the optimization would stagnate in a local minimum. Steps 4 to 6 are necessary only when point-by-point phase values are desired and may be omitted when Zernikes are all that are needed.

2.5 Data Weights

The data weights W_k(p,q) are typically derived from the measured data in an attempt to weight the data suitably for the optimization to be done.
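To make the approach concrete, here is a toy end-to-end example in the same spirit: three phase coefficients (tip, tilt, and defocus stand-ins, not true Zernikes) are recovered from two simulated phase-diverse images by minimizing the Equation 1 metric with SciPy's L-BFGS-B. This sketch uses finite-difference gradients and our own simplified model, whereas the real code uses analytic derivatives and many more parameters:

```python
import numpy as np
from scipy.optimize import minimize

N = 32
y, x = np.mgrid[:N, :N]
u, v = (x - N / 2) / (N / 4), (y - N / 2) / (N / 4)
pupil = (u**2 + v**2 <= 1.0).astype(float)
# Stand-in low-order basis (tip, tilt, defocus-like); not true Zernikes.
basis = np.stack([u, v, 2 * (u**2 + v**2) - 1]) * pupil
diversities = [0.0, 1.5]          # known defocus offsets (radians)

def model_psf(coeffs, div):
    """Intensity for pupil phase = sum_j c_j B_j plus known diversity defocus."""
    phase = np.tensordot(coeffs, basis, axes=1) + div * basis[2]
    E = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fft2(E)) ** 2

def metric(coeffs, data):
    """Gain-insensitive error metric of Eq. 1, averaged over the K images."""
    total = 0.0
    for G, div in zip(data, diversities):
        Ghat = model_psf(coeffs, div)
        total += 1.0 - (G * Ghat).sum() ** 2 / ((G**2).sum() * (Ghat**2).sum())
    return total / len(data)

truth = np.array([0.3, -0.2, 0.5])
data = [model_psf(truth, d) for d in diversities]
result = minimize(metric, np.zeros(3), args=(data,), method="L-BFGS-B")
```

Using two diversity planes breaks the sign ambiguity that a single in-focus image would leave on the even (defocus) term, so the fit converges to the true coefficients from a zero starting guess.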
Typically they are formed by thresholding the input images so that only the central core of the image is retained. However, there are regions near the image center where the intensity falls below the threshold value, and this data is important; to retain these regions, a dilation operation is performed to fill in the mask. The resulting weight mask is highly suitable for retrieving the phase at low spatial frequencies, particularly for determining Zernike coefficients, and it rejects noise from the low-intensity regions of the images, where the SNR is often unacceptably low. This type of mask is used in steps 2 and 3 above.

In our application we need to work with data that has significant high-spatial-frequency content, for which a data mask retaining only the central core would not recover the phase information of interest. Further, this high-spatial-frequency information typically has much less energy than the central core. If a uniform weighting is used across the frame, the central core will still dominate and the high-frequency information will effectively be ignored. To recover this information we have found it necessary to weight the region away from the central core much more heavily than the core itself. For example, we have achieved good results with a weight of 1 in the central core and a weight of 500 outside it. We employ this type of mask in steps 4 to 6 above. This weighting requires data with good SNR in the high-frequency region, which we typically obtain by registering and summing hundreds of individual frames.
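Both mask types can be built with simple thresholding and morphology. A sketch with assumed parameter names (the threshold and dilation settings are illustrative; the weight of 500 is the value quoted above):

```python
import numpy as np
from scipy import ndimage

def weight_masks(img, frac=0.01, dilate=5, core_w=1.0, wing_w=500.0):
    """Build the two weight-mask types described in the text (sketch).

    Threshold at frac * peak to find the central core, dilate to recover
    dim regions near the core, then either keep the core alone (low-order
    mask) or up-weight everything outside it (high-frequency mask).
    """
    core = img > frac * img.max()
    core = ndimage.binary_dilation(core, iterations=dilate)
    core = ndimage.binary_fill_holes(core)
    # Low-order mask (steps 2-3): weight 1 inside the core, 0 outside.
    low_order = core.astype(float)
    # High-frequency mask (steps 4-6): favor the wings over the core.
    high_freq = np.where(core, core_w, wing_w)
    return low_order, high_freq
```

The dilation and hole-filling steps implement the "fill in the mask" operation described above, so that dim pixels surrounded by bright core pixels are not excluded.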
3. EXPERIMENT
Our goal in this work is to measure, and subsequently correct, the phase of the HiCAT optical system up to the occulting aperture (or Focal Plane Mask, FPM) of the coronagraph. A sketch of the system is shown in Figure 1. The system uses a single-mode optical fiber collimated by an Off-Axis Parabolic (OAP) mirror, labeled as OAP1 in Figure 1, to illuminate the pupil of a telescope simulator. The telescope simulator consists of a pupil amplitude aperture that is imaged onto an Iris AO hexagonal-segment deformable mirror (IRIS-AO DM) by OAP2 and a parabolic mirror. This DM allows us to simulate the segments of a segmented-aperture telescope. The DM pupil plane is again reimaged by the parabola and OAP3 onto a reflective apodizer needed for an Apodized Pupil Lyot Coronagraph (APLC). The apodizer is the first step in controlling the light in the dark zone of the coronagraph; it is also used as a tip/tilt mirror to stabilize the beam under servo control. The apodizer pupil plane is imaged by OAP4 and a toroidal mirror, TOR1, onto a Boston Micromachines continuous face-sheet deformable mirror, DM1. DM1 is used to correct for residual static aberrations of the system, the aberrations that we are measuring in this work. The beam reflected from DM1 is directed onto a second, identical deformable mirror, DM2. DM2 is a short distance away and is not exactly conjugate to the other pupils of the system. Together, DM1 and DM2 are used to suppress light in the dark zone of the coronagraph. After DM2, a second toroidal mirror, TOR2, is used to bring the light to a focus at which the FPM is placed when the coronagraph is operating. The FPM is responsible for blocking the majority of the star's light. However, between TOR2 and the FPM, a high-quality mirror can be inserted to intercept the converging beam and direct it to a camera on a motorized stage. This camera is used to collect phase-diverse phase retrieval data sets, the results of which are our topic here.
A very high-quality point-source image is formed at the FPM, allowing it to block starlight most effectively. Beyond the FPM, a spherical mirror is used to re-image the pupil to the Lyot stop of the coronagraph. The Lyot stop blocks light diffracted by the hard edge of the FPM that falls outside of the system pupil. Finally, fold mirrors and lenses are used to form a coronagraphic image with our desired dark zone, or a pupil image, on the respective cameras.

We have made a small number of notable changes from the nominal arrangement described above to help ensure good wavefront correction and facilitate phase retrieval measurements using our current software. First, the nominal pupil aperture has been replaced with a 22 mm diameter aperture to cover the entire area of the Boston Micromachines DMs. This allows us to better correct aberrations that are near the edge of the apodizer pupil (circumscribed diameter of 19.725 mm, slightly undersized compared to the DM diameter for alignment tolerancing purposes). Because of the apodized profile, we use an 18 mm aperture mask as a full-aperture proxy to evaluate the wavefront over the apodizer. Secondly, the IRIS-AO DM and apodizer have been replaced with surrogate flat mirrors. This allows us to use our current phase retrieval code, which does not currently handle the case of a segmented, apodized pupil. Code is in development to handle this situation.
In order to apply the measured phase distributions, it is necessary to perform a geometric calibration so that the wavefront can be applied to the DM. This was achieved by applying calibration phase patterns to the DM and recovering them using phase retrieval. The first step of the calibration is to apply a letter "F" pattern to the DM, which allows us to determine how the DM and phase retrieval results are flipped, left-right and up-down, and rotated with respect to each other. In this step we also confirmed the polarity of the retrieved phase as compared to the voltages applied to the DM. The DM command and retrieved phase are shown in Figure 2, with the determined flips and rotations applied.

The second calibration step is to determine the location of the center of the DM in the array of data calculated by the phase retrieval algorithm. To facilitate this, four actuators around the center of the DM are poked, and the associated phase is retrieved. This is illustrated in Figure 3. The intersection of the two diagonal lines connecting the actuators unambiguously locates the center of the DM.

Figure 1. HiCAT testbed layout. The testbed includes a telescope simulator (IRIS-AO 37-segment deformable mirror combined with a pupil mask to produce the central obstruction and support structures); an Apodized Pupil Lyot Coronagraph (APLC) combining an apodizer, a hard-edged focal plane mask, and a Lyot stop; imaging cameras, including a coronagraphic focal-plane camera and a pupil-viewing camera; and a phase retrieval arm to measure the wavefront at the focal plane mask using a high-quality removable fold mirror to minimize non-common-path errors.

Figure 2. Letter "F" pattern: (a) DM command and (b) retrieved phase. Color scale is in nm.

Then, the mapping of the recovered wavefront to the individual DM actuators is determined.
This is done by putting a "checkerboard" pattern on the DM in which every fourth actuator is poked, illustrated in Figure 4. Sixteen data sets were recorded, one for each actuator in a 4 × 4 cell, and the associated phase maps were retrieved.

Figure 3. (a) DM command and (b) retrieved phase for the case where four known actuators are poked to determine the location of the center of the DM aperture with respect to the phase retrieval phase map. Color scale is in nm.
Figure 4. Sixteen checkerboard DM commands which drive, in turn, all sixteen actuators in a four-by-four cell. Note the shift of which actuator row and column is active between each of the 16 commands.
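The actuator-to-pixel mapping recovered from the checkerboard pokes can be summarized by a best-fit affine transform from actuator indices to phase-map pixel coordinates. The paper does not specify the exact fitting procedure, so the following least-squares sketch is our illustration of the idea:

```python
import numpy as np

def fit_affine(act_xy, px_xy):
    """Fit px = A @ act + t by linear least squares.

    act_xy: (M, 2) actuator grid coordinates of the poked actuators.
    px_xy:  (M, 2) measured poke locations in the retrieved phase maps.
    Returns the 2x2 matrix A (scale, rotation, flips, shear) and offset t.
    """
    M = act_xy.shape[0]
    D = np.hstack([act_xy, np.ones((M, 1))])      # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(D, px_xy, rcond=None)
    A, t = coef[:2].T, coef[2]
    return A, t
```

With such a fit, any actuator's location in the phase map follows from `A @ act + t`, and the inverse mapping (phase-map pixel to nearest actuator) comes from inverting A; the flips and rotation found with the "F" pattern appear as the signs and off-diagonal structure of A.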
In order to measure the system wavefront, we first applied our best-known flat maps to DM1 and DM2, determined before their installation using a Fizeau interferometer. Thus the system was as well corrected as possible without any additional correction using phase retrieval. The recorded intensity data are shown in the top row of Figure 6. Each of these images is the sum of 200 registered images. This summing process results in an image with high dynamic range and excellent SNR. The phase recovered from this data using the first 45 Zernike polynomials is shown in Figure 7, and the corresponding modeled intensity distributions are shown in the bottom row of Figure 6. The Peak-to-Valley (PV) wavefront error is 120 nm and the Root-Mean-Square (RMS) wavefront error is 16 nm.

Figure 5. Retrieved phase maps for the 16 checkerboard patterns applied to the DM. Color scale is in nm.

Figure 6. Intensity patterns measured in the system before any wavefront correction was performed using phase retrieval. The images in the top row are the minimally processed intensity measurements from the phase retrieval camera. The images in the bottom row are the intensity distributions modeled by our phase retrieval algorithm using the recovered wavefront. The distances labeled across the top of the figure indicate the axial position of each measurement plane with respect to the nominal focus plane.

Figure 7. Initial wavefront in the system exit pupil before applying corrections. The PV wavefront error is 120 nm and the RMS wavefront error is 16 nm. The color scale is in units of nm.
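The register-and-sum step used to build these high-SNR images can be sketched as a simple shift-and-add: each frame is aligned to the first by the peak of its cross-correlation (integer pixels here for simplicity) and the aligned frames are summed. This shows the idea, not the actual testbed pipeline:

```python
import numpy as np

def register_and_sum(frames):
    """Align each frame to the first via the FFT cross-correlation peak,
    then sum. frames: list of equally sized 2D arrays."""
    ref = frames[0]
    F_ref = np.fft.fft2(ref)
    total = ref.astype(float).copy()
    for f in frames[1:]:
        # Cross-correlation of ref with f; its peak gives the shift of f.
        xc = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        total += np.roll(f, (dy, dx), axis=(0, 1))
    return total
```

Summing many aligned frames grows the signal linearly while uncorrelated noise grows only as the square root of the number of frames, which is what yields the high-dynamic-range, high-SNR images described above.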
The conjugate of the wavefront in Figure 7 was applied to the DM using the calibration and mapping determined above. The recorded PSF data are shown in the top row of Figure 8. The phase recovered from this data using the first 45 Zernike polynomials is shown in Figure 9. The PV wavefront error is 35.2 nm and the RMS wavefront error is 5.5 nm. This is a significant improvement over the uncorrected case. This correction is a strong indication of the precision and accuracy of our phase retrieval results and demonstrates the validity of our methodology for applying the correction to the DM.

Figure 8. Intensity patterns measured in the system after a correction based on the wavefront shown in Figure 7 was applied. The images in the top row are the minimally processed intensity measurements from the phase retrieval camera. The images in the bottom row are the intensity distributions modeled by our phase retrieval algorithm using the recovered wavefront. The distances labeled across the top of the figure indicate the axial position of each measurement plane with respect to the nominal focus plane.
Figure 9. Corrected system wavefront, using the correction derived from the data in Figure 7, in the exit pupil of the system in the case where the oversized, 22 mm diameter aperture was used. The PV wavefront error is 35.2 nm and the RMS wavefront error is 5.5 nm. The color scale is in units of nm.
After the first wavefront correction, the oversized 22 mm aperture was removed and the nominal 18 mm pupil aperture was inserted. A subsequent data set was taken. Example intensity images are shown in the top row of Figure 10. The retrieved wavefront is shown in Figure 11. The PV wavefront error is 23.9 nm and the RMS wavefront error is 3.0 nm. An image of the in-focus PSF of the system, resulting from the average of 1000 images, is shown in Figure 12. It is difficult to distinguish this from a plot of a perfect theoretical result.

Figure 10. Corrected PSFs, measured and retrieved, over the 18 mm aperture.
Figure 11. Corrected wavefront over the system's nominal 18 mm aperture. The PV wavefront error is 23.9 nm and the RMS wavefront error is 3.0 nm. The color scale is in units of nm.

Figure 12. In-focus PSF over the 18 mm aperture. The data here is the sum of 1000 registered camera frames, resulting in a very low-noise image with high dynamic range. The data is plotted on a logarithmic scale with a compressed dynamic range so that the very dim outer rings are visible. The PSF is almost indistinguishable from a perfect theoretical result for a circular pupil with no aberrations and, qualitatively, is consistent with our measured wavefront error of 3 nm RMS.

4. SUMMARY
We have described the parametric approach to phase retrieval, including details of our forward model and a description of the parameters and corresponding derivatives used as inputs to an optimization algorithm. We then described its application to the measurement of the transmitted wavefront error of a coronagraphic testbed system and its subsequent correction using a deformable mirror. We included details of the calibrations necessary to register the measured wavefront maps to the DM actuators, including confirming the orientation and polarity of the data, determining the center point of the DM, and calibrating for distortions. The measured wavefront error improved from 16 nm RMS before correction to 3.0 nm RMS after correction, as summarized in Table 1. The fact that we are able to measure a low-amplitude wavefront, derive and apply a correction from this measurement, and realize a significantly improved transmitted wavefront demonstrates the precision and accuracy of our phase retrieval algorithm and correction methodology.
Table 1. Summary of wavefront errors before and after correction
Case                                             RMS Wavefront Error (nm)
Initial Aligned System (DM1 and DM2 flattened)   16
Correction Applied over 22 mm Aperture           5.5
Correction Applied over 18 mm Aperture           3.0
ACKNOWLEDGMENTS
This work is supported by the National Aeronautics and Space Administration under Grants NNX12AG05G and NNX14AD33G, issued through the Astrophysics Research and Analysis (APRA) program (PI: R. Soummer); the JWST Telescope Scientist Investigation, NASA Grant NNX07AR82G (PI: C. Matt Mountain); and the STScI Director's Discretionary Research Funds.
REFERENCES

[1] Give'on, A., Kern, B., Shaklan, S., Moody, D. C., and Pueyo, L., "Broadband wavefront correction algorithm for high-contrast imaging systems," Proc. SPIE, 66910A-1 – 66910A-11 (2007).
[2] Bottom, M., Femenia, B., Huby, E., Mawet, D., Dekany, R., Milburn, J., and Serabyn, E., "Speckle nulling wavefront control for Palomar and Keck," Proc. SPIE, 990955-1 – 990955-16 (2016).
[3] Miller, K., Guyon, O., and Males, J., "Spatial linear dark field control: stabilizing deep contrast for exoplanet imaging using bright speckles," J. Astron. Telesc. Instrum. Syst., 049002 (Oct. 2017).
[4] Fienup, J. R., "Reconstruction of an object from the modulus of its Fourier transform," Opt. Lett., 27–29 (Jul 1978).
[5] Fienup, J. R., "Phase retrieval algorithms: a comparison," Appl. Opt., 2758–2769 (Aug 1982).
[6] Fienup, J. R., "Phase retrieval for undersampled broadband images," J. Opt. Soc. Am. A, 1831–1837 (Jul 1999).
[7] Brady, G. R., Application of Phase Retrieval to the Measurement of Optical Surfaces and Wavefronts, dissertation, University of Rochester (2008).
[8] Thurman, S. T., DeRosa, R. T., and Fienup, J. R., "Amplitude metrics for field retrieval with hard-edged and uniformly illuminated apertures," J. Opt. Soc. Am. A, 700–709 (Mar 2009).
[9] Brady, G. R., Guizar-Sicairos, M., and Fienup, J. R., "Optical wavefront measurement using phase retrieval with transverse translation diversity," Opt. Express, 624–639 (Jan 2009).
[10] Brady, G. R. and Fienup, J. R., "Nonlinear optimization algorithm for retrieving the full complex pupil function," Opt. Express, 474–486 (Jan 2006).
[11] Thurman, S. T. and Fienup, J. R., "Complex pupil retrieval with undersampled data," J. Opt. Soc. Am. A, 2640–2647 (Dec 2009).
[12] Jurling, A. S. and Fienup, J. R., "Phase retrieval with unknown sampling factors via the two-dimensional chirp z-transform," J. Opt. Soc. Am. A, 1904–1911 (Sep 2014).
[13] Gerchberg, R. W. and Saxton, W. O., "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35(2), 237–246 (1972).
[14] Misell, D. L., "A method for the solution of the phase problem in electron microscopy," J. Phys. D 6(1), L6 (1973).
[15] Dean, B. H., Aronstein, D. L., Smith, J. S., Shiri, R., and Acton, D. S., "Phase retrieval algorithm for JWST flight and testbed telescope," Proc. SPIE, 626511-1 – 626511-17 (June 2006).
[16] Thurman, S. T. and Fienup, J. R., "Phase retrieval with signal bias," J. Opt. Soc. Am. A, 1008–1014 (Apr 2009).
[17] Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P., [Numerical Recipes 3rd Edition: The Art of Scientific Computing], Cambridge University Press, New York, NY, USA, 3rd ed. (2007).
[18] Broyden, C. G., "Quasi-Newton methods and their application to function minimisation," Math. Comput. 21(99), 368–381 (1967).
[19] Byrd, R., Lu, P., Nocedal, J., and Zhu, C., "A limited memory algorithm for bound constrained optimization," SIAM J. Sci. Comput. 16(5), 1190–1208 (1995).
[20] Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J., "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization," ACM Trans. Math. Softw. 23(4), 550–560 (1997).