Neural Nano-Optics for High-quality Thin Lens Imaging
Ethan Tseng∗, Shane Colburn∗, James Whitehead, Luocheng Huang, Seung-Hwan Baek, Arka Majumdar, Felix Heide†

Princeton University, Department of Computer Science
University of Washington, Department of Electrical and Computer Engineering
University of Washington, Department of Physics

∗These authors contributed equally to this work
†Corresponding author. E-mail: [email protected]
Nano-optic imagers that modulate light at sub-wavelength scales could unlock unprecedented applications in diverse domains ranging from robotics to medicine. Although metasurface optics offer a path to such ultra-small imagers, existing methods have achieved image quality far worse than bulky refractive alternatives, fundamentally limited by aberrations at large apertures and low f-numbers. In this work, we close this performance gap by presenting the first neural nano-optics. We devise a fully differentiable learning method that learns a metasurface physical structure in conjunction with a novel, neural feature-based image reconstruction algorithm. Experimentally validating the proposed method, we achieve an order of magnitude lower reconstruction error. As such, we present the first high-quality, nano-optic imager that combines the widest field of view for full-color metasurface operation while simultaneously achieving the largest demonstrated 0.5 mm, f/2 aperture.
The miniaturization of intensity sensors in recent decades has made today's cameras ubiquitous across many application domains, including medical imaging, commodity smartphones, security, robotics, and autonomous driving. However, only imagers that are an order of magnitude smaller could enable novel applications in nano-robotics, in vivo imaging, AR/VR, and health monitoring. While sensors with sub-micron pixels do exist, further miniaturization has been prohibited by fundamental limitations of conventional optics. Traditional systems consist of a cascade of refractive elements that correct for aberrations, and these bulky lenses impose a lower limit on camera footprint. A further fundamental barrier is the difficulty of reducing focal length, as this induces greater chromatic aberrations.

We turn towards computationally designed metasurface optics (meta-optics) to close this gap and enable ultra-compact cameras that could allow for unprecedented capabilities in endoscopy, brain imaging, or in a distributed fashion as collaborative optical "dust" on scene surfaces. Ultrathin meta-optics utilize subwavelength nano-antennas to modulate incident light with greater design freedom and space-bandwidth product than conventional diffractive optical elements (DOEs). Researchers have harnessed this potential for building flat optics for imaging 5, 6, polarization control, and holography. Existing metasurface imaging methods, however, achieve an order of magnitude higher reconstruction error than achievable with refractive compound lenses due to severe, wavelength-dependent aberrations that arise from discontinuities in their imparted phase 2, 5, 9–15. Dispersion-engineered metasurfaces aim to mitigate this by exploiting group delay and group delay dispersion to focus broadband light, but this technique is fundamentally limited, constraining designs to apertures of tens of microns. As such, existing approaches have not been able to increase the achievable aperture sizes without significantly reducing the numerical aperture or supported wavelength range. Other attempted solutions only suffice for discrete wavelengths or narrowband illumination.

Metasurfaces also exhibit strong geometric aberrations that have limited their utility for wide field of view (FOV) imaging. Approaches that support wide FOV typically rely on either small input apertures that limit light collection or use multiple metasurfaces, which drastically increases fabrication complexity. Moreover, these multiple metasurfaces are separated by a gap that scales linearly with the aperture, thus obviating the size benefit of meta-optics as the aperture increases.

Recently, researchers have leveraged computational imaging to offload aberration correction to post-processing software 9, 24, 25. Although these approaches enable full-color imaging metasurfaces without stringent aperture limitations, they are limited to a small FOV and the reconstructed spatial resolution is an order of magnitude below that of conventional refractive optics. Researchers have similarly proposed camera designs that utilize a single optic instead of compound stacks 26, 27, but these systems fail to match the performance of commodity imagers due to low diffraction efficiency. Moreover, the most successful approaches hinder miniaturization because of their long back focal distances of more than 10 mm. Lensless cameras instead reduce size by replacing the optics with amplitude masks, but this severely limits spatial resolution and requires long acquisition times.

In this work, we propose neural nano-optics, leveraging a learned design method that overcomes these limitations of existing techniques. In contrast to previous works that rely on hand-crafted designs, we co-optimize the metasurface and deconvolution algorithm with an end-to-end differentiable model of image formation and computational reconstruction. This model exploits a memory-efficient differentiable nano-scatterer model, as well as a novel, neural feature-based reconstruction architecture. The jointly optimized nano-optic can be fabricated at mass scale using deep ultraviolet (DUV) lithography. Our approach departs from inverse-designed meta-optics 30, 31 in that we support larger aperture sizes and directly optimize the quality of the final image as opposed to intermediate metrics such as the focal spot intensity. Although end-to-end optimization of DOEs has been explored, existing methods using phase plates assume shift-invariant systems and only support small FOVs. Furthermore, existing learned deconvolution methods are only minor variations of standard encoder-decoder architectures, such as the U-Net, and often fail to generalize to experimental measurements or handle large spatially-dependent aberrations, as found in metasurface images.

With the proposed neural nano-optics, we achieve the first high-quality, polarization-insensitive nano-optic imager for full-color (400 nm to 700 nm), wide-FOV imaging at f/2. For our 500 µm aperture, we optimized an order of magnitude more nano-scatterers than in existing achromatic meta-optics. Compared to all existing heuristically designed metasurfaces and metasurface computational imaging approaches, we outperform existing methods by an order of magnitude in reconstruction error outside the nominal wavelength range on experimental captures.

Differentiable Metasurface Proxy Model
The proposed differentiable metasurface image formation model (Fig. 1) consists of three sequential stages that utilize differentiable tensor operations: metasurface phase determination, PSF simulation and convolution, and sensor noise. In our model, polynomial coefficients that determine the metasurface phase are optimizable variables, whereas experimentally calibrated parameters characterizing the sensor readout and the sensor-metasurface distance are fixed.

The optimizable metasurface phase function φ as a function of distance r from the optical axis is given by

\phi(r) = \sum_{i=0}^{n} a_i \left( \frac{r}{R} \right)^{i}, \qquad (1)

where {a_0, ..., a_n} are optimizable coefficients, R is the phase mask radius, and n is the number of polynomial terms. We optimize the metasurface in this phase function basis as opposed to in a pixel-by-pixel manner to avoid local minima, see Supplemental Material. This phase, however, is only defined for a single, nominal design wavelength, which is a fixed hyperparameter in our optimization. While this mask alone is sufficient for modeling monochromatic light propagation, we require the phase at all target wavelengths to design for a broadband imaging scenario.

To this end, at each position in our metasurface we apply two sequential operations. The first operation is an inverse, phase-to-structure mapping that computes the scatterer geometry given the desired phase at the nominal design wavelength. With the scatterer geometry determined, we can then apply a forward, structure-to-phase mapping to calculate the phase at the remaining target wavelengths. Leveraging an effective index approximation that ensures a unique geometry for each phase shift in the 0 to 2π range, we ensure differentiability, and can directly optimize the phase coefficients by adjusting the scatterer dimensions and computing the response at different target wavelengths, see Supplemental Document.

These phase distributions differentiably determined from the nano-scatterers allow us to then calculate the PSF as a function of wavelength and field angle to efficiently model full-color image formation over the whole FOV, see Supplemental Document. Finally, we simulate sensing and readout with experimentally calibrated Gaussian and Poisson noise by using the reparameterization and score-gradient techniques to enable backpropagation, see Supplemental Document.

When compared directly against alternative computational forward simulation methods, such as finite-difference time-domain (FDTD) simulation, our technique is approximate but is more than three orders of magnitude faster and more memory efficient. For the same aperture as our design, FDTD simulation would require on the order of 30 terabytes for accurate meshing alone. Our technique instead only scales quadratically with length. This enables our entire end-to-end pipeline to achieve a memory reduction of several orders of magnitude, with metasurface simulation and image reconstruction both fitting within a few gigabytes of GPU RAM.
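To make the stages of this proxy model concrete, the following minimal sketch (not the authors' released code) shows how a polynomial phase profile of Eq. (1) can be mapped to a differentiable PSF and a noisy sensor measurement. The grid size, aperture radius, sensor distance, single-FFT Fresnel propagation step, and Gaussian-only noise level are illustrative assumptions; the full model additionally handles the structure-to-phase mapping, multiple wavelengths, field angles, and Poisson noise.

```python
import jax
import jax.numpy as jnp

# Illustrative constants (assumptions, not the fabricated design values).
N = 256                 # simulation grid
R = 250e-6              # aperture radius for a 0.5 mm aperture
z = 1e-3                # metasurface-to-sensor distance
wavelength = 511e-9     # nominal design wavelength

x = jnp.linspace(-R, R, N)
X, Y = jnp.meshgrid(x, x)
r = jnp.sqrt(X**2 + Y**2)
aperture = (r <= R).astype(jnp.float32)

def phase_profile(coeffs, r):
    """Eq. (1): phi(r) = sum_i a_i (r/R)^i, differentiable in the coefficients."""
    powers = jnp.arange(coeffs.shape[0])
    return jnp.sum(coeffs[:, None, None] * (r / R)[None] ** powers[:, None, None], axis=0)

def psf(coeffs):
    """Propagate the field behind the metasurface to the sensor (single-FFT Fresnel step)."""
    k = 2 * jnp.pi / wavelength
    field = aperture * jnp.exp(1j * (phase_profile(coeffs, r) + k * r**2 / (2 * z)))
    intensity = jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(field))) ** 2
    return intensity / jnp.sum(intensity)

def sensor(measurement, key, read_noise=1e-3):
    """Reparameterized Gaussian read noise keeps the sensor model differentiable."""
    return measurement + read_noise * jax.random.normal(key, measurement.shape)

# Gradients of any scalar figure of merit flow back to the phase coefficients.
coeffs = jnp.zeros(8)
center_energy = lambda c: jnp.sum(psf(c)[N // 2 - 8:N // 2 + 8, N // 2 - 8:N // 2 + 8])
print(jax.grad(center_energy)(coeffs))   # d(center energy) / d(a_i)
noisy_psf = sensor(psf(coeffs), jax.random.PRNGKey(0))
```

Because every operation above is a differentiable tensor operation, the same pattern extends to optimizing the coefficients against an image-quality loss rather than a focal-spot metric.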
Neural Feature Propagation and Learned Nano-Optics Design

We propose a neural deconvolution method that incorporates learned priors while generalizing to unseen test data. Specifically, we design a neural network architecture that performs deconvolution on a learned feature space instead of on raw image intensity. This technique combines both the generalization of model-based deconvolution and the effective feature learning of neural networks, allowing us to tackle image deconvolution for meta-optics with severe aberrations and spatially large PSFs. This approach generalizes well to experimental captures even when trained only in simulation, see Supplemental Document.

The proposed reconstruction network architecture comprises three stages: a multi-scale feature extractor f_FE, a propagation stage f_{Z→W} that deconvolves these features (i.e., propagates features Z to their deconvolved spatial positions W), and a decoder stage f_DE that combines the propagated features into a final image. Formally, our feature propagation network performs the following operations:

O = \underbrace{f_{\mathrm{DE}}}_{\text{Decoder}} \Big( \underbrace{f_{Z \to W}}_{\text{Feature Propagation}} \big( \underbrace{f_{\mathrm{FE}}}_{\text{Feature Extraction}}(I), \, \mathrm{PSF} \big) \Big), \qquad (2)

where I is the raw sensor measurement and O is the output image.

Both the feature extractor and decoder are constructed as fully convolutional neural networks. The feature extractor identifies features at both the native resolution and multiple scales to facilitate learning low-level and high-level features, allowing us to encode and propagate higher-level information beyond raw intensity. The subsequent feature propagation stage f_{Z→W} then propagates the features to their inverse-filtered positions using a differentiable deconvolution method. Finally, the decoder stage converts the propagated features back into image space, see Supplemental Document for architecture details. When compared against existing state-of-the-art deconvolution approaches, we achieve a substantial improvement in PSNR and mean squared error for deconvolving challenging metasurface-incurred aberrations, see Supplemental Document.
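The sketch below illustrates only the structure of Eq. (2); it is not the paper's network. The multi-scale convolutional feature extractor and decoder of the actual architecture (see Supplemental Document) are replaced by per-pixel channel-mixing layers, and the differentiable feature propagation step is illustrated with a Wiener filter. The parameter shapes and the SNR constant are assumptions.

```python
import jax
import jax.numpy as jnp

def wiener_deconvolve(channel, kernel, snr=1e-2):
    """Differentiable inverse filtering of one feature channel with the known PSF."""
    H = jnp.fft.fft2(jnp.fft.ifftshift(kernel), s=channel.shape)
    G = jnp.fft.fft2(channel)
    return jnp.real(jnp.fft.ifft2(jnp.conj(H) * G / (jnp.abs(H) ** 2 + snr)))

def feature_propagation_net(params, measurement, kernel):
    """O = f_DE( f_{Z->W}( f_FE(I), PSF ) ) with 1x1 stand-in encoder/decoder."""
    W_fe, W_de = params                                              # (3, C), (C, 3)
    Z = jax.nn.relu(jnp.einsum('hwc,cf->hwf', measurement, W_fe))    # f_FE: features
    W = jnp.stack([wiener_deconvolve(Z[..., i], kernel)              # f_{Z->W}: propagate
                   for i in range(Z.shape[-1])], axis=-1)
    return jnp.einsum('hwf,fo->hwo', W, W_de)                        # f_DE: decode to RGB

# Example call with random stand-in parameters; in practice these (and the full
# CNNs they stand in for) are learned jointly with the metasurface.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
deconv_params = (0.1 * jax.random.normal(k1, (3, 16)), 0.1 * jax.random.normal(k2, (16, 3)))
demo_psf = jnp.zeros((9, 9)).at[4, 4].set(1.0)      # delta PSF for the demo only
restored = feature_propagation_net(deconv_params, jnp.ones((64, 64, 3)), demo_psf)
print(restored.shape)                               # (64, 64, 3)
```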
Both our metasurface image formation model and our deconvolution algorithm are incorporated into a fully differentiable, end-to-end imaging chain. Our metasurface imaging pipeline allows us to apply first-order stochastic optimization methods to learn metasurface phase parameters P_META and parameters P_DECONV for our deconvolution network f_DECONV that will minimize our endpoint loss function L, which in our case is a perceptual quality metric. Our image formation model is thus defined as

O = f_{\mathrm{DECONV}}\big( \mathcal{P}_{\mathrm{DECONV}}, \, f_{\mathrm{SENSOR}}\big( I * f_{\mathrm{META}}(\mathcal{P}_{\mathrm{META}}) \big), \, f_{\mathrm{META}}(\mathcal{P}_{\mathrm{META}}) \big), \qquad (3)

where I is an RGB training image, f_META generates the metasurface PSF from P_META, ∗ is convolution, and f_SENSOR models the sensing process including sensor noise. Since our deconvolution method is non-blind, f_DECONV takes in f_META(P_META). We then solve the following optimization problem:

\{ \mathcal{P}^{*}_{\mathrm{META}}, \mathcal{P}^{*}_{\mathrm{DECONV}} \} = \underset{\mathcal{P}_{\mathrm{META}}, \, \mathcal{P}_{\mathrm{DECONV}}}{\mathrm{argmin}} \; \sum_{i=1}^{M} \mathcal{L}\big( O^{(i)}, I^{(i)} \big). \qquad (4)

The final learned parameters P*_META are used to manufacture the meta-optic and P*_DECONV determines the deconvolution algorithm, see Supplemental Material for further details.
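As a rough illustration of Eqs. (3) and (4), the following sketch wires the earlier proxy-model and feature-propagation sketches into one differentiable chain and takes a single joint gradient step. It reuses psf, sensor, coeffs, feature_propagation_net, and deconv_params from the sketches above; the perceptual loss L is replaced by a simple L2 loss and plain gradient descent stands in for the first-order stochastic optimizer, so this is a structural sketch rather than the authors' training procedure.

```python
import jax
import jax.numpy as jnp

def capture(meta_coeffs, image, key):
    """f_SENSOR( I * f_META(P_META) ): per-channel FFT convolution plus sensor noise."""
    kernel = psf(meta_coeffs)                              # from the proxy-model sketch
    K = jnp.fft.fft2(jnp.fft.ifftshift(kernel), s=image.shape[:2])
    blurred = jnp.stack([jnp.real(jnp.fft.ifft2(jnp.fft.fft2(image[..., c]) * K))
                         for c in range(image.shape[-1])], axis=-1)
    return sensor(blurred, key)

def loss_fn(params, image, key):
    """Eq. (3) followed by one summand of Eq. (4) for a single training image."""
    meta_coeffs, deconv_params = params                    # (P_META, P_DECONV)
    measurement = capture(meta_coeffs, image, key)
    recon = feature_propagation_net(deconv_params, measurement, psf(meta_coeffs))
    return jnp.mean((recon - image) ** 2)                  # stand-in for the perceptual loss L

# One joint gradient step over metasurface and deconvolution parameters.
train_image = jnp.ones((N, N, 3))                          # placeholder RGB training image
params = (coeffs, deconv_params)                           # reuse values defined above
grads = jax.grad(loss_fn)(params, train_image, jax.random.PRNGKey(1))
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```

Because the gradient flows through both the reconstruction and the PSF simulation, the optic and the algorithm are updated against the same endpoint image-quality objective.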
Imaging Demonstration
High-quality, full-color image reconstructions using our neural nano-optic are shown in Fig. 2 and the Supplemental Document. We perform comparisons against a traditional hyperbolic meta-optic designed for 511 nm and the state-of-the-art cubic meta-optic from Colburn et al. Additional comparisons against alternative single-optic and meta-optic designs are shown in the Supplemental Document. Ground truth images are acquired using a six-element compound optic that is 550,000× larger in volume than the meta-optics. Our full computational reconstruction pipeline runs at real-time rates and requires only 58 ms to process a 720 px × 720 px RGB capture.

The traditional hyperbolic meta-optic experiences severe chromatic aberrations at longer and shorter wavelengths. This is observed in the heavy red blurring in Fig. 2(A) and the washed out blue color in Fig. 2(C). The cubic meta-optic maintains better consistency across color channels but suffers from artifacts owing to its large, asymmetric PSF. In contrast, we demonstrate high-quality images without these aberrations, which are observable in the fine details in the fruits in Fig. 2(A), the patterns on the lizard in Fig. 2(B), and the flower petals in Fig. 2(C). We quantitatively validate the proposed neural nano-optic by measuring reconstruction error on an unseen test set of natural images, on which we obtain an order of magnitude lower mean squared error than existing approaches, see Supplemental Document. In addition to natural image reconstruction, we also measured the spatial resolution using standard test charts, see Supplemental Document. Our nano-optic imager achieves an image-side spatial resolution of 214 lp/mm across all color channels at a 120 mm object distance. We improve spatial resolution by an order of magnitude over the previous state of the art by Colburn et al., which achieved 30 lp/mm.

Characterizing Nano-Optics Performance
Through our optimization process, our meta-optic learns to produce compact spatial PSFs that minimize chromatic aberrations across all color channels. Unlike designs that exhibit a sharp focus for a single wavelength but significant aberrations at other wavelengths, our optimized design strikes a balance across wavelengths to facilitate full-color imaging. Furthermore, the learned metasurface avoids the spatially large PSFs used previously by Colburn et al. for computational imaging.

After optimization, we fabricated our neural meta-optics (Fig. 3), as well as several heuristic designs for comparison, see Supplemental Document. Note that commercial large-scale production of our nano-optic can be performed using high-throughput processes based on DUV lithography, which is standard for mature industries such as integrated circuits. The simulated and experimental PSFs are shown in Fig. 3 and are in strong agreement, validating the physical accuracy of the proxy metasurface model. To account for manufacturing imperfections, we perform a PSF calibration step where we capture the spatial PSFs using the fabricated optics. We then finetune our deconvolution network by replacing the proxy-based metasurface simulator with the captured PSFs. The finetuned network is deployed on experimental captures using the setup shown in Fig. S8. This finetuning calibration step does not train on experimental captures; we only require the measured PSFs, without requiring experimental collection of a vast image dataset.

We observe that the PSF for our optimized meta-optic exhibits a combination of compact shape and minimal variance across field angles, as expected for our design. PSFs for a traditional hyperbolic meta-optic (511 nm) instead have significant spatial variation across field angles and severe chromatic aberrations that cannot be compensated through deconvolution. While the cubic design from Colburn et al. does exhibit spatial invariance, its asymmetry and large spatial extent introduce severe artifacts that reduce image quality. See Fig. 3 and the Supplemental Document for comparisons of the traditional meta-optic and Colburn et al. against ours. We also show corresponding modulation transfer functions (MTFs) for our design in Fig. 3. The MTF does not change appreciably with incidence angle and also preserves a broad range of spatial frequencies across the visible spectrum.

Discussion
In this work, we present a new paradigm for achieving high-quality, full-color, wide-FOV imaging using neural nano-optics. Specifically, the proposed learned imaging method allows for an order of magnitude lower reconstruction error on experimental data than existing works. The key enablers of this result are our differentiable meta-optical image formation model and novel deconvolution algorithm. Combined together as a differentiable end-to-end model, we jointly optimize the full computational imaging pipeline with the only target metric being the quality of the deconvolved RGB image, sharply deviating from existing methods that penalize focal spot size in isolation from the reconstruction method.

We have demonstrated, for the first time, the viability of meta-optics for high-quality imaging in full color, over a wide FOV. No existing meta-optic demonstrated to date approaches a comparable combination of image quality, large aperture size, low f-number, wide fractional bandwidth, wide FOV, and polarization insensitivity (see Supplemental Document), and the proposed method could scale to mass production. Furthermore, we demonstrate image quality on par with a bulky, six-element commercial compound lens even though our design volume is 550,000× lower and utilizes a single metasurface.

We have designed neural nano-optics for a dedicated imaging task, but we envision extending our work towards flexible imaging with reconfigurable nanophotonics for diverse tasks, ranging from extended depth of field to classification or object detection. We believe that the proposed method takes an essential step towards ultra-small cameras that may enable novel applications in endoscopy, brain imaging, or in a distributed fashion on scene surfaces.

References
1. Engelberg, J. & Levy, U. The advantages of metalenses over diffractive lenses. Nature Communications (2020).
2. Lin, D., Fan, P., Hasman, E. & Brongersma, M. L. Dielectric gradient metasurface optical elements. Science, 298–302 (2014).
3. Wetzstein, G. et al. Inference in artificial intelligence with deep optics and photonics. Nature 588, 39–47 (2020).
4. Peng, Y. et al. Learned large field-of-view imaging with thin-plate optics. ACM Transactions on Graphics (TOG), 1–14 (2019).
5. Yu, N. & Capasso, F. Flat optics with designer metasurfaces. Nature Materials 13, 139–150 (2014).
6. Aieta, F. et al. Aberration-free ultrathin flat lenses and axicons at telecom wavelengths based on plasmonic metasurfaces. Nano Letters, 4932–4936 (2012).
7. Arbabi, A., Horie, Y., Bagheri, M. & Faraon, A. Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. Nature Nanotechnology 10, 937–943 (2015).
8. Zheng, G. et al. Metasurface holograms reaching 80% efficiency. Nature Nanotechnology, 308–312 (2015).
9. Colburn, S., Zhan, A. & Majumdar, A. Metasurface optics for full-color computational imaging. Science Advances (2018).
10. Arbabi, A. et al. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations. Nature Communications (2016).
11. Avayu, O., Almeida, E., Prior, Y. & Ellenbogen, T. Composite functional metasurfaces for multispectral achromatic optics. Nature Communications (2017).
12. Aieta, F., Kats, M. A., Genevet, P. & Capasso, F. Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science, 1342–1345 (2015).
13. Khorasaninejad, M. et al. Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging. Science, 1190–1194 (2016).
14. Wang, S. et al. A broadband achromatic metalens in the visible. Nature Nanotechnology, 227–232 (2018).
15. Shrestha, S., Overvig, A. C., Lu, M. Y., Stein, A. & Yu, N. Broadband achromatic dielectric metalenses. Light: Science & Applications (2018).
16. Ndao, A. et al. Octave bandwidth photonic fishnet-achromatic-metalens. Nature Communications (2020).
17. Chen, W. et al. A broadband achromatic metalens for focusing and imaging in the visible. Nature Nanotechnology, 220–226 (2018).
18. Arbabi, E., Arbabi, A., Kamali, S. M., Horie, Y. & Faraon, A. Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces. Optica, 625–632 (2017).
19. Khorasaninejad, M. et al. Achromatic metalens over 60 nm bandwidth in the visible and metalens with reverse chromatic dispersion. Nano Letters, 1819–1824 (2017).
20. Wang, S. et al. Broadband achromatic optical metasurface devices. Nature Communications, 1–9 (2017).
21. Presutti, F. & Monticone, F. Focusing on bandwidth: achromatic metalens limits. Optica, 624–631 (2020).
22. Wang, B. et al. Visible-frequency dielectric metasurfaces for multiwavelength achromatic and highly dispersive holograms. Nano Letters 16, 5235–5240 (2016).
23. Shalaginov, M. Y. et al. Single-layer planar metasurface lens. In Frontiers in Optics + Laser Science APS/DLS, FM4C.1 (2019).
24. Colburn, S. & Majumdar, A. Simultaneous achromatic and varifocal imaging with quartic metasurfaces in the visible. ACS Photonics, 120–127 (2020).
25. Heide, F. et al. High-quality computational imaging through simple lenses. ACM Transactions on Graphics (TOG), 149:1–149:14 (2013).
26. Peng, Y. et al. Computational imaging using lightweight diffractive-refractive optics. Optics Express 23, 31393–31407 (2015).
27. Peng, Y., Fu, Q., Heide, F. & Heidrich, W. The diffractive achromat full spectrum computational imaging with diffractive optics. ACM Transactions on Graphics (TOG), 1–11 (2016).
28. Heide, F., Fu, Q., Peng, Y. & Heidrich, W. Encoded diffractive optics for full-spectrum computational imaging. Scientific Reports, 33543 (2016).
29. White, A. D., Khial, P. P., Salehi, F., Hassibi, B. & Hajimiri, A. A silicon photonics computational lensless active-flat-optics imaging system. Scientific Reports (2020).
30. Mansouree, M. et al. Multifunctional 2.5D metastructures enabled by adjoint optimization. Optica, 77–84 (2020).
31. Chung, H. & Miller, O. D. High-NA achromatic metalenses by inverse design. Optics Express 28, 6945–6965 (2020).
32. Stork, D. G. & Gill, P. R. Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors. International Journal on Advances in Systems and Measurements, 4 (2014).
33. Sitzmann, V. et al. End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Transactions on Graphics (TOG), 1–13 (2018).
34. Sun, Q., Tseng, E., Fu, Q., Heidrich, W. & Heide, F. Learning rank-1 diffractive optics for single-shot high dynamic range imaging. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020).
35. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI (2015).
Code and Data Availability
The code and data used to generate the findings of this study will be made public on GitHub. Throughout the review process, code and data are attached to this submission in the zip file named "Neural Nano-Optics Code.zip".
Acknowledgements
This research was supported by NSF-1825308, DARPA (Contract no. 140D0420C0060), the UW Reality Lab, Facebook, Google, Futurewei, and Amazon. A.M. is also supported by a Washington Research Foundation distinguished investigator award. Part of this work was conducted at the Washington Nanofabrication Facility / Molecular Analysis Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington, with partial support from the National Science Foundation via awards NNCI-2025489 and NNCI-1542101.
Author Contributions
E.T. and F.H. developed the novel feature-space deconvolution technique, integrated the metasurface model and deconvolution framework, performed the final design optimizations, and led the manuscript writing. S.C. developed the differentiable metasurface and sensor noise model, led the experiment, and assisted E.T. in writing the manuscript. J.W. fabricated all the devices and assisted in the experiment. L.H. developed the scripts for automated image capture and assisted in the experiment. S.B. assisted in writing the manuscript. A.M. and F.H. supervised the project and assisted in writing the manuscript.
Competing Interests
A.M. is co-founder of Tunoptix Inc., which is commercializing technology discussed in this manuscript.

Supplementary Information

Supplementary Information is attached to this submission.
Correspondence
Correspondence should be addressed to Felix Heide.
[Figure 1 panel annotations: SiN nanoposts on SiO2 above the sensor (dimensions of 350 nm and 100 to 290 nm); scale bars 100 μm, 10 μm, and 500 nm; end-to-end pipeline diagram with scene, meta-optic, sensor image, feature deconvolution, deconvolved result, loss, and forward/backward passes through the proxy model.]
Figure 1:
Our learned, ultrathin meta-optic as shown in (A) is 500 µm in thickness and diameter, allowing for the design of a miniature camera. The manufactured optic is shown in (B). A zoom-in is shown in (C) and nanopost dimensions are shown in (D). Our end-to-end imaging pipeline shown in (E) is composed of the proposed efficient metasurface image formation model and the feature-based deconvolution algorithm. From the optimizable phase profile, our differentiable model produces spatially-variant PSFs, which are then patch-wise convolved with the input image to form the sensor measurement. The sensor reading is then deconvolved using our algorithm to produce the final image.

[Figure 2 column labels: Colburn et al. 2018, Hyperboloid (511 nm), Neural Nano-Optics, Ground Truth using Compound Optic; panels A, B, C.]
Figure 2:
Experimental imaging results are shown in (A), (B), and (C), with insets included below each row. Compared to existing state-of-the-art designs, the proposed neural nano-optics produces high-quality wide-FOV reconstructions corrected for aberrations. We compare our reconstructions to ground truth acquisitions using a high-quality, six-element compound refractive optic, and we demonstrate accurate reconstructions even though our meta-optic volume is 550,000× lower than the compound optic.
[Figure 3 panel annotations: simulated phase pattern; electron-microscope capture of the fabricated optic (scale bars 100 μm and 25 μm); measured and simulated PSFs and simulated MTFs at 462 nm, 511 nm, and 606 nm for 0°, 10°, and 20° field angles, comparing the neural nano-optics, Colburn et al., and the hyperboloid designs.]
Figure 3:
The proposed learned meta-optic is fabricated using electron-beam lithography and dry etching, and the corresponding measured PSFs, simulated PSFs, and simulated MTFs are shown. Before capturing images, we first use the fabricated optics to capture spatial PSFs to account for fabrication inaccuracies. Nevertheless, the match between the simulated PSFs and the captured PSFs demonstrates the accuracy of our metasurface proxy model. In contrast, the PSFs of the traditional meta-optic and the cubic design proposed by Colburn et al. demonstrate severe chromatic aberrations at the red and blue wavelengths and across the different field angles. The proposed learned design maintains consistent PSF shape across the visible spectrum and for all field angles across the FOV, facilitating downstream deconvolution and the final image reconstruction.