Deep-STORM: super-resolution single-molecule microscopy by deep learning
Elias Nehme, Lucien E. Weiss, Tomer Michaeli, Yoav Shechtman
Department of Electrical Engineering, Technion, 32000 Haifa, Israel
Department of Biomedical Engineering, Technion, 32000 Haifa, Israel
*Corresponding author: [email protected]
Abstract
We present an ultra-fast, precise, parameter-free method, which we term Deep-STORM, for obtaining super-resolution images from stochastically blinking emitters, such as fluorescent molecules used for localization microscopy. Deep-STORM uses a deep convolutional neural network that can be trained on simulated data or experimental measurements, both of which are demonstrated. The method achieves state-of-the-art resolution under challenging signal-to-noise conditions and high emitter densities, and is significantly faster than existing approaches. Additionally, no prior information on the shape of the underlying structure is required, making the method applicable to any blinking dataset. We validate our approach by super-resolution image reconstruction of simulated and experimentally obtained data.
In conventional microscopy, the spatial resolution of an image is bounded by Abbe's diffraction limit, corresponding to approximately half the optical wavelength. Super-resolution methods, e.g. stimulated emission depletion (STED) [1, 2], structured illumination microscopy (SIM) [3–5], and localization microscopy, namely photo-activated localization microscopy ((F)PALM) [6, 7] and stochastic optical reconstruction microscopy (STORM) [8], have revolutionized biological imaging in the last decade, enabling the observation of cellular structures at the nanoscale [9]. Localization microscopy relies on acquiring a sequence of diffraction-limited images, each containing point-spread functions (PSFs) produced by a sparse set of emitting fluorophores. Next, the emitters are localized with high precision. By combining all of the recovered emitter positions from each frame, a super-resolved image is produced with resolution typically an order of magnitude better than the diffraction limit (down to tens of nanometers).

In localization microscopy, regions with a high density of overlapping emitters pose an algorithmic challenge. This emitter-sparsity constraint leads to a long acquisition time (seconds to minutes), which limits the ability to capture fast dynamics of sub-wavelength processes within live cells. Various algorithms have been developed to handle overlapping PSFs. Existing classes of algorithms are based on sequential fitting of emitters, followed by subtraction of the model PSF [10–13]; blinking statistics [14–16]; sparsity [17–23]; multi-emitter maximum-likelihood estimation [24]; or even single-image super-resolution by dictionary learning [25, 26]. While successful localization of densely spaced emitters has been demonstrated, all existing methods suffer from two fundamental drawbacks: data-processing time and sample-dependent parameter tuning.
Even accelerated sparse-recovery methods such as CEL0 [21], which employs the fast FISTA algorithm [27], still involve a time-consuming iterative procedure, and scale poorly with the recovered grid size. In addition, current methods rely on parameters that balance different tradeoffs in the recovery process. These need to be tuned carefully through trial and error to obtain satisfactory results, requiring user expertise and tweaking time.

Here we demonstrate precise, fast, parameter-free, super-resolution image reconstruction by harnessing deep learning. Convolutional neural networks have shown impressive results in a variety of image-processing and computer-vision tasks, such as single-image resolution enhancement [28–32] and segmentation [33–35]. In this work, we employ a fully convolutional neural network for super-resolution image reconstruction from dense fields of overlapping emitters. Our method, dubbed Deep-STORM, does not explicitly localize emitters. Instead, it creates a super-resolved image from the raw data directly. The net produces images with reconstruction resolution comparable to or better than existing methods; furthermore, the method is extremely fast, and our software can leverage GPU computation for further enhanced speed. Moreover, Deep-STORM is parameter-free, requiring no expertise from the user, and is easily implemented for any single-molecule dataset. Importantly, Deep-STORM is general and does not rely on any prior knowledge of the structure in the sample, unlike recently demonstrated single-shot image enhancement by deep learning [36].
In short, Deep-STORM utilizes an artificial neural net that receives a set of frames of (possibly very dense) point emitters and outputs a set of super-resolved images (one per frame), based on prior training performed on simulated or experimentally obtained images with known emitter positions. The output images are then summed to produce a single super-resolved image.
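In the spirit of this pipeline, a minimal fully convolutional encoder-decoder can be sketched in Keras; the layer counts, filter numbers, and activations here are illustrative assumptions for a sketch, not the authors' exact architecture:

```python
# Illustrative encoder-decoder sketch in Keras/TensorFlow. Filter counts,
# depths, and activations are assumptions chosen for the sketch only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_net(input_shape=(208, 208, 1)):
    x_in = layers.Input(shape=input_shape)
    x = x_in
    # Encoder: 3x3 convolutions interleaved with 2x2 max pooling.
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    # Decoder: upsampling + convolution back to the input resolution.
    for filters in (128, 64, 32):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # Depth-reducing 1x1 convolution -> single-channel super-resolution frame.
    x_out = layers.Conv2D(1, 1, padding="same", activation="relu")(x)
    return models.Model(x_in, x_out)

model = build_net()
```

Note that the output has the same spatial size as the (upsampled) input, so each pixel of the prediction maps directly onto the high-resolution grid.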
The net architecture is based on a fully convolutional encoder-decoder network, and was inspired by previous work on cell counting [37]. The network (Fig. 1) first encodes the input intensity image into a dense, aggregated feature representation through three 3 × 3 convolutional layers, employing batch normalization [38] and leaky rectified-linear-unit activations [39], interleaved with max-pooling layers. The features are then decoded by deconvolution layers back to the original resolution, and the final pixel-wise prediction (the super-resolution frame) is created using a depth-reducing 1 × 1 convolutional layer.

Fig. 1. Network architecture. A set of low-resolution diffraction-limited images of stochastically blinking emitters is fed into the network to produce reconstructed high-resolution images. The resulting outputs are then summed to generate the final super-resolved image.

Given the camera specifications, the PSF model, the approximate signal-to-noise ratio (SNR), and the expected emitter density, twenty 64 × 64 pixel images containing randomly positioned emitters are simulated using the ImageJ [40, 41] ThunderSTORM plugin [42]. From each frame we extract 500 random 26 × 26 regions and their respective ground-truth xy emitter positions. To generate the final training examples, we upsample each region by a factor of 8 and project the appropriate emitter positions on the high-resolution grid. The result is a set of 10 K pairs of upsampled low-resolution regions (208 × 208 pixels) alongside images with spikes at the ground-truth positions, used as training examples. Each region is normalized using the mean and averaged standard deviation (per region) of the dataset, without additional data augmentation. An example training input image and the corresponding output (after training) are shown in Figure 2.
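The training-pair generation described above can be sketched as follows; the PSF model (an isotropic Gaussian), its width, and the emitter count per region are simplified, illustrative assumptions (the paper simulates frames with ThunderSTORM rather than this toy renderer):

```python
# Hypothetical sketch of one training pair: a diffraction-limited region and
# its spike image on an 8x-upsampled grid. PSF width, emitter count, and the
# nearest-neighbor upsampling are illustrative assumptions.
import numpy as np

def make_training_pair(size=26, upsample=8, n_emitters=5, psf_sigma=1.2,
                       rng=np.random.default_rng(0)):
    hi = size * upsample                         # 208 x 208 high-resolution grid
    pos = rng.uniform(0, size, (n_emitters, 2))  # ground-truth xy positions [px]
    # Low-resolution image: sum of Gaussian PSFs evaluated on the coarse grid.
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    low = np.zeros((size, size))
    for x, y in pos:
        low += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * psf_sigma ** 2))
    # Upsample by a factor of 8 (pixel replication) to 208 x 208.
    low_up = np.kron(low, np.ones((upsample, upsample)))
    # Spike image: delta functions at the projected high-resolution positions.
    spikes = np.zeros((hi, hi))
    idx = np.clip((pos * upsample).astype(int), 0, hi - 1)
    spikes[idx[:, 1], idx[:, 0]] = 1.0
    return low_up, spikes

low_up, spikes = make_training_pair()
```

Per-region normalization and noise (omitted here) would be applied on top of `low_up` before training.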
Unlike typical localization-microscopy approaches, Deep-STORM directly outputs the super-resolved images rather than a localization list. Therefore, as a loss function for training the net, we adopt a regression approach. Specifically, we measure the squared ℓ2 distance between the network's prediction and the ground-truth image (consisting of delta functions at the emitter positions), both convolved with a small 2D Gaussian kernel. To promote sparsity of the network's output, we also introduce an ℓ1 penalizer. Let x_i be the image with delta functions at the ground-truth positions, x̂_i be the network's prediction, g the Gaussian kernel, N the number of images in the training set, and ⊛ denote convolution; then the resulting loss function is:

ℓ(x, x̂) = (1/N) Σ_{i=1}^{N} ( ||x̂_i ⊛ g − x_i ⊛ g||_2^2 + ||x̂_i||_1 )    (1)

It is possible to incorporate a regularization parameter in the ℓ1 term controlling the desired sparsity level; however, we observed high robustness of the resulting predictions to such a parameter. Hence, we chose to keep Deep-STORM parameter-free. The network was implemented in Keras [43] with a TensorFlow [44] backend. We trained the network for 100 epochs on batches of 16 samples using the Adam [45] optimizer with the default parameters and a small Gaussian kernel g.

Fig. 2. Simulated dense emitters. (a) Low-resolution image. Scale bar is 0.5 µm. (b) Deep-STORM prediction on a 12.5 nm grid with ground-truth emitter locations overlaid as cross marks on top.

Quantum dot (QD) samples were prepared by diluting 705 nm-emitting QDs (Invitrogen) 1:1000 v/v in 1% poly(vinyl alcohol) (Mowiol 8-88, Sigma-Aldrich), then spin coating onto a standard glass coverslip (no. 1.5, Fisher Scientific). Images were recorded using Nikon Imaging Software, which controlled a standard inverted microscope (Eclipse TI2, Nikon) with a 405 nm light source (iChrome MLE, Toptica). Fluorescence emission from the QDs was collected using a high-numerical-aperture (1.49), 100× objective lens (CFI Apochromat TIRF 100XC Oil, Nikon), chromatically filtered to remove background (ZT488rdc & ET500LP, Chroma), then captured with a 400 ms exposure time on an sCMOS camera (95B, Photometrics). To achieve a variety of SNRs and emitter densities, images were taken with various laser powers and combined in post-processing.

We validated Deep-STORM on both simulated and experimental data. All microtubule reconstructions were obtained on a grid with a 12.5 nm pixel size, and QD reconstructions were obtained on a grid with a 13.75 nm pixel size.
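A minimal TensorFlow implementation of the loss in Eq. (1) could look as follows; the kernel size and standard deviation of g are illustrative assumptions:

```python
# Sketch of the loss in Eq. (1): squared L2 distance between the Gaussian-
# blurred prediction and the Gaussian-blurred spike image, plus an L1 term
# promoting sparsity. The 7x7 kernel and sigma=1.0 are illustrative choices.
import numpy as np
import tensorflow as tf

def gaussian_kernel(size=7, sigma=1.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    # Shape (H, W, in_channels, out_channels) as expected by tf.nn.conv2d.
    return tf.constant(g[:, :, None, None], dtype=tf.float32)

KERNEL = gaussian_kernel()

def deep_storm_loss(y_true, y_pred):
    # Convolve both images (NHWC) with the small 2D Gaussian kernel g.
    blur_true = tf.nn.conv2d(y_true, KERNEL, strides=1, padding="SAME")
    blur_pred = tf.nn.conv2d(y_pred, KERNEL, strides=1, padding="SAME")
    l2 = tf.reduce_sum(tf.square(blur_pred - blur_true), axis=[1, 2, 3])
    l1 = tf.reduce_sum(tf.abs(y_pred), axis=[1, 2, 3])  # sparsity penalty
    return tf.reduce_mean(l2 + l1)                      # mean over the batch
```

A function of this signature can be passed directly as the `loss` argument of `model.compile` in Keras.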
In order to estimate the expected resolution of the net's output, we simulated reconstruction of a synthetic structure of horizontal stripes at decreasing separations, at various emitter densities, using nets that were trained accordingly, for a reasonable single-molecule-level SNR of 1000 signal photons and 10 background photons per pixel (Fig. 3).

Fig. 3. Resolution and emitter density (simulation). (a) Diffraction-limited image of horizontal lines, scale bar 500 nm. (b) Simulated single frames of emitters at various densities with a mean of 10 background photons per pixel and 1000 signal photons per emitter. (c) The ground-truth positions of simulated emitters. (d) Deep-STORM reconstructed images. (e) Sum along the horizontal axis of the reconstruction intensities.

Notably, the minimal resolvable distance between stripes increases as a function of emitter density, ranging from at least 19 nm for 1 emitter per µm² to 31 nm for 9 emitters per µm². A similar resolution analysis for various SNR values is included in the Supplementary Information.

Next, we tested Deep-STORM on super-resolution data and benchmarked it against a recently developed high-performance multi-emitter fitting algorithm (CEL0 [21]). First, we reconstructed a simulated microtubule dataset available on the EPFL SMLM challenge website [47] (Fig. 4). The optimal regularization parameter for CEL0 was set empirically; reconstruction quality was quantified by the normalized mean squared error, NMSE(x̂, x) = ||x̂ − x||_2^2 / ||x||_2^2 × 100%. Next, to train on experimental data, we acquired images of sparse, randomly distributed quantum dots (a total of 1560 emitters), then localized them with high precision using ThunderSTORM [42]. The sparse frames were next cropped into smaller regions and

Fig. 4.
Simulated microtubules. (a) Sum of the acquisition stack. Scale bar is 1 µm. (b) Ground truth. (c) Reconstruction by the CEL0 method. (d) Reconstruction by Deep-STORM. (e)-(f) Magnified views of two selected regions. Scale bars are 0.5 µm.

Fig. 5.
Reconstruction accuracy. (a) Ground-truth image of simulated microtubules. Scale bar is 1 µm. (b) Merged reconstruction with the ground truth. Red shows the ground truth, green corresponds to the recovery result, and yellow marks their overlap. Note that CEL0 (left) does not follow the twisted shape in all places (white triangles), while Deep-STORM (right) better recovers the underlying structure.

summed to generate dense regions for training (1200 regions) or evaluation (360 regions). Specifically, we chose 8 random regions at a time and summed them. Notably, by combining and shifting portions of only 100 images, we produced a library of 10 K summed regions that was used to train the network, and 3 K for testing. The resulting imaging conditions were challenging: the emitter density of the regions was around 2 emitters per µm², there were 2500 mean signal photons per emitter, and total additive Gaussian noise with a standard deviation of σ = 20 photons per pixel. In the 3 K regions reserved for evaluation, Deep-STORM correctly identified 96% of the emitters localized by ThunderSTORM prior to combining frames, with a low false-positive rate of 1.6% (Fig. 7). In these conditions, Deep-STORM generates super-resolved images containing small "blobs" around the recovered positions. In addition, we found that in case of a mismatch in SNR, it is preferable to train on lower-background examples to prevent a high false-positive rate (Supplementary Information).

Deep-STORM not only yields image-reconstruction results that are comparable to or better than leading algorithms, but also does so significantly faster (Table 1). The simulated dataset contains emitters at a mean density of 5.48 emitters per µm². The experimental dataset consists of 500 frames with a mean density of 6.35 emitters per µm², approximated using the number of emitters recovered by CEL0. Deep-STORM exhibits significantly superior runtime, especially when introducing GPU acceleration.

Fig. 6.
Experimentally measured microtubules. (a) Sum of the acquisition stack. Scale bar is 2 µm. (b) Reconstruction by the CEL0 method. (c) Reconstruction by Deep-STORM. (d)-(e) Magnified views of two selected regions. Scale bars are 0.5 µm. (f) The width projection of the highlighted yellow region. The attained FWHM (black triangles) was 61 nm for CEL0 and 67 nm for Deep-STORM. The black line shows the diffraction-limited projection.

Fig. 7.
Quantum dot experimental data. (a) Acquired low-resolution image. Scale bar is 1 µm. (b) Deep-STORM reconstruction with ground-truth emitter positions (red crosses). (c) Magnified view of a selected region in (b).

Table 1.
Runtime comparison

Dataset   Grid size    CEL0, CPU [s]   FALCON, CPU / GPU [s]   Deep-STORM, CPU / GPU [s]
Sim.      512 × 512    18677           1465 / 122              123 / 4
Exp.      1024 ×

Since the introduction of single-molecule localization microscopy, numerous algorithms have been developed to reconstruct super-resolved images from movies of stochastically blinking emitters. In particular, considerable effort has been invested in solving the high-density emitter-fitting problem. Indeed, current methods for multi-emitter fitting produce high-quality images; however, this comes at a high computational cost, i.e. runtime, as well as frequently necessitating parameter tuning. In this work, we have presented a fast, precise, and parameter-free method for super-resolution imaging from localization-microscopy-type data. Deep-STORM uses a convolutional neural network trained on easily simulated or experimental data.

Our experiments show that the net used in this work performs well up to densities of several emitters per µm², which is similar to leading multi-emitter fitting methods after tuning their parameters accordingly. We note that, in general, the maximal allowable density would also depend on SNR. Notably, the main reason deep learning is highly suitable for the application presented in this work is the simplicity with which training data can be generated: single-molecule images with realistic noise models are straightforward to simulate in the large numbers often required for deep learning.

Our simulations show that Deep-STORM exhibits high robustness to the emitter density and SNR used for training (Supplementary Information); nevertheless, in order to further increase performance for cases such as time-varying emitter densities or signal/background levels, the following simple generalization can be considered: since training of the net is performed offline, a set of nets for various SNR and density values can be pre-trained once.
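The idea of pre-training nets on a grid of conditions and then routing each frame to the closest one can be sketched as follows; the per-frame SNR/density estimators and the (SNR, density) grid are hypothetical placeholders, not part of the paper:

```python
# Hypothetical sketch: route each captured frame to the pre-trained net whose
# (SNR, density) training conditions best match quick per-frame estimates.
# Both estimators below are crude illustrative stand-ins.
import numpy as np

def estimate_conditions(frame, background=10.0):
    """Crude per-frame estimates of SNR and emitter density (illustrative)."""
    signal = max(frame.max() - background, 1e-6)
    snr = signal / np.sqrt(signal + background)   # shot-noise-limited estimate
    # Fraction of pixels significantly above background, as a density proxy.
    density = float((frame > background + 3 * np.sqrt(background)).mean())
    return snr, density

def select_net(frame, nets):
    """nets: dict mapping (snr, density) training conditions -> model."""
    snr, density = estimate_conditions(frame)
    key = min(nets, key=lambda k: (k[0] - snr) ** 2 + (k[1] - density) ** 2)
    return nets[key]
```

Because the selection is a cheap nearest-neighbor lookup, it adds negligible overhead per frame compared to the network inference itself.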
Then, in the reconstruction stage, a fast optimal selection step can be applied to each captured frame, routing it to the best net given the estimated SNR and emitter density of the current frame.

Although Deep-STORM uses localization-microscopy-type movies to produce a super-resolved image, it is not a localization-based technique. Localization microscopy is based on the additional information inherent in blinking molecules. However, as was demonstrated by other techniques, e.g. SOFI [14], extracting this information does not necessarily require compiling a list of molecular positions. Deep-STORM implicitly uses this additional information content to directly reconstruct a super-resolved image. The technique combines state-of-the-art resolution enhancement, unprecedented speed, and high flexibility (parameter-free operation). This combination produces a technique capable of video-rate analysis of super-resolution localization-microscopy data that requires no expertise from the end user, overcoming some of the most significant limitations of existing localization methods.

References

[1] S. W. Hell and J. Wichmann, "Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy," Opt. Lett., vol. 19, pp. 780–782, 1994.
[2] T. A. Klar and S. W. Hell, "Subdiffraction resolution in far-field fluorescence microscopy," Opt. Lett., vol. 24, pp. 954–956, 1999.
[3] M. A. A. Neil, R. Juškaitis, and T. Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope," Opt. Lett., vol. 22, pp. 1905–1907, 1997.
[4] W. Lukosz and M. Marchand, "Optischen Abbildung unter Überschreitung der beugungsbedingten Auflösungsgrenze," Optica Acta: International Journal of Optics, vol. 10, no. 3, pp. 241–255, 1963.
[5] M. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," Journal of Microscopy, vol. 198, no. 2, pp. 82–87, 2000.
[6] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science, 2006.
[7] S. T. Hess, T. P. Girirajan, and M. D. Mason, "Ultra-high resolution imaging by fluorescence photoactivation localization microscopy," Biophysical Journal, vol. 91, no. 11, pp. 4258–4272, 2006.
[8] M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nature Methods, vol. 3, no. 10, pp. 793–795, 2006.
[9] S. J. Sahl and W. Moerner, "Super-resolution fluorescence imaging with single molecules," Current Opinion in Structural Biology, vol. 23, no. 5, pp. 778–787, 2013.
[10] J. Högbom, "Aperture synthesis with a non-regular distribution of interferometer baselines," Astronomy and Astrophysics Supplement Series, vol. 15, pp. 417–426, 1974.
[11] A. Sergé, N. Bertaux, H. Rigneault, and D. Marguet, "Dynamic multiple-target tracing to probe spatiotemporal cartography of cell membranes," Nature Methods, vol. 5, no. 8, pp. 687–694, 2008.
[12] X. Qu, D. Wu, L. Mets, and N. F. Scherer, "Nanometer-localized multiple single-molecule fluorescence microscopy," Proceedings of the National Academy of Sciences, vol. 101, no. 31, pp. 11298–11303, 2004.
[13] M. P. Gordon, T. Ha, and P. R. Selvin, "Single-molecule high-resolution imaging with photobleaching," Proceedings of the National Academy of Sciences, vol. 101, pp. 6462–6465, 2004.
[14] T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, "Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI)," Proceedings of the National Academy of Sciences, vol. 106, no. 52, pp. 22287–22292, 2009.
[15] S. Cox, E. Rosten, J. Monypenny, T. Jovanovic-Talisman, D. T. Burnette, J. Lippincott-Schwartz, G. E. Jones, and R. Heintzmann, "Bayesian localization microscopy reveals nanoscale podosome dynamics," Nature Methods, vol. 9, no. 2, pp. 195–200, 2012.
[16] N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, "Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations," Nature Communications, vol. 7, pp. 1–9, 2016.
[17] S. J. Holden, S. Uphoff, and A. N. Kapanidis, "DAOSTORM: an algorithm for high-density super-resolution microscopy," Nature Methods, vol. 8, no. 4, pp. 279–280, 2011.
[18] L. Zhu, W. Zhang, D. Elnatan, and B. Huang, "Faster STORM using compressed sensing," Nature Methods, vol. 9, no. 7, pp. 721–723, 2012.
[19] A. Barsic, G. Grover, and R. Piestun, "Three-dimensional super-resolution and localization of dense clusters of single molecules," Scientific Reports, vol. 4, pp. 1–8, 2014.
[20] J. Min, C. Vonesch, H. Kirshner, L. Carlini, N. Olivier, S. Holden, S. Manley, J. C. Ye, and M. Unser, "FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data," Scientific Reports, vol. 4, no. 1, p. 4577, 2015.
[21] S. Gazagnes, E. Soubies, and L. Blanc-Féraud, "High density molecule localization for super-resolution microscopy using CEL0 based sparse approximation," ISBI 2017 - IEEE International Symposium on Biomedical Imaging, no. 1, p. 4, 2017.
[22] S. Hugelier, J. J. De Rooi, R. Bernex, S. Duwé, O. Devos, M. Sliwa, P. Dedecker, P. H. Eilers, and C. Ruckebusch, "Sparse deconvolution of high-density super-resolution images," Scientific Reports, vol. 6, pp. 1–11, 2016.
[23] O. Solomon, Y. C. Eldar, M. Mutzafi, and M. Segev, "SPARCOM: sparsity based super-resolution correlation microscopy," arXiv preprint arXiv:1707.09255, 2017.
[24] F. Huang, S. L. Schwartz, J. M. Byars, and K. A. Lidke, "Simultaneous multiple-emitter fitting for single molecule super-resolution imaging," Biomedical Optics Express, vol. 2, no. 5, p. 1377, 2011.
[25] M. Mutzafi, Y. Shechtman, Y. C. Eldar, and M. Segev, "Single-shot sparsity-based sub-wavelength fluorescence imaging of biological structures using dictionary learning," CLEO: 2015, p. STh4K.5, 2015.
[26] M. Mutzafi, Y. Shechtman, O. Dicker, L. Weiss, Y. C. Eldar, W. E. Moerner, and M. Segev, "Experimental demonstration of sparsity-based single-shot fluorescence imaging at sub-wavelength resolution," Conference on Lasers and Electro-Optics, p. AW1A.6, 2017.
[27] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[28] C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
[29] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," CVPR 2016, pp. 1646–1654, 2016.
[30] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep networks for image super-resolution with sparse prior," Proceedings of the IEEE International Conference on Computer Vision, pp. 370–378, 2016.
[31] X. Mao, C. Shen, and Y.-B. Yang, "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections," in Advances in Neural Information Processing Systems 29, pp. 2802–2810, Curran Associates, Inc., 2016.
[32] Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica, vol. 4, no. 11, p. 1437, 2017.
[33] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
[34] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Springer, 2015.
[35] H. Noh, S. Hong, and B. Han, "Learning deconvolution network for semantic segmentation," Proceedings of the IEEE International Conference on Computer Vision, pp. 1520–1528, 2015.
[36] M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, "Content-aware image restoration: pushing the limits of fluorescence microscopy," bioRxiv, 2017.
[37] W. Xie, J. A. Noble, and A. Zisserman, "Microscopy cell counting and detection with fully convolutional regression networks," Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, pp. 1–10, 2016.
[38] S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning, vol. 37 of Proceedings of Machine Learning Research, pp. 448–456, PMLR, 2015.
[39] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," Proceedings of the 30th International Conference on Machine Learning, vol. 28, p. 6, 2013.
[40] C. T. Rueden, J. Schindelin, M. C. Hiner, B. E. DeZonia, A. E. Walter, E. T. Arena, and K. W. Eliceiri, "ImageJ2: ImageJ for the next generation of scientific image data," BMC Bioinformatics, vol. 18, p. 529, 2017.
[41] J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J.-Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, "Fiji: an open-source platform for biological-image analysis," Nature Methods, vol. 9, pp. 676–682, 2012.
[42] M. Ovesný, P. Křížek, J. Borkovec, Z. Švindrych, and G. M. Hagen, "ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging," Bioinformatics, vol. 30, no. 16, pp. 2389–2390, 2014.
[43] F. Chollet et al., "Keras," https://github.com/fchollet/keras, 2015.
[44] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: large-scale machine learning on heterogeneous systems," 2015. Software available from tensorflow.org.
[45] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[46] "Nano-Bio-Optics Lab - Yoav Shechtman." http://nanobiooptics.net.technion.ac.il/.
[47] D. Sage, H. Kirshner, T. Pengo, N. Stuurman, J. Min, S. Manley, and M. Unser, "Quantitative evaluation of software packages for single-molecule localization microscopy," Nature Methods, vol. 12, no. 8, pp. 717–724, 2015.
Funding Information
E.N. is supported by a Google research award. L.E.W. and Y.S. are supported by the Zuckerman Foundation, and Y.S. is supported in part by a Career Advancement Chairship from the Technion, Israel Institute of Technology. T.M. is supported in part by the Ollendorff Foundation, the Taub Foundation (through a Horev fellowship), an Alon Fellowship, and the Israel Science Foundation (grant No. 852/17). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Acknowledgements
The authors thank Dr. Daniel Freedman of Google Research for fruitful discussions.