Microscopy and Microanalysis | 2019

Two-stage Neural Architecture Search for Microscopy Image Segmentation

 
 
 

Abstract


Modern microscopy hardware is capable of acquiring gigabytes or terabytes of 2D, 3D, and 4D data in short spans of time [1]. The rapid growth in dataset size necessitates the automation of common image analysis tasks in order to fully leverage the information that the data provides. One fundamental image analysis task is segmentation: the grouping of pixels into regions corresponding to image content. In the past decade, automated segmentation algorithms based on deep neural networks have made significant progress on difficult microscopy segmentation problems, such as those encountered in large-scale biological electron microscopy using imaging platforms such as serial block-face scanning electron microscopy (SBF-SEM) [1]. However, challenges remain in raising performance to the levels of accuracy required for practical, turn-key solutions for biomedical research. One such challenge is identifying the neural network architectures best suited to a given segmentation problem. Large neural networks such as those used for image segmentation may require dozens of design choices to fully specify a computational module, and multiple modules may be combined to form a final network architecture. Searching through the combinatorial collection of hyperparameters which quantify these design choices is computationally expensive, even with a cluster of GPU-enabled compute nodes available. Naive search strategies such as grid search or random search at the level of individual network layers may fail to discover architectures that are more useful than those designed by hand via human intuition.
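The naive random-search baseline mentioned above can be sketched in a few lines. The hyperparameter space, the trial budget, and the `toy_evaluate` scoring function below are all illustrative assumptions, not the search space or evaluation protocol of this work; a real evaluation would train each candidate network and return a segmentation metric.

```python
import random

# Hypothetical hyperparameter space for one segmentation-network module.
# The specific choices are assumptions for illustration only.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "activation": ["relu", "elu"],
}


def sample_architecture(space, rng):
    """Draw one architecture by picking a value for each hyperparameter."""
    return {name: rng.choice(choices) for name, choices in space.items()}


def random_search(space, evaluate, n_trials=20, seed=0):
    """Naive random search: sample n_trials architectures, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(space, rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score


# Stand-in for an expensive training-and-validation run (hypothetical).
def toy_evaluate(arch):
    return (arch["num_layers"] * 0.1
            + arch["filters"] * 0.01
            - (arch["kernel_size"] - 3) * 0.05)
```

Even this simple loop makes the cost structure clear: every trial requires a full training run, which is why layer-level search over the combinatorial space quickly becomes intractable without a more structured strategy.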

Volume 25
Pages 188-189
DOI 10.1017/S1431927619001673
Language English
Journal Microscopy and Microanalysis
