
Publication


Featured research published by Michael P. Majurski.


BMC Bioinformatics | 2014

FogBank: a single cell segmentation across multiple cell lines and image modalities

Joe Chalfoun; Michael P. Majurski; Alden A. Dima; Christina H. Stuelten; Adele P. Peskin; Mary Brady

Background: Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so separating touching cells in these microscopy images is critical for counting, identifying and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies.

Results: We present a new automated segmentation method called FogBank that accurately separates cells when they are confluent and touching each other. The technique has been successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference datasets; FogBank outperformed all related algorithms. The accuracy has also been visually verified on datasets with 14 cell lines across 3 imaging modalities, yielding 876 segmentation evaluation images.

Conclusions: FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
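The first of the two features described above, histogram binning, can be illustrated with a minimal NumPy sketch. This is not FogBank's implementation (the released tool has its own open-source code); it is a hypothetical illustration of how quantizing intensities collapses the small fluctuations that seed watershed over-segmentation, using simple equal-width bins as an assumption.

```python
import numpy as np

def quantize_intensities(image, n_bins=10):
    """Histogram-bin pixel intensities to suppress the intensity noise
    that causes watershed over-segmentation (sketch: equal-width bins)."""
    edges = np.linspace(image.min(), image.max(), n_bins + 1)
    # digitize maps each pixel to a bin index in 0..n_bins-1
    return np.digitize(image, edges[1:-1])

# A noisy intensity ramp: quantization collapses the small fluctuations.
rng = np.random.default_rng(0)
img = np.linspace(0, 1, 100).reshape(10, 10) + rng.normal(0, 0.01, (10, 10))
q = quantize_intensities(img, n_bins=8)
```

After quantization, at most 8 distinct intensity levels remain, so tiny noise-induced local minima no longer produce separate watershed basins.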


Journal of Microscopy | 2015

Empirical Gradient Threshold Technique for Automated Segmentation across Image Modalities and Cell Lines

Joe Chalfoun; Michael P. Majurski; Adele P. Peskin; Catherine Breen; Peter Bajcsy; Mary Brady

New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000×21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meets all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re-adjustment over time (requirement 5).
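The core idea of gradient-image thresholding can be sketched in a few lines of NumPy. This is an illustration only: the paper's contribution is the empirical, automatic selection of the threshold, which this sketch instead takes as a user-supplied parameter.

```python
import numpy as np

def gradient_threshold_mask(image, threshold):
    """Segment foreground by thresholding the gradient magnitude.
    Sketch of the general technique; the threshold is supplied by the
    caller, whereas the paper's method selects it automatically."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return grad_mag > threshold

# A flat background with one bright square: only the edges of the
# square exceed the gradient threshold.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
mask = gradient_threshold_mask(img, 0.25)
```

In practice the resulting edge mask would be filled (e.g. by morphological hole filling) to obtain whole cell or colony regions; gradient thresholding itself only marks high-contrast boundaries.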


BMC Bioinformatics | 2015

Survey statistics of automated segmentations applied to optical imaging of mammalian cells

Peter Bajcsy; Antonio Cardone; Joe Chalfoun; Michael Halter; Derek Juba; Marcin Kociolek; Michael P. Majurski; Adele P. Peskin; Carl G. Simon; Mylene Simon; Antoine Vandecreme; Mary Brady

Background: The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components, denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements.

Methods: We define the scope of the survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of the assigned categories.

Results: The survey presents to the reader: (a) a state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.

Conclusions: The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to the classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.


IEEE Computer | 2016

Enabling Stem Cell Characterization from Large Microscopy Images

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Joe Chalfoun; Michael P. Majurski; Mary Brady

Microscopy could be an important tool for characterizing stem cell products if quantitative measurements could be collected over multiple spatial and temporal scales. Cells change state over time and are several orders of magnitude smaller than cell products, but modern microscopes are already capable of imaging large spatial areas, repeating imaging over time, and acquiring images over several spectra. However, characterizing stem cell products from such large image collections is challenging because of the data size, the required computations, and the lack of interactive quantitative measurements needed to determine release criteria. We present a measurement web system consisting of available algorithms, extensions to a client-server framework using Deep Zoom, and the configuration know-how to provide the information needed for inspecting the quality of a cell product. The cell and other data sets are accessible via the prototype web-based system at http://isg.nist.gov/deepzoomweb. Microscopes can now cover large spatial areas and capture stem cell behavior over time; however, without discovering statistically reliable quantitative stem cell quality measures, products cannot be released to market. A web-based measurement system overcomes desktop limitations by leveraging cloud and cluster computing for offline computations and by using Deep Zoom extensions for interactive viewing and measurement.


Journal of Microscopy | 2015

Background intensity correction for terabyte-sized time-lapse images

Joe Chalfoun; Michael P. Majurski; Kiran Bhadriraju; Steven P. Lund; Peter Bajcsy; Mary Brady

Several computational challenges associated with large-scale background image correction of terabyte-sized fluorescent images are discussed and analysed in this paper. Dark current, flat-field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation for our work comes from the need to quantify the dynamics of OCT-4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal-to-noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat-field images, the optimal GFP background correction is obtained by using a data partition that forms a set of submosaic images with a polynomial surface background model. The resulting corrected image is characterized by an RMS of about 8 and an SNR above 5 by the Rose criterion after 4 × 4 downsampling. The new technique generates an image with half the RMS value and double the SNR value when compared to an approach that assumes a constant background throughout the mosaic. We show that the background noise in terabyte-sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR-driven downsampling) such that the total RMS value from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve. Previous fluorescent image background correction methods were designed for a single FOV and have not been applied to terabyte-sized images with large mosaic FOVs, low SNR, and diminishing access to background information over time as cell colonies grow to span multiple FOVs entirely. The code is available as open source from https://isg.nist.gov/.
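The polynomial surface background model selected above can be sketched as a least-squares fit over pixels known to be background. The function name, polynomial degree, and synthetic data below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fit_background_surface(image, bg_mask, degree=2):
    """Fit a low-order polynomial surface to background pixels and
    evaluate it over the full image (sketch of a polynomial surface
    background model; real pipelines also handle dark current and
    flat-field terms)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    terms = [xx**i * yy**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack([t[bg_mask].ravel() for t in terms], axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, image[bg_mask].ravel(), rcond=None)
    return sum(c * t for c, t in zip(coeffs, terms))

# Synthetic tilted background plus a bright "colony" in the centre.
yy, xx = np.mgrid[0:32, 0:32]
img = 0.05 * xx + 0.02 * yy + 10.0
img[12:20, 12:20] += 5.0          # foreground colony
bg = np.ones_like(img, dtype=bool)
bg[12:20, 12:20] = False          # exclude the colony from the fit
corrected = img - fit_background_surface(img, bg)
```

Subtracting the fitted surface flattens the background while leaving the foreground signal, which is the property the RMS and SNR criteria above measure.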


Scientific Reports | 2017

MIST: Accurate and Scalable Microscopy Image Stitching Tool with Stage Modeling and Error Minimization

Joe Chalfoun; Michael P. Majurski; Tim Blattner; Kiran Bhadriraju; Walid Keyrouz; Peter Bajcsy; Mary Brady

Automated microscopy can image specimens larger than the microscope's field of view (FOV) by stitching overlapping image tiles. It also enables time-lapse studies of entire cell cultures in multiple imaging modalities. We created MIST (Microscopy Image Stitching Tool) for rapid and accurate stitching of large 2D time-lapse mosaics. MIST estimates the mechanical stage model parameters (actuator backlash and stage repeatability 'r') from computed pairwise translations and then minimizes stitching errors by optimizing the translations within a (4r)^2 square area. MIST has a performance-oriented implementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of time-lapse multi-channel mosaics 15 to 100 times faster than existing tools. We created 15 reference datasets to quantify MIST's stitching accuracy. The datasets consist of three preparations of stem cell colonies seeded at low density and imaged with varying overlap (10 to 50%). The locations and sizes of 1150 colonies were measured to quantify stitching accuracy. MIST generated stitched images with an average centroid distance error of less than 2% of a FOV. The sources of these errors include mechanical uncertainties, specimen photobleaching, segmentation, and stitching inaccuracies. MIST produced higher stitching accuracy than three open-source tools. MIST is available in ImageJ at isg.nist.gov.
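The pairwise translations that MIST starts from are typically computed by Fourier-based phase correlation. A minimal NumPy sketch of that registration step (not MIST's performance-oriented CPU/GPU implementation) is:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the (dy, dx) translation of `moved` relative to `ref`
    via FFT phase correlation (sketch of the standard technique)."""
    Fr, Fm = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = Fm * np.conj(Fr)
    cross /= np.abs(cross) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
tile = rng.random((64, 64))
shifted = np.roll(tile, (3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(tile, shifted)   # recovers (3, -5)
```

For an exact circular shift the correlation surface is a single delta peak; on real overlapping tiles the peak is noisier, which is why MIST then refines the translations using the stage model.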


Computer Vision and Pattern Recognition | 2016

Methodology for Increasing the Measurement Accuracy of Image Features

Michael P. Majurski; Joe Chalfoun; Steven P. Lund; Peter Bajcsy; Mary Brady

We present an optimization methodology for improving the measurement accuracy of image features in low signal-to-noise ratio (SNR) images. By superimposing known background noise on high quality images in various proportions, we produce a degraded image set spanning a range of SNRs, with reference feature values established from the unmodified high quality images. We then experiment with a variety of image processing spatial filters applied to the degraded images and identify which filter produces an image whose feature values most closely correspond to the reference values. When using the best combination of three filters and six kernel sizes for each feature, the average correlation of feature values between the degraded and high quality images increased from 0.6 (without filtering) to 0.92 (with feature-specific filters), a 53% improvement. Selecting a single filter is more practical than having a separate filter per feature. However, this results in a 1.95% reduction in correlation and a 10% increase in feature residual root mean square error compared to selecting the optimal filter and kernel size per feature. We quantified the tradeoff between a practical solution for all features and a feature-specific solution to support decision making.
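The noise-superposition setup can be sketched as follows. The linear blending scheme and the use of raw pixel correlation as a stand-in for image-feature correlation are simplifying assumptions, not the paper's protocol.

```python
import numpy as np

def degrade_with_noise(image, noise, alpha):
    """Superimpose known background noise on a high quality image in a
    given proportion, producing a degraded copy whose reference values
    are still known from the clean original (illustrative blend)."""
    return (1 - alpha) * image + alpha * noise

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))        # clean ramp image
noise = rng.normal(0.5, 0.2, clean.shape)              # measured-noise stand-in
degraded = [degrade_with_noise(clean, noise, a) for a in (0.2, 0.5, 0.8)]
# Correlation with the clean reference drops as the noise share grows.
corrs = [np.corrcoef(clean.ravel(), d.ravel())[0, 1] for d in degraded]
```

The methodology then searches, per feature, for the spatial filter and kernel size that pull these correlations back toward 1 on the degraded set.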


Scientific Reports | 2016

Lineage mapper: A versatile cell and particle tracker.

Joe Chalfoun; Michael P. Majurski; Alden A. Dima; Michael Halter; Kiran Bhadriraju; Mary Brady

The ability to accurately track cells and particles from images is critical to many biomedical problems. To address this, we developed Lineage Mapper, an open-source tracker for time-lapse images of biological cells, colonies, and particles. Lineage Mapper tracks objects independently of the segmentation method, detects mitosis in confluence, separates cell clumps mistakenly segmented as a single cell, provides accuracy and scalability even on terabyte-sized datasets, and creates division and/or fusion lineages. Lineage Mapper has been tested and validated on multiple biological and simulated problems. The software is available in ImageJ and Matlab at isg.nist.gov.
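Segmentation-independent tracking of the kind described above links labelled objects between consecutive frames. The sketch below matches objects by maximum pixel overlap; this is a hypothetical stand-in for illustration, not Lineage Mapper's actual cost-based assignment, which also handles mitosis, fusion, and clump separation.

```python
import numpy as np

def match_by_overlap(labels_t, labels_t1):
    """Match labelled objects between consecutive frames by maximum
    pixel overlap (illustrative stand-in for a tracker's assignment)."""
    matches = {}
    for lab in np.unique(labels_t):
        if lab == 0:
            continue                     # 0 is background
        overlap = labels_t1[labels_t == lab]
        overlap = overlap[overlap != 0]
        if overlap.size:
            vals, counts = np.unique(overlap, return_counts=True)
            matches[int(lab)] = int(vals[np.argmax(counts)])
    return matches

# Two frames: object 1 drifts one pixel right, object 2 stays put.
f0 = np.zeros((10, 10), dtype=int)
f0[2:5, 2:5] = 1
f0[6:9, 6:9] = 2
f1 = np.zeros((10, 10), dtype=int)
f1[2:5, 3:6] = 1
f1[6:9, 6:9] = 2
assignment = match_by_overlap(f0, f1)
```

Because matching operates on label images, any segmentation method can feed the tracker, which is the independence property the abstract highlights.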


Bioinformatics and Biomedicine | 2015

MIST: Microscopy Image Stitching Tool

Joe Chalfoun; Michael P. Majurski; Timothy Blattner; Walid Keyrouz; Peter Bajcsy; Mary Brady

Motivation: Automated microscopy enables scientists to image an area of an experimental sample that is much larger than the microscope's Field of View (FOV) and to carry out time-lapse studies of cell cultures. An automated microscope acquires these images by generating a grid of partially overlapping images. This process generates hundreds to hundreds of thousands of image tiles that need to be stitched into one large image. We address the problem of creating image mosaics from a grid of overlapping tiles constrained to only translational offsets. The challenges of creating a large mosaic image are: (1) sensitivity to image features in the overlapping regions of adjacent tiles (e.g., during the early period of cell colony growth), (2) the computational requirements needed to assemble the resulting mosaic image, and (3) the absence of ground truth needed for evaluating the accuracy of a stitching method. Results: This paper describes a stitching method called MIST (Microscopy Image Stitching Tool) with minimized translational uncertainty for large collections of grid-based microscopy tiles. The method improves tile translations computed using a registration method, such as Fourier transform based phase correlation, by optimizing the normalized cross correlation between the overlaps of adjacent tiles. The optimization incorporates mechanical properties of a microscope stage to filter translations with high errors. We estimate the microscope stage repeatability from the computed translations of the grid-based image tiles and then improve all translations using constrained Hill Climbing restricted to searching a square area of four times the stage repeatability per side. We also present a methodology for evaluating stitching accuracy based on creating reference centroid distance and area measurements of regions of interest that fit inside one FOV. The regions of interest (ROIs) are segmented first, and their mutual centroid distances and areas are measured using the microscope stage coordinates. Stitching accuracy is quantified by comparing the reference measurements to the measurements obtained by stitching a set of grid-based tiles, by means of four NIST-derived metrics: false positives (added ROIs), false negatives (undetected ROIs), centroid distance error, and area error. Following this methodology, we prepared three large reference datasets of stem cell colonies with low colony seeding, which results in high uncertainty in the translation offsets. MIST generated a stitched image with an average colony centroid distance error of less than 2% of a field of view and an average area error of 5%. The sources of these errors include mechanical uncertainties, sample photobleaching, segmentation, and stitching. We also show that the area error is mainly due to photobleaching and not stitching. We compared MIST stitching to the top five popular methods used in the literature. MIST produced the most accurate stitching result among all methods. Conclusions: MIST is an accurate stitching tool that can be applied to grid-based tiles with unknown translational offsets. Its performance-oriented implementation yields a fast execution time that makes the algorithm suitable for creating large mosaics (up to terabytes in size). The evaluation methodology for stitching accuracy, along with the four NIST-derived performance metrics, provides a general approach to characterizing stitching algorithm performance. The application of this methodology generated three reusable reference datasets with cell colonies. Availability: MIST is available as a Matlab executable or an ImageJ plugin; the ImageJ plugin has a CPU and a GPU implementation. All information regarding this tool and its source code can be found at https://isg.nist.gov/.
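The constrained Hill Climbing step described above can be sketched generically: greedy single-step moves over integer offsets, rejected once they leave a square window around the starting translation. The quadratic score function below is a toy stand-in, not MIST's normalized cross correlation over tile overlaps.

```python
def constrained_hill_climb(score, start, radius):
    """Greedy hill climbing over integer (dy, dx) offsets, restricted
    to a square window of half-side `radius` around `start` (sketch of
    a search window set by stage repeatability)."""
    best = start
    improved = True
    while improved:
        improved = False
        for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (best[0] + step[0], best[1] + step[1])
            # reject candidates outside the allowed square window
            if max(abs(cand[0] - start[0]), abs(cand[1] - start[1])) > radius:
                continue
            if score(cand) > score(best):
                best, improved = cand, True
    return best

# Toy score peaked at (3, -2); start from the registration estimate (0, 0).
peak = (3, -2)
score = lambda p: -((p[0] - peak[0]) ** 2 + (p[1] - peak[1]) ** 2)
result = constrained_hill_climb(score, (0, 0), radius=8)   # climbs to the peak
```

Constraining the window keeps a noisy correlation surface from pulling a tile far from the translation the stage mechanics say is plausible.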


Microscopy Today | 2017

From Image Tiles to Web-Based Interactive Measurements in One Stop

Antoine Vandecreme; Michael P. Majurski; Joe Chalfoun; Keana C. Scott; John Henry J. Scott; Mary Brady; Peter Bajcsy

Collaboration

Top co-authors of Michael P. Majurski, all at the National Institute of Standards and Technology: Joe Chalfoun, Mary Brady, Peter Bajcsy, Antoine Vandecreme, Adele P. Peskin, Michael Halter, Steven P. Lund, Walid Keyrouz, Alden A. Dima, and Julien M. Amelot.