

Publication


Featured research published by Mary Brady.


Cytometry Part A | 2011

Comparison of Segmentation Algorithms For Fluorescence Microscopy Images of Cells

Alden A. Dima; John T. Elliott; James J. Filliben; Michael Halter; Adele P. Peskin; Javier Bernal; Marcin Kociolek; Mary Brady; Hai C. Tang; Anne L. Plant

The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold‐based segmentation techniques are less accurate than k‐means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley‐Liss, Inc.
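The paper's bivariate similarity index itself is not reproduced here, but its core idea, scoring underestimation and overestimation of a cell object separately rather than with a single overlap score, can be sketched as a pair of overlap ratios between a reference mask and an estimated mask. The function and the example pixels below are illustrative assumptions, not the published index:

```python
def bivariate_similarity(reference, estimate):
    """Compare a reference cell mask R and an estimated mask E, each a
    collection of (row, col) pixel coordinates. The first coordinate of
    the result penalizes underestimation (reference pixels the estimate
    missed); the second penalizes overestimation (estimated pixels that
    lie outside the reference). Both are 1.0 for a perfect match."""
    R, E = set(reference), set(estimate)
    overlap = len(R & E)
    return overlap / len(R), overlap / len(E)

# The estimate covers the whole reference plus two extra pixels:
# no underestimation (1.0), but overestimation drags the second score down.
print(bivariate_similarity([(0, 0), (0, 1)],
                           [(0, 0), (0, 1), (1, 0), (1, 1)]))  # → (1.0, 0.5)
```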


Journal of Microscopy | 2013

Segmenting time-lapse phase contrast images of adjacent NIH 3T3 cells

Joe Chalfoun; M. Kociolek; Alden A. Dima; Michael Halter; Antonio Cardone; Adele P. Peskin; Peter Bajcsy; Mary Brady

We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. The problem of segmentation, when cells are in contact, poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise‐free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels defined as pixels associated with individual cells. The segmentation results for a time‐lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the ‘Adjusted Rand Index’ which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the ‘Number of Cells per Field’ (NCF) which compares the number of cells identified in the field by manual versus automated analysis. Our results show that the automated segmentation compared to manual segmentation has an average adjusted rand index of 0.96 (1 being a perfect match), with a standard deviation of 0.03, and an average difference of the two numbers of cells per field equal to 5.39% with a standard deviation of 4.6%.
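The Adjusted Rand Index used above has a standard closed form based on counting agreeing pixel pairs between the two labelings. A minimal pure-Python version (an illustrative reimplementation of the standard formula, not the authors' code) looks like this:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two labelings of the same pixels:
    1.0 for a perfect match, about 0.0 for chance-level agreement."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))   # joint label counts
    row_sums = Counter(labels_a)
    col_sums = Counter(labels_b)
    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_rows = sum(comb(c, 2) for c in row_sums.values())
    sum_cols = sum(comb(c, 2) for c in col_sums.values())
    expected = sum_rows * sum_cols / comb(n, 2)      # chance agreement
    max_index = (sum_rows + sum_cols) / 2
    return (sum_comb - expected) / (max_index - expected)

# ARI is invariant to renaming the labels, so these match perfectly.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```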


BMC Bioinformatics | 2014

FogBank: a single cell segmentation across multiple cell lines and image modalities

Joe Chalfoun; Michael P. Majurski; Alden A. Dima; Christina H. Stuelten; Adele P. Peskin; Mary Brady

Background: Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identifying and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed, due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies.

Results: We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets, and FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images.

Conclusions: FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
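FogBank's first anti-over-segmentation feature, histogram binning of pixel intensities, can be sketched as equal-width quantization: coarse bins absorb the small intensity fluctuations that would otherwise create spurious watershed minima. The bin count and the mapping to bin lower edges below are illustrative assumptions, not FogBank's actual parameters:

```python
def quantize(intensities, n_bins=10):
    """Quantize pixel intensities into n_bins equal-width bins, mapping
    each pixel to its bin's lower edge. Coarse binning suppresses small
    intensity fluctuations (noise) that drive over-segmentation in
    watershed-style methods."""
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1          # guard against a flat image
    return [lo + min(int((v - lo) / width), n_bins - 1) * width
            for v in intensities]

# Three noisy intensity clusters collapse onto three clean levels.
pixels = [10, 11, 12, 55, 57, 99, 100]
print(quantize(pixels, n_bins=3))  # → [10.0, 10.0, 10.0, 40.0, 40.0, 70.0, 70.0]
```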


Journal of Microscopy | 2015

Empirical Gradient Threshold Technique for Automated Segmentation across Image Modalities and Cell Lines

Joe Chalfoun; Michael P. Majurski; Adele P. Peskin; Catherine Breen; Peter Bajcsy; Mary Brady

New microscopy technologies are enabling image acquisition of terabyte‐sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000×21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user‐set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re‐adjustment with time (requirement 5).
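Gradient-image thresholding, the approach this paper builds on, can be sketched as follows. Note that EGT's contribution is the automated, empirically derived selection of the threshold; the fixed percentile cutoff below stands in for that step and is an illustrative assumption:

```python
def gradient_threshold_mask(image, percentile=75):
    """Foreground mask from image-gradient thresholding: compute a
    finite-difference gradient magnitude per pixel, then keep pixels
    whose gradient exceeds a percentile cutoff. (EGT instead derives
    the cutoff automatically from the gradient histogram.)"""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal difference
            gy = image[y + 1][x] - image[y][x]   # vertical difference
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
    flat = sorted(v for row in grad for v in row)
    cutoff = flat[min(int(len(flat) * percentile / 100), len(flat) - 1)]
    return [[g > cutoff for g in row] for row in grad]

# A vertical edge between a dark and a bright region is detected.
mask = gradient_threshold_mask([[0, 0, 10, 10]] * 4)
print(mask[0])
```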


BMC Bioinformatics | 2015

Survey statistics of automated segmentations applied to optical imaging of mammalian cells

Peter Bajcsy; Antonio Cardone; Joe Chalfoun; Michael Halter; Derek Juba; Marcin Kociolek; Michael P. Majurski; Adele P. Peskin; Carl G. Simon; Mylene Simon; Antoine Vandecreme; Mary Brady

Background: The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements.

Methods: We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories.

Results: The survey paper presents to a reader: (a) a state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.

Conclusions: The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web-hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.


international conference on big data | 2013

Terabyte-sized image computations on Hadoop cluster platforms

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Phuong T. Nguyen; Joe Chalfoun; Mary Brady

We present a characterization of four basic terabyte-sized image computations on a Hadoop cluster in terms of their relative efficiency according to a modified Amdahl's law. The work is motivated by the lack of standard benchmarks and stress tests for big image processing operations on a Hadoop computer cluster platform. Our benchmark design and evaluations were performed on one of three microscopy image sets, each over one half terabyte in size. All image processing benchmarks executed on the NIST Raritan cluster with Hadoop were compared against baseline measurements: the TeraSort/TeraGen suite previously designed for Hadoop testing, image processing executions on a multiprocessor desktop, and executions on the NIST Raritan cluster using Java Remote Method Invocation (RMI) with multiple configurations. By applying our methodology to assessing the efficiency of computations on computer cluster configurations, we can rank computation configurations and aid scientists in measuring the benefits of running image processing on a Hadoop cluster.
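The efficiency characterization rests on a modified Amdahl's law. The paper's exact modification is not reproduced here, but the textbook law extended with a simple fixed-overhead term shows the shape of such a ranking metric; the `overhead` parameter is an illustrative assumption:

```python
def amdahl_speedup(parallel_fraction, n_workers, overhead=0.0):
    """Amdahl's-law speedup for a job whose parallel_fraction of work
    is split across n_workers, plus an optional fixed per-run overhead
    expressed as a fraction of the single-worker runtime (e.g. job
    scheduling or data distribution cost on a cluster)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers + overhead)

# A 95%-parallel workload on 16 nodes: the 5% serial part caps the gain.
print(round(amdahl_speedup(0.95, 16), 2))   # → 9.14
# The same workload with 10% fixed overhead per run ranks lower.
print(round(amdahl_speedup(0.95, 16, overhead=0.10), 2))
```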


international conference on parallel processing | 2014

A Hybrid CPU-GPU System for Stitching Large Scale Optical Microscopy Images

Timothy Blattner; Walid Keyrouz; Joe Chalfoun; Bertrand C. Stivalet; Mary Brady; Shujia Zhou

Researchers in various fields are using optical microscopy to acquire very large images, 10 000 to 200 000 pixels per side. Optical microscopes acquire these images as grids of overlapping partial images (thousands of pixels per side) that are then stitched together via software. Composing such large images is a compute- and data-intensive task even for modern machines, and researchers compound the difficulty further by obtaining time-series, volumetric, or multiple-channel images, with the resulting data sets now having or approaching terabyte sizes. We present a scalable hybrid CPU-GPU implementation of image stitching that processes large image sets at near interactive rates. Our implementation scales well with both image sizes and the number of CPU cores and GPU cards in a machine. It processes a grid of 42 × 59 tiles into a 17 k × 22 k pixel image in 43 s (end-to-end execution time) when using one NVIDIA Tesla C2070 card and two Intel Xeon E5620 quad-core CPUs, and in 29 s when using two Tesla C2070 cards and the same two CPUs. It also composes and renders the composite image, without saving it, in 15 s. In comparison, ImageJ/Fiji, which is widely used by biologists, has an image stitching plugin that takes > 3.6 h for the same workload despite being multithreaded and executing the same mathematical operators; it composes and saves the large image in an additional 1.5 h. Our implementation takes advantage of coarse-grain parallelism. It organizes the computation into a pipeline architecture that spans CPU and GPU resources and overlaps computation with data motion. The implementation achieves a nearly 10× performance improvement over our optimized non-pipelined GPU implementation and demonstrates near-linear speedup when increasing the CPU thread count and the number of GPUs.
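The overlap of computation with data motion described above is a classic bounded-queue pipeline. A minimal two-stage sketch is shown below; the stage names and queue depth are illustrative, and the actual system spans CPU threads and GPU resources rather than Python threads:

```python
import queue
import threading

def run_pipeline(tiles, process):
    """Two-stage pipeline: a reader stage feeds tiles into a bounded
    queue (modeling data motion) while a worker stage consumes and
    processes them (modeling computation). The bounded queue caps
    memory use and lets the two stages overlap in time."""
    q = queue.Queue(maxsize=4)
    results = []

    def reader():                  # stage 1: data motion
        for tile in tiles:
            q.put(tile)
        q.put(None)                # sentinel: no more tiles

    def worker():                  # stage 2: compute
        while (tile := q.get()) is not None:
            results.append(process(tile))

    threads = [threading.Thread(target=reader),
               threading.Thread(target=worker)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline(range(5), lambda t: t * t))  # → [0, 1, 4, 9, 16]
```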


Applied Ontology | 2012

Ontology for Big Systems: The Ontology Summit 2012 Communiqué

Todd Schneider; Ali B. Hashemi; Mike Bennett; Mary Brady; Cory Casanave; Henson Graves; Michael Gruninger; Nicola Guarino; Anatoly Levenchuk; Ernie Lucier; Leo Obrst; Steve Ray; Ram D. Sriram; Amanda Vizedom; Matthew West; Trish Whetzel; Peter Yim

The Ontology Summit 2012 explored the current and potential uses of ontology, its methods and paradigms, in big systems and big data: how ontology can be used to design, develop, and operate such systems. The systems addressed were not just software systems, although software systems are typically core and necessary components, but more complex systems that include multiple kinds and levels of human and community interaction with physical-software systems, systems of systems, and the socio-technical environments for those systems, which can include cultural, legal, and economic components. The focus themes used for this exploration were Big Systems Engineering, the Big Data Challenge, Large Scale Domain Applications, and the cross-cutting aspects of Ontology Quality and Federation and Integration of Systems. The Ontology Summit 2012 consisted of over three months of intensive virtual collaborative elaboration of these issues in presentations, panels, and group email. The culmination of these activities was a face-to-face symposium at the US National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA, 12-13 April 2012. The primary product of the Ontology Summit is the communiqué reported here, but there are other products, some continuing as collaborative, more specifically focused analysis and modeling efforts aligned with various open-standards activities. Behind all of these particular products, of course, is the real overriding purpose of the Ontology Summit 2012: the joint collaboration of three distinct communities, the ontology, systems engineering, and big-systems stakeholder communities, who came together to address common problems, create common understanding, and propose common solutions.


IEEE Computer | 2016

Enabling Stem Cell Characterization from Large Microscopy Images

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Joe Chalfoun; Michael P. Majurski; Mary Brady

Microscopy could be an important tool for characterizing stem cell products if quantitative measurements could be collected over multiple spatial and temporal scales. Cells change state over time and are several orders of magnitude smaller than cell products, and modern microscopes are already capable of imaging large spatial areas, repeating imaging over time, and acquiring images over several spectra. However, characterizing stem cell products from such large image collections is challenging because of the data size, the required computations, and the lack of the interactive quantitative measurements needed to determine release criteria. We present a measurement web system consisting of available algorithms, extensions to a client-server framework using Deep Zoom, and the configuration know-how to provide the information needed for inspecting the quality of a cell product. The cell and other data sets are accessible via the prototype web-based system at http://isg.nist.gov/deepzoomweb. Microscopes can now cover large spatial areas and capture stem cell behavior over time; however, without statistically reliable quantitative measures of stem cell quality, products cannot be released to market. A web-based measurement system overcomes desktop limitations by leveraging cloud and cluster computing for offline computations and by using Deep Zoom extensions for interactive viewing and measurement.


Journal of Microscopy | 2015

Background intensity correction for terabyte-sized time-lapse images

Joe Chalfoun; Michael P. Majurski; Kiran Bhadriraju; Steven P. Lund; Peter Bajcsy; Mary Brady

Several computational challenges associated with large-scale background image correction of terabyte-sized fluorescent images are discussed and analysed in this paper. Dark current, flat-field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation for our work comes from the need to quantify the dynamics of OCT-4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal-to-noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat-field images, the optimal GFP background correction is obtained by using a data partition that forms a set of submosaic images with a polynomial surface background model. The resulting corrected image is characterized by an RMS of about 8 and, after 4 × 4 downsampling, an SNR above 5 by the Rose criterion. The new technique generates an image with half the RMS and double the SNR of an approach that assumes a constant background throughout the mosaic. We show that the background noise in terabyte-sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR-driven downsampling) such that the total RMS from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve. Past fluorescent-image background correction methods were designed for a single FOV and have not been applied to terabyte-sized images with large mosaic FOVs, low SNR, and diminishing access to background information over time as cell colonies grow to span multiple FOVs entirely. The code is available as open source from https://isg.nist.gov/.
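The polynomial surface background model can be illustrated with the simplest member of that family, a least-squares plane fitted to sampled background pixels. The paper fits higher-order polynomial surfaces per submosaic image, so the polynomial degree and the fitting code below are illustrative only:

```python
def fit_plane_background(points):
    """Least-squares plane z ≈ a + b*x + c*y fitted to background
    samples (x, y, z); the fitted surface would then be subtracted
    from the image. Solves the 3x3 normal equations directly."""
    # Accumulate the normal equations A · [a, b, c] = rhs.
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in points:
        basis = (1.0, x, y)
        for i in range(3):
            rhs[i] += basis[i] * z
            for j in range(3):
                A[i][j] += basis[i] * basis[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    coef = [0.0] * 3
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]

# Exactly planar samples z = 2 + 0.5*x - 0.25*y recover the coefficients.
pts = [(x, y, 2 + 0.5 * x - 0.25 * y) for x in range(4) for y in range(4)]
a, b, c = fit_plane_background(pts)
print(round(a, 6), round(b, 6), round(c, 6))
```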

Collaboration


Dive into Mary Brady's collaboration.

Top Co-Authors

Joe Chalfoun, National Institute of Standards and Technology
Peter Bajcsy, University of Illinois at Urbana–Champaign
Michael P. Majurski, National Institute of Standards and Technology
Alden A. Dima, National Institute of Standards and Technology
Antoine Vandecreme, National Institute of Standards and Technology
Ram D. Sriram, National Institute of Standards and Technology
Mylene Simon, National Institute of Standards and Technology
Walid Keyrouz, National Institute of Standards and Technology
Carl G. Simon, National Institute of Standards and Technology
Adele P. Peskin, National Institute of Standards and Technology