Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter Bajcsy is active.

Publication


Featured research published by Peter Bajcsy.


Journal of Microscopy | 2013

Segmenting time-lapse phase contrast images of adjacent NIH 3T3 cells

Joe Chalfoun; M. Kociolek; Alden A. Dima; Michael Halter; Antonio Cardone; Adele P. Peskin; Peter Bajcsy; Mary Brady

We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. The problem of segmentation, when cells are in contact, poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise‐free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels defined as pixels associated with individual cells. The segmentation results for a time‐lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the ‘Adjusted Rand Index’, which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the ‘Number of Cells per Field’ (NCF), which compares the number of cells identified in the field by manual versus automated analysis. Our results show that the automated segmentation, compared to manual segmentation, has an average Adjusted Rand Index of 0.96 (1 being a perfect match) with a standard deviation of 0.03, and an average difference in the number of cells per field of 5.39% with a standard deviation of 4.6%.
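
The pixel-level metric above can be reproduced with a short, generic implementation. This is a standard Adjusted Rand Index over two integer label masks, a sketch rather than the authors' code; the function name `adjusted_rand_index` and its arguments are illustrative.

```python
import numpy as np

def adjusted_rand_index(mask_a, mask_b):
    """Adjusted Rand Index between two integer label masks.

    1.0 means the two segmentations group pixels identically
    (the label values themselves do not need to match)."""
    a = np.asarray(mask_a).ravel()
    b = np.asarray(mask_b).ravel()
    # contingency table: how often each pair of labels co-occurs at a pixel
    _, inv_a = np.unique(a, return_inverse=True)
    _, inv_b = np.unique(b, return_inverse=True)
    table = np.zeros((inv_a.max() + 1, inv_b.max() + 1), dtype=np.int64)
    np.add.at(table, (inv_a, inv_b), 1)
    comb2 = lambda x: x * (x - 1) // 2  # number of unordered pairs
    sum_ij = comb2(table).sum()
    sum_a = comb2(table.sum(axis=1)).sum()
    sum_b = comb2(table.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(a.size)
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

For identical masks, or masks that differ only by a relabeling of the cells, the index is exactly 1.0.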


Journal of Microscopy | 2015

Empirical Gradient Threshold Technique for Automated Segmentation across Image Modalities and Cell Lines

Joe Chalfoun; Michael P. Majurski; Adele P. Peskin; Catherine Breen; Peter Bajcsy; Mary Brady

New microscopy technologies are enabling image acquisition of terabyte‐sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000×21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user‐set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re‐adjustment over time (requirement 5).
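
The core idea of gradient-threshold segmentation is compact enough to sketch. The following is a simplified stand-in with assumed names, and with a fixed `percentile` parameter where the published method derives the threshold empirically from the gradient histogram:

```python
import numpy as np
from scipy import ndimage as ndi

def gradient_threshold_segment(image, percentile=90.0):
    """Segment foreground by thresholding the Sobel gradient magnitude.

    `percentile` is an illustrative user parameter; the EGT method of
    the paper instead models the gradient-histogram shape to pick the
    threshold automatically."""
    img = np.asarray(image, dtype=float)
    gx = ndi.sobel(img, axis=1)
    gy = ndi.sobel(img, axis=0)
    grad = np.hypot(gx, gy)          # gradient magnitude
    mask = grad > np.percentile(grad, percentile)
    # close small gaps in cell outlines, then fill the enclosed interiors
    mask = ndi.binary_closing(mask, iterations=2)
    mask = ndi.binary_fill_holes(mask)
    return mask
```

What makes the published technique robust across modalities is precisely that the threshold is not a fixed percentile but is inferred from the gradient histogram itself.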


BMC Bioinformatics | 2015

Survey statistics of automated segmentations applied to optical imaging of mammalian cells

Peter Bajcsy; Antonio Cardone; Joe Chalfoun; Michael Halter; Derek Juba; Marcin Kociolek; Michael P. Majurski; Adele P. Peskin; Carl G. Simon; Mylene Simon; Antoine Vandecreme; Mary Brady

Background: The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements.

Methods: We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories.

Results: The survey paper presents to a reader: (a) a state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.

Conclusions: The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.


Stem Cell Research | 2016

Large-scale time-lapse microscopy of Oct4 expression in human embryonic stem cell colonies.

Kiran Bhadriraju; Michael Halter; Julien M. Amelot; Peter Bajcsy; Joe Chalfoun; Antoine Vandecreme; Barbara S. Mallon; Kyeyoon Park; Subhash Sista; John T. Elliott; Anne L. Plant

Identification and quantification of the characteristics of stem cell preparations is critical for understanding stem cell biology and for the development and manufacturing of stem cell based therapies. We have developed image analysis and visualization software that allows effective use of time-lapse microscopy to provide spatial and dynamic information from large numbers of human embryonic stem cell colonies. To achieve statistically relevant sampling, we examined >680 colonies from 3 different preparations of cells over 5 days each, generating a total experimental dataset of 0.9 terabyte (TB). The 0.5 Giga-pixel images at each time point were represented by multi-resolution pyramids and visualized using the Deep Zoom JavaScript library, extended to support viewing Giga-pixel images over time and extracting data on individual colonies. We present a methodology that enables quantification of variations in nominally-identical preparations and between colonies, correlation of colony characteristics with Oct4 expression, and identification of rare events.
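
Building the kind of multi-resolution pyramid served to a Deep Zoom viewer can be sketched with repeated 2×2 block averaging. This is a toy single-channel version under assumed names; real Deep Zoom pyramids are additionally cut into fixed-size tiles per level.

```python
import numpy as np

def build_pyramid(image, min_size=64):
    """Multi-resolution pyramid by repeated 2x2 mean downsampling.

    Returns a list of levels from full resolution down to roughly
    `min_size` pixels on the shorter side."""
    levels = [np.asarray(image, dtype=float)]
    while min(levels[-1].shape) // 2 >= min_size:
        h, w = levels[-1].shape
        # crop to even dimensions, then average non-overlapping 2x2 blocks
        img = levels[-1][: h - h % 2, : w - w % 2]
        levels.append(img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels
```

A viewer then fetches only the level (and tiles) matching the current zoom, which is what makes Giga-pixel images browsable in a web client.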


Clinical Proteomics | 2014

Quantifying CD4 receptor protein in two human CD4+ lymphocyte preparations for quantitative flow cytometry

Meiyao Wang; Martin Misakian; Hua-Jun He; Peter Bajcsy; Fatima Abbasi; Jeffrey M. Davis; Kenneth D. Cole; Illarion V. Turko; Lili Wang

Background: In our previous study that characterized different human CD4+ lymphocyte preparations, it was found that both commercially available cryopreserved peripheral blood mononuclear cells (PBMC) and a commercially available lyophilized PBMC (Cyto-Trol™) preparation fulfilled a set of criteria for serving as biological calibrators for quantitative flow cytometry. However, the biomarker CD4 protein expression level measured for T helper cells from Cyto-Trol was about 16% lower than those for cryopreserved PBMC and fresh whole blood using flow cytometry and mass cytometry. A primary reason was hypothesized to be steric interference in anti-CD4 antibody binding to the smaller sized lyophilized control cells.

Methods: Targeted multiple reaction monitoring (MRM) mass spectrometry (MS) is used to quantify the copy number of CD4 receptor protein per CD4+ lymphocyte. Scanning electron microscopy (SEM) is utilized to help investigate the underlying reasons for the observed difference between the CD4 receptor copy number per cell determined by MRM MS and the CD4 expression measured previously by flow cytometry.

Results: The copy number of CD4 receptor proteins on the surface of the CD4+ lymphocyte in cryopreserved PBMCs and in lyophilized control cells is determined to be (1.45 ± 0.09) × 10⁵ and (0.85 ± 0.11) × 10⁵, respectively, averaged over four signature peptides using MRM MS. In comparison with cryopreserved PBMCs, there is more variation in the CD4 copy number in lyophilized control cells determined from each signature peptide. SEM images of CD4+ lymphocytes from lyophilized control cells are very different from those of CD4+ T cells from whole blood and cryopreserved PBMC.

Conclusions: Because of the lyophilization process applied to Cyto-Trol control cells, the lower CD4 density value, defined as the copy number of CD4 receptors per CD4+ lymphocyte, averaged over three different production lots, is most likely explained by the loss of CD4 receptors on damaged and/or broken microvilli, where CD4 receptors reside. Steric hindrance of antibody binding and the association of CD4 receptors with other biomolecules likely contribute significantly to the nearly 50% lower CD4 receptor density value for cryopreserved PBMC determined by flow cytometry compared to the value obtained from MRM MS.


IEEE Computer | 2016

Enabling Stem Cell Characterization from Large Microscopy Images

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Joe Chalfoun; Michael P. Majurski; Mary Brady

Microscopy could be an important tool for characterizing stem cell products if quantitative measurements could be collected over multiple spatial and temporal scales. Although cells change state over time and are several orders of magnitude smaller than cell products, modern microscopes are already capable of imaging large spatial areas, repeating imaging over time, and acquiring images over several spectra. However, characterizing stem cell products from such large image collections is challenging because of the data size, the required computations, and the lack of interactive quantitative measurements needed to determine release criteria. We present a measurement web system consisting of available algorithms, extensions to a client-server framework using Deep Zoom, and the configuration know-how to provide the information needed for inspecting the quality of a cell product. The cell and other data sets are accessible via the prototype web-based system at http://isg.nist.gov/deepzoomweb. Microscopes can now cover large spatial areas and capture stem cell behavior over time; however, without statistically reliable quantitative measures of stem cell quality, products cannot be released to market. A web-based measurement system overcomes desktop limitations by leveraging cloud and cluster computing for offline computations and by using Deep Zoom extensions for interactive viewing and measurement.


Journal of Microscopy | 2015

Background intensity correction for terabyte-sized time-lapse images

Joe Chalfoun; Michael P. Majurski; Kiran Bhadriraju; Steven P. Lund; Peter Bajcsy; Mary Brady

Several computational challenges associated with large‐scale background image correction of terabyte‐sized fluorescent images are discussed and analysed in this paper. Dark current, flat‐field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation of our work comes from the need to quantify the dynamics of OCT‐4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal‐to‐noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat‐field images, the most optimal GFP background correction is obtained by using a data partition based on forming a set of submosaic images with a polynomial surface background model. The resulting image after correction is characterized by an RMS of about 8, and an SNR value above 5 (by the Rose criterion) after 4 × 4 downsampling. The new technique generates an image with half the RMS value and double the SNR value compared to an approach that assumes a constant background throughout the mosaic. We show that the background noise in terabyte‐sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR‐driven downsampling) such that the total RMS value from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve.
In comparison to previous work, past fluorescent image background correction methods were designed for a single FOV and have not been applied to terabyte‐sized images with large mosaic FOVs, low SNR, and diminishing access to background information over time as cell colonies grow to span multiple FOVs entirely. The code is available as open source at https://isg.nist.gov/.
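
The polynomial-surface background model can be sketched as an ordinary least-squares fit over background pixels. This is a single-image simplification; the paper optimizes over mosaic partitions and several models, and the function and parameter names below are illustrative.

```python
import numpy as np

def fit_background_surface(image, mask=None, degree=2):
    """Least-squares fit of a 2D polynomial surface to background pixels.

    `mask` selects the pixels treated as background; by default every
    pixel is used, which is only reasonable for sparsely covered images."""
    img = np.asarray(image, dtype=float)
    yy, xx = np.mgrid[0 : img.shape[0], 0 : img.shape[1]]
    if mask is None:
        mask = np.ones(img.shape, dtype=bool)
    # design matrix of polynomial terms x^i * y^j with i + j <= degree
    terms = [(xx ** i) * (yy ** j)
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([t[mask] for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    return sum(c * t for c, t in zip(coef, terms))
```

Subtracting the fitted surface from the image gives the corrected image; restricting `mask` to known background pixels keeps bright cell colonies from biasing the fit.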


Scientific Reports | 2017

MIST: Accurate and Scalable Microscopy Image Stitching Tool with Stage Modeling and Error Minimization

Joe Chalfoun; Michael P. Majurski; Tim Blattner; Kiran Bhadriraju; Walid Keyrouz; Peter Bajcsy; Mary Brady

Automated microscopy can image specimens larger than the microscope’s field of view (FOV) by stitching overlapping image tiles. It also enables time-lapse studies of entire cell cultures in multiple imaging modalities. We created MIST (Microscopy Image Stitching Tool) for rapid and accurate stitching of large 2D time-lapse mosaics. MIST estimates the mechanical stage model parameters (actuator backlash, and stage repeatability ‘r’) from computed pairwise translations and then minimizes stitching errors by optimizing the translations within a (4r)² square area. MIST has a performance-oriented implementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of time-lapse multi-channel mosaics 15 to 100 times faster than existing tools. We created 15 reference datasets to quantify MIST’s stitching accuracy. The datasets consist of three preparations of stem cell colonies seeded at low density and imaged with varying overlap (10 to 50%). The location and size of 1150 colonies are measured to quantify stitching accuracy. MIST generated stitched images with an average centroid distance error that is less than 2% of a FOV. The sources of these errors include mechanical uncertainties, specimen photobleaching, segmentation, and stitching inaccuracies. MIST produced higher stitching accuracy than three open-source tools. MIST is available in ImageJ at isg.nist.gov.
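
The pairwise translations that stitching tools start from are classically computed with phase correlation. Below is a minimal integer-pixel sketch under assumed names; MIST additionally models the stage mechanics and refines the translations, which this sketch does not attempt.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the cyclic translation (dy, dx) such that
    np.roll(moving, (dy, dx), axis=(0, 1)) realigns `moving` with `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    F /= np.abs(F) + 1e-12           # keep phase only (cross-power spectrum)
    corr = np.fft.ifft2(F).real      # sharp peak at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

In a stitching pipeline this is run on the overlap region of each adjacent tile pair, and the resulting translations are then globally optimized.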


Journal of Microscopy | 2015

A method for the evaluation of thousands of automated 3D stem cell segmentations

Peter Bajcsy; Mylene Simon; Stephen J. Florczyk; Carl G. Simon; Derek Juba; Mary Brady

No segmentation method performs perfectly on every dataset when compared to human segmentation. Evaluation procedures for segmentation algorithms therefore become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exacerbated when dealing with thousands of three‐dimensional (3D) image volumes because of the amount of computation and manual input needed.


International Conference on Big Data | 2014

Spatial computations over terabyte-sized images on hadoop platforms

Peter Bajcsy; Phuong T. Nguyen; Antoine Vandecreme; Mary Brady

Our objective is to lower the barrier of executing spatial image computations in a computer cluster/cloud environment instead of in a desktop/laptop computing environment. We research two related problems encountered during an execution of spatial computations over terabyte-sized images using Apache Hadoop running on distributed computing resources. The two problems address (a) detection of spatial computations and their parameter estimation from a library of image processing functions, and (b) partitioning of image data for spatial image computations on Hadoop cluster/cloud computing platforms in order to minimize network data transfer. The first problem is solved by designing an iterative estimation methodology. The second problem is formulated as an optimization over three partitioning schemas (physical, logical without overlap and logical with overlap), and evaluated over several system configuration parameters. Our experimental results for the two problems demonstrate 100% accuracy in detecting spatial computations in the Java Advanced Imaging and ImageJ libraries, a speed-up of 5.36× of the developed logical image partitioning with overlap over the default Hadoop physical partitioning, and 3.14× faster execution of logical partitioning with overlap than of logical partitioning without overlap. The novelty of our work is in designing an extension to Apache Hadoop to run a class of spatial image processing operations efficiently on a distributed computing resource.
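
The "logical partitioning with overlap" idea can be illustrated on a single machine: each tile carries a halo of neighboring pixels so that window-based spatial operations can run on every tile independently, with no cross-partition data transfer. The names below are illustrative; the paper implements this as a Hadoop extension over distributed storage.

```python
import numpy as np

def logical_partitions(image, tile=256, halo=16):
    """Split an image into logical tiles, each padded with a `halo` of
    surrounding pixels (clamped at the image border).

    Returns a list of ((y0, x0), padded_tile) pairs, where (y0, x0) is
    the top-left corner of the un-padded tile in the full image."""
    h, w = image.shape[:2]
    parts = []
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            # expand the tile by the halo, clamped at the image border
            ys, xs = max(y0 - halo, 0), max(x0 - halo, 0)
            ye, xe = min(y1 + halo, h), min(x1 + halo, w)
            parts.append(((y0, x0), image[ys:ye, xs:xe]))
    return parts
```

The halo duplicates a small band of pixels per tile, trading a little extra storage for the elimination of inter-node communication during filtering, which is the effect the reported speed-ups exploit.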

Collaboration


Dive into Peter Bajcsy's collaborations.

Top Co-Authors

Joe Chalfoun (National Institute of Standards and Technology)

Mary Brady (National Institute of Standards and Technology)

Mylene Simon (National Institute of Standards and Technology)

Michael P. Majurski (National Institute of Standards and Technology)

Antoine Vandecreme (National Institute of Standards and Technology)

Michael Halter (National Institute of Standards and Technology)

Adele P. Peskin (National Institute of Standards and Technology)

Carl G. Simon (National Institute of Standards and Technology)

Julien M. Amelot (National Institute of Standards and Technology)

Anne L. Plant (National Institute of Standards and Technology)