
Publication


Featured research published by Joe Chalfoun.


Journal of Microscopy | 2013

Segmenting time-lapse phase contrast images of adjacent NIH 3T3 cells

Joe Chalfoun; M. Kociolek; Alden A. Dima; Michael Halter; Antonio Cardone; Adele P. Peskin; Peter Bajcsy; Mary Brady

We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. The problem of segmentation, when cells are in contact, poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise‐free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels defined as pixels associated with individual cells. The segmentation results for a time‐lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the ‘Adjusted Rand Index’ which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the ‘Number of Cells per Field’ (NCF) which compares the number of cells identified in the field by manual versus automated analysis. Our results show that the automated segmentation compared to manual segmentation has an average adjusted rand index of 0.96 (1 being a perfect match), with a standard deviation of 0.03, and an average difference of the two numbers of cells per field equal to 5.39% with a standard deviation of 4.6%.
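The 'Adjusted Rand Index' used above can be computed from a contingency table of the two label assignments (e.g. flattened manual and automated masks). A minimal sketch of the standard ARI formula, not the authors' implementation:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two flat label assignments (e.g. flattened masks).
    1.0 means the two segmentations agree up to label permutation."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))   # contingency table n_ij
    a = Counter(labels_a)                      # row sums a_i
    b = Counter(labels_b)                      # column sums b_j
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:                  # degenerate case (single cluster)
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Note that ARI is invariant to relabeling: swapping all cell IDs in one mask leaves the score unchanged, which is why it suits comparing manual versus automated segmentations.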


BMC Bioinformatics | 2014

FogBank: a single cell segmentation across multiple cell lines and image modalities

Joe Chalfoun; Michael P. Majurski; Alden A. Dima; Christina H. Stuelten; Adele P. Peskin; Mary Brady

Background: Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identifying and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed, owing to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies.

Results: We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles, with two new features that improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets; FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images.

Conclusions: FogBank produces single-cell segmentations from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
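The histogram-binning step described above can be sketched as a simple intensity quantization before the watershed runs; this is an illustrative reconstruction (bin count chosen arbitrarily), not the released FogBank code:

```python
import numpy as np

def quantize_intensities(image, n_bins=50):
    """Map pixel intensities onto n_bins discrete levels, suppressing the
    small intensity fluctuations that drive watershed over-segmentation."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:                           # flat image: a single level
        return np.zeros_like(image, dtype=np.int32)
    edges = np.linspace(lo, hi, n_bins + 1)
    # interior bin edges only, so results fall in 0 .. n_bins - 1
    return np.digitize(image, edges[1:-1]).astype(np.int32)
```

Fewer bins mean stronger noise suppression but coarser basin boundaries; the trade-off is the same one the paper addresses with its two anti-over-segmentation features.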


Journal of Microscopy | 2015

Empirical Gradient Threshold Technique for Automated Segmentation across Image Modalities and Cell Lines

Joe Chalfoun; Michael P. Majurski; Adele P. Peskin; Catherine Breen; Peter Bajcsy; Mary Brady

New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000 × 21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meets all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re-adjustment over time (requirement 5).
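Gradient-based thresholding of the kind discussed above can be sketched with a finite-difference gradient magnitude and a threshold on its distribution. The fixed percentile below is an illustrative stand-in: the paper's contribution is an empirical model for choosing this threshold automatically, which is not reproduced here:

```python
import numpy as np

def gradient_threshold_mask(image, percentile=50.0):
    """Foreground mask from a thresholded gradient magnitude.
    The percentile is a placeholder; EGT derives the threshold
    empirically from the gradient histogram instead."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    thresh = np.percentile(mag, percentile)
    return mag > thresh
```

Flat background and flat cell interiors have near-zero gradient, so only edge regions survive the threshold; real pipelines then fill holes to recover whole cells or colonies.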


BMC Bioinformatics | 2015

Survey statistics of automated segmentations applied to optical imaging of mammalian cells

Peter Bajcsy; Antonio Cardone; Joe Chalfoun; Michael Halter; Derek Juba; Marcin Kociolek; Michael P. Majurski; Adele P. Peskin; Carl G. Simon; Mylene Simon; Antoine Vandecreme; Mary Brady

Background: The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements.

Methods: We first define the scope of this survey and a classification schema. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of the assigned categories.

Results: The survey presents to a reader: (a) a state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.

Conclusions: The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.
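The occurrence and co-occurrence statistics mentioned above can be computed directly from per-paper category assignments. The paper records in the usage below are hypothetical examples, not the survey's actual data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_stats(papers):
    """papers: iterable of sets of category labels, one set per paper.
    Returns (per-label occurrence counts, unordered-pair co-occurrence counts)."""
    occurrence = Counter()
    cooccurrence = Counter()
    for labels in papers:
        occurrence.update(labels)
        # count each unordered pair of labels assigned to the same paper
        cooccurrence.update(frozenset(p) for p in combinations(sorted(labels), 2))
    return occurrence, cooccurrence
```

The resulting counters map straightforwardly onto the histogram and co-occurrence tables a hyperlinked summary page would render.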


international conference on big data | 2013

Terabyte-sized image computations on Hadoop cluster platforms

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Phuong T. Nguyen; Joe Chalfoun; Mary Brady

We present a characterization of four basic terabyte-sized image computations on a Hadoop cluster in terms of their relative efficiency according to the modified Amdahl's law. The work is motivated by the lack of standard benchmarks and stress tests for big image processing operations on a Hadoop computer cluster platform. Our benchmark design and evaluations were performed on one of three microscopy image sets, each consisting of over one half terabyte. All image processing benchmarks executed on the NIST Raritan cluster with Hadoop were compared against baseline measurements, such as the Terasort/Teragen tests previously designed for Hadoop, image processing executions on a multiprocessor desktop, and executions on the NIST Raritan cluster using Java Remote Method Invocation (RMI) with multiple configurations. By applying our methodology to assessing the efficiency of computations on cluster configurations, we could rank computation configurations and aid scientists in measuring the benefits of running image processing on a Hadoop cluster.
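The efficiency characterization above builds on the classic Amdahl's-law speedup formula; a minimal sketch of the unmodified law (the paper's modification for cluster overheads is not reproduced here):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Ideal speedup when a fraction p of the work parallelizes
    perfectly across n workers and (1 - p) remains serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

Even a 90% parallel workload tops out at a 10x speedup regardless of cluster size, which is why measuring the serial fraction of I/O-heavy image operations matters when ranking Hadoop configurations.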


Cytometry Part A | 2011

Cell cycle dependent TN‐C promoter activity determined by live cell imaging

Michael Halter; Daniel R. Sisan; Joe Chalfoun; Benjamin L. Stottrup; Antonio Cardone; Alden A. Dima; Alessandro Tona; Anne L. Plant; John T. Elliott

The extracellular matrix protein tenascin‐C plays a critical role in development, wound healing, and cancer progression, but how it is controlled and how it exerts its physiological responses remain unclear. By quantifying the behavior of live cells with phase contrast and fluorescence microscopy, the dynamic regulation of TN‐C promoter activity is examined. We employ an NIH 3T3 cell line stably transfected with the TN‐C promoter ligated to the gene sequence for destabilized green fluorescent protein (GFP). Fully automated image analysis routines, validated by comparison with data derived from manual segmentation and tracking of single cells, are used to quantify changes in the cellular GFP in hundreds of individual cells throughout their cell cycle during live cell imaging experiments lasting 62 h. We find that individual cells vary substantially in their expression patterns over the cell cycle, but that on average TN‐C promoter activity increases during the last 40% of the cell cycle. We also find that the increase in promoter activity is proportional to the activity earlier in the cell cycle. This work illustrates the application of live cell microscopy and automated image analysis of a promoter‐driven GFP reporter cell line to identify subtle gene regulatory mechanisms that are difficult to uncover using population averaged measurements. Published 2011 Wiley‐Liss, Inc.


international conference on parallel processing | 2014

A Hybrid CPU-GPU System for Stitching Large Scale Optical Microscopy Images

Timothy Blattner; Walid Keyrouz; Joe Chalfoun; Bertrand C. Stivalet; Mary Brady; Shujia Zhou

Researchers in various fields are using optical microscopy to acquire very large images, 10 000 to 200 000 pixels per side. Optical microscopes acquire these images as grids of overlapping partial images (thousands of pixels per side) that are then stitched together via software. Composing such large images is a compute- and data-intensive task even for modern machines. Researchers compound this difficulty further by obtaining time-series, volumetric, or multiple-channel images, with the resulting data sets now having or approaching terabyte sizes. We present a scalable hybrid CPU-GPU implementation of image stitching that processes large image sets at near-interactive rates. Our implementation scales well with both image sizes and the number of CPU cores and GPU cards in a machine. It processes a grid of 42 × 59 tiles into a 17 k × 22 k pixel image in 43 s (end-to-end execution time) when using one NVIDIA Tesla C2070 card and two Intel Xeon E5620 quad-core CPUs, and in 29 s when using two Tesla C2070 cards and the same two CPUs. It also composes and renders the composite image, without saving it, in 15 s. In comparison, ImageJ/Fiji, which is widely used by biologists, has an image stitching plugin that takes > 3.6 h for the same workload despite being multithreaded and executing the same mathematical operators; it composes and saves the large image in an additional 1.5 h. Our implementation takes advantage of coarse-grain parallelism. It organizes the computation into a pipeline architecture that spans CPU and GPU resources and overlaps computation with data motion. It achieves a nearly 10× performance improvement over our optimized non-pipelined GPU implementation and demonstrates near-linear speedup when increasing the CPU thread count and the number of GPUs.
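The pipeline idea of overlapping data motion with computation can be sketched with a bounded queue between a reader stage and a compute stage. The stage bodies and queue depth are illustrative assumptions, not the paper's CPU-GPU implementation:

```python
import queue
import threading

def run_pipeline(tiles, read, compute, depth=4):
    """Overlap I/O (read) with computation (compute) via a bounded queue,
    so the next tile is loading while the current one is processed."""
    q = queue.Queue(maxsize=depth)   # backpressure: reader stays `depth` ahead
    results = []

    def reader():
        for t in tiles:
            q.put(read(t))
        q.put(None)                  # sentinel: no more tiles

    threading.Thread(target=reader, daemon=True).start()
    while (item := q.get()) is not None:
        results.append(compute(item))
    return results
```

The bounded queue caps memory use while keeping both stages busy; the paper's version extends this pattern across CPU threads and GPU cards.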


Stem Cell Research | 2016

Large-scale time-lapse microscopy of Oct4 expression in human embryonic stem cell colonies.

Kiran Bhadriraju; Michael Halter; Julien M. Amelot; Peter Bajcsy; Joe Chalfoun; Antoine Vandecreme; Barbara S. Mallon; Kyeyoon Park; Subhash Sista; John T. Elliott; Anne L. Plant

Identification and quantification of the characteristics of stem cell preparations is critical for understanding stem cell biology and for the development and manufacturing of stem cell based therapies. We have developed image analysis and visualization software that allows effective use of time-lapse microscopy to provide spatial and dynamic information from large numbers of human embryonic stem cell colonies. To achieve statistically relevant sampling, we examined >680 colonies from 3 different preparations of cells over 5 days each, generating a total experimental dataset of 0.9 terabyte (TB). The 0.5 Giga-pixel images at each time point were represented by multi-resolution pyramids and visualized using the Deep Zoom Javascript library extended to support viewing Giga-pixel images over time and extracting data on individual colonies. We present a methodology that enables quantification of variations in nominally-identical preparations and between colonies, correlation of colony characteristics with Oct4 expression, and identification of rare events.
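A multi-resolution pyramid of the kind used above for Deep Zoom viewing can be built by repeated 2x mean-downsampling. A sketch for single-channel images with power-of-two sides, not the project's actual tiling code:

```python
import numpy as np

def build_pyramid(image):
    """Return pyramid levels from full resolution down to 1 x 1,
    each level a 2x mean-downsample of the previous one."""
    levels = [image.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        h, w = a.shape[0] // 2, a.shape[1] // 2
        # average each 2x2 block into one pixel of the next level
        levels.append(a.reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return levels
```

A viewer then serves only the level (and tiles within it) matching the current zoom, which is what makes giga-pixel mosaics browsable over time.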


IEEE Computer | 2016

Enabling Stem Cell Characterization from Large Microscopy Images

Peter Bajcsy; Antoine Vandecreme; Julien M. Amelot; Joe Chalfoun; Michael P. Majurski; Mary Brady

Microscopy could be an important tool for characterizing stem cell products if quantitative measurements could be collected over multiple spatial and temporal scales. Because cells change state over time and are several orders of magnitude smaller than cell products, modern microscopes must image large spatial areas, repeat imaging over time, and acquire images over several spectra; they are already capable of all three. However, characterizing stem cell products from such large image collections is challenging because of the data size, the required computations, and the lack of interactive quantitative measurements needed to determine release criteria. We present a measurement web system consisting of available algorithms, extensions to a client-server framework using Deep Zoom, and the configuration know-how to provide the information needed for inspecting the quality of a cell product. The cell and other data sets are accessible via the prototype web-based system at http://isg.nist.gov/deepzoomweb.

Microscopes can now cover large spatial areas and capture stem cell behavior over time. However, without statistically reliable quantitative measures of stem cell quality, products cannot be released to market. A web-based measurement system overcomes desktop limitations by leveraging cloud and cluster computing for offline computations and by using Deep Zoom extensions for interactive viewing and measurement.


Journal of Microscopy | 2015

Background intensity correction for terabyte-sized time-lapse images

Joe Chalfoun; Michael P. Majurski; Kiran Bhadriraju; Steven P. Lund; Peter Bajcsy; Mary Brady

Several computational challenges associated with large-scale background image correction of terabyte-sized fluorescent images are discussed and analysed in this paper. Dark current, flat-field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation for our work comes from the need to quantify the dynamics of OCT-4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal-to-noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat-field images, the optimal GFP background correction is obtained by using a data partition that forms a set of submosaic images with a polynomial surface background model. The resulting corrected image is characterized by an RMS of about 8 and, after 4 × 4 downsampling, an SNR above 5 by the Rose criterion. The new technique generates an image with half the RMS value and double the SNR when compared to an approach that assumes a constant background throughout the mosaic. We show that the background noise in terabyte-sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR-driven downsampling) such that the total RMS value from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve.
In comparison, previous fluorescent-image background correction methods were designed for a single FOV and have not been applied to terabyte-sized images with large mosaic FOVs, low SNR and diminishing access to background information over time as cell colonies grow to span multiple FOVs. The code is available as open source at https://isg.nist.gov/.
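The polynomial surface background model can be sketched as a least-squares fit of a quadratic in pixel coordinates; the model order here is an assumption for illustration, not necessarily the order the paper selects:

```python
import numpy as np

def fit_quadratic_background(image):
    """Least-squares quadratic surface z = c0 + c1*x + c2*y
    + c3*x^2 + c4*x*y + c5*y^2 fitted over every pixel."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # design matrix: one row of basis terms per pixel
    A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y],
                 axis=-1).reshape(-1, 6)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    return (A @ coeffs).reshape(h, w)   # fitted background surface
```

Subtracting (or dividing out) the fitted surface per submosaic is what removes the smoothly varying background component before the dark-current and flat-field terms are applied.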

Collaboration


Dive into Joe Chalfoun's collaborations.

Top Co-Authors

Mary Brady, National Institute of Standards and Technology
Peter Bajcsy, University of Illinois at Urbana–Champaign
Michael P. Majurski, National Institute of Standards and Technology
Mylene Simon, National Institute of Standards and Technology
Michael Halter, National Institute of Standards and Technology
Alden A. Dima, National Institute of Standards and Technology
Antoine Vandecreme, National Institute of Standards and Technology
Adele P. Peskin, National Institute of Standards and Technology
John T. Elliott, National Institute of Standards and Technology
Walid Keyrouz, National Institute of Standards and Technology