
Publication


Featured research published by Alden A. Dima.


Cytometry Part A | 2011

Comparison of Segmentation Algorithms For Fluorescence Microscopy Images of Cells

Alden A. Dima; John T. Elliott; James J. Filliben; Michael Halter; Adele P. Peskin; Javier Bernal; Marcin Kociolek; Mary Brady; Hai C. Tang; Anne L. Plant

The analysis of fluorescence microscopy images of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in segmentation results was observed, attributable solely to differences in imaging conditions or to the application of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree to which a cell object is underestimated or overestimated. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with the imaging conditions that determine the sharpness of cell edges and with the geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
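
The bivariate index itself is defined in the article; as a rough illustration of the idea only, the sketch below computes one plausible pair of under/over-estimation scores from a reference mask and an estimated mask. The function name and exact formulation are assumptions, not the authors' published definition.

```python
import numpy as np

def bivariate_similarity(reference, estimate):
    """Illustrative under/over-estimation pair for one cell.

    reference, estimate: boolean 2D masks of the same cell.
    Coverage below 1 indicates the estimate misses part of the
    reference object (underestimation); precision below 1 indicates
    the estimate includes pixels outside the reference (overestimation).
    """
    reference = np.asarray(reference, dtype=bool)
    estimate = np.asarray(estimate, dtype=bool)
    overlap = np.logical_and(reference, estimate).sum()
    coverage = overlap / reference.sum()   # fraction of the true cell recovered
    precision = overlap / estimate.sum()   # fraction of the estimate that is true cell
    return coverage, precision
```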


Journal of Microscopy | 2013

Segmenting time-lapse phase contrast images of adjacent NIH 3T3 cells

Joe Chalfoun; M. Kociolek; Alden A. Dima; Michael Halter; Antonio Cardone; Adele P. Peskin; Peter Bajcsy; Mary Brady

We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. Segmentation of touching cells poses a challenge to the accurate automation of cell counting, tracking, and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise-free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels, defined as pixels associated with individual cells. The segmentation results for a time-lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the Adjusted Rand Index, which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the Number of Cells per Field (NCF), which compares the number of cells identified in the field by manual versus automated analysis. Our results show that, relative to manual segmentation, the automated segmentation has an average Adjusted Rand Index of 0.96 (1 being a perfect match) with a standard deviation of 0.03, and an average NCF difference of 5.39% with a standard deviation of 4.6%.
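
Both metrics are straightforward to compute from label images. The minimal sketch below uses scikit-learn's adjusted_rand_score and assumes 2D integer label images with 0 as background; the paper's exact handling of background pixels may differ.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def compare_segmentations(manual_labels, auto_labels):
    """Pixel-level Adjusted Rand Index plus the relative NCF difference.

    manual_labels, auto_labels: 2D integer arrays in which 0 marks
    background and each positive integer labels one cell.
    """
    # ARI treats each pixel as a sample assigned to a cluster (a cell or background).
    ari = adjusted_rand_score(manual_labels.ravel(), auto_labels.ravel())
    ncf_manual = len(np.unique(manual_labels)) - 1   # exclude the background label
    ncf_auto = len(np.unique(auto_labels)) - 1
    ncf_diff_pct = abs(ncf_auto - ncf_manual) / ncf_manual * 100.0
    return ari, ncf_diff_pct
```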


BMC Bioinformatics | 2014

FogBank: a single cell segmentation across multiple cell lines and image modalities

Joe Chalfoun; Michael P. Majurski; Alden A. Dima; Christina H. Stuelten; Adele P. Peskin; Mary Brady

Background: Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identifying, and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed, due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells, even in challenging datasets such as confluent sheets or colonies.

Results: We present a new automated segmentation method called FogBank that accurately separates cells when they are confluent and touching each other. This technique has been successfully applied to phase contrast, bright field, fluorescence microscopy, and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics; FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets; FogBank outperformed all related algorithms. The accuracy was also visually verified on data sets with 14 cell lines across 3 imaging modalities, comprising 876 segmentation evaluation images.

Conclusions: FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
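
As an illustration of the first feature, histogram binning, the sketch below quantizes pixel intensities into a fixed number of bins before any watershed step. The bin count and function name are assumptions for illustration; FogBank's actual implementation is available as open source.

```python
import numpy as np

def quantize_intensities(image, n_bins=64):
    """Histogram binning as a noise-suppression step (bin count assumed).

    Mapping each pixel to the midpoint of its intensity bin removes small
    fluctuations that would otherwise seed spurious watershed minima.
    """
    image = np.asarray(image, dtype=float)
    edges = np.linspace(image.min(), image.max(), n_bins + 1)
    idx = np.clip(np.digitize(image, edges) - 1, 0, n_bins - 1)
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return midpoints[idx]
```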


Cytometry Part A | 2011

Cell cycle dependent TN‐C promoter activity determined by live cell imaging

Michael Halter; Daniel R. Sisan; Joe Chalfoun; Benjamin L. Stottrup; Antonio Cardone; Alden A. Dima; Alessandro Tona; Anne L. Plant; John T. Elliott

The extracellular matrix protein tenascin-C plays a critical role in development, wound healing, and cancer progression, but how it is controlled and how it exerts its physiological responses remain unclear. We examine the dynamic regulation of TN-C promoter activity by quantifying the behavior of live cells with phase contrast and fluorescence microscopy. We employ an NIH 3T3 cell line stably transfected with the TN-C promoter ligated to the gene sequence for destabilized green fluorescent protein (GFP). Fully automated image analysis routines, validated by comparison with data derived from manual segmentation and tracking of single cells, are used to quantify changes in cellular GFP in hundreds of individual cells throughout their cell cycle during live cell imaging experiments lasting 62 h. We find that individual cells vary substantially in their expression patterns over the cell cycle, but that on average TN-C promoter activity increases during the last 40% of the cell cycle. We also find that the increase in promoter activity is proportional to the activity earlier in the cell cycle. This work illustrates the application of live cell microscopy and automated image analysis of a promoter-driven GFP reporter cell line to identify subtle gene regulatory mechanisms that are difficult to uncover using population-averaged measurements. Published 2011 Wiley-Liss, Inc.
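
As a hedged illustration of the kind of per-cell aggregation such an analysis involves (not the authors' actual pipeline), the sketch below resamples each cell's GFP track onto a normalized [0, 1] cell-cycle axis and averages across cells, which is one way to expose a trend like the rise over the last 40% of the cycle.

```python
import numpy as np

def promoter_activity_profile(tracks, n_points=20):
    """Average a per-cell GFP signal over normalized cell-cycle time.

    tracks: list of 1D arrays, one per cell, each spanning one full
    cell cycle (birth to division). Each track is resampled onto a
    common [0, 1] axis so cells with different cycle lengths can be
    averaged together.
    """
    grid = np.linspace(0.0, 1.0, n_points)
    resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(t)), t)
                 for t in tracks]
    return grid, np.mean(resampled, axis=0)
```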


Radiology | 2015

Evaluation of Low-Contrast Detectability of Iterative Reconstruction across Multiple Institutions, CT Scanner Manufacturers, and Radiation Exposure Levels.

Ganesh Saiprasad; James J. Filliben; Adele P. Peskin; Eliot L. Siegel; Joseph J. Chen; Christopher Trimble; Z Yang; O Christianson; Ehsan Samei; Elizabeth Krupinski; Alden A. Dima

PURPOSE: To compare image resolution from iterative reconstruction with resolution from filtered back projection for low-contrast objects on phantom computed tomographic (CT) images across vendors and exposure levels.

MATERIALS AND METHODS: Randomized repeat scans of an American College of Radiology CT accreditation phantom (module 2, low contrast) were performed for multiple radiation exposures, vendors, and vendor iterative reconstruction algorithms. Eleven volunteers were presented with 900 images by using a custom-designed graphical user interface to perform a task created specifically for this reader study. Results were analyzed by using statistical graphics and analysis of variance.

RESULTS: Across three vendors (blinded as A, B, and C) and three exposure levels, the mean correct classification rate was higher for iterative reconstruction than for filtered back projection (P < .01): 87.4% versus 81.3% at 20 mGy, 70.3% versus 63.9% at 12 mGy, and 61.0% versus 56.4% at 7.2 mGy. There was a significant difference in mean correct classification rate between vendor B and the other two vendors. Across all exposure levels, images obtained with vendor B's scanner outperformed those of the other vendors, with a mean correct classification rate of 74.4%, compared with 68.1% and 68.3% for vendors A and C, respectively. Across all readers, the mean correct classification rate was higher for iterative reconstruction (73.0%) than for filtered back projection (67.0%).

CONCLUSION: The potential exists to reduce radiation dose without compromising low-contrast detectability by using iterative reconstruction instead of filtered back projection. There is substantial variability across vendor reconstruction algorithms.
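
For illustration only, the sketch below shows how a mean correct classification rate comparison and a one-way analysis of variance can be computed with SciPy. The reader scores are hypothetical, and the study's actual design crossed reader, vendor, and exposure level.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-reader correct classification rates (fractions);
# the study's actual reader data are not reproduced here.
iterative = np.array([0.88, 0.86, 0.89, 0.85, 0.87])
fbp = np.array([0.82, 0.80, 0.83, 0.79, 0.81])

print("mean IR: %.3f, mean FBP: %.3f" % (iterative.mean(), fbp.mean()))
f_stat, p_value = f_oneway(iterative, fbp)   # one-way ANOVA across the two groups
print("F = %.2f, p = %.4f" % (f_stat, p_value))
```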


International Symposium on Visual Computing | 2010

Modeling clinical tumors to create reference data for tumor volume measurement

Adele P. Peskin; Alden A. Dima

Expanding on our previously developed method for inserting synthetic objects into clinical computed tomography (CT) data, we model a set of eight clinical tumors that span a range of geometries and locations within the lung. The goal is to create realistic but synthetic tumor data with known volumes. These data sets can be used as ground truth to compare volumetric methods, particularly for lung tumors attached to vascular material or to lung walls, where ambiguities in volume measurement occur. In the process of creating these data sets, we select a sample of commonly seen lung tumor shapes and locations in the lung, and show that for this sample a large fraction of the voxels representing tumors in the gridded data are partially filled. This points to the need for volumetric methods that handle partial volumes accurately.
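
The partial-volume observation is easy to reproduce for an idealized case. The sketch below, assuming a simple spherical tumor on a voxel grid (all parameters hypothetical, unlike the paper's clinically derived shapes), supersamples each voxel to estimate its fill fraction and reports the share of tumor-touching voxels that are only partially filled.

```python
import numpy as np

def partial_volume_fraction(radius_vox=8.0, grid=32, supersample=5):
    """Share of tumor-touching voxels that are only partially filled,
    for a hypothetical spherical tumor of the given radius (in voxels)."""
    s = supersample
    # Sub-voxel sample coordinates, centered on the grid origin.
    axis = (np.arange(grid * s) + 0.5) / s - grid / 2.0
    zz, yy, xx = np.meshgrid(axis, axis, axis, indexing="ij")
    inside = (xx**2 + yy**2 + zz**2) <= radius_vox**2
    # Fill fraction per voxel = mean of its s**3 subsamples.
    fill = inside.reshape(grid, s, grid, s, grid, s).mean(axis=(1, 3, 5))
    touched = fill > 0
    partial = touched & (fill < 1)
    return partial.sum() / touched.sum()

print("partially filled voxels: %.1f%%" % (100 * partial_volume_fraction()))
```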


Academic Radiology | 2016

Algorithm Variability in the Estimation of Lung Nodule Volume From Phantom CT Scans: Results of the QIBA 3A Public Challenge

Maria Athelogou; Hyun J. Kim; Alden A. Dima; Nancy A. Obuchowski; Adele P. Peskin; Marios A. Gavrielides; Nicholas Petrick; Ganesh Saiprasad; Dirk Colditz Colditz; Hubert Beaumont; Estanislao Oubel; Yongqiang Tan; Binsheng Zhao; Jan Martin Kuhnigk; Jan Hendrik Moltz; Guillaume Orieux; Robert J. Gillies; Yuhua Gu; Ninad Mantri; Gregory Goldmacher; Luduan Zhang; Emilio Vega; Michael C. Bloom; Rudresh Jarecha; Grzegorz Soza; Christian Tietjen; Tomoyuki Takeguchi; Hitoshi Yamagata; Sam Peterson; Osama Masoud

RATIONALE AND OBJECTIVES: Quantifying changes in lung tumor volume is important for diagnosis, therapy planning, and evaluation of response to therapy. The aim of this study was to assess the performance of multiple algorithms on a reference data set. The study was organized by the Quantitative Imaging Biomarker Alliance (QIBA).

MATERIALS AND METHODS: The study was organized as a public challenge. Computed tomography scans of synthetic lung tumors in an anthropomorphic phantom were acquired by the Food and Drug Administration. Tumors varied in size, shape, and radiodensity. Participants applied their own semi-automated volume estimation algorithms that either did not allow or allowed post-segmentation correction (type 1 or type 2, respectively). Statistical analysis of accuracy (percent bias) and precision (repeatability and reproducibility) was conducted across algorithms, as well as across nodule characteristics, slice thickness, and algorithm type.

RESULTS: Eighty-four percent of volume measurements of QIBA-compliant tumors were within 15% of the true volume, ranging from 66% to 93% across algorithms, compared with 61% of volume measurements for all tumors (ranging from 37% to 84%). Algorithm type did not affect bias substantially; however, it was an important factor in measurement precision. Algorithm precision was notably better as tumor size increased, worse for irregularly shaped tumors, and on average better for type 1 algorithms. Over all nodules meeting the QIBA Profile, precision, as measured by the repeatability coefficient, was 9.0%, compared with 18.4% overall.

CONCLUSION: The results achieved in this study, using a heterogeneous set of measurement algorithms, support QIBA quantitative performance claims in terms of volume measurement repeatability for nodules meeting the QIBA Profile criteria.
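
As a sketch of the two headline statistics, the code below computes percent bias and a repeatability coefficient (RC = 1.96 * sqrt(2) * within-nodule SD) from replicate measurements. This is a simplified absolute-scale version; the study reports RC as a percentage, which involves a relative or log-scale formulation.

```python
import numpy as np

def percent_bias(measured, true_volume):
    """Percent bias of repeated volume measurements of one nodule."""
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (measured.mean() - true_volume) / true_volume

def repeatability_coefficient(replicates):
    """RC from replicate measurements: rows = nodules, cols = repeats.

    RC = 1.96 * sqrt(2) * within-nodule SD; two repeated measurements
    of the same nodule should differ by less than RC about 95% of the time.
    """
    replicates = np.asarray(replicates, dtype=float)
    within_var = replicates.var(axis=1, ddof=1).mean()
    return 1.96 * np.sqrt(2.0) * np.sqrt(within_var)
```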


Journal of Research of the National Institute of Standards and Technology | 2009

Overlap-Based Cell Tracker.

Joe Chalfoun; Antonio Cardone; Alden A. Dima; Daniel P. Allen; Michael Halter

In order to facilitate the extraction of quantitative data from live cell image sets, automated image analysis methods are needed. This paper presents an introduction to the general principle of overlap-based cell tracking software developed by the National Institute of Standards and Technology (NIST). This cell tracker can track cells across a set of time-lapse images acquired at high rates, based on the amount of overlap between cellular regions in consecutive frames. It is designed to be highly flexible, requires little user parameterization, and has a fast execution time.
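
A minimal sketch of the overlap principle, not the NIST implementation: match each cell in one frame to the cell in the next frame with which it shares the most pixels.

```python
import numpy as np

def match_by_overlap(labels_prev, labels_next):
    """Map each cell in one frame to its best-overlapping cell in the next.

    labels_prev, labels_next: 2D integer label images (0 = background).
    Returns a dict from each label in labels_prev to the label in
    labels_next sharing the most pixels, or None if the cell vanished.
    """
    matches = {}
    for cell in np.unique(labels_prev):
        if cell == 0:
            continue                       # skip background
        under = labels_next[labels_prev == cell]
        under = under[under > 0]           # overlapping foreground pixels only
        matches[cell] = int(np.bincount(under).argmax()) if under.size else None
    return matches
```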


International Symposium on Visual Computing | 2009

A Quality Pre-processor for Biological Cell Images

Adele P. Peskin; Karen Kafadar; Alden A. Dima

We have developed a method to rapidly test the quality of a biological image in order to identify segmentation methods that will render high-quality segmentations of the cells within that image. The key contribution is a measure of the clarity of an individual biological cell within an image that can be quickly and directly used to select a segmentation method during a high-content screening process. This measure is based on the gradient of the pixel intensity field at cell edges and on the distribution of pixel intensities just inside cell edges. We have also developed a technique to synthesize biological cell images of varying quality to create standardized images for testing segmentation methods. Differences in quality indices reflect observed differences in the resulting masks of the same cell imaged under a variety of conditions.
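
As an illustration of the two ingredients named above, edge gradients and the intensity distribution just inside the edge, the sketch below computes a toy per-cell clarity score. The band widths and the ratio form are assumptions, not the paper's published measure.

```python
import numpy as np
from scipy import ndimage

def edge_clarity_index(image, cell_mask):
    """Toy per-cell clarity score: boundary gradient strength relative
    to the intensity spread just inside the boundary."""
    cell_mask = np.asarray(cell_mask, dtype=bool)
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    eroded = ndimage.binary_erosion(cell_mask, iterations=2)
    boundary = cell_mask & ~eroded                    # thin band at the cell edge
    inner = eroded & ~ndimage.binary_erosion(eroded, iterations=2)
    inner_spread = image[inner].std() + 1e-9          # avoid division by zero
    return grad_mag[boundary].mean() / inner_spread
```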


International Symposium on Visual Computing | 2010

Predicting segmentation accuracy for biological cell images

Adele P. Peskin; Alden A. Dima; Joe Chalfoun; John T. Elliott

We have performed segmentation procedures on a large number of images from two mammalian cell lines that were seeded at low density, in order to study trends in the segmentation results and make predictions about the cellular features that affect segmentation accuracy. By comparing segmentation results from approximately 40,000 cells, we find a linear relationship between the highest segmentation accuracy seen for a given cell and the fraction of pixels in the neighborhood of that cell's edge; these pixels are at the greatest risk of error when cells are segmented. We call the ratio of the size of this pixel fraction to the size of the cell the extended edge neighborhood, and this metric can predict the segmentation accuracy of any isolated cell.
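
A minimal sketch of the extended edge neighborhood idea, assuming a band width of two pixels on either side of the boundary (the band width, and how the at-risk region is derived, are assumptions for illustration):

```python
import numpy as np
from scipy import ndimage

def extended_edge_neighborhood(cell_mask, band=2):
    """Fraction of a cell's pixels lying in a band around its edge.

    cell_mask: boolean mask of one isolated cell. Pixels within `band`
    pixels of the boundary, on either side, are counted as at risk.
    """
    cell_mask = np.asarray(cell_mask, dtype=bool)
    dilated = ndimage.binary_dilation(cell_mask, iterations=band)
    eroded = ndimage.binary_erosion(cell_mask, iterations=band)
    edge_band = dilated & ~eroded
    return edge_band.sum() / cell_mask.sum()
```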

Collaboration


Dive into Alden A. Dima's collaborations.

Top Co-Authors

Adele P. Peskin
National Institute of Standards and Technology

James J. Filliben
National Institute of Standards and Technology

Mary Brady
National Institute of Standards and Technology

Joe Chalfoun
National Institute of Standards and Technology

Michael Halter
National Institute of Standards and Technology

Z Yang
University of Maryland