
Publication


Featured research published by Thomas Sadowski.


Visual Information Processing Conference | 2007

Comparison of thresholding techniques on nanoparticle images

John S. DaPonte; Thomas Sadowski; Christine Broadbridge; D. Day; A. Lehman; D. Krishna; L. Marinella; P. Munhutu; M. Sawicki

Thresholding is an image processing procedure used to convert an image consisting of gray level pixels into a black and white binary image. One application of thresholding is particle analysis. Once foreground objects are separated from the background, a quantitative analysis that characterizes the number, size and shape of particles is obtained, which can then be used to evaluate a series of nanoparticle samples. Numerous thresholding techniques exist, differing primarily in how they deal with variations in noise, illumination and contrast. In this paper, several popular thresholding algorithms are qualitatively and quantitatively evaluated on transmission electron microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six thresholding algorithms were investigated: Otsu, Ridler-Calvard, Kittler, Entropy, Tsai and Maximum Likelihood. The Ridler-Calvard algorithm was not included in the quantitative analysis because it did not produce acceptable qualitative results for the images in the series. Two quantitative measures were used to evaluate these algorithms: one based on comparing object area and the other on object diameter, before and after thresholding. For AFM images the Kittler algorithm yielded the best results, followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the top results for TEM images, followed by the Entropy and Kittler methods.
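As an illustrative sketch (not the paper's code), Otsu's method, one of the six algorithms evaluated, picks the gray level that maximizes the between-class variance of the image histogram:

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level that maximizes between-class variance (Otsu)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    prob = hist.astype(float) / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability up to level k
    mu = np.cumsum(prob * np.arange(256))      # cumulative mean up to level k
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # endpoints are 0/0; treat as zero
    return int(np.argmax(sigma_b))

# synthetic "nanoparticle" image: dark background, one bright square blob
rng = np.random.default_rng(0)
img = rng.normal(50, 10, (64, 64))
img[20:40, 20:40] = rng.normal(200, 10, (20, 20))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
binary = img > t   # foreground particles as True
```

On a bimodal image like this, the threshold lands between the background and foreground modes, separating the blob cleanly.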


Visualization and Data Analysis | 2009

Computer assisted analysis of microscopy images

M. Sawicki; P. Munhutu; John S. DaPonte; Christine Caragianis-Broadbridge; Ann Lehman; Thomas Sadowski; E. Garcia; C. Heyden; L. Mirabelle; P. Benjamin

The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly more dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles with known size. A secondary goal was to compare this popular open-source general purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur in specimen preparation into separate particles using a watershed algorithm was particularly helpful [4].
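The watershed step the authors found helpful in ImageJ can be sketched outside ImageJ as well. The following illustrative Python/SciPy version (not the study's actual workflow) splits two touching disks by seeding `scipy.ndimage.watershed_ift` from peaks of the distance transform:

```python
import numpy as np
from scipy import ndimage as ndi

# synthetic binary image: two touching circular "particles"
yy, xx = np.mgrid[0:60, 0:60]
blob = ((yy - 30) ** 2 + (xx - 20) ** 2 < 144) | ((yy - 30) ** 2 + (xx - 40) ** 2 < 144)

# distance to background peaks at each particle center
dist = ndi.distance_transform_edt(blob)
seeds, n_seeds = ndi.label(dist > 0.8 * dist.max())

# watershed on the inverted distance map; a negative marker means background
markers = seeds.astype(np.int16)
markers[~blob] = -1
cost = (dist.max() - dist).astype(np.uint8)
labels = ndi.watershed_ift(cost, markers)
```

Each disk ends up with its own label, so the agglomeration is counted as two particles instead of one.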


Visual Information Processing Conference | 2007

Application of particle analysis to transmission electron microscopy (TEM)

John S. DaPonte; Thomas Sadowski; Christine Broadbridge; D. Day; A. Lehman; D. Krishna; L. Marinella; P. Munhutu; M. Sawicki

Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their size may give them properties that are very different from bulk materials. Further advancement of nanotechnology cannot be achieved without an increased understanding of nanoparticle properties such as size (diameter) and size distribution, frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. More recently, computer imaging particle analysis has emerged as an objective alternative by counting and measuring objects in a binary image. This paper will describe the procedures used to preprocess a set of gray scale TEM images so that they could be correctly thresholded into binary images. This allows for a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods including pseudo flat field correction and rolling ball background correction were investigated, with the rolling ball algorithm yielding the best results. Examples of particle analysis will be presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles will be discussed. This filtering technique is based on identifying the location of small particles in the binary image and removing them without affecting the size of other larger particles.
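The rolling-ball background correction that worked best here is commonly approximated by a grayscale morphological opening: the opening with a window larger than the particles estimates the slowly varying background, which is then subtracted. A minimal sketch of that idea (not the paper's implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def subtract_background(image, radius=8):
    """Approximate rolling-ball correction: a grey opening with a window
    larger than the particles estimates the slowly varying background."""
    size = 2 * radius + 1
    background = ndi.grey_opening(image, size=(size, size))
    return image - background

# uneven illumination (a linear ramp) plus one small bright "particle"
yy, xx = np.mgrid[0:60, 0:60]
img = 0.5 * xx.astype(float)
img[30:34, 30:34] += 100.0
flat = subtract_background(img, radius=8)
```

After subtraction the ramp is gone and the particle's intensity above background survives intact, so a single global threshold now works across the whole image.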


Nanomaterials Synthesis, Interfacing, and Integrating in Devices, Circuits, and Systems II | 2007

Characterization of nanoparticles by computer imaging particle analysis

John S. DaPonte; Thomas Sadowski; Christine Broadbridge; P. Munhutu; Ann Lehman; D. Krishnamoorthy; E. Garcia; M. Sawicki; C. Heyden; L. Mirabelle; P. Benjamin

Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their decreased size may give rise to certain properties that are very different from those exhibited by bulk materials. Further advancement of nanotechnology cannot be realized without an increased understanding of nanoparticle properties such as size (diameter) and size distribution. Frequently, these parameters are evaluated using numerous imaging modalities including transmission electron microscopy (TEM) and atomic force microscopy (AFM). In the past, these parameters have been obtained from digitized images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. Recently, computer imaging particle analysis routines that count and measure objects in a binary image [1] have emerged as an objective and rapid alternative to manual techniques. In this paper, a procedure is described that can be used to preprocess a set of gray scale images so that they are correctly thresholded into binary images prior to a particle analysis, ultimately resulting in a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Particle analysis was performed on two types of calibration samples imaged using AFM and TEM. Additionally, results of particle analysis can be used for identifying and removing small noise particles from the image. This filtering technique is based on identifying the location of small particles in the binary image, assessing their size, and removing them without affecting the size of other larger particles.
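The small-particle filter described at the end can be sketched as a connected-component size filter (an illustrative NumPy/SciPy version, not the paper's code): label the binary image, measure each component's area, and drop components below a threshold while leaving larger particles untouched:

```python
import numpy as np
from scipy import ndimage as ndi

def remove_small_particles(binary, min_area):
    """Remove connected components smaller than min_area pixels,
    leaving all larger particles exactly as they were."""
    labels, n = ndi.label(binary)
    areas = np.bincount(labels.ravel())  # areas[0] counts the background
    keep = areas >= min_area
    keep[0] = False                      # the background is never a particle
    return keep[labels]

# one real particle (5x5 = 25 px) and one noise speck (2x2 = 4 px)
binary = np.zeros((20, 20), dtype=bool)
binary[2:7, 2:7] = True
binary[12:14, 12:14] = True
cleaned = remove_small_particles(binary, min_area=10)
```

Because whole components are either kept or dropped, the surviving particles keep their exact pixel boundaries, matching the property the abstract emphasizes.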


Visual Information Processing Conference | 2005

Visual enhancement of micro CT bone density images

John S. DaPonte; Michael Clark; Megan Damon; Rebecca Kamins; Thomas Sadowski; Charles Tirrell

The primary goal of this research was to provide image processing support to aid in the identification of those subjects most affected by bone loss when exposed to weightlessness and provide insight into the causes for large variability. Past research has demonstrated that genetically distinct strains of mice exhibit different degrees of bone loss when subjected to simulated weightlessness. Bone loss is quantified by in vivo computed tomography (CT) imaging. The first step in evaluating bone density is to segment gray scale images into separate regions of bone and background. Two of the most common methods for implementing image segmentation are thresholding and edge detection. Thresholding is generally considered the simplest segmentation process, which can be obtained by having a user visually select a threshold using a sliding scale. This is a highly subjective process with great potential for variation from one observer to another. One way to reduce inter-observer variability is to have several users independently set the threshold and average their results, but this is a very time consuming process. A better approach is to apply an objective adaptive technique such as the Ridler-Calvard method. In our study we have concluded that thresholding was better than edge detection, and that pre-processing these images with an iterative deconvolution algorithm prior to adaptive thresholding yields superior visualization when compared with images that have not been pre-processed or images that have been pre-processed with a filter.
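The adaptive Ridler-Calvard (ISODATA) threshold the study preferred iterates a simple fixed point: the threshold moves to the midpoint of the mean gray levels of the two classes it currently separates, until it stops moving. A minimal sketch (the paper's exact implementation is not given):

```python
import numpy as np

def ridler_calvard(image, tol=0.5):
    """ISODATA threshold: iterate t -> midpoint of the foreground
    and background means until the change falls below tol."""
    t = float(image.mean())
    while True:
        fg, bg = image[image > t], image[image <= t]
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# bimodal test image: background gray levels near 50, "bone" near 200
rng = np.random.default_rng(1)
img = rng.normal(50, 10, (64, 64))
img[20:40, 20:40] = rng.normal(200, 10, (20, 20))
t = ridler_calvard(img)
```

Unlike a hand-set slider, the result depends only on the image, which is exactly the inter-observer variability argument the abstract makes.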


Microscopy and Microanalysis | 2014

Microscopy and Team-based Interdisciplinary Materials Research to Achieve 21st Century Skills

Christine Broadbridge; Thomas Sadowski; Jacquelynn Garofano; John S. DaPonte

The Education and Outreach (EO) program is an essential part of the National Science Foundation funded Materials Research Science and Engineering Centers (MRSEC) program. The Center for Research on Interface Structures and Phenomena (CRISP) is a MRSEC housed at Yale and Southern Connecticut State University (SCSU). The overarching goal of CRISP EO is to use interdisciplinary science (e.g. materials science) as a vehicle for enhancing the education of future scientists, educators, K-12 students, parents and the general public. The educational goals and resulting signature programs were designed to optimize integration of the research and educational strengths of CRISP through high impact EO activities. One such program is the MRSEC Initiative for Multidisciplinary Education and Research (MIMER) [1]. The MIMER program provides opportunities for team-based interdisciplinary research experiences to students and teachers by integrating the CRISP research experiences for undergraduates (REU), teachers (RET) and high school fellowship programs. A MIMER team assembles researchers with different backgrounds including a faculty member/CRISP researcher, graduate students and/or post-docs, undergraduates, teachers and high school students. The collaborative and interdisciplinary nature of the MIMER team encourages synergy and fosters the formation of mentoring relationships among team members.


Microscopy and Microanalysis | 2014

Use of the Gabor Filter for Edge Detection in the Analysis of Zinc Oxide Nanowire Images

B. E. Scanley; Thomas Sadowski; Candice Pelligra; M. E. Kreider; Chinedum O. Osuji; Christine Broadbridge

Semi-automated processing of microscopy images can significantly enhance the capacity for objective nanostructure characterization. However, background complexity, intricate object features and overlapping of objects can make this a difficult task. Here we describe use of the Gabor filter [1, 2] to facilitate the identification and outlining of the top surfaces of nanowires of zinc oxide (ZnO), imaged with scanning electron microscopy (SEM). The Gabor filter is a sinusoidal function multiplied by a Gaussian envelope. The sinusoidal shape makes the filter sensitive to spatial frequencies and the Gaussian envelope limits the frequency sensitivity to localized areas of the image. In particular, the Gabor filter enhances lines and edges in a direction perpendicular to the direction of the sinusoid. This sensitivity to linear parts of an image makes it particularly well suited to analysis of the top surfaces of the ZnO nanowires, which have a regular hexagonal shape and bright edges in the SEM images.
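The Gabor filter described here, a sinusoid under a Gaussian envelope, can be sketched in a few lines (an illustrative version, not the authors' pipeline). The magnitude of the complex response peaks on edges perpendicular to the sinusoid's direction:

```python
import numpy as np
from scipy import ndimage as ndi

def gabor_kernel(freq, theta, sigma, size=21):
    """Complex Gabor kernel: a sinusoid of spatial frequency `freq`
    (cycles/pixel) oriented at `theta`, under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(1j * 2.0 * np.pi * freq * x_theta)

# a vertical step edge; theta=0 orients the sinusoid along x,
# so the filter responds to edges perpendicular to that direction
img = np.zeros((41, 41))
img[:, 20:] = 1.0
k = gabor_kernel(freq=0.1, theta=0.0, sigma=4.0)
mag = np.hypot(ndi.convolve(img, k.real), ndi.convolve(img, k.imag))
```

The response magnitude is large along the edge column and essentially zero in the flat regions, which is what makes the filter useful for picking out the bright hexagonal outlines of the nanowires.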


Visual Information Processing Conference | 2006

Animating climate model data

John S. DaPonte; Thomas Sadowski; Paul Thomas

This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Studies (GISS). Animations of output from a climate simulation model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.


Visual Information Processing Conference | 2006

Quantitative confirmation of visual improvements to micro-CT bone density images

John S. DaPonte; Michael Clark; Paul Nelson; Thomas Sadowski; Elizabeth Wood

The primary goal of this research was to investigate the ability of quantitative variables to confirm qualitative improvements of the deconvolution algorithm as a preprocessing step in evaluating micro CT bone density images. The analysis of these types of images is important because they are necessary to evaluate various countermeasures used to reduce or potentially reverse bone loss experienced by some astronauts when exposed to extended weightlessness during space travel. Nine low resolution (17.5 microns) CT bone density image sequences, ranging from 85 to 88 images per sequence, were processed with three preprocessing treatment groups consisting of no preprocessing, preprocessing with a deconvolution algorithm and preprocessing with a Gaussian filter. The quantitative parameters investigated consisted of Bone Volume to Total Volume Ratio, the Structured Model Index, Fractal Dimension, Bone Area Ratio, Bone Thickness Ratio, Euler's Number and the Measure of Enhancement. Trends found in these quantitative variables appear to corroborate the visual improvements observed in the past and suggest which quantitative parameters may be capable of distinguishing between groups that experience bone loss and others that do not.
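Of the quantitative parameters listed, the Bone Volume to Total Volume ratio is the simplest: on a thresholded stack it is just the fraction of voxels classified as bone. A minimal sketch (illustrative, not the study's code):

```python
import numpy as np

def bv_tv(binary_stack):
    """Bone Volume / Total Volume: fraction of voxels marked as bone
    in a thresholded (binary) micro-CT image stack."""
    binary_stack = np.asarray(binary_stack, dtype=bool)
    return binary_stack.sum() / binary_stack.size

# toy stack: 3 slices of 4x4 voxels, 12 of 48 voxels marked as bone
stack = np.zeros((3, 4, 4), dtype=bool)
stack[:, :2, :2] = True
ratio = bv_tv(stack)
```

Because the metric is computed after thresholding, it directly inherits any improvement (or bias) introduced by the preprocessing step being evaluated.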


Northeast Bioengineering Conference | 2005

An approach to microCT image processing

Michael Clark; John S. DaPonte; Thomas Sadowski

In scientific imaging, it is crucial to obtain precise images to facilitate accurate observations for the given application. However, the imaging equipment used to acquire such images often introduces error into the observed image. Therefore, there is a fundamental need to remove the error associated with these images in order to facilitate accurate observations. This study investigates the effectiveness of an image processing technique utilizing an iterative deconvolution algorithm to remove error from microCT images. This technique is applied to several sets of in-vivo microCT scans of mice, and its effectiveness is evaluated by qualitative comparison of the resultant thresholded binary images to thresholded binary images produced by more conventional image processing techniques, namely Gaussian filtering and straight thresholding. Results for this study suggest that iterative deconvolution as a preprocessing step produces superior qualitative results as compared to the more conventional methods tested. This lays the groundwork for future quantitative verification.
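The abstract does not name the iterative deconvolution algorithm; Richardson-Lucy is a common choice for this kind of microscopy preprocessing, and a minimal sketch of it (illustrative only) looks like this:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Richardson-Lucy deconvolution: multiplicative updates that raise
    the Poisson likelihood of the observed image under the given PSF."""
    estimate = np.full(image.shape, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# blur a point source with a Gaussian PSF, then deconvolve it
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()
scene = np.zeros((31, 31))
scene[15, 15] = 1.0
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

After a few dozen iterations the restored peak is markedly sharper than the blurred input, which is the kind of visual improvement the study then assesses via thresholded binary images.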

Collaboration


Dive into Thomas Sadowski's collaborations.

Top Co-Authors

John S. DaPonte (Southern Connecticut State University)
Christine Broadbridge (Southern Connecticut State University)
M. Sawicki (Southern Connecticut State University)
P. Munhutu (Southern Connecticut State University)
Michael Clark (Southern Connecticut State University)
A. Lehman (Southern Connecticut State University)
C. Heyden (Southern Connecticut State University)
D. Day (Southern Connecticut State University)
D. Krishna (Southern Connecticut State University)
E. Garcia (Southern Connecticut State University)