Dmitry B. Goldgof
University of South Florida
Publications
Featured research published by Dmitry B. Goldgof.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009
Rangachar Kasturi; Dmitry B. Goldgof; Padmanabhan Soundararajan; Vasant Manohar; John S. Garofolo; Rachel Bowers; Matthew Boonstra; Valentina N. Korzhova; Jing Zhang
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes lasting resources of a scale and magnitude that will prove extremely useful to the computer vision research community for years to come.
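The framework defines its own metrics and ships scoring software; as a rough illustration only, the sketch below scores one frame's detections against ground-truth boxes by spatial overlap. The box format, the greedy matching, and the 0.5 overlap threshold are assumptions for the example, not the paper's protocol.

```python
# Minimal sketch of frame-level detection scoring by spatial overlap (illustrative only).
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def score_frame(detections, ground_truth, thresh=0.5):
    """Greedily match detections to ground-truth boxes; return (precision, recall)."""
    unmatched_gt = list(ground_truth)
    true_pos = 0
    for det in detections:
        overlaps = [iou(det, gt) for gt in unmatched_gt]
        if overlaps and max(overlaps) >= thresh:
            true_pos += 1
            unmatched_gt.pop(int(np.argmax(overlaps)))  # each annotation matched at most once
    precision = true_pos / len(detections) if detections else 1.0
    recall = true_pos / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# Example: one detected face box scored against one annotated box.
print(score_frame([(10, 10, 50, 60)], [(12, 8, 52, 58)]))
```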
Artificial Intelligence in Medicine | 2001
Lynn M. Fletcher-Heath; Lawrence O. Hall; Dmitry B. Goldgof; F. Reed Murtagh
Tumor segmentation from magnetic resonance (MR) images may aid in tumor treatment by tracking the progress of tumor growth and/or shrinkage. In this paper, we present the first automatic segmentation method that separates non-enhancing brain tumors from healthy tissues in MR images to aid in the task of tracking tumor size over time. The MR feature images used for the segmentation consist of three weighted images (T1, T2, and proton density (PD)) for each axial slice through the head. An initial segmentation is computed using an unsupervised fuzzy clustering algorithm. Then, integrated domain knowledge and image processing techniques contribute to the final tumor segmentation. They are applied under the control of a knowledge-based system. The system knowledge was acquired by training on two patient volumes (14 images). Testing has shown successful tumor segmentations on four patient volumes (31 images). Our results show that we detected all six non-enhancing brain tumors, located tumor tissue in 35 of the 36 ground-truth (radiologist-labeled) slices containing tumor, and successfully separated tumor regions from physically connected CSF regions in all nine slices. Quantitative measurements are promising: correspondence ratios between ground truth and segmented tumor regions ranged between 0.368 and 0.871 per volume, with percent match ranging between 0.530 and 0.909 per volume.
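The abstract reports percent match and correspondence ratio between segmented and radiologist-labeled tumor masks. The sketch below uses one common reading of those overlap measures (percent match as true positives over ground-truth area, correspondence ratio penalizing false positives); the paper's exact formulas may differ, so treat the definitions as assumptions.

```python
# Hedged sketch of two overlap measures between a segmented tumor mask and a
# ground-truth mask; the exact definitions used in the paper may differ.
import numpy as np

def overlap_measures(segmented, ground_truth):
    s = segmented.astype(bool)
    g = ground_truth.astype(bool)
    true_pos = np.logical_and(s, g).sum()     # tumor pixels present in both masks
    false_pos = np.logical_and(s, ~g).sum()   # segmented pixels outside ground truth
    percent_match = true_pos / g.sum()
    correspondence_ratio = (true_pos - 0.5 * false_pos) / g.sum()
    return percent_match, correspondence_ratio

# Toy example on 4x4 binary masks.
seg = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(overlap_measures(seg, gt))
```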
IEEE Transactions on Intelligent Transportation Systems | 2010
Joshua Candamo; Matthew Shreve; Dmitry B. Goldgof; Deborah Sapper; Rangachar Kasturi
Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic behavior recognition, with a focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods include single-person behaviors (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., object left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm weaknesses, potential research directions, and contrasts with commercial capabilities as advertised by manufacturers are discussed. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking.
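Among the core low-level technologies the survey covers, motion detection is the most basic. The sketch below is a generic running-average background-subtraction scheme, included only as an illustration of that building block; it does not reproduce any specific algorithm from the survey, and the frame format, learning rate, and threshold are assumptions.

```python
# Minimal running-average background subtraction; illustrative only.
# `frames` is assumed to be an iterable of grayscale images as 2-D arrays.
import numpy as np

def detect_motion(frames, alpha=0.05, thresh=25.0):
    """Yield a boolean foreground mask per frame using a running-average background."""
    background = None
    for frame in frames:
        frame = frame.astype(np.float64)
        if background is None:
            background = frame.copy()
        mask = np.abs(frame - background) > thresh          # pixels that changed
        background = (1 - alpha) * background + alpha * frame  # slow background update
        yield mask
```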
IEEE Transactions on Medical Imaging | 1993
Chunlin Li; Dmitry B. Goldgof; Lawrence O. Hall
This paper presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain. The system consists of two components: an unsupervised clustering algorithm and an expert system. MR brain data is initially segmented by the unsupervised algorithm; then the expert system locates a landmark tissue or cluster and analyzes it by matching it with a model or searching in it for an expected feature. The landmark tissue location and its analysis are repeated until a tumor is found or all tissues are labeled. The knowledge base contains information on cluster distribution in feature space and tissue models. Since tissue shapes are irregular, their models and matching are specially designed: 1) qualitative tissue models are defined for brain tissues such as white matter; 2) default reasoning is used to match a model with an MR image; that is, if there is no mismatch between a model and an image, they are taken as matched. The system has been tested with 53 slices of MR images acquired at different times by two different scanners. It accurately identifies abnormal slices and provides a partial labeling of the tissues. It provides an accurate, complete labeling of all normal tissues in the absence of large amounts of data nonuniformity, as verified by radiologists. Thus, the system can be used to provide automatic screening of slices for abnormality. It also provides a first step toward the complete description of abnormal images for use in automatic tumor volume determination.
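To give a flavor of matching clusters to tissue models in (T1, T2, PD) feature space, the sketch below assigns each cluster mean to the nearest of a few hypothetical tissue signatures. The signature values and the nearest-mean rule are invented for illustration; the paper uses qualitative models and default reasoning rather than a numeric distance.

```python
# Hedged illustration of labeling clusters against tissue models in feature space.
import numpy as np

# Hypothetical per-tissue (T1, T2, PD) signatures in arbitrary units.
TISSUE_MODELS = {
    "white matter": np.array([0.70, 0.35, 0.55]),
    "gray matter": np.array([0.55, 0.50, 0.65]),
    "csf": np.array([0.20, 0.90, 0.80]),
}

def label_clusters(cluster_means):
    """Assign each cluster mean the tissue model it lies closest to."""
    labels = []
    for mean in cluster_means:
        name = min(TISSUE_MODELS, key=lambda t: np.linalg.norm(mean - TISSUE_MODELS[t]))
        labels.append(name)
    return labels

print(label_clusters([np.array([0.68, 0.33, 0.57]), np.array([0.22, 0.88, 0.79])]))
```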
Fuzzy Sets and Systems | 1998
Tai Wai Cheng; Dmitry B. Goldgof; Lawrence O. Hall
This paper presents a multistage random sampling fuzzy c-means-based clustering algorithm, which significantly reduces the computation time required to partition a data set into c classes. A series of subsets of the full data set is used to create initial cluster centers in order to provide an approximation to the final cluster centers. The quality of the final partitions is equivalent to that of partitions created by fuzzy c-means. The speed-up is normally a factor of 2 to 3, which is especially significant for high-dimensional spaces and large data sets. Examples of the improved speed of the algorithm in two multi-spectral domains, magnetic resonance image segmentation and satellite image segmentation, are given. The results are compared with fuzzy c-means in terms of both the time required and the final resulting partition. Significant speed-up is shown in each example presented in the paper. Further, the convergence properties of fuzzy c-means are preserved.
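The core idea, warm-starting fuzzy c-means (FCM) from centers estimated on sampled subsets, can be sketched as below. The single fixed subset, its size, and the stopping rule are simplifying assumptions; the paper's multistage sampling schedule is more elaborate than this.

```python
# Sketch: estimate cluster centers on a random subset, then run standard FCM on
# the full data from that warm start. Not the paper's exact mrFCM procedure.
import numpy as np

def fcm(data, centers, m=2.0, n_iter=50, eps=1e-5):
    """Standard fuzzy c-means iterations starting from the given centers."""
    for _ in range(n_iter):
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[n, i] = 1 / sum_j (d_ni / d_nj)^(2/(m-1))
        u = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        um = u ** m
        new_centers = (um.T @ data) / um.sum(axis=0)[:, None]
        if np.linalg.norm(new_centers - centers) < eps:
            centers = new_centers
            break
        centers = new_centers
    return centers, u

def sampled_init_fcm(data, c, sample_frac=0.1, rng=np.random.default_rng(0)):
    """Warm-start FCM: rough centers from a random subset, refined on the full data."""
    idx = rng.choice(len(data), size=max(c, int(sample_frac * len(data))), replace=False)
    subset = data[idx]
    rough_centers, _ = fcm(subset, subset[rng.choice(len(subset), c, replace=False)])
    return fcm(data, rough_centers)

centers, memberships = sampled_init_fcm(np.random.default_rng(1).normal(size=(5000, 3)), c=4)
```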
IEEE International Conference on Fuzzy Systems | 2007
Prodip Hore; Lawrence O. Hall; Dmitry B. Goldgof
Recently, several algorithms for clustering large or streaming data sets have been proposed. Most of them address the crisp case of clustering, which cannot be easily generalized to the fuzzy case. In this paper, we propose a simple single-pass (through the data) fuzzy c-means algorithm that uses neither complicated data structures nor complicated data compression techniques, yet produces data partitions comparable to those of fuzzy c-means. We also show that, compared to fuzzy c-means, our single-pass fuzzy c-means clustering algorithm produces excellent speed-ups in clustering, and thus it can be useful even when the data can be fully loaded into memory. Experimental results using five real data sets are provided.
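One way to realize a single pass is to cluster each chunk of data together with a small set of weighted points carried over from earlier chunks. The sketch below condenses each chunk into its cluster centers weighted by accumulated membership mass; this weighting scheme is my reading of the single-pass idea under stated assumptions, not necessarily the paper's exact formulation.

```python
# Hedged sketch of single-pass fuzzy clustering over data chunks.
import numpy as np

def weighted_fcm(data, weights, centers, m=2.0, n_iter=50):
    """Fuzzy c-means in which each point carries a weight in the center update."""
    for _ in range(n_iter):
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        wum = weights[:, None] * u ** m
        centers = (wum.T @ data) / wum.sum(axis=0)[:, None]
    return centers, u

def single_pass_fcm(chunks, c, rng=np.random.default_rng(0)):
    """Cluster a sequence of data chunks in one pass, carrying weighted summaries forward."""
    carried_pts, carried_w, centers = None, None, None
    for chunk in chunks:
        if carried_pts is None:
            data, weights = chunk, np.ones(len(chunk))
            centers = chunk[rng.choice(len(chunk), c, replace=False)]
        else:
            data = np.vstack([chunk, carried_pts])
            weights = np.concatenate([np.ones(len(chunk)), carried_w])
        centers, u = weighted_fcm(data, weights, centers)
        carried_pts = centers.copy()
        carried_w = (weights[:, None] * u).sum(axis=0)  # membership mass per cluster
    return centers

# Example: cluster 10,000 2-D points processed 2,000 at a time.
data = np.random.default_rng(1).normal(size=(10000, 2))
print(single_pass_fcm(np.array_split(data, 5), c=3))
```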
Journal of Burn Care & Rehabilitation | 1999
Pauline S. Powers; Sudeep Sarkar; Dmitry B. Goldgof; C. W. Cruse; Leonid V. Tsap
Current problems in the assessment of scars are discussed. The concept of subjective and objective aspects of scar assessment is introduced. The patient's own view of the scar (the subjective component) can currently be assessed and may be very influential in determining the patient's quality of life, irrespective of the actual physical characteristics of the scar. The objective aspects of the scar, including size, shape, texture, and pliability, are currently difficult to measure. Although the Vancouver Scar Scale has been used as the standard for objective measurements, there are problems with both the validity and reliability of this instrument. Various imaging techniques may permit more reliable and accurate methods for measuring the quantitative aspects of scars.
Computer Vision and Image Understanding | 2001
Min C. Shin; Dmitry B. Goldgof; Kevin W. Bowyer
This paper presents an empirical evaluation methodology for edge detectors. Edge detector performance is measured using a particular edge-based object recognition algorithm as a “higher-level” task. A detector's performance is ranked according to the object recognition performance that it generates. We have used a challenging train and test dataset containing 110 images of jeep-like vehicles. Six edge detectors are compared, and the results suggest that (1) the SUSAN edge detector performs best and (2) the ranking of various edge detectors is different from that found in other evaluations.
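The task-based evaluation amounts to running the same recognition pipeline with each candidate detector and ordering the detectors by downstream accuracy; a minimal sketch follows. The detector and recognizer callables are placeholders, not the paper's implementations.

```python
# Minimal sketch of task-based ranking of edge detectors; callables are placeholders.
def rank_edge_detectors(detectors, recognize, test_images, labels):
    """Return detector names sorted by downstream recognition accuracy, plus the scores."""
    scores = {}
    for name, detect in detectors.items():
        correct = sum(recognize(detect(img)) == lab for img, lab in zip(test_images, labels))
        scores[name] = correct / len(labels)
    return sorted(scores, key=scores.get, reverse=True), scores
```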
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001
Mark W. Powell; Sudeep Sarkar; Dmitry B. Goldgof
We present a methodology for calibrating multiple light source locations in 3D from images. The procedure involves the use of a novel calibration object that consists of three spheres at known relative positions. The process uses intensity images to find the positions of the light sources. We conducted experiments to locate light sources in 51 different positions in a laboratory setting. Our data shows that the vector from a point in the scene to a light source can be measured to within 2.7 ± 4 degrees at α = 0.05 (6 percent relative) of its true direction and within 0.13 ± 0.02 m at α = 0.05 (9 percent relative) of its true magnitude compared to empirically measured ground truth. Finally, we demonstrate how light source information is used for color correction.
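A standard piece of geometry used with shiny calibration spheres is that the direction toward the light is the viewing ray reflected about the surface normal at the observed highlight; rays from several spheres can then be intersected to locate the source in 3D. The sketch below illustrates that reflection step only, under assumed orthographic viewing along -z and a known sphere center and radius; it is not necessarily the paper's exact procedure, which works from intensity images.

```python
# Hedged sketch: light direction from one specular highlight on a sphere.
import numpy as np

def light_ray_from_highlight(highlight_xy, sphere_center, radius,
                             view_dir=np.array([0.0, 0.0, -1.0])):
    """Return (surface point, unit direction toward the light) for one highlight."""
    x, y = highlight_xy
    cx, cy, cz = sphere_center
    dz = np.sqrt(radius**2 - (x - cx) ** 2 - (y - cy) ** 2)  # camera-facing hemisphere
    surface_pt = np.array([x, y, cz + dz])
    normal = (surface_pt - np.asarray(sphere_center)) / radius
    # Reflect the viewing ray about the normal; the result points toward the light.
    to_light = view_dir - 2.0 * np.dot(view_dir, normal) * normal
    return surface_pt, to_light / np.linalg.norm(to_light)

# Toy example: highlight observed near the top of a unit sphere at the origin.
print(light_ray_from_highlight((0.2, 0.0), (0.0, 0.0, 0.0), 1.0))
```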
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995
Art Matheny; Dmitry B. Goldgof
The use of spherical harmonics for rigid and nonrigid shape representation is well known. This paper extends the method to surface harmonics defined on domains other than the sphere and to four-dimensional spherical harmonics. These harmonics enable us to represent shapes that cannot be represented as a global function in spherical coordinates, but can be in other coordinate systems. Prolate and oblate spheroidal harmonics and cylindrical harmonics are examples of surface harmonics that we find useful. Nonrigid shapes are represented as functions of space and time either by including the time dependence as a separate factor or by using four-dimensional spherical harmonics. This paper compares the errors of fitting various surface harmonics to an assortment of synthetic and real data samples, both rigid and nonrigid. In all cases, we use a linear least-squares approach to find the best fit to given range data. It is found that for some shapes there is a variation among geometries in the number of harmonic functions needed to achieve a desired accuracy. In particular, it was found that four-dimensional spherical harmonics provide an improved model of the motion of the left ventricle of the heart.
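Because the fit is linear in the harmonic coefficients, it reduces to an ordinary least-squares solve. The sketch below fits a star-shaped surface r(azimuth, polar) with real spherical harmonics as a simplified instance of that idea; the basis degree, sampling, and SciPy convention are assumptions, and the paper's implementation (which also covers spheroidal, cylindrical, and 4-D harmonics) is more general.

```python
# Hedged sketch: least-squares spherical-harmonic fit to sampled radii.
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar)

def real_sh_basis(L, azimuth, polar):
    """Stack real spherical-harmonic basis functions up to degree L as columns."""
    cols = []
    for l in range(L + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.column_stack(cols)

def fit_surface(radii, azimuth, polar, L=6):
    """Least-squares harmonic coefficients for sampled radii r(azimuth, polar)."""
    A = real_sh_basis(L, azimuth, polar)
    coeffs, *_ = np.linalg.lstsq(A, radii, rcond=None)
    return coeffs, A @ coeffs  # coefficients and reconstructed radii

# Toy example: fit a mildly elongated unit sphere sampled at random directions.
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 500)
po = rng.uniform(0, np.pi, 500)
r = 1.0 + 0.1 * np.cos(po)
coeffs, r_hat = fit_surface(r, az, po)
print("rms fit error:", np.sqrt(np.mean((r - r_hat) ** 2)))
```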