Publications
Featured research published by Noel C. F. Codella.
international conference on machine learning | 2015
Noel C. F. Codella; Junjie Cai; Mani Abedini; Rahil Garnavi; Alan Halpern; John R. Smith
This work presents an approach for melanoma recognition in dermoscopy images that combines deep learning, sparse coding, and support vector machine (SVM) learning algorithms. One beneficial aspect of the proposed approach is that unsupervised learning within the domain, combined with feature transfer from the domain of natural photographs, eliminates the need for annotated data in the target task to learn good features. The applied feature transfer also allows the system to draw analogies between observations in dermoscopic images and observations in the natural world, mimicking the process clinical experts themselves employ to describe patterns in skin lesions. To evaluate the methodology, performance is measured on a dataset obtained from the International Skin Imaging Collaboration, containing 2624 clinical cases: melanoma (334), atypical nevi (144), and benign lesions (2146). The approach is compared to the prior state-of-the-art method on this dataset. Two-fold cross-validation is performed 20 times for evaluation (40 total experiments), and two discrimination tasks are examined: 1) melanoma vs. all non-melanoma lesions, and 2) melanoma vs. atypical lesions only. The presented approach achieves 93.1% accuracy (94.9% sensitivity, 92.8% specificity) for the first task, and 73.9% accuracy (73.8% sensitivity, 74.3% specificity) for the second task. In comparison, prior state-of-the-art ensemble modeling approaches alone yield 91.2% accuracy (93.0% sensitivity, 91.0% specificity) for the first task, and 71.5% accuracy (72.7% sensitivity, 68.9% specificity) for the second. Differences in performance were statistically significant (p < 0.05), suggesting the proposed approach is an effective improvement over the prior state-of-the-art.
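The multi-model fusion idea in the abstract can be sketched minimally as a late fusion of per-model decision scores. The averaging rule, model names, and scores below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical late-fusion sketch: average aligned per-model melanoma
# probability scores and threshold the result. All values are toy data.

def fuse_scores(score_lists, threshold=0.5):
    """Average per-case scores across models, then classify."""
    n_models = len(score_lists)
    n_cases = len(score_lists[0])
    fused = [sum(scores[i] for scores in score_lists) / n_models
             for i in range(n_cases)]
    return [int(s >= threshold) for s in fused], fused

# Scores from three hypothetical models (deep net, sparse coding, SVM)
deep = [0.9, 0.2, 0.6]
sparse = [0.8, 0.1, 0.4]
svm = [0.7, 0.3, 0.5]
labels, fused = fuse_scores([deep, sparse, svm])
```

In practice each model's scores would first be calibrated to a common range before averaging.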
Journal of the American Academy of Dermatology | 2018
Michael A. Marchetti; Noel C. F. Codella; Stephen W. Dusza; David A. Gutman; Brian Helba; Aadi Kalloo; Nabin K. Mishra; Cristina Carrera; M. Emre Celebi; Jennifer DeFazio; Natalia Jaimes; Ashfaq A. Marghoob; Elizabeth A. Quigley; Alon Scope; Oriol Yélamos; Allan C. Halpern
IBM Journal of Research and Development | 2015
Mani Abedini; Noel C. F. Codella; Jonathan H. Connell; Rahil Garnavi; Michele Merler; Sharath Pankanti; John R. Smith; Tanveer Fathima Syeda-Mahmood
international conference on multimedia and expo | 2012
Noel C. F. Codella; Apostol Natsev; Gang Hua; Matthew L. Hill; Liangliang Cao; Leiguang Gong; John R. Smith
international conference on information and communication security | 2011
Noel C. F. Codella; Gang Hua; Apostol Natsev; John R. Smith
Applications of Artificial Intelligence II | 1985
Noel C. F. Codella; John R. Smith
Background Computer vision may aid in melanoma detection. Objective We sought to compare the melanoma diagnostic accuracy of computer algorithms to that of dermatologists using dermoscopic images. Methods We conducted a cross‐sectional study using 100 randomly selected dermoscopic images (50 melanomas, 44 nevi, and 6 lentigines) from an international computer vision melanoma challenge dataset (n = 379), along with individual algorithm results from 25 teams. We used 5 methods (nonlearned and machine learning) to combine individual automated predictions into “fusion” algorithms. In a companion study, 8 dermatologists classified the lesions in the 100 images as either benign or malignant. Results The average sensitivity and specificity of dermatologists in classification were 82% and 59%, respectively. At 82% sensitivity, dermatologist specificity was similar to the top challenge algorithm (59% vs. 62%, P = .68) but lower than the best‐performing fusion algorithm (59% vs. 76%, P = .02). Receiver operating characteristic area of the top fusion algorithm was greater than the mean receiver operating characteristic area of dermatologists (0.86 vs. 0.71, P = .001). Limitations The dataset lacked the full spectrum of skin lesions encountered in clinical practice, particularly banal lesions. Readers and algorithms were not provided clinical data (e.g., age or lesion history/symptoms). Results obtained using our study design cannot be extrapolated to clinical practice. Conclusion Deep learning computer vision systems classified melanoma dermoscopy images with accuracy that exceeded some but not all dermatologists.
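The "fusion" step described above, combining many teams' predictions and scoring them by sensitivity and specificity, can be sketched as follows. The unweighted-average rule and all scores are toy assumptions; the study evaluated five different combination methods:

```python
# Hedged sketch: fuse multiple algorithms' malignancy scores by
# unweighted averaging, then compute sensitivity and specificity
# against ground truth. All data are illustrative toy values.

def sens_spec(scores, labels, threshold=0.5):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Toy predictions from three hypothetical teams on five lesions
team_scores = [
    [0.9, 0.8, 0.3, 0.2, 0.6],
    [0.7, 0.9, 0.4, 0.1, 0.5],
    [0.8, 0.7, 0.2, 0.3, 0.7],
]
truth = [1, 1, 0, 0, 1]  # 1 = melanoma, 0 = benign
fused = [sum(col) / len(col) for col in zip(*team_scores)]
sens, spec = sens_spec(fused, truth)
```

Sweeping the threshold over the fused scores is what produces the receiver operating characteristic curve compared in the study.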
medical image computing and computer assisted intervention | 2014
Noel C. F. Codella; Jonathan H. Connell; Sharath Pankanti; Michele Merler; John R. Smith
In this work, we study the performance of a two-stage ensemble visual machine learning framework for classification of medical images. In the first stage, models are built for subsets of features and data, and in the second stage, models are combined. We demonstrate the performance of this framework in four contexts: 1) the public ImageCLEF (Cross Language Evaluation Forum) 2013 medical modality recognition benchmark, 2) echocardiography view and mode recognition, 3) dermatology disease recognition across two datasets, and 4) a broad medical image dataset, merged from multiple data sources into a collection of 158 categories covering both general and specific medical concepts—including modalities, body regions, views, and disease states. In the first context, the presented system achieves state-of-the-art performance of 82.2% multiclass accuracy. In the second context, the system attains 90.48% multiclass accuracy. In the third, state-of-the-art performance of 90% specificity and 90% sensitivity is obtained on a small standardized dataset of 200 images using a leave-one-out strategy. For a larger dataset of 2,761 images, 95% specificity and 98% sensitivity is obtained on a 20% held-out test set. Finally, in the fourth context, the system achieves sensitivity and specificity of 94.7% and 98.4%, respectively, demonstrating the ability to generalize over domains.
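The two-stage structure described in the abstract can be illustrated with a deliberately simple stand-in: stage 1 trains one trivial model per feature subset, and stage 2 combines their outputs by majority vote. The threshold "models", feature values, and voting rule are assumptions for illustration, not the paper's components:

```python
# Minimal two-stage ensemble sketch (illustrative, not the paper's code):
# stage 1 fits one threshold classifier per feature column;
# stage 2 fuses their predictions by majority vote.

def train_threshold_model(xs, ys):
    """Pick the threshold on one feature that maximizes training accuracy."""
    best_t, best_acc = None, -1.0
    for t in xs:
        acc = sum(int(x >= t) == y for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def majority_vote(predictions):
    """Combine per-model prediction lists case by case."""
    return [int(sum(col) > len(col) / 2) for col in zip(*predictions)]

# Two toy feature columns for four samples, with binary labels
features = [[0.1, 0.4, 0.8, 0.9], [0.2, 0.3, 0.7, 0.6]]
labels = [0, 0, 1, 1]
thresholds = [train_threshold_model(col, labels) for col in features]  # stage 1
preds = [[int(x >= t) for x in col] for col, t in zip(features, thresholds)]
fused = majority_vote(preds)  # stage 2
```

The point of the two-stage design is that stage-1 models can be trained independently (and in parallel) on different feature/data subsets, while stage 2 only has to learn how to combine their outputs.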
international conference on multimedia and expo | 2013
Liangliang Cao; Leiguang Gong; John R. Kender; Noel C. F. Codella; John R. Smith
In this study, we present a system for video event classification that generates a temporal pyramid of static visual semantics using minimum-value, maximum-value, and average-value aggregation techniques. Kernel optimization and model subspace boosting are then applied to customize the pyramid for each event. SVM models are independently trained for each level in the pyramid using kernel selection according to 3-fold cross-validation. Kernels that both enforce static temporal order and permit temporal alignment are evaluated. Model subspace boosting is used to select the best combination of pyramid levels and aggregation techniques for each event. The NIST TRECVID Multimedia Event Detection (MED) 2011 dataset was used for evaluation. Results demonstrate that kernel optimization using both temporally static and dynamic kernels together achieves better performance than any one particular method alone. In addition, model subspace boosting reduces the size of the model by 80%, while maintaining 96% of the performance gain.
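The temporal-pyramid aggregation idea can be sketched as follows: per-frame semantic scores are split into progressively finer segments, and each segment is summarized by its min, max, and mean. The two-level layout and frame scores below are illustrative assumptions:

```python
# Sketch of temporal-pyramid aggregation: level 0 summarizes the whole
# clip; level 1 summarizes each half; each segment yields (min, max, mean).
# Frame scores are made-up values for one semantic concept.

def aggregate(segment):
    return (min(segment), max(segment), sum(segment) / len(segment))

def temporal_pyramid(frame_scores, levels=2):
    features = []
    for level in range(levels):
        n_segments = 2 ** level
        seg_len = len(frame_scores) // n_segments
        for i in range(n_segments):
            seg = frame_scores[i * seg_len:(i + 1) * seg_len]
            features.extend(aggregate(seg))
    return features

scores = [0.2, 0.4, 0.6, 0.8]  # concept score over four frames
feats = temporal_pyramid(scores)  # 3 features x 3 segments = 9 values
```

The fixed-length feature vector this produces is what the per-level SVMs would consume; the paper's subspace boosting then selects which levels and aggregation types to keep per event.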
Proceedings of SPIE | 2016
Noel C. F. Codella; Mehdi Moradi; Matt Matasar; Tanveer Syeda-Mahmood; John R. Smith
The entire Earth surface has been documented with satellite imagery. The amount of data continues to grow as higher resolutions and temporal information become available. With this increasing amount of surface and temporal data, recognition, segmentation, and event detection in satellite images with a highly scalable system become more and more desirable. In this paper, a semantic taxonomy is constructed for the land-cover classification of satellite images. Both the training and running of the classifiers are implemented in a distributed Hadoop computing platform. Publicly available high-resolution datasets were collected and divided into tiles of fixed dimensions as training data. The training data was manually indexed into the semantic taxonomy categories, such as "Vegetation", "Building", and "Pavement". A scalable modeling system implemented in the Hadoop MapReduce framework is used for training the classifiers and performing subsequent image classification. A separate, larger test dataset of the San Diego region, acquired from Microsoft Bing Maps, was used to demonstrate the efficacy of our system at large scale. The presented methodology of land-cover recognition provides a scalable solution for automatic satellite imagery analysis, especially when GIS data is not readily available, or when surface change may occur due to catastrophic events such as flooding, hurricanes, and snow storms.
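The tiling step described above, cutting large imagery into fixed-size tiles so each can be classified independently (e.g., by parallel MapReduce workers), can be sketched as follows. The toy grid and tile size are illustrative assumptions:

```python
# Hedged sketch of fixed-dimension tiling: split a large image
# (represented here as a 2-D list of pixel values) into non-overlapping
# square tiles for independent classification.

def tile_image(image, tile_size):
    rows, cols = len(image), len(image[0])
    tiles = []
    for r in range(0, rows - tile_size + 1, tile_size):
        for c in range(0, cols - tile_size + 1, tile_size):
            tiles.append([row[c:c + tile_size]
                          for row in image[r:r + tile_size]])
    return tiles

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 grid
tiles = tile_image(image, 2)  # four 2x2 tiles
```

In a MapReduce setting, each tile (keyed by its grid position) would be emitted to a mapper running the land-cover classifier, and the reduce phase would stitch per-tile labels back into a map.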
acm multimedia | 2014
Felix X. Yu; Liangliang Cao; Michele Merler; Noel C. F. Codella; Tao Chen; John R. Smith; Shih-Fu Chang
There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.
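One of the classes named above, single-linkage region growing, can be illustrated with a small sketch: a pixel joins the region when it neighbors a region pixel and differs from it by at most a tolerance. The grid, seed, and tolerance are illustrative assumptions, not code from the surveyed techniques:

```python
# Illustrative single-linkage region growing on a toy grayscale grid:
# starting from a seed pixel, 4-connected neighbors are absorbed when
# their intensity differs from the current pixel by at most `tol`.

def region_grow(image, seed, tol=10):
    rows, cols = len(image), len(image[0])
    region = {seed}
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - image[r][c]) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

img = [
    [10, 12, 90],
    [11, 13, 95],
    [80, 85, 92],
]
reg = region_grow(img, (0, 0))  # grows over the dark top-left patch
```

Hybrid and centroid linkage schemes differ mainly in the joining criterion: comparing against edge evidence or against the running mean of the region rather than a single neighboring pixel.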