
Publication


Featured research published by Samuel F. Dodge.


Quality of Multimedia Experience | 2016

Understanding how image quality affects deep neural networks

Samuel F. Dodge; Lina J. Karam

Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images cannot be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of four state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.
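
Below is a minimal sketch of the kind of evaluation described in the abstract: a pretrained image classifier is run on images corrupted by increasing amounts of blur, and top-1 accuracy is recorded at each level. The backbone (torchvision's ResNet-50), the distortion levels, and the dataset path are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: measuring how blur and additive noise degrade a pretrained
# classifier. Model choice, distortion levels, and data path are illustrative.
import torch
import torchvision.transforms as T
from torchvision import datasets, models
from torch.utils.data import DataLoader
from PIL import ImageFilter

def make_loader(blur_radius=0.0, noise_sigma=0.0):
    tfms = T.Compose([
        T.Resize(256),
        T.CenterCrop(224),
        T.Lambda(lambda im: im.filter(ImageFilter.GaussianBlur(blur_radius))
                 if blur_radius > 0 else im),                       # blur the PIL image
        T.ToTensor(),
        T.Lambda(lambda x: (x + noise_sigma * torch.randn_like(x)).clamp(0, 1)
                 if noise_sigma > 0 else x),                        # additive Gaussian noise
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    ds = datasets.ImageFolder("val_images/", transform=tfms)        # hypothetical labelled folder
    return DataLoader(ds, batch_size=32, shuffle=False)

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    model.eval().to(device)
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

model = models.resnet50(weights="IMAGENET1K_V1")
for blur in [0, 2, 4, 6]:                                           # blur radius in pixels
    acc = top1_accuracy(model, make_loader(blur_radius=blur))
    print(f"blur={blur}: top-1 accuracy {acc:.3f}")
```

The same loop can be repeated with noise_sigma, contrast scaling, or re-encoded JPEG images to sweep the other distortion types mentioned above.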


International Conference on Computer Communications and Networks | 2017

A Study and Comparison of Human and Deep Learning Recognition Performance under Visual Distortions

Samuel F. Dodge; Lina J. Karam

Deep neural networks (DNNs) achieve excellent performance on standard classification tasks. However, under image quality distortions such as blur and noise, classification accuracy becomes poor. In this work, we compare the performance of DNNs with human subjects on distorted images. We show that, although DNNs perform better than or on par with humans on good quality images, DNN performance is still much lower than human performance on distorted images. We additionally find that there is little correlation in errors between DNNs and human subjects. This could be an indication that the internal representations of images differ between DNNs and the human visual system. These comparisons with human performance could be used to guide future development of more robust DNNs.


IET Biometrics | 2018

Unconstrained ear recognition using deep neural networks

Samuel F. Dodge; Jinane Mounsef; Lina J. Karam

The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as a feature extractor. The extracted features are used by a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on the unconstrained ear recognition datasets AWE and CVLE, as well as on a combined AWE + CVLE dataset. They show that their ensemble achieves the best recognition performance on these datasets compared with DNN feature-extraction-based models and single fine-tuned models.
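
A minimal sketch of the feature-extraction variant described above, assuming torchvision's ResNet-18 as the backbone and scikit-learn's LinearSVC as the shallow classifier; the backbone, classifier, and data layout are illustrative choices rather than the paper's configuration.

```python
# Hypothetical sketch: a pretrained DNN as a fixed feature extractor feeding a
# shallow classifier. Backbone, classifier, and paths are illustrative.
import torch
import torchvision.transforms as T
from torchvision import datasets, models
from torch.utils.data import DataLoader
from sklearn.svm import LinearSVC

tfms = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()        # drop the classification head, keep 512-d features
backbone.eval()

@torch.no_grad()
def extract_features(folder):
    ds = datasets.ImageFolder(folder, transform=tfms)   # hypothetical layout: one folder per identity
    feats, labels = [], []
    for x, y in DataLoader(ds, batch_size=32):
        feats.append(backbone(x))
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

X_train, y_train = extract_features("ears/train")
X_test, y_test = extract_features("ears/test")

clf = LinearSVC().fit(X_train, y_train)                  # shallow classifier on DNN features
print("recognition accuracy:", clf.score(X_test, y_test))
```

Training-set augmentation with small rotations, crops, or brightness changes, as mentioned in the abstract, would simply add transformed copies of each image before feature extraction.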


International Conference on Machine Vision | 2017

Parsing floor plan images

Samuel F. Dodge; Jiu Xu; Björn Stenger

This paper introduces a method for analyzing floor plan images using wall segmentation, object detection, and optical character recognition. We introduce a challenging new real-estate floor plan dataset, R-FP, evaluate different wall segmentation methods, and propose fully convolutional networks (FCNs) for this task. We explore architectures with different pixel-stride values and more compact ones with skipped pooling layers. An FCN-2s with a 2-pixel stride layer achieves state-of-the-art performance, obtaining a mean Intersection-over-Union score of 89.9% on R-FP and 94.4% on the public CVC-FP dataset. Using OCR and object detection, we estimate room sizes. Finally, we show applications in automatic 3D model building and interactive furniture fitting.
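
The mean Intersection-over-Union score quoted above is a standard segmentation measure; the sketch below shows one common way to compute it from integer-labelled prediction and ground-truth maps. The class count and toy label maps are illustrative, not taken from the paper.

```python
# Hypothetical sketch: mean Intersection-over-Union (mIoU) over a set of classes.
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of the same shape, labels in [0, num_classes)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with 3 classes (e.g. background / wall / opening -- illustrative labels)
pred = np.array([[0, 1, 1], [2, 2, 0]])
gt   = np.array([[0, 1, 2], [2, 2, 0]])
print(mean_iou(pred, gt, num_classes=3))
```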


International Conference on Image Processing | 2016

Visual attention quality database for benchmarking performance evaluation metrics

Milind S. Gide; Samuel F. Dodge; Lina J. Karam

With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed. These models are evaluated using performance evaluation metrics that measure how well a predicted map matches eye-tracking data obtained from human observers. Though there are a number of existing performance evaluation metrics, there is no clear consensus on which one is best. This work proposes a subjective study that uses ratings from human observers to evaluate saliency maps computed by existing VA models, based on visually comparing the maps with ground-truth maps obtained from eye-tracking data. The subjective ratings are correlated with the scores obtained from existing metrics as well as from a proposed objective VA performance evaluation metric, using several correlation measures. The correlation results show that the proposed objective VA metric outperforms the existing metrics.
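
A minimal sketch of the final correlation step described above, assuming SciPy's Pearson and Spearman correlations; the score arrays are illustrative stand-ins for per-map objective metric outputs and mean human ratings, not data from the study.

```python
# Hypothetical sketch: correlating objective saliency-metric scores with mean
# subjective ratings over a set of saliency maps.
import numpy as np
from scipy.stats import pearsonr, spearmanr

metric_scores = np.array([0.61, 0.74, 0.58, 0.80, 0.69])   # one objective score per saliency map
human_ratings = np.array([2.9, 3.8, 2.5, 4.2, 3.4])        # mean observer rating per map

plcc, _ = pearsonr(metric_scores, human_ratings)    # linear correlation
srocc, _ = spearmanr(metric_scores, human_ratings)  # rank-order correlation
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```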


International Conference on Image Processing | 2012

Attentive gesture recognition

Samuel F. Dodge; Lina J. Karam

This paper presents a novel method for static gesture recognition based on visual attention. Our proposed method makes use of a visual attention model to automatically select points that correspond to fixation points of the human eye. Gesture recognition is then performed using the determined visual attention fixation points. For this purpose, shape context descriptors are used to compare the sparse fixation points of gestures for classification. Simulation results are presented in order to illustrate the performance of the proposed perceptual-based attentive gesture recognition method. The proposed method not only helps in the development of more natural user-centric interactive interfaces but is also able to achieve a 96.42% classification accuracy on the Triesch database of hand postures, which is superior to other methods presented in the literature.


Proceedings of SPIE | 2014

An evaluation of attention models for use in SLAM

Samuel F. Dodge; Lina J. Karam

In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
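
A minimal sketch of a keypoint-repeatability check of the kind used in such evaluations: detect keypoints in an image and in a transformed copy, map the originals through the known transform, and count how many land near a detection. The detector (OpenCV's FAST), rotation angle, tolerance, and input path are illustrative assumptions.

```python
# Hypothetical sketch: keypoint repeatability under a known rotation.
import cv2
import numpy as np

def repeatability(img, angle_deg=15.0, tol=3.0):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)   # known ground-truth transform
    warped = cv2.warpAffine(img, M, (w, h))

    detector = cv2.FastFeatureDetector_create()
    kps_ref = np.float32([k.pt for k in detector.detect(img, None)])
    kps_wrp = np.float32([k.pt for k in detector.detect(warped, None)])
    if len(kps_ref) == 0 or len(kps_wrp) == 0:
        return 0.0

    # Project reference keypoints into the warped image and look for a nearby detection.
    ones = np.ones((len(kps_ref), 1), dtype=np.float32)
    proj = (M @ np.hstack([kps_ref, ones]).T).T
    dists = np.linalg.norm(proj[:, None, :] - kps_wrp[None, :, :], axis=2)
    repeated = (dists.min(axis=1) < tol).sum()
    return repeated / len(kps_ref)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
print("repeatability:", repeatability(img))
```

For a saliency model, the detector call would be replaced by taking local maxima of the saliency map as interest points; the repeatability measure itself stays the same.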


IEEE Transactions on Image Processing | 2018

Visual Saliency Prediction Using a Mixture of Deep Neural Networks

Samuel F. Dodge; Lina J. Karam


International Conference on Computer Vision | 2017

Can the Early Human Visual System Compete with Deep Neural Networks?

Samuel F. Dodge; Lina J. Karam


arXiv: Computer Vision and Pattern Recognition | 2016

The Effect of Distortions on the Prediction of Visual Attention

Milind S. Gide; Samuel F. Dodge; Lina J. Karam

Collaboration


Dive into Samuel F. Dodge's collaborations.

Top Co-Authors

Lina J. Karam (Arizona State University)

Milind S. Gide (Arizona State University)

Jinane Mounsef (Arizona State University)