Publication


Featured research published by Matthew Shreve.


IEEE Transactions on Intelligent Transportation Systems | 2010

Understanding Transit Scenes: A Survey on Human Behavior-Recognition Algorithms

Joshua Candamo; Matthew Shreve; Dmitry B. Goldgof; Deborah Sapper; Rangachar Kasturi

Visual surveillance is an active research topic in image processing. Transit systems are actively seeking new or improved ways to use technology to deter and respond to accidents, crime, suspicious activities, terrorism, and vandalism. Human behavior-recognition algorithms can be used proactively for prevention of incidents or reactively for investigation after the fact. This paper describes the current state-of-the-art image-processing methods for automatic behavior recognition, with a focus on the surveillance of human activities in the context of transit applications. The main goal of this survey is to provide researchers in the field with a summary of progress achieved to date and to help identify areas where further research is needed. This paper provides a thorough description of the research on relevant human behavior-recognition methods for transit surveillance. Recognition methods cover single-person behaviors (e.g., loitering), multiple-person interactions (e.g., fighting and personal attacks), person-vehicle interactions (e.g., vehicle vandalism), and person-facility/location interactions (e.g., objects left behind and trespassing). A list of relevant behavior-recognition papers is presented, including behaviors, data sets, implementation details, and results. In addition, algorithm weaknesses and potential research directions are discussed, and the methods are contrasted with commercial capabilities as advertised by manufacturers. This paper also provides a summary of literature surveys and developments of the core technologies (i.e., low-level processing techniques) used in visual surveillance systems, including motion detection, classification of moving objects, and tracking.
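The survey names motion detection as the first of the core low-level technologies. As a loose, generic illustration (not a method from any surveyed paper), a foreground mask of moving objects can be produced with OpenCV's MOG2 background subtractor; the input filename is hypothetical:

```python
# A loose illustration of the motion-detection stage named among the
# survey's core technologies, using OpenCV's MOG2 background subtractor;
# generic tooling, not a method from any surveyed paper.
import cv2

cap = cv2.VideoCapture("transit_camera.mp4")  # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground pixels = moving objects
    # Downstream stages would classify and track the moving blobs.
cap.release()
```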


Face and Gesture 2011 | 2011

Macro- and micro-expression spotting in long videos using spatio-temporal strain

Matthew Shreve; Sridhar Godavarthy; Dmitry B. Goldgof; Sudeep Sarkar

We propose a method for the automatic spotting (temporal segmentation) of facial expressions in long videos containing both macro- and micro-expressions. The method utilizes the strain induced in the facial skin by the non-rigid motion that occurs during expressions. The strain magnitude is calculated using the central difference method over a robust, dense optical flow field observed in several regions (chin, mouth, cheek, forehead) on each subject's face. This new approach is able to successfully detect and distinguish between large expressions (macro) and rapid, localized expressions (micro). Extensive testing was completed on a dataset containing 181 macro-expressions and 124 micro-expressions. The dataset consists of 56 videos collected at USF, 6 videos from the Canal-9 political debates, and 3 low-quality videos found on the internet. A spotting accuracy of 85% was achieved for macro-expressions, and 74% of all micro-expressions were spotted.
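As a rough sketch of the strain computation described above, the following assumes OpenCV's Farneback dense optical flow as a stand-in for the paper's robust flow method and approximates the spatial derivatives of the flow with central differences (np.gradient); it is illustrative, not the authors' implementation:

```python
# A minimal sketch of optical-strain magnitude computation, assuming
# Farneback flow as a stand-in for the paper's robust flow method.
import cv2
import numpy as np

def strain_magnitude(prev_gray, next_gray):
    # Dense optical flow field (u, v) between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]
    # np.gradient uses central differences in the interior.
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    # Infinitesimal strain tensor components.
    e_xx = du_dx
    e_yy = dv_dy
    e_xy = 0.5 * (du_dy + dv_dx)
    # Per-pixel strain magnitude.
    return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)
```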


Workshop on Applications of Computer Vision | 2009

Towards macro- and micro-expression spotting in video using strain patterns

Matthew Shreve; Sridhar Godavarthy; Vasant Manohar; Dmitry B. Goldgof; Sudeep Sarkar

This paper presents a novel method for the automatic spotting (temporal segmentation) of facial expressions in long videos containing continuous and changing expressions. The method utilizes the strain induced in the facial skin by the non-rigid motion that occurs during expressions. The strain magnitude is calculated using the central difference method over a robust, dense optical flow field of each subject's face. Testing has been done on 2 datasets (which together include 100 macro-expressions) and promising results have been obtained. The method is robust to several common drawbacks found in automatic facial expression segmentation, including moderate in-plane and out-of-plane motion. Additionally, the method has been modified to work with videos containing micro-expressions, which are detected by exploiting their smaller spatial and temporal extent. A subject's face is divided into sub-regions (mouth, cheeks, forehead, and eyes), and facial strain is calculated for each of these regions; as shown in the sketch below, strain patterns in individual regions are used to identify the subtle changes that facilitate the detection of micro-expressions.
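Building on the strain map from the previous sketch, a minimal version of the per-region analysis might look as follows; the region boxes are hypothetical placeholders for the landmark-derived sub-regions used in the paper:

```python
# A hedged sketch of per-region strain analysis, reusing the
# strain_magnitude() helper sketched above. The region boxes are
# hypothetical placeholders assuming a roughly 240x240 face crop;
# the paper derives its regions from the detected face.
REGIONS = {
    "forehead": (slice(0, 60), slice(40, 200)),
    "left_cheek": (slice(90, 150), slice(20, 90)),
    "right_cheek": (slice(90, 150), slice(150, 220)),
    "mouth": (slice(150, 210), slice(70, 170)),
}

def region_strain(strain_map):
    # Mean strain per facial sub-region: a spike confined to one region
    # over a short temporal window suggests a micro-expression, while
    # large strain across many regions suggests a macro-expression.
    return {name: float(strain_map[rows, cols].mean())
            for name, (rows, cols) in REGIONS.items()}
```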


Image and Vision Computing | 2014

Automatic expression spotting in videos

Matthew Shreve; Jesse Brizzi; Sergiy Fefilatyev; Timur Luguev; Dmitry B. Goldgof; Sudeep Sarkar

In this paper, we propose a novel solution to the problem of segmenting macro- and micro-expression frames (or retrieving the expression intervals) in video sequences, which is a prior step for many expression recognition algorithms. The proposed method exploits the non-rigid facial motion that occurs during facial expressions by capturing the optical strain corresponding to the elastic deformation of facial skin tissue. The method is capable of spotting both macro-expressions, which are typically associated with expressed emotions, and rapid micro-expressions, which are typically associated with semi-suppressed macro-expressions. We test our algorithm on several datasets, including a newly released hour-long video of two subjects recorded in a natural setting that includes spontaneous facial expressions. We also report results on a dataset that contains 75 feigned macro-expressions and 37 feigned micro-expressions. We achieve over a 75% true positive rate at a 1% false positive rate for macro-expressions, and a nearly 80% true positive rate for spotting micro-expressions at a 0.3% false positive rate.
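One simple way to turn a per-frame strain signal into expression intervals is a threshold-plus-duration rule; the sketch below assumes such a rule purely for illustration and does not reproduce the paper's actual decision logic or threshold values:

```python
# A minimal sketch of temporal spotting from a per-frame strain signal,
# assuming a simple threshold rule; the threshold and minimum duration
# are illustrative, not the paper's.
import numpy as np

def spot_intervals(strain_signal, threshold, min_len=3):
    # Return (start, end) frame intervals where strain stays above threshold.
    above = np.asarray(strain_signal) > threshold
    intervals, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                intervals.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_len:
        intervals.append((start, len(above) - 1))
    return intervals
```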


International Conference on Biometrics: Theory, Applications and Systems | 2007

Face Recognition by Multi-Frame Fusion of Rotating Heads in Videos

Shaun J. Canavan; Michael P. Kozak; Yong Zhang; John R. Sullins; Matthew Shreve; Dmitry B. Goldgof

This paper presents a face recognition study that implicitly utilizes the 3D information in 2D video sequences through multi-sample fusion. The approach is based on the hypothesis that continuous and coherent intensity variations in video frames caused by a rotating head can provide information similar to that of explicit shapes or range images. The fusion was done at the image level to prevent information loss. Experiments were carried out using a data set of over 100 subjects, and promising results were obtained: (1) under regular indoor lighting conditions, the rank-one recognition rate increased from 91% using a single frame to 100% using 7-frame fusion; (2) under strong shadow conditions, the rank-one recognition rate increased from 63% using a single frame to 85% using 7-frame fusion.
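As a hedged sketch of image-level fusion, consecutive frames of the rotating head can be concatenated into a single feature vector before matching, preserving the coherent intensity variation across frames; the nearest-neighbor matcher below is a stand-in, not the paper's recognizer:

```python
# A hedged sketch of image-level multi-frame fusion. The raw-pixel
# nearest-neighbor matcher is a stand-in for the paper's classifier.
import numpy as np

def fuse_frames(frames):
    # frames: list of equally sized grayscale arrays, e.g. 7 video frames;
    # concatenation keeps all pixel information, fusing at the image level.
    return np.concatenate([f.astype(np.float32).ravel() for f in frames])

def rank1_match(probe_frames, gallery):
    # gallery: {subject_id: fused_vector}; return the closest subject.
    probe = fuse_frames(probe_frames)
    return min(gallery, key=lambda sid: np.linalg.norm(gallery[sid] - probe))
```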


IEEE Transactions on Intelligent Transportation Systems | 2017

Segmentation- and Annotation-Free License Plate Recognition With Deep Localization and Failure Identification

Orhan Bulan; Vladimir Kozitsky; Palghat S. Ramesh; Matthew Shreve

Automated license plate recognition (ALPR) is essential in several roadway imaging applications. For ALPR systems deployed in the United States, variation across jurisdictions in character width and spacing, together with the noise sources present in LP images (e.g., heavy shadows, non-uniform illumination, varied optical geometries, and poor contrast), challenges the recognition accuracy and scalability of ALPR systems. Font and plate-layout variation across jurisdictions further complicates proper character segmentation and increases the manual annotation required to train classifiers for each state, which can result in excessive operational overhead and cost. In this paper, we propose a new ALPR workflow that includes novel methods for segmentation- and annotation-free ALPR, as well as improved plate localization and automated failure identification. Our proposed workflow begins by localizing the LP region in the captured image using a two-stage approach that first extracts a set of candidate regions using a weak sparse network of winnows (SNoW) classifier and then filters them using a strong convolutional neural network (CNN) classifier in the second stage. Images that fail a primary confidence test for plate localization are further classified to identify localization failures, such as LP not present, LP too bright, LP too dark, or no vehicle found. In the localized plate region, we perform segmentation and optical character recognition (OCR) jointly, using a probabilistic inference method based on hidden Markov models (HMMs) in which the most likely code sequence is determined by applying the Viterbi algorithm. To reduce the manual annotation required for training OCR classifiers, we propose using either artificially generated synthetic LP images or character samples acquired by trained ALPR systems already operating at other sites. The performance gap due to differences between the training and target domain distributions is minimized using unsupervised domain adaptation. We evaluated the performance of our proposed methods on LP images captured in several US jurisdictions under realistic conditions.
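The joint segmentation/OCR step rests on Viterbi decoding over an HMM. A minimal generic Viterbi sketch follows; the emission and transition models are assumed inputs here, whereas the paper trains them on real plate data:

```python
# A generic Viterbi decoder for the joint segmentation/OCR idea: given
# per-step character log-likelihoods and an HMM transition model, recover
# the most likely code sequence. The model matrices are assumed inputs.
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    # log_emit: (T, S) per-step log-likelihoods over S states;
    # log_trans: (S, S) log transition matrix; log_init: (S,) log prior.
    T, S = log_emit.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans  # (prev, cur) scores
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        dp[t] = scores.max(axis=0) + log_emit[t]
    # Backtrack from the best final state to recover the full path.
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```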


Pattern Recognition | 2016

Active cleaning of label noise

Rajmadhan Ekambaram; Sergiy Fefilatyev; Matthew Shreve; Kurt Kramer; Lawrence O. Hall; Dmitry B. Goldgof; Rangachar Kasturi

Mislabeled examples in the training data can severely affect the performance of supervised classifiers. In this paper, we present an approach that removes mislabeled examples from a dataset by selecting suspicious examples as targets for inspection. We show that the large-margin and soft-margin principles used in support vector machines (SVMs) have the characteristic of capturing mislabeled examples as support vectors. Experimental results on two character recognition datasets show that one-class and two-class SVMs are able to capture around 85% and 99% of label-noise examples, respectively, as their support vectors. We also propose a new method that iteratively builds two-class SVM classifiers on the non-support-vector examples from the training data, after which an expert manually verifies the support vectors, ranked by classification score, to identify any mislabeled examples. Experimental results on four datasets show that this method reduces the number of examples to be reviewed and is largely independent of its parameters. Thus, by (re-)examining the labels of the selected support vectors, most noise can be removed, which can be quite advantageous when rapidly building a labeled dataset.

Highlights:
A novel method for label-noise removal from data is introduced.
It significantly reduces the required number of examples to be reviewed.
Support vectors of an SVM classifier can capture around 99% of label-noise examples.
A two-class SVM captures more label-noise examples than a one-class SVM classifier.
Combining one-class and two-class SVMs produces a marginal improvement.
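The core observation, that mislabeled points tend to end up as support vectors of a soft-margin SVM, can be sketched with scikit-learn as a stand-in; the function name and parameter values below are illustrative:

```python
# A hedged sketch of the support-vector inspection idea, using
# scikit-learn's SVC as a stand-in for the paper's SVM setup.
from sklearn.svm import SVC

def suspicious_indices(X, y, C=1.0):
    # Soft-margin SVM: points on or inside the margin, including most
    # mislabeled points, become support vectors.
    clf = SVC(kernel="linear", C=C).fit(X, y)
    return clf.support_  # indices of training examples to review first
```

In the paper's iterative variant, the classifier would then be retrained on the non-support-vector examples while an expert reviews the flagged labels.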


International Conference on Pattern Recognition | 2008

Finite element modeling of facial deformation in videos for computing strain pattern

Vasant Manohar; Matthew Shreve; Dmitry B. Goldgof; Sudeep Sarkar

We present a finite element modeling based approach to compute the strain patterns caused by facial deformation during expressions in videos. A sparse motion field computed with a robust optical flow method drives the FE model. While the geometry of the model is generic, the material constants associated with an individual's facial skin are learned at a coarse level sufficient for accurate strain map computation. Experimental results using the computational strategy presented in this paper emphasize the uniqueness and stability of strain maps under adverse data conditions (shadow lighting and face camouflage), making strain a promising feature for image analysis tasks that can benefit from such auxiliary information.
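As a loose toy illustration of the idea of propagating a sparse motion field into a dense one, a simple Laplacian (spring) relaxation can stand in for the paper's full finite element facial model; the constraint format below is hypothetical:

```python
# A toy stand-in for FE-model-driven densification: sparse optical-flow
# displacements are held fixed while the rest of the field is relaxed
# toward a smooth (Laplace) solution. Not the paper's FE formulation.
import numpy as np

def propagate_displacements(h, w, known, iters=500):
    # known: {(row, col): displacement}; image border held at zero.
    u = np.zeros((h, w))
    for _ in range(iters):
        # Jacobi-style relaxation: each interior point moves toward
        # the average of its four neighbors.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        for (r, c), d in known.items():
            u[r, c] = d  # re-impose the sparse flow constraints
    return u
```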


Computer Analysis of Images and Patterns | 2011

Evaluation of facial reconstructive surgery on patients with facial palsy using optical strain

Matthew Shreve; Neeha Jain; Dmitry B. Goldgof; Sudeep Sarkar; Walter G. Kropatsch; Chieh-Han John Tzou; Manfred Frey

We explore marker-less tracking methods for the purpose of evaluating the efficacy of facial reconstructive surgery on patients with facial palsies. After experimenting with several optical flow methods, we choose an approach that yields less than 2 pixels of tracking error for 15 markers tracked on the face. A novel method is presented that utilizes the non-rigid deformation observed in facial skin tissue to visualize the severity of facial paralysis. Results are given on a dataset containing three videos of an individual, recorded with a standard-definition camera before and after undergoing facial reconstructive surgery over a period of three years.
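Among the optical flow family suitable for this kind of marker-less point tracking, pyramidal Lucas-Kanade is a common choice; the sketch below uses OpenCV's implementation with illustrative parameter values, not necessarily the paper's chosen configuration:

```python
# A minimal sketch of marker-less point tracking with pyramidal
# Lucas-Kanade; window size and pyramid depth are illustrative.
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points):
    # points: (N, 1, 2) float32 array of facial locations to follow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None,
        winSize=(21, 21), maxLevel=3)
    # Keep only the points that were tracked successfully.
    return next_pts[status.ravel() == 1]
```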


International Conference on Biometrics: Theory, Applications and Systems | 2010

Face recognition under camouflage and adverse illumination

Matthew Shreve; Vasant Manohar; Dmitry B. Goldgof; Sudeep Sarkar

This paper presents a method for face identification under adverse conditions that combines regular frontal face images with facial strain maps using score-level fusion. Strain maps are generated by applying the central difference method to the optical flow field obtained from each subject's face during an open-mouth expression. Subjects were recorded with and without camouflage under three lighting conditions: normal lighting, low lighting, and strong shadow. Experimental results demonstrate that strain maps are a useful supplemental biometric in all three adverse conditions, especially under camouflage, where a 30% increase in rank-1 recognition is observed over a baseline PCA-based algorithm.
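Score-level fusion of the two matchers can be sketched as a weighted sum of normalized scores; the min-max normalization and equal weighting below are assumptions, not the paper's exact fusion rule:

```python
# A hedged sketch of score-level fusion between a face matcher and a
# strain-map matcher; normalization scheme and weight are assumptions.
import numpy as np

def fuse_scores(face_scores, strain_scores, w=0.5):
    def norm(s):
        # Min-max normalize so the two score ranges are comparable.
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w * norm(face_scores) + (1 - w) * norm(strain_scores)
```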

Collaboration


Dive into Matthew Shreve's collaborations.

Top Co-Authors

Dmitry B. Goldgof, University of South Florida
Sudeep Sarkar, University of South Florida
Sergiy Fefilatyev, University of South Florida
Rangachar Kasturi, University of South Florida
Vasant Manohar, University of South Florida
Deborah Sapper, University of South Florida
Joshua Candamo, University of South Florida
Jesse Brizzi, University of South Florida
Kurt Kramer, University of South Florida
Lawrence O. Hall, University of South Florida