Matthew Adam Shreve
Xerox
Publications
Featured research published by Matthew Adam Shreve.
international conference on image processing | 2016
Matthew Adam Shreve; Edgar A. Bernal; Qun Li; Jayant Kumar; Raja Bala
This paper investigates the discriminative capabilities of facial action units (AUs) exhibited by an individual while performing a task on a tablet computer in a semi-unconstrained environment. To that end, AUs are measured on a frame-by-frame basis from videos of 96 different subjects participating in a game-show-like quiz game that included a prize incentive. We propose a method that leverages the activation characteristics, as well as the temporal dynamics, of facial behavior. To demonstrate the discriminative capabilities of the proposed approach, we perform identity matching across all subject pairs. Overall, the rank-1 matching performance of our algorithm ranges from 55% to 85%, in scenarios where the emotional disparity between the reference and query samples is largest and smallest, respectively. We believe these results represent a significant improvement relative to existing work relying on the use of AUs for human identification, in particular because the experimental settings guarantee that the facial expressions involved are spontaneous.
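The matching procedure the abstract describes can be illustrated with a minimal sketch: summarize each video by its per-AU activation frequencies, then assign a query to the nearest reference signature. All function names and the distance choice here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: rank-1 identity matching from per-frame AU activations.
# Each video is summarized by the fraction of frames in which each facial
# action unit (AU) is active; identities are matched by nearest neighbour.
import math

def au_signature(frames):
    """frames: list of per-frame binary AU activation vectors (equal length).
    Returns the activation frequency of each AU across the video."""
    n = len(frames)
    k = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(k)]

def rank1_match(query_sig, reference_sigs):
    """Return the index of the closest reference signature (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(reference_sigs)),
               key=lambda i: dist(query_sig, reference_sigs[i]))
```

A real system would also model the temporal dynamics the paper leverages (e.g. AU onset/offset timing), not just activation frequency.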
international conference on intelligent transportation systems | 2015
Matthew Adam Shreve; Edgar A. Bernal; Qun Li; Robert P. Loce
Occlusions present a challenge in surveillance and traffic monitoring applications where person and/or vehicle tracking is required. Video-based object tracking is a process in which the location of a given object of interest is determined across a range of frames in a video sequence. A key step in typical tracking operations is forming a feature representation of the object being tracked and solving a correspondence problem to find the location of the best-matching set of those features between video frames. The best-matching feature set is usually found via optimization algorithms applied to regions in subsequent frames near the location of the object in the current frame. The features used to solve the correspondence problem are usually appearance-based, and may include color, texture, and shape descriptors. Consequently, the track can be lost when a view of the tracked object is occluded by objects in the scene, because the appearance of the occluded object may not sufficiently resemble the appearance of the unoccluded object. In this paper, we present a method for determining the location of static occlusions in a scene at the pixel level, and utilize knowledge of the location of the occlusions to boost the performance of well-known video-based object tracking algorithms. We demonstrate via experimental testing that the proposed method is effective in improving the performance of tracking algorithms, particularly when the motion in the scene is highly regularized, as is the case for cameras performing transportation monitoring tasks.
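The core idea, a pixel-level static occlusion map used to discount occluded pixels during correspondence search, can be sketched as below. This is a minimal illustration assuming a simple sum-of-squared-differences matcher; the function names, shapes, and scoring are assumptions, not the paper's method.

```python
# Hypothetical sketch: appearance-based correspondence search that ignores
# pixels flagged by a static occlusion mask.
import numpy as np

def masked_ssd(patch, template, mask):
    """Mean squared difference over unoccluded pixels only.
    mask is True where a pixel is statically occluded."""
    valid = ~mask
    if valid.sum() == 0:
        return np.inf  # fully occluded candidate: no appearance evidence
    diff = (patch - template)[valid]
    return float((diff ** 2).sum()) / int(valid.sum())

def best_match(frame, template, occlusion, top_left_candidates):
    """Pick the candidate top-left location whose patch best matches the
    template, discounting statically occluded pixels."""
    h, w = template.shape
    best, best_score = None, np.inf
    for (y, x) in top_left_candidates:
        patch = frame[y:y + h, x:x + w].astype(float)
        mask = occlusion[y:y + h, x:x + w]
        score = masked_ssd(patch, template.astype(float), mask)
        if score < best_score:
            best, best_score = (y, x), score
    return best
```

Without the mask, pixels belonging to a static occluder would be compared against the object's template and drag the match toward the wrong location; masking them restricts the score to pixels that can actually show the object.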
Archive | 2012
Matthew Adam Shreve; Michael C. Mongeon; Robert P. Loce; Edgar A. Bernal; Wencheng Wu
Archive | 2013
Matthew Adam Shreve; Michael C. Mongeon; Robert P. Loce; Edgar A. Bernal
arXiv: Machine Learning | 2017
Kumar Sricharan; Raja Bala; Matthew Adam Shreve; Hui Ding; Kumar Saketh; Jin Sun
Archive | 2015
Michael C. Mongeon; Robert P. Loce; Matthew Adam Shreve
Archive | 2014
Qun Li; Edgar A. Bernal; Matthew Adam Shreve; Robert P. Loce
Archive | 2014
Qun Li; Edgar A. Bernal; Matthew Adam Shreve
Archive | 2014
Matthew Adam Shreve; Edgar A. Bernal; Qun Li; Robert P. Loce
Archive | 2013
Michael C. Mongeon; Matthew Adam Shreve; Robert P. Loce