
Publication


Featured research published by Scott McCloskey.


Machine Vision and Applications | 2014

Multimedia event detection with multimodal feature fusion and temporal concept localization

Sangmin Oh; Scott McCloskey; Ilseo Kim; Arash Vahdat; Kevin J. Cannons; Hossein Hajimirsadeghi; Greg Mori; A. G. Amitha Perera; Megha Pandey; Jason J. Corso

We present a system for multimedia event detection. The system characterizes complex multimedia events based on a large array of multimodal features and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we present a novel Latent SVM model that learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, its use of high-level concepts and temporal evidence localization produces a unique summary for every retrieval, which provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology for improving fusion learning under limited training data conditions. Thorough evaluation on the large TRECVID MED 2011 dataset showcases the benefits of the presented system.
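
As a purely illustrative sketch of the fusion-learning idea, the toy Python below picks a convex weight for combining two modalities' detection scores on held-out data. The data and names are invented; this is not the paper's fusion algorithm, which handles many modalities and limited-label regimes.

```python
import numpy as np

def late_fusion_weight(val_scores, val_labels, grid=np.linspace(0, 1, 21)):
    """Choose a convex weight for fusing two modalities' scores.

    val_scores: (n, 2) array of per-modality event scores in [0, 1].
    val_labels: (n,) binary event labels.
    Returns the weight on modality 0 that maximizes validation accuracy.
    """
    best_w, best_acc = 0.5, -1.0
    for w in grid:
        fused = w * val_scores[:, 0] + (1 - w) * val_scores[:, 1]
        acc = np.mean((fused > 0.5) == val_labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w

# Toy data: the "visual" modality is less noisy than the "audio" one.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
visual = np.clip(labels + 0.3 * rng.standard_normal(500), 0, 1)
audio = np.clip(labels + 0.8 * rng.standard_normal(500), 0, 1)
w = late_fusion_weight(np.stack([visual, audio], axis=1), labels)
print(f"weight on visual scores: {w:.2f}")
```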


International Conference on Computer Vision | 2009

Incremental Multiple Kernel Learning for object recognition

Aniruddha Kembhavi; Behjat Siddiquie; Roland Miezianko; Scott McCloskey; Larry S. Davis

A good training dataset, representative of the test images expected in a given application, is critical for ensuring good performance of a visual categorization system. Obtaining task-specific datasets of visual categories is, however, far more tedious than obtaining a generic dataset of the same classes. We propose an Incremental Multiple Kernel Learning (IMKL) approach to object recognition that initializes on a generic training database and then tunes itself to the classification task at hand. Our system simultaneously updates the training dataset as well as the weights used to combine multiple information sources. We demonstrate our system on a vehicle classification problem in a video stream overlooking a traffic intersection. Our system updates itself with images of vehicles in poses more commonly observed in the scene, as well as with image patches of the background, leading to an increase in performance. A considerable change in the kernel combination weights is observed as the system gathers scene-specific training data over time. The system also adapts to the illumination change in the scene as day transitions to night.
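
The kernel combination at the heart of any MKL system can be sketched in a few lines: build K as the beta-weighted sum of base kernels and train an SVM on it. The incremental updates to the training set and to the weights that define IMKL are not reproduced here; everything below is an invented illustration.

```python
import numpy as np
from sklearn.svm import SVC

def combined_kernel(kernels, beta):
    """K = sum_m beta_m * K_m: the multiple-kernel combination MKL optimizes."""
    return sum(b * K for b, K in zip(beta, kernels))

# Toy base kernels (linear and RBF) on random 2-D points.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
K_lin = X @ X.T
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_rbf = np.exp(-sq_dist)

beta = np.array([0.5, 0.5])  # kernel weights; IMKL would re-estimate these
K = combined_kernel([K_lin, K_rbf], beta)
clf = SVC(kernel="precomputed").fit(K, y)
print(f"training accuracy: {clf.score(K, y):.2f}")
```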


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Design and Estimation of Coded Exposure Point Spread Functions

Scott McCloskey; Yuanyuan Ding; Jingyi Yu

We address the problem of motion deblurring using coded exposure. This approach allows for accurate estimation of a sharp latent image via well-posed deconvolution, avoiding the loss of image content that cannot be recovered from images acquired with a traditional shutter. Previous work in this area has used either manual user input or alpha matting approaches to estimate the coded exposure Point Spread Function (PSF) from the captured image. In order to automate deblurring and to avoid the limitations of matting approaches, we propose a Fourier-domain statistical approach to coded exposure PSF estimation that allows us to estimate the latent image in cases of constant velocity, constant acceleration, and harmonic motion. We further demonstrate that previously used criteria for choosing a coded exposure PSF do not produce one with optimal reconstruction error, and that an additional 30 percent reduction in Root Mean Squared Error (RMSE) of the latent image estimate can be achieved by incorporating natural image statistics.
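
A minimal numerical sketch of why coded exposure keeps deconvolution well posed: for constant-velocity motion the PSF is essentially the shutter code itself, and a fluttered code keeps the Fourier magnitude away from zero where a box shutter does not. The binary code below is made up for illustration, not an optimized or published sequence.

```python
import numpy as np

def mtf_min(shutter_code, n=512):
    """Minimum MTF magnitude of a shutter sequence.

    For constant-velocity motion, the coded-exposure PSF is (up to scale)
    the open/closed shutter pattern, so deconvolution is well posed
    exactly when this minimum stays away from zero.
    """
    psf = np.asarray(shutter_code, float)
    psf /= psf.sum()
    return np.abs(np.fft.rfft(psf, n)).min()

box = [1] * 26                                    # traditional shutter: box PSF
code = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0,   # illustrative binary code,
        1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1]   # NOT a published sequence
print(f"box shutter MTF min:   {mtf_min(box):.4f}")   # near zero: content lost
print(f"coded shutter MTF min: {mtf_min(code):.4f}")
```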


European Conference on Computer Vision | 2010

Velocity-dependent shutter sequences for motion deblurring

Scott McCloskey

We address the problem of high-quality image capture of fast-moving objects in moderate light environments. In such cases, the use of a traditional shutter is known to yield non-invertible motion blur due to the loss of certain spatial frequencies. We extend the flutter shutter method of Raskar et al. to fast-moving objects by first demonstrating that no coded exposure sequence yields an invertible point spread function for all velocities. Based on this, we argue that the shutter sequence must be dependent on object velocity, and propose a method for computing such velocity-dependent sequences. We demonstrate improved image quality from velocity-dependent sequences on fast-moving objects, as compared to sequences found using the existing sampling method.
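
The velocity dependence can be illustrated with a toy model in which the object crosses an integer number of pixels per shutter chop, so the PSF is the code stretched by the velocity. This is a simplification of the paper's analysis, but it shows how invertibility changes with velocity for a fixed code.

```python
import numpy as np

def psf_at_velocity(code, v):
    """PSF when the object crosses v pixels per shutter chop (integer v).

    Simplified model: each open chop smears over v pixels, so the PSF is
    the code upsampled by the velocity factor. Stretching by v >= 2
    convolves the code with a length-v box, whose spectrum has exact
    zeros, so no fixed code stays invertible at all velocities here.
    """
    psf = np.repeat(np.asarray(code, float), v)
    return psf / psf.sum()

def invertible(code, v, n=2048, tol=1e-3):
    """Crude invertibility test: does the MTF stay above a threshold?"""
    return bool(np.abs(np.fft.rfft(psf_at_velocity(code, v), n)).min() > tol)

code = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # illustrative only
for v in (1, 2, 4, 8):
    print(f"velocity {v}: invertible = {invertible(code, v)}")
```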


Workshop on Applications of Computer Vision | 2011

2D Barcode localization and motion deblurring using a flutter shutter camera

Wei Xu; Scott McCloskey

We describe a system for localizing and deblurring motion-blurred 2D barcodes. Previous work on barcode detection and deblurring has mainly focused on 1D barcodes, and has employed traditional image acquisition, which is not robust to motion blur. Our solution is based on coded exposure imaging which, as we show, enables well-posed deconvolution and decoding over a wider range of velocities. To support this solution, we developed a simple and effective approach for 2D barcode localization under motion blur, a metric for evaluating the quality of deblurred 2D barcodes, and an approach for motion direction estimation in coded exposure images. We tested our system on real camera images of three popular 2D barcode symbologies: Data Matrix, PDF417, and Aztec Code.
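
A deblur-quality metric could take many forms; one plausible stand-in (not the paper's metric) scores how close the deblurred patch is to the binary image a clean 2D barcode should be, via Otsu-style between-class variance:

```python
import numpy as np

def bimodality_score(img):
    """Otsu-style between-class variance as a crude deblur-quality proxy.

    A cleanly deblurred barcode is near-binary, so the best threshold
    separates two tight intensity clusters; residual blur flattens the
    histogram and lowers this score. img: floats in [0, 1].
    """
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    bins = np.arange(256)
    best = 0.0
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * bins[:t]).sum() / w0
        m1 = (p[t:] * bins[t:]).sum() / w1
        best = max(best, w0 * w1 * (m0 - m1) ** 2)
    return best

# Toy check: a binary pattern scores higher than a smeared copy of it.
rng = np.random.default_rng(3)
sharp = (rng.random((64, 64)) > 0.5).astype(float)
blurry = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, 2, axis=1)) / 3
print(bimodality_score(sharp), ">", bimodality_score(blurry))
```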


European Conference on Computer Vision | 2010

Analysis of motion blur with a flutter shutter camera for non-linear motion

Yuanyuan Ding; Scott McCloskey; Jingyi Yu

Motion blur confounds many computer vision problems. The fluttered shutter (FS) camera [1] tackles the motion deblurring problem by emulating invertible broadband blur kernels. However, existing FS methods assume known constant-velocity motions, e.g., via user specifications. In this paper, we extend the FS technique to general 1D motions and develop an automatic motion-from-blur framework by analyzing the image statistics under the FS. We first introduce a fluttered-shutter point spread function (FS-PSF) to uniformly model the blur kernel under general motions. We show that many commonly used motions have closed-form FS-PSFs. To recover the FS-PSF from the blurred image, we present a new method based on image power spectrum statistics. We show that the Modulation Transfer Function of the 1D FS-PSF is statistically correlated with the blurred image power spectrum along the motion direction. We then recover the FS-PSF by finding the motion parameters that maximize the correlation. We demonstrate our techniques on a variety of motions including constant velocity, constant acceleration, and harmonic rotation. Experimental results show that our method can automatically and accurately recover the motion from blur captured under the fluttered shutter.
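
The spectral-matching idea can be sketched in 1D: compare the log power spectrum of a blurred scanline against the log-MTF of candidate FS-PSFs and keep the best-correlating motion parameter. This is a simplified stand-in for the paper's statistical analysis, with invented code and data.

```python
import numpy as np

def estimate_velocity(blurred_row, code, candidates, n=512):
    """Pick the velocity whose FS-PSF MTF best matches the blur spectrum.

    Correlates the log-MTF of each candidate PSF with the log power
    spectrum of a blurred scanline along the motion direction: a 1-D
    toy version of matching spectral dips, not the paper's estimator.
    """
    spec = np.log(np.abs(np.fft.rfft(blurred_row, n)) + 1e-8)
    best_v, best_corr = None, -np.inf
    for v in candidates:
        psf = np.repeat(np.asarray(code, float), v)
        psf /= psf.sum()
        mtf = np.log(np.abs(np.fft.rfft(psf, n)) + 1e-8)
        c = np.corrcoef(spec, mtf)[0, 1]
        if c > best_corr:
            best_v, best_corr = v, c
    return best_v

# Toy usage: blur a 1/f-like signal with the code stretched by factor 3.
rng = np.random.default_rng(2)
code = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
row = rng.standard_normal(400).cumsum()
true_psf = np.repeat(np.asarray(code, float), 3)
true_psf /= true_psf.sum()
blurred = np.convolve(row, true_psf, mode="same")
print("estimated velocity:", estimate_velocity(blurred, code, [1, 2, 3, 4, 5]))
```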


International Conference on Computer Vision | 2011

Temporally coded flash illumination for motion deblurring

Scott McCloskey

We use temporally sequenced flash illumination to capture coded exposure images of fast-moving objects in low light environments. These coded flash images allow for accurate estimation of blur-free latent images in the presence of object motion. By distributing flashes over a window of time, we lessen eye safety concerns associated with powerful all-at-once flashes. We show how our flash-based coded exposure system has better robustness to increasing object velocity than shutter-based exposure coding, thereby obviating the need for pre-exposure velocity estimation. We also show that the quality of the estimated sharp image is robust to varying levels of ambient illumination. This and other benefits of our coded flash system are demonstrated with real images acquired using prototype hardware.
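
A simplified model of the resulting exposure, assuming the flash pulses contribute a coded PSF component while ambient light adds a box component over the whole window, shows how the ambient/flash mix shapes the conditioning of the deconvolution (illustrative only, not the paper's model):

```python
import numpy as np

def effective_psf(flash_code, ambient_ratio):
    """Effective motion PSF from coded flashes plus constant ambient light.

    The flash pulses contribute a coded (invertible) component; ambient
    light adds a box component over the whole exposure. ambient_ratio is
    the fraction of total light that is ambient.
    """
    code = np.asarray(flash_code, float)
    return (1 - ambient_ratio) * code / code.sum() + ambient_ratio / code.size

flashes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]  # illustrative timing
for r in (0.0, 0.3, 0.6):
    m = np.abs(np.fft.rfft(effective_psf(flashes, r), 256)).min()
    print(f"ambient ratio {r}: MTF min {m:.4f}")
```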


International Conference on Biometrics: Theory, Applications and Systems | 2010

Iris capture from moving subjects using a fluttering shutter

Scott McCloskey; Wing Au; Jan Jelinek

We address the problem of sharp iris image capture from moving subjects for the purpose of biometric identification. Drawing on recent research in computational photography, we capture images using the flutter shutter technique of [12]. Instead of capturing an image with traditional motion blur, we open and close the shutter several times during capture in order to effect invertible motion blur in the captured image. We then apply automated blur estimation and deconvolution to recover a sharp image from the captured one. Through black-box testing with an existing iris matcher, we demonstrate the improved utility of these images for biometrics.
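
The deconvolution step can be any standard well-posed inversion once the coded PSF is known; a generic 1-D Wiener filter sketch (not the paper's matcher-specific pipeline) looks like this:

```python
import numpy as np

def wiener_deblur(blurred, psf, snr=100.0):
    """1-D Wiener deconvolution for a known coded-exposure PSF.

    Circular-convolution model; boundary effects and the shift from
    'same'-mode blurring are ignored in this sketch.
    """
    n = blurred.size
    H = np.fft.rfft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(blurred) * G, n)

# Toy usage on an invented signal and code.
rng = np.random.default_rng(4)
psf = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1], float)
psf /= psf.sum()
signal = rng.standard_normal(256).cumsum()
restored = wiener_deblur(np.convolve(signal, psf, mode="same"), psf)
```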


International Conference on Computational Photography | 2011

Motion invariance and custom blur from lens motion

Scott McCloskey; Kelly P. Muldoon; Sharath Venkatesha

We demonstrate that the image stabilizing hardware included in many camera lenses can be used to implement motion invariance and custom blur effects. Motion invariance is intended to capture images in which objects within a range of velocities appear blurred with the same point spread function, obviating the need for blur estimation in advance of deblurring. We show that the necessary parabolic motion can be implemented with stabilizing lens motion, but that the range of velocities to which capture is invariant decreases with increasing exposure time. We also show that, when that range is expanded through increased lens displacement, lens motion becomes less repeatable. In addition to motion invariance, we demonstrate that stabilizing lens motion can be used to design custom blur kernels for aesthetic purposes, and can replace lens accessories.
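
The parabolic-motion argument can be illustrated numerically: sweeping the lens along x(t) = a*t^2 makes the blur kernel's shape near its peak the same for any object velocity |v| < a*T, with agreement degrading at the tails. The sketch below is an illustration under that simplified model, not the paper's calibration procedure.

```python
import numpy as np

def parabolic_psf(a, T, v, n_t=200000, bins=300, lo=-0.5, hi=1.0):
    """Blur kernel for an object of velocity v under parabolic lens motion.

    The lens sweeps x(t) = a*t^2 over t in [-T/2, T/2]; the kernel is the
    time the object's relative position x(t) - v*t spends in each spatial
    bin. Near its (shifted) peak the kernel has the same 1/sqrt profile
    for every |v| < a*T, which is the motion-invariance argument.
    """
    t = np.linspace(-T / 2, T / 2, n_t)
    rel = a * t**2 - v * t
    psf, _ = np.histogram(rel, bins=bins, range=(lo, hi))
    return psf / psf.sum()

for v in (0.0, 0.3, 0.6):
    psf = parabolic_psf(a=1.0, T=1.0, v=v)
    print(f"v={v}: kernel peak at bin {psf.argmax()}")
```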


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Removal of Partial Occlusion from Single Images

Scott McCloskey; Michael S. Langer; Kaleem Siddiqi

This paper examines large partial occlusions in an image that occur near depth discontinuities when the foreground object is severely out of focus. We model these partial occlusions using matting, with the alpha value determined by the convolution of the blur kernel with a pinhole projection of the occluder. The main contribution is a method for removing the image contribution of the foreground occluder in regions of partial occlusion, which improves the visibility of the background scene. The method consists of three steps. First, the region of complete occlusion is estimated using a curve evolution method. Second, the alpha value at each pixel in the partly occluded region is estimated. Third, the intensity contribution of the foreground occluder is removed in regions of partial occlusion. Experiments demonstrate the method's ability to remove the effects of partial occlusion in single images with minimal user input.
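
The removal step inverts the standard matting equation I = alpha*F + (1 - alpha)*B for the background B; a minimal sketch with invented toy data follows. Note the noise amplification as alpha approaches 1, which is why only the partially occluded band can be restored.

```python
import numpy as np

def remove_occluder(I, alpha, F):
    """Invert I = alpha*F + (1 - alpha)*B for the background B.

    I: observed intensities, alpha: per-pixel occluder coverage in [0, 1),
    F: estimated foreground (occluder) intensity. Only meaningful where
    alpha < 1; the clip caps the noise amplification as alpha -> 1.
    """
    return (I - alpha * F) / np.clip(1.0 - alpha, 1e-3, None)

# Toy 1-D example: a background ramp seen through a fading occluder edge.
B = np.linspace(0.2, 0.9, 100)                        # true background
alpha = np.clip(np.linspace(1.2, -0.2, 100), 0, 1)    # fading occlusion
I = alpha * 0.5 + (1 - alpha) * B                     # occluder F = 0.5
B_hat = remove_occluder(I, alpha, F=0.5)
print(np.abs(B_hat - B)[alpha < 0.99].max())          # ~0 where alpha < 1
```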

Collaboration


Dive into Scott McCloskey's collaborations.

Top Co-Authors

Jingyi Yu

University of Delaware

Ilseo Kim

Georgia Institute of Technology
