Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vitaly Ablavsky is active.

Publication


Featured research published by Vitaly Ablavsky.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Learning a Family of Detectors via Multiplicative Kernels

Quan Yuan; Ashwin Thangali; Vitaly Ablavsky; Stan Sclaroff

Object detection is challenging when the object class exhibits large within-class variations. In this work, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly learned in a multiplicative form of two kernel functions. Model training is accomplished via standard SVM learning. When the foreground object masks are provided in training, the detectors can also produce object segmentations. We also propose a tracking-by-detection framework, built on our model, to recover foreground state in video sequences. The advantages of our method are demonstrated on tasks of object detection, view angle estimation, and tracking. Our approach compares favorably to existing methods on hand and vehicle detection tasks. Quantitative tracking results are given on sequences of moving vehicles and human faces.
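
As a rough illustration of the multiplicative form described above (the notation is ours, not taken verbatim from the paper), the detection and pose kernels combine as follows, so that a pose-tuned detector falls out of a single SVM solution:

```latex
% Sketch of the multiplicative-kernel form described in the abstract.
% x is an image feature vector, \theta a within-class (pose) parameter.
\[
  K\bigl((x,\theta),\,(x',\theta')\bigr)
  \;=\; K_{\mathrm{fg}}(x, x') \cdot K_{\mathrm{pose}}(\theta, \theta')
\]
% A detector tuned to a particular pose \theta then takes the usual SVM form
\[
  f_{\theta}(x) \;=\; \sum_{i} \alpha_i\, y_i\,
  K_{\mathrm{pose}}(\theta_i, \theta)\, K_{\mathrm{fg}}(x_i, x) \;+\; b ,
\]
% so foreground/background classification and pose estimation share one set
% of support vectors (x_i, \theta_i, y_i) learned by standard SVM training.
```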


AIAA Guidance, Navigation, and Control Conference and Exhibit | 2000

Optimal Search for a Moving Target: A Geometric Approach

Vitaly Ablavsky; Magnus Snorrason

The problem of optimal (or near-optimal) exhaustive search for a moving target is of importance in many civilian and military applications. Search-and-rescue in open sea or in sparsely-populated areas and search missions for previously-spotted enemy targets are just a few examples. Yet, few known algorithms exist for solving this problem and none of them combine the optimal allocation of search effort with the actual computation of trajectories that a searcher must (and physically can) follow. We propose a divide-and-conquer geometric approach for constructing optimal search paths for arbitrarily-shaped regions of interest. The technique is both generalizable to multiple search agents and extensible in that additional real-life search requirements (maneuverability constraints, additional information about the sensor, etc.) can be incorporated into the existing framework. Another novelty of our approach is the ability to optimally deal with a search platform which, due to design constraints, can only perform detection while moving along straight-line sweeps.
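
The point about platforms that can only detect along straight-line sweeps can be illustrated with a minimal boustrophedon (lawn-mower) coverage sketch for one rectangular sub-region. This is a generic illustration under assumed inputs (region size, sensor swath width), not the paper's actual decomposition algorithm:

```python
import numpy as np

def sweep_waypoints(width, height, swath, margin=0.0):
    """Boustrophedon (lawn-mower) waypoints covering a width x height
    rectangle with parallel straight-line sweeps spaced one sensor
    swath apart.  Illustrative only; the paper decomposes arbitrary
    regions into such sweep-friendly pieces."""
    xs = np.arange(margin + swath / 2.0, width - margin, swath)
    waypoints = []
    for i, x in enumerate(xs):
        lo, hi = margin, height - margin
        # alternate sweep direction so consecutive lanes connect end-to-end
        ys = (lo, hi) if i % 2 == 0 else (hi, lo)
        waypoints.append((x, ys[0]))
        waypoints.append((x, ys[1]))
    return waypoints

if __name__ == "__main__":
    for wp in sweep_waypoints(width=10.0, height=6.0, swath=2.0):
        print(wp)
```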


Computer Vision and Pattern Recognition | 2008

Layered graphical models for tracking partially-occluded objects

Vitaly Ablavsky; Ashwin Thangali; Stan Sclaroff

We propose a representation for scenes containing relocatable objects that can cause partial occlusions of people in a camera's field of view. In many practical applications, relocatable objects tend to appear often; therefore, models for them can be learned offline and stored in a database. We formulate an occluder-centric representation, called a graphical model layer, where a person's motion in the ground plane is defined as a first-order Markov process on activity zones, while image evidence is aggregated in 2D observation regions that are depth-ordered with respect to the occlusion mask of the relocatable object. We represent real-world scenes as a composition of depth-ordered, interacting graphical model layers, and account for image evidence in a way that handles mutual overlap of the observation regions and their occlusions by the relocatable objects. These layers interact: Proximate ground-plane zones of different model instances are linked to allow a person to move between the layers, and image evidence is shared between the observation regions of these models. We demonstrate our formulation in tracking pedestrians in the vicinity of parked vehicles. Our results compare favorably with a sprite-learning algorithm, with a pedestrian tracker based on deformable contours, and with pedestrian detectors.
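
A minimal sketch of the occluder-centric idea, assuming a toy zone layout: one first-order Markov (HMM-style) filtering step over ground-plane activity zones, where image evidence from a heavily occluded observation region is down-weighted. The zone layout, likelihood, and occlusion handling here are illustrative placeholders, not the paper's model:

```python
import numpy as np

def forward_update(belief, transition, zone_likelihood, occlusion_frac):
    """One first-order Markov filtering step over activity zones.

    belief          : (Z,) current distribution over zones
    transition      : (Z, Z) zone-to-zone transition matrix (rows sum to 1)
    zone_likelihood : (Z,) likelihood of the image evidence in each
                      zone's observation region
    occlusion_frac  : (Z,) fraction of each observation region hidden
                      by the relocatable object's occlusion mask
    """
    # evidence from a mostly occluded region should count for less;
    # blending toward a flat likelihood is one simple way to do that
    effective = (1.0 - occlusion_frac) * zone_likelihood \
        + occlusion_frac * zone_likelihood.mean()
    predicted = transition.T @ belief
    posterior = predicted * effective
    return posterior / posterior.sum()

if __name__ == "__main__":
    Z = 4
    belief = np.full(Z, 1.0 / Z)
    T = np.full((Z, Z), 0.1) + 0.6 * np.eye(Z)
    T /= T.sum(axis=1, keepdims=True)
    like = np.array([0.2, 0.9, 0.3, 0.1])   # zone 1 has strong image evidence
    occ = np.array([0.0, 0.7, 0.0, 0.0])    # but is 70% hidden by the occluder
    print(forward_update(belief, T, like, occ))
```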


AIAA Guidance, Navigation, and Control Conference and Exhibit | 2003

Search Path Optimization for UAVs Using Stochastic Sampling with Abstract Pattern Descriptors

Vitaly Ablavsky; Daniel W. Stouch; Magnus Snorrason

The problem of generating the optimal search path for an unmanned aerial vehicle to locate a potentially moving target is of importance in many civilian and military applications. Search-and-rescue in open sea or in sparsely-populated areas and search missions for previously-spotted enemy targets are just a few examples. Few algorithms exist for solving this problem, and our solution is novel in that it combines the optimal allocation of search effort with the actual computation of trajectories that a searcher must (and physically can) follow. Our approach exploits the target’s spatial mobility constraints to derive accurate regions of interest, and then utilizes the geometric properties of a region of interest to decompose the overall search problem into a set of simpler search problems. Our approach involves applying computational geometry methods to partition the complex search region into minimally-overlapping compact sub-regions. This enables closed-form computation of a flight trajectory for each sub-region, such that path length is minimized while full coverage is guaranteed and constraints of the airframe and sensor are met. The novelty of our solution described in this paper lies in how we optimize the global sequencing of individual trajectories into a complete near-optimal search path that covers the whole complex search region. We use the concept of abstract pattern descriptors to simplify the representation of each search pattern. A stochastic Metropolis sampling approach with Markov random fields is then used in conjunction with a simulated annealing algorithm to derive the near-optimal global search path for each isochronal contour.
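
The global sequencing step can be illustrated with a simplified annealed search over pattern orderings. This sketch reduces each abstract pattern descriptor to an assumed (entry, exit) point pair and uses plain Metropolis acceptance with 2-opt moves, standing in for the Markov-random-field formulation described above:

```python
import math
import random

def transit_cost(order, entry_exit):
    """Total transit cost for a given ordering of search patterns, using
    each pattern's (entry, exit) points; an abstract 'pattern descriptor'
    is reduced here to just those two points."""
    total = 0.0
    for a, b in zip(order, order[1:]):
        a_exit = entry_exit[a][1]
        b_entry = entry_exit[b][0]
        total += math.dist(a_exit, b_entry)
    return total

def anneal_sequence(entry_exit, iters=20000, t0=5.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    order = list(range(len(entry_exit)))
    best = cur = transit_cost(order, entry_exit)
    best_order, t = list(order), t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(order)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt style move
        c = transit_cost(cand, entry_exit)
        if c < cur or rng.random() < math.exp((cur - c) / t):    # Metropolis acceptance
            order, cur = cand, c
            if c < best:
                best, best_order = c, list(order)
        t *= cooling
    return best_order, best

if __name__ == "__main__":
    rng = random.Random(1)
    patterns = [((rng.random(), rng.random()), (rng.random(), rng.random()))
                for _ in range(12)]
    order, cost = anneal_sequence(patterns)
    print(order, round(cost, 3))
```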


Computer Vision and Pattern Recognition | 2008

Multiplicative kernels: Object detection, segmentation and pose estimation

Quan Yuan; Ashwin Thangali; Vitaly Ablavsky; Stan Sclaroff

Object detection is challenging when the object class exhibits large within-class variations. In this work, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly learned in a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. Detector training can be accomplished via standard SVM learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the foreground parameters are provided in training, the detectors can also produce parameter estimates. When the foreground object masks are provided in training, the detectors can also produce object segmentations. The advantages of our method over past methods are demonstrated on data sets of human hands and vehicles.
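
A minimal sketch of "standard SVM learning" with a multiplicative kernel, using scikit-learn's precomputed-kernel interface on synthetic data; the features, pose parameterization, and RBF kernels are placeholders, not the paper's choices:

```python
import numpy as np
from sklearn.svm import SVC

def rbf(A, B, gamma):
    # RBF kernel matrix between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# toy data: x = appearance feature, theta = within-class (pose) parameter
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 8))             # placeholder appearance features
theta = rng.uniform(0, 1, size=(n, 1))  # placeholder pose parameter
y = (X[:, 0] + 0.5 * np.sin(6 * theta[:, 0]) > 0).astype(int)  # synthetic labels

# multiplicative kernel: appearance kernel times pose kernel
K = rbf(X, X, gamma=0.1) * rbf(theta, theta, gamma=2.0)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

# scoring a new sample at a hypothesized pose reuses the same product form
x_new = rng.normal(size=(1, 8))
theta_hyp = np.array([[0.3]])
K_new = rbf(x_new, X, gamma=0.1) * rbf(theta_hyp, theta, gamma=2.0)
print("decision value:", clf.decision_function(K_new))
```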


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Layered Graphical Models for Tracking Partially Occluded Objects

Vitaly Ablavsky; Stan Sclaroff

Partial occlusions are commonplace in a variety of real-world computer vision applications: surveillance, intelligent environments, assistive robotics, autonomous navigation, etc. While occlusion handling methods have been proposed, most methods tend to break down when confronted with numerous occluders in a scene. In this paper, a layered image-plane representation for tracking people through substantial occlusions is proposed. An image-plane representation of motion around an object is associated with a pre-computed graphical model, which can be instantiated efficiently during online tracking. A global state and observation space is obtained by linking transitions between layers. A reversible jump Markov chain Monte Carlo approach is used to infer the number of people and track them online. The method outperforms two state-of-the-art methods for tracking over extended occlusions, given videos of a parking lot with numerous vehicles and a laboratory with many desks and workstations.
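
A heavily simplified sketch of the reversible-jump idea: birth/death Markov chain Monte Carlo over a variable-size set of person hypotheses, with a toy prior and a toy detection likelihood. The real system couples this with the layered observation models and online tracking, all of which is omitted here:

```python
import math
import random

random.seed(0)

# Toy "image evidence": a few blurry detections on the ground plane.
DETECTIONS = [(0.2, 0.3), (0.7, 0.6)]
LAMBDA = 1.5   # prior expected number of people (Poisson)
SIGMA = 0.08   # detection spread

def log_target(people):
    """Unnormalized log posterior over a variable-size set of people:
    Poisson prior on the count plus a crude likelihood that rewards
    explaining each detection with a nearby person."""
    lp = len(people) * math.log(LAMBDA) - math.lgamma(len(people) + 1)
    for d in DETECTIONS:
        best = max((math.exp(-((p[0]-d[0])**2 + (p[1]-d[1])**2) / (2*SIGMA**2))
                    for p in people), default=1e-6)
        lp += math.log(best + 1e-6)
    lp -= 2.0 * len(people)   # penalty discouraging unsupported hypotheses
    return lp

def rjmcmc(iters=5000):
    people, counts = [], []
    for _ in range(iters):
        if random.random() < 0.5:            # birth: add a person uniformly at random
            cand = people + [(random.random(), random.random())]
            # Hastings ratio for uniform birth vs. uniform death of one of k+1 people
            log_a = log_target(cand) - log_target(people) - math.log(len(cand))
        elif people:                         # death: remove a random person
            i = random.randrange(len(people))
            cand = people[:i] + people[i+1:]
            log_a = log_target(cand) - log_target(people) + math.log(len(people))
        else:
            counts.append(len(people))
            continue
        if math.log(random.random() + 1e-300) < log_a:
            people = cand
        counts.append(len(people))
    return counts

if __name__ == "__main__":
    counts = rjmcmc()
    print("posterior mode of #people:", max(set(counts), key=counts.count))
```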


International Conference on Document Analysis and Recognition | 2005

Sequential correction of perspective warp in camera-based documents

Camille Monnier; Vitaly Ablavsky; Steve Holden; Magnus Snorrason

Documents captured with hand-held devices, such as digital cameras, often exhibit perspective warp artifacts. These artifacts pose problems for OCR systems, which at best can only handle in-plane rotation. We propose a method for recovering the planar appearance of an input document image by examining the vertical rate of change in scale of features in the document. Our method makes fewer assumptions about the document structure than do previously published algorithms.
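
A minimal sketch of the underlying cue, under the assumption that detected text-line heights shrink roughly linearly with vertical image position: fitting that trend yields a per-row scale factor that a rectifying warp could invert. This illustrates the idea, not the paper's algorithm:

```python
import numpy as np

def vertical_scale_model(line_y, line_height):
    """Fit height(y) = a*y + b to observed text-line heights, a crude proxy
    for the vertical rate of change of feature scale under foreshortening."""
    a, b = np.polyfit(line_y, line_height, deg=1)
    return a, b

def row_scale_factors(a, b, rows, ref_y):
    """Relative scale of each image row with respect to a reference row;
    a rectifying warp would stretch each row by the inverse of this."""
    y = np.arange(rows, dtype=float)
    return (a * y + b) / (a * ref_y + b)

if __name__ == "__main__":
    # synthetic document: line heights shrink toward the top of the image
    y = np.array([50, 150, 250, 350, 450], dtype=float)
    h = np.array([12.0, 14.1, 15.9, 18.2, 20.0])
    a, b = vertical_scale_model(y, h)
    scales = row_scale_factors(a, b, rows=480, ref_y=450)
    print(f"height(y) = {a:.4f}*y + {b:.2f}; top-row relative scale = {scales[0]:.2f}")
```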


Computer Vision and Pattern Recognition | 2007

Parameter Sensitive Detectors

Quan Yuan; Ashwin Thangali; Vitaly Ablavsky; Stan Sclaroff

Object detection can be challenging when the object class exhibits large variations. One commonly-used strategy is to first partition the space of possible object variations and then train separate classifiers for each portion. However, with continuous spaces the partitions tend to be arbitrary since there are no natural boundaries (for example, consider the continuous range of human body poses). In this paper, a new formulation is proposed, where the detectors themselves are associated with continuous parameters, and reside in a parameterized function space. There are two advantages of this strategy. First, a priori partitioning of the parameter space is not needed; the detectors themselves are in a parameterized space. Second, the underlying parameters for object variations can be learned from training data in an unsupervised manner. In profile face detection experiments, at a fixed false alarm number of 90, our method attains a detection rate of 75% vs. 70% for the method of Viola-Jones. In hand shape detection, at a false positive rate of 0.1%, our method achieves a detection rate of 99.5% vs. 98% for partition based methods. In pedestrian detection, our method reduces the miss detection rate by a factor of three at a false positive rate of 1%, compared with the method of Dalal-Triggs.
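
A minimal sketch of a detector family living in a parameterized function space: linear detector weights vary smoothly with a 1-D pose parameter through a small basis, so no hard partitioning of the parameter space is needed. The basis, least-squares training, and synthetic data are illustrative stand-ins for the paper's formulation:

```python
import numpy as np

def basis(theta):
    """Small smooth basis over a 1-D pose parameter theta in [0, 1];
    the detector family is a linear combination of these basis weights."""
    return np.array([1.0, theta, np.sin(np.pi * theta), np.cos(np.pi * theta)])

class ParameterSensitiveDetector:
    """f(x; theta) = x . (W^T basis(theta)) + b . basis(theta): one detector
    per value of theta, all sharing the matrix W (learned jointly in the
    paper; here W is fit by plain least squares for illustration)."""
    def __init__(self, n_features, n_basis=4):
        self.W = np.zeros((n_basis, n_features))
        self.b = np.zeros(n_basis)

    def fit(self, X, thetas, y):
        # design matrix couples appearance features with the pose basis
        Phi = np.stack([np.outer(basis(t), x).ravel() for x, t in zip(X, thetas)])
        Phi = np.hstack([Phi, np.stack([basis(t) for t in thetas])])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        n_b, n_f = self.W.shape
        self.W = coef[: n_b * n_f].reshape(n_b, n_f)
        self.b = coef[n_b * n_f:]
        return self

    def score(self, x, theta):
        w = self.W.T @ basis(theta)   # detector weights at this pose
        return float(x @ w + self.b @ basis(theta))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))
    thetas = rng.uniform(0, 1, size=300)
    y = np.sign(X[:, 0] * np.sin(np.pi * thetas) + X[:, 1] - 0.1)
    det = ParameterSensitiveDetector(n_features=6).fit(X, thetas, y)
    print("score for sample 0:", round(det.score(X[0], thetas[0]), 3), "label:", y[0])
```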


Proceedings of SPIE, the International Society for Optical Engineering | 2007

Video surveillance of pedestrians and vehicles

Daniel Gutchess; Vitaly Ablavsky; Ashwin Thangali; Stan Sclaroff; Magnus Snorrason

In this paper, we focus on the problem of automated surveillance in a parking lot scenario. We call our research system VANESSA, for Video Analysis for Nighttime Surveillance and Situational Awareness. VANESSA is capable of: 1) detecting moving objects via background modeling and false motion suppression, 2) tracking and classifying pedestrians and vehicles, and 3) detecting events such as a person entering or exiting a vehicle. Moving object detection utilizes a multi-stage cascading approach to identify pixels that belong to the true objects and reject any spurious motion (e.g., due to vehicle headlights or moving foliage). Pedestrians and vehicles are tracked using a multiple hypothesis tracker coupled with a particle filter for state estimation and prediction. The space-time trajectory of each tracked object is stored in an SQL database along with sample imagery to support video forensics applications. The detection of pedestrians entering/exiting vehicles is accomplished by first estimating the three-dimensional pose and the corresponding entry and exit points of each tracked vehicle in the scene. A pedestrian activity model is then used to probabilistically assign pedestrian tracks that appear or disappear in the vicinity of these entry/exit points. We evaluate the performance of tracking and pedestrian-vehicle association on an extensive data set collected in a challenging real-world scenario.
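
A minimal bootstrap particle filter for a single 2-D constant-velocity track, illustrating the state estimation and prediction component mentioned above; the motion model, noise parameters, and synthetic data are assumptions, and the multiple-hypothesis data association is omitted:

```python
import numpy as np

def particle_filter(observations, n_particles=500, dt=1.0,
                    process_std=0.5, meas_std=1.0, seed=0):
    """Bootstrap particle filter over state [x, y, vx, vy] with a
    constant-velocity motion model and a Gaussian position likelihood."""
    rng = np.random.default_rng(seed)
    # initialize particles around the first observation with zero velocity
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = observations[0] + rng.normal(0, meas_std, (n_particles, 2))
    estimates = []
    for z in observations:
        # predict: constant-velocity dynamics plus process noise
        particles[:, :2] += dt * particles[:, 2:]
        particles += rng.normal(0, process_std, particles.shape)
        # update: weight particles by the likelihood of the measured position
        d2 = ((particles[:, :2] - z) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / meas_std ** 2)
        w /= w.sum()
        estimates.append(w @ particles)
        # resample (multinomial) to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

if __name__ == "__main__":
    true = np.array([[t * 1.0, 0.5 * t] for t in range(20)], dtype=float)
    obs = true + np.random.default_rng(1).normal(0, 1.0, true.shape)
    est = particle_filter(obs)
    print("final position estimate:", est[-1, :2].round(2))
```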


AIAA Guidance, Navigation, and Control Conference and Exhibit | 2001

Efficient pursuit of a moving target via spatial constraint exploitation

Steven Holden; Vitaly Ablavsky; Magnus Snorrason

Collaboration


Dive into Vitaly Ablavsky's collaborations.

Top Co-Authors

Magnus Snorrason
Charles River Laboratories

Harald Ruda
Charles River Laboratories

Camille Monnier
Charles River Laboratories

Daniel Gutchess
Charles River Laboratories

Daniel W. Stouch
Charles River Laboratories

Steve Holden
Charles River Laboratories