Özge Can Özcanli
Brown University
Publication
Featured research published by Özge Can Özcanli.
Computer Vision and Pattern Recognition | 2006
Özge Can Özcanli; Amir Tamrakar; Benjamin B. Kimia; Joseph L. Mundy
Shape is an important cue for generic object recognition but can be insufficient without other cues such as object appearance. We explore a number of ways in which the geometric aspects of an object can be augmented with its appearance. The main idea is to construct a dense correspondence between the interior regions of two shapes based on a shape-based correspondence so that the intensity and gradient distributions can be compared, e.g., using a mutual information paradigm. Three methods for regional alignment are suggested and compared here, based on: (i) propagation of correspondences from the silhouette to parallel curves in the interior, (ii) intersection of line segments anchored on corresponding points on the contour, and (iii) correspondence of shape skeletons. These methods have been implemented and applied to vehicle category recognition from aerial videos under known viewing and illumination conditions. We have constructed a photo-realistic synthetic video database to explore the performance of these methods under controlled conditions. We have also tested these algorithms on real video collected for this purpose from a balloon. Our findings indicate that (i) augmenting shape with appearance significantly increases recognition rate, and (ii) the region correspondence induced by the shape skeleton yields the highest performance.
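The mutual-information comparison of corresponding interior regions can be sketched as follows; the histogram binning and the use of raw intensity samples (rather than gradient distributions) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two intensity samples drawn from
    corresponding interior regions of two shapes. High MI suggests the
    regions share appearance structure."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Given a dense region correspondence (from parallel curves, line segments, or the skeleton), the two intensity samples are simply the pixel values read off at corresponding locations.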
Proceedings of SPIE | 2009
Joseph L. Mundy; Özge Can Özcanli
Over the last several years, a new representation for geometry has been developed, based on a 3-d probability distribution of surface position and appearance. This representation can be constructed from multiple images, using both still and video data. The probability for 3-d surface position is estimated in an on-line algorithm using Bayesian inference. The probability of a point belonging to a surface is updated according to its success in accounting for the intensity of the current image at the projected image location of the point. A Gaussian mixture is used to model image appearance. This update process can be proved to converge under relatively general conditions that are consistent with aerial imagery. There are no explicit surfaces extracted, but only discrete surface probabilities. This paper describes the application of this representation to object recognition, based on Bayesian compositional hierarchies.
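A minimal sketch of the per-point Bayesian update described above, assuming a simple two-hypothesis form (surface vs. background); the `gmm_likelihood` appearance term and the background model are illustrative assumptions:

```python
import numpy as np

def gmm_likelihood(x, weights, means, sigmas):
    """Likelihood of intensity x under a 1-D Gaussian mixture
    appearance model (hypothetical parameterization)."""
    w, m, s = map(np.asarray, (weights, means, sigmas))
    return float(np.sum(w * np.exp(-0.5 * ((x - m) / s) ** 2)
                        / (s * np.sqrt(2 * np.pi))))

def update_surface_prob(p, like_surface, like_background):
    """One Bayesian update of a point's surface probability, given the
    likelihood of the observed pixel intensity under the point's
    appearance model versus a background model."""
    num = p * like_surface
    return num / (num + (1.0 - p) * like_background)
```

Repeated application over a sequence of images drives the probability toward 1 for points that consistently explain the observed intensities, and toward 0 otherwise.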
British Machine Vision Conference | 2007
Özge Can Özcanli; Benjamin B. Kimia
We propose a new methodology to partition a natural image into regions based on the shock graph of its contour fragments. We show that these regions, or shock patch fragments, are often object fragments, thus effecting a partial segmentation of the image. We utilize shock patch fragments to recognize objects with dominant shape cues, eliminating the need to first segment out the entire object from the image. Our preliminary results with minimal training are promising relative to state-of-the-art recognition systems.
International Journal of Computer Vision | 2016
Özge Can Özcanli; Yi Dong; Joseph L. Mundy; Helen Webb; Riad I. Hammoud; Victor Tom
Modern satellites tag their images with geolocation information using GPS and star-tracking systems. Depending on the quality of the geopositioning equipment, errors may range from a few meters to tens of meters on the ground. At the current state of the art, there is no established method to automatically correct these errors, which limits the large-scale joint utilization of cross-platform satellite images. In this paper, an automatic geolocation correction framework that corrects images from multiple satellites simultaneously is presented. As a result of the proposed correction process, all the images are effectively registered to the same absolute geodetic coordinate frame. The usability and the quality of the correction framework are demonstrated through a 3-D surface reconstruction application. The 3-D surface models given by the original satellite geopositioning metadata and by the corrected metadata are compared. The quality difference is measured through an entropy-based metric applied to the orthographic height maps given by the 3-D surface models. Measuring the absolute accuracy of the framework is harder due to the lack of publicly available high-precision ground surveys. However, the geolocations of images of exemplar satellites from different parts of the globe are corrected, and the road networks given by OpenStreetMap are projected onto the images using the original and corrected metadata to demonstrate the improved quality of alignment.
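The entropy-based metric on orthographic height maps can be illustrated with a simple Shannon-entropy score over the height distribution; the bin count and the base-2 normalization here are assumptions, since the abstract does not give the exact formulation:

```python
import numpy as np

def height_map_entropy(height_map, bins=64):
    """Shannon entropy (bits) of a height map's value distribution.
    The intuition: better geolocation yields crisper reconstructions,
    which tend to concentrate height values and lower the entropy."""
    h, _ = np.histogram(np.ravel(height_map), bins=bins)
    p = h / h.sum()
    p = p[p > 0]                      # drop empty bins before log
    return float(-(p * np.log2(p)).sum())
```

Comparing the score before and after metadata correction then gives a relative quality measure without requiring ground-truth surveys.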
International Conference on Pattern Recognition | 2008
Pradeep Yarlagadda; Özge Can Özcanli; Joseph L. Mundy
This paper introduces a 3-d representation of vehicles as a space of scale and orientation transformations that define the shape of individual vehicle instances. This shape space forms a group, where the similarity of different vehicle observations can be evaluated using a distance measure defined by Lie group theory. A generic class of vehicles (e.g. SUV) is represented by a set of curves on the Lie group manifold, called geodesics. The classification of any given vehicle instance is achieved by finding the class with the smallest Lie distance between the geodesics and the vehicle shape. Vehicle recognition is carried out on 3-d LIDAR point clouds. The performance of the Lie classifier is evaluated against two other approaches and found to provide superior recognition performance, particularly with respect to the ability to generalize from a small number of labeled prototypes.
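For the commutative subgroup of planar scale and rotation, a left-invariant Lie-group distance between two transforms reduces to a Euclidean norm in log coordinates. A minimal sketch, assuming this simplified two-parameter shape space rather than the paper's full transformation space:

```python
import math

def lie_distance(s1, theta1, s2, theta2):
    """Distance between two scale-rotation transforms via the logarithm
    of the relative transform; for this commutative group the log map
    is simply (log scale ratio, wrapped angle difference)."""
    dlog_s = math.log(s2 / s1)
    # wrap angle difference into [-pi, pi)
    dtheta = (theta2 - theta1 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dlog_s, dtheta)
```

Classification then amounts to finding the class whose geodesics lie at the smallest such distance from the observed vehicle's shape parameters.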
International Conference on Pattern Recognition | 2010
Özge Can Özcanli; Joseph L. Mundy
Over the last several years, a new probabilistic representation for 3-d volumetric modeling has been developed. The main purpose of the model is to detect deviations from the normal appearance and geometry of the scene, i.e. change detection. In this paper, the model is utilized to characterize changes in the scene as vehicles. In the training stage, a compositional part hierarchy is learned to represent the geometry of Gaussian intensity extrema primitives exhibited by vehicles. In the test stage, the learned compositional model produces vehicle detections. Vehicle recognition performance is measured on low-resolution satellite imagery and detection accuracy is significantly improved over the initial change map given by the 3-d volumetric model. A PCA-based Bayesian recognition algorithm is implemented for comparison, which exhibits worse performance than the proposed method.
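The PCA-based baseline can be sketched as a subspace fit plus a reconstruction-error score; this is a generic stand-in assuming vectorized image chips as input, not the paper's exact Bayesian formulation:

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-dimensional PCA subspace to training chips (rows of X)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                 # mean and top-k principal axes

def pca_residual(x, mean, basis):
    """Reconstruction error of a chip in the learned subspace; a low
    residual suggests the chip resembles the training class."""
    d = x - mean
    return float(np.linalg.norm(d - basis.T @ (basis @ d)))
```

A detection is accepted or rejected by thresholding the residual, which is the kind of appearance-only test the compositional model is compared against.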
Computer Vision and Pattern Recognition | 2014
Özge Can Özcanli; Yi Dong; Joseph L. Mundy; Helen Webb; Riad I. Hammoud; Victor Tom
Modern satellites tag their images with geo-location information using GPS and star-tracking systems. Depending on the quality of the geo-positioning equipment, geo-location errors may range from a few meters to tens of meters on the ground. At the current state of the art, there is no established method to automatically correct these errors, limiting the large-scale utilization of satellite imagery. In this paper, an automatic geo-location correction framework that corrects multiple satellite images simultaneously is presented. As a result of the proposed correction process, all the images are effectively registered to the same absolute geodetic coordinate frame. The usability and the quality of the correction framework are shown through probabilistic 3-D surface model reconstruction. The models given by the original satellite geo-positioning meta-data and by the corrected meta-data are compared, and the quality difference is measured through an entropy-based metric applied to the high-resolution height maps given by the 3-D models. Measuring the absolute accuracy of the framework is harder due to the lack of publicly available high-precision ground surveys. However, the geo-locations of images of exemplar satellites from different parts of the globe are corrected, and the road networks given by OpenStreetMap are projected onto the images using the original and corrected meta-data to show the improved quality of alignment.
Computer Vision and Pattern Recognition | 2015
Özge Can Özcanli; Yi Dong; Joseph L. Mundy; Helen Webb; Riad I. Hammoud; Victor Tom
High-resolution and accurate Digital Elevation Model (DEM) generation from satellite imagery is a challenging problem. In this work, a stereo 3-D reconstruction framework is outlined that is applicable to nonstereoscopic satellite image pairs that may be captured by different satellites. The orthographic height maps given by stereo reconstruction are compared to height maps given by a multiview approach based on Probabilistic Volumetric Representation (PVR). Height-map quality is measured against manually prepared ground-truth height maps at three sites from different parts of the world with urban, semi-urban, and rural features. The results, along with the strengths and weaknesses of the two techniques, are summarized.
SPIE Newsroom | 2012
Özge Can Özcanli; Daniel E. Crispell; Joseph L. Mundy; Vishal Jain; Tom Pollard
The rapid growth of overhead imagery collected from aerial and space platforms has sparked a revolution in mapping and surveillance applications. Geographical information systems (GISs) such as maps and road networks can be updated rapidly, and dynamic events such as natural disasters and military operations can be monitored as they evolve. But the discrepancy between 2D imagery and the 3D nature of the observed phenomena creates an algorithmic challenge that existing technology has yet to adequately address. Substantial manual effort to analyze hours of video footage from various imaging platforms is currently required on a daily basis. Furthermore, the full potential of the imagery, for example, in detecting subtle or rare changes in large volumes of visual data, cannot be realized due to inherent human limitations. Only automated processing of overhead imagery can allow effective exploitation of existing collection resources. Automated overhead image processing requires basic algorithmic capabilities such as image registration, change detection, tracking, labeling, and efficient storage. Existing technology provides these functions, but only in the 2D domain of the image. 2D image processing alone is not enough to accurately assess the relative movement of scene elements in the presence of occlusion and 3D relief. A 3D scene representation, on the other hand, provides a complete representation of the scene from any viewpoint. Recent research has focused on reconstructing the 3D surface geometry of a scene from aerial or ground-level imagery [1-3]. The majority of these approaches rely on established image-processing techniques that extract 2D image features that are matched across images, assuming that they originate from a common 3D surface element [4, 5]. The location of the surface element is then triangulated using the centers and orientations of the cameras, as illustrated in Figure 1.
The discrete representation of the scene volume as a regular grid of minuscule volume elements (voxels). The known camera geometry determines the correspondence of image pixels and 3D voxels through projection.
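The triangulation step described above can be sketched with the standard linear (DLT) construction, assuming known 3x4 projection matrices and an exact feature match; this illustrates the generic technique, not code from the cited systems:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two pinhole views.
    P1, P2 are 3x4 projection matrices; x1, x2 are matched image points.
    Each match contributes two linear constraints on the homogeneous
    3D point; the solution is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # smallest singular vector
    return X[:3] / X[3]             # dehomogenize
```

With noisy matches the SVD solution minimizes the algebraic error, which is why practical systems follow it with a nonlinear refinement.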
Proceedings of SPIE | 2010
Gil Abramovich; Glen William Brooksby; Stephen F. Bush; Swaminathan Manickam; Özge Can Özcanli; Benjamin D. Garrett
We present four new change detection methods that create an automated change map from a probability map. In this case, the probability map was derived from a 3D model. The primary application of interest is aerial photographic applications, where the appearance, disappearance, or change in position of small objects of a selectable class (e.g., cars) must be detected at a high success rate in spite of variations in magnification, lighting, and background across the image. The methods rely on an earlier derivation of a probability map. We describe the theory of the four methods, namely Bernoulli variables, Markov random fields, connected change, and relaxation-based segmentation, and evaluate and compare their performance experimentally on a set of probability maps derived from aerial photographs.
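The simplest of the four methods, treating each pixel of the probability map as an independent Bernoulli variable, can be sketched as a posterior-odds threshold; the prior and the epsilon guard are illustrative assumptions, and the MRF, connected-change, and relaxation variants add spatial coupling on top of this per-pixel decision:

```python
import numpy as np

def bernoulli_change_map(prob_map, prior=0.5):
    """Per-pixel change decision: flag change where the posterior odds
    of 'change' exceed those of 'no change'. With a uniform prior this
    reduces to thresholding the probability map at 0.5."""
    p = np.asarray(prob_map, dtype=float)
    odds = (p * prior) / ((1.0 - p) * (1.0 - prior) + 1e-12)
    return odds > 1.0
```

The resulting binary map is the starting point for the spatially regularized methods, which suppress isolated false alarms that a purely independent model cannot.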