Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oncel Tuzel is active.

Publication


Featured research published by Oncel Tuzel.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Pedestrian Detection via Classification on Riemannian Manifolds

Oncel Tuzel; Fatih Porikli; Peter Meer

We present a new algorithm to detect pedestrians in still images using covariance matrices as object descriptors. Since the descriptors do not form a vector space, well-known machine learning techniques are not well suited to learning the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on the INRIA and DaimlerChrysler pedestrian datasets, where superior detection rates over previous approaches are observed.
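
To make the ingredients above concrete, here is a minimal sketch, not the authors' code: it builds a covariance descriptor over a window from an assumed five-dimensional feature set (pixel coordinates, gradient magnitudes along x and y, and the overall gradient magnitude) and compares two descriptors with the affine-invariant Riemannian distance, which can be computed from the generalized eigenvalues of the pair.

```python
# Minimal sketch (not the authors' implementation): a covariance region descriptor
# and the affine-invariant Riemannian distance between two such descriptors.
# The feature choice and window handling are illustrative assumptions.
import numpy as np
from scipy.linalg import eigvalsh

def region_covariance(gray, y0, y1, x0, x1):
    """Covariance of per-pixel features [x, y, |Ix|, |Iy|, gradient magnitude] in a window."""
    Iy, Ix = np.gradient(gray.astype(np.float64))
    ys, xs = np.mgrid[y0:y1, x0:x1]
    F = np.stack([xs.ravel(), ys.ravel(),
                  np.abs(Ix[y0:y1, x0:x1]).ravel(),
                  np.abs(Iy[y0:y1, x0:x1]).ravel(),
                  np.hypot(Ix[y0:y1, x0:x1], Iy[y0:y1, x0:x1]).ravel()], axis=1)
    return np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])  # regularize to stay nonsingular

def riemannian_distance(X, Y):
    """Affine-invariant distance: sqrt(sum of squared log generalized eigenvalues of (X, Y))."""
    lam = eigvalsh(X, Y)   # generalized eigenvalues; all positive for SPD inputs
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Example: compare two windows of a random "image"
img = np.random.rand(128, 64)
C1 = region_covariance(img, 0, 64, 0, 32)
C2 = region_covariance(img, 64, 128, 32, 64)
print(riemannian_distance(C1, C2))
```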


Computer Vision and Pattern Recognition | 2006

Covariance Tracking using Model Update Based on Lie Algebra

Fatih Porikli; Oncel Tuzel; Peter Meer

We propose a simple and elegant algorithm to track nonrigid objects using a covariance-based object description and a Lie algebra-based update mechanism. We represent an object window as the covariance matrix of features, and therefore capture the spatial and statistical properties, as well as their correlations, within the same representation. The covariance matrix enables efficient fusion of different types of features and modalities, and its dimensionality is small. We incorporate a model update algorithm using the Lie group structure of the positive definite matrices. The update mechanism effectively adapts to ongoing object deformations and appearance changes. The covariance tracking method does not make any assumption about the measurement noise or the motion of the tracked objects, and provides a globally optimal solution. We show that it is capable of accurately detecting nonrigid, moving objects in non-stationary camera sequences while achieving a promising detection rate of 97.4 percent.
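
One common way to realize a Lie-group model update like the one described above is an intrinsic (Karcher) mean computed with matrix log and exp maps. The sketch below illustrates that recipe under assumed iteration and convergence settings; it is not the published implementation.

```python
# A minimal sketch, assuming an intrinsic-mean style update of a model covariance
# with recent observations (iteration count and tolerance are illustrative choices).
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def intrinsic_mean(mats, iters=20, tol=1e-8):
    """Karcher mean of SPD matrices under the affine-invariant metric."""
    M = mats[0].copy()
    for _ in range(iters):
        Ms = sqrtm(M)
        Minv_s = inv(Ms)
        # average of the matrices mapped into the tangent space at M
        T = sum(logm(Minv_s @ X @ Minv_s) for X in mats) / len(mats)
        M = Ms @ expm(T) @ Ms                     # map back to the manifold
        if np.linalg.norm(T, 'fro') < tol:
            break
    return np.real(M)

# Example: blend a model covariance with two new observations
rng = np.random.default_rng(0)
def rand_spd(d):
    A = rng.normal(size=(d, d)); return A @ A.T + d * np.eye(d)
model, obs1, obs2 = rand_spd(5), rand_spd(5), rand_spd(5)
print(intrinsic_mean([model, obs1, obs2]))
```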


Computer Vision and Pattern Recognition | 2007

Human Detection via Classification on Riemannian Manifolds

Oncel Tuzel; Fatih Porikli; Peter Meer

We present a new algorithm to detect humans in still images using covariance matrices as object descriptors. Since these descriptors do not lie in a vector space, well-known machine learning techniques are not adequate for learning the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. We present a novel approach for classifying points lying on a Riemannian manifold by incorporating a priori information about the geometry of the space. The algorithm is tested on the INRIA human database, where superior detection rates over previous approaches are observed.
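
A plausible concrete step behind "incorporating the geometry of the space" is mapping each descriptor into the tangent space at a reference point and flattening it to a vector, so that ordinary Euclidean classifiers apply. The sketch below shows that mapping under assumptions of our own (reference point, weighting); it is not the paper's exact training procedure.

```python
# Minimal sketch, assuming a log-map + vectorization step so Euclidean classifiers
# can be trained on SPD descriptors (not the paper's boosting pipeline).
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def log_map_vector(X, ref):
    """Coordinates of X in the tangent space at the SPD reference point `ref`."""
    R_inv_sqrt = inv(sqrtm(ref))
    T = np.real(logm(R_inv_sqrt @ X @ R_inv_sqrt))    # symmetric tangent vector
    iu = np.triu_indices(T.shape[0])
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))   # weighting keeps the Frobenius norm
    return T[iu] * w

# Example: two descriptors become fixed-length feature vectors
rng = np.random.default_rng(1)
def rand_spd(d):
    A = rng.normal(size=(d, d)); return A @ A.T + d * np.eye(d)
ref = rand_spd(8)
x1 = log_map_vector(rand_spd(8), ref)
x2 = log_map_vector(rand_spd(8), ref)
print(x1.shape, np.linalg.norm(x1 - x2))
```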


European Conference on Computer Vision | 2006

Region covariance: a fast descriptor for detection and classification

Oncel Tuzel; Fatih Porikli; Peter Meer

We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d features, e.g., the three-dimensional color vector, the norm of the first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computing covariances based on integral images. The idea presented here is more general than image sums or histograms, which have been used previously: with a series of integral images, the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie in a Euclidean space, so we use a distance metric involving generalized eigenvalues, which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest-neighbor search under this distance metric and is performed extremely rapidly using the integral images. The performance of the covariance features is shown to be superior to other methods, and large rotations and illumination changes are also absorbed by the covariance matrix.
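
The integral-image computation can be sketched as follows. This is illustrative code rather than the original implementation: it keeps cumulative-sum arrays of the features and of their pairwise products, so the covariance of any rectangle falls out of a few additions and subtractions.

```python
# A rough sketch of the integral-image trick: precompute cumulative sums of the
# features and of their pairwise products, then read off any rectangle's covariance.
import numpy as np

def build_integrals(F):
    """F: H x W x d per-pixel feature array -> cumulative sums of F and of F_i * F_j."""
    P = F.cumsum(0).cumsum(1)                                    # H x W x d
    Q = (F[..., :, None] * F[..., None, :]).cumsum(0).cumsum(1)  # H x W x d x d
    return P, Q

def _rect_sum(I, y0, y1, x0, x1):
    """Sum over the rectangle [y0:y1, x0:x1) from a cumulative-sum array."""
    s = I[y1 - 1, x1 - 1].copy()
    if y0 > 0: s -= I[y0 - 1, x1 - 1]
    if x0 > 0: s -= I[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0: s += I[y0 - 1, x0 - 1]
    return s

def region_covariance(P, Q, y0, y1, x0, x1):
    n = (y1 - y0) * (x1 - x0)
    s = _rect_sum(P, y0, y1, x0, x1)     # d-vector of feature sums
    S = _rect_sum(Q, y0, y1, x0, x1)     # d x d matrix of product sums
    return (S - np.outer(s, s) / n) / (n - 1)

# Example with d = 3 random feature channels: matches a direct covariance computation
F = np.random.rand(100, 120, 3)
P, Q = build_integrals(F)
C = region_covariance(P, Q, 10, 60, 20, 90)
print(np.allclose(C, np.cov(F[10:60, 20:90].reshape(-1, 3), rowvar=False)))
```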


Computer Vision and Pattern Recognition | 2011

Entropy rate superpixel segmentation

Ming-Yu Liu; Oncel Tuzel; Srikumar Ramalingam; Rama Chellappa

We propose a new objective function for superpixel segmentation. This objective function consists of two components: the entropy rate of a random walk on a graph and a balancing term. The entropy rate favors the formation of compact and homogeneous clusters, while the balancing function encourages clusters of similar sizes. We present a novel graph construction for images and show that this construction induces a matroid, a combinatorial structure that generalizes the concept of linear independence in vector spaces. The segmentation is then given by the graph topology that maximizes the objective function under the matroid constraint. By exploiting the submodular and monotonic properties of the objective function, we develop an efficient greedy algorithm. Furthermore, we prove an approximation bound of 1/2 for the optimality of the solution. Extensive experiments on the Berkeley segmentation benchmark show that the proposed algorithm outperforms the state of the art in all the standard evaluation metrics.
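
A schematic of the greedy scheme, with an assumed structure rather than the published code: edges are added in order of marginal objective gain, the matroid constraint is enforced with a union-find so that no cycles are created, and the process stops once the desired number of clusters remains.

```python
# Schematic greedy selection under a graphic-matroid constraint (assumed structure;
# the real objective's marginal gains are passed in via the `gain` callback).
import heapq

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def greedy_segmentation(n_nodes, edges, gain, n_clusters):
    """edges: list of (u, v); gain(u, v): marginal objective gain of adding that edge."""
    uf, components = UnionFind(n_nodes), n_nodes
    heap = [(-gain(u, v), u, v) for u, v in edges]   # max-heap via negated gains
    heapq.heapify(heap)
    chosen = []
    while heap and components > n_clusters:
        _, u, v = heapq.heappop(heap)
        if uf.union(u, v):           # matroid constraint: no cycles allowed
            chosen.append((u, v))
            components -= 1
    return chosen, uf

# Toy example: 4-node path graph, uniform gains -> merge until 2 clusters remain
chosen, uf = greedy_segmentation(4, [(0, 1), (1, 2), (2, 3)], lambda u, v: 1.0, n_clusters=2)
print(chosen)
```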


Computer Vision and Pattern Recognition | 2017

Learning from Simulated and Unsupervised Images through Adversarial Training

Ashish Shrivastava; Tomas Pfister; Oncel Tuzel; Joshua Susskind; Wenda Wang; Russell Webb

With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model that improves the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
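
Of the three modifications, the refined-image history is the easiest to illustrate in isolation. The sketch below is an assumed realization (buffer size, eviction policy, and the half-and-half batch mixing are illustrative choices, not the authors' code): the discriminator sees a mix of freshly refined images and images refined earlier in training.

```python
# A small sketch of a "history of refined images" buffer, under assumed conventions:
# half of each discriminator batch is current refiner output, half comes from a buffer
# of previously refined images.
import random
import numpy as np

class ImageHistoryBuffer:
    def __init__(self, capacity):
        self.capacity, self.buffer = capacity, []

    def sample_and_update(self, refined_batch):
        """Return a batch mixing current refined images with buffered ones."""
        b = len(refined_batch)
        half = b // 2
        mixed = list(refined_batch[:b - half])
        if len(self.buffer) >= half:
            mixed += random.sample(self.buffer, half)
        else:
            mixed += list(refined_batch[b - half:])   # buffer not warm yet
        # push the current batch into the buffer, evicting random old entries
        for img in refined_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[random.randrange(self.capacity)] = img
        return np.stack(mixed)

# Example with dummy 35x55 "refined" images
buf = ImageHistoryBuffer(capacity=100)
for _ in range(3):
    batch = np.random.rand(16, 35, 55)
    disc_batch = buf.sample_and_update(batch)
print(disc_batch.shape)   # (16, 35, 55)
```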


Computer Vision and Pattern Recognition | 2010

Fast directional chamfer matching

Ming-Yu Liu; Oncel Tuzel; Ashok Veeraraghavan; Rama Chellappa

We study the object localization problem in images given a single hand-drawn example or a gallery of shapes as the object model. Although many shape matching algorithms have been proposed for this problem over the decades, chamfer matching remains the preferred method when speed and robustness are considered. In this paper, we significantly improve the accuracy of chamfer matching while reducing the computational time from linear to sublinear (shown empirically). Specifically, we incorporate edge orientation information into the matching algorithm such that the resulting cost function is piecewise smooth and the cost variation is tightly bounded. Moreover, we present a sublinear-time algorithm for exact computation of the directional chamfer matching score using techniques from 3D distance transforms and directional integral images. In addition, the smooth cost function allows us to bound the cost distribution of large neighborhoods and skip the bad hypotheses within them. Experiments show that the proposed approach improves the speed of the original chamfer matching by up to a factor of 45, and it is much faster than many state-of-the-art techniques while the accuracy is comparable.
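
For reference, the directional chamfer cost itself can be written down directly. The brute-force version below only states the cost that the paper then computes sublinearly with a 3D distance transform and directional integral images; the orientation weight is an illustrative parameter.

```python
# Brute-force directional chamfer cost (reference only; the paper's contribution is
# computing this sublinearly). `lam` weights orientation mismatch and is an assumption.
import numpy as np

def directional_chamfer_cost(template, query, lam=0.5):
    """template, query: arrays of edge points (x, y, orientation in radians)."""
    t_xy, t_th = template[:, :2], template[:, 2]
    q_xy, q_th = query[:, :2], query[:, 2]
    # pairwise location distances
    d_loc = np.linalg.norm(t_xy[:, None, :] - q_xy[None, :, :], axis=2)
    # orientation distances, modulo pi (edges are undirected)
    d_th = np.abs(t_th[:, None] - q_th[None, :]) % np.pi
    d_th = np.minimum(d_th, np.pi - d_th)
    # for each template point, cost of its best match in the query edge map
    return np.mean(np.min(d_loc + lam * d_th, axis=1))

# Toy example: a template matched against a slightly shifted copy of itself
rng = np.random.default_rng(0)
tpl = np.column_stack([rng.uniform(0, 50, 200), rng.uniform(0, 50, 200),
                       rng.uniform(0, np.pi, 200)])
qry = tpl + np.array([1.0, 0.5, 0.0])
print(directional_chamfer_cost(tpl, qry))
```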


Computer Vision and Pattern Recognition | 2013

Joint Geodesic Upsampling of Depth Images

Ming-Yu Liu; Oncel Tuzel; Yuichi Taguchi

We propose an algorithm that uses geodesic distances to upsample a low-resolution depth image using a registered high-resolution color image. Specifically, it computes the depth of each pixel in the high-resolution image from geodesic paths to the pixels whose depths are known from the low-resolution image. Although this is closely related to the all-pairs shortest path problem, which has O(n^2 log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size, achieving real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
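
A naive version of the geodesic idea, under assumptions of our own rather than the paper's linear-time approximation: run a multi-source Dijkstra from the low-resolution seed pixels with edge weights combining a spatial step and the color difference in the guide image, and let each high-resolution pixel inherit the depth of its geodesically nearest seed (the paper instead blends several nearby seeds).

```python
# Naive geodesic depth propagation sketch: multi-source Dijkstra over the pixel grid,
# each pixel copying the depth of its geodesically nearest seed. `alpha` is an
# illustrative color-sensitivity parameter.
import heapq
import numpy as np

def geodesic_upsample(color, seeds, alpha=10.0):
    """color: H x W x 3 guide image; seeds: dict {(y, x): depth}; returns H x W depth."""
    H, W, _ = color.shape
    dist = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    heap = []
    for (y, x), d in seeds.items():
        dist[y, x], depth[y, x] = 0.0, d
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        g, y, x = heapq.heappop(heap)
        if g > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                # edge weight: spatial step plus color difference in the guide image
                w = 1.0 + alpha * np.linalg.norm(color[ny, nx] - color[y, x])
                if g + w < dist[ny, nx]:
                    dist[ny, nx] = g + w
                    depth[ny, nx] = depth[y, x]     # inherit nearest seed's depth
                    heapq.heappush(heap, (g + w, ny, nx))
    return depth

# Toy example: a 2-region color image with one depth seed per region
img = np.zeros((40, 40, 3)); img[:, 20:] = 1.0
up = geodesic_upsample(img, {(20, 5): 1.0, (20, 35): 3.0})
print(up[20, 0], up[20, 39])   # 1.0 on the left region, 3.0 on the right
```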


International Conference on Robotics and Automation | 2012

Voting-based pose estimation for robotic assembly using a 3D sensor

Changhyun Choi; Yuichi Taguchi; Oncel Tuzel; Ming-Yu Liu; Srikumar Ramalingam

We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are rapidly replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, i.e., points on the object surface with normals, enables fast and robust pose estimation in a voting framework. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects, which are mostly planar. Just as edges play a key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly, thereby providing higher accuracy for a wide class of industrial parts and enabling faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor.
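
The oriented-surface-point primitive that this line of work builds on can be sketched as a point pair feature plus a quantization step used for voting. The snippet below is illustrative (the quantization steps are assumptions), and the boundary-point and line-segment primitives introduced in the paper are not reproduced here.

```python
# Sketch of the oriented point pair feature and its discretization for table-based voting
# (quantization steps are illustrative assumptions).
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """F = (|d|, angle(n1, d), angle(n2, d), angle(n1, n2)) for two oriented points."""
    d = p2 - p1
    def angle(a, b):
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

def quantize(feature, dist_step=0.01, angle_step=np.deg2rad(12)):
    """Discretize a feature so scene pairs can vote against a hash table of model features."""
    steps = np.array([dist_step, angle_step, angle_step, angle_step])
    return tuple(np.floor(feature / steps).astype(int))

# Example: one pair of oriented surface points and its hash key
p1, n1 = np.array([0.00, 0.00, 0.00]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.05, 0.02, 0.01]), np.array([0.0, 1.0, 0.0])
f = point_pair_feature(p1, n1, p2, n2)
print(f, quantize(f))
```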


International Conference on Computer Vision | 2005

Simultaneous multiple 3D motion estimation via mode finding on Lie groups

Oncel Tuzel; Raghav Subbarao; Peter Meer

We propose a new method to estimate multiple rigid motions from noisy 3D point correspondences in the presence of outliers. The method does not require prior specification of the number of motion groups and estimates all the motion parameters simultaneously. We start by generating samples from the rigid motion distribution. The motion parameters are then estimated via mode finding operations on the sampled distribution. Since rigid motions do not lie in a vector space, classical statistical methods cannot be used for mode finding. We develop a mean shift algorithm that estimates modes of the sampled distribution using the Lie group structure of the rigid motions. We also show that the proposed mean shift algorithm is general and can be applied to any distribution having a matrix Lie group structure. Experimental results on synthetic and real image data demonstrate the superior performance of the algorithm.
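
The log/exp recipe the abstract alludes to can be sketched as follows; the bandwidth, kernel, and SE(3) toy data are assumptions of this illustration, not the paper's experimental setup.

```python
# Schematic mean shift on a matrix Lie group: map samples into the tangent space at the
# current estimate, take a kernel-weighted average, and move along it with the exp map.
import numpy as np
from scipy.linalg import logm, expm

def lie_mean_shift(samples, x0, bandwidth=0.5, iters=30, tol=1e-8):
    """samples: list of group elements (square matrices); returns a local mode."""
    X = x0.copy()
    for _ in range(iters):
        tangents, weights = [], []
        for S in samples:
            v = np.real(logm(np.linalg.inv(X) @ S))   # sample in the tangent space at X
            t = np.linalg.norm(v, 'fro') / bandwidth
            w = np.exp(-0.5 * t * t)                  # Gaussian kernel on intrinsic distance
            tangents.append(v); weights.append(w)
        shift = sum(w * v for w, v in zip(weights, tangents)) / (sum(weights) + 1e-12)
        X = X @ expm(shift)                           # move along the weighted mean direction
        if np.linalg.norm(shift, 'fro') < tol:
            break
    return X

# Toy example on SE(3): samples clustered near the identity plus one far outlier
def se3(tx):
    T = np.eye(4); T[0, 3] = tx; return T
samples = [se3(0.01), se3(-0.02), se3(0.03), se3(5.0)]
mode = lie_mean_shift(samples, x0=np.eye(4))
print(np.round(mode, 3))   # stays near the identity; the outlier barely contributes
```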

Collaboration


Dive into Oncel Tuzel's collaborations.

Top Co-Authors

Ming-Yu Liu (Mitsubishi Electric Research Laboratories)
Yuichi Taguchi (Mitsubishi Electric Research Laboratories)
Tim K. Marks (Mitsubishi Electric Research Laboratories)
Amit K. Agrawal (Mitsubishi Electric Research Laboratories)
Changhyun Choi (Georgia Institute of Technology)