Publication


Featured research published by Onur C. Hamsici.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Bayes Optimality in Linear Discriminant Analysis

Onur C. Hamsici; Aleix M. Martinez

We present an algorithm that provides the one-dimensional subspace where the Bayes error is minimized for the C-class problem with homoscedastic Gaussian distributions. Our main result shows that the set of possible one-dimensional spaces v, for which the order of the projected class means is identical, defines a convex region with associated convex Bayes error function g(v). This allows for the minimization of the error function using standard convex optimization algorithms. Our algorithm is then extended to the minimization of the Bayes error in the more general case of heteroscedastic distributions. This is done by means of an appropriate kernel mapping function. This result is further extended to obtain the d-dimensional solution for any given d by iteratively applying our algorithm to the null space of the (d - 1)-dimensional solution. We also show how this result can be used to improve upon the outcomes provided by existing algorithms and derive a low-computational-cost linear approximation. Extensive experimental validations are provided to demonstrate the use of these algorithms in classification, data analysis, and visualization.
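
To make the projection idea concrete, here is a minimal numerical sketch, not the authors' convex formulation: for a toy homoscedastic Gaussian problem it estimates the Bayes error after projecting onto a direction v and minimizes it with a general-purpose optimizer (all data and parameter values are made up for illustration).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy three-class homoscedastic problem: one shared covariance, different means.
means = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, -1.0]])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
priors = np.ones(3) / 3

def projected_bayes_error(v):
    """Bayes error of the three classes after projecting the data onto direction v."""
    v = v / np.linalg.norm(v)
    mu = means @ v                        # projected class means
    s = np.sqrt(v @ Sigma @ v)            # shared standard deviation after projection
    x = np.linspace(mu.min() - 6 * s, mu.max() + 6 * s, 4000)
    dens = np.stack([p * norm.pdf(x, m, s) for p, m in zip(priors, mu)])
    # Bayes error = 1 - integral of the pointwise maximum of the prior-weighted densities.
    return 1.0 - np.sum(dens.max(axis=0)) * (x[1] - x[0])

res = minimize(projected_bayes_error, x0=np.array([1.0, 0.1]), method="Nelder-Mead")
print("direction:", res.x / np.linalg.norm(res.x), "estimated Bayes error:", res.fun)
```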


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Kernel Optimization in Discriminant Analysis

Di You; Onur C. Hamsici; Aleix M. Martinez

Kernel mapping is one of the most widely used approaches for intrinsically deriving nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates.
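
The criterion in the paper specifically targets kernel parameters under which the Bayes classifier becomes linear in the mapped space. As a simpler, hedged stand-in for that criterion, the sketch below approximates kernel discriminant analysis with a KernelPCA + LDA pipeline and selects the RBF bandwidth by cross-validated accuracy; the scikit-learn pipeline is my illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# A nonlinearly separable toy problem.
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)

# Kernel discriminant analysis approximated as an RBF kernel map followed by LDA.
kda = Pipeline([
    ("kmap", KernelPCA(kernel="rbf", n_components=20)),
    ("lda", LinearDiscriminantAnalysis()),
])

# Pick the kernel parameter (gamma) by cross-validated accuracy; the paper instead
# derives an explicit criterion for the parameters that make the problem linear.
search = GridSearchCV(kda, {"kmap__gamma": np.logspace(-2, 2, 9)}, cv=5)
search.fit(X, y)
print("best gamma:", search.best_params_, "cv accuracy:", round(search.best_score_, 3))
```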


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Rotation Invariant Kernels and Their Application to Shape Analysis

Onur C. Hamsici; Aleix M. Martinez

Shape analysis requires invariance under translation, scale, and rotation. Translation and scale invariance can be realized by normalizing shape vectors with respect to their mean and norm. This maps the shape feature vectors onto the surface of a hypersphere. After normalization, the shape vectors can be made rotationally invariant by modeling the resulting data using complex scalar-rotation-invariant distributions defined on the complex hypersphere, e.g., using the complex Bingham distribution. However, the use of these distributions is hampered by the difficulty in estimating their parameters and the nonlinear nature of their formulation. In the present paper, we show how a set of kernel functions, which we refer to as rotation invariant kernels, can be used to convert the original nonlinear problem into a linear one. As their name implies, these kernels are defined to provide the much-needed rotation invariance property, allowing one to bypass the difficulty of working with complex spherical distributions. The resulting approach provides an easy, fast mechanism for 2D and 3D shape analysis. Extensive validation using a variety of shape modeling and classification problems demonstrates the accuracy of the proposed approach.
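
The invariance property itself is easy to verify in code: a 2D rotation multiplies the centered complex landmark vector by a unit-modulus scalar, so a kernel that depends only on the magnitude of the complex inner product cannot see the rotation. The sketch below illustrates one such kernel; the exact kernel family used in the paper may differ.

```python
import numpy as np

def to_preshape(landmarks):
    """2D landmarks (n, 2) -> complex vector with translation and scale removed."""
    z = landmarks[:, 0] + 1j * landmarks[:, 1]
    z = z - z.mean()                  # translation invariance
    return z / np.linalg.norm(z)      # scale invariance (unit complex hypersphere)

def rotation_invariant_kernel(z1, z2):
    """|<z1, z2>|^2 is unchanged when either shape is rotated (z -> e^{i*theta} z)."""
    return np.abs(np.vdot(z1, z2)) ** 2

rng = np.random.default_rng(1)
shape = rng.normal(size=(10, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = shape @ R.T                 # the same shape, rotated in the plane

z_a, z_b = to_preshape(shape), to_preshape(rotated)
print(rotation_invariant_kernel(z_a, z_a))   # 1.0
print(rotation_invariant_kernel(z_a, z_b))   # also 1.0: the kernel ignores the rotation
```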


European Conference on Computer Vision | 2012

Learning spatially-smooth mappings in non-rigid structure from motion

Onur C. Hamsici; Paulo F. U. Gotardo; Aleix M. Martinez

Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function.
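
The mapping idea, each shape coefficient as a smooth function of the observed 2D shape learned in a kernel feature space, can be sketched with off-the-shelf kernel ridge regression. This is only an illustration of that one ingredient: in the actual method the coefficients are recovered jointly with the NRSFM factorization (and with a rotation invariant kernel), not supplied as regression targets.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Stand-in data: 50 observed 2D shapes (15 landmarks each, flattened) and the shape
# coefficients they should map to.  In the real method the coefficients are unknown
# and are recovered jointly with the NRSFM factorization.
shapes_2d = rng.normal(size=(50, 2 * 15))
coeffs = np.tanh(shapes_2d[:, :3]) + 0.05 * rng.normal(size=(50, 3))

# Spatially smooth mapping from 2D shape to shape coefficients in an RBF feature space.
mapping = KernelRidge(kernel="rbf", gamma=0.01, alpha=1e-2)
mapping.fit(shapes_2d, coeffs)

# For a newly observed 2D shape, its coefficients (and hence its 3D shape) follow by
# simply evaluating the learned function.
new_shape = rng.normal(size=(1, 2 * 15))
print(mapping.predict(new_shape))
```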


International Conference on Computer Vision | 2009

Active Appearance Models with Rotation Invariant Kernels

Onur C. Hamsici; Aleix M. Martinez

2D Active Appearance Models (AAM) and 3D Morphable Models (3DMM) are widely used techniques. AAM provide a fast fitting process, but may represent unwanted 3D transformations unless strictly constrained not to do so. The reverse is true for 3DMM. The two approaches also require a pre-alignment of their 2D or 3D shapes before the modeling can be carried out, which may lead to errors. Furthermore, current models are insufficient to represent nonlinear shape and texture variations. In this paper, we derive a new approach that can model nonlinear changes in examples without the need of a pre-alignment step. In addition, we show how the proposed approach carries the above-mentioned advantages of AAM and 3DMM. To achieve this goal, we take advantage of the inherent properties of complex spherical distributions, which provide invariance to translation, scale, and rotation. To reduce the complexity of parameter estimation, we take advantage of a recent result that shows how to estimate spherical distributions using their Euclidean counterparts, e.g., Gaussians. This leads to the definition of Rotation Invariant Kernels (RIK) for modeling nonlinear shape changes. We show the superiority of our algorithm over AAM on several face datasets. We also show how the derived algorithm can be used to model complex 3D facial expression changes observed in American Sign Language (ASL).
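
A hedged sketch of the modeling step as I read it: normalize shapes onto the complex sphere (removing translation and scale), then build a nonlinear shape model in the feature space of a rotation invariant kernel, so no explicit rotational pre-alignment is needed. Kernel PCA on a precomputed RIK Gram matrix is used here as a stand-in for the model in the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def preshape(P):
    """Landmarks (n, 2) -> zero-mean, unit-norm complex vector (translation/scale removed)."""
    z = P[:, 0] + 1j * P[:, 1]
    z -= z.mean()
    return z / np.linalg.norm(z)

def rik(z1, z2):
    """Rotation invariant kernel: depends only on |<z1, z2>|, so in-plane rotation drops out."""
    return np.abs(np.vdot(z1, z2)) ** 2

rng = np.random.default_rng(0)
base = rng.normal(size=(20, 2))
shapes = []
for _ in range(60):
    # Random nonrigid perturbation plus a random in-plane rotation of the base shape.
    t = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    shapes.append((base + 0.1 * rng.normal(size=base.shape)) @ R.T)
Z = [preshape(S) for S in shapes]

# Gram matrix of the rotation invariant kernel; no rotational pre-alignment is performed.
K = np.array([[rik(a, b) for b in Z] for a in Z])
embedding = KernelPCA(n_components=5, kernel="precomputed").fit_transform(K)
print(embedding.shape)   # (60, 5): nonlinear shape-model coordinates
```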


Pattern Recognition | 2008

Who is LB1? Discriminant analysis for the classification of specimens

Aleix M. Martinez; Onur C. Hamsici

Many problems in paleontology reduce to finding those features that best discriminate among a set of classes. A clear example is the classification of new specimens. However, these classifications are generally challenging because the number of discriminant features and the number of samples are limited. This has been the fate of LB1, a new specimen found in the Liang Bua Cave of Flores. Several authors have attributed LB1 to a new species of Homo, H. floresiensis. According to this hypothesis, LB1 is either a member of the early Homo group or a descendant of an ancestor of the Asian H. erectus. Detractors have put forward an alternate hypothesis, which stipulates that LB1 is in fact a microcephalic modern human. In this paper, we show how a new Bayes optimal discriminant feature extraction technique can be employed to help resolve this type of issue. In this process, we present three types of experiments. First, we use this Bayes optimal discriminant technique to develop a model of morphological (shape) evolution from Australopiths to H. sapiens. LB1 fits perfectly in this model as a member of the early Homo group. Second, we build a classifier based on the available cranial and mandibular data appropriately normalized for size and volume. Again, LB1 is most similar to early Homo. Third, we build a brain endocast classifier to show that LB1 is not within the normal range of variation in H. sapiens. These results combined support the hypothesis of a very early shared ancestor for LB1 and H. erectus, and illustrate how discriminant analysis approaches can be successfully used to help classify newly discovered specimens.
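
The classification workflow described in the second experiment, size-normalized measurements plus a discriminant model over reference groups, can be outlined as follows. This is a generic stand-in using plain LDA and entirely synthetic numbers, not the Bayes-optimal technique or the data used in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical measurement vectors for three reference groups; every value is synthetic.
group_means = {
    "Australopith": [4.0, 6.0, 5.0, 3.0, 7.0],
    "early Homo":   [5.0, 5.0, 6.0, 4.0, 6.0],
    "H. sapiens":   [6.0, 4.0, 7.0, 5.0, 5.0],
}
X = np.vstack([rng.normal(loc=m, scale=0.4, size=(30, 5)) for m in group_means.values()])
y = np.repeat(range(3), 30)

# Crude size normalization for illustration: scale each specimen's measurements to unit norm.
X = X / np.linalg.norm(X, axis=1, keepdims=True)
clf = LinearDiscriminantAnalysis().fit(X, y)

# Classify a new specimen (synthetic, drawn near the "early Homo" prototype).
new = rng.normal(loc=group_means["early Homo"], scale=0.4, size=(1, 5))
new = new / np.linalg.norm(new, axis=1, keepdims=True)
for name, p in zip(group_means, clf.predict_proba(new)[0]):
    print(f"{name}: {p:.3f}")
```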


Computer Vision and Pattern Recognition | 2015

Adaptive region pooling for object detection

Yi-Hsuan Tsai; Onur C. Hamsici; Ming-Hsuan Yang

Learning models for object detection is a challenging problem due to the large intra-class variability of objects in appearance, viewpoints, and rigidity. We address this variability by a novel feature pooling method that is adaptive to segmented regions. The proposed detection algorithm automatically discovers a diverse set of exemplars and their distinctive parts which are used to encode the region structure by the proposed feature pooling method. Based on each exemplar and its parts, a regression model is learned with samples selected by a coarse region matching scheme. The proposed algorithm performs favorably on the PASCAL VOC 2007 dataset against existing algorithms. We demonstrate the benefits of our feature pooling method when compared to conventional spatial pyramid pooling features. We also show that object information can be transferred through exemplars for detected objects.
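
The contrast with spatial pyramid pooling can be made concrete: instead of max-pooling features over a fixed grid of cells, features are pooled over cells defined by a segmented region and its parts. The toy sketch below shows both poolings on a random feature map; it is not the paper's detector.

```python
import numpy as np

def spatial_pyramid_pool(feat_map, levels=(1, 2)):
    """Max-pool a (H, W, D) feature map over a fixed grid of cells at each pyramid level."""
    H, W, D = feat_map.shape
    pooled = []
    for L in levels:
        for i in range(L):
            for j in range(L):
                cell = feat_map[i * H // L:(i + 1) * H // L, j * W // L:(j + 1) * W // L]
                pooled.append(cell.reshape(-1, D).max(axis=0))
    return np.concatenate(pooled)

def region_adaptive_pool(feat_map, region_masks):
    """Max-pool only over the pixels inside each segmented region/part mask."""
    D = feat_map.shape[-1]
    pooled = []
    for mask in region_masks:
        vals = feat_map[mask]                                   # (num_pixels_in_mask, D)
        pooled.append(vals.max(axis=0) if len(vals) else np.zeros(D))
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
fmap = rng.random((16, 16, 8))
region = np.zeros((16, 16), dtype=bool); region[4:12, 3:13] = True   # object region
part = np.zeros((16, 16), dtype=bool);   part[4:8, 3:8] = True       # one "part" of it
print(spatial_pyramid_pool(fmap).shape)                 # (40,) = (1 + 4) cells x 8 dims
print(region_adaptive_pool(fmap, [region, part]).shape) # (16,) = 2 masks x 8 dims
```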


Proceedings of SPIE | 2010

Keypoint clustering for robust image matching

Sundeep Vaddadi; Onur C. Hamsici; Yuriy Reznik; John Hyunchul Hong; Chong Lee

A number of popular image matching algorithms, such as the Scale Invariant Feature Transform (SIFT), are based on local image features. They first detect interest points (or keypoints) across an image and then compute descriptors based on patches around them. In this paper, we observe that in textured or feature-rich images, keypoints typically appear in clusters following patterns in the underlying structure. We show that this clustering phenomenon can be used to: 1) enhance the recall and precision performance of the descriptor matching process, and 2) improve the convergence rate of the RANSAC algorithm used in the geometric verification stage.
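
One practical reading of this observation: group keypoints by spatial proximity, then match cluster-to-cluster and seed RANSAC with correspondences drawn from matched clusters. The sketch below covers only the clustering step, using DBSCAN on keypoint locations as an assumed (not paper-specified) clustering method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Stand-in keypoint locations: three tight clusters (feature-rich structures) plus
# scattered background points, mimicking how keypoints bunch up in textured images.
clusters = [rng.normal(loc=c, scale=3.0, size=(40, 2)) for c in ([50, 50], [200, 80], [120, 220])]
background = rng.uniform(0, 256, size=(30, 2))
keypoints = np.vstack(clusters + [background])

labels = DBSCAN(eps=8.0, min_samples=5).fit_predict(keypoints)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("isolated (noise) keypoints:", int(np.sum(labels == -1)))
# Descriptor matching can then proceed cluster-to-cluster, and RANSAC can be seeded
# with correspondences drawn from matched clusters rather than from all keypoints.
```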


International Conference on Computer Vision | 2007

Spherical-Homoscedastic Shapes

Onur C. Hamsici; Aleix M. Martinez

Shape analysis requires invariance under translation, scale, and rotation. Translation and scale invariance can be realized by normalizing shape vectors with respect to their mean and norm. This maps the shape feature vectors onto the surface of a hypersphere. After normalization, the shape vectors can be made rotationally invariant by modeling the resulting data using complex scalar-rotation-invariant distributions defined on the complex hypersphere, e.g., using the complex Bingham distribution. However, the use of these distributions is hampered by the difficulty in estimating their parameters, which is shown to be very costly or impossible in most cases. The purpose of this paper is twofold. First, we show under which conditions the classification results obtained with complex Binghams are identical to those obtained with the easy-to-estimate complex Normal distribution. Second, we derive a kernel function which (intrinsically) maps the data into a space where the above conditions are satisfied and, hence, where the normal model can be successfully used. This results in a simple, low-cost algorithm for representing and classifying shapes. We demonstrate the use of this technique in several experimental results for object and face recognition. Comparisons to other statistical shape representation/classification approaches demonstrate the superiority of the proposed algorithms in classification accuracy and computational time.
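
A toy version of the "easy" route the paper justifies, classifying normalized shape vectors with an ordinary Gaussian model per class instead of the complex Bingham, is sketched below. It deliberately ignores the rotation-alignment subtleties and uses a diagonal Gaussian for simplicity, so it illustrates the workflow rather than the paper's equivalence conditions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def preshape_real(P):
    """Landmarks (n, 2) -> zero-mean, unit-norm complex vector, stacked as real/imag parts."""
    z = P[:, 0] + 1j * P[:, 1]
    z -= z.mean()
    z /= np.linalg.norm(z)
    return np.concatenate([z.real, z.imag])

rng = np.random.default_rng(0)
protoA, protoB = rng.normal(size=(12, 2)), rng.normal(size=(12, 2))
X = np.array([preshape_real(p + 0.05 * rng.normal(size=p.shape))
              for p in [protoA] * 40 + [protoB] * 40])
y = np.repeat([0, 1], 40)

# Easy-to-estimate Gaussian model per class (diagonal here, for simplicity) fit to the
# normalized shape vectors, standing in for the hard-to-estimate complex Bingham.
clf = GaussianNB().fit(X, y)
print("training accuracy:", clf.score(X, y))
```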


Computer Vision and Pattern Recognition | 2007

Sparse Kernels for Bayes Optimal Discriminant Analysis

Onur C. Hamsici; Aleix M. Martinez

Discriminant Analysis (DA) methods have demonstrated their utility in countless applications in computer vision and other areas of research, especially in the C-class classification problem. The most popular approach is linear DA (LDA), which provides the (C - 1)-dimensional Bayes optimal solution, but only when all the class covariance matrices are identical. This is rarely the case in practice. To alleviate this restriction, Kernel LDA (KLDA) has been proposed. In this approach, we first (intrinsically) map the original nonlinear problem to a linear one and then use LDA to find the (C - 1)-dimensional Bayes optimal subspace. However, the use of KLDA is hampered by its computational cost, which grows with the number of training samples available, and by the fact that LDA can only provide a (C - 1)-dimensional solution space. In this paper, we first extend the definition of LDA to provide a subspace of q < C - 1 dimensions where the Bayes error is minimized. Then, to reduce the computational burden of the derived solution, we define a sparse kernel representation, which is able to automatically select the most appropriate sample feature vectors to represent the kernel. We demonstrate the superiority of the proposed approach on several standard datasets. Comparisons are drawn with a large number of known DA algorithms.
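
The computational motivation can be sketched with a Nyström-style approximation: express the kernel over a small subset of training samples and run LDA on the resulting features, so the cost scales with the subset size rather than the full training set. Note the paper selects the representative samples automatically, whereas the sketch below just picks them at random.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

# Sparse kernel representation: the kernel is expressed over only `n_components`
# training samples (chosen at random here; the paper selects them automatically).
sparse_kda = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.05, n_components=50, random_state=0),
    LinearDiscriminantAnalysis(),
)
print("cv accuracy:", round(cross_val_score(sparse_kda, X, y, cv=5).mean(), 3))
```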
