Publication


Featured research published by Ürün Dogan.


Robotics and Biomimetics | 2011

Autonomous driving: A comparison of machine learning techniques by means of the prediction of lane change behavior

Ürün Dogan; Johann Edelbrunner; Ioannis Iossifidis

In the presented work we compare machine learning techniques in the context of lane change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches with differing feature combinations in order to identify the most informative features, the best feature combination, and the most appropriate machine learning technique for the described task. Based on data acquired from human drivers in the traffic simulator NISYS TRS1, we trained a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In the subsequent test drives the system was able to predict lane changes up to 1.5 s in advance.
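
Below is a minimal sketch, not the authors' code, of such a comparison framed as binary classification over per-window driving features. The feature names and data are synthetic stand-ins for the NISYS TRS1 recordings, and only the support vector machine and feed-forward network parts of the comparison are shown (the recurrent network is omitted).

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-window features: lateral offset, lateral velocity,
# gap to the lead vehicle, relative speed to the lead vehicle.
X = rng.normal(size=(n, 4))
# Synthetic stand-in target: 1 if a lane change starts within 1.5 s.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

models = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "feed-forward NN": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")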


Statistics and Computing | 2016

Extensions of stability selection using subsamples of observations and covariates

Andre Beinrucker; Ürün Dogan; Gilles Blanchard

We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Bühlmann (J R Stat Soc 72:417–473, 2010). We propose to apply a base selection method repeatedly to random subsamples of observations and subsets of covariates under scrutiny, and to select covariates based on their selection frequency. We analyse the effects and benefits of these extensions. Our analysis generalises the theoretical results of Meinshausen and Bühlmann from the case of half-samples to subsamples of arbitrary size. We study, in a theoretical manner, the effect of taking random covariate subsets using a simplified score model. Finally, we validate these extensions in numerical experiments on both synthetic and real datasets, and compare the results in detail to those of the original stability selection method.
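
The following is a minimal sketch of this extended scheme under stated assumptions, not the authors' implementation: a baseline selector (here the Lasso; the choice of baseline, alpha, and threshold are illustrative) is applied to random subsamples of observations and random subsets of covariates, and covariates are ranked by selection frequency.

import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, n_iter=100, obs_frac=0.5, cov_frac=0.5,
                        alpha=0.1, threshold=0.6, seed=0):
    """Rank covariates by how often the baseline selects them."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    selected = np.zeros(p)   # times each covariate was selected
    offered = np.zeros(p)    # times each covariate entered a subset
    for _ in range(n_iter):
        rows = rng.choice(n, size=int(obs_frac * n), replace=False)
        cols = rng.choice(p, size=int(cov_frac * p), replace=False)
        lasso = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows])
        selected[cols[lasso.coef_ != 0]] += 1
        offered[cols] += 1
    freq = selected / np.maximum(offered, 1)
    return np.flatnonzero(freq >= threshold), freq

# Toy data: 3 informative covariates out of 50.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)
stable, freq = stability_selection(X, y)
print("stable covariates:", stable)

Normalising the selection count by the number of times a covariate entered a subset is one concrete way to keep frequencies comparable under covariate subsampling; the simplified score model mentioned in the abstract analyses effects of this kind theoretically.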


Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium | 2012

A Simple Extension of Stability Feature Selection

Andre Beinrucker; Ürün Dogan; Gilles Blanchard

Stability selection [9] is a general principle for performing feature selection. It functions as a meta-layer on top of a “baseline” feature selection method: the baseline is applied repeatedly to random half-size subsamples of the data, and the features whose selection frequency exceeds a fixed threshold are output. In the present work, we suggest and study a simple extension of the original stability selection: the baseline method is applied to random submatrices of the data matrix X of a given size, and the features with the largest selection frequencies are returned. We analyze, from a theoretical point of view, the effect of this subsampling on the selected variables, in particular the influence of the data subsample size. We report experimental results on high-dimensional artificial and real data and identify the settings in which stability selection is to be recommended.


PLOS ONE | 2017

Distributed optimization of multi-class SVMs

Maximilian Alber; Julian Zimmert; Ürün Dogan; Marius Kloft

Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be said, however, for the so-called all-in-one SVMs, which require solving a quadratic program whose size grows quadratically with the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee et al. and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs at an unprecedented scale. The results indicate superior accuracy on text classification data.
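
As a point of reference, here is a minimal sketch of the one-vs.-rest baseline the abstract describes as trivially class-parallel; it is not the paper's distributed all-in-one solver, and the dataset and solver settings are illustrative assumptions.

import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
classes = np.unique(y)

def fit_binary(c):
    # Each worker solves an independent binary problem: class c vs. rest.
    return LinearSVC().fit(X, (y == c).astype(int))

# The binary subproblems share no state, so they parallelize trivially.
models = Parallel(n_jobs=-1)(delayed(fit_binary)(c) for c in classes)

# Predict via the highest per-class decision value.
scores = np.column_stack([m.decision_function(X) for m in models])
pred = classes[scores.argmax(axis=1)]
print("training accuracy:", (pred == y).mean())

The all-in-one formulations couple the classes inside a single quadratic program, which is why distributing their computation evenly over classes, as the paper does, is the non-trivial step.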


European Conference on Machine Learning | 2012

A note on extending generalization bounds for binary large-margin classifiers to multiple classes

Ürün Dogan; Tobias Glasmachers; Christian Igel

A generic way to extend generalization bounds for binary large-margin classifiers to large-margin multi-category classifiers is presented. This simple procedure leads to surprisingly tight bounds showing the same $\tilde{O}(d^2)$ scaling in the number $d$ of classes as state-of-the-art results. The approach is exemplified by extending a textbook bound based on Rademacher complexity, which leads to a multi-class bound depending on the sum of the margin violations of the classifier.
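
For context, a standard textbook margin bound of the kind such an extension starts from reads as follows (the statement and notation below are a common formulation assumed here, not quoted from the paper): for a hypothesis class $H$, a margin parameter $\rho > 0$, and an i.i.d. sample of size $m$, with probability at least $1 - \delta$ every $h \in H$ satisfies

\[
  R(h) \;\le\; \widehat{R}_\rho(h) \;+\; \frac{2}{\rho}\,\mathfrak{R}_m(H) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}},
\]

where $R(h)$ is the risk, $\widehat{R}_\rho(h)$ the empirical $\rho$-margin loss, and $\mathfrak{R}_m(H)$ the Rademacher complexity of $H$. The note's contribution is a generic way to carry such binary statements over to $d$-class large-margin classifiers.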


Neural Information Processing Systems | 2015

Multi-class SVMs: from tighter data-dependent generalization bounds to novel algorithms

Yunwen Lei; Ürün Dogan; Alexander Binder; Marius Kloft


Asian Conference on Machine Learning | 2013

Accelerated Coordinate Descent with Adaptive Coordinate Frequencies

Tobias Glasmachers; Ürün Dogan


Archive | 2016

Estimating Bandwidth in a Network

Christoffer Asgaard Rodbro; Philip A. Chou; Ürün Dogan


Neural Information Processing Systems | 2015

Theory and Algorithms for the Localized Setting of Learning Kernels

Yunwen Lei; Alexander Binder; Ürün Dogan; Marius Kloft


Asian Conference on Machine Learning | 2016

Localized Multiple Kernel Learning—A Convex Approach

Yunwen Lei; Alexander Binder; Ürün Dogan; Marius Kloft

Collaboration


Dive into Ürün Dogan's collaborations.

Top Co-Authors

Marius Kloft

Humboldt University of Berlin


Yunwen Lei

City University of Hong Kong


Gyemin Lee

University of Michigan