Ayse Erkan
New York University
Publications
Featured research published by Ayse Erkan.
Intelligent Robots and Systems | 2008
Raia Hadsell; Ayse Erkan; Pierre Sermanet; Marco Scoffier; Urs Muller; Yann LeCun
We present a learning-based approach for long-range vision that is able to accurately classify complex terrain at distances up to the horizon, thus allowing high-level strategic planning. A deep belief network is trained with unsupervised data and a reconstruction criterion to extract features from an input image, and the features are used to train a real-time classifier to predict traversability. The online supervision is given by a stereo module that provides robust labels for nearby areas up to 12 meters away. The approach was developed and tested on the LAGR mobile robot.
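The self-supervised loop at the heart of this pipeline lends itself to a compact illustration. The sketch below is a toy under stated assumptions, not the authors' code: the feature extractor is a stand-in for the deep belief network and the stereo supervision is simulated, but it shows near-range labels training an online classifier that is then queried on far-range patches.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(patch):
    """Stand-in for the learned feature extractor (the paper uses a deep belief network)."""
    return patch.reshape(-1)

class OnlineTraversabilityClassifier:
    """Logistic regression trained online by stochastic gradient descent."""
    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def update(self, x, label):  # label: 1 = traversable, 0 = obstacle
        err = self.predict_proba(x) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Simulated stream: near-range patches arrive with stereo-derived labels,
# far-range patches are unlabeled and only get classified.
dim = 16 * 16
clf = OnlineTraversabilityClassifier(dim)
for frame in range(100):
    near_patch = rng.normal(size=(16, 16))
    stereo_label = int(near_patch.mean() > 0)  # hypothetical stereo supervision
    clf.update(extract_features(near_patch), stereo_label)

far_patch = rng.normal(size=(16, 16))
print("far-range traversability:", clf.predict_proba(extract_features(far_patch)))
```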
International Conference on Machine Learning | 2008
Michael Karlen; Jason Weston; Ayse Erkan; Ronan Collobert
We show how the regularizer of Transductive Support Vector Machines (TSVM) can be trained by stochastic gradient descent for linear models and multi-layer architectures. The resulting methods can be trained online, have vastly superior training and testing speed to existing TSVM algorithms, can encode prior knowledge in the network architecture, and obtain competitive error rates. We then go on to propose a natural generalization of the TSVM loss function that takes into account neighborhood and manifold information directly, unifying the two-stage Low Density Separation method into a single criterion, and leading to state-of-the-art results.
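A minimal sketch of the transductive idea, under the standard assumptions of a linear model, a hinge loss on labeled points, and the symmetric hinge max(0, 1 - |f(x)|) on unlabeled points; the multi-layer and manifold-regularized variants described in the paper are not shown, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs; only a handful of points carry labels.
X = np.vstack([rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])
labeled = rng.choice(len(X), size=6, replace=False)

w, b = np.zeros(2), 0.0
lr, lam, gamma = 0.05, 1e-3, 0.2  # step size, L2 weight, unlabeled-term weight

for epoch in range(50):
    for i in rng.permutation(len(X)):
        f = X[i] @ w + b
        if i in labeled:
            if y[i] * f < 1:  # hinge loss on labeled examples
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
        else:
            if abs(f) < 1:  # transductive term pushes unlabeled points away from the boundary
                s = np.sign(f) if f != 0 else 1.0
                w += lr * (gamma * s * X[i] - lam * w)
                b += lr * gamma * s

print(f"transductive accuracy: {np.mean(np.sign(X @ w + b) == y):.2f}")
```

Running the unlabeled update in the same stochastic loop as the labeled one is what makes the method online and fast compared with batch TSVM solvers.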
Robotics: Science and Systems | 2007
Raia Hadsell; Pierre Sermanet; Jan Ben; Ayse Erkan; Jeff Han; Urs Muller; Yann LeCun
We present a solution to the problem of long-range obstacle/path recognition in autonomous robots. The system uses sparse traversability information from a stereo module to train a classifier online. The trained classifier can then predict the traversability of the entire scene. A distance-normalized image pyramid makes it possible to efficiently train on each frame seen by the robot, using large windows that contain contextual information as well as shape, color, and texture. Traversability labels are initially obtained for each target using a stereo module, then propagated to other views of the same target using temporal and spatial concurrences, thus training the classifier to be view-invariant. A ring buffer simulates short-term memory and ensures that the discriminative learning is balanced and consistent. This long-range obstacle detection system sees obstacles and paths at 30-40 meters, far beyond the maximum stereo range of 12 meters, and adapts very quickly to new environments. Experiments were run on the LAGR robot platform.
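Of the components above, the ring-buffer short-term memory is the easiest to illustrate in isolation. The sketch below is a toy rendition, not the paper's code: one fixed-size buffer per class keeps recent stereo-labeled windows so that each online training batch stays class-balanced even when the label stream is not.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

class BalancedRingBuffer:
    """One fixed-size buffer per class; sampling returns class-balanced batches."""
    def __init__(self, capacity_per_class=256):
        self.buffers = {0: deque(maxlen=capacity_per_class),
                        1: deque(maxlen=capacity_per_class)}

    def add(self, features, label):
        self.buffers[label].append(features)

    def sample_batch(self, batch_size=32):
        per_class = batch_size // 2
        xs, ys = [], []
        for label, buf in self.buffers.items():
            if len(buf) == 0:
                continue
            idx = rng.integers(0, len(buf), size=per_class)
            xs.extend(buf[i] for i in idx)
            ys.extend([label] * per_class)
        return np.array(xs), np.array(ys)

# Usage: per frame, push stereo-labeled windows, then train on a balanced batch.
buffer = BalancedRingBuffer()
for _ in range(500):
    window = rng.normal(size=20)                        # stand-in for a pyramid window
    buffer.add(window, label=int(rng.random() < 0.8))   # heavily imbalanced label stream
X, y = buffer.sample_batch()
print(X.shape, "samples per class:", np.bincount(y))
```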
Intelligent Robots and Systems | 2010
Ayse Erkan; Oliver Kroemer; Renaud Detry; Yasemin Altun; Justus H. Piater; Jan Peters
This paper addresses the problem of learning and efficiently representing discriminative probabilistic models of object-specific grasp affordances, particularly when the number of labeled grasps is extremely limited. The proposed method does not require an explicit 3D model but rather learns an implicit manifold on which it defines a probability distribution over grasp affordances. We obtain hypothetical grasp configurations from visual descriptors that are associated with the contours of an object. While these hypothetical configurations are abundant, labeled configurations are very scarce, as these are acquired via time-costly experiments carried out by the robot. Kernel logistic regression (KLR) via joint kernel maps is trained to map the hypothesis space of grasps into continuous class-conditional probability values indicating their achievability. We propose a soft-supervised extension of KLR and a framework to combine the merits of semi-supervised and active learning approaches to tackle the scarcity of labeled grasps. Experimental evaluation shows that combining active and semi-supervised learning is favorable in the presence of an oracle. Furthermore, semi-supervised learning outperforms supervised learning, particularly when the labeled data is very limited.
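A stripped-down sketch of the supervised backbone only: kernel logistic regression over grasp configurations with an RBF kernel, fit to the few labeled grasps and then queried on the abundant unlabeled hypotheses. The joint kernel maps, the soft-supervised extension, and the active-learning loop from the paper are omitted, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Hypothetical 6-D grasp configurations (e.g. position + orientation); few labels.
grasps = rng.normal(size=(200, 6))
labeled_idx = rng.choice(200, size=15, replace=False)
labels = (grasps[labeled_idx, 0] > 0).astype(float)  # toy "achievable" labels

K = rbf_kernel(grasps[labeled_idx], grasps[labeled_idx])
alpha = np.zeros(len(labeled_idx))
lr, lam = 0.1, 1e-2

for _ in range(500):  # gradient descent on the regularized logistic loss
    p = 1.0 / (1.0 + np.exp(-K @ alpha))
    grad = K @ (p - labels) / len(labels) + lam * K @ alpha
    alpha -= lr * grad

# Class-conditional achievability for every hypothetical grasp.
K_all = rbf_kernel(grasps, grasps[labeled_idx])
p_achievable = 1.0 / (1.0 + np.exp(-K_all @ alpha))
print("five most promising grasps:", np.argsort(-p_achievable)[:5])
```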
Intelligent Robots and Systems | 2007
Ayse Erkan; Raia Hadsell; Pierre Sermanet; Jan Ben; Urs Muller; Yann LeCun
A novel probabilistic online learning framework for autonomous off-road robot navigation is proposed. The system is purely vision-based and is particularly designed for predicting traversability in unknown or rapidly changing environments. It uses self-supervised learning to quickly adapt to novel terrains after processing a small number of frames, and it can recognize terrain elements such as paths, man-made structures, and natural obstacles at ranges up to 30 meters. The system is developed on the LAGR mobile robot platform and the performance is evaluated using multiple metrics, including ground truth.
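The paper's specific probabilistic model is not reproduced here, but one generic ingredient such a framework typically needs is temporal fusion of per-frame traversability estimates. The sketch below shows standard log-odds accumulation over a grid map, offered purely as an illustrative stand-in rather than as the authors' method.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

class TraversabilityMap:
    def __init__(self, shape=(50, 50)):
        self.log_odds = np.zeros(shape)  # 0 means fully uncertain (p = 0.5)

    def update(self, cell, p_traversable):
        """Fuse one per-frame classifier output for a map cell."""
        self.log_odds[cell] += logit(np.clip(p_traversable, 1e-3, 1 - 1e-3))

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.log_odds))

# Usage: repeated, noisy observations of the same cell converge toward confidence.
rng = np.random.default_rng(0)
grid = TraversabilityMap()
for _ in range(20):
    grid.update((10, 12), p_traversable=np.clip(rng.normal(0.7, 0.1), 0.01, 0.99))
print("fused estimate:", round(float(grid.probability()[10, 12]), 3))
```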
International Workshop on Machine Learning for Signal Processing | 2010
Ayse Erkan; Gustavo Camps-Valls; Yasemin Altun
Remote sensing image segmentation requires multi-category classification, typically with a limited number of labeled training samples. While semi-supervised learning (SSL) has emerged as a sub-field of machine learning to tackle the scarcity of labeled samples, most SSL algorithms to date have had trade-offs in terms of scalability and/or applicability to multi-categorical data. In this paper, we evaluate semi-supervised logistic regression (SLR), a recent information-theoretic semi-supervised algorithm, for remote sensing image classification problems. SLR is a probabilistic discriminative classifier and a specific instance of the generalized maximum entropy framework with a convex loss function. Moreover, the method is inherently multi-class and easy to implement. These characteristics make SLR a strong alternative to the widely used semi-supervised variants of SVM for the segmentation of remote sensing images. We demonstrate the competitiveness of SLR in multispectral, hyperspectral and radar image classification.
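The exact information-theoretic SLR formulation is not reproduced here; as a rough stand-in, the sketch below shows a multi-class logistic regression whose objective adds an entropy penalty on unlabeled samples, one common way a probabilistic discriminative classifier can exploit unlabeled pixels. Data and hyperparameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Toy 3-class "pixel" features; only 5 labeled samples per class.
n_class, dim = 3, 4
X = np.vstack([rng.normal(c, 1.0, size=(100, dim)) for c in range(n_class)])
y = np.repeat(np.arange(n_class), 100)
lab = np.hstack([np.where(y == c)[0][:5] for c in range(n_class)])
unl = np.setdiff1d(np.arange(len(X)), lab)

W = np.zeros((dim, n_class))
lr, beta = 0.1, 0.1  # step size, entropy-penalty weight

for _ in range(300):
    # Supervised log-loss gradient on the labeled pixels.
    P_lab = softmax(X[lab] @ W)
    grad = X[lab].T @ (P_lab - np.eye(n_class)[y[lab]]) / len(lab)
    # Entropy penalty on unlabeled pixels (gradient of H w.r.t. the logits).
    P_unl = softmax(X[unl] @ W)
    A = P_unl * (np.log(P_unl + 1e-12) + 1.0)
    dH_dZ = -(A - P_unl * A.sum(axis=1, keepdims=True))
    grad += beta * X[unl].T @ dH_dZ / len(unl)
    W -= lr * grad

print(f"accuracy with 15 labels: {np.mean(softmax(X @ W).argmax(axis=1) == y):.2f}")
```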
IFAC Proceedings Volumes | 2007
Pierre Sermanet; Raia Hadsell; Jan Ben; Ayse Erkan; Beat Flepp; Urs Muller; Yann LeCun
The performance of vision-based navigation systems for off-road mobile robots depends crucially on the resolution of the camera, the sophistication of the visual processing, the latency from image capture to actuator control, and the period of the control loop. One particularly important design question is whether one should increase the resolution of the camera images, and the range of the obstacle detection algorithms, at the expense of latency and control loop period. We first report experimental results on the resolution-period trade-off with a stereo vision-based navigation system implemented on the LAGR mobile robot platform. We then propose a multi-agent perception and control architecture that combines a sophisticated long-range path detection method operating at high resolution and low frame rate with a simple stereo-based obstacle detection method operating at low resolution, high frame rate, and low latency. The system combines the advantages of the long-range module for strategic path planning with the advantages of the short-range module for tactical driving.
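The two-rate design can be illustrated with a small scheduling sketch. The code below is a toy, not the paper's software: a fast low-resolution module refreshes every control cycle, a slow high-resolution module refreshes only every few cycles, and the steering decision always fuses the freshest output of each. Rates, costs, and the 0.6/0.4 weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

CONTROL_HZ = 20        # fast loop: stereo obstacle avoidance runs every cycle
LONG_RANGE_EVERY = 10  # slow loop: long-range path detection every 10 cycles

def stereo_obstacle_cost():
    """Fast, low-latency stand-in: cost per candidate steering direction near the robot."""
    return rng.random(8)

def long_range_path_cost():
    """Slow, high-resolution stand-in: strategic cost toward the goal."""
    return rng.random(8)

latest_long_range = np.zeros(8)
for t in range(60):  # about three seconds of control at 20 Hz
    if t % LONG_RANGE_EVERY == 0:
        latest_long_range = long_range_path_cost()  # refreshed at the low rate
    tactical = stereo_obstacle_cost()               # refreshed every cycle
    combined = 0.6 * tactical + 0.4 * latest_long_range
    steering = int(np.argmin(combined))             # pick the cheapest heading
    if t % 20 == 0:
        print(f"t = {t / CONTROL_HZ:.1f} s, steering bin {steering}")
```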
Journal of Field Robotics | 2009
Raia Hadsell; Pierre Sermanet; Jan Ben; Ayse Erkan; Marco Scoffier; Koray Kavukcuoglu; Urs Muller; Yann LeCun
Archive | 2003
Cem Keskin; Ayse Erkan; Lale Akarun
International Conference on Artificial Intelligence and Statistics | 2010
Yann LeCun; Ayse Erkan