Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alper Aydemir is active.

Publication


Featured research published by Alper Aydemir.


International Conference on Robotics and Automation | 2006

Geckobot: a gecko inspired climbing robot using elastomer adhesives

Ozgur Unver; Ali Uneri; Alper Aydemir; Metin Sitti

In this paper, the design, analysis, and fabrication of a gecko-inspired climbing robot are discussed. The robot's kinematics are similar to a gecko's climbing gait. As novel features, it uses peeling and steering mechanisms and an active tail for robust and agile climbing. The advantage of this legged robot is that it can explore irregular terrain more robustly. The novel peeling mechanism of the elastomer adhesive pads, as well as steering and stable climbing using the active tail, are explored. The design, fabrication, analysis, and testing of the robot are reported. Experimental results of walking and climbing on acrylic surfaces sloped up to 85°, as well as successful steering and peeling mechanism tests, are demonstrated. The potential applications foreseen for this kind of robot are inspection, repair, cleaning, and exploration.


International Conference on Robotics and Automation | 2011

Search in the real world: Active visual object search based on spatial relations

Alper Aydemir; Kristoffer Sjöö; John Folkesson; Andrzej Pronobis; Patric Jensfelt

Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions, and manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the robot's immediate sensory reach. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. We present spatial relations that describe topological relationships between objects and show how to use them to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally, we perform experiments to verify the feasibility of our approach.


International Joint Conference on Artificial Intelligence | 2011

Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour

Marc Hanheide; Charles Gretton; Richard Dearden; Nick Hawes; Jeremy L. Wyatt; Andrzej Pronobis; Alper Aydemir; Moritz Göbelbecker; Hendrik Zender

Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general, with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.


IEEE Transactions on Robotics | 2013

Active Visual Object Search in Unknown Environments Using Uncertain Semantics

Alper Aydemir; Andrzej Pronobis; Moritz Göbelbecker; Patric Jensfelt

In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
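
As a rough, illustrative sketch of the idea (not the authors' implementation), prioritizing search effort by uncertain semantics can be pictured as scoring candidate rooms by how likely the target object type is to occur in rooms of a given semantic category; the category names, probabilities, and room beliefs below are invented placeholders.

```python
# Illustrative sketch only: rank rooms for visual search by how likely the
# target object type is to appear in rooms of a given semantic category.
# All category priors below are invented placeholders, not values from the paper.

object_given_room_category = {
    "kitchen":  {"mug": 0.6, "book": 0.1},
    "office":   {"mug": 0.3, "book": 0.5},
    "corridor": {"mug": 0.05, "book": 0.05},
}

def rank_rooms(rooms, target, priors=object_given_room_category):
    """rooms: list of (room_id, {category: belief}) with beliefs summing to 1.
    Returns room ids ordered by expected probability of containing the target."""
    def expected_presence(category_belief):
        return sum(belief * priors.get(cat, {}).get(target, 0.0)
                   for cat, belief in category_belief.items())
    ranked = sorted(rooms, key=lambda r: expected_presence(r[1]), reverse=True)
    return [room_id for room_id, _ in ranked]

rooms = [
    ("room_1", {"kitchen": 0.8, "corridor": 0.2}),
    ("room_2", {"office": 0.9, "corridor": 0.1}),
]
print(rank_rooms(rooms, "mug"))  # room_1 first: kitchens are more "mug-like"
```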


IEEE Transactions on Autonomous Mental Development | 2010

Self-Understanding and Self-Extension: A Systems and Representational Approach

Jeremy L. Wyatt; Alper Aydemir; Michael Brenner; Marc Hanheide; Nick Hawes; Patric Jensfelt; Matej Kristan; Geert-Jan M. Kruijff; Pierre Lison; Andrzej Pronobis; Kristoffer Sjöö; Alen Vrečko; Hendrik Zender; Michael Zillich; Danijel Skočaj

There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.


Intelligent Robots and Systems | 2010

Mechanical support as a spatial abstraction for mobile robots

Kristoffer Sjöö; Alper Aydemir; Thomas Mörwald; Kai Zhou; Patric Jensfelt

Motivated by functional interpretations of spatial language terms, and the need for cognitively plausible and practical abstractions for mobile service robots, we present a spatial representation based on the physical support of one object by another, corresponding to the preposition “on”. A perceptual model for evaluating this relation is suggested, and experiments - simulated as well as using a real robot - are presented. We indicate how this model can be used for important tasks such as communication of spatial knowledge, abstract reasoning and learning, taking as an example direct and indirect visual search. We also demonstrate the model experimentally and show that it produces intuitively feasible results from visual scene analysis as well as synthetic distributions that can be put to a number of uses.


Intelligent Robots and Systems | 2012

What can we learn from 38,000 rooms? Reasoning about unexplored space in indoor environments

Alper Aydemir; Patric Jensfelt; John Folkesson

Many robotics tasks require the robot to predict what lies in the unexplored part of the environment. Although much work focuses on building autonomous robots that operate indoors, indoor environments are neither well understood nor sufficiently analyzed in the literature. In this paper, we propose and compare two methods for predicting both the topology and the categories of rooms given a partial map. The methods are motivated by the analysis of two large annotated floor plan data sets corresponding to the buildings of the MIT and KTH campuses. In particular, using graph theory, we discover that local complexity remains unchanged as global complexity grows in real-world indoor environments, a property which we exploit. In total, we analyze 197 buildings, 940 floors, and over 38,000 real-world rooms; such a large set of indoor places has not been investigated in previous work. We provide extensive experimental results and show the degree to which spatial knowledge transfers between two geographically distinct locations. We also contribute the KTH data set and the software tools to work with it.
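
A heavily simplified sketch of the kind of prediction involved (not the method from the paper): mine adjacency statistics from annotated floor plans and use them to guess the category of an unexplored neighboring room. The training pairs below are made up for illustration.

```python
# Simplified illustration (not the paper's method): predict the category of an
# unexplored neighboring room from adjacency statistics mined from annotated
# floor plans. The training pairs below are invented placeholders.
from collections import Counter, defaultdict

adjacent_pairs = [  # (category of known room, category of its neighbor)
    ("corridor", "office"), ("corridor", "office"), ("corridor", "kitchen"),
    ("office", "corridor"), ("kitchen", "corridor"),
]

neighbor_counts = defaultdict(Counter)
for known, neighbor in adjacent_pairs:
    neighbor_counts[known][neighbor] += 1

def predict_neighbor_category(known_category):
    """Return the most frequent neighbor category seen in the training data."""
    counts = neighbor_counts.get(known_category)
    return counts.most_common(1)[0][0] if counts else None

print(predict_neighbor_category("corridor"))  # -> "office"
```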


Intelligent Robots and Systems | 2012

Exploiting and modeling local 3D structure for predicting object locations

Alper Aydemir; Patric Jensfelt

In this paper, we argue that there is a strong correlation between local 3D structure and object placement in everyday scenes. We call this the 3D context of the object. In previous work, this is typically hand-coded and limited to flat horizontal surfaces. In contrast, we propose to use a more general model for 3D context and learn the relationship between 3D context and different object classes. This way, we can capture more complex 3D contexts without implementing specialized routines. We present extensive experiments with both qualitative and quantitative evaluations of our method for different object classes. We show that our method can be used in conjunction with an object detection algorithm to reduce the rate of false positives. Our results show that the 3D structure surrounding objects in everyday scenes is a strong indicator of their placement and that it can significantly improve the performance of, for example, an object detection system. For evaluation, we have collected a large dataset of Microsoft Kinect frames from five different locations, which we also make publicly available.
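
As a toy illustration of the general idea, rather than the model used in the paper, one could score a candidate detection by a crude local 3D feature, such as the fraction of roughly horizontal surface normals around it, and blend that with the detector's appearance confidence. The feature, threshold, and weighting below are all assumptions.

```python
# Illustrative sketch only (not the authors' model): use a crude local 3D
# "context" feature, the fraction of roughly horizontal surface normals around
# a candidate detection, to down-weight detections in implausible places.
import numpy as np

def horizontal_support_score(normals, up=(0.0, 0.0, 1.0), cos_thresh=0.9):
    """normals: (N, 3) unit normals from the local point-cloud neighborhood.
    Returns the fraction that point roughly upward (a proxy for 3D context)."""
    cos = normals @ np.asarray(up)
    return float(np.mean(cos > cos_thresh))

def rescored_detection(detector_confidence, normals, weight=0.5):
    """Blend appearance confidence with the 3D-context score (toy combination)."""
    return (1 - weight) * detector_confidence + weight * horizontal_support_score(normals)

# Toy example: a table-top patch (mostly upward normals) vs. a wall patch.
table = np.tile([0.0, 0.0, 1.0], (100, 1))
wall = np.tile([1.0, 0.0, 0.0], (100, 1))
print(rescored_detection(0.6, table), rescored_detection(0.6, wall))
```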


Intelligent Autonomous Systems | 2010

Representing Spatial Knowledge in Mobile Cognitive Systems

Andrzej Pronobis; Kristoffer Sjöö; Alper Aydemir; Adrian N. Bishop; Patric Jensfelt

A cornerstone for cognitive mobile agents is to represent the vast body of knowledge about space in which they operate. In order to be robust and efficient, such representation must address require ...


International Conference on Intelligent Autonomous Systems | 2010

Object search on a mobile robot using relational spatial information

Alper Aydemir; Kristoffer Sjöö; Patric Jensfelt

We present a method for utilising knowledge of qualitative spatial relations between objects in order to facilitate efficient visual search for those objects. A computational model for the relation is used to sample a probability distribution that guides the selection of camera views. Specifically we examine the spatial relation “on”, in the sense of physical support, and show its usefulness in search experiments on a real robot. We also experimentally compare different search strategies and verify the efficiency of so-called indirect search.
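
A minimal toy sketch of indirect search guided by the "on" relation, not the paper's computational model: sample hypothetical target positions on a known supporting surface and pick the camera view that covers the most sampled hypotheses. The surface bounds and view predicates below are invented for illustration.

```python
# Toy illustration (not the paper's computational model): sample hypothetical
# target positions "on" a known supporting surface, then pick the camera view
# whose field of view covers the most sampled hypotheses.
import random

def sample_on(surface, n=500):
    """surface: (x_min, x_max, y_min, y_max) of a table top; uniform samples."""
    x0, x1, y0, y1 = surface
    return [(random.uniform(x0, x1), random.uniform(y0, y1)) for _ in range(n)]

def best_view(views, samples):
    """views: list of (name, predicate) where predicate(point) -> bool (in view)."""
    def coverage(view):
        _, in_view = view
        return sum(in_view(p) for p in samples)
    return max(views, key=coverage)[0]

table = (0.0, 1.0, 0.0, 0.5)
samples = sample_on(table)
views = [
    ("left_half", lambda p: p[0] < 0.5),
    ("right_quarter", lambda p: p[0] > 0.75),
]
print(best_view(views, samples))  # likely "left_half": it covers more mass
```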

Collaboration


Dive into Alper Aydemir's collaborations.

Top Co-Authors

Patric Jensfelt, Royal Institute of Technology
Kristoffer Sjöö, Royal Institute of Technology
Andrzej Pronobis, Royal Institute of Technology
Nick Hawes, University of Birmingham
John Folkesson, Royal Institute of Technology