Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Baris Akgun is active.

Publication


Featured research published by Baris Akgun.


Human-Robot Interaction | 2012

Trajectories and keyframes for kinesthetic teaching: a human-robot interaction perspective

Baris Akgun; Maya Cakmak; Jae Wook Yoo; Andrea Lockerd Thomaz

Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides a robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
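As a rough illustration of the two demonstration modes contrasted above, the sketch below logs either the full arm state at a fixed rate or only teacher-requested keyframes. The functions read_joint_angles and keyframe_requested are hypothetical stand-ins for a robot's sensing and teacher-input interfaces, not APIs from the paper.

```python
# Minimal sketch contrasting trajectory vs. keyframe recording during
# kinesthetic teaching. The callables passed in are hypothetical stand-ins
# for the robot's sensing and teacher-input interfaces.
import time

def record_trajectory(read_joint_angles, duration_s=10.0, rate_hz=50.0):
    """Trajectory mode: log the full arm state at a fixed rate."""
    demo, t_end = [], time.time() + duration_s
    while time.time() < t_end:
        demo.append(read_joint_angles())
        time.sleep(1.0 / rate_hz)
    return demo

def record_keyframes(read_joint_angles, keyframe_requested, n_keyframes=5):
    """Keyframe mode: log a pose only when the teacher asks for one."""
    demo = []
    while len(demo) < n_keyframes:
        if keyframe_requested():      # e.g. a button press or verbal cue
            demo.append(read_joint_angles())
        time.sleep(0.01)
    return demo
```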


International Journal of Social Robotics | 2012

Keyframe-based Learning from Demonstration Method and Evaluation

Baris Akgun; Maya Cakmak; Karl Jiang; Andrea Lockerd Thomaz

We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human-Robot Interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D, and scooping, pouring and placing skills on a humanoid robot. KLfD performs similarly to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
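A minimal sketch of the cluster-then-spline idea described above, not the authors' KLfD implementation: it assumes every demonstration has already been converted to the same number of keyframes, and it uses per-index means and covariances as a crude stand-in for Sequential Pose Distributions.

```python
# Cluster keyframes across demonstrations, then spline through the cluster
# means to reproduce the skill. Assumes equal keyframe counts per demo.
import numpy as np
from scipy.interpolate import CubicSpline

def fit_spd(demos):
    """demos: list of (K, D) keyframe arrays; returns per-keyframe mean and
    covariance, a crude Sequential Pose Distribution."""
    stacked = np.stack(demos)                      # (N, K, D)
    means = stacked.mean(axis=0)                   # (K, D)
    covs = np.stack([np.cov(stacked[:, k, :].T) for k in range(stacked.shape[1])])
    return means, covs

def reproduce(means, n_points=100):
    """Spline through the cluster means to get an executable path."""
    k = np.arange(len(means))
    spline = CubicSpline(k, means, axis=0)
    return spline(np.linspace(0, len(means) - 1, n_points))

# Example: three noisy 2-D demonstrations of a four-keyframe gesture.
rng = np.random.default_rng(0)
base = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
demos = [base + 0.05 * rng.standard_normal(base.shape) for _ in range(3)]
means, covs = fit_spd(demos)
path = reproduce(means)
```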


Intelligent Robots and Systems | 2011

Sampling heuristics for optimal motion planning in high dimensions

Baris Akgun; Mike Stilman

We present a sampling-based motion planner that improves the performance of the probabilistically optimal RRT* planning algorithm. Experiments demonstrate that our planner finds a fast initial path and decreases the cost of this path iteratively. We identify and address the limitations of RRT* in high-dimensional configuration spaces. We introduce a sampling bias to facilitate and accelerate cost decrease in these spaces and a simple node-rejection criterion to increase efficiency. Finally, we incorporate an existing bi-directional approach to search, which decreases the time to find an initial path. We analyze our planner on a simple 2D navigation problem in detail to show its properties and test it on a difficult 7D manipulation problem to show its effectiveness. Our results consistently demonstrate improved performance over RRT*.
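The node-rejection idea can be illustrated with a simple admissible-cost test: discard a sampled configuration whose optimistic cost through the sample (straight-line distance from start plus straight-line distance to goal) already exceeds the best solution found so far. This is an illustrative sketch, not the paper's exact heuristic or sampling bias.

```python
# Reject samples that cannot possibly improve the current best path.
import numpy as np

def admissible_cost(sample, start, goal):
    return np.linalg.norm(sample - start) + np.linalg.norm(sample - goal)

def sample_with_rejection(sampler, start, goal, best_cost, max_tries=100):
    """Keep drawing samples until one could still improve the current path."""
    for _ in range(max_tries):
        q = sampler()
        if admissible_cost(q, start, goal) < best_cost:
            return q
    return None  # nothing useful drawn; caller may fall back to plain sampling

# Example in a 2-D configuration space.
rng = np.random.default_rng(1)
start, goal = np.zeros(2), np.array([1.0, 1.0])
q = sample_with_rejection(lambda: rng.uniform(-2, 2, size=2), start, goal, best_cost=2.5)
```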


International Symposium on Computer and Information Sciences | 2009

Unsupervised learning of affordance relations on a humanoid robot

Baris Akgun; Nilgun Dag; Tahir Bilal; Ilkay Atil; Erol Sahin

In this paper, we study how the concepts learned by a robot can be linked to verbal concepts that humans use in language. Specifically, we develop a simple tapping behaviour on the iCub humanoid robot simulator and allow the robot to interact with a set of objects of different types and sizes to learn affordance relations in its environment. The robot records its perception, obtained from a range camera, as a feature vector, before and after applying tapping on an object. We compute effect features by subtracting initial features from final features. We cluster the effect features using Kohonen self-organizing maps to generate a set of effect categories in an unsupervised fashion. We analyze the clusters using the types and sizes of objects that fall into the effect clusters, as well as the success/fail labels manually attached to the interactions. The hand labellings and the clusters formed by the robot are found to match. We conjecture that this leads to the interpretation that the robot and humans share the same “effect concepts” which could be used in human-robot communication, for example as verbs. Furthermore, we use the ReliefF feature extraction method to determine the initial features that are related to clustered effects and train a multi-class support vector machine (SVM) classifier to learn the mapping between the relevant initial features and the effect categories. The results show that 1) despite the lack of supervision, the effect clusters tend to be homogeneous in terms of success/fail, 2) the relevant features consist mainly of shape, but not size, 3) the number of relevant features remains approximately constant with respect to the number of effect clusters formed, and 4) the SVM classifier can successfully learn the effect categories using the relevant features.
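A toy sketch of the pipeline described above, with illustrative substitutions: the paper clusters effect features with a Kohonen self-organizing map and selects features with ReliefF, whereas this sketch uses k-means in place of the SOM, omits feature selection, and runs on synthetic data.

```python
# Effect features = final minus initial perception; cluster the effects
# without supervision; train an SVM to predict the effect category from
# the initial features alone.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)
initial = rng.standard_normal((60, 8))           # features before tapping
final = initial + np.where(initial[:, :1] > 0,   # toy effect: object rolls vs. stays
                           rng.normal(1.0, 0.1, (60, 8)),
                           rng.normal(0.0, 0.1, (60, 8)))
effects = final - initial                        # effect features

effect_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(effects)
clf = SVC(kernel="rbf").fit(initial, effect_labels)  # predict effect category from initial view
```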


Robot and Human Interactive Communication | 2016

Grounding action parameters from demonstration

Kalesha Bullard; Baris Akgun; Sonia Chernova; Andrea Lockerd Thomaz

When a robot is deployed to a new setting, it must reason about how to accomplish the goals of domain-appropriate tasks within the environment in which it is situated. We investigate the problem of enabling robots to interactively learn how to perform known tasks in new environments. Each task is composed of a sequence of parameterized actions, which we assume are given to the robot in the form of a task recipe. In order to learn how to ground the task in a new environment, our learner builds classifiers to model each of the parameters (i.e. all unique objects and semantic locations) associated with the task. In an evaluation of two tasks across three different environments, our results show that these groundings are both (1) capable of being learned efficiently from demonstrations, and (2) necessary to learn for each new environment.
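A minimal sketch of the grounding step described above: one classifier per task parameter, trained on features of candidate objects or locations seen during demonstrations. The feature layout and classifier choice here are assumptions for illustration, not the authors' models.

```python
# Learn one grounding classifier per task parameter from demonstration data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_groundings(demos):
    """demos: dict mapping parameter name -> (features, labels) gathered
    from demonstrations; returns one fitted classifier per parameter."""
    models = {}
    for param, (X, y) in demos.items():
        models[param] = LogisticRegression(max_iter=1000).fit(X, y)
    return models

# Toy example: ground a hypothetical "cup" parameter from 5-D candidate features.
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 5))
y = (X[:, 0] > 0).astype(int)          # stand-in for "is this candidate the cup?"
models = learn_groundings({"cup": (X, y)})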


International Conference on Case-Based Reasoning | 2015

Visual Case Retrieval for Interpreting Skill Demonstrations

Tesca Fitzgerald; Keith McGreggor; Baris Akgun; Andrea Lockerd Thomaz; Ashok K. Goel

Imitation is a well-known method for learning. Case-based reasoning is an important paradigm for imitation learning; thus, case retrieval is a necessary step in case-based interpretation of skill demonstrations. In the context of a case-based robot that learns by imitation, each case may represent a demonstration of a skill that the robot has previously observed. Before it may reuse a familiar, source skill demonstration to address a new, target problem, the robot must first retrieve from its case memory the most relevant source skill demonstration. We describe three techniques for visual case retrieval in this context: feature matching, feature transformation matching, and feature transformation matching using fractal representations. We found that each method enables visual case retrieval under a different set of conditions pertaining to the nature of the skill demonstration.
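Of the three retrieval techniques named above, plain feature matching is the simplest; below is a minimal sketch, assuming each stored demonstration is summarized by a fixed-length scene feature vector. The transformation- and fractal-based variants are not reproduced here.

```python
# Nearest-neighbor case retrieval by feature matching: return the stored
# demonstration whose scene features are closest to the query scene.
import numpy as np

def retrieve_case(case_features, query):
    """case_features: dict name -> feature vector; query: feature vector."""
    return min(case_features, key=lambda name: np.linalg.norm(case_features[name] - query))

cases = {"pour": np.array([0.9, 0.1, 0.3]), "scoop": np.array([0.2, 0.8, 0.5])}
best = retrieve_case(cases, np.array([0.85, 0.2, 0.25]))   # -> "pour"
```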


Intelligent Robots and Systems | 2015

An evaluation of GUI and kinesthetic teaching methods for constrained-keyframe skills

Andrey Kurenkov; Baris Akgun; Andrea Lockerd Thomaz

Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-keyframes) which can be planned between for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare 3 methods of teaching c-keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI to be the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.
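A minimal sketch of turning several keyframe demonstrations into positional constraints (c-keyframes), assuming aligned demonstrations and axis-aligned bounding boxes as the constraint form; the paper's actual constraint representation and planner are not reproduced here.

```python
# Aggregate multiple keyframe demonstrations into per-keyframe position
# constraints: each keyframe index gets a box covering the demonstrated
# positions, which a planner could then connect.
import numpy as np

def c_keyframes(demos, margin=0.01):
    """demos: list of (K, 3) position arrays; returns per-keyframe (lo, hi) boxes."""
    stacked = np.stack(demos)                  # (N, K, 3)
    lo = stacked.min(axis=0) - margin
    hi = stacked.max(axis=0) + margin
    return list(zip(lo, hi))

demos = [np.array([[0.0, 0.0, 0.1], [0.2, 0.1, 0.3]]) + 0.01 * i for i in range(3)]
constraints = c_keyframes(demos)               # one (lo, hi) box per keyframe
```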


International Journal of Materials Research | 2013

Mechanochemical and combustion synthesis of CeB6

Baris Akgun; Naci Sevinç; H. Erdem Çamurlu; Yavuz Topkaya

CeB6 powder was prepared via combustion synthesis (CS) and mechanochemical processing (MCP) methods, starting from CeO2, B2O3 and Mg powder mixtures. In CS, reactant mixtures were ignited in a preheated pot furnace under an argon atmosphere. The products contained CeB6, MgO and Mg3B2O6, as revealed by X-ray diffraction analysis. After leaching in 1 M HCl for 15 h, MgO was removed but Mg3B2O6 could not be removed from the products. Ball milling of the products in ethanol prior to leaching made the removal of Mg3B2O6 by leaching possible. The yield of CeB6 was 68.6 % in CS. MCP was performed in a stainless steel vial with a planetary ball mill at 300 rpm for 30 h. MCP products contained CeB6, MgO and a small amount of Fe. Leaching in 1 M HCl for 30 min was sufficient to remove MgO. The yield of CeB6 was 84.4 % in MCP. According to scanning electron microscopy examinations, CeB6 particles prepared by CS and MCP had submicrometer size, with average particle sizes of 290 nm and 240 nm, respectively.
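For reference, a plausible overall reaction for the magnesiothermic reduction described above, assuming complete reduction to CeB6 and MgO and ignoring the Mg3B2O6 side product; the balanced form below follows from stoichiometry and is not quoted from the paper.

```latex
% Assumed overall magnesiothermic reduction (side products ignored):
\mathrm{CeO_2 + 3\,B_2O_3 + 11\,Mg \;\longrightarrow\; CeB_6 + 11\,MgO}
```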


International Journal of Materials & Product Technology | 2017

Volume combustion and mechanochemical syntheses of LaB6

Baris Akgun; Naci Sevinç; H. Erdem Çamurlu; Yavuz Topkaya

LaB6 powder was produced by volume combustion synthesis (VCS) and mechanochemical synthesis (MCS) methods, through magnesiothermic reduction of La2O3 and B2O3 powders. VCS was achieved by rapid heating of the reactant mixture in argon, whereas MCS was performed via high-energy ball milling. All products were subjected to XRD, SEM, gravimetric and particle size distribution analyses. MCS resulted in the expected products LaB6 and MgO. In addition to these, the VCS products contained LaBO3 and Mg3B2O6. In order to obtain pure LaB6, a 30 min leach in 1 M HCl was found to be sufficient for the MCS products, whereas Mg3B2O6 could not be removed from the VCS products even after a 15 h leach. Wet milling of the VCS products before leaching facilitated obtaining pure LaB6. Average particle sizes of the LaB6 powders produced by VCS and MCS were 290 and 180 nm, respectively.
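Likewise, a plausible balanced overall reaction for the LaB6 case, under the same assumption of complete reduction to the boride and MgO; this is inferred from stoichiometry rather than taken from the paper.

```latex
% Assumed overall magnesiothermic reduction for LaB6 (side products ignored):
\mathrm{La_2O_3 + 6\,B_2O_3 + 21\,Mg \;\longrightarrow\; 2\,LaB_6 + 21\,MgO}
```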


Intelligent Robots and Systems | 2015

Self-improvement of learned action models with learned goal models

Baris Akgun; Andrea Lockerd Thomaz

We introduce a new method for robots to further improve upon skills acquired through Learning from Demonstration. Previously, we introduced a method to learn both an action model to execute a skill and a goal model to monitor its execution. In this paper we show how to use the learned goal models to improve the learned action models autonomously, without further user interaction. Trajectories are sampled from the action model and executed on the robot. The goal model then labels them as success or failure, and the successful ones are used to update the action model. We introduce an adaptive sampling method to speed up convergence. We show through both simulation and real-robot experiments that our method can fix a failed action model.
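A minimal sketch of the sample-label-update loop described above; the action model, execution, and goal model are hypothetical callables supplied by the caller, and the paper's adaptive sampling scheme is not reproduced.

```python
# Self-improvement loop: sample from the action model, label each attempt
# with the goal model, and refit the action model on the successes.
def self_improve(sample_trajectory, execute, goal_succeeded, refit, n_rounds=20):
    successes = []
    for _ in range(n_rounds):
        traj = sample_trajectory()      # draw a candidate from the action model
        outcome = execute(traj)         # run it on the robot or in simulation
        if goal_succeeded(outcome):     # goal model labels the attempt
            successes.append(traj)
    if successes:
        refit(successes)                # update the action model with successes
    return successes
```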

Collaboration


Dive into Baris Akgun's collaboration.

Top Co-Authors

Andrea Lockerd Thomaz (University of Texas at Austin)
Naci Sevinç (Middle East Technical University)
Yavuz Topkaya (Middle East Technical University)
Ashok K. Goel (Georgia Institute of Technology)
Kaushik Subramanian (Georgia Institute of Technology)
Keith McGreggor (Georgia Institute of Technology)
Maya Cakmak (University of Washington)
Tesca Fitzgerald (Georgia Institute of Technology)
Erol Sahin (Middle East Technical University)