Publications


Featured research published by Batu Akan.


International Conference on Robotics and Automation | 2011

Intuitive industrial robot programming through incremental multimodal language and augmented reality

Batu Akan; Afshin Ameri; Baran Çürüklü; Lars Asplund

Developing easy-to-use, intuitive interfaces is crucial for introducing robotic automation to many small and medium-sized enterprises (SMEs). Due to their continuously changing product lines, reprogramming costs exceed installation costs by a large margin. In addition, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, so external assistance is often needed. In this paper a new incremental multimodal language, which uses an augmented reality (AR) environment, is presented. The proposed language architecture makes it possible to manipulate, pick, or place the objects in the scene. This approach shifts the focus of industrial robot programming from a coordinate-based programming paradigm to an object-based programming scheme. This makes it possible for non-experts to program the robot in an intuitive way, without going through rigorous training in robot programming.


Emerging Technologies and Factory Automation | 2009

Object selection using a spatial language for flexible assembly

Batu Akan; Baran Çürüklü; Giacomo Spampinato; Lars Asplund

In this paper we present a new simplified natural language that makes use of spatial relations between the objects in the scene to navigate an industrial robot for simple pick-and-place applications. Developing easy-to-use, intuitive interfaces is crucial for introducing robotic automation to many small and medium-sized enterprises (SMEs). Due to their continuously changing product lines, reprogramming costs are far higher than installation costs. In order to hide the complexities of robot programming, we propose a natural language in which the user can control and jog the robot relative to reference objects in the scene. We used Gaussian kernels to represent spatial regions such as left of or above. Finally, we present some dialogues between the user and the robot to demonstrate the usefulness of the proposed system.
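
The idea of representing a spatial relation with a Gaussian kernel can be sketched as follows. This is an illustrative example only, not the paper's implementation: the relation models, mean displacements, and spread parameter are made up for the demonstration.

```python
import math

# Illustrative sketch: score how well a candidate object position matches a
# spatial relation such as "left of" a reference object, using a Gaussian
# kernel over the relative displacement between the two objects.

def gaussian_kernel(dx, dy, mu, sigma):
    """Unnormalized 2D Gaussian evaluated at the displacement (dx, dy)."""
    return math.exp(-((dx - mu[0]) ** 2 + (dy - mu[1]) ** 2) / (2 * sigma ** 2))

# Hypothetical relation models: the mean displacement (in scene units) that
# best fits each relation, with a shared spread sigma.
RELATIONS = {
    "left of": (-1.0, 0.0),
    "right of": (1.0, 0.0),
    "above": (0.0, 1.0),
}

def relation_score(candidate, reference, relation, sigma=0.5):
    dx = candidate[0] - reference[0]
    dy = candidate[1] - reference[1]
    return gaussian_kernel(dx, dy, RELATIONS[relation], sigma)

# A candidate directly to the left of the reference scores highest for "left of".
ref = (0.0, 0.0)
print(relation_score((-1.0, 0.0), ref, "left of"))  # 1.0
print(relation_score((1.0, 0.0), ref, "left of"))   # much lower
```

Soft scores like these let the dialogue system rank all objects in the scene for a phrase such as "the block left of the fixture" instead of making a hard yes/no decision.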


International Conference on Multimodal Interfaces | 2011

A general framework for incremental processing of multimodal inputs

Afshin Ameri Ekhtiarabadi; Batu Akan; Baran Çürüklü; Lars Asplund

Humans employ different information channels (modalities) such as speech, pictures and gestures in their communication. It is believed that some of these modalities are more error-prone to some specific type of data and therefore multimodality can help to reduce ambiguities in the interaction. There have been numerous efforts in implementing multimodal interfaces for computers and robots. Yet, there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multi- modal integration and semantic analysis with an incremental pipeline and includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
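
Incremental multimodal integration of the kind described above can be illustrated with a toy example. The class and event names here are hypothetical, not the framework's actual API: the point is only that a deictic word is resolved against the latest gesture as soon as it arrives, rather than after the whole utterance is finished.

```python
# Illustrative sketch of incremental speech/gesture fusion: words and pointing
# events arrive interleaved, and a deictic reference ("that") is resolved
# against the most recent gesture target at the moment the word is heard.

class IncrementalFusion:
    def __init__(self):
        self.pointed_at = None  # referent of the latest pointing gesture
        self.words = []

    def on_gesture(self, target):
        self.pointed_at = target

    def on_word(self, word):
        self.words.append(word)
        # Incremental processing: resolve deictic words immediately.
        if word == "that" and self.pointed_at is not None:
            self.words[-1] = self.pointed_at

    def utterance(self):
        return " ".join(self.words)

fusion = IncrementalFusion()
fusion.on_word("pick")
fusion.on_word("up")
fusion.on_gesture("red_block")  # user points at the red block mid-sentence
fusion.on_word("that")
print(fusion.utterance())       # pick up red_block
```

A real pipeline would add grammar-driven parsing and timestamp alignment between the modalities; this sketch only shows why interleaving the two input streams reduces ambiguity.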


Archive | 2009

Real-World Data Collection with UYANIK

Hüseyin Abut; Hakan Erdogan; Aytül Erçil; Baran Çürüklü; Hakkı Can Koman; Fatih Taş; Ali Özgür Argunşah; Serhan Cosar; Batu Akan; Harun Karabalkan; Emrecan Çökelek; Rahmi Fıçıcı; Volkan Sezer; Serhan Danis; Mehmet Karaca; Mehmet Abbak; Mustafa Gökhan Uzunbas; Kayhan Eritmen; Mümin Imamoğlu; Cagatay Karabat

In this chapter, we present data collection activities and preliminary research findings from the real-world database collected with “UYANIK,” a passenger car instrumented with several sensors, a CAN-Bus data logger, cameras, microphones, data acquisition systems, computers, and support systems. Within the shared frameworks of the Drive-Safe Consortium (Turkey) and the NEDO (Japan) International Collaborative Research on Driving Behavior Signal Processing, close to 16 TB of driver behavior, vehicular, and road data have been collected from more than 100 drivers on a 25 km route consisting of both city roads and the Trans-European Motorway (TEM) in Istanbul, Turkey. Collecting data in a metropolis of around 12 million people, famous for its extremely limited infrastructure and for driving behavior that defies all rules and regulations, bordering on madness, could not be “painless.” Nevertheless, both the experience gained and the preliminary results from the still ongoing studies using the database are very encouraging.


Human-Robot Interaction | 2010

Towards robust human robot collaboration in industrial environments

Batu Akan; Baran Çürüklü; Giacomo Spampinato; Lars Asplund

In this paper we propose a system, driven through natural language, that allows operators to select and manipulate objects in the environment using an industrial robot. In order to hide the complexities of robot programming, we propose a natural language in which the user can control and jog the robot relative to reference objects in the scene. We used semantic networks to relate different types of objects in the scene.
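
A semantic network for relating object types can be sketched as a small graph of "is-a" links. The object types below are invented for illustration and are not the paper's scene model; the sketch only shows how a command naming a category ("pick up a block") can match concrete objects in the scene.

```python
# Illustrative semantic-network sketch: "is-a" links relate object types, so a
# natural-language command naming a category matches all concrete instances.

IS_A = {
    "red_block": "block",
    "blue_block": "block",
    "block": "object",
    "bolt": "fastener",
    "fastener": "object",
}

def is_a(obj, category):
    """Follow is-a links upward from obj; True if category is reached."""
    while obj is not None:
        if obj == category:
            return True
        obj = IS_A.get(obj)
    return False

scene = ["red_block", "blue_block", "bolt"]

# "pick up a block" matches both blocks but not the bolt.
print([o for o in scene if is_a(o, "block")])  # ['red_block', 'blue_block']
```

Richer networks would add part-of and located-at links, but category lookup of this kind is the core of resolving object references in the dialogue.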


Emerging Technologies and Factory Automation | 2008

Gesture recognition using evolution strategy neural network

Johan Hägg; Baran Çürüklü; Batu Akan; Lars Asplund

A new approach to interacting with an industrial robot using hand gestures is presented. The system proposed here can rapidly learn a first-time user's hand gestures. This improves product usability and acceptability. Artificial neural networks trained with the evolution strategy technique are found to be well suited to this problem. The gesture recognition system is an integrated part of a larger project addressing intelligent human-robot interaction using a novel multimodal paradigm. The goal of the overall project is to address complexity issues related to robot programming by providing a multimodal, user-friendly interaction system that can be used by SMEs.
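
Training a neural network with an evolution strategy, as described above, can be sketched with a minimal (1+1)-ES: mutate the weights with Gaussian noise and keep the mutant only if it is at least as fit. The toy network, data, and fitness function below are made up for illustration and are not the paper's gesture recognizer.

```python
import math
import random

# Minimal (1+1) evolution strategy sketch for training a tiny neural network.
# No gradients are used: weights are perturbed and selection keeps improvements.

random.seed(0)

def forward(weights, x):
    """One hidden tanh neuron plus a linear output (toy architecture)."""
    h = math.tanh(weights[0] * x + weights[1])
    return weights[2] * h + weights[3]

# Toy "gesture feature -> class score" data standing in for real features.
data = [(-1.0, -1.0), (-0.5, -1.0), (0.5, 1.0), (1.0, 1.0)]

def fitness(weights):
    # Negative squared error: higher is better for the selection step.
    return -sum((forward(weights, x) - y) ** 2 for x, y in data)

parent = [random.uniform(-1, 1) for _ in range(4)]
best = fitness(parent)
for _ in range(2000):
    child = [w + random.gauss(0, 0.1) for w in parent]  # Gaussian mutation
    f = fitness(child)
    if f >= best:  # (1+1) selection: keep the mutant only if at least as fit
        parent, best = child, f

print(round(best, 3))
```

Full evolution strategies also adapt the mutation step size and evolve a population; this stripped-down loop just shows why the method suits rapid per-user adaptation when no gradient information is available.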


Emerging Technologies and Factory Automation | 2014

Scheduling for multiple type objects using POPStar planner

Batu Akan; E. Afshin Ameri; Baran Çürüklü

In this paper, scheduling of robot cells that produce multiple object types in low volumes is considered. The challenge is to maximize the number of objects produced in a given time window as well as to adapt the schedule to changing object types. The proposed algorithm, POPStar, is based on a partial-order planner guided by a best-first search algorithm and landmarks. The best-first search uses heuristics to help the planner create complete plans while minimizing the makespan. The algorithm takes as input landmarks extracted from the user's instructions, which are given in structured English. Using different topologies for the landmark graphs, we show that it is possible to create schedules for changing object types, which will be processed in different stages in the robot cell. Results show that the POPStar algorithm can create and adapt schedules for robot cells with changing product types in low-volume production.
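
The flavor of best-first search over partial schedules can be illustrated with a toy two-stage cell. This is a hypothetical example, not the POPStar implementation: the jobs, processing times, and bound are invented, and real landmark graphs and partial-order plans are far richer than a job sequence.

```python
import heapq
from itertools import count

# Illustrative best-first scheduler sketch: jobs pass through stage 1 then
# stage 2 of a cell; partial schedules are expanded in order of an optimistic
# (admissible) lower bound on the final makespan.

JOBS = {"A": (3, 2), "B": (1, 4), "C": (2, 1)}  # (stage-1, stage-2) durations

def bound(seq, t1, t2):
    """Lower bound on final makespan: all remaining stage-1 work must still
    pass through stage 1, and stage 2 can never finish before time t2."""
    remaining = sum(JOBS[j][0] for j in JOBS if j not in seq)
    return max(t2, t1 + remaining)

def best_first_schedule():
    tie = count()  # tiebreaker so heapq never compares tuples of sequences
    frontier = [(0, next(tie), (), 0, 0)]  # (bound, tie, sequence, t1, t2)
    while frontier:
        _, _, seq, t1, t2 = heapq.heappop(frontier)
        if len(seq) == len(JOBS):
            return seq, t2  # with an admissible bound, first complete pop is optimal
        for j in JOBS:
            if j in seq:
                continue
            nt1 = t1 + JOBS[j][0]            # job j leaves stage 1
            nt2 = max(nt1, t2) + JOBS[j][1]  # stage 2 waits for both to be free
            ns = seq + (j,)
            heapq.heappush(frontier, (bound(ns, nt1, nt2), next(tie), ns, nt1, nt2))

seq, makespan = best_first_schedule()
print(seq, "makespan:", makespan)  # an ordering achieving makespan 8
```

Because the bound never overestimates, the search behaves like A*: the first complete schedule popped from the frontier has the minimum makespan, without enumerating every ordering.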


Emerging Technologies and Factory Automation | 2013

Scheduling POP-Star for automatic creation of robot cell programs

Batu Akan; Baran Çürüklü; Lars Asplund

Typical pick-and-place and machine-tending applications often require an industrial robot to be embedded in a cell and to communicate with other devices in the cell. Programming the cell logic is a tedious job requiring expert programming knowledge, and it can take more time than programming the specific robot movements themselves. We propose a new system that takes a description of the whole manufacturing process in natural language as input, fills in the implicit actions, and plans the sequence of actions needed to accomplish the described task with minimal makespan, using a modified partial-order planning algorithm. Finally, we demonstrate that the proposed system can come up with a sensible plan for the given instructions.


Emerging Technologies and Factory Automation | 2010

Incremental Multimodal Interface for Human Robot Interaction

E. Afshin Ameri; Batu Akan; Baran Çürüklü

Face-to-face human communication is a multimodal and incremental process. An intelligent robot that operates in close relation with humans should have the ability to communicate with its human colleagues in such a manner. The process of understanding and responding to multimodal inputs has been an interesting field of research and has resulted in advancements in areas such as syntactic and semantic analysis, modality fusion, and dialogue management. Some approaches in syntactic and semantic analysis take the incremental nature of human interaction into account. Our goal is to unify the syntactic/semantic analysis, modality fusion, and dialogue management processes into an incremental multimodal interaction manager. We believe that this approach will lead to a more robust system which can perform faster than today's systems.


Archive | 2007

Data Collection with UYANIK: Too Much Pain; But Gains are Coming

Hüseyin Abut; Aytül Erçil; Hakan Erdogan; Baran Çürüklü; Hakkı Can Koman; Fatih Taş; Ali Özgür Argunşah; Serhan Cosar; Batu Akan; Harun Karabalkan; Emre Cökelek; Rahmi Fıçıcı; Volkan Sezer; Serhan Danis; Mehmet Karaca; Mehmet Abbak; Mustafa Gökhan Uzunbas; Kayhan Eritmen; Caglar Kalaycıoglu; Mümin Imamoğlu; Cagatay Karabat; Merve Peyic; Burak Arslan

Collaboration


An overview of Batu Akan's co-authors.

Top Co-Authors

Baran Çürüklü (Mälardalen University College)
Lars Asplund (Mälardalen University College)
E. Afshin Ameri (Mälardalen University College)
Giacomo Spampinato (Mälardalen University College)
Johan Hägg (Mälardalen University College)
Hüseyin Abut (San Diego State University)