Mustafa Ersen
Istanbul Technical University
Publications
Featured research published by Mustafa Ersen.
IEEE Transactions on Computational Intelligence and AI in Games | 2015
Mustafa Ersen; Sanem Sariel
In this paper, we introduce an automated reasoning system for learning object behaviors and interactions through the observation of event sequences. We use an existing system to learn the models of objects and extend it further to model more complex behaviors. Furthermore, we propose a spatio-temporal reasoning-based learning method for reasoning about interactions among objects. Experience gained through learning is then used to achieve goals involving these objects. We take The Incredible Machine (TIM) game as the main testbed for analyzing our system, and its tutorials are used to train the system. We analyze the results of our reasoning system on four different input types: a knowledge base of relations, spatial information, temporal information, and spatio-temporal information from the environment. Our analysis reveals that if a knowledge base about relations is provided, most of the interactions can be learned. We also demonstrate that our learning method, which incorporates both spatial and temporal information, gives results close to those of the knowledge-based approach. This is promising, as gathering spatio-temporal information does not require prior knowledge about relations. A second analysis of the spatio-temporal reasoning method in the Electric Box computer game domain verifies the success of our approach.
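The abstract above hinges on pairing spatial proximity with temporal ordering to propose candidate interactions between objects. The following is a minimal Python sketch of that idea, not the paper's implementation: the Event record, the distance and delay thresholds, and the candidate_interactions helper are all assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical observed event; field names are illustrative, not the paper's.
@dataclass
class Event:
    obj: str     # object that produced the event
    kind: str    # e.g. "falls", "tilts", "turns_on"
    x: float     # observed 2-D position
    y: float
    t: float     # observation time

def candidate_interactions(events, max_dist=1.0, max_delay=2.0):
    """Pair events whose objects were close in space and ordered in time,
    treating the earlier event as a possible cause of the later one."""
    pairs = []
    for e1 in events:
        for e2 in events:
            if e1.obj == e2.obj:
                continue
            close = ((e1.x - e2.x) ** 2 + (e1.y - e2.y) ** 2) ** 0.5 <= max_dist
            ordered = 0.0 < e2.t - e1.t <= max_delay
            if close and ordered:
                pairs.append((e1, e2))
    return pairs

if __name__ == "__main__":
    log = [Event("ball", "falls", 0.0, 0.0, 1.0),
           Event("seesaw", "tilts", 0.2, -0.1, 1.5),
           Event("lamp", "turns_on", 5.0, 5.0, 1.6)]
    for cause, effect in candidate_interactions(log):
        print(f"{cause.obj}:{cause.kind} -> {effect.obj}:{effect.kind}")
```

On this toy log, the ball's fall is linked to the seesaw's tilt while the distant lamp is ignored, which mirrors the abstract's point that spatio-temporal co-occurrence can stand in for an explicit relation knowledge base.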
Signal Processing and Communications Applications Conference | 2012
Mustafa Ersen; Sanem Sariel-Talay
In this study, we present how interactions among objects are learned from a given set of actions without any intermediate information about the states of the objects. We use The Incredible Machine game as a suitable testbed for analyzing these types of interactions. When a knowledge base about relations among objects is provided, the interactions needed to devise new plans are learned to the desired extent. Moreover, using spatial information about objects or temporal information about actions makes it feasible to learn the effects of objects on each other. Integrating spatial and temporal data in a spatio-temporal learning approach gives results close to those of the knowledge-based approach. This is promising because gathering spatio-temporal information does not require a great amount of prior knowledge.
International Symposium on Computers and Communications | 2012
Mustafa Ersen; Sanem Sariel-Talay
We propose a method for learning interactions among objects when intermediate state information is not available. Learning is accomplished by observing a given sequence of actions on different objects. We select The Incredible Machine game as a suitable domain for analyzing and learning object interactions. We first present how behaviors are represented by finite state machines using the given input. Then, we analyze the impact of knowledge about relations on the overall performance. Our analysis includes four different types of input: a knowledge base including part relations, spatial information, temporal information, and spatio-temporal information. We show that if a knowledge base about relations is provided, learning is accomplished to the desired extent. Our analysis also indicates that the spatio-temporal approach is superior to the spatial and temporal approaches and gives results close to those of the knowledge-based approach.
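Since this abstract mentions representing object behaviors as finite state machines, a small sketch of such a representation may help; the BehaviorFSM class and the lamp example below are assumptions made for illustration and do not reproduce the paper's encoding.

```python
# Minimal finite-state-machine sketch for a single object's behavior.
# States and triggers are illustrative, not taken from the paper.
class BehaviorFSM:
    def __init__(self, initial, transitions):
        # transitions maps (state, trigger) -> next_state
        self.state = initial
        self.transitions = transitions

    def observe(self, trigger):
        """Advance the machine if a transition is defined for the trigger;
        otherwise stay in the current state."""
        self.state = self.transitions.get((self.state, trigger), self.state)
        return self.state

# Example: a lamp that turns on when it receives power.
lamp = BehaviorFSM("off", {("off", "power_supplied"): "on",
                           ("on", "power_cut"): "off"})
print(lamp.observe("power_supplied"))  # -> on
print(lamp.observe("power_cut"))       # -> off
```

In this framing, learning would amount to inferring the transition table from observed action sequences, with relational, spatial, temporal, or spatio-temporal input constraining which triggers are plausible.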
IEEE Robotics & Automation Magazine | 2017
Mustafa Ersen; Erhan Oztop; Sanem Sariel
Service robots are expected to play an important role in our daily lives as our companions in home and work environments in the near future. An important requirement for fulfilling this expectation is to equip robots with skills to perform everyday manipulation tasks, the success of which is crucial for most home chores, such as cooking, cleaning, and shopping. Robots have been used successfully for manipulation tasks in well-structured and controlled factory environments for decades. Designing skills for robots working in uncontrolled human environments raises many potential challenges in various subdisciplines, such as computer vision, automated planning, and human-robot interaction. In spite of the recent progress in these fields, there are still challenges to tackle. This article outlines problems in different research areas related to mobile manipulation from the cognitive perspective, reviews recently published works and the state-of-the-art approaches to address these problems, and discusses open problems to be solved to realize robot assistants that can be used for manipulation tasks in unstructured human environments.
Signal Processing and Communications Applications Conference | 2014
Melodi Deniz Ozturk; Mustafa Ersen; Mehmet Biberci; Sanem Sariel; Hulya Yalcin
In this paper, a scene interpretation system is proposed for cognitive robots to detect failures during their action executions. The system combines object recognition and segmentation results to maintain a consistent model of the world. Objects in the scene are recognized using both color and depth information, and unknown objects are segmented by Euclidean clustering on the depth values. In addition to the locations of the objects, the world model includes several spatial relations that are useful in a tabletop object manipulation scenario: on, on_table, clear, and near. Experiments conducted using information gathered from the onboard RGB-D sensors of our Pioneer 3-AT and Pioneer 3-DX robots show that the proposed system can successfully create a consistent world model, including spatial relations, in an object manipulation scenario.
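To make the listed relations concrete, the sketch below derives on, near, and clear from axis-aligned 3-D bounding boxes; the Box record, the thresholds, and the geometric tests are illustrative assumptions, not the paper's implementation (on_table could be defined analogously as on with respect to a detected table surface).

```python
from dataclasses import dataclass

# Each object is reduced to an axis-aligned 3-D bounding box (meters).
@dataclass
class Box:
    name: str
    cx: float; cy: float; cz: float   # center
    sx: float; sy: float; sz: float   # extents along x, y, z

def overlap_xy(a, b):
    """Footprints of a and b overlap in the horizontal plane."""
    return (abs(a.cx - b.cx) <= (a.sx + b.sx) / 2 and
            abs(a.cy - b.cy) <= (a.sy + b.sy) / 2)

def on(a, b, eps=0.02):
    """a rests on b: footprints overlap and a's bottom meets b's top."""
    return overlap_xy(a, b) and abs((a.cz - a.sz / 2) - (b.cz + b.sz / 2)) <= eps

def near(a, b, dist=0.15):
    """Horizontal center distance of a and b is within a small threshold."""
    return ((a.cx - b.cx) ** 2 + (a.cy - b.cy) ** 2) ** 0.5 <= dist

def clear(a, others):
    """Nothing else in the scene rests on top of a."""
    return not any(on(o, a) for o in others if o is not a)

cup = Box("cup", 0.0, 0.0, 0.10, 0.06, 0.06, 0.10)
table = Box("table", 0.0, 0.0, 0.025, 1.0, 1.0, 0.05)
print(on(cup, table), clear(table, [cup, table]))  # True False
```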
Signal Processing and Communications Applications Conference | 2013
Sertac Karapinar; Mustafa Ersen; Melis Kapotoglu; Petek Yildiz; Sanem Sariel-Talay; Hulya Yalcin
A cognitive robot may face several types of failures during the execution of its actions in the physical world. In this paper, we investigate how robots can ensure robustness by gaining experience from action executions, and we propose a lifelong experimental learning method to derive new hypotheses. The proposed learning process takes into account the actions, the objects of interest, and their relations to guide the robot's future decisions. We use Inductive Logic Programming (ILP) as the learning method to frame hypotheses for both efficient execution types and failure situations. ILP learning provides first-order logical representations of the derived hypotheses that are useful for reasoning and planning processes. Experience gained through incremental learning is used to guide the robot's future decisions toward robust execution. In the experiments, the performance of ILP learning is analyzed on a Pioneer 3-DX robot in comparison to attribute-based learners. The results reveal that the hypotheses framed for failure cases are sound and ensure safety in the robot's future tasks.
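As a rough illustration of the kind of first-order hypotheses ILP can frame for failure cases, the sketch below encodes one such rule and scores it against a small execution history; the predicates (fragile, wet), the Execution record, and the accuracy measure are assumptions made for the example, not the paper's representation.

```python
from dataclasses import dataclass

@dataclass
class Execution:
    action: str
    facts: set      # ground atoms observed before execution, e.g. {"fragile(obj1)"}
    outcome: str    # "success" or "failure"

# Hypothesis (illustrative): pick_up fails when the target is fragile
# and the gripper is wet.
def failure_hypothesis(ex, obj="obj1"):
    return (ex.action == "pick_up"
            and f"fragile({obj})" in ex.facts
            and "wet(gripper)" in ex.facts)

def accuracy(hypothesis, history):
    """Fraction of past executions whose outcome the hypothesis predicts correctly."""
    correct = sum((hypothesis(ex) and ex.outcome == "failure")
                  or (not hypothesis(ex) and ex.outcome == "success")
                  for ex in history)
    return correct / len(history)

history = [
    Execution("pick_up", {"fragile(obj1)", "wet(gripper)"}, "failure"),
    Execution("pick_up", {"fragile(obj1)"}, "success"),
    Execution("pick_up", {"wet(gripper)"}, "success"),
]
print(accuracy(failure_hypothesis, history))  # 1.0
```

An ILP learner would search over such clauses and keep those whose coverage of past failures and successes meets its acceptance criteria, yielding rules a planner could consult before committing to an action.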
National Conference on Artificial Intelligence | 2014
Melodi Deniz Ozturk; Mustafa Ersen; Melis Kapotoglu; Cagatay Koc; Sanem Sariel-Talay; Hulya Yalcin
KIK@KI | 2013
Mustafa Ersen; Sanem Sariel-Talay; Hulya Yalcin
National Conference on Artificial Intelligence | 2013
Sertac Karapinar; Sanem Sariel-Talay; Petek Yildiz; Mustafa Ersen
IEEE Transactions on Systems, Man, and Cybernetics | 2018
Arda Inceoglu; Cagatay Koc; Besim Ongun Kanat; Mustafa Ersen; Sanem Sariel