Alen Vrečko
University of Ljubljana
Publications
Featured research published by Alen Vrečko.
IEEE Transactions on Autonomous Mental Development | 2010
Jeremy L. Wyatt; Alper Aydemir; Michael Brenner; Marc Hanheide; Nick Hawes; Patric Jensfelt; Matej Kristan; Geert-Jan M. Kruijff; Pierre Lison; Andrzej Pronobis; Kristoffer Sjöö; Alen Vrečko; Hendrik Zender; Michael Zillich; Danijel Skočaj
There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.
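To make the idea of self-understanding more concrete, the sketch below shows one way a knowledge gap could be represented explicitly and turned into a learning goal. It is a minimal illustration only; the class and function names are hypothetical and do not correspond to the actual architecture described in the paper.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeGap:
    """An explicit record of something the system knows it does not know."""
    entity_id: str        # which belief / object the gap refers to
    feature: str          # e.g. "colour", "shape", "location"
    uncertainty: float    # 0.0 = certain, 1.0 = completely unknown

@dataclass
class LearningGoal:
    gap: KnowledgeGap
    priority: float       # used by the goal-management system to schedule goals

def generate_goals(gaps, threshold=0.3):
    """Turn sufficiently large gaps into learning goals, highest uncertainty first."""
    goals = [LearningGoal(g, priority=g.uncertainty) for g in gaps if g.uncertainty > threshold]
    return sorted(goals, key=lambda g: g.priority, reverse=True)

# Example: the system is unsure about the colour of object "obj3".
gaps = [KnowledgeGap("obj3", "colour", 0.8), KnowledgeGap("obj1", "shape", 0.1)]
for goal in generate_goals(gaps):
    print(f"plan action to resolve {goal.gap.feature} of {goal.gap.entity_id}")
```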
Intelligent Robots and Systems | 2011
Danijel Skočaj; Matej Kristan; Alen Vrečko; Marko Mahnič; Miroslav Janíček; Geert-Jan M. Kruijff; Marc Hanheide; Nick Hawes; Thomas Keller; Michael Zillich; Kai Zhou
In this paper we present representations and mechanisms that facilitate continuous learning of visual concepts in dialogue with a tutor, and show the implemented robot system. We present how beliefs about the world are created by processing visual and linguistic information, and show how they are used for planning system behaviour with the aim of satisfying the system's internal drive to extend its knowledge. The system facilitates different kinds of learning initiated by the human tutor or by the system itself. We demonstrate these principles in the case of learning about object colours and basic shapes.
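As a rough illustration of how a belief might be formed from visual and linguistic information, the snippet below fuses a vision-based colour estimate with a tutor's assertion using a simple product rule. The names, the smoothing constant, and the fusion rule itself are assumptions for illustration, not the belief model used in the paper.

```python
def normalise(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def fuse(visual, linguistic):
    """Combine two independent distributions over the same attribute
    (0.05 is an arbitrary smoothing value for unseen colours)."""
    colours = set(visual) | set(linguistic)
    return normalise({c: visual.get(c, 0.05) * linguistic.get(c, 0.05) for c in colours})

visual_estimate = {"red": 0.5, "orange": 0.4, "yellow": 0.1}   # from the vision subsystem
tutor_assertion = {"red": 0.9, "orange": 0.1}                  # "This object is red."

belief = fuse(visual_estimate, tutor_assertion)
print(belief)  # the fused belief is dominated by "red"
```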
Intelligent Robots and Systems | 2009
Alen Vrečko; Danijel Skočaj; Nick Hawes; Aleš Leonardis
We present a general method for integrating visual components into a multi-modal cognitive system. The integration is very generic and can work with an arbitrary set of modalities. We illustrate our integration approach with a specific instantiation of the architecture schema that focuses on integration of vision and language: a cognitive system able to collaborate with a human, learn and display some understanding of its surroundings. As examples of cross-modal interaction we describe mechanisms for clarification and visual learning.
International Conference on Advanced Robotics | 2011
Kai Zhou; Andreas Richtsfeld; Michael Zillich; Markus Vincze; Alen Vrečko; Danijel Skočaj
Semantic visual perception for knowledge acquisition plays an important role in human cognition, as well as in the learning process of any cognitive robot. In this paper, we present a visual information abstraction mechanism designed for continuously learning robotic systems. We generate spatial information in the scene by considering plane estimation and stereo line detection coherently within a unified probabilistic framework, and show how spaces of interest (SOIs) are generated and segmented using the spatial information. We also demonstrate how the existence of SOIs is validated in the long-term learning process. The proposed mechanism facilitates robust visual information abstraction, which is a requirement for continuous interactive learning. Experiments demonstrate that, with the refined spatial information, our approach provides an accurate and plausible representation of visual objects.
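A highly simplified sketch of the SOI idea, assuming a roughly planar supporting surface and using a plain least-squares plane plus grid binning in place of the paper's unified probabilistic framework (all names and thresholds below are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an Nx3 point cloud
    (assumes a roughly horizontal supporting surface such as a table)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def spaces_of_interest(points, margin=0.02, cell=0.05):
    """Group points that stick out above the plane into coarse SOIs
    by binning their (x, y) coordinates into grid cells."""
    a, b, c = fit_plane(points)
    height = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    above = points[height > margin]
    sois = {}
    for p in above:
        key = (round(p[0] / cell), round(p[1] / cell))
        sois.setdefault(key, []).append(p)
    return [np.array(v) for v in sois.values()]

# Synthetic example: a flat table with a small box of points on top of it.
table = np.random.rand(500, 3) * [1.0, 1.0, 0.005]
box = np.random.rand(50, 3) * [0.1, 0.1, 0.1] + [0.4, 0.4, 0.0]
print(len(spaces_of_interest(np.vstack([table, box]))), "SOI(s) found")
```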
Robotics and Biomimetics | 2010
Kai Zhou; Michael Zillich; Markus Vincze; Alen Vrečko; Danijel Skočaj
Attention operators based on 2D image cues (such as color and texture) are well known and discussed extensively in the vision literature, but they are not ideally suited for robotic applications. In such contexts it is the 3D structure of scene elements that makes them interesting or not. We show how a bottom-up exploration mechanism that selects spaces of interest (SOIs) based on scene elements that pop out from planes is used within a larger architecture for a cognitive system. This mechanism reduces object localization to single-plane detection, which is, however, not practical for real scenes that contain objects with complicated structure (e.g. objects on a multi-layer shelf). The key requirement in such situations is therefore multi-plane estimation, which is solved in this paper using Particle Swarm Optimization (PSO).
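The sketch below shows a generic PSO loop fitting a single plane of the form z = a*x + b*y + c to a point cloud, as an assumption-laden illustration of how PSO can be applied to plane estimation; the paper's actual multi-plane formulation and fitness function are not reproduced here.

```python
import numpy as np

def plane_error(params, points):
    """Mean absolute residual of z = a*x + b*y + c over the point cloud."""
    a, b, c = params
    return np.mean(np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)))

def pso_fit_plane(points, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, 3))          # candidate (a, b, c) triples
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([plane_error(p, points) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        err = np.array([plane_error(p, points) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)].copy()
    return gbest

# Noisy points sampled from the plane z = 0.1*x - 0.2*y + 0.5
pts = np.random.rand(200, 3)
pts[:, 2] = 0.1 * pts[:, 0] - 0.2 * pts[:, 1] + 0.5 + 0.01 * np.random.randn(200)
print(pso_fit_plane(pts))  # should be close to (0.1, -0.2, 0.5)
```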
Cognitive Systems | 2010
Nick Hawes; Jeremy L. Wyatt; Mohan Sridharan; Marek Sewer Kopicki; Somboon Hongeng; I A Calvert; Aaron Sloman; Geert-Jan M. Kruijff; Henrik Jacobsson; Michael Brenner; Danijel Skočaj; Alen Vrečko; Nikodem Majer; Michael Zillich
Research in CoSy was scenario-driven. Two scenarios were created, the PlayMate and the Explorer. One of the integration goals of the project was to build integrated systems that addressed the tasks in these two scenarios. This chapter concerns the integrated system for the PlayMate scenario.
Journal of Experimental and Theoretical Artificial Intelligence | 2016
Danijel Skočaj; Alen Vrečko; Marko Mahnič; Miroslav Janíček; Geert-Jan M. Kruijff; Marc Hanheide; Nick Hawes; Jeremy L. Wyatt; Thomas Keller; Kai Zhou; Michael Zillich; Matej Kristan
This article presents an integrated robot system capable of interactive learning in dialogue with a human. Such a system needs to have several competencies and must be able to process different types of representations. In this article, we describe a collection of mechanisms that enable integration of heterogeneous competencies in a principled way. Central to our design is the creation of beliefs from visual and linguistic information, and the use of these beliefs for planning system behaviour to satisfy internal drives. The system is able to detect gaps in its knowledge and to plan and execute actions that provide the information needed to fill these gaps. We propose a hierarchy of mechanisms which are capable of engaging in different kinds of learning interactions, e.g. those initiated by a tutor or by the system itself. We present the theory these mechanisms are built upon and an instantiation of this theory in the form of an integrated robot system. We demonstrate the operation of the system in the case of learning conceptual models of objects and their visual properties.
Neurocomputing | 2012
Alen Vrečko; Aleš Leonardis; Danijel Skočaj
Binding - the ability to combine two or more modal representations of the same entity into a single shared representation - is vital for every cognitive system operating in a complex environment. In order to successfully adapt to changes in a dynamic environment, the binding mechanism has to be supplemented with cross-modal learning. In this paper we define the problems of high-level binding and cross-modal learning. Based on these definitions, we model a binding mechanism in a Markov logic network and define its role in a cognitive architecture. We evaluate a prototype binding system off-line, using three different inference methods.
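For background, a Markov logic network attaches a real-valued weight w_i to each first-order formula F_i, and the probability of a possible world x takes the standard log-linear form

P(X = x) = \frac{1}{Z} \exp\left( \sum_i w_i \, n_i(x) \right)

where n_i(x) is the number of true groundings of F_i in x and Z is the partition function normalising over all worlds. In the binding setting, one such formula might, with a positive weight, state that a visual and a linguistic representation with matching attribute values refer to the same entity; this example formula is hypothetical, and the concrete formulas, weights, and inference methods used in the paper are not reproduced here.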
Human-Robot Interaction | 2013
Michael Zillich; Kai Zhou; Danijel Skočaj; Matej Kristan; Alen Vrečko; Miroslav Janíček; Geert-Jan M. Kruijff; Thomas Keller; Marc Hanheide; Nick Hawes; Marko Mahnič
The video presents the robot George learning visual concepts in dialogue with a tutor.
International Conference on Adaptive and Natural Computing Algorithms | 2011
Alen Vrečko; Danijel Skočaj; Aleš Leonardis
Binding - the ability to combine two or more modal representations of the same entity into a single shared representation - is vital for every cognitive system operating in a complex environment. In order to successfully adapt to changes in a dynamic environment, the binding mechanism has to be supplemented with cross-modal learning. In this paper we define the problems of high-level binding and cross-modal learning. Based on these definitions, we model a binding mechanism and a cross-modal learner in a Markov logic network and test the system on a synthetic object database.