Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Andreas C. Damianou is active.

Publications


Featured research published by Andreas C. Damianou.


Knowledge Discovery and Data Mining | 2014

Active learning for sparse Bayesian multilabel classification

Deepak Vasisht; Andreas C. Damianou; Manik Varma; Ashish Kapoor

We study the problem of active learning for multilabel classification. We focus on the real-world scenario where the average number of positive (relevant) labels per data point is small leading to positive label sparsity. Carrying out mutual information based near-optimal active learning in this setting is a challenging task since the computational complexity involved is exponential in the total number of labels. We propose a novel inference algorithm for the sparse Bayesian multilabel model of [17]. The benefit of this alternate inference scheme is that it enables a natural approximation of the mutual information objective. We prove that the approximation leads to an identical solution to the exact optimization problem but at a fraction of the optimization cost. This allows us to carry out efficient, non-myopic, and near-optimal active learning for sparse multilabel classification. Extensive experiments reveal the effectiveness of the method.
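
As a rough, hypothetical illustration of uncertainty-driven label acquisition (a simpler relative of the mutual-information objective above, not the paper's algorithm; the toy data, logistic model, and entropy rule below are assumptions):

```python
# Illustrative entropy-based active learning on a toy binary problem.
# The paper's method is mutual-information-based and multilabel; this
# simpler uncertainty-sampling loop only conveys the general workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Seed set with both classes present; the rest of the data forms the pool.
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]

for step in range(20):
    clf = LogisticRegression().fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])                      # (n_pool, 2)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)  # predictive entropy
    pick = pool[int(np.argmax(entropy))]                    # most uncertain point
    labelled.append(pick)                                   # "query" its label
    pool.remove(pick)

print("accuracy after active labelling:", clf.score(X, y))
```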


Robotics and Biomimetics | 2016

An integrated probabilistic framework for robot perception, learning and memory

Uriel Martinez-Hernandez; Andreas C. Damianou; Daniel Camilleri; Luke Boorman; Neil D. Lawrence; Tony J. Prescott

Learning and perception from multiple sensory modalities are crucial processes for the development of intelligent systems capable of interacting with humans. We present an integrated probabilistic framework for perception, learning and memory in robotics. The core component of our framework is a computational Synthetic Autobiographical Memory model which uses Gaussian Processes as a foundation and mimics the functionalities of human memory. Our memory model, which operates within a principled Bayesian probabilistic framework, is capable of receiving and integrating data flows from multiple sensory modalities, which are combined to improve perception and understanding of the surrounding environment. To validate the model, we implemented our framework on the iCub humanoid robot, which was able to learn and recognise human faces, arm movements and touch gestures through interaction with people. Results demonstrate the flexibility of our method in successfully integrating multiple sensory inputs for accurate learning and recognition. Thus, our integrated probabilistic framework offers a promising core technology for robust intelligent systems that are able to perceive, learn and interact with people and their environments.
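
A minimal sketch of the multi-modal fusion idea, under toy assumptions: the SAM model itself is built on Gaussian process latent variable models, whereas here PCA stands in for the shared latent space and a GP classifier for the recognition step, purely for illustration.

```python
# Illustrative multi-modal fusion through a shared latent space and a
# GP-based recognition step (toy data; not the SAM implementation).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 3, size=n)             # e.g. three face identities

# Two toy "sensory modalities" that both depend on the identity.
vision = rng.normal(size=(n, 20)) + labels[:, None]
touch = rng.normal(size=(n, 5)) + 0.5 * labels[:, None]

# Standardise each modality, concatenate, and embed in a shared latent space.
fused = np.hstack([StandardScaler().fit_transform(vision),
                   StandardScaler().fit_transform(touch)])
latent = PCA(n_components=4).fit_transform(fused)

# Recognition from the fused latent representation.
clf = GaussianProcessClassifier().fit(latent[:150], labels[:150])
print("held-out accuracy:", clf.score(latent[150:], labels[150:]))
```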


Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences | 2017

Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling

Paris Perdikaris; Maziar Raissi; Andreas C. Damianou; Neil D. Lawrence; George Em Karniadakis

Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
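
A minimal sketch of the nonlinear autoregressive construction described above, assuming a standard toy low-/high-fidelity function pair and scikit-learn GPs rather than the authors' implementation: the high-fidelity surrogate is trained on inputs augmented with the low-fidelity surrogate's prediction, so it can learn a nonlinear cross-correlation.

```python
# Illustrative nonlinear autoregressive multi-fidelity regression with
# scikit-learn GPs; the toy functions and kernel choices are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_lo(x):                                   # cheap, biased model
    return np.sin(8 * np.pi * x)

def f_hi(x):                                   # expensive model, nonlinearly related to f_lo
    return (x - np.sqrt(2)) * f_lo(x) ** 2

x_lo = np.linspace(0, 1, 50)[:, None]          # plentiful low-fidelity data
x_hi = np.linspace(0, 1, 8)[:, None]           # scarce high-fidelity data

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, f_lo(x_lo).ravel())

# Train the high-fidelity GP on inputs augmented with the low-fidelity
# prediction, i.e. learn a nonlinear map g(x, f_lo(x)).
aug_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor(RBF([0.1, 1.0])).fit(aug_hi, f_hi(x_hi).ravel())

# Predict at new points by chaining the two surrogates.
x_new = np.linspace(0, 1, 200)[:, None]
aug_new = np.hstack([x_new, gp_lo.predict(x_new)[:, None]])
pred = gp_hi.predict(aug_new)
print("max abs error:", np.abs(pred - f_hi(x_new).ravel()).max())
```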


Conference on Biomimetic and Biohybrid Systems | 2015

A Top-Down Approach for a Synthetic Autobiographical Memory System

Andreas C. Damianou; Carl Henrik Ek; Luke Boorman; Neil D. Lawrence; Tony J. Prescott

Autobiographical memory (AM) refers to the organisation of one's experience into a coherent narrative. The exact neural mechanisms responsible for the manifestation of AM in humans are unknown. On the other hand, the field of psychology has provided us with useful understanding of the functionality of a bio-inspired synthetic AM (SAM) system at a higher level of description. This paper is concerned with a top-down approach to SAM, where known components and organisation guide the architecture but the unknown details of each module are abstracted. By using Bayesian latent variable models we obtain a transparent SAM system with which we can interact in a structured way. This allows us to reveal the properties of specific sub-modules and map them to functionality observed in biological systems. The top-down approach can cope well with the high performance requirements of a bio-inspired cognitive system. This is demonstrated in experiments using face data.
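
As a hypothetical illustration of latent-variable "recall" (the paper uses Bayesian GP-LVMs; PCA is used here only to keep the sketch short): memories are compressed to latent codes, a corrupted cue is projected into the same space, matched to the nearest stored code, and reconstructed.

```python
# Illustrative latent-space recall: compress memories, project a noisy cue,
# match to the nearest stored code, reconstruct. PCA stands in for the
# Bayesian latent variable models used in the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
memories = rng.normal(size=(100, 64))            # toy "face" feature vectors

pca = PCA(n_components=8).fit(memories)
codes = pca.transform(memories)                  # stored latent memories

cue = memories[42] + 0.3 * rng.normal(size=64)   # corrupted version of item 42
cue_code = pca.transform(cue[None, :])

nearest = int(np.argmin(np.linalg.norm(codes - cue_code, axis=1)))
recalled = pca.inverse_transform(codes[nearest][None, :])  # reconstructed memory
print("recalled memory index:", nearest)         # expected: 42
```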


IEEE Transactions on Cognitive and Developmental Systems | 2017

DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

Clément Moulin-Frier; Tobias Fischer; Maxime Petit; Grégoire Pointeau; Jordi-Ysard Puigbò; Ugo Pattacini; Sock Ching Low; Daniel Camilleri; Phuong D. H. Nguyen; Matej Hoffmann; Hyung Jin Chang; Martina Zambelli; Anne-Laure Mealier; Andreas C. Damianou; Giorgio Metta; Tony J. Prescott; Yiannis Demiris; Peter Ford Dominey; Paul F. M. J. Verschure

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both human and robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
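
The sketch below is a highly simplified, hypothetical mixed-initiative control loop: goals can be pushed by a human or generated by the robot's own drives, and a single arbitration step decides what to execute. The class, module, and goal names are illustrative only and do not correspond to DAC-h3 components.

```python
# Hypothetical mixed-initiative loop; names are illustrative only.
from collections import deque

class MixedInitiativeAgent:
    def __init__(self):
        self.goals = deque()
        self.memory = []                      # autobiographical log of events

    def human_request(self, goal):
        self.goals.appendleft(goal)           # human-initiated goals get priority

    def self_generate_goal(self, percept):
        if "unknown_object" in percept:       # robot-initiated curiosity drive
            self.goals.append("ask_name_of_object")

    def step(self, percept):
        self.self_generate_goal(percept)
        if self.goals:
            goal = self.goals.popleft()
            self.memory.append((percept, goal))
            return f"executing: {goal}"
        return "idle"

agent = MixedInitiativeAgent()
agent.human_request("point_to_object")
print(agent.step({"unknown_object"}))         # the human's goal is served first
print(agent.step({"unknown_object"}))         # then the robot's own question
```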


International Conference on Robotics and Automation | 2016

Probabilistic consolidation of grasp experience

Yasemin Bekiroglu; Andreas C. Damianou; Renaud Detry; Johannes A. Stork; Danica Kragic; Carl Henrik Ek

We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.
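
A minimal, hypothetical sketch of one capability listed above, grasp-stability estimation with calibrated confidence; a GP classifier on invented grasp features stands in for the paper's non-linear latent variable model.

```python
# Illustrative grasp-stability estimation with a probabilistic classifier on
# invented grasp features (not the paper's latent variable model or data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(3)
n = 300

# Toy grasp features: [gripper_aperture, wrist_roll, mean_tactile_pressure]
features = rng.uniform(0, 1, size=(n, 3))
stable = (0.6 * features[:, 2] - 0.3 * features[:, 0]
          + 0.1 * rng.normal(size=n) > 0.1).astype(int)

model = GaussianProcessClassifier().fit(features[:200], stable[:200])

# Confidence-aware prediction for a new grasp.
new_grasp = np.array([[0.4, 0.5, 0.8]])
p_stable = model.predict_proba(new_grasp)[0, 1]
print(f"P(stable) = {p_stable:.2f}")
print("held-out accuracy:", model.score(features[200:], stable[200:]))
```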


Conference on Biomimetic and Biohybrid Systems | 2016

iCub Visual Memory Inspector: Visualising the iCub’s Thoughts

Daniel Camilleri; Andreas C. Damianou; Harry Jackson; Neil D. Lawrence; Tony J. Prescott

This paper describes the integration of multiple sensory recognition models created by a Synthetic Autobiographical Memory into a structured system. This structured system provides high level control of the overall architecture and interfaces with an iCub simulator based in Unity which provides a virtual space for the display of recollected events.


Conference on Biomimetic and Biohybrid Systems | 2015

Extending a Hippocampal Model for Navigation Around a Maze Generated from Real-World Data

Luke Boorman; Andreas C. Damianou; Uriel Martinez-Hernandez; Tony J. Prescott

An essential component in the formation of understanding is the ability to use past experience to comprehend the here and now, and to aid selection of future action. Past experience is stored as memories which are then available for recall at very short notice, allowing for understanding of short and long term action. Autobiographical memory (ABM) is a form of temporally organised memory: the organisation of episodes and contextual information from an individual's experience into a coherent narrative, which is key to a sense of self. Formation and recall of memories is essential for effective and adaptive behaviour in the world, providing contextual information necessary for planning actions and memory functions, such as event reconstruction. Here we tested and developed a previously defined computational memory model, based on hippocampal structure and function, as a first step towards developing a synthetic model of human ABM (SAM). The hippocampal model chosen has functions analogous to those of human ABM. We trained the model on real-world sensory data and demonstrate successful, biologically plausible memory formation and recall in a navigational task. The hippocampal model will later be extended for application in a biologically inspired system for human-robot interaction.
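
The paper builds on a specific hippocampal model; as a generic, hypothetical illustration of the pattern-completion style of recall associated with hippocampal (CA3-like) memory, the sketch below uses a tiny Hopfield-style autoassociative network that recovers a stored pattern from a corrupted cue.

```python
# Tiny Hopfield-style autoassociative network: store a few binary patterns via
# Hebbian weights and recover one from a corrupted cue (generic illustration,
# not the hippocampal model used in the paper).
import numpy as np

rng = np.random.default_rng(4)
patterns = rng.choice([-1, 1], size=(3, 100))        # three stored "memories"

# Hebbian weight matrix, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)

# Corrupt one memory by flipping 15 of its 100 units.
cue = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)
cue[flip] *= -1

# Synchronous recall updates (pattern completion).
state = cue.copy()
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

print("units matching the stored memory:",
      int(np.sum(state == patterns[0])), "/ 100")
```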


International World Wide Web Conferences | 2018

Leveraging Crowdsourcing Data For Deep Active Learning – An Application: Learning Intents in Alexa

Jie Yang; Thomas Drake; Andreas C. Damianou; Yoelle Maarek

This paper presents a generic Bayesian framework that enables any deep learning model to actively learn from targeted crowds. Our framework inherits from recent advances in Bayesian deep learning, and extends existing work by considering the targeted crowdsourcing approach, where multiple annotators with unknown expertise contribute an uncontrolled amount (often limited) of annotations. Our framework leverages the low-rank structure in annotations to learn individual annotator expertise, which then helps to infer the true labels from noisy and sparse annotations. It provides a unified Bayesian model to simultaneously infer the true labels and train the deep learning model in order to reach an optimal learning efficacy. Finally, our framework exploits the uncertainty of the deep learning model during prediction as well as the annotators' estimated expertise to minimize the number of required annotations and annotators for optimally training the deep learning model. We evaluate the effectiveness of our framework for intent classification in Alexa (Amazon's personal assistant), using both synthetic and real-world datasets. Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the amount of annotations in model training as compared to state-of-the-art approaches. We further discuss the potential of our proposed framework in bridging machine learning and crowdsourcing towards improved human-in-the-loop systems.
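
A hypothetical sketch of the core aggregation idea, jointly estimating annotator expertise and true labels from sparse, noisy annotations, using a simple "one-coin" EM scheme rather than the paper's Bayesian deep model:

```python
# Illustrative joint estimation of annotator expertise and true labels from
# sparse, noisy crowd annotations with a "one-coin" EM scheme
# (a much-simplified stand-in for the paper's Bayesian deep model).
import numpy as np

rng = np.random.default_rng(5)
n_items, n_annot = 200, 8
true = rng.integers(0, 2, size=n_items)
skill_true = rng.uniform(0.6, 0.95, size=n_annot)      # hidden accuracies

# Sparse annotation matrix: NaN where an annotator skipped an item.
A = np.full((n_items, n_annot), np.nan)
for j in range(n_annot):
    idx = rng.choice(n_items, size=60, replace=False)
    correct = rng.random(60) < skill_true[j]
    A[idx, j] = np.where(correct, true[idx], 1 - true[idx])

# EM: alternate between soft label estimates and annotator accuracies.
skill = np.full(n_annot, 0.7)
for _ in range(20):
    # E-step: log-odds-weighted vote for P(label = 1) per item.
    logit = np.zeros(n_items)
    for j in range(n_annot):
        seen = ~np.isnan(A[:, j])
        logit[seen] += np.log(skill[j] / (1 - skill[j])) * (2 * A[seen, j] - 1)
    p1 = 1 / (1 + np.exp(-logit))
    # M-step: each annotator's expected agreement with the soft labels.
    for j in range(n_annot):
        seen = ~np.isnan(A[:, j])
        agree = A[seen, j] * p1[seen] + (1 - A[seen, j]) * (1 - p1[seen])
        skill[j] = np.clip(agree.mean(), 0.55, 0.99)

est = (p1 > 0.5).astype(int)
print("inferred-label accuracy:", (est == true).mean())
```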


Conference Towards Autonomous Robotic Systems | 2016

A Bioinspired Approach to Vision

Daniel Camilleri; Luke Boorman; Uriel Martinez; Andreas C. Damianou; Tony J. Prescott

This paper describes the design of a computational vision framework inspired by the cortices of the brain. The proposed framework carries out visual saliency and provides pathways through which object segmentation, learning and recognition skills can be learned and acquired through experience.
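
As a hypothetical illustration of one ingredient named above, visual saliency, the sketch below computes a centre-surround map as a difference of Gaussian blurs (an Itti-Koch-style intensity channel, not the paper's framework):

```python
# Illustrative centre-surround saliency map (difference of Gaussian blurs)
# on a synthetic image; kernel scales are arbitrary assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
image = rng.random((128, 128)) * 0.1
image[60:70, 60:70] = 1.0                     # a bright, salient patch

centre = gaussian_filter(image, sigma=2)      # fine-scale response
surround = gaussian_filter(image, sigma=10)   # coarse-scale context
saliency = np.abs(centre - surround)
saliency /= saliency.max()

peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print("most salient location:", peak)         # expected inside the bright patch
```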

Collaboration


An overview of Andreas C. Damianou's collaborations.

Top Co-Authors

Carl Henrik Ek (Royal Institute of Technology)
Luke Boorman (University of Sheffield)
Michalis K. Titsias (Athens University of Economics and Business)
Guilherme A. Barreto (Federal University of Ceará)