Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Calma is active.

Publication


Featured research published by Adrian Calma.


Information Sciences | 2015

Transductive active learning – A new semi-supervised learning approach based on iteratively refined generative models to capture structure in data

Tobias Reitmaier; Adrian Calma; Bernhard Sick

Pool-based active learning is a paradigm where users (e.g., domain experts) are iteratively asked to label initially unlabeled data, e.g., to train a classifier from these data. An appropriate selection strategy has to choose unlabeled data for such user queries in an efficient and effective way (in principle, high classification performance at low labeling costs). In our transductive active learning approach we provide a completely labeled data pool (samples are either labeled by the experts or in a semi-supervised way) in each active learning cycle. Thereby, a key aspect is to explore and exploit information about structure in data. Structure in data can be detected and modeled by means of clustering algorithms or probabilistic, generative modeling techniques, for instance. Usually, this is done at the beginning of the active learning process when the data are still unlabeled. In our approach we show how a probabilistic generative model, initially parametrized with unlabeled data, can be iteratively refined and improved as more and more labels become available during the active learning process. In each cycle of the active learning process we use this generative model to label all samples not yet labeled by an expert, in order to train the kind of classifier targeted by the active learning process. Thus, this transductive learning process can be combined with any selection strategy and any kind of classifier. Here, we combine it with the 4DS selection strategy and the CMM probabilistic classifier described in previous work. For 20 publicly available benchmark data sets, we show that this new transductive learning process noticeably improves pool-based active learning.
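The abstract describes a loop in which a generative model fitted on the unlabeled pool assigns provisional labels to everything the expert has not labeled yet, so each cycle trains on a fully labeled pool. The sketch below illustrates that loop with a Gaussian mixture and plain uncertainty sampling as stand-ins for the paper's CMM classifier and 4DS selection strategy; it is an assumption-laden illustration, not the authors' implementation.

```python
# Minimal sketch of transductive pool-based active learning, assuming a
# Gaussian mixture as the generative model and uncertainty sampling as the
# selection strategy (the paper uses CMM and 4DS instead).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X, y_expert = make_classification(n_samples=300, n_features=5, random_state=0)

# Capture structure in the (still unlabeled) data; the paper refines this
# model in every cycle, here it is fitted once for brevity.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
component = gmm.predict(X)

labeled_idx = list(rng.choice(len(X), size=4, replace=False))  # seed queries
clf = LogisticRegression(max_iter=1000)

for cycle in range(20):
    # 1) Transductive step: every unqueried sample inherits the majority
    #    expert label of its mixture component, so the pool is fully labeled.
    y_pool = np.empty(len(X), dtype=int)
    fallback = np.bincount(y_expert[labeled_idx]).argmax()
    for c in range(gmm.n_components):
        queried = [i for i in labeled_idx if component[i] == c]
        y_pool[component == c] = (np.bincount(y_expert[queried]).argmax()
                                  if queried else fallback)
    y_pool[labeled_idx] = y_expert[labeled_idx]   # expert labels stay fixed

    # 2) Train the target classifier on the completely labeled pool.
    clf.fit(X, y_pool)

    # 3) Query the most uncertain still-unlabeled sample from the expert.
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled_idx)
    margin = np.abs(clf.predict_proba(X[unlabeled])[:, 1] - 0.5)
    labeled_idx.append(int(unlabeled[np.argmin(margin)]))
```

Because the pool is completely labeled in every cycle, step 2 can use any classifier, which is the point the abstract emphasizes.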


International Conference on Autonomic Computing | 2016

Lifelong Learning and Collaboration of Smart Technical Systems in Open-Ended Environments -- Opportunistic Collaborative Interactive Learning

Gernot Bahle; Adrian Calma; Jan Marco Leimeister; Paul Lukowicz; Sarah Oeste-Reiss; Tobias Reitmaier; Albrecht Schmidt; Bernhard Sick; Gerd Stumme; Katharina Anna Zweig

Today, so-called “smart” or “intelligent” systems heavily rely on machine learning techniques to adjust their behavior by means of sample data (e.g., sensor observations). However, it becomes more and more complicated or even impossible to provide those data at design time of such a system. As a consequence, these systems have to learn at run time. Moreover, these systems will have to self-organize their learning processes. They have to decide which information or knowledge source they use at which time, depending on the quality of the information or knowledge they collect, the availability of these sources, the costs of gathering the information or knowledge, etc. With this article, we propose opportunistic collaborative interactive learning (O-CIL) as a new learning principle for future, even “smarter” systems. O-CIL will enable “lifelong” or “never-ending” learning of such systems in open-ended (i.e., time-variant) environments, based on active behavior and collaboration of such systems. Not only do these systems collaborate; humans also collaborate, either directly or indirectly, by interacting with these systems. The article characterizes O-CIL, summarizes related work, sketches research challenges, and illustrates O-CIL with some preliminary results.
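As a toy illustration of the run-time decision the abstract mentions (which knowledge source to use, given quality, availability, and cost), the following sketch scores sources by a simple quality-per-cost ratio. The example sources, the scores, and the ratio itself are illustrative assumptions, not part of the article.

```python
# Hedged sketch: selecting a knowledge source at run time by weighing
# estimated quality against acquisition cost, skipping unavailable sources.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class KnowledgeSource:
    name: str
    quality: float      # estimated reliability/informativeness in [0, 1]
    cost: float         # e.g., expert time, energy, bandwidth
    available: bool     # availability can change in open-ended environments

def pick_source(sources: List[KnowledgeSource]) -> Optional[KnowledgeSource]:
    candidates = [s for s in sources if s.available]
    if not candidates:
        return None     # fall back to the system's own observations
    return max(candidates, key=lambda s: s.quality / (1.0 + s.cost))

sources = [
    KnowledgeSource("human expert", quality=0.95, cost=5.0, available=True),
    KnowledgeSource("peer system", quality=0.70, cost=0.5, available=True),
    KnowledgeSource("public database", quality=0.60, cost=0.1, available=False),
]
print(pick_source(sources).name)   # -> "peer system" under this cost model
```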


2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W) | 2017

Learning to Learn: Dynamic Runtime Exploitation of Various Knowledge Sources and Machine Learning Paradigms

Adrian Calma; Daniel Kottke; Bernhard Sick; Sven Tomforde

The ability to learn at runtime is a fundamental prerequisite for self-adaptive and self-organising systems, as it allows them to deal with unanticipated conditions and dynamic environments. Often, this machine learning process has to be highly or fully autonomous. That is, the degree of interaction with humans must be reduced to a minimum. In principle, there exist various learning paradigms for this task such as transductive learning, reinforcement learning, collaborative learning, or, if interaction with humans is allowed but has to be efficient, active learning. These paradigms are based on different knowledge sources such as appropriate sensor measurements, humans, or databases, as well as on access models considering, e.g., availability or reliability. In this article, we propose a novel meta-learning approach that aims at dynamically exploiting various possible combinations of knowledge sources and machine learning paradigms at runtime. The approach is learning in the sense that it self-optimises a certain objective function (e.g., it maximises classification accuracy) at runtime. We present an architectural concept for this learning scheme, discuss some possible use cases to highlight the benefits, and derive a research agenda for future work in this field.
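One way to read the proposed scheme is as an online selection problem over (knowledge source, learning paradigm) combinations. The sketch below frames it as an epsilon-greedy bandit that keeps picking the combination with the best observed objective value; the listed combinations, the reward placeholder, and the epsilon-greedy rule are assumptions for illustration, not the architecture proposed in the article.

```python
# Hedged sketch: run-time selection over source/paradigm combinations,
# treating each combination as a bandit arm and the objective (e.g.,
# classification accuracy) as the reward.
import random

combinations = [
    ("sensor stream", "transductive"),
    ("human expert", "active"),
    ("peer systems", "collaborative"),
    ("simulation", "reinforcement"),
]
stats = {c: {"reward": 0.0, "pulls": 0} for c in combinations}

def run_learning_step(combo):
    """Placeholder for training with that source/paradigm combination and
    evaluating the objective; returns an accuracy-like score."""
    return random.random()

def mean_reward(c):
    return stats[c]["reward"] / max(stats[c]["pulls"], 1)

for t in range(200):
    if random.random() < 0.1:                 # explore occasionally
        combo = random.choice(combinations)
    else:                                     # exploit the current best estimate
        combo = max(combinations, key=mean_reward)
    score = run_learning_step(combo)
    stats[combo]["reward"] += score
    stats[combo]["pulls"] += 1

print("currently preferred combination:", max(combinations, key=mean_reward))
```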


International Joint Conference on Neural Networks | 2016

Resp-kNN: A probabilistic k-nearest neighbor classifier for sparsely labeled data

Adrian Calma; Tobias Reitmaier; Bernhard Sick

Over the past few years, extensive research has been conducted to solve classification problems with the help of machine learning techniques. However, machine learning is data-driven, and obtaining labeled data is often challenging in real applications. Techniques that try to overcome this burden, especially in the presence of sparsely labeled data, can be found in the fields of semi-supervised and active learning, as both make use of unlabeled data. In this paper, a semi-supervised k-nearest neighbor classifier, called Resp-kNN, is proposed for sparsely labeled data. This classifier is based on a probabilistic mixture model and, therefore, combines the advantages of classifiers based on non-parametric density estimates (such as a classical k-nearest neighbor classifier based on Euclidean distance) and classifiers based on parametric density estimates (such as classifiers based on Gaussian mixtures). Experimental results on 21 publicly available benchmark data sets show that Resp-kNN is more robust (regarding the choice of k) and more effective for sparsely labeled classification than several standard methods.
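Based only on the description above, the following sketch shows one way a mixture model and a k-nearest neighbor classifier can be combined for sparsely labeled data: a Gaussian mixture fitted on all (mostly unlabeled) samples yields per-sample responsibility vectors, and kNN then classifies in that responsibility space. The exact construction used by Resp-kNN is not reproduced here; everything in the sketch is an assumption.

```python
# Hedged sketch: combining a parametric density estimate (Gaussian mixture,
# fitted without labels) with a non-parametric classifier (kNN) by running
# kNN on the mixture's responsibility vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
rng = np.random.default_rng(1)
labeled_idx = rng.choice(len(X), size=25, replace=False)   # sparsely labeled

# Parametric part: mixture fitted on all data, labels are not needed.
gmm = GaussianMixture(n_components=6, random_state=1).fit(X)
R = gmm.predict_proba(X)          # responsibility vector per sample

# Non-parametric part: kNN operating on the responsibility representation,
# trained only on the few labeled samples.
knn = KNeighborsClassifier(n_neighbors=5).fit(R[labeled_idx], y[labeled_idx])
accuracy = knn.score(np.delete(R, labeled_idx, axis=0),
                     np.delete(y, labeled_idx))
print(f"accuracy on the remaining samples: {accuracy:.2f}")
```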


Archive | 2016

From Active Learning to Dedicated Collaborative Interactive Learning

Adrian Calma; Jan Marco Leimeister; Paul Lukowicz; Sarah Oeste-Reiß; Tobias Reitmaier; Albrecht Schmidt; Bernhard Sick; Gerd Stumme; Katharina Anna Zweig


arXiv: Learning | 2015

A New Vision of Collaborative Active Learning

Adrian Calma; Tobias Reitmaier; Bernhard Sick; Paul Lukowicz


International Symposium on Neural Networks | 2018

Active Sorting – An Efficient Training of a Sorting Robot with Active Learning Techniques

Marek Herde; Daniel Kottke; Adrian Calma; Maarten Bieshaar; Stephan Deist; Bernhard Sick


International Symposium on Neural Networks | 2018

The Other Human in the Loop – A Pilot Study to Find Selection Strategies for Active Learning

Daniel Kottke; Adrian Calma; Denis Huseljic; Christoph Sandrock; George Kachergis; Bernhard Sick


International Symposium on Neural Networks | 2018

Active Learning With Realistic Data - A Case Study

Adrian Calma; Moritz Stolz; Daniel Kottke; Sven Tomforde; Bernhard Sick


Hawaii International Conference on System Sciences | 2018

Leveraging the Potentials of Dedicated Collaborative Interactive Learning: Conceptual Foundations to Overcome Uncertainty by Human-Machine Collaboration

Adrian Calma; Sarah Oeste-Reiß; Bernhard Sick; Jan Marco Leimeister

Collaboration


Dive into Adrian Calma's collaborations.

Top Co-Authors

Albrecht Schmidt

Ludwig Maximilian University of Munich

Katharina Anna Zweig

Kaiserslautern University of Technology

Georg Krempl

Otto-von-Guericke University Magdeburg

Jochen Kuhn

Kaiserslautern University of Technology
