Publications


Featured research published by Matthias Nickles.


Archive | 2004

Agents and Computational Autonomy

Matthias Nickles; Michael Rovatsos; Gerhard Weiss

In this paper we contend that adaptation and learning are essential in designing and building autonomous software systems for real-life applications. In particular, we argue that in dynamic, complex domains autonomy and adaptability go hand in hand, that is, that agents cannot make their own decisions if they are not provided with the ability to adapt to the changes occurring in the environment in which they are situated. In the second part, we maintain the need for taking up animal learning models and theories to overcome some serious problems in reinforcement
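The claim that autonomy requires adaptability can be made concrete with a small reinforcement-learning sketch. This is illustrative only, not code from the paper; `run_bandit` and all parameters are invented. An epsilon-greedy learner with a constant step size keeps tracking a two-armed bandit whose best arm switches mid-run, while an agent frozen at the switch point keeps exploiting the stale arm.

```python
import random

def run_bandit(adaptive: bool, steps: int = 4000, seed: int = 0) -> float:
    """Epsilon-greedy agent on a two-armed bandit whose best arm switches
    halfway through. A constant learning rate keeps the value estimates
    adaptive; the frozen agent stops exploring and updating at the switch.
    Returns the average reward earned after the environment changes."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                 # action-value estimates
    alpha, eps = 0.1, 0.1          # step size and exploration rate
    reward_after_switch = 0.0
    for t in range(steps):
        # the environment changes: the good arm swaps at the halfway point
        p = [0.9, 0.1] if t < steps // 2 else [0.1, 0.9]
        if adaptive or t < steps // 2:
            a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
        else:
            a = q.index(max(q))    # frozen: pure exploitation of stale values
        r = 1.0 if rng.random() < p[a] else 0.0
        if adaptive or t < steps // 2:
            q[a] += alpha * (r - q[a])   # keep learning
        if t >= steps // 2:
            reward_after_switch += r
    return reward_after_switch / (steps // 2)

adaptive = run_bandit(True)
frozen = run_bandit(False)
```

After the switch, the frozen agent keeps pulling the now-bad arm, while the adaptive agent's estimates decay and recover, so its post-switch average reward is higher.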


european conference on machine learning | 2009

Statistical relational learning with formal ontologies

Achim Rettinger; Matthias Nickles; Volker Tresp

We propose a learning approach for integrating formal knowledge into statistical inference by exploiting ontologies as a semantically rich and fully formal representation of prior knowledge. The logical constraints deduced from ontologies can be utilized to enhance and control the learning task by enforcing description logic satisfiability in a latent multi-relational graphical model. To demonstrate the feasibility of our approach we provide experiments using real-world social network data in the form of a $\mathcal{SHOIN}(D)$ ontology. The results illustrate two main practical advancements: First, entities and entity relationships can be analyzed via the latent model structure. Second, enforcing the ontological constraints guarantees that the learned model does not predict inconsistent relations. In our experiments, this leads to an improved predictive performance.


Machine Learning | 2011

Statistical relational learning of trust

Achim Rettinger; Matthias Nickles; Volker Tresp

The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster's point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, for example: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach is evaluated empirically on user ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved in the algorithm allows the truster to characterize the structure of a trust situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data does improve trust assessment performance; (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.


Engineering Applications of Artificial Intelligence | 2005

Expectation-oriented modeling

Matthias Nickles; Michael Rovatsos; Gerhard Weiss

This work introduces expectation-oriented modeling (EOM) as a conceptual and formal framework for the modeling and influencing of black- or gray-box agents and agent interaction from the viewpoint of modelers such as artificial agents and application designers. EOM is unique in that autonomous agent behavior is not restricted in advance, but only if this turns out to be necessary at runtime, exploiting a seamless combination of evolving probabilistic and normative behavioral expectations as the key modeling abstraction and as the primary level of analysis and influence. Expectations are attitudes which allow for relating observed actions and other events to the modeler's intentions and beliefs in an integrated, adaptive manner. In this regard, this work introduces a formal framework for the representation and the semantics of expectations embedded in social contexts. We see the applicability of EOM especially in open domains with a priori unknown and possibly unreliable and insincere actors, where the modeler cannot rely on cooperation or pursue her goals through the exertion of strictly normative power, e.g. the development and assertion of flexible interaction policies for trading platforms on the Internet, as illustrated in a case study. To our knowledge, EOM is the first approach to the modeling, cognitive analysis and influencing of social interaction that aims at tackling the level of expectations explicitly and systematically.


adaptive agents and multi-agents systems | 2003

Interaction is meaning: a new model for communication in open systems

Michael Rovatsos; Matthias Nickles; Gerhard Weiss

We propose a new model for agent communication in open systems that is based on the principle that the meaning of communicative acts lies in their experienced consequences. A formal framework for analysing such evolving semantics is defined. An extensive analysis of example interaction processes shows that our framework allows for an assessment of several properties of the communicative conventions governing a multiagent system. Among other advantages, our framework is capable of providing a very straightforward definition of communicative conflict. Also, it allows agents to reason about the effects of their communicative behaviour on the structure of communicative expectations as a whole when making decisions.


adaptive agents and multi-agents systems | 2004

Empirical-Rational Semantics of Agent Communication

Matthias Nickles; Michael Rovatsos; Gerhard Weiss

The lack of an appropriate semantics for agent communication languages is one of the most challenging issues of contemporary AI. Although several approaches to this problem exist, none of them is really suitable for dealing with agent autonomy, which is a decisive property of artificial agents. This paper introduces an observation-based approach to the semantics of agent communication, which combines benefits of the two most influential traditional approaches to agent communication semantics, namely the mentalistic (agent-centric) and the objectivist (i.e., commitment- or protocol-oriented) approach. Our approach makes use of the fact that the most general meaning of agent utterances lies in their expectable consequences in terms of agent actions, and that communications result from hidden but nevertheless rational and to some extent reliable agent intentions. In this work, we present a formal framework which enables the empirical derivation of communication meanings from the observation of rational agent utterances, and thereby introduce a probabilistic and utility-oriented perspective on social commitments.


Archive | 2002

Ordnung aus Chaos — Prolegomena zu einer Luhmann’schen Modellierung deentropisierender Strukturbildung in Multiagentensystemen

Kai F. Lorentzen; Matthias Nickles

Looking at the history of modern social theory (take Marx or Ogburn) with respect to the relationship between technological innovations and sociological research interests, it is striking that this relationship has almost always been cultivated one-sidedly. There were new technologies and sweeping social change, and then the sociologists came and explained how the two might be connected. No one thought to ask, conversely, whether sociological innovations could be of substantial technological interest. In the introduction to the much-cited essay collection "Technik als sozialer Prozess", Peter Weingart states apodictically that sociology "has nothing to contribute to technology development itself" (Weingart 1989: 11). Now, with distributed AI (VKI) and socionics, this situation may have changed.


cooperative information agents | 2007

Learning Initial Trust Among Interacting Agents

Achim Rettinger; Matthias Nickles; Volker Tresp

Trust learning is a crucial aspect of information exchange, negotiation, and any other kind of social interaction among autonomous agents in open systems. But most current probabilistic models for computational trust learning lack the ability to take context into account when trying to predict the future behavior of interacting agents. Moreover, they are not able to transfer knowledge gained in a specific context to a related context. Humans, by contrast, have proven to be especially skilled at perceiving traits like trustworthiness in such so-called initial trust situations. The same restriction applies to most multiagent learning problems: in complex scenarios most algorithms do not scale well to large state spaces and need numerous interactions to learn. We argue that trust-related scenarios are best represented in a system of relations to capture semantic knowledge. Following recent work on nonparametric Bayesian models we propose a flexible and context-sensitive way to model and learn multidimensional trust values which is particularly well suited to establishing trust among strangers without prior relationship. To evaluate our approach we extend a multiagent framework by allowing agents to break an agreed interaction outcome retrospectively. The results suggest that the inherent ability to discover clusters and relationships between clusters that are best supported by the data makes it possible to predict the future behavior of agents, especially when initial trust is involved.


Lecture Notes in Computer Science | 2005

Communication systems: a unified model of socially intelligent systems

Matthias Nickles; Michael Rovatsos; Wilfried Brauer; Gerhard Weiß


cooperative information agents | 2003

A Framework for the Social Description of Resources in Open Environments

Matthias Nickles; Gerhard Weiß
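The trust-learning papers above rest on Dirichlet process mixtures. As a rough, non-relational illustration of the clustering idea (a sketch under simplifying assumptions: every identifier here is invented, and the actual IHRTM is a multi-relational model, not plain Bernoulli histories), a collapsed Gibbs sampler with a Chinese Restaurant Process prior groups trustees by their observed interaction outcomes:

```python
import random
from math import lgamma, exp

def pred_prob(s_new, f_new, s_old=0.0, f_old=0.0):
    """Beta-Bernoulli posterior predictive (Beta(1,1) base measure):
    probability of s_new successes and f_new failures given a cluster
    that has already produced s_old successes and f_old failures."""
    a, b = s_old + 1.0, f_old + 1.0
    return exp(lgamma(a + b) - lgamma(a) - lgamma(b)
               + lgamma(a + s_new) + lgamma(b + f_new)
               - lgamma(a + b + s_new + f_new))

def crp_cluster(obs, alpha=1.0, iters=100, seed=0):
    """Collapsed Gibbs sampling for a Dirichlet-process mixture of
    Bernoulli outcome profiles. obs[i] is the 0/1 interaction history
    of trustee i; returns one cluster label per trustee."""
    rng = random.Random(seed)
    z = list(range(len(obs)))            # start with every trustee alone
    for _ in range(iters):
        for i, hist in enumerate(obs):
            s_i, f_i = sum(hist), len(hist) - sum(hist)
            stats = {}                   # cluster -> [members, succ, fail]
            for j, zj in enumerate(z):
                if j != i:
                    m = stats.setdefault(zj, [0, 0, 0])
                    m[0] += 1
                    m[1] += sum(obs[j])
                    m[2] += len(obs[j]) - sum(obs[j])
            # CRP prior (cluster size, or alpha for a new table)
            # times the Beta-Bernoulli predictive of trustee i's history
            labels = list(stats) + [max(z) + 1]
            weights = [stats[c][0] * pred_prob(s_i, f_i, stats[c][1], stats[c][2])
                       for c in stats] + [alpha * pred_prob(s_i, f_i)]
            r = rng.random() * sum(weights)
            for lab, w in zip(labels, weights):
                r -= w
                if r <= 0:
                    z[i] = lab
                    break
    return z

# Three reliable and three unreliable trustees end up in separate clusters.
z = crp_cluster([[1] * 10] * 3 + [[0] * 10] * 3)
```

A newcomer assigned to a cluster inherits that cluster's Beta posterior over trustworthiness, which is roughly the cold-start ("initial trust") benefit the abstracts describe.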

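The empirical-semantics papers above ("Interaction is meaning", "Empirical-Rational Semantics of Agent Communication") identify the meaning of a communicative act with its expectable consequences. A minimal sketch of that idea (illustrative only; the acts, logs, and function name are invented) estimates meaning as the empirical distribution of actions observed to follow each act:

```python
from collections import Counter, defaultdict

def empirical_meaning(dialogs):
    """The 'meaning' of a communicative act, estimated empirically as
    the relative frequency of the actions observed to follow it."""
    follow = defaultdict(Counter)
    for acts in dialogs:
        for cur, nxt in zip(acts, acts[1:]):
            follow[cur][nxt] += 1
    meanings = {}
    for act, counts in follow.items():
        total = sum(counts.values())
        meanings[act] = {nxt: n / total for nxt, n in counts.items()}
    return meanings

logs = [["request(pay)", "accept", "deliver"],
        ["request(pay)", "accept", "deliver"],
        ["request(pay)", "reject"]]
meaning = empirical_meaning(logs)
# meaning["request(pay)"] ≈ {"accept": 2/3, "reject": 1/3}
```

Diverging consequence distributions for the same act across interaction contexts would then be one simple signal of the communicative conflict the 2003 paper formalizes.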
Collaboration

Top co-author: Achim Rettinger (Karlsruhe Institute of Technology).