Publication


Featured research published by Tim Muller.


trust, security and privacy in computing and communications | 2014

Towards Robust and Effective Trust Management for Security: A Survey

Dongxia Wang; Tim Muller; Yang Liu; Jie Zhang

There is a need for robust and effective trust management. Different security problems impose different requirements on the design of trust management, and existing attacks on trust management for security remain unsolved. In this paper, we first propose a framework to classify the desired properties of trust management for each type of security problem. We then investigate representative attacks and existing solutions in trust management for security. By considering both the security properties and the attacks on trust management systems, our work serves to propel the design of more effective and robust trust management systems for security.


international conference on trust management | 2014

On Robustness of Trust Systems

Tim Muller; Yang Liu; Sjouke Mauw; Jie Zhang

Trust systems assist in dealing with users who may betray one another. Cunning users (attackers) may attempt to hide the fact that they betray others, deceiving the system. Trust systems that are difficult to deceive are considered more robust. To reason formally about robustness, we formally model the abilities of an attacker. We prove that the attacker model is maximal, i.e., (1) the attacker can perform any feasible attack, and (2) if a single attacker cannot perform an attack, then a group of attackers cannot perform that attack either. Therefore, we can formulate robustness analogously to security.


international conference on trust management | 2016

Limitations on Robust Ratings and Predictions

Tim Muller; Yang Liu; Jie Zhang

Predictions are a well-studied form of ratings. Their objective nature allows a rigorous analysis. A problem is that prediction systems and rating systems are subject to attacks that decrease the usefulness of the predictions. Attackers may ignore the incentives in the system, so we cannot rely on incentives to protect ourselves. The user must block attackers, ideally before they introduce too much misinformation. We formally, axiomatically define robustness as the property that no rater can introduce too much misinformation. We formally prove that notions of robustness come at the expense of other desirable properties, such as lack of bias or effectiveness. We also show that trade-offs between the different properties do exist, allowing a prediction system with limited robustness, limited bias and limited effectiveness.
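
The abstract's central notion, that no single rater can introduce too much misinformation, can be illustrated with a toy aggregator. The sketch below is not the paper's axiomatic definition; the uniform average, the probability reports in [0, 1], and the eps threshold are illustrative assumptions.

```python
def aggregate(preds):
    """Uniform average of probability reports in [0, 1]."""
    return sum(preds) / len(preds)

def max_influence(preds, i):
    """The furthest rater i can move the aggregate by choosing any
    report in [0, 1]; for the uniform average this is 1/len(preds)."""
    others = preds[:i] + preds[i + 1:]
    return aggregate(others + [1.0]) - aggregate(others + [0.0])

def is_robust(preds, eps):
    """Toy robustness check (illustrative, not the paper's axioms):
    no single rater can shift the aggregate by more than eps. For the
    uniform average this holds exactly when there are >= 1/eps raters."""
    return all(max_influence(preds, i) <= eps for i in range(len(preds)))

print(is_robust([0.2, 0.8, 0.5, 0.9, 0.1], eps=0.25))  # True: 1/5 <= 0.25
print(is_robust([0.2, 0.8], eps=0.25))                 # False: 1/2 > 0.25
```

Note that the same cap that bounds an attacker's misinformation also bounds how much a single well-informed rater can correct the aggregate, which gives a feel for the robustness/effectiveness trade-off the paper proves.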


international conference on trust management | 2016

How to Use Information Theory to Mitigate Unfair Rating Attacks

Tim Muller; Dongxia Wang; Yang Liu; Jie Zhang

In rating systems, users want to construct accurate opinions based on ratings. However, the accuracy is bounded by the amount of information transmitted (leaked) by the ratings. Rating systems are susceptible to unfair rating attacks, which may decrease the amount of leaked information by introducing noise. A robust trust system attempts to mitigate the effects of these attacks on the information leakage. Defenders cannot influence the ratings themselves, whether honest or from attackers. There are, however, other ways for defenders to keep the information leakage high: blocking or selecting the right advisors, observing transactions, and offering more choices. Blocking suspicious advisors can only decrease robustness. If only a limited number of ratings can be used, however, then less suspicious advisors are better, and in case of a tie, newer advisors are better. Observing transactions increases robustness. Offering more choices may increase robustness.
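
To make the notion of leaked information concrete, here is a minimal sketch in the spirit of the abstract: the mutual information between a subject's quality and a received rating shrinks as a larger fraction of ratings are unfair. The binary quality variable, the uniform prior, and the uniformly random unfair ratings are illustrative assumptions, not the paper's model.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def rating_leakage(unfair):
    """Leakage about a uniform binary quality X from one rating Y, when
    a fraction `unfair` of ratings are uniformly random noise and the
    rest honestly report X."""
    joint = {}
    for x in (0, 1):
        for y in (0, 1):
            honest = 0.5 * (1 - unfair) * (1.0 if x == y else 0.0)
            noise = 0.5 * unfair * 0.5
            joint[(x, y)] = honest + noise
    return mutual_information(joint)

for unfair in (0.0, 0.25, 0.5, 1.0):
    print(f"unfair fraction {unfair:.2f}: {rating_leakage(unfair):.3f} bits leaked")
```

With no attack a single rating leaks a full bit about the subject; when every rating is unfair it leaks nothing, matching the abstract's point that attacks reduce accuracy by reducing the information leaked.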


adaptive agents and multi-agents systems | 2016

A language for trust modelling

Tim Muller; Jie Zhang; Yang Liu

The computational trust paradigm supposes that it is possible to quantify trust relations that occur within some software systems. The paradigm covers a variety of trust systems, such as trust management systems, reputation systems and trust-based security systems. Different trust systems have different assumptions, and various trust models have been developed on top of these assumptions. Typically, trust models are incomparable, or even mutually unintelligible; as a result, their evaluation may be circular or biased. We propose a unified language to express trust models and trust systems. Within the language, all trust models are comparable, and the problem of circularity or bias is mitigated. Moreover, given a complete set of assumptions in the language, a unique trust model is defined.


modeling decisions for artificial intelligence | 2015

Information Theory for Subjective Logic

Tim Muller; Dongxia Wang; Audun Jøsang

Uncertainty plays an important role in decision making. People try to avoid the risks introduced by uncertainty. Probability theory can model these risks, and information theory can measure them. Another type of uncertainty is ambiguity, where people do not know the probabilities. People also attempt to avoid ambiguity. Subjective logic can model ambiguity-based uncertainty using opinions. We look at extensions of information theory that measure the uncertainty of opinions.
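
As a small illustration of why plain information theory falls short here: in subjective logic, a binomial opinion is a tuple (b, d, u, a) of belief, disbelief, uncertainty (with b + d + u = 1) and base rate, whose projected probability is P = b + a·u. The sketch below shows that the Shannon entropy of P cannot distinguish a well-evidenced opinion from a vacuous one, which is what motivates extended measures; the example opinions are illustrative, not taken from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial subjective-logic opinion: belief, disbelief, uncertainty
    (b + d + u = 1) and base rate a."""
    b: float
    d: float
    u: float
    a: float

    def projected_probability(self) -> float:
        return self.b + self.a * self.u  # standard projection P = b + a*u

def binary_entropy(p: float) -> float:
    """Shannon entropy (bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Same projected probability, very different ambiguity.
evidenced = Opinion(b=0.5, d=0.5, u=0.0, a=0.5)  # risk, no ambiguity
vacuous   = Opinion(b=0.0, d=0.0, u=1.0, a=0.5)  # pure ambiguity

for o in (evidenced, vacuous):
    p = o.projected_probability()
    print(f"u={o.u:.1f}  P={p:.2f}  H(P)={binary_entropy(p):.2f} bits")
# Both print H(P) = 1.00 bits: entropy of the projected probability alone
# misses the uncertainty mass u, hence the need for extended measures.
```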


adaptive agents and multi-agents systems | 2015

Using Information Theory to Improve the Robustness of Trust Systems

Dongxia Wang; Tim Muller; Athirai Aravazhi Irissappane; Jie Zhang; Yang Liu


international conference on artificial intelligence | 2015

Quantifying robustness of trust systems against collusive unfair rating attacks using information theory

Dongxia Wang; Tim Muller; Jie Zhang; Yang Liu


adaptive agents and multi-agents systems | 2015

The Fallacy of Endogenous Discounting of Trust Recommendations

Tim Muller; Yang Liu; Jie Zhang


international conference on information fusion | 2015

Trust revision for conflicting sources

Audun Jøsang; Magdalena Ivanovska; Tim Muller

Collaboration


Dive into Tim Muller's collaborations.

Top Co-Authors

Jie Zhang (Nanyang Technological University)
Yang Liu (Nanyang Technological University)
Dongxia Wang (Nanyang Technological University)
Sjouke Mauw (University of Luxembourg)