Publications


Featured research published by Stephen Marsh.


International Conference on Trust Management | 2006

Exploring different types of trust propagation

Audun Jøsang; Stephen Marsh; Simon Pope

Trust propagation is the principle by which new trust relationships can be derived from pre-existing trust relationships. Trust transitivity is the most explicit form of trust propagation, meaning for example that if Alice trusts Bob, and Bob trusts Claire, then by transitivity, Alice will also trust Claire. This assumes that Bob recommends Claire to Alice. Trust fusion is also an important element in trust propagation, meaning that Alice can combine Bob's recommendation with her own personal experience in dealing with Claire, or with other recommendations about Claire, in order to derive a more reliable measure of trust in Claire. These simple principles, which are essential for human interaction in business and everyday life, manifest themselves in many different forms. This paper investigates possible formal models that can be implemented using belief reasoning based on subjective logic. With good formal models, the principles of trust propagation can be ported to online communities of people, organisations and software agents, with the purpose of enhancing the quality of those communities.
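
To make the two principles concrete, here is a minimal Python sketch of the standard subjective-logic operators the paper builds on: discounting models transitivity (Alice's trust in Bob discounts Bob's recommendation of Claire), and cumulative fusion combines independent opinions about the same target. The Opinion class, variable names, and example values are our illustration, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty; b + d + u = 1
    a: float  # base rate

def discount(ab: Opinion, bc: Opinion) -> Opinion:
    """Trust transitivity: A's derived opinion about C via advisor B."""
    return Opinion(b=ab.b * bc.b,
                   d=ab.b * bc.d,
                   u=ab.d + ab.u + ab.b * bc.u,
                   a=bc.a)

def fuse(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions (equal base rates assumed)."""
    k = x.u + y.u - x.u * y.u
    if k == 0:  # both opinions dogmatic (u = 0): average as a simple fallback
        return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0, x.a)
    return Opinion(b=(x.b * y.u + y.b * x.u) / k,
                   d=(x.d * y.u + y.d * x.u) / k,
                   u=(x.u * y.u) / k,
                   a=x.a)

# Alice trusts Bob, Bob recommends Claire, and Alice fuses the derived
# opinion with her own direct experience of Claire.
alice_bob = Opinion(0.8, 0.1, 0.1, 0.5)
bob_claire = Opinion(0.7, 0.1, 0.2, 0.5)
alice_direct = Opinion(0.6, 0.2, 0.2, 0.5)
alice_claire = fuse(discount(alice_bob, bob_claire), alice_direct)
```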


Computing with Social Trust | 2009

Examining Trust, Forgiveness and Regret as Computational Concepts

Stephen Marsh; Pamela Briggs

The study of trust has advanced tremendously in recent years, to the extent that the goal of a more unified formalisation of the concept is becoming feasible. To that end, we have begun to examine the closely related concepts of regret and forgiveness and their relationship to trust and its siblings. The resultant formalisation allows computational tractability in, for instance, artificial agents. Moreover, regret and forgiveness, when allied to trust, are very powerful tools in the Ambient Intelligence (AmI) security area, especially where Human Computer Interaction and concrete human understanding are key. This paper introduces the concepts of regret and forgiveness, exploring them from a social psychological as well as a computational viewpoint, and presents an extension to Marsh's original trust formalisation that takes them into account. It discusses and explores work in the AmI environment, and further potential applications.
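
As a purely hypothetical illustration of what such a formalisation enables (this is our sketch, not Marsh and Briggs's actual extension), a trust update might let expressed regret soften the penalty for a betrayal, and let forgiveness restore trust gradually rather than leaving it at the post-betrayal floor:

```python
# Hypothetical sketch only; Marsh's original trust values live in [-1, +1).
def after_betrayal(trust: float, severity: float, regret: float) -> float:
    """Regret in [0, 1] softens the penalty for a betrayal of given severity."""
    return max(-1.0, trust - severity * (1.0 - 0.5 * regret))

def forgive(trust: float, baseline: float, rate: float) -> float:
    """One forgiveness step: trust drifts back toward its pre-betrayal baseline."""
    return trust + rate * (baseline - trust)

t = after_betrayal(trust=0.8, severity=0.6, regret=0.7)  # drops to 0.41
for _ in range(5):
    t = forgive(t, baseline=0.8, rate=0.2)               # recovers toward 0.8
```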


International Conference on Trust Management | 2012

Trust Transitivity and Conditional Belief Reasoning

Audun Jøsang; Tanja Azderska; Stephen Marsh

Trust transitivity is a common phenomenon embedded in human reasoning about trust. Given a specific context or purpose, trust transitivity is often manifested through the human intuition to rely on the recommendations of a trustworthy advisor about another entity. Although this simple principle has been formalised in various ways for many trust and reputation systems, there is no real or physical basis for trust transitivity to be directly translated into a mathematical model. In that sense, all mathematical operators for trust transitivity proposed in the literature must be considered ad hoc; they represent attempts to model a very complex human phenomenon as if it were amenable to analysis by the laws of physics. Given this nature of human trust transitivity, any simple mathematical model will essentially have rather poor predictive power. In this paper, we propose a new interpretation of trust transitivity that is radically different from those described in the literature so far. More specifically, we consider recommendations from an advisor as evidence that the relying party will use as input arguments in conditional reasoning models for assessing hypotheses about the trust target. The proposed model of conditional trust transitivity is based on the framework of subjective logic.
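
The paper's model uses conditional deduction in subjective logic; a stripped-down probabilistic analogue (our illustration, with made-up numbers) conveys the shift in perspective. The recommendation becomes evidence fed through the relying party's own conditionals, rather than a link in a transitive chain:

```python
def deduce(p_y: float, p_x_given_y: float, p_x_given_not_y: float) -> float:
    """Derived belief in hypothesis x after conditioning on evidence y."""
    return p_y * p_x_given_y + (1.0 - p_y) * p_x_given_not_y

# x = "the trust target is reliable"; y = "the advisor's recommendation is
# positive". The relying party holds the conditionals; the advisor only
# supplies the evidence.
belief = deduce(p_y=0.8, p_x_given_y=0.9, p_x_given_not_y=0.3)  # 0.78
```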


Human Factors in Computing Systems | 2000

Trust in design

Stephen Marsh; John F. Meech

We argue that trust is an important aspect in how people make use of computers, and that designing interfaces which take trust into account and reason using trust will result in more effective, comfortable interactions for the user. One method that may provide results is the encouragement of anthropomorphism on the part of the user. Trust and anthropomorphism will play a large role in many areas, including notably e-commerce and home entertainment.


Archive | 2010

Trust Management IV

Masakatsu Nishigaki; Audun Jøsang; Yuko Murayama; Stephen Marsh

This book constitutes the refereed proceedings of the 4th IFIP WG 11.11 International Conference, IFIPTM 2010, held in Morioka, Japan, in June 2010. The 18 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections on privacy and trust; security through trust; trust models and management; trust models; and experimental and experiential trust.


Journal of Information Processing | 2011

Defining and investigating device comfort

Stephen Marsh; Pamela Briggs; Khalil El-Khatib; Babak Esfandiari; John A. Stewart

Device Comfort is a concept that uses an enhanced notion of trust to allow a personal (likely mobile) device to better reason about the state of interactions and actions between it, its owner, and the environment. This includes allowing a better understanding of how to manage information in fine-grained context as well as addressing the personal security of the user. To do this, the device forms a unique relationship with the user, focusing on the device's judgment of the user in context. This paper introduces and defines Device Comfort, including an examination of what makes up the comfort of a device in terms of trust and other considerations, and discusses the uses of such an approach. It also presents some ongoing developmental work on the concept, and an initial formal model of Device Comfort, its makeup and behaviour.
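
A toy sketch of the idea (our simplification; the paper's formal model is considerably richer): the device derives a comfort level from its trust in the user for the current context, and maps low comfort to progressively more protective behaviour. The comfort formula and thresholds below are illustrative assumptions, not the paper's definitions.

```python
def comfort(trust_in_user: float, context_risk: float) -> float:
    """Comfort rises with trust and falls with contextual risk; all in [0, 1]."""
    return max(0.0, min(1.0, trust_in_user * (1.0 - context_risk)))

def device_response(c: float) -> str:
    if c > 0.7:
        return "proceed silently"
    if c > 0.4:
        return "warn the user before acting"
    return "restrict the action and ask for confirmation"

# Sharing a document over public Wi-Fi: decent trust, high contextual risk.
print(device_response(comfort(trust_in_user=0.8, context_risk=0.6)))
```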


ACM Symposium on Applied Computing | 2012

Preference-oriented QoS-based service discovery with dynamic trust and reputation management

Zeinab Noorian; Michael W. Fleming; Stephen Marsh

In the presence of a variety of service providers that offer web services with overlapping or identical functionality, service consumers need a mechanism to distinguish one service from another based on their own subjective quality of service (QoS) preferences. Typical approaches in this field rely on trusted third parties to monitor the behaviour of service providers and endorse their performance based on their delivered services to different users. However, the issue of evaluating the credibility of user reports is one of the essential problems yet to be solved in the e-Business application area. In this paper we propose a two-layered preference-oriented service selection framework that integrates trust and reputation management techniques with an advanced procurement auction model in order to choose the most pertinent service provider that meets a consumer's QoS requirements. We give a formal description of our approach and validate it with experiments demonstrating that our solution yields high-quality results under various realistic circumstances.
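
An illustrative scoring step in this spirit (our sketch, not the paper's two-layer framework or its auction model): rank providers by the consumer's weighted QoS preferences, discounting each offer by the provider's reputation so that less credible self-reports count for less.

```python
# QoS attributes are normalised to [0, 1], higher is better; reputation in [0, 1].
providers = {
    "svcA": {"availability": 0.99, "latency": 0.70, "reputation": 0.9},
    "svcB": {"availability": 0.95, "latency": 0.90, "reputation": 0.5},
}
prefs = {"availability": 0.6, "latency": 0.4}  # the consumer's subjective weights

def score(qos: dict) -> float:
    weighted = sum(prefs[k] * qos[k] for k in prefs)
    return weighted * qos["reputation"]  # credibility-discounted utility

best = max(providers, key=lambda name: score(providers[name]))
print(best)  # svcA: a credible offer beats a better-looking but less credible one
```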


Journal of Trust Management | 2014

Editorial: Journal of Trust Management

Stephen Marsh

Dear reader, welcome! It's been a long time coming, but here at last is the first tranche of papers and the official opening of the Journal of Trust Management, a SpringerOpen publication that is dedicated to the exposition and exploration of research and development in the fields of trust management and computational trust.

This is a new journal, and still a relatively novel field, the presence of 'trust' and trust management tracks in just about all security conferences (and more than a few computer science, HCI, etc. conferences) notwithstanding. Indeed, the presence of such tracks illustrates two things about the field: it is recognized as being important, if not vital; and it is rather often misunderstood or misappropriated. Thus, a few lines are necessary about what this journal is exploring, and what trust management and computational trust are. Bear with me, or, if you already know and don't need to read another exploration, feel free to skip to the end, where the papers in this issue are discussed, and the important people related to this journal are acknowledged.

Firstly, we should think about what this journal is and is not. It's not 100% a computer science journal. It's not 100% a social science journal. And it's not 100% a security journal. It is, however, a mixture of social, computer and security. This isn't because we can't define the field, it's because the field touches on, and is touched by, a great many subject areas. We therefore welcome submissions from across the board of explorations of how people use trust, particularly in situations where they are supported by computational systems. We welcome submissions related to how trust is used in security applications, from Trust-Based Access Control to Trusted Computing and all points in between. We equally welcome explorations of trust from the computational point of view: how systems can use it, calculate (with) it, make decisions around and with it, and justify them. How, indeed, different trust models can be used in the many places where systems make decisions, whether or not humans are in that particular loop. We also welcome examinations and expositions of research related to how humans trust technology, in fields such as Human Computer Interaction and Computer Supported Collaborative Work. In other words, we are interested in the role of trust where technology and people interact, where technology makes decisions, and where uncertainty about actions exists, and that is a lot of touch points.

The history of computational trust and trust management, though short (indeed, around 20 years), is exciting and sometimes tangled. Part of the problem is the close relationship trust has with security. Like any close relationship, it has its ups and downs and its fair share of misunderstandings, and we are just beginning to get to grips with the differences between trust and security, as well as their co-existence.


Autonomous Agents and Multi-Agent Systems | 2014

Trust-oriented buyer strategies for seller reporting and selection in competitive electronic marketplaces

Zeinab Noorian; Jie Zhang; Yuan Liu; Stephen Marsh; Michael W. Fleming

In competitive electronic marketplaces where some selling agents may be dishonest and quality products offered by good sellers are limited, selecting the most profitable sellers as transaction partners is challenging, especially when buying agents lack personal experience with sellers. Reputation systems help buyers to select sellers by aggregating seller information reported by other buyers (called advisers). However, in such competitive marketplaces, buyers may also be concerned about the possibility of losing business opportunities with good sellers if they report truthful seller information. In this paper, we propose a trust-oriented mechanism built on a game theoretic basis for buyers to: (1) determine an optimal seller reporting strategy, by modeling the trustworthiness (competency and willingness) of advisers in reporting seller information; (2) discover sellers who maximize their profit by modeling the trustworthiness of sellers and considering the buyers’ preferences on product quality. Experimental results confirm that competitive marketplaces operating with our mechanism lead to better profit for buyers and create incentives for seller honesty.
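
A minimal sketch of the aggregation side of such a mechanism (ours; the paper's game-theoretic treatment goes much further): the buyer pools adviser reports about a seller, weighting each report by how trustworthy it models that adviser to be.

```python
def aggregate_reputation(reports: list[tuple[float, float]]) -> float:
    """reports: (adviser_trustworthiness, reported_seller_quality), both in [0, 1]."""
    total_weight = sum(w for w, _ in reports)
    if total_weight == 0:
        return 0.5  # no credible evidence: fall back to a neutral prior
    return sum(w * q for w, q in reports) / total_weight

# Two credible advisers praise the seller; a distrusted adviser pans it,
# perhaps to keep a good seller to itself in a competitive marketplace.
print(aggregate_reputation([(0.9, 0.8), (0.8, 0.9), (0.2, 0.1)]))  # ~0.77
```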


International Conference on Trust Management | 2012

Rendering unto Cæsar the Things That Are Cæsar’s: Complex Trust Models and Human Understanding

Stephen Marsh; Anirban Basu; Natasha Dwyer

In this position paper we examine some of the aspects of trust models, deployment, use and 'misuse', and present a manifesto for the application of computational trust in sociotechnical systems. Computational Trust formalizes the trust processes in humans in order to allow artificial systems to better make decisions or give better advice; trust lends itself to this because it is flexible, readily understood, and relatively robust. Since its introduction in the early '90s, it has gained in popularity because of these characteristics. However, what it has oftentimes lost is understandability. We argue that one of the original purposes of computational trust reasoning was the human element: the involvement of humans in the process of decision making for tools, importantly at the basic level of understanding why the tools made the decisions they did. The proliferation of ever more complex models may serve to increase the robustness of trust management in the face of attack, but does little to help mere humans either understand or, if necessary, intervene when the trust models fail or cannot arrive at a sensible decision.

Collaboration


Dive into Stephen Marsh's collaborations.

Top Co-Authors

Ali A. Ghorbani
University of New Brunswick

Khalil El-Khatib
University of Ontario Institute of Technology

Michael W. Fleming
University of New Brunswick

Zeinab Noorian
University of Saskatchewan

Jeremy Pitt
Imperial College London