Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Luck is active.

Publication


Featured research published by Michael Luck.


Autonomous Agents and Multi-Agent Systems | 2006

TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

W. T. L. Teacy; Jigar Patel; Nicholas R. Jennings; Michael Luck

In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) which models an agent’s trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
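The core of this kind of probabilistic trust assessment is estimating, from counts of successful and unsuccessful past interactions, the probability that a partner will honour its commitments. The sketch below is a minimal illustration of that idea, assuming a beta-distribution formulation over binary outcomes; the function name and the uniform prior are illustrative and not taken from the TRAVOS implementation.

```python
# Minimal sketch: trust from direct experience, assuming a beta-distribution
# model over binary interaction outcomes. Names and the uniform prior are
# illustrative, not taken from the TRAVOS implementation.

def direct_trust(successes: int, failures: int) -> float:
    """Expected probability that the partner fulfils its obligation:
    the mean of a Beta(successes + 1, failures + 1) distribution."""
    return (successes + 1) / (successes + failures + 2)

# With no history the estimate is the uninformative prior 0.5;
# it sharpens as evidence accumulates.
print(direct_trust(0, 0))   # 0.5
print(direct_trust(8, 2))   # 0.75
```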


Intelligent Agents | 1997

A Formal Specification of dMARS

Mark d'Inverno; David Kinny; Michael Luck; Michael Wooldridge

The Procedural Reasoning System (PRS) is the best established agent architecture currently available. It has been deployed in many major industrial applications, ranging from fault diagnosis on the space shuttle to air traffic management and business process control. The theory of PRS-like systems has also been widely studied: within the intelligent agents research community, the belief-desire-intention (BDI) model of practical reasoning that underpins PRS is arguably the dominant force in the theoretical foundations of rational agency. Despite the interest in PRS and BDI agents, no complete attempt has yet been made to precisely specify the behaviour of real PRS systems. This has led to the development of a range of systems that claim to conform to the PRS model, but which differ from it in many important respects. Our aim in this paper is to rectify this omission. We provide an abstract formal model of an idealised dMARS system (the most recent implementation of the PRS architecture), which precisely defines the key data structures present within the architecture and the operations that manipulate these structures. We focus in particular on dMARS plans, since these are the key tool for programming dMARS agents. The specification we present will enable other implementations of PRS to be easily developed, and will serve as a benchmark against which future architectural enhancements can be evaluated.
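The specification itself is given formally in the paper; as a rough, non-authoritative illustration of the kind of data structures it pins down, plans pairing a triggering event with a context condition and a body, held in a plan library and adopted as intentions, might be rendered as follows. All names here are assumptions for illustration only.

```python
# Indicative sketch of BDI-style data structures of the kind the paper
# formalises for dMARS; this rendering is illustrative, not the specification.

from dataclasses import dataclass, field

@dataclass
class Plan:
    trigger: str        # the event that invokes the plan
    context: set        # beliefs that must hold for the plan to be applicable
    body: list          # ordered steps (actions or subgoals) to execute

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    plan_library: list = field(default_factory=list)
    intentions: list = field(default_factory=list)   # adopted plan instances

    def applicable(self, event: str) -> list:
        """Plans whose trigger matches the event and whose context is believed."""
        return [p for p in self.plan_library
                if p.trigger == event and p.context <= self.beliefs]
```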


Adaptive Agents and Multi-Agent Systems | 2005

Coping with inaccurate reputation sources: experimental analysis of a probabilistic trust model

W. T. Luke Teacy; Jigar Patel; Nicholas R. Jennings; Michael Luck

This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents in large scale open systems. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of experiences with other agents if it is beneficial for them to do so; (2) agents will need to interact with other agents with which they have little or no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
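When direct experience is scarce, one simple way to exploit third-party reports is to pool witnesses' observation counts before forming the estimate. The sketch below shows that pooling step only; it is a simplified assumption for illustration and omits the paper's treatment of unreliable witnesses.

```python
# Simplified sketch: pooling third-party reputation reports, where each report
# is a (successes, failures) count observed by a witness. Illustrative only;
# not the exact TRAVOS mechanism.

def reputation_trust(reports: list[tuple[int, int]]) -> float:
    total_s = sum(s for s, _ in reports)
    total_f = sum(f for _, f in reports)
    return (total_s + 1) / (total_s + total_f + 2)

# Two witnesses report (9, 1) and (3, 7); the pooled estimate:
print(reputation_trust([(9, 1), (3, 7)]))   # 13 / 22 ≈ 0.59
```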


Applied Artificial Intelligence | 2000

Applying artificial intelligence to virtual reality: Intelligent virtual environments

Michael Luck; Ruth Aylett

Research into virtual environments on the one hand and artificial intelligence and artificial life on the other has largely been carried out by two different groups of people with different preoccupations and interests, but some convergence is now apparent between the two fields. Applications in which activity independent of the user takes place, involving crowds or other agents, are beginning to be tackled, while synthetic agents, virtual humans, and computer pets are all areas in which techniques from the two fields require strong integration. The two communities have much to learn from each other if wheels are not to be reinvented on both sides. This paper reviews the issues arising from combining artificial intelligence and artificial life techniques with those of virtual environments to produce just such intelligent virtual environments. The discussion is illustrated with examples that include environments providing knowledge to direct or assist the user rather than relying entirely on the user's knowledge and skills, those in which the user is represented by a partially autonomous avatar, those containing intelligent agents separate from the user, and many others from both sides of the area.


Knowledge-Based Systems | 2004

Agent-based formation of virtual organisations

Timothy J. Norman; Alun David Preece; Stuart Chalmers; Nicholas R. Jennings; Michael Luck; Viet Dung Dang; Thuc Duong Nguyen; Vikas Deora; Jianhua Shao; W. Alex Gray; Nick J. Fiddian

Virtual organisations (VOs) are composed of a number of individuals, departments or organisations each of which has a range of capabilities and resources at their disposal. These VOs are formed so that resources may be pooled and services combined with a view to exploiting a perceived market niche. However, in the modern commercial environment it is essential to respond rapidly to changes in the market to remain competitive. Thus, there is a need for robust, agile, flexible systems to support the process of VO management. Within the CONOISE (www.conoise.org) project, agent-based models and techniques are being developed for the automated formation and maintenance of virtual organisations. In this paper we focus on the former, namely how an effective VO may be formed rapidly for a specified purpose.
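As a purely hypothetical illustration of the formation step, one can picture a requester covering each required service by choosing, among the providers that quote for it, the cheapest offer. The provider names, services and greedy rule below are assumptions for illustration; the paper's agent-based mechanism is considerably richer.

```python
# Hypothetical sketch: cover each required service with the cheapest quoting
# provider. The greedy rule, names and prices are illustrative assumptions,
# not the CONOISE mechanism.

def form_vo(requirements: list[str], quotes: dict[str, dict[str, float]]) -> dict[str, str]:
    """quotes maps provider -> {service: price}; returns service -> chosen provider."""
    vo = {}
    for service in requirements:
        offers = {p: prices[service] for p, prices in quotes.items() if service in prices}
        if not offers:
            raise ValueError(f"no provider offers {service}")
        vo[service] = min(offers, key=offers.get)
    return vo

print(form_vo(["bandwidth", "storage"],
              {"ProviderA": {"bandwidth": 10.0},
               "ProviderB": {"bandwidth": 12.0, "storage": 5.0}}))
# {'bandwidth': 'ProviderA', 'storage': 'ProviderB'}
```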


IEEE Transactions on Education | 1999

Plagiarism in programming assignments

Mike Joy; Michael Luck

The assessment of programming courses is usually carried out by means of programming assignments. Since it is simple to copy and edit computer programs, however, there will always be a temptation among some students following such courses to copy and modify the work of others. As the number of students in these courses is often high, it can be very difficult to detect this plagiarism. The authors have developed a package which allows programming assignments to be submitted online, and which includes software to assist in detecting possible instances of plagiarism. In this paper, they discuss the concerns that motivated this work, describe the developed software and how it can be tailored to different requirements, and finally consider its implications for large-group teaching.
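The paper does not reduce detection to a single formula, but a common family of techniques scores pairs of submissions by comparing normalised token sequences. The sketch below is a generic illustration along those lines and is not the specific method used in the authors' package.

```python
# Generic illustration of scoring similarity between two submissions by
# comparing n-grams of normalised tokens. Not the authors' detection method.

import re

def tokenise(source: str) -> list[str]:
    """Crude normalisation: strip C-style comments, lowercase, split into tokens."""
    source = re.sub(r"//[^\n]*|/\*.*?\*/", " ", source, flags=re.S)
    return re.findall(r"[A-Za-z_]\w*|\S", source.lower())

def similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two sets of token n-grams."""
    def grams(toks):
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    ga, gb = grams(tokenise(a)), grams(tokenise(b))
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Renaming identifiers still leaves shared structure visible:
print(similarity("int total = 0; // sum", "int sum = 0;"))   # 0.2
```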


Autonomous Agents and Multi-Agent Systems | 2004

A Manifesto for Agent Technology: Towards Next Generation Computing

Michael Luck; Peter McBurney; Chris Preist

The European Commission's eEurope initiative aims to bring every citizen, home, school, business and administration online to create a digitally literate Europe. The value lies not in the objective itself, but in its ability to facilitate the advance of Europe into new ways of living and working. Just as in the first literacy revolution, our lives will change in ways never imagined. The vision of eEurope is underpinned by a technological infrastructure that is now taken for granted. Yet it provides us with the ability to pioneer radical new ways of doing business, of undertaking science, and of managing our everyday activities. Key to this step change is the development of appropriate mechanisms to automate and improve existing tasks, to anticipate desired actions on our behalf (as human users) and to undertake them, while at the same time enabling us to stay involved and retain as much control as required. For many, these mechanisms are now being realised by agent technologies, which are already providing dramatic and sustained benefits in several business and industry domains, including B2B exchanges, supply chain management, car manufacturing, and so on. While there are many real successes of agent technologies to report, there is still much to be done in research and development for the full benefits to be achieved. This is especially true in the context of environments of pervasive computing devices that are envisaged in coming years. This paper describes the current state of the art of agent technologies and identifies trends and challenges that will need to be addressed over the next 10 years to progress the field and realise the benefits. It offers a roadmap that is the result of discussions among participants from over 150 organisations including universities, research institutions, large multinational corporations and smaller IT start-up companies. The roadmap identifies successes and challenges, and points to future possibilities and demands; agent technologies are fundamental to the realisation of next generation computing.


Autonomous Agents and Multi-Agent Systems | 2004

The dMARS Architecture: A Specification of the Distributed Multi-Agent Reasoning System

Mark d'Inverno; Michael Luck; Michael P. Georgeff; David Kinny; Michael Wooldridge

The Procedural Reasoning System (PRS) is the best established agent architecture currently available. It has been deployed in many major industrial applications, ranging from fault diagnosis on the space shuttle to air traffic management and business process control. The theory of PRS-like systems has also been widely studied: within the intelligent agents research community, the belief-desire-intention (BDI) model of practical reasoning that underpins PRS is arguably the dominant force in the theoretical foundations of rational agency. Despite the interest in PRS and BDI agents, no complete attempt has yet been made to precisely specify the behaviour of real PRS systems. This has led to the development of a range of systems that claim to conform to the PRS model, but which differ from it in many important respects. Our aim in this paper is to rectify this omission. We provide an abstract formal model of an idealised dMARS system (the most recent implementation of the PRS architecture), which precisely defines the key data structures present within the architecture and the operations that manipulate these structures. We focus in particular on dMARS plans, since these are the key tool for programming dMARS agents. The specification we present will enable other implementations of PRS to be easily developed, and will serve as a benchmark against which future architectural enhancements can be evaluated.
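Complementing the data-structure sketch earlier, the operations that drive those structures can be pictured as a sense-deliberate-act cycle: events trigger applicable plans, one is adopted as an intention, and intentions are advanced step by step. The loop below is only a schematic rendering under the same illustrative assumptions, not the formal dMARS operational semantics.

```python
# Schematic PRS-style interpreter cycle over illustrative structures like the
# Agent/Plan sketch above; not the formal dMARS operational semantics.

import copy

def interpreter_cycle(agent, perceive, select, execute) -> None:
    """One pass: perceive() -> events, select(plans) -> plan, execute(step)."""
    for event in perceive():                          # observe the environment
        options = agent.applicable(event)             # plans triggered by the event
        if options:
            chosen = copy.deepcopy(select(options))   # instantiate the chosen plan
            agent.intentions.append(chosen)           # commit to it as an intention
    for intention in list(agent.intentions):          # advance adopted plans
        if intention.body:
            execute(intention.body.pop(0))            # perform the next step
        else:
            agent.intentions.remove(intention)        # drop completed intentions
```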


Adaptive Agents and Multi-Agent Systems | 2002

Constraining autonomy through norms

Fabiola López y López; Michael Luck; Mark d'Inverno

Despite many efforts to understand why and how norms can be incorporated into agents and multi-agent systems, there are still several gaps that must be filled. This paper focuses on one of the most important processes concerned with norms, namely that of norm compliance. However, instead of taking a static view of norms in which norms are straightforwardly complied with, we adopt a more dynamic view in which an agent's motivations, and therefore its autonomy, play an important role. We analyse the motivations that an agent might have to comply with norms, and then formally propose a set of strategies for use by agents in norm-based systems. Finally, through some simulation experiments, the effects of autonomous norm compliance on both individual agents and societies are analysed.
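The strategies themselves are defined formally in the paper; as a loose, hypothetical illustration of the underlying idea, an autonomous agent might weigh the gain from violating a norm against the expected sanction and its own motivation to conform. All weights and the decision rule below are assumptions for illustration only.

```python
# Hypothetical illustration of an autonomous norm-compliance decision; the
# weights and the rule are assumptions, not the paper's formal strategies.

from dataclasses import dataclass

@dataclass
class Norm:
    violation_gain: float     # benefit the agent expects from ignoring the norm
    sanction: float           # penalty if the violation is punished
    enforcement_prob: float   # likelihood that a violation is punished

def complies(norm: Norm, social_motivation: float) -> bool:
    """Comply unless the gain from violating outweighs the expected sanction
    plus the agent's intrinsic motivation to conform."""
    expected_cost = norm.enforcement_prob * norm.sanction + social_motivation
    return norm.violation_gain <= expected_cost

print(complies(Norm(violation_gain=5.0, sanction=8.0, enforcement_prob=0.5),
               social_motivation=2.0))   # True: 5.0 <= 0.5*8.0 + 2.0
```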


International Conference on Trust Management | 2005

A probabilistic trust model for handling inaccurate reputation sources

Jigar Patel; W. T. Luke Teacy; Nicholas R. Jennings; Michael Luck

This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents, particularly in large scale open systems. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of experiences with other agents if it is beneficial for them to do so; (2) agents will need to interact with other agents with which they have no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
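Handling inaccuracy is the distinctive concern here; one simple way to picture it is to scale each witness's reported counts by an estimate of that witness's reliability before pooling. The weighting scheme below is a schematic assumption, not the probabilistic discounting derived in the paper.

```python
# Schematic sketch: discount each witness's (successes, failures) report by a
# reliability weight in [0, 1] before pooling. The weighting is an assumption,
# not the paper's probabilistic discounting.

def discounted_trust(reports: list[tuple[int, int, float]]) -> float:
    """Each report is (successes, failures, reliability)."""
    s = sum(w * succ for succ, _, w in reports)
    f = sum(w * fail for _, fail, w in reports)
    return (s + 1) / (s + f + 2)

# A low-reliability witness's adverse report has limited influence:
print(discounted_trust([(9, 1, 1.0), (0, 40, 0.1)]))   # 10 / 16 = 0.625
```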

Collaboration


Dive into Michael Luck's collaborations.

Top Co-Authors

Nir Oren

University of Aberdeen


Mike Joy

University of Warwick


Luc Moreau

University of Southampton


Felipe Meneguzzi

Pontifícia Universidade Católica do Rio Grande do Sul
