Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cristiano Castelfranchi is active.

Publication


Featured research published by Cristiano Castelfranchi.


Trust and deception in virtual societies | 2001

Social trust: a cognitive approach

Rino Falcone; Cristiano Castelfranchi

As stated in the call for the original workshop, in recent research on electronic commerce trust has been recognized as one of the key factors for successful electronic commerce adoption. In electronic commerce, problems of trust are magnified because agents reach out far beyond their familiar trade environments. It is also far from obvious whether existing paper-based techniques for fraud detection and prevention are adequate to establish trust in an electronic network environment, where you usually never meet your trade partner face to face, and where messages can be read or copied a million times without leaving any trace. With the growing impact of electronic commerce, distance trust building becomes more and more important, and better models of trust and deception are needed. One trend in electronic communication channels is to introduce extra agents, so-called Trusted Third Parties, into an agent community to take care of trust building among the other agents in the network. But in fact different kinds of trust are needed and should be modelled and supported: trust in the environment and in the infrastructure (the socio-technical system); trust in your agent and in mediating agents; trust in the potential partners; and trust in the warrantors and authorities (if any).
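
The closing list of trust targets can be made concrete with a small sketch. The enum below simply enumerates the four targets named in the abstract; the "weakest link" aggregation rule and all names are illustrative assumptions of mine, not the authors' model.

```python
# Illustrative sketch of the distinct trust targets listed in the abstract;
# the enum values and the min() aggregation are assumptions, not the paper's model.
from enum import Enum


class TrustTarget(Enum):
    INFRASTRUCTURE = "environment and infrastructure (socio-technical system)"
    MEDIATORS = "your agent and mediating agents"
    PARTNERS = "the potential partners"
    AUTHORITIES = "warrantors and authorities (if any)"


def overall_trust(assessments: dict) -> float:
    """One plausible aggregation: the interaction is only as trusted
    as its least-trusted component."""
    return min(assessments.values())


print(overall_trust({
    TrustTarget.INFRASTRUCTURE: 0.9,
    TrustTarget.MEDIATORS: 0.8,
    TrustTarget.PARTNERS: 0.6,
    TrustTarget.AUTHORITIES: 0.7,
}))  # prints 0.6
```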


intelligent agents | 1995

Guarantees for autonomy in cognitive agent architecture

Cristiano Castelfranchi

The paper analyses which features of an agent architecture determine its Autonomy. I claim that Autonomy is a relational concept. First, Autonomy from the environment (stimuli) is analysed, and the notion of Cognitive Reactivity is introduced to show how the cognitive architecture of the agent guarantees Stimulus-Autonomy and deals with the "Descartes problem" of the external "causes" of behaviour. Second, Social Autonomy (Autonomy from others) is analysed. A distinction between Executive Autonomy and Motivational Autonomy is introduced. Some limitations that current postulates on Rational interacting agents could impose on their Autonomy are discussed. Architectural properties and postulates that guarantee sufficient Autonomy in cognitive social agents are defined. These properties give the agent control over its own mental states (Beliefs and Goals). In particular, a "double filter" architecture against influence is described. What guarantees the agent's control over its own Beliefs is specified: relevance, credibility, and introspective competence. Particular attention is devoted to the "non-negotiability of beliefs" (Pascal's law): the fact that you cannot change others' Beliefs by using promises or threats. What guarantees the agent's control over its Goals is specified: self-interested goal adoption and indirect influencing. Finally, it is argued how and why social dependence and power relations should limit the agent's Autonomy.
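
The "double filter" idea lends itself to a minimal sketch. Below is a hypothetical Python rendering of an agent that screens incoming beliefs for relevance and credibility and adopts requested goals only through self-interested adoption; the class names and threshold are my assumptions, not Castelfranchi's formalism.

```python
# Minimal sketch of a "double filter" cognitive agent (illustrative only).
from dataclasses import dataclass, field


@dataclass
class Belief:
    content: str
    credibility: float  # subjective probability the agent assigns
    relevant: bool      # does it bear on the agent's current concerns?


@dataclass
class Agent:
    beliefs: list = field(default_factory=list)
    goals: list = field(default_factory=list)
    credibility_threshold: float = 0.7  # illustrative assumption

    def belief_filter(self, candidate: Belief) -> bool:
        """First filter: a communicated belief is adopted only if relevant
        and credible enough; promises or threats play no role here (the
        'non-negotiability of beliefs', or Pascal's law)."""
        return candidate.relevant and candidate.credibility >= self.credibility_threshold

    def goal_filter(self, requested_goal: str, serves_own_interest: bool) -> bool:
        """Second filter: a requested goal is adopted only via
        self-interested goal adoption, never by direct injection."""
        return serves_own_interest

    def receive_influence(self, candidate: Belief, requested_goal: str,
                          serves_own_interest: bool) -> None:
        if self.belief_filter(candidate):
            self.beliefs.append(candidate)
        if self.goal_filter(requested_goal, serves_own_interest):
            self.goals.append(requested_goal)


agent = Agent()
agent.receive_influence(Belief("route A is blocked", 0.9, True),
                        "take route B", serves_own_interest=True)
print(agent.beliefs, agent.goals)
```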


Lecture Notes in Computer Science | 2000

Engineering Social Order

Cristiano Castelfranchi

Social Order is becoming a major problem in MAS and in computer-mediated human interaction. After explaining the notions of Social Order and Social Control, I claim that there are multiple and complementary approaches to Social Order and to its engineering, and that all of them must be exploited. In computer science one tries to solve this problem with rigid formalisation and rules, constraining infrastructures, security devices, etc. I think that a more socially oriented approach is also needed. My point is that Social Control, and in particular decentralised and autonomous Social Control, will be one of the most effective approaches.


intelligent agents | 1998

Autonomous Norm Acceptance

Rosaria Conte; Cristiano Castelfranchi; Frank Dignum

It is generally acknowledged that norms and normative action emphasize autonomy on the side of decision. But what about the autonomous formation of normative goals? This paper is intended to contribute to a theory of how agents form normative beliefs and goals, and to formulate general but non-exhaustive principles of norm-based autonomous agenthood, namely goal generation and decision making, upon which to construct software agents.
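
The two-stage pipeline implied here, from normative belief to normative goal, can be sketched as a small decision procedure. The predicates and the deliberation criterion below are illustrative assumptions about what such an agent might check, not the formal principles of Conte, Castelfranchi, and Dignum.

```python
# Illustrative sketch of autonomous norm acceptance: a norm becomes a goal
# only after the agent itself forms a normative belief and then decides to
# comply. All predicate names and the decision rule are assumptions.
from dataclasses import dataclass


@dataclass
class Norm:
    content: str
    from_recognized_authority: bool
    applies_to_me: bool


def normative_belief(norm: Norm) -> bool:
    """Belief-formation stage: the agent recognizes the norm as valid
    and as addressed to it; no goal is generated yet."""
    return norm.from_recognized_authority and norm.applies_to_me


def accept_norm(norm: Norm, conflicts_with_own_goals: bool,
                sanction_risk: float) -> bool:
    """Decision stage: the normative belief generates a normative goal
    only if compliance survives the agent's own deliberation."""
    if not normative_belief(norm):
        return False
    return (not conflicts_with_own_goals) or sanction_risk > 0.5


norm = Norm("answer review requests promptly", True, True)
print(accept_norm(norm, conflicts_with_own_goals=False, sanction_risk=0.1))  # True
```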


Robotics and Autonomous Systems | 1998

Towards a theory of delegation for agent-based systems

Cristiano Castelfranchi; Rino Falcone

In this paper a theory of delegation is presented. There are at least three reasons for developing such a theory. First, one of the most relevant notions of "agent" is based on the notions of "task" and of acting "on behalf of"; to ground this notion, a theory of delegation among agents is needed. Second, the notion of autonomy should be based on different kinds and levels of delegation. Third, the entire theory of cooperation and collaboration requires the definition of the two complementary attitudes of goal delegation and adoption linking collaborating agents. After motivating the necessity for a principled theory of delegation (and adoption), the paper presents a plan-based approach to this theory. We analyze several dimensions of delegation/adoption (based on the interaction between the agents, the specification of the task, the possibility to subdelegate, the delegation of control, and the help levels). The agents' autonomy and levels of agency are then deduced. We describe the modelling of the client from the contractor's point of view and vice versa, with their differences, and the notion of trust that derives directly from this modelling. Finally, a series of possible conflicts between client and contractor are considered: in particular collaborative conflicts, which stem from the contractor's intention to help the client beyond its request or delegation and to exploit its own knowledge and intelligence (reasoning, problem solving, planning, and decision skills) for the client itself.
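
The dimensions of delegation named in the abstract suggest a natural record type. The following Python sketch encodes them as fields and derives a crude autonomy score; the field names, enumerations, and scoring rule are my assumptions for illustration, not the paper's formal notation.

```python
# Hypothetical encoding of the delegation dimensions named in the abstract
# (task specification, subdelegation, control, help level); all names and
# the autonomy score are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class TaskSpecification(Enum):
    CLOSED = "fully specified plan"   # the client dictates the plan
    OPEN = "goal only"                # the contractor chooses the plan


class HelpLevel(Enum):
    LITERAL = "do exactly what was asked"
    OVERHELP = "go beyond the request"
    CRITICAL = "deviate if the request seems wrong"


@dataclass
class Delegation:
    client: str
    contractor: str
    task: str
    specification: TaskSpecification
    may_subdelegate: bool
    control_retained: bool  # does the client monitor and intervene?
    help_level: HelpLevel

    def contractor_autonomy(self) -> int:
        """Crude illustrative score: autonomy grows with an open task
        specification, subdelegation rights, absence of client control,
        and a richer help level."""
        score = 0
        score += self.specification is TaskSpecification.OPEN
        score += self.may_subdelegate
        score += not self.control_retained
        score += self.help_level is not HelpLevel.LITERAL
        return score


d = Delegation("client", "contractor", "book travel",
               TaskSpecification.OPEN, may_subdelegate=True,
               control_retained=False, help_level=HelpLevel.OVERHELP)
print(d.contractor_autonomy())  # 4
```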


International Journal of Electronic Commerce | 2002

The Role of Trust and Deception in Virtual Societies

Cristiano Castelfranchi; Yao-Hua Tan

In hybrid situations where artificial agents and human agents interact, the artificial agents must be able to reason about the trustworthiness and deceptive actions of their human counterparts. Thus a theory of trust and deception is needed that will support interactions between agents in virtual societies. There are several theories of trust (fewer of deception!), but none that deals specifically with virtual communities. Building on these earlier theories, the role of trust and deception in virtual communities is analyzed, with examples to illustrate the objectives a theory of trust should fulfill.


Psychological Research-psychologische Forschung | 2009

Thinking as the control of imagination: a conceptual framework for goal-directed systems

Giovanni Pezzulo; Cristiano Castelfranchi

This paper offers a conceptual framework which (re)integrates goal-directed control, motivational processes, and executive functions, and suggests a developmental pathway from situated action to higher-level cognition. We first illustrate a basic computational (control-theoretic) model of goal-directed action that makes use of internal modeling. We then show that motivation enters the scene once the problem of selection among multiple action alternatives is added, and that the basic mechanisms of executive functions, such as inhibition, the monitoring of progress, and working memory, are required for this system to work. Further, we elaborate on the idea that the off-line re-enactment of anticipatory mechanisms used for action control gives rise to (embodied) mental simulations, and propose that thinking consists essentially in controlling mental simulations rather than directly controlling behavior and perceptions. We conclude by sketching an evolutionary perspective on this process, proposing that anticipation leveraged cognition, and by highlighting specific predictions of our model.
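
The control-theoretic core, a single internal forward model run either on-line for action control or off-line as mental simulation, can be sketched in a few lines. The 1-D dynamics, the proportional controller, and the gain below are my own toy assumptions, not the paper's model.

```python
# Toy sketch of "thinking as controlled simulation": the same forward model
# serves on-line action control and off-line mental simulation.
# The 1-D plant and the gain are illustrative assumptions.

def forward_model(state: float, action: float) -> float:
    """Predicts the next state from the current state and an action."""
    return state + action  # trivially simple plant for illustration


def controller(state: float, goal: float, gain: float = 0.5) -> float:
    """Proportional controller: act to reduce the goal-state error."""
    return gain * (goal - state)


def mentally_simulate(state: float, goal: float, steps: int = 10) -> list:
    """Off-line re-enactment: run the controller against the forward model
    only, without acting on the world, and inspect the imagined trajectory."""
    trajectory = [state]
    for _ in range(steps):
        state = forward_model(state, controller(state, goal))
        trajectory.append(state)
    return trajectory


# Imagine approaching a goal at 1.0 from 0.0 before committing to act.
print(mentally_simulate(0.0, 1.0, steps=5))
```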


Archive | 1990

Blushing as a Discourse: Was Darwin Wrong?

Cristiano Castelfranchi; Isabella Poggi

The aim of this chapter is to consider the social and biological functions of shame and the communicative value of its most typical expression, blushing, while arguing against Darwin's theory of blushing, which would deny it any specific function.


Cognition & Emotion | 2007

The envious mind

Maria Miceli; Cristiano Castelfranchi

This work provides an analysis of the basic cognitive components of envy. In particular, the roles played by the envious party's social comparison with, and ill will against, the better off are emphasised. The ill-will component is characterised by the envier's ultimate goal or wish that the envied suffer some harm, and is distinguished from resentment and a sense of injustice, which have often been considered part of envy. The reprehensible nature of envy is discussed and traced back to the analysis of its components. Finally, we explore both points of overlap and distinguishing features between envy and other emotions such as jealousy or emulation, and make a few general remarks pointing to the necessity of overcoming conceptual looseness in the notion of envy.


Autonomous Agents and Multi-Agent Systems | 2000

The Socio-cognitive Dynamics of Trust: Does Trust Create Trust?

Rino Falcone; Cristiano Castelfranchi

We will examine in this paper three crucial aspects of trust dynamics: a) How A's trusting B and relying on it in situation Ω can actually (objectively) influence B's trustworthiness within Ω: either trust is a self-fulfilling prophecy that modifies the probability of the predicted event, or it is a self-defeating strategy that negatively influences the events; and how A can be aware of and take into account the effect of its own decision at the very moment of that decision. b) How trust creates reciprocal trust, and distrust elicits distrust; but also vice versa: how A's trust in B could induce lack of trust or distrust in B towards A, while A's diffidence can make B more trustful in A; and how A can be aware of and take into account this effect of its own decision at the very moment of that decision. c) How diffuse trust diffuses trust (a trust atmosphere), that is, how A's trusting B can influence C's trusting B or D, and so on. These phenomena are crucial in human societies (markets, groups, states), but we claim that they are also fundamental in computer-mediated organizations and interactions (like Electronic Commerce), cooperation (Computer Supported Cooperative Work), etc., and even in Multi-Agent Systems with autonomous agents.
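
Point a), trust as a self-fulfilling (or self-defeating) prophecy, is easy to illustrate numerically. The linear update rule below, in which B's trustworthiness drifts toward the trust A displays and A then revises its trust, is a toy assumption of mine, not the authors' model.

```python
# Toy simulation of trust dynamics: B's objective trustworthiness drifts
# toward the trust A displays (self-fulfilling) or away when A is diffident
# (self-defeating). The linear update and sensitivity are assumptions.

def step(trust_a: float, trustworthiness_b: float,
         sensitivity: float = 0.3) -> tuple:
    """One round: B adapts toward A's displayed trust; A then revises
    its trust toward B's observed trustworthiness."""
    trustworthiness_b += sensitivity * (trust_a - trustworthiness_b)
    trust_a += sensitivity * (trustworthiness_b - trust_a)
    return trust_a, trustworthiness_b


trust_a, tw_b = 0.9, 0.4  # A starts trusting; B starts only mildly reliable
for round_ in range(6):
    trust_a, tw_b = step(trust_a, tw_b)
    print(f"round {round_}: A's trust={trust_a:.2f}, B's trustworthiness={tw_b:.2f}")
```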

Collaboration


Dive into Cristiano Castelfranchi's collaboration.

Top Co-Authors

Rino Falcone | National Research Council
Maria Miceli | National Research Council
Rosaria Conte | National Research Council
Luca Tummolini | National Research Council
Fabio Paglieri | National Research Council