Publication


Featured research published by Paul B. de Laat.


Ethics and Information Technology | 2005

Trusting Virtual Trust

Paul B. de Laat

Can trust evolve on the Internet between virtual strangers? Recently, Pettit answered this question in the negative. Focusing on trust in the sense of ‘dynamic, interactive, and trusting’ reliance on other people, he distinguishes between two forms of trust: primary trust rests on the belief that the other is trustworthy, while the more subtle secondary kind of trust is premised on the belief that the other cherishes one’s esteem and will therefore reply to an act of trust in kind (‘trust-responsiveness’). Based on this theory, Pettit argues that trust between virtual strangers is impossible: they lack all evidence about one another, which prevents the imputation of trustworthiness and renders reliance on trust-responsiveness ridiculous. I argue that this argument is flawed, both empirically and theoretically. In several virtual communities, remarkable acts of trust between purely virtual strangers have been observed. I propose that these can be explained as follows. On the one hand, social cues, reputation, reliance on third parties, and participation in (quasi-)institutions allow trustworthiness to be imputed to varying degrees. On the other hand, trust-responsiveness is also relied upon, precisely as a necessary supplement to primary trust. In virtual markets, esteem as a fair trader is coveted because it contributes to building up one’s reputation. In task groups, a hyperactive style of action may be adopted which amounts to assuming (not inferring) trust: trustors expect that their virtual co-workers will reply in kind, since such an approach is considered the most appropriate in cyberspace. In non-task groups, finally, members often display intimacies because they are confident that someone else ‘out there’ will return them. This is facilitated by the one-to-many, asynchronous mode of communication within mailing lists.


Ethics and Information Technology | 2010

How can contributors to open-source communities be trusted? On the assumption, inference, and substitution of trust

Paul B. de Laat

Open-source communities that focus on content rely squarely on the contributions of invisible strangers in cyberspace. How do such communities handle the problem of trusting that strangers have good intentions and adequate competence? This question is explored in relation to communities in which such trust is a vital issue: peer production of software (FreeBSD and Mozilla in particular) and encyclopaedia entries (Wikipedia in particular). In the context of open-source software, it is argued that trust was inferred from an underlying ‘hacker ethic’, which already existed. The Wikipedian project, by contrast, had to create an appropriate ethic along the way. In the interim, the assumption simply had to be that potential contributors were trustworthy; they were granted ‘substantial trust’. Subsequently, projects from both communities introduced rules and regulations which partly substituted for the need to perceive contributors as trustworthy. They faced a design choice in the continuum between a high-discretion design (granting a large amount of trust to contributors) and a low-discretion design (leaving only a small amount of trust to contributors). It is found that open-source designs for software and encyclopaedias are likely to converge in the future towards a mid-level of discretion. In such a design the anonymous user is no longer invested with unquestioning trust.
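The ‘design continuum’ the abstract describes can be read as an access-control question: how much may an untrusted contributor do directly? A schematic Python sketch of that idea follows; the design names and permission bundles are illustrative assumptions, not the actual configurations of FreeBSD, Mozilla, or Wikipedia.

```python
# A schematic sketch of the high- vs low-discretion continuum: each editorial
# design grants contributors a different bundle of rights. The bundles below
# are illustrative assumptions, not any project's real access-control lists.

DISCRETION_DESIGNS = {
    "high": {"create_page": True,  "edit_live": True,  "edits_reviewed": False},
    "mid":  {"create_page": True,  "edit_live": True,  "edits_reviewed": True},
    "low":  {"create_page": False, "edit_live": False, "edits_reviewed": True},
}

def can_publish_directly(design: str) -> bool:
    """True if a contribution goes live without anyone else vouching for it."""
    d = DISCRETION_DESIGNS[design]
    return d["edit_live"] and not d["edits_reviewed"]

# High discretion trusts the anonymous user outright; mid-level discretion,
# the predicted point of convergence, keeps editing open but adds review.
print(can_publish_directly("high"), can_publish_directly("mid"))  # True False
```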


The Information Society | 2004

Evolution of open source networks in industry

Paul B. de Laat

The open source software movement has become a threat to corporate software development. In response, companies started to develop products and services related to open source software. Subsequently, they also tried to come to terms with the processes that are characteristic of open source software development. This article examines the efforts made by companies to use open source principles and practices for corporate purposes. The study shows that over time the open source-inspired networks developed by these companies gradually come to resemble classical corporate networks.


Ethics and Information Technology | 2012

Coercion or empowerment? Moderation of content in Wikipedia as ‘essentially contested’ bureaucratic rules

Paul B. de Laat

In communities of user-generated content, systems for the management of content and/or contributors are usually accepted without much protest. Not so in the case of Wikipedia, where the proposal to introduce a system of review for new edits (in order to counter vandalism) led to heated discussions. This debate is analysed, and arguments of both supporters and opponents (writing in English, German, and French) are extracted from the Wikipedian archives. In order to better understand this division of opinion, an analogy is drawn with theories of bureaucracy as developed for real-life organizations. From these it transpires that bureaucratic rules may be perceived as springing from either a control logic or an enabling logic. In Wikipedia, both perceptions were at work, depending on participants’ underlying views. Wikipedians either rejected the proposed scheme (because it was antithetical to their conception of Wikipedia as a community) or endorsed it (because it was consonant with their conception of Wikipedia as an organization with clearly defined boundaries). Are other open-content communities susceptible to the same kind of ‘essential contestation’?


Social Epistemology | 2012

Open Source Production of Encyclopedias: Editorial Policies at the Intersection of Organizational and Epistemological Trust

Paul B. de Laat

The ideas behind open source software are currently being applied to the production of encyclopedias. A sample of six English, text-based, neutral-point-of-view online encyclopedias of this kind is identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion; that is, from granting unlimited trust to limited trust. Their conceptions of the proper role for experts are also explored, and it is argued that these to a great extent determine editorial policies. Subsequently, internal discussions about quality assurance at Wikipedia are recounted. All indications are that review and “super-review” of new edits will become policy, to be performed by Wikipedians with a better reputation. Finally, while for encyclopedias the issue of organizational trust largely coincides with epistemological trust, a link is made with theories about the acceptance of testimony. It is argued that both non-reductionist views (the “acceptance principle” and the “assurance view”) and reductionist ones (an appeal to background conditions, and a newly defined “expertise view”) have been implemented in editorial strategies over the past decade.


Ethics and Information Technology | 2015

The use of software tools and autonomous bots against vandalism: eroding Wikipedia's moral order?

Paul B. de Laat

English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them, its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and of the ‘coactivity’ between humans and bots, this research ‘discloses’ the moral issues that emerge from their combined patrolling. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical layer. Further, surveillance exhibits several troubling features: questionable profiling practices (concerning anonymous users in particular), the use of the controversial measure of reputation (under consideration), ‘oversurveillance’ where quantity trumps quality, and a prospective loss of the required moral skills whenever bots take over from humans. The most troubling aspect, though, is that Wikipedia has become a Janus-faced institution. One face is the basic platform of MediaWiki software, transparent to all. Its other face is the anti-vandalism system, which, in contrast, is opaque to the average user, in particular as a result of the algorithms and neural networks in use. Finally, it is argued that this secrecy impedes a much-needed discussion from unfolding; a discussion that should focus on ‘rebalancing’ the anti-vandalism system and developing more ethical information practices towards contributors.
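The division of labour described here (bots acting autonomously on clear-cut cases, humans patrolling the rest) can be sketched in a few lines of Python. The scoring heuristic, thresholds, and routing below are purely illustrative assumptions; Wikipedia's actual anti-vandalism bots rely on trained classifiers and neural networks, not these toy rules.

```python
# A minimal, hypothetical sketch of human/bot "coactivity": a scorer rates
# each incoming edit, a bot reverts only high-confidence vandalism, and
# borderline cases go to a queue for trusted human patrollers.

from dataclasses import dataclass

@dataclass
class Edit:
    editor: str
    is_anonymous: bool
    chars_added: int       # negative means a net deletion
    profanity_hits: int

def vandalism_score(edit: Edit) -> float:
    """Toy heuristic standing in for the neural-network scorers the paper mentions."""
    score = 0.0
    score += 0.4 if edit.is_anonymous else 0.0       # the profiling practice the paper questions
    score += min(edit.profanity_hits * 0.3, 0.6)
    score += 0.2 if edit.chars_added < 0 else 0.0    # large deletions look suspicious
    return min(score, 1.0)

def route(edit: Edit) -> str:
    s = vandalism_score(edit)
    if s >= 0.8:
        return "bot-revert"      # autonomous bot acts without human review
    if s >= 0.4:
        return "patrol-queue"    # surfaced in a tool for trusted patrollers
    return "accept"

print(route(Edit("198.51.100.7", True, -1200, 2)))   # -> bot-revert
```

Note how even this toy version reproduces the paper's worry: the anonymity feature alone moves an edit toward the patrol queue, a profiling choice invisible to the contributor.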


Ethics and Information Technology | 2014

From open-source software to Wikipedia: ‘Backgrounding’ trust by collective monitoring and reputation tracking

Paul B. de Laat

Open-content communities that focus on co-creation without requirements for entry have to face the issue of institutional trust in contributors. This research investigates the various ways in which these communities manage this issue. It is shown that communities of open-source software (continue to) rely mainly on hierarchy (reserving write-access for higher echelons), which substitutes for (the need for) trust. Encyclopedic communities, though, largely avoid this solution. In the particular case of Wikipedia, which is confronted with persistent vandalism, another arrangement has been pioneered instead. Trust (i.e. full write-access) is ‘backgrounded’ by means of a permanent mobilization of Wikipedians to monitor incoming edits. Computational approaches have been developed for the purpose, yielding both sophisticated monitoring tools used by human patrollers and bots that operate autonomously. Measures of reputation are also under investigation within Wikipedia; their incorporation into monitoring efforts, as an indicator of the trustworthiness of editors, is envisaged. These collective monitoring efforts are interpreted as focusing on avoiding possible damage to Wikipedian spaces, which allows the discretionary powers of editing to be kept intact for all users. Further, the essential differences between backgrounding and substituting trust are elaborated. Finally, it is argued that the Wikipedian monitoring of new edits, especially through its heavy reliance on computational tools, raises a number of moral questions that need to be answered urgently.
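The reputation measures mentioned above can be illustrated with a toy model: an editor's score drifts toward 1 when edits survive and toward 0 when they are reverted, and low scores raise monitoring priority. The update rule and its parameters are assumptions made for illustration only, not any system actually deployed by Wikipedia.

```python
# A toy sketch of reputation tracking feeding into collective monitoring.
# New editors start at a neutral 0.5; each review outcome nudges the score,
# and the patrol queue prioritizes edits by low-reputation editors.

from collections import defaultdict

reputation = defaultdict(lambda: 0.5)  # editor name -> score in [0, 1]

def record_outcome(editor: str, reverted: bool, lr: float = 0.1) -> None:
    """Exponential-moving-average update toward 0 (reverted) or 1 (kept)."""
    target = 0.0 if reverted else 1.0
    reputation[editor] += lr * (target - reputation[editor])

def monitoring_priority(editor: str) -> float:
    """Lower reputation -> higher priority for human patrollers."""
    return 1.0 - reputation[editor]

for reverted in [False, False, True]:   # two kept edits, then one reverted
    record_outcome("alice", reverted)
print(monitoring_priority("alice"))     # below 0.5: slightly trusted by now
```

The design point the paper makes survives even in this sketch: reputation does not gate write-access (trust stays ‘backgrounded’); it only reorders who gets watched first.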


Open Source Systems | 2009

Panel: Governance in Open Source Projects and Communities

Francesco Bolici; Paul B. de Laat; Jan Ljungberg; Andrea Pontiggia; Cristina Rossi Lamastra

“Although considerable research has been devoted to the growth and expansion of open source communities and the comparison between the efficiency of corporate structures and community structures in the field of software development, rather less attention has been paid to their governance structures (control, monitoring, supervision)” (Lattemann and Stieglitz 2005).


Social Informatics | 2006

Internet-Based Commons of Intellectual Resources: An Exploration of their Variety

Paul B. de Laat

During the last two decades, spurred by the development of the Internet, several types of commons have been opened up for intellectual resources. In this article their variety is explored with regard to the kinds of resources and the types of regulation involved. The open source software movement initiated the phenomenon by creating a copyright-based commons of source code that can be labelled ‘dynamic’: allowing both use and modification of resources. Such a commons may additionally be either protected from appropriation (by ‘copyleft’ licensing) or unprotected. Around the year 2000, this approach was generalized by the Creative Commons initiative. In the process a ‘static’ commons was added, in which only use of resources is allowed. This mould was applied to the sciences and the humanities in particular, and various Open Access initiatives unfolded. A final aspect of copyright-based commons is the distinction between active and passive commons: while the latter is only a site for obtaining resources, the former is also a site for the production of new resources by communities of volunteers (‘peer production’). Finally, several patent commons are discussed, which mainly aim at preventing patents from blocking the further development of science. Throughout, attention is drawn to the interrelationships between the various commons.


Philosophy & Technology | 2017

Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?

Paul B. de Laat

Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions preferably should become more understandable; to that effect, the models of machine learning to be employed should either be interpreted ex post or be interpretable by design ex ante.
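The closing distinction (interpretation ex post versus interpretability by design ex ante) can be made concrete with a small Python sketch using scikit-learn. The dataset and model choices below are illustrative assumptions, not drawn from the paper: a shallow decision tree stands in for an interpretable-by-design model, and a global surrogate tree fitted to a random forest's predictions stands in for ex post interpretation.

```python
# (a) "Interpretable by design ex ante": a depth-3 tree whose decision rules
#     can be read off directly.
# (b) "Interpreted ex post": an opaque model approximated after the fact by a
#     surrogate tree trained to mimic its predictions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target
features = data.feature_names.tolist()

# (a) Glass-box model: the printed rules ARE the explanation.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=features))

# (b) Black-box model plus a global surrogate fitted to its outputs.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X, black_box.predict(X))  # train on the black box's predictions, not y
print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```

The trade-off the paper discusses shows up directly: the surrogate's explanation is only as trustworthy as its fidelity to the black box, whereas the glass-box model pays for its readability with (potentially) lower accuracy.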

Collaboration


Dive into Paul B. de Laat's collaborations.

Top Co-Authors

Jan Ljungberg

University of Gothenburg
