Publication


Featured research published by Mario Haim.


Journalism: Theory, Practice & Criticism | 2018

Readers’ perception of computer-generated news: Credibility, expertise, and readability

Andreas Graefe; Mario Haim; Bastian Haarmann; Hans-Bernd Brosius

We conducted an online experiment to study people’s perception of automated computer-written news. Using a 2 × 2 × 2 design, we varied the article topic (sports, finance; within-subjects) and both the articles’ actual and declared source (human-written, computer-written; between-subjects). Nine hundred eighty-six subjects rated two articles on credibility, readability, and journalistic expertise. Varying the declared source had small but consistent effects: subjects rated articles declared as human-written always more favorably, regardless of the actual source. Varying the actual source had larger effects: subjects rated computer-written articles as more credible and higher in journalistic expertise but less readable. Across topics, subjects’ perceptions did not differ. The results provide conservative estimates for the favorability of computer-written news, which will further increase over time, and endorse prior calls for establishing an ethics of computer-written news.


Health Communication | 2017

Abyss or Shelter? On the Relevance of Web Search Engines’ Search Results When People Google for Suicide

Mario Haim; Florian Arendt; Sebastian Scherr

Despite evidence that suicide rates can increase after suicides are widely reported in the media, appropriate depictions of suicide in the media can help people to overcome suicidal crises and can thus elicit preventive effects. We argue on the level of individual media users that a similar ambivalence can be postulated for search results on online suicide-related search queries. Importantly, the filter bubble hypothesis (Pariser, 2011) states that search results are biased by algorithms based on a person’s previous search behavior. In this study, we investigated whether suicide-related search queries, including either potentially suicide-preventive or -facilitative terms, influence subsequent search results. This might thus protect or harm suicidal Internet users. We utilized a 3 (search history: suicide-related harmful, suicide-related helpful, and suicide-unrelated) × 2 (reactive: clicking the top-most result link and no clicking) experimental design applying agent-based testing. While findings show no influences either of search histories or of reactivity on search results in a subsequent situation, the presentation of a helpline offer raises concerns about possible detrimental algorithmic decision-making: Algorithms “decided” whether or not to present a helpline, and this automated decision, then, followed the agent throughout the rest of the observation period. Implications for policy-making and search providers are discussed.


Digital Journalism | 2018

Burst of the Filter Bubble

Mario Haim; Andreas Graefe; Hans-Bernd Brosius

In offering personalized content geared toward users’ individual interests, recommender systems are assumed to reduce news diversity and thus lead to partial information blindness (i.e., filter bubbles). We conducted two exploratory studies to test the effect of both implicit and explicit personalization on the content and source diversity of Google News. Except for small effects of implicit personalization on content diversity, we found no support for the filter-bubble hypothesis. We did, however, find a general bias in that Google News over-represents certain news outlets and under-represents other, highly frequented, news outlets. The results add to a growing body of evidence, which suggests that concerns about algorithmic filter bubbles in the context of online news might be exaggerated.


Digital Journalism | 2017

Automated News: Better than expected?

Mario Haim; Andreas Graefe

We conducted two experiments to study people’s prior expectations and actual perceptions of automated and human-written news. We found that, first, participants expected more from human-written news in terms of readability and quality, but not in terms of credibility. Second, participants’ expectations of quality were rarely met. Third, when participants saw only one article, differences in the perception of automated and human-written articles were small. However, when presented with two articles at once, participants preferred human-written news for readability but automated news for credibility. These results contest previous claims according to which expectation adjustment explains differences in perceptions of human-written and automated news.


New Media & Society | 2018

Equal access to online information? Google’s suicide-prevention disparities may amplify a global digital divide

Sebastian Scherr; Mario Haim; Florian Arendt

Worldwide, people profit from equally accessible online health information via search engines. Therefore, equal access to health information is a global imperative. We studied one specific scenario, in which Google functions as a gatekeeper when people seek suicide-related information using both helpful and harmful suicide-related search terms. To help prevent suicides, Google implemented a “suicide-prevention result” (SPR) at the very top of such search results. While this effort deserves credit, the present investigation compiled evidence that the SPR is not equally displayed to all users. Using a virtual agent-based testing methodology, a set of 3 studies in 11 countries found that the presentation of the SPR varies depending on where people search for suicide-related information. Language is a key factor explaining these differences. Google’s algorithms thereby contribute to a global digital divide in online health-information access with possibly lethal consequences. Higher and globally balanced display frequencies are desirable.


Archive | 2017

Normative Qualitätsansprüche an algorithmischen Journalismus

Konstantin Dörr; Mario Haim; Nina Köberer

Advancing digitalization and automation in journalism also puts professional-ethical conduct under pressure. Automated text production and its distribution raise ethical questions at the individual, organizational, and societal levels. With regard to normative quality standards, this contribution discusses questions of journalistic responsibility and transparency. It becomes apparent that the technology substantially shapes the dimensions of responsibility before, during, and after news production. New actors play a central role here, as do the normative demands concerning the handling and labeling of algorithmically generated content.


Computational Social Science | 2018

Who sets the cyber agenda? Intermedia agenda-setting online: the case of Edward Snowden’s NSA revelations

Mario Haim; Gabriel Weimann; Hans-Bernd Brosius


Studies in Communication | Media | 2018

Popularity cues in online media: A review of conceptualizations, operationalizations, and general effects

Mario Haim; Anna Sophie Kümpel; Hans-Bernd Brosius


AoIR Selected Papers of Internet Research | 2017

Popularity indicators in online media. A review of research on the effects of metric user information

Anna Sophie Kümpel; Mario Haim


Archive | 2016

Zum Einfluss von Suchmaschinen-Algorithmen auf das Erscheinen von Hinweisen zur Telefonseelsorge bei erhöhter Suizidalität

Mario Haim; Florian Arendt; Sebastian Scherr

Collaboration


Dive into Mario Haim's collaboration.

Top Co-Authors

Sebastian Scherr

Ludwig Maximilian University of Munich
