
Publication


Featured research published by Anton Kühberger.


Organizational Behavior and Human Decision Processes | 2002

Framing decisions: Hypothetical and real

Anton Kühberger; Michael Schulte-Mecklenbeck; Josef Perner

This paper addresses the general issue of whether, or under what conditions, the practice of investigating human decision making in hypothetical choice situations is warranted. A particularly relevant factor affecting the match between real and hypothetical decisions is the importance of a decision's consequences. In the literature, experimental gambles tend to confound the reality of the decision situation with the size of the payoffs: hypothetical decisions tend to offer large payoffs, and real decisions tend to offer only small payoffs. Using the well-known framing effect (a tendency toward risk aversion for gains and risk seeking for losses), we find that the framing effect depends on payoff size, but that hypothetical choices match real choices for small as well as large payoffs. These results appear paradoxical unless the size of the incentive is clearly distinguished from the reality status of the decision (real versus hypothetical). Since the field lacks a general theory of when hypothetical decisions match real decisions, the discussion presents an outline for developing such a theory.


PLOS ONE | 2014

Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

Anton Kühberger; Astrid Fritz; Thomas Scherndl

Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon; therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent of sample size, yet this may not hold empirically: non-independence could indicate publication bias.

Methods: We investigate whether effect size is independent of sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research, extracted the p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values.

Results: We found a negative correlation of r = −.45 [95% CI: −.53, −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings.

Conclusion: The negative correlation between effect size and sample size, together with the biased distribution of p values, indicates pervasive publication bias across the entire field of psychology.
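
To make the paper's diagnostic concrete, here is a minimal simulation sketch (not the authors' code; the true effect size, sample-size range, and publication filter are illustrative assumptions). If only significant results get published, small-sample studies survive only when their observed effect happens to be large, which produces exactly the kind of negative correlation between effect size and sample size that the paper reports:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.2                             # assumed small true effect (illustrative)
published = []
for _ in range(5000):
    n = int(rng.integers(10, 200))       # per-group sample size
    treat = rng.normal(true_d, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treat, control)
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    d_obs = (treat.mean() - control.mean()) / pooled_sd
    if p < 0.05:                         # publication filter: only significant results survive
        published.append((n, d_obs))

ns, ds = np.array(published).T
print("r(sample size, observed effect) =", round(np.corrcoef(ns, ds)[0, 1], 2))
# Clearly negative: small-n studies appear in print only with inflated effects.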


Theory & Psychology | 2013

A comprehensive review of reporting practices in psychological journals: Are effect sizes really enough?

Astrid Fritz; Thomas Scherndl; Anton Kühberger

Over-reliance on significance testing has been heavily criticized in psychology. The American Psychological Association therefore recommended supplementing the p value with additional elements such as effect sizes and confidence intervals, and taking statistical power seriously. This article elaborates the conclusions that can be drawn when these measures accompany the p value. An analysis of over 30 summary papers (covering over 6,000 articles) reveals that, if anything, only effect sizes are reported in addition to p values (38%). Only every tenth article provides a confidence interval, and statistical power is reported in only 3% of articles. An increase over time in the reporting frequency of these supplements to p values, owing to stricter guidelines, was found for effect sizes only. Given these practices, research faces a serious problem in the context of dichotomous statistical decision making: since significant results have a higher probability of being published (publication bias), effect sizes reported in articles may be seriously overestimated.


Mind & Language | 1999

Predicting others through simulation or by theory? A method to decide

Josef Perner; Anton Kühberger; Siegfried Schrofner

A method is presented for deciding whether correct predictions about other people are based on simulation or on theory use. The differentiating power of this method was assessed with cognitive estimation biases (e.g., estimating the area of Brazil) in two variations. Experiments 1 and 2 operated with the influence of response scales of different lengths. Experiment 3 used the difference between free estimates, which tended to be far off the true value, and estimates constrained by an appropriate response scale, which became much more realistic. The critical question is how well observer subjects can predict these target biases under two different presentation conditions. Response scale biases (Experiments 1 and 2) were more strongly predicted when observer subjects were presented with the two scales juxtaposed than when responses for each scale were given independently. This speaks for the use of a theory, since simulation should, if there is any difference at all, be made more difficult by the juxtaposition of conditions. The difference between free and constrained estimations (Experiment 3) was more strongly predicted under independent than under juxtaposed presentation. This speaks for the use of simulation, since use of a theory should, if anything, be helped by the juxtaposition of problems, which highlights the theoretically relevant factor. Results are discussed in view of recent proposals about when simulation is likely to be used, i.e., for belief fixation but not action prediction (Stich and Nichols, 1995b), for content fixation (Heal, 1996a), and for rational effects only (Heal, 1996b).


The Journal of Psychology | 1996

Decision Processes and Decision Trees in Gambles and More Natural Decision Tasks

Oswald Huber; Anton Kühberger

Most of the experimental results on the risky decision behavior of individuals have been obtained with simple gambles. An investigation was conducted to determine whether those results can be generalized to more natural situations; 32 participants were required to make choices in one gambling task and three natural-decision tasks. Half were trained and guided to draw a decision tree during the decision process. Behavior in the gambling task differed systematically from that in the natural-decision tasks, both in the cognitive representation of the decision situation constructed by the decision maker and in the role of subjective probabilities in that representation. The results call into question the general claim that drawing a decision tree aids decision making. Among other effects, participants who drew a decision tree introduced less background knowledge and more often biased the presented information than did those who did not.


BMC Research Notes | 2015

The significance fallacy in inferential statistics.

Anton Kühberger; Astrid Fritz; Eva Lermer; Thomas Scherndl

Background: Statistical significance is an important concept in empirical science; however, the meaning of the term varies widely. We investigate the intuitive understanding of the notion of significance.

Methods: We described the results of two different experiments published in a major psychological journal to a sample of psychology students, labeling the findings as 'significant' versus 'non-significant.' Participants were asked to estimate the effect sizes and sample sizes of the original studies.

Results: Labeling the results of a study as significant was associated with estimates of a big effect, but was largely unrelated to sample size. Similarly, non-significant results were estimated as near zero in effect size.

Conclusions: After considerable training in statistics, students largely equate statistical significance with medium to large effect sizes, rather than with large sample sizes. The data show that students assume that statistical significance is due to real effects, rather than to 'statistical tricks' (e.g., increasing sample size).
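
The fallacy is easy to demonstrate numerically. In the sketch below (the effect size d = 0.1 and the sample sizes are assumptions chosen for illustration, not data from the paper), the same small effect is non-significant at n = 50 but highly significant at n = 5,000, so a 'significant' label can reflect sample size as much as effect size:

import math
from scipy import stats

d = 0.1                                   # assumed small standardized effect
for n in (50, 5000):
    t = d * math.sqrt(n)                  # one-sample t statistic for mean/sd = d
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-sided p value
    print(f"n = {n:5d}   t = {t:5.2f}   p = {p:.4f}")
# Prints roughly: n = 50   -> p ≈ 0.48 ("non-significant" despite a real effect)
#                 n = 5000 -> p < .0001 ("significant" mainly via sample size)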


Theory & Psychology | 2013

On the correlation between effect size and sample size: A reply

Anton Kühberger; Thomas Scherndl; Astrid Fritz

In a comment on our paper, Bradley and Brand (2013) argue that effect sizes are exaggerated owing to low power and publication bias. They propose to correct these exaggerations by applying a specific formula that yields a better estimate of the "true" effect size. In a simulation we test the effect of this formula and find this "corrective" approach unsatisfactory. We agree with Bradley and Brand that effect sizes are important in primary and secondary research, and that exaggerated effect sizes are a serious problem in research. However, we disagree on the appropriate reaction: a diagnostic approach may be more appropriate than a corrective approach.


Decision Analysis | 2012

Explaining Risk Attitude in Framing Tasks by Regulatory Focus: A Verbal Protocol Analysis and a Simulation Using Fuzzy Logic

Anton Kühberger; Christian Wiener

We investigate the role of salient regulatory focus for risk attitude in framed gambles. In Experiment 1 we measured regulatory focus by collecting verbal protocols and found that people avoided risk under prevention focus and preferred risk under promotion focus. In Experiment 2 the same result was found when measuring people's regulatory focus with a questionnaire. Finally, the questionnaire data were used as inputs for simulating the choices of our participants with a fuzzy-logic decision generator. The findings show that regulatory focus has a strong effect on risk attitude and that risk attitude in framing tasks can be successfully modeled as a form of fuzzy processing.
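
The abstract does not spell out the fuzzy-logic decision generator, so the following is a hypothetical sketch of the general modeling idea only: questionnaire scores (assumed here to be 1-7 scales) yield fuzzy membership degrees for promotion and prevention focus, and two rules (promotion focus favors risk seeking, prevention focus favors risk aversion) are combined by weighted averaging into a risky-choice probability. All breakpoints and rule outputs are invented for illustration:

def high(score, lo=1.0, hi=7.0):
    """Ramp membership: 0 at the scale bottom, 1 at the top."""
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))

def p_risky(promotion, prevention):
    promo = high(promotion)   # degree of promotion focus
    prev = high(prevention)   # degree of prevention focus
    if promo + prev == 0:
        return 0.5            # no focus signal: indifference
    # Rule outputs (0.8 for promotion, 0.2 for prevention) are
    # illustrative anchor probabilities, not fitted values.
    return (promo * 0.8 + prev * 0.2) / (promo + prev)

print(p_risky(promotion=6, prevention=2))  # ~0.70: promotion focus dominates, so risk seeking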


Thinking & Reasoning | 2011

Counterfactual closeness and predicted affect

Anton Kühberger; Christa Großbichler; Angelika Wimmer

Empirical research on counterfactual thinking has found a closeness effect: people report higher negative affect if an actual outcome is close to a better counterfactual outcome. However, it remains unclear what actually counts as a "close" miss. In three experiments that manipulated close counterfactuals, closeness effects were found only when closeness was unambiguously defined, either with respect to a contrasted alternative or with respect to a categorical boundary. In a real task, people failed to report greater negative affect when encountering a close numerical miss, although they predicted greater negative affect hypothetically. These results show that counterfactual closeness effects on affect depend on closeness being accessible and unambiguously defined.


Mind & Society | 2002

Framing and the theory-simulation controversy. Predicting people's decisions

Josef Perner; Anton Kühberger

We introduce a particular way of drawing the distinction between the use of theory and the use of simulation in predicting people's decisions, and describe an empirical method to test whether theory or simulation is used in a particular case. We demonstrate this method with two effects in decision making involving the choice between a safe option (take amount X) and a risky option (take double the amount X with probability 1/2). People's predictions of choice frequencies for trivial (€0.75) as opposed to substantial (€18) amounts in Experiment 1 are quite accurate when they are presented with both conditions juxtaposed, but less accurate when given only one of the conditions. This result is interpreted as speaking for the use of theory in prediction. In contrast, people's predictions of the framing effect for substantial amounts (more risk seeking for positively than for negatively framed problems) are accurate only for independent predictions, not for juxtaposed predictions, which speaks for the use of simulation.
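
A worked detail that helps in reading the design (our gloss; the equality is implicit in the abstract's description of the gamble): the safe option pays X for sure, so EV(safe) = X, while the risky option pays 2X with probability 1/2, so EV(risky) = (1/2) · 2X = X. Because the two options are equal in expected value, a preference between them is a clean measure of risk attitude rather than of payoff maximization.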
