Publications


Featured research published by Raymond Hubbard.


International Journal of Research in Marketing | 1994

Replications and Extensions in Marketing: Rarely Published but Quite Contrary

Raymond Hubbard; J. Scott Armstrong

Replication is rare in marketing. Of 1,120 papers sampled from three major marketing journals, none were replications. Only 1.8% of the papers were extensions, and they consumed 1.1% of the journal space. On average, these extensions appeared seven years after the original study. The publication rate for such works has been decreasing since the 1970s. Published extensions typically produced results that conflicted with the original studies; of the 20 extensions published, 12 conflicted with the earlier results, and only 3 provided full confirmation. Published replications do not attract as many citations after publication as do the original studies, even when the results fail to support the original studies.


Strategic Management Journal | 1998

Replication in strategic management: scientific testing for validity, generalizability, and usefulness

Raymond Hubbard; Daniel E. Vetter; Eldon Little

A number of studies have shown that little replication and extension research is published in the business disciplines. This has deleterious consequences for the development of a cumulative body of knowledge in these same areas. It has been speculated, but never formally tested, that replication research is more likely to be published in lower tiers of the journal hierarchy. The present paper indicates very low levels of replication in management and strategic management journals, regardless of their prestige. Moreover, even those replications that are published tend not to be critical—odd in applied social sciences that are largely preparadigmatic and where extensibility, generalizability and utility of scientific constructs tend to be low. The goal of science is empirical generalization, or knowledge development. Systematically conducted replications with extensions facilitate this goal. It is clear, however, that many editors, reviewers, and researchers hold attitudes toward replication research that betray a lack of understanding about its role. Long-run strategies to dispel these misconceptions must involve changes in graduate training aimed at making the conduct of such vital work second nature. It is further suggested that journals in all tiers create a section specifically for the publication of replication research, and that top-tier journals take the lead in this regard.


Journal of Business Research | 1996

An empirical comparison of published replication research in accounting, economics, finance, management, and marketing

Raymond Hubbard; Daniel E. Vetter

The results of a large-scale content analysis of 18 leading business journals covering the 22-year period 1970 to 1991 show that published replication and extension research is uncommon in the business disciplines. For example, such research typically constitutes less than 10% of published empirical work in the accounting, economics, and finance areas, and 5% or less in the management and marketing fields. Further, when such work is undertaken the results usually conflict with existing findings. This raises the prospect that empirical results in these areas may be of limited value for guiding the development of business theory and practice. Strategies for cultivating a replication research tradition to facilitate knowledge development in the business disciplines are suggested.


Journal of Business Research | 1987

An empirical comparison of alternative methods for principal component extraction

Raymond Hubbard; Stuart J. Allen

A major problem confronting users of principal component analysis is determining how many components to extract from an empirical correlation matrix. Using 30 such matrices obtained from marketing and psychology sources, the authors provide a comparative assessment of the extraction capabilities exhibited by five principal component decision rules: the Kaiser-Guttman, scree, Bartlett, Horn, and random intercepts procedures. Application of these rules produces highly discrepant results. The random intercepts and Bartlett formulations yield unacceptable component solutions by grossly under- and overfactoring, respectively. The Kaiser-Guttman and scree rules performed equivalently, yet revealed tendencies to overfactor. In comparison, Horn's test acquitted itself with distinction and warrants greater attention from applied researchers.
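Two of these retention rules are simple enough to sketch in a few lines. The following is a minimal illustration, not the authors' code, of the Kaiser-Guttman rule (retain components whose correlation-matrix eigenvalues exceed 1) and Horn's parallel analysis (retain components whose eigenvalues exceed the average eigenvalues of same-sized random data); the function names, the 4-variable correlation matrix, and the sample size are hypothetical.

```python
import numpy as np

def kaiser_guttman(corr):
    """Kaiser-Guttman rule: retain components with eigenvalues > 1."""
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    return int(np.sum(eigvals > 1.0))

def horn_parallel(corr, n_obs, n_sims=1000, seed=0):
    """Horn's parallel analysis: retain components whose eigenvalues exceed
    the mean eigenvalues of correlation matrices computed from random
    normal data of the same dimensions (n_obs observations, p variables)."""
    rng = np.random.default_rng(seed)
    p = corr.shape[0]
    eigvals = np.linalg.eigvalsh(corr)[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        x = rng.standard_normal((n_obs, p))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    return int(np.sum(eigvals > random_eigs.mean(axis=0)))

# Hypothetical example: a 4-variable correlation matrix from 200 observations.
R = np.array([[1.0, 0.6, 0.5, 0.1],
              [0.6, 1.0, 0.4, 0.2],
              [0.5, 0.4, 1.0, 0.1],
              [0.1, 0.2, 0.1, 1.0]])
print(kaiser_guttman(R))
print(horn_parallel(R, n_obs=200))
```

The finding that Kaiser-Guttman tends to overfactor while Horn's test performs well is consistent with parallel analysis using a data-driven threshold rather than the fixed cutoff of 1.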


Educational and Psychological Measurement | 2000

The Historical Growth of Statistical Significance Testing in Psychology – and Its Future Prospects

Raymond Hubbard; Patricia A. Ryan

The historical growth in the popularity of statistical significance testing is examined using a random sample of annual data from 12 American Psychological Association (APA) journals. The results replicate and extend the findings of Hubbard, Parsa, and Luthy, who used data from only the Journal of Applied Psychology. The results also confirm Gigerenzer and Murray's allegation that an inference revolution occurred in psychology between 1940 and 1955. An assessment of the future prospects for statistical significance testing is offered. It is concluded that replication with extension research, and its connections with meta-analysis, is a better vehicle for developing a cumulative knowledge base in the discipline than statistical significance testing. It is conceded, however, that statistical significance testing is likely here to stay.


Marketing Letters | 1992

Are Null Results Becoming an Endangered Species in Marketing?

Raymond Hubbard; J. Scott Armstrong

Editorial procedures in the social and biomedical sciences are said to promote studies that falsely reject the null hypothesis. This problem may also exist in major marketing journals. Of 692 papers using statistical significance tests sampled from the Journal of Marketing, Journal of Marketing Research, and Journal of Consumer Research between 1974 and 1989, only 7.8% failed to reject the null hypothesis. The percentage of null results declined by one-half from the 1970s to the 1980s. The JM and the JMR registered marked decreases. The small percentage of insignificant results could not be explained as being due to inadequate statistical power.
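The closing power claim can be made concrete with the standard calculation. Below is a minimal sketch, not drawn from the paper, of approximate power for a two-sided, two-sample comparison of means using the normal approximation; the effect size, sample size, and function name are hypothetical.

```python
from scipy.stats import norm

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of means
    (normal approximation to the t distribution)."""
    se = (2.0 / n_per_group) ** 0.5      # SE of the standardized mean difference
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    shift = effect_size / se             # noncentrality under the alternative
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

# Hypothetical example: a "medium" effect (d = 0.5) with 60 subjects per group.
print(round(power_two_sample(0.5, 60), 2))  # about 0.78
```

If power at plausible effect sizes is high, as in this example, a shortage of null results cannot be blamed on underpowered studies, which is the logic of the authors' check.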


Marketing Theory | 2002

How the Emphasis on ‘Original’ Empirical Marketing Research Impedes Knowledge Development

Raymond Hubbard; R. Murray Lindsay

Empirical research in marketing should focus on the development of empirical generalizations. Marketers do a huge amount of empirical research, but have little in the way of empirical generalizations. This is primarily because most empirical research consists of ‘original’ or ‘novel’ works looking for significant differences, rather than significant sameness, in unrelated data sets, thus exemplifying the ‘cult of the isolated study’. As a result, the marketing literature is made up largely of uncorroborated, fragmented, ‘one-off’ results. Such results are of little use to marketing practitioners or academicians. We discuss a number of impediments to the development of empirical generalizations – preoccupation with the hypothetico-deductive conception of science, preoccupation with ‘statistical’ rather than ‘empirical’ generalization, the ‘publish or perish’ syndrome in academia, and denigration of replication-with-extension research. We conclude that replication-with-extension research must be championed as the vehicle for discovering empirical generalizations.


Theory & Psychology | 2004

Alphabet Soup: Blurring the Distinctions Between p’s and α’s in Psychological Research

Raymond Hubbard

Confusion over the reporting and interpretation of results of commonly employed classical statistical tests is recorded in a sample of 1,645 papers from 12 psychology journals for the period 1990 through 2002. The confusion arises because researchers mistakenly believe that their interpretation is guided by a single unified theory of statistical inference. But this is not so: classical statistical testing is a nameless amalgamation of the rival and often contradictory approaches developed by Ronald Fisher, on the one hand, and Jerzy Neyman and Egon Pearson, on the other. In particular, there is extensive failure to acknowledge the incompatibility of Fisher’s evidential p value with the Type I error rate, α, of Neyman–Pearson statistical orthodoxy. The distinction between evidence (p’s) and errors (α’s) is not trivial. Rather, it reveals the basic differences underlying Fisher’s ideas on significance testing and inductive inference, and Neyman–Pearson views on hypothesis testing and inductive behavior. So complete is this misunderstanding over measures of evidence versus error that it is not viewed as even being a problem among the vast majority of researchers and other relevant parties. These include the APA Task Force on Statistical Inference, and those writing the guidelines concerning statistical testing mandated in APA Publication Manuals. The result is that, despite supplanting Fisher’s significance-testing paradigm some fifty years ago, recognizable applications of Neyman–Pearson theory are few and far between in psychology’s empirical literature. On the other hand, Fisher’s influence is ubiquitous.
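The p/α distinction the paper turns on is easy to state in code. The sketch below illustrates the two logics; it is not an example from the paper, and the one-sample z test and all numbers in it are hypothetical.

```python
from scipy.stats import norm

# Hypothetical data summary: sample mean 103.2, H0 mean 100, sigma 15, n = 50.
z = (103.2 - 100) / (15 / 50 ** 0.5)

# Fisher: compute the exact p value after the fact and report it as a
# graded measure of evidence against H0.
p_value = 2 * norm.sf(abs(z))
print(f"Fisher: p = {p_value:.3f}")

# Neyman-Pearson: fix the Type I error rate alpha before seeing the data,
# then make a binary reject/retain decision; the exact p plays no role.
alpha = 0.05
decision = "reject H0" if abs(z) > norm.ppf(1 - alpha / 2) else "retain H0"
print(f"Neyman-Pearson: alpha = {alpha}, decision: {decision}")
```

Reporting “p < .05” conflates the two: it treats Fisher’s data-dependent evidence measure as if it were Neyman–Pearson’s pre-set error rate, which is the confusion documented here and in the Journal of Marketing Education paper below.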


Journal of Marketing Education | 2006

Why We Don't Really Know What Statistical Significance Means: Implications for Educators

Raymond Hubbard; J. Scott Armstrong

In marketing journals and market research textbooks, two concepts of statistical significance—p values and α levels—are commonly mixed together. This is unfortunate because they each have completely different interpretations. The upshot is that many investigators are confused over the meaning of statistical significance. We explain how this confusion has arisen and make several suggestions to teachers and researchers about how to overcome it.

Collaboration


Dive into Raymond Hubbard's collaborations.

Top Co-Authors

Stuart J. Allen
Pennsylvania State University

Eldon Little
Indiana University Southeast

Daniel E. Vetter
Central Michigan University

R. Murray Lindsay
University of Saskatchewan