Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dale L. Goodhue is active.

Publication


Featured research published by Dale L. Goodhue.


Management Information Systems Quarterly | 2000

Data warehousing supports corporate strategy at First American Corporation

Brian L. Cooper; Hugh J. Watson; Barbara H. Wixom; Dale L. Goodhue

From 1990 through 1998, First American Corporation (FAC) changed its corporate strategy from a traditional banking approach to a customer relationship-oriented strategy that placed FAC's customers at the center of all aspects of the company's operations. The transformation made FAC an innovative leader in the financial services industry. This case study describes FAC's transformation and the way in which a data warehouse called VISION helped make it happen. FAC's experiences suggest lessons for managers who plan to use technology to support changes that are designed to significantly improve organizational performance. In addition, they raise interesting questions about the means by which information technology can be used to gain competitive advantage.


Information & Management | 2000

User evaluations of IS as surrogates for objective performance

Dale L. Goodhue; Barbara D. Klein; Salvatore T. March

User evaluations of information systems are frequently used as measures of MIS success, since it is extremely difficult to get objective measures of system performance. However, user evaluations have been appropriately criticized as lacking a clearly articulated theoretical basis for linking them to systems effectiveness, and almost no research has explicitly tested the link between user evaluations of systems and objectively measured performance. In this paper, we focus on user evaluations of task-technology fit for mandatory-use systems and develop theoretical arguments for the link to individual performance. This link is then empirically tested in a controlled experiment with objective performance measures and carefully validated user evaluations. Statistically significant support for the link is found for one measure of performance but not for a second. These findings are consistent with other studies that found users are not necessarily accurate reporters of key constructs related to IS use, specifically that self-reporting is a poor measure of actual utilization. The possibility that user evaluations have a stronger link to performance when users receive feedback on their performance is proposed. Implications are discussed.


Information & Management | 2004

Understanding the local-level costs and benefits of ERP through organizational information processing theory

Thomas F. Gattiker; Dale L. Goodhue

Using organizational information processing theory (OIPT), we suggest several factors that influence some of the enterprise resource planning (ERP) costs and benefits that organizations are experiencing. Though we do not attempt to address all important factors that contribute to an ERP's impact, we suggest two organizational characteristics that may have received insufficient attention in other ERP literature: interdependence and differentiation. High interdependence among organizational sub-units contributes to positive ERP-related effects because of ERP's ability to coordinate activities and facilitate information flows. However, when differentiation among sub-units is high, organizations may incur ERP-related compromise or design costs. We provide a case study that explores the viability of this framework. The case describes some local-level impacts of ERP and provides some evidence of the validity of the model. Unexpected findings are also presented.


Hawaii International Conference on System Sciences | 2006

PLS, Small Sample Size, and Statistical Power in MIS Research

Dale L. Goodhue; William Lewis; Ronald L. Thompson

There is a pervasive belief in the Management Information Systems (MIS) field that Partial Least Squares (PLS) has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes. We conducted a study using Monte Carlo simulation to compare these three relatively popular techniques for modeling relationships among variables under varying sample sizes (N = 40, 90, 150, and 200) and varying effect sizes (large, medium, small, and no effect). The focus of the analysis was on comparing the path estimates and the statistical power for each combination of technique, sample size, and effect size. The results suggest that PLS with bootstrapping does not have special abilities with respect to statistical power at small sample sizes. In fact, for simple models with normally distributed data and relatively reliable measures, none of the three techniques has adequate power to detect small or medium effects at small sample sizes. These findings run counter to extant suggestions in the MIS literature.
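
To make the sample-size-versus-power relationship concrete, here is a minimal Monte Carlo sketch for ordinary least squares only, run at the sample sizes the study used. The standardized effect sizes, replication count, and alpha level are illustrative assumptions, not the authors' simulation design.

```python
# Toy Monte Carlo power estimate for a single regression path (illustrative only).
import numpy as np
from scipy import stats

def ols_power(beta, n, reps=2000, alpha=0.05, seed=0):
    """Fraction of simulated samples in which the slope is significant at alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        # y has unit variance, with a true standardized path of `beta`.
        y = beta * x + np.sqrt(1 - beta**2) * rng.standard_normal(n)
        slope, intercept, r, p, se = stats.linregress(x, y)
        hits += p < alpha
    return hits / reps

for beta, label in [(0.1, "small"), (0.3, "medium"), (0.5, "large")]:
    for n in (40, 90, 150, 200):
        print(f"{label:6s} effect, N={n:3d}: power ~ {ols_power(beta, n):.2f}")
```

Running this shows the familiar pattern the paper points to: small effects remain badly under-powered at N = 40 regardless of technique, while large effects are detectable even at modest sample sizes.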


Information Systems Research | 2007

Research Note---Statistical Power in Analyzing Interaction Effects: Questioning the Advantage of PLS with Product Indicators

Dale L. Goodhue; William Lewis; Ronald L. Thompson

A significant amount of information systems (IS) research involves hypothesizing and testing for interaction effects. Chin et al. (2003) completed an extensive experiment using Monte Carlo simulation that compared two different techniques for detecting and estimating such interaction effects: partial least squares (PLS) with a product indicator approach versus multiple regression with summated indicators. By varying the number of indicators for each construct and the sample size, they concluded that PLS using product indicators was better (at providing higher and presumably more accurate path estimates) than multiple regression using summated indicators. Although we view the Chin et al. (2003) study as an important step in using Monte Carlo analysis to investigate such issues, we believe their results give a misleading picture of the efficacy of the product indicator approach with PLS. By expanding the scope of the investigation to include statistical power, and by replicating and then extending their work, we reach a different conclusion---that although PLS with the product indicator approach provides higher point estimates of interaction paths, it also produces wider confidence intervals, and thus provides less statistical power than multiple regression. This disadvantage increases with the number of indicators and (up to a point) with sample size. We explore the possibility that these surprising results can be explained by capitalization on chance. Regardless of the explanation, our analysis leads us to recommend that if sample size or statistical significance is a concern, regression or PLS with product of the sums should be used instead of PLS with product indicators for testing interaction effects.
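
As a rough illustration of the summated-indicator alternative discussed above, the sketch below averages simulated indicators into composite scores and tests the interaction with a centered product term in ordinary regression. All loadings, effect sizes, indicator counts, and the sample size are arbitrary assumptions, and statsmodels is assumed to be available; this is not the authors' code.

```python
# Moderated regression with summated (averaged) indicator scores (illustrative sketch).
import numpy as np
import statsmodels.api as sm  # assumed available

rng = np.random.default_rng(1)
n = 150
x_true = rng.standard_normal(n)   # latent predictor
z_true = rng.standard_normal(n)   # latent moderator
y = 0.3 * x_true + 0.3 * z_true + 0.2 * x_true * z_true + rng.standard_normal(n)

def indicators(latent, k=4, noise=0.6):
    """Simulate k noisy reflective indicators for a latent score."""
    return np.column_stack([latent + noise * rng.standard_normal(n) for _ in range(k)])

# Average each construct's indicators, center the composites,
# then form the interaction term as the product of the composites.
x = indicators(x_true).mean(axis=1)
z = indicators(z_true).mean(axis=1)
x, z = x - x.mean(), z - z.mean()
X = sm.add_constant(np.column_stack([x, z, x * z]))
print(sm.OLS(y, X).fit().summary())
```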


Management Information Systems Quarterly | 2012

Does PLS have advantages for small sample size or non-normal data?

Dale L. Goodhue; William Lewis; Ronald L. Thompson

There is a pervasive belief in the MIS research community that PLS has advantages over other techniques when analyzing small sample sizes or data with non-normal distributions. Based on these beliefs, major MIS journals have published studies using PLS with sample sizes that would be deemed unacceptably small if used with other statistical techniques. We used Monte Carlo simulation more extensively than previous research to evaluate PLS, multiple regression, and LISREL in terms of accuracy and statistical power under varying conditions of sample size, normality of the data, number of indicators per construct, reliability of the indicators, and complexity of the research model. We found that PLS performed as effectively as the other techniques in detecting actual paths and in not falsely detecting non-existent paths. However, because PLS (like regression) apparently does not compensate for measurement error, PLS and regression were consistently less accurate than LISREL. When used with small sample sizes, PLS, like the other techniques, suffers from increased standard deviations, decreased statistical power, and reduced accuracy. All three techniques were remarkably robust against moderate departures from normality, and equally so. In total, we found that the similarities in results across the three techniques were much stronger than the differences.
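
A quick numeric illustration, under assumed parameters of my own choosing, of the measurement-error attenuation the abstract refers to: when a construct is measured with reliability r, a slope estimated from the observed scores shrinks toward roughly r times the true path, which is why techniques that do not model measurement error tend to underestimate structural paths.

```python
# Demonstrate attenuation of a regression slope due to measurement error (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n, true_path, reliability = 100_000, 0.5, 0.8

latent = rng.standard_normal(n)
y = true_path * latent + np.sqrt(1 - true_path**2) * rng.standard_normal(n)

# Observed score = latent + error, with error variance chosen to hit the target reliability.
error_var = (1 - reliability) / reliability
observed = latent + np.sqrt(error_var) * rng.standard_normal(n)

slope = np.cov(observed, y)[0, 1] / np.var(observed)
print(f"true path {true_path}, estimate from noisy measure {slope:.3f}, "
      f"expected attenuated value {true_path * reliability:.3f}")
```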


Information & Management | 2002

The benefits of data warehousing: why some organizations realize exceptional payoffs

Hugh J. Watson; Dale L. Goodhue; Barbara H. Wixom

Data warehousing is one of the key developments in the information systems (IS) field. While its benefits are plentiful, some organizations are receiving more significant returns than others. The types of returns can vary in the impact they have on the organization and the ease with which they can be quantified and measured. This article presents a framework that shows how data warehouses can transform an organization; it also offers a compelling explanation for why differences in impact exist. Case studies of data warehousing initiatives at a large manufacturing company (LMC), the Internal Revenue Service, and a financial services company (FSC) are presented and discussed within the context of the framework. The analysis shows that the benefits that each company received can be tied to the way in which it conforms to the framework.


Management Information Systems Quarterly | 1997

Can humans detect errors in data? Impact of base rates, incentives, and goals

Barbara D. Klein; Dale L. Goodhue; Gordon B. Davis

There is strong evidence that data items stored in organizational databases have a significant rate of errors. If undetected in use, those errors in stored data may significantly affect business outcomes. Published research suggests that users of information systems …


International Journal of Human-Computer Interaction | 2003

Implementation Partner Involvement and Knowledge Transfer in the Context of ERP Implementations

Marc N. Haines; Dale L. Goodhue

Enterprise Resource Planning (ERP) systems are difficult and costly to implement. Studies show that a large portion of the overall implementation cost can be attributed to consulting fees. Indeed, hardly any organization has the internal knowledge and skills to implement an ERP system successfully without external help. Therefore, it becomes crucial to use consultants effectively to improve the likelihood of success and simultaneously keep the overall costs low. In this article the authors draw from agency theory to generate a framework that explains how consultant involvement and knowledge of the implementing organization can impact the outcome of the project. Portions of the framework are illustrated by examples from a series of interviews involving 12 companies that had implemented an ERP. It is suggested that choosing the right consultants and using their skills and knowledge appropriately, as well as transferring and retaining essential knowledge within the organization, is essential to the overall success of an ERP system implementation.


Hawaii International Conference on System Sciences | 1992

User evaluations of MIS success: what are we really measuring?

Dale L. Goodhue

Many empirical studies in the MIS literature ask users for their evaluations of systems as a measure of IS success. These user evaluations are variously called user attitudes, information satisfaction, MIS appreciation, information channel disposition, value, usefulness, etc. Do all these terms refer to the same underlying construct? If there are different constructs, what are they, and which instruments cover which constructs? There has been no clear discussion in the literature comparing these different measures, and no framework has been developed by which to compare them. MIS researchers are faced with some confusion about how to compare results across studies, and a lack of guidance in choosing an appropriate instrument for new empirical work. A theoretical framework is presented showing the critical constructs which lead in a causal fashion from systems and their characteristics to performance impacts at the individual level. This allows one to more clearly define and contrast the various user evaluation constructs, and to develop guidance for researchers contemplating employing them.

Collaboration


Dive into Dale L. Goodhue's collaboration.

Top Co-Authors

Jeanne W. Ross

Massachusetts Institute of Technology


Cynthia Mathis Beath

University of Texas at Austin


John F. Rockart

Massachusetts Institute of Technology


Marc N. Haines

University of Wisconsin–Milwaukee
