Miguel I. Aguirre-Urreta
DePaul University
Publications
Featured research published by Miguel I. Aguirre-Urreta.
Management Information Systems Quarterly | 2012
Miguel I. Aguirre-Urreta; George M. Marakas
Researchers in a number of disciplines, including Information Systems, have argued that much of past research may have incorrectly specified the relationship between latent variables and indicators as reflective when an understanding of a construct and its measures indicates that a formative specification would have been warranted. Coupled with the posited severe biasing effects of construct misspecification on structural parameters, these two assertions would lead to the conclusion that an important portion of our literature is largely invalid. While we do not delve into the issue of when one specification should be employed over another, our work here contends that construct misspecification, with one particular exception, does not lead to severely biased estimates. We argue, and show through extensive simulations, that a lack of attention to the metric in which relationships are expressed is responsible for the current belief in the negative effects of misspecification.
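The metric point can be illustrated with a toy regression (illustrative only, not the paper's simulation design): the same population relationship takes different numeric values in the raw and standardized metrics, so comparing an estimate expressed in one metric against a population value expressed in another can masquerade as bias.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical example: x is measured on a metric with variance 4,
# and the unstandardized slope relating y to x is 0.5.
x = rng.normal(0.0, 2.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)

b_unstd = np.cov(x, y)[0, 1] / np.var(x)   # slope in the raw metric
b_std = b_unstd * np.std(x) / np.std(y)    # same relationship, standardized metric

print(round(b_unstd, 2), round(b_std, 2))  # ≈ 0.5 vs. ≈ 0.71
```

Both numbers describe the identical population relationship; only the metric differs.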
ACM Sigmis Database | 2008
Miguel I. Aguirre-Urreta; George M. Marakas
The empirical literature comparing entity relationship and object-oriented modeling techniques, while vibrant, has often yielded equivocal findings. This review employs Norman's (1986) Theory of Action to distinguish between model creation and comprehension studies, and applies and extends the theoretical framework proposed by Gemino and Wand (2004) to highlight and detail several issues that may need further exploration if consistent results in this stream are to be realized. Specifically, this paper explores why and how issues of ontological foundation, training, equivalence of conceptual models, and modeling practices may result in differences between alternative modeling techniques. A comprehensive picture of this literature is provided and a number of potential avenues for future research are proposed.
Information Systems Research | 2014
Miguel I. Aguirre-Urreta; George M. Marakas
Information systems researchers have recently begun to propose models that include formatively specified constructs, and largely rely on partial least squares (PLS) to estimate the parameters of interest in those models. In this research, we focus on those cases where the formatively specified constructs are endogenous to other constructs in the research model in addition to their own manifest indicators, which are quite common in published research in the discipline, and analyze whether PLS is a valid statistical technique for estimating those models. Although there is evidence that covariance-based approaches can accurately estimate them, this is the first research that examines whether PLS can indeed do so. Through a theoretical analysis based on the inner workings of the PLS algorithm, which is later validated and extended through a series of Monte Carlo simulations, we conclude that this is not the case. Specifically, estimates obtained from PLS are capturing something other than the relationship of interest when the formatively specified constructs are endogenous to others in the model. We show how our results apply more generally to a class of models, and discuss implications for future research practice.
Measurement: Interdisciplinary Research & Perspective | 2016
Miguel I. Aguirre-Urreta; Mikko Rönkkö; George M. Marakas
One of the central assumptions of the causal-indicator literature is that all causal indicators must be included in the research model and that the exclusion of one or more relevant causal indicators would have severe negative consequences by altering the meaning of the latent variable. In this research we show that the omission of a relevant causal indicator does not affect downstream estimates relating the focal latent variable to other variables in the model, which challenges the current stance in the literature. Further, we argue that this occurrence presents a fundamental challenge to the causal-indicator literature, in that the lack of negative consequences is not consistent with the tenet that latent variables derive their meaning from the set of causal indicators included in a research model. Rather, though causal indicators help identify the focal latent variable, its meaning is derived from its position as a common factor of other downstream variables—latent or observed—to which it is related.
ACM Sigmis Database | 2013
Miguel I. Aguirre-Urreta; George M. Marakas; Michael E. Ellis
The accurate estimation of reliability is of great importance to the conduct and interpretation of empirical research as it is used to judge the quality of reported research, often plays a role in publication decisions, and is a key element of meta-analytic reviews. When employing partial least squares (PLS) as the method of analysis, the reliability of the composites involved in the model is typically the parameter examined. In this research, we describe the existence of three important issues concerning the accuracy of composite reliability estimation in PLS analysis: the assumption of equal indicator weights, the bias in loading estimates, and the lack of independence between indicator loadings and weights. We subsequently present an alternative approach to correct these issues. Using a Monte Carlo simulation we provide a demonstration of both the effects of these issues on research decisions and the improved accuracy of the alternative method.
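The equal-weights assumption the abstract identifies can be made concrete: for standardized indicators with independent errors, the reliability of a composite depends on the weights used to form it, so assuming equal weights when the composite is actually weighted misstates reliability. A minimal sketch with illustrative loadings and weights (not taken from the paper):

```python
import numpy as np

def composite_reliability_w(loadings, weights):
    """Reliability of a weighted composite of standardized indicators with
    loadings lam and independent errors: (w'lam)^2 / (w' (lam lam' + Theta) w)."""
    lam = np.asarray(loadings, dtype=float)
    w = np.asarray(weights, dtype=float)
    theta = np.diag(1.0 - lam**2)        # error variances of standardized indicators
    sigma = np.outer(lam, lam) + theta   # model-implied indicator covariance matrix
    return (w @ lam) ** 2 / (w @ sigma @ w)

lam = [0.9, 0.7, 0.5]
equal = composite_reliability_w(lam, [1.0, 1.0, 1.0])    # unit-weighted sum
unequal = composite_reliability_w(lam, [0.5, 0.3, 0.2])  # hypothetical unequal weights
print(round(equal, 3), round(unequal, 3))                # ≈ 0.753 vs. ≈ 0.824
```

The two reliabilities differ for the same indicators, which is why an equal-weights assumption is consequential when the analysis actually uses unequal weights.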
Journal of Organizational and End User Computing | 2012
George M. Marakas; Miguel I. Aguirre-Urreta
In this paper, the authors conduct a study to explore the evaluation and choice between candidate software applications. Using business professionals, technology adoption is investigated by presenting participants with an alternative choice set using software applications relevant to the professional domain of the subjects. Results from this study, focusing on models of intentions, provide evidence to suggest the underlying process by which choice behaviors are determined and demonstrate the value of incorporating choice into models of technology adoption, particularly in situations where selection is made from a set of candidate technologies, such as in an organizational adoption decision. In addition, theoretically derived models of comparison processes are examined to develop further understanding into how individuals arrive at a specific choice behavior. A second study is conducted to further validate the obtained results. Implications for future research into the processes leading to adoption of information technologies are also presented.
Information Systems Research | 2014
Miguel I. Aguirre-Urreta; George M. Marakas
We appreciate the interest shown by Rigdon et al. [Rigdon EE, Becker J-M, Rai A, Ringle CM, Diamantopoulos A, Karahanna E, Straub DW, Dijkstra TK (2014) Conflating antecedents and formative indicators: A comment on Aguirre-Urreta and Marakas. Inform. Systems Res. 25(4):780–784.] in our recent work and for the time and effort spent in carefully considering it and offering their comments and concerns. In what follows, and within the limitations of a short rejoinder, we offer our response to their comments, highlighting points of agreement and noting where more research is necessary.
Behaviour & Information Technology | 2014
Iris Reychav; Miguel I. Aguirre-Urreta
This research investigated Internet-based knowledge search patterns of engineers and scientists working in R&D for companies in the pharmacological and information technology sectors in Israel. Building on earlier work that considers the multidimensional nature of the relative advantage construct, we examine how perceptions of learning, informational convenience, and trust affected intentions to use the Internet to acquire new knowledge. In particular, these perceptions were studied with regard to both active and passive modes of interaction. We also considered here which types of technological knowledge are acquired by researchers, and how that differs across two professional communities of practice – scientists and engineers. This study sheds light on how R&D workers perceive the relative advantage of acquiring necessary knowledge through passive and active modes of communication with other researchers that are facilitated by the Internet. Findings are of interest to the literature on knowledge spillover because the capability of an organisation to acquire, disseminate, and exploit knowledge is crucial to R&D efforts.
Measurement: Interdisciplinary Research & Perspective | 2016
Miguel I. Aguirre-Urreta; Mikko Rönkkö; George M. Marakas
We begin this brief rejoinder by thanking all the authors who took time to provide comments on our work, which appeared recently in this journal (Aguirre-Urreta, Rönkkö, & Marakas, 2016). All commentaries appear to suggest that causal indicators cannot be used for measurement but differ in how strongly this conclusion is stated; commentaries challenging our work and its conclusions are notably absent. In what follows we outline some common themes across the commentaries, provide some thoughts on these themes, and outline areas we believe need further research.
Psychological Methods | 2018
Miguel I. Aguirre-Urreta; Mikko Rönkkö; Cameron N. McIntosh
Several calls have been made for replacing coefficient α with more contemporary model-based reliability coefficients in psychological research. Under the assumption of unidimensional measurement scales and independent measurement errors, two leading alternatives are composite reliability and maximal reliability. Of these two, the maximal reliability statistic, or equivalently Hancock's H, has received a significant amount of attention in recent years. The difference between composite reliability and maximal reliability is that the former is a reliability index for a scale mean (or unweighted sum), whereas the latter estimates the reliability of a scale score where indicators are weighted differently based on their estimated reliabilities. The formula for the maximal reliability weights has been derived using population quantities; however, their finite-sample behavior has not been extensively examined. In particular, there are two types of bias when the maximal reliability statistic is calculated from sample data: (a) the sample maximal reliability estimator is a positively biased estimator of population maximal reliability, and (b) the true reliability of composites formed with maximal reliability weights calculated from sample data is on average less than the population reliability. Both effects are more pronounced in small-sample scenarios (e.g., N < 100). We also demonstrate that the composite reliability estimator for equally weighted composites exhibits substantially less bias, which makes it a more appropriate choice for the small-sample case.
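The two statistics contrasted in the abstract have well-known closed forms for a unidimensional scale with standardized loadings and independent errors: composite reliability for the unit-weighted sum, and Hancock's H for the maximally reliable weighted composite (H is never smaller than composite reliability). A sketch with illustrative loadings, not values from the paper:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability of the unit-weighted sum:
    (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam**2).sum())

def coefficient_h(loadings):
    """Hancock's H (maximal reliability):
    H = s / (1 + s), where s = sum(lam^2 / (1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    s = (lam**2 / (1.0 - lam**2)).sum()
    return s / (1.0 + s)

lam = [0.9, 0.6, 0.5]
print(round(composite_reliability(lam), 3), round(coefficient_h(lam), 3))  # ≈ 0.717 vs. ≈ 0.838
```

The gap between the two widens as loadings become more unequal, since H exploits the stronger indicators through its weights; the abstract's point is that this population-level advantage does not carry over to small samples.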