
Publication


Featured research published by Ehsan S. Soofi.


Journal of the American Statistical Association | 1994

Capturing the Intangible Concept of Information

Ehsan S. Soofi

The purpose of this article is to discuss the intricacies of quantifying information in some statistical problems. The aim is to develop a general appreciation for the meanings of information functions rather than their mathematical use. This theme integrates fundamental aspects of the contributions of Kullback, Lindley, and Jaynes and bridges chaos to probability modeling. A synopsis of information-theoretic statistics is presented in the form of a pyramid with Shannon at the vertex and a triangular base that signifies three distinct variants of quantifying information: discrimination information (Kullback), mutual information (Lindley), and maximum entropy information (Jaynes). Examples of capturing information by the maximum entropy (ME) method are discussed. It is shown that the ME approach produces a general class of logit models capable of capturing various forms of sample and nonsample information. Diagnostics for quantifying information captured by the ME logit models are given, and decom...
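The maximum entropy (ME) method mentioned in the abstract can be illustrated with a minimal sketch: among all distributions on a finite support with a prescribed mean, entropy is maximized by a Gibbs/logit-type form p_i ∝ exp(λ·x_i). The support {1,…,6} and target mean below are illustrative choices, not from the paper.

```python
# Maximum entropy distribution on {1,...,6} subject to a mean constraint.
# The solution has the exponential (logit-type) form p_i ∝ exp(lam * x_i);
# lam is found by bisection, since the implied mean is increasing in lam.
import math

xs = [1, 2, 3, 4, 5, 6]
target_mean = 4.5  # illustrative constraint

def mean_at(lam):
    """Mean of the Gibbs distribution p_i ∝ exp(lam * x_i)."""
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return sum(x * wi for x, wi in zip(xs, w)) / z

# bisection on lam: mean_at is monotone increasing
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_at(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(lam * x) for x in xs]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])  # ME probabilities with mean 4.5
```

With a uniform target mean of 3.5 the solution reduces to the uniform distribution (λ = 0); skewing the constraint toward 4.5 tilts the probabilities exponentially toward larger values, which is the "logit" shape the abstract refers to.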


Decision Sciences | 2003

Multiple Conceptualizations of Small Business Web Use and Benefit

Kurt Pflughoeft; K. Ramamurthy; Ehsan S. Soofi; Masoud Yasai-Ardekani; Fatemeh Zahedi

Small businesses play an important role in the U.S. economy and there is anecdotal evidence that use of the Web is beneficial to such businesses. There is, however, little systematic analysis of the conditions that lead to successful use of, and thereby benefits from, the Web for small businesses. Based on the innovation adoption, organizations, and information systems (IS) implementation literature, we identify a set of variables that are related to adoption, use, and benefits of information technology (IT), with particular emphasis on small businesses. These variables are reflective of an organization's contextual characteristics, its IT infrastructure, Web use, and Web benefits. Since the extant research does not suggest a single theoretical model for Web use and benefits in the context of small businesses, we adopt a modeling approach and explore the relationships between "context-IT-use-benefit" (CIUB) through three models—partial-mediator, reduced partial-mediator, and mediator. These models posit that the extent of Web use by small businesses and the associated benefits are driven by organizations' contextual characteristics and their IT infrastructure. They differ in the endogeneity/exogeneity of the extent of IT sophistication, and in the direct/mediated effects of organizational context. We examine whether the relationships between variables identified in the literature hold within the context of these models using two samples of small businesses with national coverage, including various sizes, and representing several industry sectors. The results show that the evidence for patterns of relationships is similar across the two independent samples for two of these models. We highlight the relationships within the reduced partial-mediator and mediator models for which conclusive evidence is given by both samples. Implications for small business managers and providers of Web-based technologies are discussed.


Journal of Econometrics | 1999

Ordering univariate distributions by entropy and variance

Nader Ebrahimi; Esfandiar Maasoumi; Ehsan S. Soofi

This paper examines the role of variance and entropy in ordering distributions and random prospects. There is no universal relation between entropy and variance orderings of distributions. But we place their relationship in the context of a stronger ordering relation known as dispersion ordering. Further, some conditions are identified under which variance and entropy order similarly when continuous variables are transformed. We also analyze parametric changes which do not disturb the agreement between these rankings. The results are conveniently tabulated in terms of distribution parameters.
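The claim that there is no universal relation between entropy and variance orderings can be checked in closed form. The pair below, Uniform(0, 3) versus Exponential(rate 1), is chosen purely for illustration; the formulas are the standard ones for differential entropy and variance.

```python
# Closed-form check that entropy and variance need not order two
# distributions the same way: Uniform(0, 3) vs Exponential(rate 1).
import math

# Uniform(0, a): differential entropy ln(a), variance a^2 / 12
a = 3.0
h_uniform = math.log(a)
v_uniform = a * a / 12.0

# Exponential(rate lam): differential entropy 1 - ln(lam), variance 1 / lam^2
lam = 1.0
h_expon = 1.0 - math.log(lam)
v_expon = 1.0 / lam ** 2

# The uniform has HIGHER entropy (ln 3 ≈ 1.099 > 1) but LOWER variance
# (0.75 < 1), so the two orderings disagree for this pair.
print(h_uniform > h_expon, v_uniform < v_expon)  # True True
```

Pairs like this are exactly what motivates the paper's search for conditions (such as dispersion ordering) under which the two rankings do agree.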


Journal of the American Statistical Association | 2000

Principal Information Theoretic Approaches

Ehsan S. Soofi



Statistics & Probability Letters | 1994

Two measures of sample entropy

Nader Ebrahimi; Kurt Pflughoeft; Ehsan S. Soofi

In many statistical studies the entropy of a distribution function is of prime interest. This paper proposes two estimators of the entropy. Both estimators are obtained by modifying the estimator proposed by Vasicek (1976). Consistency of both estimators is proved, and comparisons are made with Vasicek's estimator and its generalization proposed by Dudewicz and Van der Meulen (1987). The results indicate that the proposed estimators have less bias and less mean squared error than Vasicek's estimator and its generalization.


Journal of the American Statistical Association | 1992

A Generalizable Formulation of Conditional Logit with Diagnostics

Ehsan S. Soofi

The conditional logit model is a multinomial logit model that permits the inclusion of choice-specific attributes. This article shows that the conditional logit model maximizes entropy given a set of attribute-value preserving constraints. A correspondence between the maximum entropy (ME) and maximum likelihood (ML) estimates for logit probabilities is established. Some easily computable and useful diagnostics for logit analysis are provided, and it is shown that an evaluation of the relative importance of attributes can be made using the ME formulation. The ME formulation is also generalized to accommodate initial choice probabilities into the logit model. An example is given. Key words: Choice models; Entropy; Kullback-Leibler discrimination information function; Relative importance.


Journal of Econometrics | 2002

Information indices: unification and applications

Ehsan S. Soofi; J.J. Retzer

The unified framework of information-theoretic statistics was established by Kullback (1959). Since then numerous information indices have been developed in various contexts. This paper presents many of these indices in a unified context. The unification thread is the discrimination information function: information indices are all logarithmic measures of discrepancy between two probability distributions. First, we present a summary of informational aspects of the basic information functions, a unification of various information-theoretic modeling approaches, and some explication in terms of traditional measures. We then tabulate a unified representation of assortments of information indices developed in the literature for maximum entropy modeling, covariate information, and influence diagnostics. The subjects of these indices include parametric model fitting, nonparametric entropy estimation, categorical data analysis, linear and exponential family regression, and time series. The coverage, however, is not exhaustive. The tabulation includes sampling-theory and Bayesian indices, but the focus is on interpretation as descriptive measures; inferential properties are noted tangentially. Finally, applications of some information indices are illustrated through modeling duration data for Sprint's churned customers and choice of long-distance provider.


IEEE Transactions on Information Theory | 2004

Information properties of order statistics and spacings

Nader Ebrahimi; Ehsan S. Soofi; Hassan Zahedi

We explore properties of the entropy, Kullback-Leibler information, and mutual information for order statistics. The probability integral transformation plays a pivotal role in developing our results. We provide bounds for the entropy of order statistics and some results that relate entropy ordering of order statistics to other well-known orderings of random variables. We show that the discrimination information between order statistics and the data distribution, the discrimination information among the order statistics, and the mutual information between order statistics are all distribution-free and are computable using the distributions of the order statistics of samples from the uniform distribution. We also discuss information properties of spacings for uniform and exponential samples and provide a large-sample distribution-free result on the entropy of spacings. The results show interesting symmetries of information orderings among order statistics.


Decision Sciences | 2000

A Framework for Measuring the Importance of Variables with Applications to Management Research and Decision Models

Ehsan S. Soofi; Joseph J. Retzer; Masoud Yasai-Ardekani

In many disciplines, including various management science fields, researchers have shown interest in assigning relative importance weights to a set of explanatory variables in multivariable statistical analysis. This paper provides a synthesis of the relative importance measures scattered in the statistics, psychometrics, and management science literature. These measures are computed by averaging the partial contributions of each variable over all orderings of the explanatory variables. We define an Analysis of Importance (ANIMP) framework that reflects two desirable properties for relative importance measures discussed in the literature: additive separability and order independence. We also provide a formal justification and generalization of the "averaging over all orderings" procedure based on the maximum entropy principle. We then examine the question of relative importance in management research within the framework of the contingency theory of organizational design and provide an example of the use of relative importance measures in an actual management decision situation. Contrasts are drawn between the consequences of using statistical significance, which is an inappropriate indicator of relative importance, and the results of the appropriate ANIMP measures.


Journal of the American Statistical Association | 1995

Information Distinguishability with Application to Analysis of Failure Data

Ehsan S. Soofi; Nader Ebrahimi; Mohamed Habibullah
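The "averaging partial contributions over all orderings" procedure can be sketched with incremental R² as the contribution measure (the LMG/Shapley-style decomposition). The dataset, variable names, and the choice of R² below are illustrative assumptions, not taken from the paper.

```python
# Relative importance by averaging each predictor's incremental R^2
# over all orderings of the predictors (a Shapley-style decomposition).
from itertools import permutations

# tiny synthetic dataset: y with two correlated predictors x1, x2
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.0, 1.5, 3.5, 3.0, 5.5, 5.0]
y  = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

def r_squared(predictors):
    """R^2 of OLS of y on the given predictor columns (with intercept)."""
    n = len(y)
    cols = [[1.0] * n] + [list(p) for p in predictors]
    k = len(cols)
    # normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination
    a = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         + [sum(cols[i][t] * y[t] for t in range(n))] for i in range(k)]
    for i in range(k):
        piv = a[i][i]
        for r in range(k):
            if r != i and a[r][i]:
                f = a[r][i] / piv
                a[r] = [a[r][c] - f * a[i][c] for c in range(k + 1)]
    b = [a[i][k] / a[i][i] for i in range(k)]
    yhat = [sum(b[j] * cols[j][t] for j in range(k)) for t in range(n)]
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((v - w) ** 2 for v, w in zip(y, yhat))
    return 1.0 - ss_res / ss_tot

predictors = {"x1": x1, "x2": x2}
names = list(predictors)
importance = dict.fromkeys(names, 0.0)
orders = list(permutations(names))
for order in orders:
    used, prev = [], 0.0
    for name in order:
        used.append(predictors[name])
        cur = r_squared(used)
        importance[name] += (cur - prev) / len(orders)
        prev = cur

full = r_squared([x1, x2])
print(importance, full)
```

The averaged contributions are nonnegative and sum exactly to the full-model R², which is the additive-separability property the ANIMP framework asks of an importance measure; order independence holds by construction, since every ordering is averaged over.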

Collaboration


Dive into Ehsan S. Soofi's collaborations.

Top co-authors:

Nader Ebrahimi (Northern Illinois University)
Refik Soyer (George Washington University)
Dennis H. Gensch (University of Wisconsin–Milwaukee)
Paul C. Nystrom (University of Wisconsin–Milwaukee)
Joseph J. Retzer (University of Wisconsin–Milwaukee)
Kurt Pflughoeft (University of Texas at El Paso)
D. V. Gokhale (University of California)