Alfred L. Norman
University of Texas at Austin
Publication
Featured research published by Alfred L. Norman.
The Review of Economic Studies | 1987
Alfred L. Norman
The transactions cost for alternative exchange mechanisms for the household exchange problem can be characterized by the computational complexity of the exchange process. The computational complexity for any exchange mechanism is at least nH, where n is the number of goods and H is the number of households. Imposing the conditions of conservation, nonnegativity and quid pro quo results in a command exchange mechanism whose computational complexity is nH. Multiparty barter exchange, formalized using graph theory, has computational complexity equal to the minimum of (nH², n²H). Introducing an auxiliary good, money, reduces the computational complexity to nH. A problem with decentralized information is demonstrated.
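The complexity bounds quoted in the abstract can be compared numerically. A minimal sketch, assuming the formulas nH (command or money-mediated exchange) and min(nH², n²H) (multiparty barter) from the abstract; the goods/household counts are invented examples:

```python
# Illustrative operation counts for the exchange mechanisms above.
# nH and min(n*H**2, n**2*H) are the bounds stated in the abstract;
# the (n, H) pairs below are made-up example sizes.

def command_cost(n, H):
    """Command (or money-mediated) exchange: nH operations."""
    return n * H

def barter_cost(n, H):
    """Multiparty barter exchange: min(nH^2, n^2 H) operations."""
    return min(n * H**2, n**2 * H)

for n, H in [(10, 100), (100, 100), (1000, 50)]:
    print(n, H, command_cost(n, H), barter_cost(n, H))
```

The gap between the two counts grows with either n or H, which is the abstract's point about money reducing barter's cost back to nH.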
The Journal of Higher Education | 1974
Stephen A. Hoenack; Alfred L. Norman
How universities should adapt to an environment in which growth, both of enrollments and of public financial support, will be much slower than in the past is an important contemporary issue. In the years from World War II to the mid-sixties, intensified instructional and research demands contributed to a rapid expansion of almost all university programs. In the recent past, however, a perceived lessening of the rate of return to higher education, combined with a declining college-age population, has contributed to a diminished growth rate of both demand and support. These changing historical circumstances have made resource allocation a critical problem confronting university administrators, who in the 1970s face contracting budgets, and who are saddled with economic and political problem-solving mechanisms inherited from an era of nearly unlimited growth. A new resource allocation mechanism is needed that will give universities the flexibility, and provide the incentives, to adapt to changing educational demands in the coming decades. One currently proposed method of university resource allocation is a
Journal of Economic Dynamics and Control | 1994
Alfred L. Norman; David W. Shimer
Complexity theory provides formal procedures for analyzing problem difficulty. Frank H. Knight, in Risk, Uncertainty and Profit, assumed intelligence is finite and stressed the difficulty of solving problems involving uncertainty. In this paper, a risk decision is a stochastic optimization problem where the parameters and the functional forms required to determine the optimal decision are known. An uncertain decision is a stochastic optimization problem where at least one parameter or functional form must be estimated. Using complexity theory, a valid distinction can be made between risk and uncertainty which is consistent with Bayesian statistics. From the perspective of bounded rationality, Knight's concepts of consolidation and specialization can be reconciled with the Bayesians.
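The risk/uncertainty distinction drawn in the abstract can be sketched on a toy gamble, which is an invented example, not the paper's formal setup: under risk the success probability p is known, while under uncertainty p must first be estimated, here with a hypothetical Beta-Bernoulli posterior.

```python
# Risk: p is known, so the decision is one expected-value comparison.
# Uncertainty: p must be estimated before deciding -- an extra
# estimation stage, which is the source of the added complexity.

def risk_decision(p, payoff=2.0, cost=1.0):
    """Accept the gamble iff expected payoff exceeds cost (p known)."""
    return p * payoff > cost

def uncertain_decision(successes, trials, payoff=2.0, cost=1.0,
                       alpha=1.0, beta=1.0):
    """Same decision, with p replaced by its Beta posterior mean."""
    p_hat = (alpha + successes) / (alpha + beta + trials)
    return p_hat * payoff > cost

print(risk_decision(0.6))         # 0.6 * 2.0 > 1.0 -> True
print(uncertain_decision(3, 10))  # posterior mean 4/12 -> False
```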
Computing in Economics and Finance | 1994
Alfred L. Norman
Herbert Simon advocates that economists should study procedural rationality instead of substantive rationality. One approach for studying procedural rationality is to consider algorithmic representations of procedures, which can then be studied using the concepts of computability and complexity. For some time, game theorists have considered the issue of computability and have employed automata to study bounded rationality. Outside game theory very little research has been performed. Very simple examples of the traditional economic optimization models can require transfinite computations. The impact of procedural rationality on economics depends on the computational resources available to economic agents.
Journal of Economic Dynamics and Control | 1983
Alfred L. Norman; Leon S. Lasdon; Jun Kuan Hsin
Two generalized reduced gradient (GRG) codes for solving and optimizing large non-linear implicitly defined econometric models are compared using a version of the Federal Reserve Board quarterly model. The first code uses a Gauss-Seidel algorithm to solve the model and obtains the reduced gradient via a finite-difference approach suggested by Fair. The second code uses a modified Newton algorithm to solve the model and obtains the reduced gradient using Lagrange multipliers. The results for solving the model showed the Gauss-Seidel code to be faster. For optimization problems the results were problem-specific.
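A minimal sketch of the Gauss-Seidel solution step used by the first code: each equation is solved in turn using the latest available values. The two-equation Keynesian model below is an invented stand-in, not the Federal Reserve Board model.

```python
# Hypothetical simultaneous model (not from the paper):
#   C = 20 + 0.6 * Y      (consumption function)
#   Y = C + I             (income identity), exogenous I = 30

def gauss_seidel(I=30.0, tol=1e-9, max_iter=200):
    C, Y = 0.0, 0.0
    for _ in range(max_iter):
        C_new = 20.0 + 0.6 * Y   # each equation solved in turn,
        Y_new = C_new + I        # using the newest values available
        if abs(C_new - C) < tol and abs(Y_new - Y) < tol:
            return C_new, Y_new
        C, Y = C_new, Y_new
    return C, Y

C, Y = gauss_seidel()
print(round(C, 4), round(Y, 4))   # fixed point: C = 95, Y = 125
```

The iteration converges here because the consumption coefficient (0.6) makes the update a contraction; real quarterly models are far larger but the sweep structure is the same.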
Computing in Economics and Finance | 2004
Alfred L. Norman; A. Ahmed; J. Chou; A. Dalal; K. Fortson; M. Jindal; C. Kurz; H. Lee; K. Payne; R. Rando; Kevin Sheppard; E. Sublett; J. Sussman; I. White
A consumer entering a new bookstore can face more than 250,000 alternatives. The efficiency of compensatory and noncompensatory decision rules for finding a preferred item depends on the efficiency of their associated information operators. At best, item-by-item information operators lead to linear computational complexity; set information operators, on the other hand, can lead to constant complexity. We perform an experiment demonstrating that subjects are approximately rational in selecting between sublinear and linear rules. Many markets are organized by attributes that enable consumers to employ a set-selection-by-aspect rule using set information operations. In cyberspace, decision rules are encoded as decision aids.
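The contrast between item-by-item and set information operators can be sketched as follows; the bookstore inventory and attributes are invented for illustration.

```python
# Hypothetical inventory (not from the paper's experiment).
books = [
    {"title": "A", "genre": "mystery", "price": 12},
    {"title": "B", "genre": "sci-fi", "price": 9},
    {"title": "C", "genre": "mystery", "price": 7},
    {"title": "D", "genre": "cooking", "price": 25},
]

# Item-by-item operator: examine every item -- linear complexity.
linear_hits = [b for b in books if b["genre"] == "mystery" and b["price"] < 10]

# Set operator: a genre index (the market "organized by attributes")
# discards whole non-mystery sets in one operation, leaving only a
# small subset to scan -- the set-selection-by-aspect idea.
by_genre = {}
for b in books:
    by_genre.setdefault(b["genre"], []).append(b)
set_hits = [b for b in by_genre["mystery"] if b["price"] < 10]

print([b["title"] for b in linear_hits], [b["title"] for b in set_hits])
```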
Computing in Economics and Finance | 2001
Alfred L. Norman; J. Chou; M. Chowdhury; A. Dalal; K. Fortson; M. Jindal; K. Payne; M. Rajan
Utility maximization with a weakly separable utility function requires a consumer to create an optimal budget for each separable subgroup. We show that the computational complexity of optimal budgeting is the maximum of an exponential in the number of alternatives and a quadratic in the number of budget increments. From a budget survey of undergraduates we show that an undergraduate procedural consumer can obtain a budget estimate from the experience of previous students, monitor the flow of funds, and make adjustments at a minuscule fraction of the calculations needed for optimal budgeting.
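Why exhaustive optimal budgeting is expensive can be illustrated by brute-force enumeration. The increment and subgroup counts below are invented, and the count C(m+k-1, k-1) is the standard stars-and-bars formula, not one taken from the paper.

```python
from itertools import product
from math import comb

def enumerate_budgets(m, k):
    """All ways to split m budget increments across k subgroups."""
    return [a for a in product(range(m + 1), repeat=k) if sum(a) == m]

m, k = 10, 3
allocs = enumerate_budgets(m, k)
# Stars and bars: C(m + k - 1, k - 1) feasible allocations.
print(len(allocs), comb(m + k - 1, k - 1))   # both 66 for m=10, k=3
```

Even this tiny instance shows the blow-up: the count grows polynomially in m but combinatorially in k, which is why a procedural consumer who copies last year's budget and adjusts it does far less work.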
Journal of Economic Behavior and Organization | 2003
Alfred L. Norman; M. Ahmed; J. Chou; K. Fortson; C. Kurz; H. Lee; L. Linden; K. Meythaler; R. Rando; Kevin Sheppard; N. Tantzen; I. White; M. Ziegler
Binary comparison operators form the basis of consumer set theory. If humans could only perform binary comparisons, the most efficient procedure a human might employ to make a complete preference ordering of n items would be an n log₂ n algorithm. But if humans are capable of assigning each item an ordinal utility value, they are capable of implementing a more efficient linear algorithm. In this paper, we consider six incentive systems for ordering three different sets of objects: pens, notebooks, and Hot Wheels. All experimental evidence indicates that humans are capable of implementing a linear algorithm for small sets.
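The two ordering procedures contrasted in the abstract can be sketched as a comparison sort versus a one-pass ordinal placement; the items and utility values below are invented for illustration.

```python
# Invented items and ordinal utilities (small distinct integers).
items = ["pen", "notebook", "hot_wheels", "mug", "poster"]
utility = {"pen": 2, "notebook": 5, "hot_wheels": 4, "mug": 1, "poster": 3}

# Comparison-based ordering: ~n log2 n pairwise comparisons.
by_comparison = sorted(items, key=lambda a: utility[a], reverse=True)

# Ordinal assignment: one linear pass placing each item in a bucket
# indexed by its utility value (a counting-sort-style procedure).
buckets = [[] for _ in range(max(utility.values()) + 1)]
for item in items:
    buckets[utility[item]].append(item)
by_assignment = [i for b in reversed(buckets) for i in b]

print(by_comparison == by_assignment)   # True: identical orderings
```

The linear pass only works because the subject can name each item's utility directly; with comparisons alone, the n log₂ n bound is unavoidable.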
Journal of Economic Dynamics and Control | 1984
Alfred L. Norman
Using dynamic programming, MacRae obtained a linear feedback law for her open loop constrained variance (OLCV) strategy. Computing the control law requires solving a two-point boundary-value problem. The dynamic programming formulation does not admit gradient-related algorithms. This paper presents an alternative augmented Lagrangian formulation which can be solved as a deterministic constrained optimization problem. The new approach is superior because it admits gradient-related algorithms in addition to all the algorithms which could be employed using the original approach. Three examples demonstrate the superiority of the new approach.
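A minimal augmented-Lagrangian sketch on a generic equality-constrained toy problem, not MacRae's OLCV problem, showing the gradient-related structure the abstract emphasizes: minimize the augmented Lagrangian by plain gradient descent, then update the multiplier.

```python
# Toy problem (invented): minimize f(x, y) = x^2 + y^2
# subject to x + y = 1.  Augmented Lagrangian:
#   L = f + lam * c + (rho / 2) * c^2,  with c = x + y - 1.

def solve(rho=10.0, outer=20, inner=500, step=0.01):
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):          # gradient descent on L
            c = x + y - 1.0
            gx = 2 * x + lam + rho * c  # dL/dx
            gy = 2 * y + lam + rho * c  # dL/dy
            x -= step * gx
            y -= step * gy
        lam += rho * (x + y - 1.0)      # multiplier update
    return x, y

x, y = solve()
print(round(x, 3), round(y, 3))   # constrained optimum: 0.5 0.5
```

Because the inner problem is smooth and unconstrained, any gradient-related method can be substituted for the descent loop, which is the flexibility the augmented-Lagrangian reformulation buys over the two-point boundary-value approach.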
IEEE Transactions on Automatic Control | 1973
Alfred L. Norman; M.R. Norman
Most econometric models have many versions which differ from one another by 1) minor variations in the specification, or 2) choice of parameter estimator. This paper demonstrates how control theory can be employed to discriminate between alternative versions of an econometric model. For many policy problems involving a simple objective function and a single control variable, the investigator can hypothesize from the tenets of a given economic theory qualitative characteristics of the optimal economic behavior. By computing the optimal economic policy, the investigator can determine which versions are consistent with the hypothesis. This procedure was employed to determine which versions of a revised form of the Klein-Goldberger model are consistent with three hypotheses derived from economic theory. For the first two hypotheses, the results indicate that consistency depends on the choice of parameter estimates. For the third hypothesis, an inflation test at full employment, the results indicate that all versions are inconsistent with the hypothesis.