Yishay Spector
Hebrew University of Jerusalem
Publication
Featured research published by Yishay Spector.
Neurocomputing | 1996
Moshe Leshno; Yishay Spector
Abstract In this paper we evaluate the prediction capability of various neural network models. The models examined in this study differ on the following parameters: data span, learning technique and number of iterations. The neural net prediction capabilities are also compared to results obtained by classical discriminant analysis models. The specific case evaluated is bankruptcy prediction. The common assumption of all bankruptcy prediction models is that fundamental economic factors and the characteristics of a firm are reflected in its financial statements. Therefore, using analytic tools and data from the firm's financial reports, one can evaluate and predict its future financial status. Since the number of bankrupt firms is limited, we used examples (financial statements) from various periods preceding the bankruptcy event. Although the financial statements from the bankruptcy period convey more information, financial statements from distinct periods always improved the models. The prediction capability of the models is improved by using enhanced learning techniques. However, if the enhanced learning technique is too ‘strong’, the model becomes too specific to the training data set and thus loses its prediction capabilities.
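As a rough illustration of the comparison described in this abstract, the sketch below (synthetic data and small models chosen for brevity, not the paper's dataset, architectures or learning techniques) fits linear discriminant analysis and a compact neural network on hypothetical "financial ratio" features, and prints train/test accuracy for a lightly trained and a heavily trained network to probe the over-fitting effect the abstract mentions.

```python
# A minimal sketch: NN vs. discriminant analysis on synthetic two-group data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical financial ratios; "bankrupt" firms (y = 1) come from a shifted distribution.
X_healthy = rng.normal(loc=0.0, scale=1.0, size=(n, 5))
X_bankrupt = rng.normal(loc=0.8, scale=1.5, size=(n, 5))
X = np.vstack([X_healthy, X_bankrupt])
y = np.array([0] * n + [1] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("LDA test accuracy:", lda.score(X_te, y_te))

for iters in (50, 5000):  # few vs. many training iterations
    nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=iters, random_state=2).fit(X_tr, y_tr)
    print(f"NN ({iters} iterations) train acc {nn.score(X_tr, y_tr):.3f}, "
          f"test acc {nn.score(X_te, y_te):.3f}")
```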
International Journal of Production Research | 1992
Boaz Ronen; Yishay Spector
Abstract Modern management philosophies, such as just in time (JIT), the theory of constraints (TOC) and total quality management (TQM), place a strong emphasis on operations management. These approaches create techniques and procedures for effective flow of materials, but do not provide sufficient tools to consider the economic outcomes of the various alternatives. This paper applies the cost/utilization model to the analysis of production lines and materials flow. The model combines the Pareto approach with the TOC approach. The Pareto approach concentrates on the important and costly elements of the organization. TOC focuses on the organization's constraints. It is presented in a simple graphic display that allows managers to better locate constraint resources, detect faults in the planning of the production line, examine improper fluctuations in the process and pinpoint their sources. The model is a top-management decision-support tool that may be applied in areas such as buffer policy, assessment ...
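The following sketch gives the flavor of the cost/utilization display described above, using made-up resource figures rather than the paper's model or data: each resource is characterized by its share of total cost (the Pareto side) and by its utilization (the TOC side), and the most utilized resource is flagged as the likely constraint.

```python
# A minimal sketch of a cost/utilization listing (hypothetical numbers only).
resources = {                      # resource: (annual cost, utilization in [0, 1])
    "lathe":      (120_000, 0.95),
    "assembly":   (300_000, 0.60),
    "packaging":  ( 80_000, 0.40),
    "inspection": ( 50_000, 0.85),
}
total_cost = sum(cost for cost, _ in resources.values())
constraint = max(resources, key=lambda r: resources[r][1])   # most utilized resource

# List resources from costliest to cheapest, marking the constraint.
for name, (cost, util) in sorted(resources.items(), key=lambda kv: -kv[1][0]):
    flag = "  <-- constraint" if name == constraint else ""
    print(f"{name:11s} cost share {cost / total_cost:5.1%}  utilization {util:4.0%}{flag}")
```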
The Journal of Portfolio Management | 1996
Haim Levy; Yishay Spector
YISHAY SPECTOR is a professor at the School of Business at Hebrew University in Jerusalem, Israel. It is generally accepted that the investment horizon plays a crucial role in determining the optimum composition of an investment portfolio. This assumption is valid for all utility functions (myopic and non-myopic) that disallow periodical portfolio revisions and, alternatively, for non-myopic utility functions (e.g., -e^{-aW}) that do allow portfolio revisions at the end of each period. The investment horizon does not play a role in determining portfolio composition when a myopic utility function allowing periodical revisions prevails (see Samuelson [1990]) or when investors base their investment decisions on the mean-variance rule in combination with portfolio revisions (see Levy and Samuelson [1992]). It is commonly believed that given a long-term investment horizon investors should concentrate their investment in risky stocks (see Bernstein [1976]). In so doing, they will enjoy high mean returns with relatively low risk. According to this view (henceforth, the "practitioners' view"), "time diversification" substitutes for "cross-asset diversification." Although the idea is intuitively appealing, research has produced evidence contradicting the claim: 1) when investment decisions are based on the mean-variance (MV) rule, as the holding period increases (with no revisions), investors should concentrate on senior securities, which are characterized by relatively low mean returns (Levy [1989]; Levy and Gunthorpe [1992]); 2) in the case of a myopic utility function of the type U(W) = W^(1-a)/(1-a) (with a > 0), the portfolio ...
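As a small numerical companion to the "time diversification" question (illustrative parameters only, not the paper's analysis), the sketch below simulates terminal wealth over different horizons with i.i.d. yearly returns and no revisions: both the mean and the dispersion of a stock position keep growing with the horizon rather than the risk "washing out".

```python
# A minimal simulation: n-period terminal wealth of a stock vs. a bond, no revisions.
import numpy as np

rng = np.random.default_rng(0)
years = [1, 10, 30]
paths = 100_000
# Hypothetical i.i.d. yearly returns: stock ~ N(8%, 20%), bond ~ N(3%, 3%).
for label, mu, sigma in [("stock", 0.08, 0.20), ("bond", 0.03, 0.03)]:
    for n in years:
        gross = 1.0 + rng.normal(mu, sigma, size=(paths, n))
        terminal = gross.prod(axis=1)            # terminal wealth per $1 invested
        print(f"{label:5s} horizon {n:2d}y: mean {terminal.mean():6.2f}, "
              f"std {terminal.std():6.2f}")
```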
European Journal of Operational Research | 1996
Yishay Spector; Moshe Leshno; Moshe Ben Horin
Abstract Existing stochastic dominance rules apply to variables such as income, wealth and rates of return, all of which are measured on cardinal scales. This study develops and applies stochastic dominance rules for ordinal data. It is shown that the new rules are consistent with the traditional von Neumann-Morgenstern expected utility approach, and that they are applicable and relevant in a wide variety of managerial decision making situations, where existing stochastic dominance rules fail to apply. We apply ordinal SD rules to the transformation of random variables.
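The sketch below shows the flavor of a dominance check over ordered categories (hypothetical data; the paper's ordinal SD rules are more refined): two distributions over an ordinal scale are compared by their cumulative probabilities, category by category.

```python
# A minimal first-order dominance check on ordinal categories.
import numpy as np

categories = ["poor", "fair", "good", "excellent"]   # ordinal scale, low to high
p_a = np.array([0.10, 0.20, 0.40, 0.30])             # option A
p_b = np.array([0.25, 0.30, 0.30, 0.15])             # option B

cum_a, cum_b = np.cumsum(p_a), np.cumsum(p_b)
# A dominates B if A never accumulates more mass than B at any category,
# i.e., A shifts probability toward the higher-ranked categories.
a_dominates_b = bool(np.all(cum_a <= cum_b + 1e-12))
print("cumulative A:", cum_a)
print("cumulative B:", cum_b)
print("A first-order dominates B:", a_dominates_b)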
Operations Research Letters | 1997
Yishay Spector; Dror Zuckerman
In this paper we consider a stochastic R&D decision model for a single firm operating in a competitive environment. The study focuses on the firm's optimal policy which maximizes the expected discounted net return from the project. The firm's policy is composed of two ingredients: a stopping time which determines when the developed technology should be introduced and protected by a patent, and an investment strategy which specifies the expenditure rate throughout the R&D program. The main findings of the study are: (a) Under a constant expenditure rate strategy, the optimal stopping time of the project is a control limit policy of the following form: stop whenever the project's state exceeds a fixed critical value, or when a similar technology is introduced and protected by one of the firm's rivals, whichever occurs first. (b) For an R&D race model in which the winner takes all and the loser's return is zero, we show that the firm's optimal expenditure rate throughout the R&D program increases monotonically as a function of the project's state. In order to gain better insight into optimal R&D programs in competitive markets, we examine the effect of key economic parameters on the firm's optimal policy.
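The sketch below simulates the control-limit stopping rule described in finding (a) under simplified, hypothetical dynamics (the paper's model is more general): the project state follows a drifted random walk, a rival's patent arrives after an exponentially distributed time, and development stops when the state first exceeds the critical value or the rival arrives, whichever comes first.

```python
# A minimal simulation of a control-limit stopping policy in an R&D race.
import numpy as np

rng = np.random.default_rng(0)

def run_project(critical=5.0, drift=0.3, vol=1.0, rival_rate=0.05,
                patent_value=100.0, cost_rate=1.0, r=0.05, max_t=200):
    state = 0.0
    rival_t = rng.exponential(1.0 / rival_rate)   # time at which a rival patents first
    cost = 0.0
    for t in range(1, max_t + 1):
        if t >= rival_t:                          # rival won the race: only sunk costs remain
            return -cost
        cost += cost_rate * np.exp(-r * t)        # discounted R&D expenditure this period
        state += drift + vol * rng.normal()       # progress of the developed technology
        if state >= critical:                     # control limit reached: patent now
            return patent_value * np.exp(-r * t) - cost
    return -cost                                  # never reached the control limit

returns = [run_project() for _ in range(20_000)]
print("estimated expected discounted net return:", np.mean(returns))
```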
European Journal of Operational Research | 1996
Ran Giladi; Yishay Spector; Adi Raveh
Abstract Computer performance is an important issue for engineering and economic aspects of computer usage, planning, design and research. The Co-plot methodology graphically relates attributes, observations and interrelations between attributes, thus enabling the simultaneous study of all the observations and variables of a given data set. An analysis of performance attributes of computers from the 1980s (1981–1991) was carried out according to the Co-plot methodology. The analysis shows that during this decade, computer performance became more dependent on IO rate, cache size and multiprocessing, and less dependent on memory size and IO addressing capabilities. At the same time, the high correlation between CPU speed and performance was maintained.
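The following sketch is not the Co-plot method itself; it is only a hypothetical illustration of the kind of statement the analysis supports, correlating each machine attribute with a performance score separately for an early and a late sub-period (synthetic data, made-up period split).

```python
# A minimal attribute-vs-performance correlation check on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 60
for period, io_weight in [("1981-1985", 0.2), ("1986-1991", 0.8)]:
    cpu_speed = rng.normal(size=n)
    io_rate = rng.normal(size=n)
    # In the synthetic "later" period, IO rate contributes more to performance.
    perf = 1.0 * cpu_speed + io_weight * io_rate + 0.3 * rng.normal(size=n)
    print(period,
          "corr(perf, CPU speed) = %.2f" % np.corrcoef(perf, cpu_speed)[0, 1],
          "corr(perf, IO rate) = %.2f" % np.corrcoef(perf, io_rate)[0, 1])
```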
Mathematical Social Sciences | 1992
Moshe Leshno; Yishay Spector
Abstract This paper presents a short and elementary proof of Blackwell's theorem. This theorem states the statistical conditions under which one information structure (or experiment) is more informative than another information structure. By more informative we mean that one information structure does not have less economic value to any decision-maker than another information structure, regardless of the payoff function, the decision-maker's utility function, and the a priori probability distribution.
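A minimal numerical sketch of the ordering in Blackwell's theorem (hypothetical prior, payoffs and matrices): information structure P is more informative than Q when Q can be obtained from P by garbling, Q = P M with M a Markov matrix, and the expected payoff of a decision maker who best-responds to signals under Q then cannot exceed the one under P.

```python
# A minimal check: value of information under P vs. its garbling Q = P @ M.
import numpy as np

prior = np.array([0.6, 0.4])                    # P(state)
payoff = np.array([[1.0, -0.5],                 # payoff[action, state]
                   [0.0,  0.8]])
P = np.array([[0.9, 0.1],                       # P(signal | state), rows sum to 1
              [0.2, 0.8]])
M = np.array([[0.7, 0.3],                       # garbling (Markov) matrix
              [0.4, 0.6]])
Q = P @ M                                       # the less informative structure

def expected_value(info):
    """Expected payoff when the decision maker best-responds to each signal."""
    joint = prior[:, None] * info               # P(state, signal)
    # For each signal, pick the action maximizing expected payoff under the posterior.
    return sum(max(payoff[a, :] @ joint[:, s] for a in range(payoff.shape[0]))
               for s in range(info.shape[1]))

print("value with P:", expected_value(P), " value with Q:", expected_value(Q))
```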
Journal of Mathematical Economics | 1995
Yoram Kroll; Moshe Leshno; Haim Levy; Yishay Spector
Abstract This paper defines conditions for ‘Increasing Risk’ when the utility functions of risk averse investors are characterized by decreasing absolute risk aversion (DARA). Rothschild and Stiglitz (Journal of Economic Theory, 1970, 2, 225–243, and 1971, 3, 66–84) define cases when a random variable Y is ‘more risky’ (or ‘more variable’) than another variable X for the utility functions of risk averse investors. They conclude that Y is riskier than X if G, the cumulative distribution of Y, can be formed from F, the cumulative distribution of X, by adding a series of mean preserving spread (MPS) steps to F. This paper suggests considering a sequence of steps which are denoted by ‘mean preserving spread and antispread’ (MPSA). We define the condition under which a random variable (r.v.) Y is ‘more risky’ (or ‘more variable’) than another variable X for DARA utility functions. We prove that for DARA utility functions, Y is riskier than X if and only if G, the cumulative distribution function of Y, can be formed from F, the cumulative distribution function of X, by adding a series of MPSA steps to F, under the restrictions stated in the paper. The economic intuition and impact of MPS and MPSA steps on the optimum diversification strategy are demonstrated.
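The sketch below works through a single mean preserving spread (MPS) step on a discrete distribution (hypothetical numbers): mass is moved symmetrically from an interior outcome to its neighbours so the mean is unchanged, the variance rises, and a concave (here DARA) utility values the spread distribution no more than the original.

```python
# A minimal MPS step on a discrete distribution, checked against a DARA utility.
import numpy as np

outcomes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
p_x = np.array([0.1, 0.2, 0.4, 0.2, 0.1])        # distribution of X (mean 3)

# MPS step: move 0.05 of the mass at 3 to 2 and 0.05 to 4 (symmetric, mean preserved).
p_y = p_x + np.array([0.0, 0.05, -0.10, 0.05, 0.0])

u = np.log                                        # log utility has decreasing absolute risk aversion
for name, p in [("X", p_x), ("Y (after MPS)", p_y)]:
    mean = outcomes @ p
    var = ((outcomes - mean) ** 2) @ p
    print(f"{name:14s} mean {mean:.2f}  variance {var:.3f}  E[u] {u(outcomes) @ p:.4f}")
```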
European Journal of Operational Research | 1995
Boaz Ronen; Yishay Spector
Abstract This paper integrates elements of Information Economics, Quality Control, Portfolio Selection and Decision Theory into an approach that facilitates the selection of an attribute sampling plan for a lot subject to inspection. The optimal selection is based on two criteria: the expected payoff and its standard deviation. Quality sampling plans are introduced as information structures and ranked by the two criteria. The paper shows that different organizations should prefer different plans according to their strategies and preferences. The paper provides methods to rank order different sampling plans by an economic criterion as opposed to the commonly accepted purely statistical criterion.
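The sketch below computes the two ranking criteria named above for a few single attribute sampling plans, under hypothetical payoffs and a hypothetical prior on lot quality (not the paper's model): a plan (n, c) accepts the lot if at most c defectives appear in a sample of n, and each plan is summarized by the expected payoff and its standard deviation.

```python
# A minimal sketch: expected payoff and std of attribute sampling plans (n, c).
import numpy as np
from scipy.stats import binom

prior = {0.01: 0.7, 0.10: 0.3}                        # P(lot fraction defective)
payoff = {(True, 0.01): 10.0, (True, 0.10): -30.0,    # payoff[(accepted, quality)]
          (False, 0.01): -5.0, (False, 0.10): 0.0}

def plan_stats(n, c):
    mean = second = 0.0
    for q, pq in prior.items():
        p_accept = binom.cdf(c, n, q)                 # probability the plan accepts a lot of quality q
        for accepted, pa in [(True, p_accept), (False, 1.0 - p_accept)]:
            v = payoff[(accepted, q)]
            mean += pq * pa * v
            second += pq * pa * v * v
    return mean, np.sqrt(second - mean ** 2)

for n, c in [(20, 0), (20, 1), (50, 2)]:
    m, s = plan_stats(n, c)
    print(f"plan (n={n}, c={c}): expected payoff {m:6.2f}, std {s:6.2f}")
```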
Naval Research Logistics | 1997
Moshe Leshno; Yishay Spector
Classification among groups is a crucial problem in managerial decision making. Classification techniques are used in identifying stressed firms, classifying consumer types, rating firms’ bonds, etc. Neural networks are recognized as important and emerging methodologies in the area of classification. In this paper, we study the effect of training sample size and the neural network topology on the classification capability of neural networks. We also compare neural network capabilities with those of commonly used statistical methodologies. Experiments were designed and carried out on two-group classification problems to find answers to these questions.
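As a rough illustration of the training-sample-size question (synthetic two-group data and arbitrary model settings, not the paper's experiments), the sketch below fits the same small network and a logistic regression baseline on increasingly large training sets and scores them on a fixed test set.

```python
# A minimal sketch: effect of training sample size on two-group classification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n_per_group):
    x0 = rng.normal(0.0, 1.0, size=(n_per_group, 4))
    x1 = rng.normal(0.7, 1.2, size=(n_per_group, 4))
    return np.vstack([x0, x1]), np.array([0] * n_per_group + [1] * n_per_group)

X_test, y_test = make_data(2000)
for n_train in (25, 100, 500):
    X_tr, y_tr = make_data(n_train)
    nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"train size {2 * n_train:4d}: NN test acc {nn.score(X_test, y_test):.3f}, "
          f"logistic test acc {lr.score(X_test, y_test):.3f}")
```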