
Publication


Featured research published by Sven Sandow.


International Journal of Theoretical and Applied Finance | 2003

Model Performance Measures For Expected Utility Maximizing Investors

Craig A. Friedman; Sven Sandow

We examine model performance measures in four contexts: Discrete Probability, Continuous Probability, Conditional Discrete Probability and Conditional Probability Density Models. We consider the model performance question from the point of view of an investor who evaluates models based on the performance of the (optimal) strategies that the models suggest. Under this new paradigm, the investor selects the model with the highest estimated expected utility. We interpret our performance measures in information theoretic terms and provide new generalizations of entropy and Kullback-Leibler relative entropy. We show that the relative performance measure is independent of the market prices if and only if the investor's utility function is a member of a logarithmic family that admits a wide range of possible risk aversions. In this case, we show that the relative performance measure is equivalent to the (easily understood) differential expected growth of wealth or the (familiar) likelihood ratio. We state conditions under which relative performance measures for general utilities are well approximated by logarithmic-family-based relative performance measures. Some popular probability model performance measures (including ROC methods) are not consistent with our framework. We demonstrate that rank-based performance measures can suggest model selections that are disastrous under various popular utilities.
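For the logarithmic family, the relative performance measure reduces to an average log-likelihood ratio on out-of-sample data. A minimal sketch in Python; the two discrete models and the outcome sample here are illustrative, not from the paper:

```python
import math

def differential_log_growth(p, q, outcomes):
    """Average log-likelihood ratio of model p over model q on
    out-of-sample outcomes: the differential expected growth of
    wealth for a log-utility investor."""
    return sum(math.log(p[y] / q[y]) for y in outcomes) / len(outcomes)

# Two hypothetical discrete probability models over outcomes {0, 1}
p = {0: 0.3, 1: 0.7}   # candidate model
q = {0: 0.5, 1: 0.5}   # benchmark model
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]  # illustrative out-of-sample data

score = differential_log_growth(p, q, outcomes)
# score > 0 indicates the log-utility investor would prefer model p
```

A positive score means the candidate model's suggested strategy grew wealth faster than the benchmark's on the evaluation sample.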


Entropy | 2007

A Utility-Based Approach to Some Information Measures

Craig A. Friedman; Jinggang Huang; Sven Sandow

We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem – probability estimation – in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.
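The generalized quantities discussed above reduce, for the logarithmic utility family, to the familiar Kullback-Leibler relative entropy. A minimal sketch of the standard discrete KL computation; the distributions are illustrative:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler relative entropy D(p || q) for discrete
    distributions given as aligned probability lists; nonnegative,
    and zero if and only if p == q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.2, 0.5, 0.3]        # hypothetical model
q = [1/3, 1/3, 1/3]        # uniform benchmark
d = kl_divergence(p, q)
# Against a uniform benchmark, D(p || q) = log(n) - H(p),
# the entropy deficit of p relative to the uniform distribution.
```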


Journal of Risk | 2007

How Much is a Model Upgrade Worth?

Sven Sandow; Jinggang Huang; Craig A. Friedman

In order to shed light on cost/benefit tradeoffs faced by financial model builders and purchasers, we construct a monetary measure of the differential performance between probabilistic models. To this end, we adopt the point of view of an investor who maximizes his expected utility under the model he believes. We explore general properties of the monetary value of a model upgrade and investigate a variety of important special cases. We develop explicit formulas that can be applied in realistic settings by practitioners, for example, to default probability models, and provide a case study in this context.


Journal of Credit Risk | 2006

Financially Motivated Model Performance Measures

Craig A. Friedman; Sven Sandow

Since probabilistic models are now widely used for financial decision-making, model performance measurement is critically important. We discuss model performance measures that explicitly reflect the financial consequences of decisions based on the models and show how these model performance measures can uncover deficiencies in credit risk models that remain undetected by many popular measures.


Archive | 2005

Some Decision Theoretic Generalizations of Information Measures

Craig A. Friedman; Jinggang Huang; Sven Sandow

We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem - probability estimation - in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.


Archive | 2004

A Financial Approach to Machine Learning with Applications to Credit Risk

Craig A. Friedman; Jinggang Huang; Sven Sandow

We review a particular financially motivated method for evaluating probabilistic models and learning such models from data. We adopt the viewpoint of an expected-utility-maximizing investor who would use the model to make decisions (bets) that result in well-defined payoffs. In order to evaluate a particular model, we assume that there is an investor who believes the model. This investor allocates his assets so as to maximize his expected utility according to his beliefs, i.e., the investor allocates so as to maximize the expectation of his utility under the model probability measure. We then measure the success of the investor's investment strategy in terms of the average utility the strategy provides on an out-of-sample data set. For an investor with a utility function in a certain logarithmic family, the resulting performance measure is the likelihood ratio. In the learning approach that we review here, we consider a one-parameter family of Pareto optimal models, which we define in terms of consistency with the training data and consistency with a prior (benchmark) model. We measure the former by means of the large-sample distribution of a vector of sample-averaged features, and the latter by means of a generalized relative entropy. We express each Pareto optimal model as the solution of a strictly convex optimization problem and its strictly concave (and tractable) dual, which is a regularized maximization of expected utility over a well-defined family of functions. Each Pareto optimal model is robust in the sense that it maximizes the worst-case outperformance relative to the benchmark model. We select the Pareto optimal model with maximum (out-of-sample) expected utility. We review the application of this learning method to two important credit risk problems: estimating conditional default probabilities, and estimating conditional probabilities for recovery rates of defaulted debt.
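In the classical (log-utility) case, minimizing relative entropy to a benchmark subject to a feature constraint yields an exponential-family model. A minimal one-feature sketch, with illustrative benchmark and feature values; the bisection solve and all names are assumptions, not the paper's implementation:

```python
import math

def mre_fit(q, f, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Minimum-relative-entropy model p(y) proportional to
    q(y) * exp(lam * f(y)), with lam chosen so that the expected
    feature under p matches `target`. The expected feature is
    monotone increasing in lam, so bisection suffices."""
    def expected_feature(lam):
        w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_feature(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
    z = sum(w)
    return [wi / z for wi in w]

# Illustrative benchmark (prior) model over three outcomes, a feature f,
# and a sample-averaged feature value to match:
q = [1/3, 1/3, 1/3]
f = [0.0, 1.0, 2.0]
p = mre_fit(q, f, target=1.3)
# p sums to 1 and satisfies the feature constraint sum(p_i * f_i) = 1.3
```

The paper's Pareto optimal family generalizes this construction to general utilities and to a relaxed (regularized) consistency constraint.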


The Journal of Risk Finance | 2013

Data‐efficient model building for financial applications

Sven Sandow; Xuelong Zhou

Purpose – Investors often rely on probabilistic models that were learned from small historical labeled datasets. The purpose of this article is to propose a new method for data‐efficient model learning.

Design/methodology/approach – The proposed method, which is an extension of the standard minimum relative entropy (MRE) approach and has a clear financial interpretation, belongs to the class of semi‐supervised algorithms, which can learn from data that are only partially labeled with values of the variable of interest.

Findings – This study tests the method on an artificial dataset and uses it to learn a model for recovery of defaulted debt. In both cases, the resulting models perform better than the standard MRE model, when the number of labeled data is small.

Originality/value – The method can be applied to financial problems where labeled data are sparse but unlabeled data are readily available.


The Journal of Risk Finance | 2007

Data-Efficient Model Building for Financial Applications: A Semi-Supervised Learning Approach

Sven Sandow; Xuelong Zhou

Investors often rely on probabilistic models that were learned from small historical datasets. We propose a new method for data-efficient model learning. This method, which is an extension of the standard minimum relative entropy (MRE) approach and has a clear financial interpretation, belongs to the class of semi-supervised algorithms, which can learn from data that are only partially labeled with values of the variable of interest. We test our method on an artificial dataset and use it to learn a model for recovery of defaulted debt. In both cases, the resulting models perform better than the standard MRE model when the number of labeled data is small.


International Journal of Theoretical and Applied Finance | 2006

Information, Model Performance, Pricing and Trading Measures in Incomplete Markets

Jinggang Huang; Sven Sandow; Craig A. Friedman

In the incomplete market setting, we define a generalized Kullback-Leibler relative entropy in terms of an investor's expected utility. We motivate this quantity, the relative U-entropy, from an economic point of view. Relative U-entropy measures the discrepancy from a set of pricing measures to a single probability measure. We show that the relative U-entropy shares a number of important properties with the usual Kullback-Leibler relative entropy, and establish the link between this quantity and the pricing measure corresponding to the least favorable market completion. We also describe an economic performance measure for probabilistic models that may be used by an investor in an incomplete market setting. We then introduce a statistical learning paradigm suitable for investors who learn models and base investment decisions, in an incomplete market, on these models.


Journal of Machine Learning Research | 2003

Learning probabilistic models: an expected utility maximization approach

Craig A. Friedman; Sven Sandow
