Featured Researches

Computational Finance

Variational Autoencoders: A Hands-Off Approach to Volatility

A volatility surface is an important tool for pricing and hedging derivatives. The surface shows the volatility that is implied by the market price of an option on an asset as a function of the option's strike price and maturity. Often, market data is incomplete and it is necessary to estimate missing points on partially observed surfaces. In this paper, we show how variational autoencoders can be used for this task. The first step is to derive latent variables that can be used to construct synthetic volatility surfaces that are indistinguishable from those observed historically. The second step is to determine the synthetic surface generated by our latent variables that fits available data as closely as possible. As a dividend of our first step, the synthetic surfaces produced can also be used in stress testing, in market simulators for developing quantitative investment strategies, and for the valuation of exotic options. We illustrate our procedure and demonstrate its power using foreign exchange market data.

Read more
Economics

The Origin and the Resolution of Nonuniqueness in Linear Rational Expectations

The nonuniqueness of rational expectations is explained: in the stochastic, discrete-time, linear, constant-coefficients case, the associated free parameters are coefficients that determine the public's most immediate reactions to shocks. The requirement of model-consistency may leave these parameters completely free, yet when their values are appropriately specified, a unique solution is determined. In a broad class of models, the requirement of least-square forecast errors determines the parameter values, and therefore defines a unique solution. This approach is independent of dynamical stability, and generally does not suppress model dynamics. Application to a standard New Keynesian example shows that the traditional solution suppresses precisely those dynamics that arise from rational expectations. The uncovering of those dynamics reveals their incompatibility with the new I-S equation and the expectational Phillips curve.

Read more

Pricing of Securities

The Stochastic Balance Equation for the American Option Value Function and its Gradient

In this paper, we consider the problem of valuation and hedging of American options written on dividend-paying assets whose price dynamics follow a multidimensional diffusion model. We derive a stochastic balance equation for the American option value function and its gradient, and prove that this pair is the unique solution of the stochastic balance equation, as a consequence of uniqueness in the related adapted future-supremum problem.

More from Pricing of Securities
Climate Change Valuation Adjustment (CCVA) using parameterized climate change impacts

We introduce the Climate Change Valuation Adjustment (CCVA) to capture climate change impacts on CVA+FVA that are currently invisible under typical market practice. To discuss such impacts on CVA+FVA from changes to instantaneous hazard rates, we introduce a flexible and expressive parameterization that captures the path of this impact to climate change endpoints, as well as transient transition effects. Finally, we quantify examples of typical interest where there is a risk of economic stress from sea level change up to 2101, and from transformations of business models. We find that even with the slowest possible uniform approach to a climate change impact in 2101, there can still be significant CVA+FVA impacts on interest rate swaps of 20 years or more maturity. Transformation effects on CVA+FVA are strongly dependent on the timing and duration of the business model transformation. A parameterized approach enables discussion with stakeholders of economic impacts on CVA+FVA, whatever the details behind the climate impact.
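A minimal sketch of parameterizing the path of a hazard-rate impact toward a climate endpoint, assuming an illustrative power-law transition (not the paper's exact parameterization); all numbers below are hypothetical.

```python
import numpy as np

def hazard_path(t, h0=0.01, h_end=0.05, t_end=80.0, speed=2.0):
    """Illustrative parameterization (not the paper's exact form): the
    instantaneous hazard rate drifts from h0 today to h_end at the climate
    endpoint t_end years out; `speed` shapes how back-loaded the path is."""
    w = np.clip(t / t_end, 0.0, 1.0) ** speed
    return h0 + (h_end - h0) * w

t = np.linspace(0.0, 30.0, 3001)
h = hazard_path(t)

# Survival probability to 30y: exp(-integral of the hazard), trapezoid rule.
dt = t[1] - t[0]
integral = float(np.sum(0.5 * (h[1:] + h[:-1])) * dt)
survival_30y = float(np.exp(-integral))
```

Survival probabilities like this feed directly into the expected-loss integrals behind CVA; varying `speed` front- or back-loads the climate impact.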

More from Pricing of Securities
A structural approach to default modelling with pure jump processes

We present a general framework for the estimation of corporate default based on a firm's capital structure, when its assets are assumed to follow a pure-jump Lévy process; this setup provides a natural extension to the usual default metrics defined in diffusion (log-normal) models, and allows us to capture extreme market events such as sudden drops in asset prices, which are closely linked to default occurrence. Within this framework, we introduce several processes featuring negative jumps only and derive practical closed formulas for equity prices, which enable us to use a moment-based algorithm to calibrate the parameters from real market data and to estimate the associated default metrics. A notable feature of these models is the redistribution of credit risk towards shorter maturities: this constitutes an interesting improvement over diffusion models, which are known to underestimate short-term default probabilities. We also provide extensions to a model featuring both positive and negative jumps and discuss qualitative and quantitative features of the results. For the reader's convenience, practical tools for model implementation and R code are also included.
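To see why negative-jump models produce non-negligible short-term default probabilities, here is a hedged Monte Carlo sketch using a compound Poisson process with exponentially distributed negative jumps (a simple spectrally negative pure-jump process, not one of the paper's specific models), with default as first passage of log-assets below a barrier; parameters are hypothetical, not calibrated.

```python
import numpy as np

rng = np.random.default_rng(42)

def default_probability(horizon=1.0, n_paths=20_000, n_steps=250,
                        drift=0.05, jump_rate=3.0, mean_jump=0.08,
                        barrier=-0.5):
    """Monte Carlo first-passage default probability when log-assets follow
    a positive drift minus a compound Poisson process with exponential
    negative jumps. All parameter values are made up for illustration."""
    dt = horizon / n_steps
    x = np.zeros(n_paths)
    defaulted = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        n_jumps = rng.poisson(jump_rate * dt, size=n_paths)
        jump_sizes = np.zeros(n_paths)
        mask = n_jumps > 0
        # Sum of k exponential(mean_jump) jumps is Gamma(k, mean_jump).
        jump_sizes[mask] = rng.gamma(n_jumps[mask], mean_jump)
        x = x + drift * dt - jump_sizes
        defaulted |= x <= barrier
    return defaulted.mean()

pd_1y = default_probability()
```

Unlike a diffusion, a path here can cross the barrier in a single jump, which is the mechanism behind the model's higher short-maturity default probabilities.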

More from Pricing of Securities
Risk Management

Some results on the risk capital allocation rule induced by the Conditional Tail Expectation risk measure

Risk capital allocations (RCAs) are an important tool in quantitative risk management, where they are used to, e.g., gauge the profitability of distinct business units, determine the price of a new product, and conduct marginal economic capital analysis. Nevertheless, the notion of an RCA has been living in the shadow of another, closely related notion, that of a risk measure (RM), in the sense that the latter often shapes the fashion in which the former is implemented. In fact, as the majority of the RCAs known nowadays are induced by RMs, the popularity of the two is very much correlated. As a result, it is the RCA induced by the Conditional Tail Expectation (CTE) RM that has arguably prevailed in the scholarly literature and in applications. Admittedly, the CTE RM is a sound mathematical object and an important regulatory RM, but its appropriateness is controversial in, e.g., profitability analysis and pricing. In this paper, we address the question of whether the RCA induced by the CTE RM may concur with alternatives that arise in the context of profit maximization. More specifically, we provide an exhaustive description of all those probabilistic model settings in which the mathematical and regulatory CTE RM may also reflect the risk perception of a profit-maximizing insurer.
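The CTE RM and its induced allocation rule are easy to state: CTE_q(S) = E[S | S > VaR_q(S)] for the aggregate loss S, and the capital allocated to unit i is E[X_i | S > VaR_q(S)]. A small Monte Carlo sketch with a made-up two-unit loss model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 100_000, 0.95

# Two correlated (hypothetical) business-unit losses; S is the aggregate.
z = rng.standard_normal((n, 2))
x1 = np.exp(0.5 * z[:, 0])
x2 = np.exp(0.4 * (0.6 * z[:, 0] + 0.8 * z[:, 1]))
s = x1 + x2

var_q = np.quantile(s, q)        # VaR of the aggregate at level q
tail = s > var_q
cte = s[tail].mean()             # CTE risk measure of S
alloc_1 = x1[tail].mean()        # CTE-induced allocation to unit 1
alloc_2 = x2[tail].mean()        # CTE-induced allocation to unit 2
```

By construction the two allocations sum to the aggregate CTE (the "full allocation" property), which is one reason this rule is so widely used.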

More from Risk Management
Liquidity Stress Testing using Optimal Portfolio Liquidation

We build an optimal portfolio liquidation model for OTC markets, aiming at minimizing the trading costs via the choice of the liquidation time. We work in the Locally Linear Order Book framework of Tóth et al. (2011) to obtain the market impact as a function of the traded volume. We find that the optimal terminal time for a linear execution of a small order is proportional to the square root of the ratio between the amount being bought or sold and the average daily volume. Numerical experiments on real market data illustrate the method on a portfolio of corporate bonds.
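The headline square-root rule can be written down directly; the proportionality constant, which in the paper depends on the market-impact parameters, is left as a placeholder here.

```python
import math

def optimal_liquidation_horizon(order_size: float, adv: float, c: float = 1.0) -> float:
    """Optimal terminal time for a linear execution of a small order,
    proportional to sqrt(order_size / ADV). The constant c stands in for
    the impact- and cost-dependent factor from the model."""
    return c * math.sqrt(order_size / adv)

# Quadrupling the order size only doubles the optimal horizon.
t1 = optimal_liquidation_horizon(1_000_000, 50_000_000)
t2 = optimal_liquidation_horizon(4_000_000, 50_000_000)
```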

More from Risk Management
Insurance Business and Sustainable Development

In this study, we discuss recent developments in risk management of the global financial and insurance business with respect to sustainable development. So far, climate change has been the dominant aspect in managing sustainability risks and opportunities, accompanied by several legislative initiatives triggered by supervisory authorities. However, a sole concentration on these aspects misses other important economic and social facets of the sustainable development goals formulated by the UN. Such aspects have very recently come into the focus of the European Commission with respect to the Solvency II project for the European insurance industry. Clearly, the new legislative expectations can be better handled by larger insurance companies and holdings than by small- and medium-sized mutual insurance companies, which are numerous in central Europe due to their historical development starting in the late Middle Ages and early modern times. We therefore also concentrate on strategies within the risk management of such small- and medium-sized enterprises that can be implemented without much effort, in particular those that are not directly related to climate change.

More from Risk Management
Trading and Market Microstructure

Liquidation, Leverage and Optimal Margin in Bitcoin Futures Markets

Using generalized extreme value theory to characterize tail distributions, we address liquidation, leverage, and optimal margins for bitcoin long and short futures positions. The empirical analysis of perpetual bitcoin futures on BitMEX shows that (1) daily forced liquidations relative to outstanding futures are substantial, at 3.51% and 1.89% for long and short positions respectively; (2) investors facing forced liquidation trade aggressively, with an average leverage of 60X; and (3) exchanges should raise the current 1% margin requirement to 33% (3X leverage) for long positions and 20% (5X leverage) for short positions to reduce the daily margin call probability to 1%. Our results further suggest that the normality assumption on returns significantly underestimates optimal margins. Policy implications are also discussed.
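A hedged sketch of the margin-setting logic: fit a GEV distribution to block maxima of daily losses and back out the margin that targets a 1% daily margin-call probability under an i.i.d. assumption. The returns below are simulated Student-t draws standing in for bitcoin futures data, and the block size is an assumption.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Hypothetical heavy-tailed daily returns (Student-t, df=3); losses are
# negated returns, so a margin call occurs when the loss exceeds the margin.
returns = 0.04 * rng.standard_t(3, size=5000)
losses = -returns

m = 25                                    # block size, in trading days
maxima = losses[: len(losses) // m * m].reshape(-1, m).max(axis=1)
shape, loc, scale = genextreme.fit(maxima)

# Margin whose *daily* exceedance probability is 1%:
# P(block max > margin) = 1 - (1 - 0.01)^m for i.i.d. days.
p_block = 1.0 - (1.0 - 0.01) ** m
margin = genextreme.ppf(1.0 - p_block, shape, loc, scale)
daily_exceed = (losses > margin).mean()
```

Repeating the exercise with a normal-distribution fit instead of the GEV tail gives a visibly lower margin, which is the underestimation the abstract refers to.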

More from Trading and Market Microstructure
Auction Type Resolution on Smart Derivatives

This paper proposes an auction-type resolution for smart derivatives. Migrating derivatives contracts to smart contracts ("smart derivatives") has been widely discussed, often with a focus on automation. From a practical perspective, however, it is also important to prepare for disputes: terminating the relationship at default raises controversial issues. In OTC derivative markets, master agreements define a basic policy for the liquidation process, but disputes have arisen over these processes. We propose defining an auction-type resolution in smart derivatives that each participant would find beneficial.

More from Trading and Market Microstructure
Cross impact in derivative markets

We introduce a linear cross-impact framework in a setting in which the price of some given financial instruments (derivatives) is a deterministic function of one or more, possibly tradeable, stochastic factors (underlying). We show that a particular cross-impact model, the multivariate Kyle model, prevents arbitrage and aggregates (potentially non-stationary) traded order flows on derivatives into (roughly stationary) liquidity pools aggregating order flows traded on both derivatives and underlying. Using E-Mini futures and options along with VIX futures, we provide empirical evidence that the price formation process from order flows on derivatives is driven by cross-impact and confirm that the simple Kyle cross-impact model is successful at capturing parsimoniously such empirical phenomenology. Our framework may be used in practice for estimating execution costs, in particular hedging costs.
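In a linear cross-impact framework, price moves respond to order flows through an impact matrix Λ (Δp = Λq); in the multivariate Kyle setting, absence of arbitrage is tied to structural properties of Λ such as symmetry. A toy estimation of Λ by least squares on simulated flows, with a made-up ground-truth matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth linear cross-impact matrix (values made up): dp = Lambda @ flow.
lam_true = np.array([[0.5, 0.2],
                     [0.2, 0.8]])   # symmetric, as in the Kyle setting

flows = rng.standard_normal((5000, 2))                     # signed order flows
dp = flows @ lam_true.T + 0.05 * rng.standard_normal((5000, 2))

# Recover Lambda by least squares: dp ~ flows @ Lambda.T
lam_hat_t, *_ = np.linalg.lstsq(flows, dp, rcond=None)
lam_hat = lam_hat_t.T
```

The off-diagonal entries are the cross-impact terms: flow on one instrument moving the price of the other, as with E-Mini flows moving VIX futures.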

More from Trading and Market Microstructure
General Finance

Internal hydro- and wind portfolio optimisation in real-time market operations

This paper addresses aspects related to the handling of intraday imbalances for hydro and wind power. The definition of imbalance cost is established and used to describe the potential benefits of shifting from plant-specific schedules to a common load requirement for wind and hydropower units in the same price area. The Nord Pool intraday pay-as-bid market has been the basis for the evaluation of imbalances, and some main characteristics of this market are described. We consider how internal handling of complementary imbalances within the same river system, with high inflow uncertainty and constrained reservoirs, can reduce volatility in short-term marginal cost and risk compared to trading in the intraday market. We also show that the imbalance cost for a power producer with both wind and hydropower assets can be reduced by internal balancing in combination with sales and purchases in a pay-as-bid intraday market.
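The benefit of netting complementary imbalances before trading can be shown with a toy imbalance-cost model; the half-spread cost per MWh is a hypothetical stand-in for pay-as-bid transaction costs.

```python
def imbalance_cost(volume_mwh: float, spread_eur: float) -> float:
    """Cost of clearing an imbalance in a pay-as-bid intraday market,
    modelled here simply as |volume| times half the bid-ask spread."""
    return abs(volume_mwh) * spread_eur / 2

# Complementary imbalances: wind is long 40 MWh, hydro is short 30 MWh.
wind_imb, hydro_imb = 40.0, -30.0
spread = 4.0  # EUR/MWh, hypothetical

cost_separate = imbalance_cost(wind_imb, spread) + imbalance_cost(hydro_imb, spread)
cost_netted = imbalance_cost(wind_imb + hydro_imb, spread)  # internal balancing first
```

Netting internally leaves only 10 MWh to trade instead of 70 MWh, so the cost drops from 140 to 20 EUR in this example.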

More from General Finance
The VIX index under scrutiny of machine learning techniques and neural networks

The CBOE Volatility Index, known by its ticker symbol VIX, is a popular measure of the market's expected volatility of the S&P 500 Index, calculated and published by the Chicago Board Options Exchange (CBOE). It is often referred to as the fear index or the fear gauge. The current VIX index value quotes the expected annualized change in the S&P 500 Index over the following 30 days, based on options-based theory and current options-market data. Despite its theoretical foundation in option price theory, CBOE's Volatility Index is prone to inadvertent and deliberate errors because it is a weighted average of out-of-the-money calls and puts which may be illiquid. Many claims of market manipulation have been brought against the VIX in recent years. This paper discusses several approaches to replicate the VIX index as well as VIX futures, both by using a subset of relevant options and by using neural networks trained to automatically learn the underlying formula. Using subset selection approaches on top of the original CBOE methodology, as well as machine learning and neural network models including random forests, support vector machines, feed-forward neural networks, and long short-term memory (LSTM) models, we show that a small number of options is sufficient to replicate the VIX index. Once we are able to replicate the VIX using a small number of S&P options, we can exploit potential arbitrage opportunities between the VIX index and its underlying derivatives. The results are intended to help investors better understand the options market and, more importantly, to give guidance to the US regulators and the CBOE, which have been investigating those manipulation claims for several years.
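A toy version of the subset-replication idea: if option prices co-move through a common volatility factor, a linear model fitted on a handful of them can track an index that is formally a weighted average of many. All data below are simulated, and this is not the CBOE methodology.

```python
import numpy as np

rng = np.random.default_rng(5)

n_days, n_options = 500, 100
# A common volatility factor drives all (hypothetical) OTM option prices.
factor = np.abs(rng.standard_normal((n_days, 1))) + 0.5
loadings = np.abs(rng.standard_normal(n_options)) + 0.2
option_prices = factor * loadings + 0.05 * np.abs(rng.standard_normal((n_days, n_options)))

# The "index": a fixed weighted average of all 100 options.
true_weights = rng.dirichlet(np.ones(n_options))
index = option_prices @ true_weights

# Replicate it from a small subset of liquid strikes by least squares.
subset = [0, 7, 19, 33, 52, 74, 91]
beta, *_ = np.linalg.lstsq(option_prices[:, subset], index, rcond=None)
replica = option_prices[:, subset] @ beta

corr = float(np.corrcoef(replica, index)[0, 1])
```

The high correlation with only 7 of 100 inputs mirrors the paper's finding that a small number of options suffices; the neural-network variants replace the linear map with a learned non-linear one.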

More from General Finance
How Decentralized is the Governance of Blockchain-based Finance: Empirical Evidence from four Governance Token Distributions

Novel blockchain technology provides the infrastructure layer for the creation of decentralized applications. A rapidly growing ecosystem of applications is built around financial services, commonly referred to as decentralized finance. Whereas the intangible concept of decentralization is presented as a key driver for the applications, defining and measuring decentralization is multifaceted. This paper provides a framework to quantify decentralization of governance power among blockchain applications. Governance of the applications is increasingly important and requires striking a balance between broad distribution, fostering user activity, and financial incentives. Therefore, we aggregate, parse, and analyze empirical data of four finance applications, calculating coefficients for the statistical dispersion of the governance token distribution. The gauges potentially support IS scholars in an objective evaluation of the capabilities and limitations of token governance and in fast iteration in design-driven governance mechanisms.
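One standard coefficient of statistical dispersion for a token-balance distribution is the Gini coefficient; a minimal implementation (the example balances are made up):

```python
import numpy as np

def gini(balances):
    """Gini coefficient of a token-balance distribution:
    0 = perfectly equal holdings, approaching 1 = one holder owns everything."""
    x = np.sort(np.asarray(balances, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard formula via the Lorenz curve.
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

equal = gini([10, 10, 10, 10])     # evenly distributed governance power
whale = gini([1, 1, 1, 997])       # concentrated in a single "whale"
```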

More from General Finance
Portfolio Management

MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management

Financial portfolio management is one of the most applicable problems in reinforcement learning (RL) owing to its sequential decision-making nature. Existing RL-based approaches, while inspiring, often lack the scalability, reusability, or richness of input information needed to accommodate the ever-changing capital markets. In this paper, we propose MSPM, a modularized and scalable multi-agent RL-based system for financial portfolio management. MSPM involves two asynchronously updated units: an Evolving Agent Module (EAM) and a Strategic Agent Module (SAM). A self-sustained EAM produces signal-comprised information for a specific asset using heterogeneous data inputs, and each EAM is reusable, connecting to multiple SAMs. An SAM is responsible for asset reallocation in a portfolio using the information from its connected EAMs. With this architecture and the multi-step condensation of volatile market information, MSPM aims to provide a customizable, stable, and dedicated solution to portfolio management, unlike existing approaches. We also tackle the data-shortage issue of newly listed stocks by transfer learning, and validate the indispensability of the EAM with four different portfolios. Experiments on 8 years of U.S. stock market data demonstrate the effectiveness of MSPM in profit accumulation through its outperformance of existing benchmarks.

More from Portfolio Management
Integrating prediction in mean-variance portfolio optimization

Many problems in quantitative finance involve both predictive forecasting and decision-based optimization. Traditionally, predictive models are optimized with purely prediction-based objectives and constraints, and are therefore unaware of how their predictions will ultimately be used in the final decision-based optimization. We present a stochastic optimization framework for integrating regression-based predictive models into a mean-variance portfolio optimization setting. Closed-form analytical solutions are provided for the unconstrained and equality-constrained cases. For the general inequality-constrained case, we make use of recent advances in neural-network architecture for efficient optimization of batch quadratic programs. To our knowledge, this is the first rigorous study of integrating prediction in a mean-variance portfolio optimization setting. We present several historical simulations using global futures data and demonstrate the benefits of the integrated approach in comparison to the decoupled alternative.
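The decoupled baseline that the paper improves upon is easy to sketch: fit a regression forecast, then plug the predicted mean and residual covariance into the classical unconstrained mean-variance solution w* = (1/γ) Σ⁻¹ μ̂. All data below are toy, and this is the two-step baseline, not the paper's integrated method.

```python
import numpy as np

rng = np.random.default_rng(11)

n, p, k = 600, 3, 4                      # samples, assets, features
beta_true = 0.01 * rng.standard_normal((k, p))
features = rng.standard_normal((n, k))
returns = features @ beta_true + 0.02 * rng.standard_normal((n, p))

# Step 1 (prediction): OLS forecast of next-period asset returns.
coef, *_ = np.linalg.lstsq(features, returns, rcond=None)
mu_hat = features[-1] @ coef                              # forecast vector
sigma = np.cov(returns - features @ coef, rowvar=False)   # residual covariance

# Step 2 (decision): unconstrained mean-variance weights w* = (1/gamma) Sigma^-1 mu_hat.
gamma = 5.0
w_star = np.linalg.solve(sigma, mu_hat) / gamma
```

The integrated approach instead trains the regression so that the resulting w* performs well, propagating the decision objective back into step 1.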

More from Portfolio Management
Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module

Dynamic portfolio optimization is the process of distributing and rebalancing a fund across different financial assets, such as stocks and cryptocurrencies, in consecutive trading periods to maximize accumulated profits or minimize risk over a time horizon. This field has seen huge developments in recent years because of increased computational power and increased research in sequential decision making through control theory. Recently, reinforcement learning (RL) has become an important tool in the development of sequential and dynamic portfolio optimization theory. In this paper, we design a deep reinforcement learning (DRL) framework as an autonomous portfolio optimization agent consisting of a Latent Feature State Space (LFSS) module for filtering and feature extraction of financial data, which is used as the state space for the deep RL model. We develop an extensive RL agent with efficiency and performance advantages over several benchmarks and over model-free RL agents used in prior work. The noisy and non-stationary behaviour of daily asset prices in the financial market is addressed through a Kalman filter. Autoencoders, ZoomSVD, and restricted Boltzmann machines were the models used and compared in the module to extract relevant time series features as the state space. We simulate weekly data, with practical constraints and transaction costs, on a portfolio of S&P 500 stocks. We introduce a new benchmark based on the technical indicator Kd-Index and the mean-variance model, in addition to the equally weighted portfolio used in most prior work. The study confirms that the proposed RL portfolio agent with a state-space function in the form of the LFSS module gives robust results with an attractive performance profile over baseline RL agents and the given benchmarks.
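The Kalman-filter denoising step mentioned above can be sketched with a minimal local-level filter; the state is a latent price level following a random walk, the observation is the noisy market price, and both noise variances are assumptions.

```python
import numpy as np

def kalman_filter_1d(observations, q=1e-4, r=1e-2):
    """Minimal local-level Kalman filter used as a denoising step:
    state = latent price level (random-walk variance q),
    observation = noisy market price (observation-noise variance r)."""
    x, p = observations[0], 1.0
    filtered = []
    for z in observations:
        p = p + q                      # predict: state uncertainty grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation
        p = (1 - k) * p
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(2)
true_level = np.cumsum(0.01 * rng.standard_normal(500)) + 100.0
noisy = true_level + 0.1 * rng.standard_normal(500)
smoothed = kalman_filter_1d(noisy)
```

The filtered series, rather than the raw prices, is what a module like LFSS would pass on for feature extraction.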

More from Portfolio Management
Mathematical Finance

Bertram's Pairs Trading Strategy with Bounded Risk

Finding Bertram's optimal trading strategy for a pair of cointegrated assets whose price difference follows an Ornstein--Uhlenbeck process can be formulated as an unconstrained convex optimization problem: maximize expected profit per unit of time. This model is generalized to a form in which the riskiness of profit, measured by its per-time-unit volatility, is controlled (e.g., when regulatory bodies impose limits on the riskiness of trading strategies). The resulting optimization problem need not be convex. In spite of this, it is demonstrated that the problem is still efficiently solvable. In addition, we investigate the practically critical problem that the parameters of the price difference process are never known exactly but must be estimated from an observed finite sample. It is shown how this imprecision affects the optimal trading strategy, by quantifying the loss incurred by an imprecise estimate compared to a theoretical trader who knows the parameters exactly. The main results focus on the geometric and optimization-theoretic viewpoint of the risk-bounded trading strategy and of the imprecision resulting from the statistical estimates.
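A simulation sketch of the underlying trade-off: simulate an OU price difference and grid-search a symmetric entry/exit band for the best profit per unit of time. Bertram's closed-form objective is replaced here by brute-force simulation, and all parameters (mean reversion, volatility, transaction cost) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_ou(n=100_000, dt=0.01, kappa=5.0, sigma=0.5):
    """Euler path of a zero-mean Ornstein-Uhlenbeck price difference."""
    x = np.zeros(n)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(1, n):
        x[i] = x[i - 1] * (1.0 - kappa * dt) + noise[i]
    return x

def profit_per_unit_time(x, dt, enter, exit_level, cost=0.01):
    """Profit rate of the band rule: open the spread trade at x <= enter,
    close it at x >= exit_level, paying `cost` per round trip."""
    in_trade, profit = False, 0.0
    for v in x:
        if not in_trade and v <= enter:
            in_trade = True
        elif in_trade and v >= exit_level:
            profit += exit_level - enter - cost
            in_trade = False
    return profit / (len(x) * dt)

x = simulate_ou()
dt = 0.01
# Grid search over symmetric entry/exit bands (-b, b), as in Bertram's setup.
best_rate, best_band = max(
    (profit_per_unit_time(x, dt, -b, b), b) for b in np.arange(0.05, 0.6, 0.05)
)
```

Wider bands earn more per round trip but trade less often; the risk-bounded version of the problem additionally constrains the volatility of this profit stream.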

More from Mathematical Finance
Optimal Investment and Consumption under a Habit-Formation Constraint

We extend the result of our earlier study [Angoshtari, Bayraktar, and Young; "Optimal consumption under a habit-formation constraint," available at: arXiv:2012.02277, (2020)] to a market setup that includes a risky asset whose price process is a geometric Brownian motion. We formulate an infinite-horizon optimal investment and consumption problem, in which an individual forms a habit based on the exponentially weighted average of her past consumption rate, and in which she invests in a Black-Scholes market. The novelty of our model is in specifying habit formation through a constraint rather than through the common approach via the objective function. Specifically, the individual is constrained to consume at a rate higher than a certain proportion α of her consumption habit. Our habit-formation model allows for both addictive (α = 1) and nonaddictive (0 < α < 1) habits. The optimal investment and consumption policies are derived explicitly in terms of the solution of a system of differential equations with free boundaries, which is analyzed in detail. If the wealth-to-habit ratio is below (resp. above) a critical level x*, the individual consumes at (resp. above) the minimum rate and invests more (resp. less) aggressively in the risky asset. Numerical results show that addictive habit formation requires significantly more wealth to support the same consumption rate compared to a moderately nonaddictive habit. Furthermore, an individual with a more addictive habit invests less in the risky asset than an individual with a less addictive habit but the same wealth-to-habit ratio and risk aversion, which provides an explanation for the equity-premium puzzle.

More from Mathematical Finance
When to Quit Gambling, if You Must!

We develop an approach to solve Barberis (2012)'s casino gambling model, in which a gambler whose preferences are specified by cumulative prospect theory (CPT) must decide when to stop gambling by a prescribed deadline. We assume that the gambler can assist their decision using an independent randomization, and explain why this is a reasonable assumption. The problem is inherently time-inconsistent due to the probability weighting in CPT, and we study both precommitted and naive stopping strategies. We turn the original problem into a computationally tractable mathematical program, based on which we derive an optimal precommitted rule that is randomized and Markovian. The analytical treatment enables us to make several predictions regarding a gambler's behavior, including that with randomization they may enter the casino even when allowed to play only once; that whether they will play longer once granted more bets depends on whether they are at a gain or at a loss; and that, prevalently, a naive gambler never stops at a loss.

More from Mathematical Finance
Statistical Finance

Combination of window-sliding and prediction range method based on LSTM model for predicting cryptocurrency

The present study aims to model cryptocurrency price trends, grounded in financial theory, using an LSTM model with multiple combinations of window length and prediction horizon; a random walk model is also applied with different parameter settings for comparison.
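The window-sliding construction pairs each input window of past prices with a target `horizon` steps ahead; a minimal sketch of the dataset-building step (the LSTM itself is omitted, and the toy price series is a stand-in):

```python
import numpy as np

def make_windows(series, window, horizon):
    """Slice a price series into (input window, future target) pairs:
    X[i] = series[i : i+window], y[i] = series[i+window+horizon-1]."""
    x, y = [], []
    for i in range(len(series) - window - horizon + 1):
        x.append(series[i : i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(x), np.array(y)

prices = np.arange(10.0)          # stand-in for a cryptocurrency price series
x, y = make_windows(prices, window=4, horizon=2)
```

Sweeping `window` and `horizon` over a grid reproduces the "multiple combinations" the study evaluates.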

More from Statistical Finance
Asymmetric Tsallis distributions for modelling financial market dynamics

Financial markets are highly non-linear and non-equilibrium systems. Earlier works have suggested that the behavior of market returns can be well described within the framework of non-extensive Tsallis statistics or superstatistics. For small time scales (delays), a good fit to the distributions of stock returns is obtained with q-Gaussian distributions, which can be derived either from Tsallis statistics or from superstatistics. These distributions are symmetric. However, as the time lag increases, the distributions become increasingly non-symmetric. In this work, we address this problem by considering the data distribution as a linear combination of two independent normalized distributions, one for negative returns and one for positive returns. Each of these two independent distributions is a half q-Gaussian with its own non-extensivity parameter q and temperature parameter beta. Using this model, we investigate the behavior of stock market returns over time scales from 1 to 80 days. The data cover both the dot-com bubble and the 2008 crash periods. These investigations show that for all the time lags, the fits to the data distributions are better using asymmetric distributions than symmetric q-Gaussian distributions. The behaviors of the q parameter are quite different for positive and negative returns. For positive returns, q approaches a constant value of 1 after a certain lag, indicating the distributions have reached equilibrium. On the other hand, for negative returns, the q values do not reach a stationary value over the time scales studied. In the present model, the markets show a transition from normal to superdiffusive behavior (a possible phase transition) during the 2008 crash period. Such behavior is not observed with a symmetric q-Gaussian distribution model with q independent of time lag.
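A minimal sketch of the two-piece density the abstract describes: a q-exponential kernel with separate (q, beta) parameters on each side of zero. The density is left unnormalized and the parameter values are illustrative, not fitted ones from the paper.

```python
import numpy as np

def q_exponential(u, q):
    """Tsallis q-exponential [1 + (1-q)u]_+^(1/(1-q)); reduces to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = 1.0 + (1.0 - q) * u
    return np.where(base > 0, base, 0.0) ** (1.0 / (1.0 - q))

def asymmetric_q_gaussian(x, q_neg, beta_neg, q_pos, beta_pos):
    """Unnormalized two-piece density: a half q-Gaussian with its own
    (q, beta) on each side of zero, as in the asymmetric Tsallis model."""
    x = np.asarray(x, dtype=float)
    neg = q_exponential(-beta_neg * x**2, q_neg)
    pos = q_exponential(-beta_pos * x**2, q_pos)
    return np.where(x < 0, neg, pos)

xs = np.linspace(-5, 5, 11)
f = asymmetric_q_gaussian(xs, q_neg=1.5, beta_neg=1.0, q_pos=1.1, beta_pos=0.5)
```

With q_neg > q_pos the negative tail is heavier than the positive one, which is the kind of asymmetry the fits to return data capture.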

More from Statistical Finance
Exploring asymmetric multifractal cross-correlations of price-volatility and asymmetric volatility dynamics in cryptocurrency markets

An asymmetric relationship between price and volatility is a prominent feature of financial market time series. This paper explores the price-volatility nexus in cryptocurrency markets and investigates the presence of an asymmetric volatility effect between uptrend (bull) and downtrend (bear) regimes. Conventional GARCH-class models have shown that in cryptocurrency markets, asymmetric reactions of volatility to returns differ from those of other traditional financial assets. We address this issue from the viewpoint of fractal analysis, which can cover the nonlinear interactions and self-similarity properties widely acknowledged in the field of econophysics. The asymmetric cross-correlations between price and volatility for Bitcoin (BTC), Ethereum (ETH), Ripple (XRP), and Litecoin (LTC) during the period from June 1, 2016 to December 28, 2020 are investigated using the MF-ADCCA method and quantified via the asymmetric DCCA coefficient. These approaches take into account the nonlinearity and asymmetric multifractal scaling properties, providing new insights into the relationships in a dynamical way. We find that cross-correlations are stronger in downtrend markets than in uptrend markets for the maturing BTC and ETH. In contrast, for XRP and LTC, inverted reactions are present, where cross-correlations are stronger in uptrend markets.
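For orientation, here is a minimal sketch of the plain (symmetric) DCCA coefficient that the asymmetric variant builds on; the MF-ADCCA analysis in the paper additionally conditions boxes on the sign of the local trend and varies the moment order. The series used below are toy stand-ins, not cryptocurrency data.

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """Detrended cross-correlation (DCCA) coefficient at box size s:
    rho = F2_DCCA / (F_DFA(x) * F_DFA(y)), computed from linearly
    detrended integrated profiles over non-overlapping boxes."""
    X = np.cumsum(x - np.mean(x))            # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(s)
    f2xy = f2xx = f2yy = 0.0
    for i in range(len(X) // s):             # non-overlapping boxes
        xb, yb = X[i*s:(i+1)*s], Y[i*s:(i+1)*s]
        rx = xb - np.polyval(np.polyfit(t, xb, 1), t)   # detrend box
        ry = yb - np.polyval(np.polyfit(t, yb, 1), t)
        f2xy += np.mean(rx * ry)
        f2xx += np.mean(rx * rx)
        f2yy += np.mean(ry * ry)
    return f2xy / np.sqrt(f2xx * f2yy)

rng = np.random.default_rng(1)
returns = rng.normal(size=500)
volatility = np.abs(returns)                 # crude volatility proxy
rho = dcca_coefficient(returns, volatility, s=20)
```

Like an ordinary correlation, the coefficient lies in [-1, 1]; the asymmetric version reports it separately for boxes with positive and negative local price trends.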

More from Statistical Finance
Computational Finance

Variational Autoencoders: A Hands-Off Approach to Volatility

A volatility surface is an important tool for pricing and hedging derivatives. The surface shows the volatility that is implied by the market price of an option on an asset as a function of the option's strike price and maturity. Often, market data is incomplete and it is necessary to estimate missing points on partially observed surfaces. In this paper, we show how variational autoencoders can be used for this task. The first step is to derive latent variables that can be used to construct synthetic volatility surfaces that are indistinguishable from those observed historically. The second step is to determine the synthetic surface generated by our latent variables that fits available data as closely as possible. As a dividend of our first step, the synthetic surfaces produced can also be used in stress testing, in market simulators for developing quantitative investment strategies, and for the valuation of exotic options. We illustrate our procedure and demonstrate its power using foreign exchange market data.
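The second step of the procedure, fitting latent variables to a partially observed surface, can be sketched in miniature. Everything below is a toy stand-in: the "decoder" is a fixed random linear map rather than a trained VAE decoder, and the grid, mask, and latent dimension are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "decoder": maps a 2-D latent vector to a 5x5 vol grid.
# In the paper this is a trained VAE decoder; here it is a fixed
# random linear map, just to illustrate the completion step.
W = rng.normal(size=(25, 2))
decode = lambda z: (W @ z).reshape(5, 5)

# Partially observed surface: generate from a "true" latent, then mask.
z_true = np.array([0.7, -1.2])
surface = decode(z_true)
mask = rng.random((5, 5)) < 0.6           # ~60% of points observed

# Completion step: pick the latent whose synthetic surface best matches
# the observed points (least squares on observed entries only).
A = W[mask.ravel()]                        # decoder rows of observed points
b = surface[mask]                          # observed vols
z_fit, *_ = np.linalg.lstsq(A, b, rcond=None)

completed = decode(z_fit)                  # fills in the missing points
```

With a nonlinear trained decoder the least-squares step would be replaced by numerical minimization over the latent space, but the structure of the fit is the same.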

More from Computational Finance
A deep learning model for gas storage optimization

To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon. In this article, we utilize techniques inspired by reinforcement learning in order to optimize the operation plans of underground natural gas storage facilities. We provide a theoretical framework and assess the performance of the proposed method numerically in comparison to a state-of-the-art least-squares Monte-Carlo approach. Due to the inherent intricacy originating from the high-dimensional forward market as well as the numerous constraints and frictions, the optimization exercise can hardly be tackled by means of traditional techniques.

More from Computational Finance
Deep Hedging under Rough Volatility

We investigate the performance of the Deep Hedging framework under training paths beyond the (finite-dimensional) Markovian setup. In particular, we analyse the hedging performance of the original architecture under rough volatility models with a view to existing theoretical results for those. Furthermore, we suggest parsimonious but suitable network architectures capable of capturing the non-Markovianity of time series. Finally, we analyse the hedging behaviour in these models in terms of P&L distributions and draw comparisons to jump diffusion models when the rebalancing frequency is realistically small.

More from Computational Finance
Economics

The Origin and the Resolution of Nonuniqueness in Linear Rational Expectations

The nonuniqueness of rational expectations is explained: in the stochastic, discrete-time, linear, constant-coefficients case, the associated free parameters are coefficients that determine the public's most immediate reactions to shocks. The requirement of model-consistency may leave these parameters completely free, yet when their values are appropriately specified, a unique solution is determined. In a broad class of models, the requirement of least-square forecast errors determines the parameter values, and therefore defines a unique solution. This approach is independent of dynamical stability, and generally does not suppress model dynamics. Application to a standard New Keynesian example shows that the traditional solution suppresses precisely those dynamics that arise from rational expectations. The uncovering of those dynamics reveals their incompatibility with the new I-S equation and the expectational Phillips curve.
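As a standard textbook illustration of the nonuniqueness (not the paper's general construction), consider the scalar model x_t = a E_t[x_{t+1}] + u_t with |a| < 1. Model-consistency alone admits a whole family of solutions:

```latex
x_t \;=\; \underbrace{\sum_{k=0}^{\infty} a^{k}\,\mathbb{E}_t\!\left[u_{t+k}\right]}_{\text{fundamental solution}} \;+\; b_t,
\qquad \text{for any } b_t \text{ satisfying } b_t = a\,\mathbb{E}_t\!\left[b_{t+1}\right].
```

Pinning down b_t, or equivalently the coefficients governing the most immediate reaction of x_t to current shocks, is exactly the extra step that a criterion such as least-square forecast errors supplies.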

More from Economics
Data-based Automatic Discretization of Nonparametric Distributions

Although using non-Gaussian distributions in economic models has become increasingly popular, currently there is no systematic way for calibrating a discrete distribution from the data without imposing parametric assumptions. This paper proposes a simple nonparametric calibration method based on the Golub-Welsch algorithm for Gaussian quadrature. Application to an optimal portfolio problem suggests that assuming Gaussian instead of nonparametric shocks leads to up to 17% overweighting in the stock portfolio because the investor underestimates the probability of crashes.
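For reference, the Golub-Welsch step the method builds on can be sketched for the benchmark Gaussian case: eigen-decompose the Jacobi matrix of the three-term recurrence to obtain quadrature nodes and weights. The paper's contribution is to build the recurrence coefficients from data instead; here the known coefficients of the (probabilists') Hermite polynomials stand in, discretizing a standard normal shock.

```python
import numpy as np

def golub_welsch_normal(n):
    """Golub-Welsch for the standard normal: eigen-decompose the
    symmetric tridiagonal Jacobi matrix built from the three-term
    recurrence of the probabilists' Hermite polynomials.
    Nodes are the eigenvalues; weights are mu_0 times the squared
    first components of the normalized eigenvectors (mu_0 = 1)."""
    off = np.sqrt(np.arange(1.0, n))        # sqrt(b_k) = sqrt(k)
    J = np.diag(off, 1) + np.diag(off, -1)  # diagonal a_k = 0
    nodes, vecs = np.linalg.eigh(J)
    weights = vecs[0] ** 2
    return nodes, weights

nodes, weights = golub_welsch_normal(3)
# 3-point rule for N(0,1): nodes -sqrt(3), 0, sqrt(3); weights 1/6, 2/3, 1/6
```

A 3-point rule already matches the normal's moments up to order five, which is what makes such discretizations attractive for economic models with shocks.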

More from Economics
Corruption-free scheme of entering into contract: mathematical model

The main purpose of this paper is to formalize the modelling, analysis, and mathematical definition of corruption when entering into a contract between a principal agent and producers. The formulation of the problem and the definition of concepts for the general case are considered. For definiteness, all calculations and formulas are given for the case of three producers, one principal agent, and one intermediary. Economic analysis of corruption allows us to build a mathematical model of interaction between agents. The problem of distributing financial resources in a contract with a corrupted intermediary is considered; conditions for the emergence of corruption and its possible consequences are then proposed. Optimal corruption-free schemes of distributing financial resources in a contract are formed, when the principal agent's choice is limited first only by asymmetric information and then also by external influences. Numerical examples suggesting optimal corruption-free behaviour of the agents are presented.

More from Economics
