Featured Researches

Statistical Finance

A Sentiment Analysis Approach to the Prediction of Market Volatility

Prediction and quantification of future volatility and returns play an important role in financial modelling, both in portfolio optimization and risk management. Natural language processing today makes it possible to process news and social media comments to detect signals of investors' confidence. We have explored the relationship between sentiment extracted from financial news and tweets and FTSE100 movements. We investigated the strength of the correlation between sentiment measures on a given day and the market volatility and returns observed the next day. The findings suggest that there is evidence of correlation between sentiment and stock market movements: the sentiment captured from news headlines could be used as a signal to predict market returns; the same does not apply to volatility. Also, in a surprising finding, for the sentiment found in Twitter comments we obtained a correlation coefficient of -0.7, with a p-value below 0.05, which indicates a strong negative correlation between positive sentiment captured from the tweets on a given day and the volatility observed the next day. We developed an accurate classifier for the prediction of market volatility in response to the arrival of new information by deploying topic modelling, based on Latent Dirichlet Allocation, to extract feature vectors from a collection of tweets and financial news. The obtained features were used as additional input to the classifier. Thanks to the combination of sentiment and topic modelling, our classifier achieved a directional prediction accuracy for volatility of 63%.
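
As an illustration of the kind of pipeline the abstract describes, the sketch below correlates daily sentiment scores with next-day volatility and derives Latent Dirichlet Allocation topic features for a simple classifier. It is a minimal, hypothetical example: the column names, synthetic data, and logistic-regression classifier are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): correlate daily sentiment with
# next-day volatility and build LDA topic features for a volatility classifier.
# Column names, the sample texts, and the classifier choice are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# daily_df: one row per trading day with a mean tweet-sentiment score and the
# realised volatility observed on the *next* day (synthetic stand-in here).
daily_df = pd.DataFrame({
    "tweet_sentiment": np.random.rand(100),
    "next_day_volatility": np.random.rand(100),
})
r, p = pearsonr(daily_df["tweet_sentiment"], daily_df["next_day_volatility"])
print(f"correlation={r:.2f}, p-value={p:.3f}")

# Topic features from raw tweets/headlines via Latent Dirichlet Allocation.
texts = ["ftse rallies on bank earnings", "inflation fears hit markets"] * 50
counts = CountVectorizer(stop_words="english").fit_transform(texts)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

# Combine sentiment and topic proportions as classifier inputs; the label is a
# binary "volatility above median next day" flag (hypothetical target).
X = np.hstack([daily_df[["tweet_sentiment"]].values, topics])
y = (daily_df["next_day_volatility"] > daily_df["next_day_volatility"].median()).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("in-sample directional accuracy:", clf.score(X, y))
```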

Read more
Statistical Finance

A Stock Market Model Based on CAPM and Market Size

We introduce a new system of stochastic differential equations which models the dependence of market beta and unsystematic risk on size, measured by market capitalization. We fit our model using size decile data from Kenneth French's data library. This model is somewhat similar to the generalized volatility-stabilized models in (Pal, 2011; Pickova, 2013). The novelty of our work is twofold. First, we take into account the difference between price and total returns (in other words, between market size and wealth processes). Second, we work with actual market data. We study the long-term properties of this system of equations and reproduce the observed linearity of the capital distribution curve. Our model has two modifications: one for price returns and one for the equity premium. Somewhat surprisingly, they exhibit the same fit, with very similar coefficients. In the Appendix, we analyze size-based real-world index funds.
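
A minimal simulation sketch of a size-dependent diffusion is given below, purely to illustrate coefficients that vary with market capitalization; the functional forms for beta and idiosyncratic volatility are placeholders, not the paper's fitted equations.

```python
# Euler-Maruyama sketch of a size-dependent diffusion, NOT the paper's exact
# model: beta(s) and sigma_idio(s) below are hypothetical functional forms
# meant only to illustrate "coefficients depending on market capitalization".
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_steps, dt = 10, 2500, 1.0 / 252

log_size = rng.normal(0.0, 1.0, n_stocks)     # log market capitalizations
market_mu, market_sigma = 0.06, 0.15

def beta(s):         # hypothetical: smaller stocks carry higher market beta
    return 1.0 + 0.3 * np.exp(-s)

def sigma_idio(s):   # hypothetical: idiosyncratic risk decreases with size
    return 0.35 * np.exp(-0.5 * s)

for _ in range(n_steps):
    dM = market_mu * dt + market_sigma * np.sqrt(dt) * rng.standard_normal()
    dW = np.sqrt(dt) * rng.standard_normal(n_stocks)
    log_size += beta(log_size) * dM + sigma_idio(log_size) * dW

# Capital distribution curve: log market weight versus log rank.
weights = np.sort(np.exp(log_size))[::-1]
weights /= weights.sum()
print(np.log(weights)[:5], "vs log-ranks", np.log(np.arange(1, 6)))
```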

Read more
Statistical Finance

A Structural Model for Fluctuations in Financial Markets

In this paper we provide a comprehensive analysis of a structural model for the dynamics of prices of assets traded in a market, originally proposed in [1]. The model takes the form of an interacting generalization of the geometric Brownian motion model. It is formally equivalent to a model describing the stochastic dynamics of a system of analogue neurons, which is expected to exhibit glassy properties and thus many metastable states in a large portion of its parameter space. We perform a generating functional analysis, introducing a slow driving of the dynamics to mimic the effect of slowly varying macro-economic conditions. Distributions of asset returns over various time separations are evaluated analytically and are found to be fat-tailed, in a manner broadly in line with empirical observations. Our model also allows us to identify collective, interaction-mediated properties of pricing distributions: it predicts pricing distributions which are significantly broader than their non-interacting counterparts if the interactions between prices in the model contain a ferromagnetic bias. Using simulations, we are able to substantiate one of the main hypotheses underlying the original modelling, viz. that the phenomenon of volatility clustering can be rationalised in terms of an interplay between the dynamics within metastable states and the dynamics of occasional transitions between them.
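
The sketch below simulates an interacting generalization of geometric Brownian motion with a ferromagnetic bias in the couplings, in the spirit of the model described above; the coupling form, parameters, and the kurtosis check are illustrative assumptions rather than the paper's specification.

```python
# Illustrative simulation (assumed form, not the paper's exact model) of an
# interacting GBM: log-prices are coupled through a random matrix J with a
# tunable ferromagnetic bias J0, in analogue-neuron style.
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt, sigma = 50, 5000, 0.01, 0.2
J0, J_std = 0.5, 1.0                       # ferromagnetic bias and disorder
J = (J0 + J_std * rng.standard_normal((n, n))) / n
np.fill_diagonal(J, 0.0)

x = rng.normal(0.0, 0.1, n)                # log-prices
returns = []
for _ in range(steps):
    drift = J @ np.tanh(x)                 # interaction-mediated drift
    dx = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = x + dx
    returns.append(dx.copy())

returns = np.concatenate(returns)
# Excess kurtosis as a crude check for fat tails in the simulated returns.
kurt = np.mean((returns - returns.mean()) ** 4) / returns.var() ** 2 - 3.0
print("excess kurtosis of simulated returns:", float(kurt))
```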

Read more
Statistical Finance

A Study on Neural Network Architecture Applied to the Prediction of Brazilian Stock Returns

In this paper we present a statistical analysis of the architectural characteristics that influence the performance of neural networks, measured by the accuracy of their predictions of Brazilian stock returns. We created a population of architectures for analysis and extracted the sample that achieved the best predictive performance. We then examined which characteristics of this sample stand out and how they affect the neural networks. In addition, we make inferences about the kind of influence that different architectures have on neural network performance. In the study, we predicted the return of a Brazilian stock traded on the São Paulo stock exchange in order to measure the error committed by the different neural network architectures constructed. The results are promising and indicate that some aspects of the neural network architecture have a significant impact on the accuracy of the model.
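
A hedged sketch of this style of experiment is shown below: a random population of feed-forward architectures is fitted to stand-in return data and ranked by out-of-sample error. The hyperparameter ranges and synthetic data are assumptions, not the study's actual setup.

```python
# Sketch of an architecture-population experiment (assumed setup): sample
# random feed-forward architectures, fit each on synthetic stand-in return
# data, and rank them by out-of-sample mean squared error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                 # stand-in for lagged features
y = X @ rng.normal(size=8) * 0.01 + rng.normal(scale=0.02, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

population = []
for _ in range(20):                           # small random architecture search
    layers = tuple(int(u) for u in rng.integers(4, 64, size=rng.integers(1, 4)))
    model = MLPRegressor(hidden_layer_sizes=layers, activation="tanh",
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    err = mean_squared_error(y_te, model.predict(X_te))
    population.append((err, layers))

for err, layers in sorted(population)[:5]:    # best-performing architectures
    print(f"MSE={err:.5f}  hidden_layer_sizes={layers}")
```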

Read more
Statistical Finance

A Theory of Information overload applied to perfectly efficient financial markets

Before the massive spread of computer technology, information was far from complex. The development of technology shifted the paradigm: from individuals who faced scarce and costly information to individuals who face massive amounts of information accessible at low cost. Nowadays we are living in the era of big data, and investors deal every day with a huge flow of information. In the spirit of the modern idea that economic agents have limited computational capacity, we propose an original model that uses information overload to show how too much information could cause financial markets to depart from the traditional assumption of informational efficiency. We show that when information tends to infinity, the efficient market hypothesis ceases to hold. This also happens at lower levels of information, when using the maximum amount of information is not optimal for investors. The present work can be a stimulus to consider more realistic economic models, and it can be deepened further by including other realistic features present in financial markets, such as information asymmetry or noise in the transmission of information.

Read more
Statistical Finance

A Time Series Analysis-Based Forecasting Framework for the Indian Healthcare Sector

Designing efficient and robust algorithms for accurate prediction of stock market prices is one of the most exciting challenges in the field of time series analysis and forecasting. With the exponential rate of development and evolution of sophisticated algorithms, and with the availability of fast computing platforms, it has now become possible to effectively and efficiently extract, store, process, and analyze high volumes of stock market data with diverse content. The availability of complex algorithms that can execute very fast on parallel architectures over the cloud has made it possible to achieve higher accuracy in forecasting results while reducing the time required for computation. In this paper, we use the time series data of the healthcare sector of India for the period January 2010 to December 2016. We first demonstrate a decomposition approach to the time series and then illustrate how the decomposition results provide useful insights into the behavior and properties exhibited by the series. Further, based on the structural analysis of the time series, we propose six different methods of forecasting for predicting the time series index of the healthcare sector. Extensive results are provided on the performance of the forecasting methods to demonstrate their effectiveness.
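
By way of illustration, the sketch below performs the kind of trend/seasonal/residual decomposition the paper builds on, using statsmodels; the monthly series is synthetic stand-in data, not the actual healthcare-sector index.

```python
# Minimal decomposition sketch in the spirit of the paper. The monthly series
# below is synthetic; in practice one would load the healthcare sector index
# for January 2010 to December 2016 instead.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(5)
dates = pd.date_range("2010-01-01", "2016-12-01", freq="MS")      # monthly
values = (np.linspace(100, 180, len(dates))                       # trend
          + 10 * np.sin(2 * np.pi * np.arange(len(dates)) / 12)   # seasonality
          + rng.normal(0, 3, len(dates)))                         # noise
series = pd.Series(values, index=dates)

# Additive trend/seasonal/residual decomposition at a 12-month period.
result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())
print(result.seasonal.head(12))     # estimated seasonal pattern
print(result.resid.dropna().std())  # residual variability
```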

Read more
Statistical Finance

A Time Series Analysis-Based Stock Price Prediction Using Machine Learning and Deep Learning Models

Prediction of the future movement of stock prices has always been a challenging task for researchers. While advocates of the efficient market hypothesis (EMH) believe that it is impossible to design any predictive framework that can accurately predict the movement of stock prices, there is seminal work in the literature that has clearly demonstrated that the seemingly random movement patterns in the time series of a stock price can be predicted with a high level of accuracy. The design of such predictive models requires a choice of appropriate variables, the right transformation methods for the variables, and tuning of the parameters of the models. In this work, we present a very robust and accurate framework for stock price prediction that consists of an agglomeration of statistical, machine learning, and deep learning models. We use stock price data, collected at five-minute intervals, of a very well known company listed on the National Stock Exchange (NSE) of India. The granular data is aggregated into three slots per day, and the aggregated data is used for building and training the forecasting models. We contend that the agglomerative approach to model building, which uses a combination of statistical, machine learning, and deep learning approaches, can very effectively learn from the volatile and random movement patterns in stock price data. We build eight classification and eight regression models based on statistical and machine learning approaches. In addition to these models, a deep learning regression model using a long short-term memory (LSTM) network is also built. Extensive results are presented on the performance of these models, and the results are critically analyzed.
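
As a hedged sketch of the deep learning component, the snippet below fits a small LSTM regressor that maps a window of past aggregated prices to the next value; the window length, network size, and synthetic price series are illustrative assumptions rather than the paper's configuration.

```python
# Sketch of an LSTM regressor on a windowed price series (assumed setup):
# window size, network width, and the synthetic data are illustrative only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(0.0, 1.0, 600)) + 100.0   # stand-in price series

window = 9                       # e.g. three slots per day over three days
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]           # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("last prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```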

Read more
Statistical Finance

A Weight-based Information Filtration Algorithm for Stock-Correlation Networks

Several algorithms have been proposed to filter the information in a complete graph of correlations across stocks in order to build a stock-correlation network. Among them, the planar maximally filtered graph (PMFG) algorithm uses 3n−6 edges to build a graph whose features include a high frequency of small cliques and a good clustering of stocks. We propose a new algorithm, which we call proportional degree (PD), to filter the information in the complete graph of normalised mutual information (NMI) across stocks. Our results show that the PD algorithm produces a network with better homogeneity of cliques, with respect to economic sectoral classification, than its PMFG counterpart. We also show that the partition of the PD network obtained through normalised spectral clustering (NSC) agrees better with the NSC of the complete graph than the corresponding partition obtained from the PMFG. Finally, we show that the clusters in the PD network are more robust to the removal of random sets of edges than those in the PMFG network.
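
To make the ingredients concrete, the sketch below builds a complete graph weighted by normalised mutual information between discretised returns and then applies a simple spanning-tree filtration; the spanning tree merely stands in for PMFG/PD, whose exact construction rules are given in the paper, and the bin count and synthetic returns are assumptions.

```python
# Sketch: complete NMI graph across stocks plus a simple filtration. The
# maximum spanning tree is a stand-in for PMFG/PD; returns are synthetic.
import numpy as np
import networkx as nx
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(3)
n_stocks, n_days = 20, 500
returns = rng.normal(size=(n_days, n_stocks))

def discretise(x, bins=10):
    # Map a continuous return series to quantile-bin labels for NMI.
    return np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))

G = nx.complete_graph(n_stocks)
for i, j in G.edges():
    nmi = normalized_mutual_info_score(discretise(returns[:, i]),
                                       discretise(returns[:, j]))
    G[i][j]["weight"] = nmi

# Keep the strongest tree of NMI links as a simple information filtration.
T = nx.maximum_spanning_tree(G, weight="weight")
print(T.number_of_nodes(), "nodes,", T.number_of_edges(), "edges retained")
```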

Read more
Statistical Finance

A bootstrap test to detect prominent Granger-causalities across frequencies

Granger-causality in the frequency domain is an emerging tool for analyzing the causal relationship between two time series. We propose a bootstrap test on unconditional and conditional Granger-causality spectra, as well as on their difference, to detect causality cycles that are particularly prominent in relative terms. In particular, we consider a stochastic process derived by applying the stationary bootstrap independently to the original series. Our null hypothesis is that each causality, or causality difference, is equal to the median across frequencies computed on that process. In this way, we are able to disambiguate causalities which depart significantly from the median obtained when the causality structure is ignored. Our test shows power one as the process tends to non-stationarity, and is thus more conservative than parametric alternatives. As an example, we apply our approach to infer the relationship between the money stock and GDP in the Euro Area, considering inflation, unemployment, and interest rates as conditioning variables. We find that during the period 1999-2017 the money stock aggregate M1 had a significant impact on economic output at all frequencies, while the opposite relationship is significant only at high frequencies.
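
The sketch below illustrates the resampling ingredient, a stationary bootstrap applied independently to each series, with a plain time-domain Granger F-test standing in for the frequency-domain spectra used in the paper; the block length, lag order, and toy data are assumptions.

```python
# Sketch of the resampling ingredient: the stationary bootstrap (geometric
# block lengths, circular wrapping) applied independently to each series.
# A time-domain Granger F-test is a simplified stand-in for causality spectra.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def stationary_bootstrap(x, mean_block=20, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n, out = len(x), np.empty(len(x))
    i = 0
    while i < n:
        start = rng.integers(n)
        length = min(rng.geometric(1.0 / mean_block), n - i)
        idx = (start + np.arange(length)) % n     # wrap around circularly
        out[i:i + length] = x[idx]
        i += length
    return out

rng = np.random.default_rng(0)
y = rng.normal(size=300)
x = 0.5 * np.roll(y, 1) + rng.normal(scale=0.5, size=300)  # toy: y helps predict x

# Resampling each series independently destroys the causal link, which is the
# point: the statistics below characterise the null distribution.
stats = []
for _ in range(100):
    data = np.column_stack([stationary_bootstrap(x, rng=rng),
                            stationary_bootstrap(y, rng=rng)])
    res = grangercausalitytests(data, maxlag=2, verbose=False)
    stats.append(res[2][0]["ssr_ftest"][0])
print("bootstrap median F-statistic:", np.median(stats))
```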

Read more
Statistical Finance

A changepoint approach for the identification of financial extreme regimes

Inference on tails is usually performed by fitting an appropriate limiting distribution to the observations that exceed a fixed threshold. However, the choice of such a threshold is critical and can affect the inferential results. Extreme value mixture models have been defined to estimate the threshold using the full dataset and to give accurate tail estimates. Such models assume that the tail behavior is constant across all observations. However, the extreme behavior of financial returns often changes considerably over time, and such changes are triggered by sudden market shocks. Here we extend the extreme value mixture model class to formally take into account distributional extreme changepoints, by allowing for the presence of regime-dependent parameters modelling the tail of the distribution. This extension formally uses the full dataset to estimate both the thresholds and the extreme changepoint locations, giving uncertainty measures for both quantities. Estimation of functions of interest in extreme value analyses is performed via MCMC algorithms. Our approach is evaluated through a series of simulations, applied to real data sets, and assessed against competing approaches. The evidence demonstrates that the inclusion of different extreme regimes outperforms both static and dynamic competing approaches in financial applications.
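
For orientation, the sketch below shows the classical building block, a generalized Pareto fit to exceedances over a fixed threshold; the paper's mixture model instead estimates thresholds and changepoints jointly via MCMC, and the synthetic losses here are purely illustrative.

```python
# Sketch of the classical starting point: a generalized Pareto distribution
# fitted to exceedances over a fixed, ad hoc threshold. Synthetic losses only.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(10)
losses = rng.standard_t(df=3, size=5000) * 0.01        # heavy-tailed stand-in

threshold = np.quantile(losses, 0.95)                  # fixed threshold choice
exceedances = losses[losses > threshold] - threshold

shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print(f"GPD shape={shape:.3f}, scale={scale:.4f}, "
      f"n_exceedances={len(exceedances)}")

# Loss quantile at the 99.9% level implied by the fitted tail.
p_exceed = len(exceedances) / len(losses)
q = threshold + genpareto.ppf(1 - 0.001 / p_exceed, shape, loc=0.0, scale=scale)
print("implied 99.9% loss quantile:", round(float(q), 4))
```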

Read more
