Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhengjun Zhang is active.

Publication


Featured research published by Zhengjun Zhang.


Journal of Business & Economic Statistics | 2018

Max-Linear Competing Factor Models

Qiurong Cui; Zhengjun Zhang

Models incorporating “latent” variables have become commonplace in the financial, social, and behavioral sciences. The factor model, the most popular latent variable model, explains the continuous observed variables through a smaller set of latent variables (factors) via linear relationships. However, complex data often simultaneously display asymmetric dependence, asymptotic dependence, and positive (negative) dependence between random variables, features that linearity, Gaussian distributions, and many other extant distributions cannot model. This article proposes a nonlinear factor model that can capture the above-mentioned dependence features while retaining a simple factor structure. The random variables, marginally distributed as unit Fréchet distributions, are decomposed into max-linear functions of underlying Fréchet idiosyncratic risks, transformed from a Gaussian copula, and independent shared external Fréchet risks. By allowing the random variables to share underlying (latent) pervasive risks with random impact parameters, various dependence structures are created. This yields a promising new technique for generating families of distributions with simple interpretations. We derive the multivariate extreme value properties of the proposed model and investigate maximum composite likelihood methods for estimating the impact parameters of the latent risks. The estimators are shown to be consistent. The estimation schemes are illustrated on several sets of simulated data, where comparisons of performance are addressed. We employ a bootstrap method to obtain standard errors in the real data analysis. An application to financial data reveals inherent dependencies that previous work has not disclosed and demonstrates the model's interpretability for real data. Supplementary materials for this article are available online.
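The abstract describes the model's building blocks: unit Fréchet margins, max-linear combinations of shared Fréchet factor risks, and Gaussian-copula-dependent idiosyncratic risks. The Python sketch below simulates one such max-linear construction under illustrative assumptions; the loading matrix A, idiosyncratic weights b, and correlation matrix corr_idio are made up, and the paper's competing-factor specification and composite-likelihood estimation are not reproduced.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def unit_frechet_from_gaussian_copula(corr, n, rng):
    """Draw n rows with unit-Frechet margins and Gaussian-copula dependence."""
    d = corr.shape[0]
    g = rng.multivariate_normal(np.zeros(d), corr, size=n)
    u = norm.cdf(g)               # uniform margins
    return -1.0 / np.log(u)       # unit-Frechet quantile transform

def max_linear_sample(A, b, corr_idio, n, rng):
    """
    X_i = max( b_i * W_i , max_k A[i, k] * Z_k ), where
    W are idiosyncratic unit-Frechet risks with Gaussian-copula dependence and
    Z are independent shared unit-Frechet factor risks.
    Rows are normalized so b_i + sum_k A[i, k] = 1, keeping each X_i unit Frechet.
    """
    d, K = A.shape
    scale = b + A.sum(axis=1)
    A, b = A / scale[:, None], b / scale
    W = unit_frechet_from_gaussian_copula(corr_idio, n, rng)   # (n, d)
    Z = -1.0 / np.log(rng.uniform(size=(n, K)))                # (n, K) independent unit Frechet
    shared = np.max(Z[:, None, :] * A[None, :, :], axis=2)     # (n, d) max-linear factor part
    return np.maximum(b[None, :] * W, shared)

# toy example: 3 observed variables loading on 2 shared extreme risks
A = np.array([[0.6, 0.1],
              [0.2, 0.5],
              [0.0, 0.7]])
b = np.array([0.3, 0.3, 0.3])
corr_idio = np.array([[1.0, 0.4, 0.2],
                      [0.4, 1.0, 0.4],
                      [0.2, 0.4, 1.0]])
X = max_linear_sample(A, b, corr_idio, n=100_000, rng=rng)
# empirical check of the unit-Frechet margin: P(X_1 <= 2) should be close to exp(-1/2)
print((X[:, 0] <= 2.0).mean(), np.exp(-1 / 2.0))

Because the loadings in each row sum to one, every component keeps a unit Fréchet margin while the shared factors Z induce asymptotic dependence across components.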


Applied Economics | 2018

Can cryptocurrencies be a safe haven: a tail risk perspective analysis

Wenjun Feng; Yiming Wang; Zhengjun Zhang

Cryptocurrencies are one of the most promising financial innovations of the last decade. Unlike major stock indices and the commodities gold and crude oil, cryptocurrencies exhibit characteristics of immature market assets, such as autocorrelated and non-stationary return series, higher volatility, and higher tail risks as measured by conditional Value at Risk (VaR) and conditional expected shortfall (ES). Using an extreme-value-theory-based method, we evaluate the extreme characteristics of seven representative cryptocurrencies during 8 August 2015 to 1 August 2017. We find that during the sub-period of 1 August 2016 to 1 August 2017, there are finite loss boundaries for most of the selected cryptocurrencies, similar to the commodities and different from the stock indices. Meanwhile, we find that left tail correlations are much stronger than right tail correlations among the cryptocurrencies, and that tail correlations increased after August 2016, suggesting high and growing systematic extreme risk. We also find the cryptocurrencies to be both left tail independent and cross tail independent of four selected stock indices, which implies a partial safe-haven function: they can serve as diversifiers for the stock market, as gold does, but not as a tail hedging tool like gold.
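The tail-risk quantities summarized here (left/right tail correlations, conditional VaR, ES) have simple empirical counterparts. The Python sketch below computes rank-based tail co-exceedance probabilities and historical VaR/ES on simulated heavy-tailed returns; the data and parameters are made up, and this is not the extreme-value-theory estimation used in the paper.

import numpy as np

def empirical_tail_dependence(x, y, q=0.05):
    """Empirical lower/upper tail co-exceedance at level q, a finite-sample
    proxy for left- and right-tail dependence between two return series."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1.0)   # pseudo-observations (ranks)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1.0)
    lower = np.mean((u <= q) & (v <= q)) / q          # joint extreme losses
    upper = np.mean((u >= 1 - q) & (v >= 1 - q)) / q  # joint extreme gains
    return lower, upper

def var_es(returns, alpha=0.95):
    """Historical VaR and expected shortfall of the loss (-return) distribution."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# toy simulated returns standing in for two cryptocurrencies sharing a heavy-tailed shock
rng = np.random.default_rng(1)
z = rng.standard_t(3, size=2000)
r1 = 0.02 * (0.7 * z + 0.3 * rng.standard_t(3, size=2000))
r2 = 0.02 * (0.6 * z + 0.4 * rng.standard_t(3, size=2000))
print(empirical_tail_dependence(r1, r2, q=0.05))
print(var_es(r1, alpha=0.95))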


Entropy | 2017

Robust-BD Estimation and Inference for General Partially Linear Models

Chunming Zhang; Zhengjun Zhang

The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This motivates a class of “robust-Bregman divergence (BD)” estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be only partially specified rather than fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining “robust-BD” estimators and establish the consistency and asymptotic normality of the “robust-BD” estimator of the parametric component β_o. For inference procedures on β_o in the GPLM, we show that the Wald-type test statistic W_n constructed from the “robust-BD” estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This contrasts with the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed “robust-BD” estimators and the robust Wald-type test in the presence of outlying observations.
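For readers unfamiliar with Bregman divergence (BD) losses, the short Python sketch below evaluates a generic Bregman divergence and checks that the quadratic loss and the Bernoulli deviance are both members of the family. The sign and centering conventions follow the general literature, and the bounded weighting that makes the estimator robust is not shown; this is background, not the paper's formulation.

import numpy as np

def bregman(y, mu, q, dq):
    """Bregman divergence D_q(y, mu) = q(y) - q(mu) - q'(mu) * (y - mu) for convex q."""
    return q(y) - q(mu) - dq(mu) * (y - mu)

# quadratic loss: q(t) = t^2 recovers the squared error (y - mu)^2
quad = lambda y, mu: bregman(y, mu, q=lambda t: t**2, dq=lambda t: 2 * t)

# Bernoulli deviance: q(t) = t*log(t) + (1-t)*log(1-t) recovers the KL-type deviance
def bern_q(t):
    return t * np.log(t) + (1 - t) * np.log(1 - t)
def bern_dq(t):
    return np.log(t) - np.log(1 - t)
bern = lambda y, mu: bregman(y, mu, q=bern_q, dq=bern_dq)

y, mu = 0.9, 0.6
print(quad(y, mu), (y - mu) ** 2)   # identical: 0.09
print(bern(y, mu))                  # Bernoulli KL-type deviance, positive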


Technometrics | 2016

Statistical Learning of Neuronal Functional Connectivity

Chunming Zhang; Yi Chai; Xiao Guo; Muhong Gao; David M. Devilbiss; Zhengjun Zhang

Identifying the network structure of a neuron ensemble beyond the standard measure of pairwise correlations is critical for understanding how information is transferred within such a neural population. However, spike train data pose significant challenges to conventional statistical methods due not only to their complexity, massive size, and large scale, but also to their high dimensionality. In this article, we propose a novel “structural information enhanced” (SIE) regularization method for estimating the conditional intensities under the generalized linear model (GLM) framework to better capture the functional connectivity among neurons. We study the consistency of parameter estimation of the proposed method. A new “accelerated full gradient update” algorithm is developed to efficiently handle the complex penalty in the SIE-GLM for large sparse datasets such as spike train data. Simulation results indicate that our proposed method outperforms existing approaches. An application of the proposed method to a real spike train dataset, obtained from the prelimbic region of the prefrontal cortex of adult male rats performing a T-maze based delayed-alternation task of working memory, provides some insight into the neuronal network in that region.
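As context for the GLM framework mentioned here, the sketch below fits a penalized Poisson GLM to simulated spike counts by proximal gradient descent. The plain lasso penalty, step size, and simulated design are illustrative assumptions; the paper's SIE penalty and accelerated full-gradient algorithm are more elaborate and are not reproduced.

import numpy as np

def fit_penalized_poisson_glm(X, y, lam=0.05, step=0.2, n_iter=5000):
    """
    Minimize (1/n) * sum( exp(X @ beta) - y * (X @ beta) ) + lam * ||beta||_1
    by proximal gradient descent (gradient step plus soft-thresholding).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        grad = X.T @ (np.exp(eta) - y) / n                 # gradient of Poisson negative log-likelihood
        beta = beta - step * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)  # prox of lam*||.||_1
    return beta

# toy data: firing of one neuron driven by lagged spike indicators of other neurons
rng = np.random.default_rng(2)
n, p = 5000, 6
X = rng.binomial(1, 0.2, size=(n, p)).astype(float)
beta_true = np.array([0.8, -0.5, 0.0, 0.0, 0.3, 0.0])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = fit_penalized_poisson_glm(X, y)
print(np.round(beta_hat, 2))   # roughly recovers the sparse pattern of beta_true, up to lasso shrinkage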


Annals of the Institute of Statistical Mathematics | 2008

The estimation of M4 processes with geometric moving patterns

Zhengjun Zhang


Finance Research Letters | 2017

Informed trading in the Bitcoin market

Wenjun Feng; Yiming Wang; Zhengjun Zhang


Statistica Sinica | 2014

Robust-BD estimation and inference for varying-dimensional general linear models

Chunming Zhang; Xiao Guo; Chen Cheng; Zhengjun Zhang


Statistics and Its Interface | 2011

An extension of max autoregressive models

Philippe Naveau; Zhengjun Zhang; Bin Zhu


arXiv: Probability | 2012

Intrinsically Weighted Means of Marked Point Processes

Alexander Malinowski; Martin Schlather; Zhengjun Zhang


Statistica Sinica | 2017

Random Threshold Driven Tail Dependence Measures with Application to Precipitation Data Analysis

Zhengjun Zhang; Chunming Zhang; Qiurong Cui

Collaboration


Dive into Zhengjun Zhang's collaborations.

Top Co-Authors

Chunming Zhang
University of Wisconsin-Madison

Yi Chai
University of Wisconsin-Madison

Xiao Guo
University of Science and Technology of China

Bin Zhu
University of Wisconsin-Madison

David M. Devilbiss
University of Wisconsin-Madison

Liang Peng
Georgia Institute of Technology