Econometrics of High-Dimensional Risk Networks | October 16 – 17, 2015, Chicago

     Francis X. Diebold     Eric Ghysels        Per Mykland       Lan Zhang  

The purpose of the workshop is to bring together researchers in the very active and burgeoning field of analyzing large-scale data pertaining to risk measures.  This includes, but is not limited to, studying common factors extracted from risk measures such as realized volatility, estimation of large-dimensional covariance matrices, factor analysis of liquidity/default risk measures, and network tools for measuring connectedness.


Friday, October 16

9:00 am     Registration

10:15 am   Opening remarks

10:30 am   Robert Engle   Stern, NYU

Long Run Risk Management: Scenario Generation for the Term Structure


In the low-volatility environment preceding the financial crisis, many firms increased their leverage and risk.  Arguably, investments in illiquid positions should have been evaluated with respect to long-horizon volatility and risk.  Risk managers should account for the risk that the risk can change.  Scenario analysis is one solution to this need.  A probability-based scenario generator is developed to examine the long-run risk of the US Treasury term structure.  It features a reduced-rank vector autoregression with Nelson-Siegel factors and GARCH-DCC multivariate disturbances.  Backtests motivate model improvements.  This is joint work with Emil Siriwardane, Harvard Business School.
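As an illustration of the term-structure factor step, the sketch below fits Nelson-Siegel level, slope, and curvature factors to a yield panel by cross-sectional least squares. The decay parameter `lam` and the maturities are hypothetical choices; the reduced-rank VAR and GARCH-DCC layers of the abstract are not modeled here.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Level/slope/curvature loadings; lam is the decay parameter
    (0.0609 is a conventional choice for maturities in months)."""
    tau = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

def fit_ns_factors(yields, maturities, lam=0.0609):
    """Cross-sectional least squares of each date's yields on the loadings,
    giving a 3-factor time series that a dynamic model could then describe."""
    B = nelson_siegel_loadings(maturities, lam)
    f, *_ = np.linalg.lstsq(B, np.asarray(yields, dtype=float).T, rcond=None)
    return f.T  # shape (T, 3)
```

With the factors in hand, scenario generation would simulate the factor dynamics forward and map simulated factors back to yields through the same loadings.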

11:15 am   Bryan Kelly   Chicago Booth School of Business

Firm Volatility in Granular Networks


We propose a model of firm volatility based on customer-supplier connectedness.  We assume that customers’ growth rate shocks influence the growth rates of their suppliers, that larger suppliers have more customers, and that the strength of a customer-supplier link depends on the size of the customer firm.  When the size distribution becomes more dispersed, economic activity is concentrated among a smaller number of firms; the typical supplier then becomes less diversified and its volatility increases.  The model is consistent with a set of new stylized facts.  At the macro level, the firm volatility distribution is driven by firm size dispersion, which explains common movements in firm-level total and residual volatility.  At the micro level, we show that the concentration of customer networks is an important determinant of firm-level volatility.  This is joint work with Hanno N. Lustig and Stijn Van Nieuwerburgh.
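The diversification mechanism can be illustrated with a minimal sketch: if a supplier's growth aggregates i.i.d. customer shocks with size-proportional weights, its volatility scales with the square root of the Herfindahl concentration of its customer sizes. The functional form below is an assumption for illustration only, not the paper's model.

```python
import numpy as np

def supplier_vol(customer_sizes, sigma=1.0):
    """Volatility of a supplier whose growth is a size-weighted average of
    i.i.d. customer shocks with common volatility sigma: the portfolio-style
    variance is sigma^2 times the Herfindahl index of customer shares."""
    s = np.asarray(customer_sizes, dtype=float)
    w = s / s.sum()                      # size-proportional link weights
    return sigma * np.sqrt(np.sum(w ** 2))
```

A more dispersed customer-size distribution raises the Herfindahl index and hence the supplier's volatility, which is the comparative static the abstract emphasizes.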

12:00 pm   Francis X. Diebold   University of Pennsylvania

On Estimation and Visualization in Ultra-High-Dimensional Dynamic Stochastic Econometrics


I will start by sketching a motivational application, using lasso methods to shrink, select, and characterize equity return volatility connectedness in the global banking network, 2003-2014.  I will then proceed to discuss a variety of related open questions in the estimation and interpretation of ultra-high-dimensional dynamic economic systems.

1:00 pm    Lunch

2:00 pm    Kim Christensen   CREATES, Denmark

Inference from high-frequency data: A subsampling approach


In this paper, we show how to estimate the asymptotic (conditional) covariance matrix that appears in many central limit theorems in high-frequency estimation of asset return volatility.  We provide a recipe for estimating this matrix by subsampling, an approach that computes rescaled copies of the original statistic from local stretches of high-frequency data and then studies their sampling variation.  We show that our estimator is consistent both in frictionless markets and in models with additive microstructure noise.  We derive its rate of convergence and determine an optimal rate for its tuning parameters (e.g., the number of subsamples).  Subsampling does not require an extra set of estimators to do inference, which renders it trivial to implement.  As a variance-covariance matrix estimator, it has the attractive feature of being positive semi-definite by construction.  Moreover, the subsampler is to some extent automatic, as it does not exploit explicit knowledge about the structure of the asymptotic covariance.  It therefore tends to adapt to the problem at hand and to be robust against misspecification of the noise process.  As such, this paper facilitates assessment of the sampling errors inherent in high-frequency estimation of volatility.  We highlight the finite-sample properties of the subsampler in a Monte Carlo study, while some initial empirical work demonstrates its use to draw feasible inference about volatility in financial markets.
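A minimal sketch of the subsampling recipe for the simplest case, realized variance in a frictionless market with a fixed block length m. All scalings below are my own simplification of the general recipe, not the paper's exact construction.

```python
import numpy as np

def realized_variance(r):
    """Sum of squared intraday returns."""
    return float(np.sum(np.asarray(r) ** 2))

def subsample_avar(r, m):
    """Subsampling estimate of the asymptotic variance of realized variance:
    split the n returns into blocks of length m, rescale each local realized
    variance to full-sample scale, and measure its dispersion around the
    full-sample statistic."""
    r = np.asarray(r, dtype=float)
    n = r.size
    K = n // m                                   # number of subsamples
    blocks = r[:K * m].reshape(K, m)
    local = (n / m) * np.sum(blocks ** 2, axis=1)  # rescaled local statistics
    return m * np.mean((local - realized_variance(r)) ** 2)
```

For constant volatility sigma the estimator should land near the asymptotic target 2*sigma**4 from the central limit theorem for realized variance.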

2:45 pm    Nikolaus Hautsch   University of Vienna

Efficient Iterative Maximum Likelihood Estimation


We propose a flexible algorithm to efficiently estimate models with complex log-likelihood functions.  Given a consistent but inefficient estimate of the parameter, the procedure yields a computationally tractable, consistent, and asymptotically efficient estimate.  The estimator’s asymptotic normality is established, and its asymptotic covariance is derived as a function of the number of iterations.  We derive a lower bound for the approximate number of iterations needed for the estimator to converge to the ordinary maximum likelihood estimator and suggest ways to accelerate the algorithm.  We illustrate how to employ the algorithm in different estimation problems.  Small-sample properties of the estimator are analyzed in a comprehensive simulation study.  In an empirical application, we use the proposed method to estimate the volatility connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a higher-dimensional and non-Gaussian setting.
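The iterative idea can be sketched on a toy model: starting from a consistent but inefficient estimate, repeated Fisher-scoring steps move it toward the maximum likelihood estimator. The Cauchy location model and the median starting value below are illustrative choices, not the paper's setting.

```python
import numpy as np

def cauchy_score(theta, x):
    """Score of the standard Cauchy location model at theta."""
    d = x - theta
    return np.sum(2.0 * d / (1.0 + d ** 2))

def fisher_scoring(x, theta0, n_iter=30):
    """Iterate Fisher-scoring updates from a consistent starting value
    (e.g. the sample median); the per-observation Fisher information of the
    Cauchy location model is 1/2, so the total information is n/2."""
    theta, total_info = theta0, len(x) / 2.0
    for _ in range(n_iter):
        theta = theta + cauchy_score(theta, x) / total_info
    return theta
```

Each step costs one score evaluation rather than a full likelihood maximization, which is the computational appeal when the log-likelihood is complex.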

3:30 pm    Break

4:00 pm    Zheng Tracy Ke   University of Chicago

Statistical limits and spectral methods for high-dimensional clustering


We consider a two-class clustering model: Xi = li μ + Zi for 1 ≤ i ≤ n, where li ∈ {−1, 1} are the unknown class labels, μ ∈ R^p is a sparse vector, and Zi ∼ N(0, Ip). In this model, we study three interconnected problems: (1) global testing, where H0: Xi = Zi, 1 ≤ i ≤ n, versus H1 being the two-class model above; (2) clustering (estimating the label vector l); (3) feature selection (recovering the support of μ). Under a Rare/Weak signal model, we establish fundamental statistical limits for the three goals and deliver sharp phase diagrams.  We propose a method, Important Features PCA (IF-PCA), for high-dimensional clustering. Classical PCA is a standard method for clustering, but it faces challenges when p is much larger than n. IF-PCA has two major innovations: (1) it uses chi-square screening to remove useless features before PCA; (2) it uses an adaptation of the recently developed Higher Criticism Thresholding (HCT) to choose the threshold for the chi-square screening. The method is fast in computation and tuning-free.  We investigate the Hamming clustering error of IF-PCA under the Rare/Weak signal model, where a more subtle phase transition phenomenon is revealed in a delicate range of signal strength. We apply IF-PCA to 10 gene microarray data sets. In several of these data sets, the method yields a much lower error rate than other popular clustering methods, including classical PCA, the k-means algorithm, and hierarchical clustering.
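A stripped-down sketch of the screen-then-PCA idea: keep only features whose chi-square screening statistic is large, then cluster by the sign of the first left singular vector. The fixed-quantile threshold (a normal approximation with multiplier `c`) is a stand-in for the paper's data-driven Higher Criticism threshold, so this is not IF-PCA itself, only its skeleton.

```python
import numpy as np

def screen_then_pca(X, c=2.0):
    """Two-class clustering for the model X_i = l_i mu + Z_i. Features are
    kept when their column sum of squares exceeds an approximate chi-square
    quantile (n + c*sqrt(2n)); PCA is then run on the retained columns and
    labels are read off the sign of the leading left singular vector."""
    n, p = X.shape
    stats = np.sum(X ** 2, axis=0)            # ~ chi-square with n df under H0
    keep = stats > n + c * np.sqrt(2.0 * n)
    if not keep.any():                        # fall back to the strongest feature
        keep = stats >= stats.max()
    U, _, _ = np.linalg.svd(X[:, keep], full_matrices=False)
    return np.where(U[:, 0] >= 0, 1, -1)      # labels, up to a global sign flip
```

The screening step is what rescues PCA when p >> n: without it, the noise columns swamp the leading singular vector.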

4:45 pm    George Tauchen   Duke University

Jump Regressions


We develop econometric tools for studying jump dependence of two processes from high-frequency observations on a fixed time interval.  In this context, only segments of data around a few outlying observations are informative for the inference.  We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain.  The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities.  We further propose an optimal estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times.  We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter.  A higher-order asymptotic expansion for the optimal estimator further allows for finite-sample refinements.  In an empirical application, we use the developed inference techniques to test the stability (in time and jump size) of market jump betas.
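A rough sketch of the two ingredients, jump detection by truncation and volatility-based weighting. The threshold constant, window length, and bipower spot-variance proxy below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def local_variance(r, i, window=50):
    """Jump-robust (bipower-style) local diffusive variance per unit time
    around observation i, for returns r on a unit time interval."""
    lo, hi = max(0, i - window), min(len(r), i + window)
    a = np.abs(np.asarray(r[lo:hi], dtype=float))
    return (np.pi / 2.0) * np.mean(a[1:] * a[:-1]) * len(r)

def jump_beta(rx, ry, c=4.0):
    """Weighted linear jump regression: flag returns of the regressor that
    exceed c local standard deviations as jumps, then regress the matched
    response increments on them with inverse local-variance weights."""
    n = len(rx)
    num = den = 0.0
    for i in range(n):
        if rx[i] ** 2 > (c ** 2) * local_variance(rx, i) / n:
            w = 1.0 / max(local_variance(ry, i), 1e-12)
            num += w * rx[i] * ry[i]
            den += w * rx[i] ** 2
    return num / den if den > 0 else float("nan")
```

Only the handful of flagged increments enter the regression, mirroring the abstract's point that just a few outlying observations are informative.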

5:30 pm    Reception

Saturday, October 17

8:30 am     Registration

9:15 am     Xinghua Zheng   HKUST

On the inference about the spectral distribution of high-dimensional covariance matrix based on noisy observations – with applications to integrated covolatility matrix inference in the presence of microstructure noise


In practice, observations are often contaminated by noise, making the resulting sample covariance matrix an information-plus-noise-type covariance matrix.  Aiming to make inferences about the spectrum of the underlying true covariance matrix in this situation, we establish an asymptotic relationship that describes how the limiting spectral distribution of (true) sample covariance matrices depends on that of information-plus-noise-type sample covariance matrices.  As an application, we consider inference about the spectrum of integrated covolatility (ICV) matrices of high-dimensional diffusion processes based on high-frequency data with microstructure noise.  The (slightly modified) pre-averaging estimator is an information-plus-noise-type covariance matrix, and the aforementioned result, together with a (generalized) connection between the spectral distribution of true sample covariance matrices and that of the population covariance matrix, enables us to propose a two-step procedure to estimate the spectral distribution of ICV for a class of diffusion processes.  An alternative estimator is further proposed, which possesses two desirable properties: it eliminates the impact of microstructure noise, and its limiting spectral distribution depends only on that of the ICV through the standard Marcenko-Pastur equation.  Numerical studies demonstrate that our proposed methods can be used to estimate the spectrum of the underlying covariance matrix based on noisy observations.
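As background for why the spectrum must be corrected at all, the sketch below shows that even with an identity population covariance, sample eigenvalues spread over the Marcenko-Pastur support with edges (1 ± sqrt(gamma))^2 for aspect ratio gamma = p/n.

```python
import numpy as np

def mp_edges(gamma):
    """Support edges of the Marcenko-Pastur law for aspect ratio gamma = p/n
    (population covariance equal to the identity, gamma <= 1)."""
    return (1.0 - np.sqrt(gamma)) ** 2, (1.0 + np.sqrt(gamma)) ** 2

# Sample covariance of pure noise: its eigenvalues fan out over the whole
# Marcenko-Pastur interval rather than concentrating at the true value 1.
```

Recovering the population spectrum from such a spread-out sample spectrum, with noise layered on top, is exactly the inverse problem the talk addresses.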

10:00 am   Yazhen Wang   University of Wisconsin

Large Volatility Matrix Estimation with Factor-Based Diffusion Model for High-Frequency Financial data


In this paper, we focus on incorporating factor influence in asset price modeling and volatility matrix estimation.  We propose to model asset prices using a factor-based diffusion process.  The idea is that asset prices are governed by common factors and that assets with similar characteristics share the same association with the factors.  Under the proposed factor-based model, we develop an estimation scheme called “blocking and regularizing”, which addresses the challenges of volatility matrix estimation with high-frequency data.  The asymptotic properties of the proposed estimator are studied, and its finite-sample performance is tested in support of the theoretical results.

10:45 am   Break

11:15 am   Dacheng Xiu   Chicago Booth School of Business

Using Principal Component Analysis to Estimate a High Dimensional Factor Model with High-Frequency Data


This paper constructs an estimator for the number of common factors in a setting where both the sampling frequency and the number of variables increase.  Empirically, we document that the covariance matrix of a large portfolio of US equities is well represented by a low-rank common structure with a sparse residual matrix.  When employed for out-of-sample portfolio allocation, the proposed estimator largely outperforms the sample covariance estimator.  This is joint work with Yacine Ait-Sahalia.
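A generic sketch of eigenvalue-based factor counting: the eigenvalue-ratio rule below is a standard device in this literature and stands in for, rather than reproduces, the paper's estimator.

```python
import numpy as np

def num_factors(cov, kmax=10):
    """Eigenvalue-ratio estimate of the number of common factors: pick the k
    that maximizes lambda_k / lambda_{k+1} among the leading eigenvalues,
    exploiting the gap between pervasive-factor and residual eigenvalues."""
    vals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    ratios = vals[:kmax] / vals[1:kmax + 1]
    return int(np.argmax(ratios)) + 1
```

When the covariance is a low-rank common component plus a small residual, the ratio spikes at the true rank because the (k+1)-th eigenvalue drops to the residual scale.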

12:00 pm   Eric Ghysels   University of North Carolina

Are Main Street and Wall Street Driven by the Same Factors?


We study factor models in an unbalanced sampling-frequency setting, a commonly encountered situation with, for example, panels of financial and macroeconomic data.  This imbalance of sampling frequencies poses serious technical problems that existing methods have not been able to resolve.  We propose a new class of mixed-frequency approximate factor models that enables us to study the full spectrum of monthly Fama-French (FF) data combined with quarterly Stock-Watson (SW) panel data.  We derive the large-sample properties of the estimators for the new class of approximate factor models involving mixed-frequency data.  Using our new approximate factor model, we find a single common factor between the financial and macro data, as well as a number of factors specific to either the FF or the SW series.  Our factors are neither linear combinations of FF factors nor of SW factors.  This is joint work with Elena Andreou, Patrick Gagliardini, and Mirco Rubin.

1:00 pm     Lunch

2:00 pm    Mark Podolskij   CREATES, Denmark

Testing the maximal rank of the volatility process for continuous diffusions observed with noise


In this talk we present a test for the maximal rank of the volatility process in continuous diffusion models observed with noise.  Such models are typically applied in mathematical finance, where latent price processes are corrupted by microstructure noise at ultra-high frequencies.  Using high-frequency observations, we construct a test statistic for the maximal rank of the time-varying stochastic volatility process.  Our methodology is based on a combination of a matrix perturbation approach and pre-averaging.  We show the asymptotic mixed normality of the test statistic and obtain a consistent testing procedure.  This is joint work with Tobias Fissler.
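Pre-averaging itself is easy to sketch: overlapping weighted averages of noisy returns suppress the noise while retaining the diffusive signal. The estimator below, with the triangular kernel g(x) = min(x, 1-x) and the usual constants psi1 and psi2, is a textbook-style sketch of univariate integrated variance estimation, not the talk's rank test.

```python
import numpy as np

def preavg_iv(y, theta=1.0):
    """Pre-averaging estimator of integrated variance from noisy log prices y
    observed on a unit time interval: window length kn ~ theta*sqrt(n),
    overlapping weighted sums of returns, and a bias correction for the
    estimated noise variance."""
    dy = np.diff(np.asarray(y, dtype=float))
    n = dy.size
    kn = int(theta * np.sqrt(n))
    j = np.arange(1, kn)
    w = np.minimum(j / kn, 1.0 - j / kn)          # triangular kernel weights
    psi2 = np.sum(w ** 2) / kn
    dw = np.diff(np.concatenate(([0.0], w, [0.0])))
    psi1 = kn * np.sum(dw ** 2)
    bar = np.correlate(dy, w, mode="valid")       # overlapping pre-averages
    omega2 = np.sum(dy ** 2) / (2.0 * n)          # noise variance estimate
    N = bar.size
    return (np.sum(bar ** 2) - N * psi1 * omega2 / kn) / (N * psi2 * kn / n)
```

On noisy data the raw realized variance explodes with the noise, while the pre-averaged estimate stays near the true integrated variance.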

2:45 pm    Bas Werker   Tilburg University

Arbitrage Pricing Theory for Squared Returns


Recent research has documented the existence of common factors in individual assets’ idiosyncratic variances, or squared idiosyncratic returns.  We provide an Arbitrage Pricing Theory that leads to a linear factor structure for the prices of squared excess returns.  In this representation, both the factors at the return level and the factors in idiosyncratic variances appear.  Using standard option data to find the market price of squared excess returns, we document the relevance of both linear return factors and idiosyncratic variance factors for pricing.  This is joint work with Eric Renault and Thijs van der Heijden.

3:30 pm    Yingying Li   HKUST

Solving the High-dimensional Markowitz Optimization Problem: When Sparse Regression Meets Random Matrix Theory


To solve the high-dimensional Markowitz optimization problem, we propose a new approach that combines sparse regression with a random-matrix-theory-based estimate of the maximum expected return for a given risk level.  We prove that, under some sparsity assumptions on the underlying optimal portfolio, our estimated portfolio, the Response-estimated Sparse Regression Portfolio (ReSReP), asymptotically reaches the maximum expected return while satisfying the risk constraint.  To the best of our knowledge, this is the first time these two goals have been achieved simultaneously in the high-dimensional setting.  The superior properties of ReSReP are demonstrated via simulations and extensive empirical studies.  This is based on joint work with Mengmeng Ao and Xinghua Zheng.
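The sparse-regression building block can be sketched with a plain coordinate-descent lasso. This generic solver is a stand-in for the regression engine such an approach relies on, not the paper's ReSReP procedure.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent lasso for (1/2n)||y - Xw||^2 + lam*||w||_1:
    cycle through coordinates, soft-thresholding the partial correlation
    of each column with the current residual."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0) / n          # per-coordinate curvature
    resid = np.asarray(y, dtype=float).copy()
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * w[j]              # remove j's contribution
            rho = X[:, j] @ resid / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid -= X[:, j] * w[j]
    return w
```

In the Markowitz context the fitted sparse coefficient vector would play the role of the portfolio weights, with the penalty level tied to the risk constraint.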

4:15 pm    Closing remarks
