# Seminars for 2017-2018


October 19, 2017 – 11:00 am to noon

## Yingying Li

Departments of Finance and ISOM, Hong Kong University of Science and Technology

##### Approaching Mean-Variance Efficiency for Large Portfolios
This paper aims to solve the large-dimensional Markowitz optimization problem. Given any risk constraint level, we introduce a new approach for estimating the optimal portfolio, developed through a novel regression representation of the mean-variance optimization problem combined with high-dimensional sparse regression methods. Under a mild sparsity assumption, our solution asymptotically achieves mean-variance efficiency while effectively controlling the risk. To the best of our knowledge, this is the first time these two goals have been simultaneously achieved for large portfolios. The superior properties of our approach are demonstrated via comprehensive simulation and empirical studies.
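The core idea of the regression representation is that portfolio weights can be obtained as coefficients in a penalized regression of a target return on the panel of asset returns, with an l1 penalty inducing sparsity. The sketch below illustrates this general idea on simulated data; the data-generating process, the fixed target return, and the penalty level are all hypothetical choices for illustration, not the paper's estimator or tuning procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_obs, n_assets = 500, 200

# Simulated excess returns: one common factor plus idiosyncratic noise.
factor = rng.normal(0.3, 1.0, size=(n_obs, 1))
beta = rng.normal(1.0, 0.3, size=(1, n_assets))
returns = factor @ beta + rng.normal(0.0, 2.0, size=(n_obs, n_assets))

# Regression representation: regress a constant "target return" on the
# asset returns; the l1 penalty yields a sparse weight vector.
# alpha is a hypothetical tuning value, not a data-driven choice.
target = np.full(n_obs, 0.10)
model = Lasso(alpha=0.005, fit_intercept=False, max_iter=10000)
model.fit(returns, target)
weights = model.coef_

n_active = int(np.sum(weights != 0))
realized = returns @ weights
print(f"active positions: {n_active} of {n_assets}")
print(f"in-sample mean return: {realized.mean():.4f}")
```

Because the assets all load on a common factor and are therefore highly correlated, the lasso selects only a small subset of them, which is what makes the approach tractable when the number of assets is large.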

November 9, 2017 – 11:00 am to noon

## Peter Carr

Department of Finance and Risk Engineering, Tandon School of Engineering, New York University

##### Valgebra
We consider financial settings in which abstract algebra can be used to provide a unique arbitrage-free value for a derivative security. The algebraic structures that we consider include monoids, groups, and semi-fields. The derivative securities that we consider all have a single payoff which can be linear or non-linear. When a payoff function is linear in a set of asset prices, then in the absence of frictions, carrying costs, and arbitrage, it is well known that the value function is the same linear function of asset prices as the payoff. When the payoff function is nonlinear in a set of asset prices, then we choose an algebra for which the payoff becomes linear in that algebra. We then seek sufficient conditions on the dynamics of the underlying assets such that the value function is the same linear function of the asset prices as the payoff in this chosen algebra. Whenever this approach results in a unique arbitrage-free value for a derivative security, we say that the derivative security has been valued algebraically. We refer to this valuation methodology as valgebra.

December 7, 2017 – 11:00 am to noon

## Simone Lenzu

Department of Economics, The University of Chicago

##### Do Marginal Products Differ from User Costs? Micro-Level Evidence from Italian Firms

Using micro-data on firm-specific borrowing costs and wages, we demonstrate that distortions in firms’ employment and investment policies can be empirically measured using firm-level gaps between marginal revenue products and user costs (MRP-cost gaps). We estimate MRP-cost gaps for 4 million firm-year observations in Italy between 1997 and 2013, showing that the variation in these measures is closely related to the extent of credit market frictions and to the degree of labor market rigidities faced by individual firms. Using the estimated MRP-cost gaps, we propose a reallocation algorithm that helps us assess the scope of capital and labor misallocation in Italy, and its impact on aggregate output and Total Factor Productivity (TFP). We calculate that, holding constant the aggregate capital and labor endowments in the economy, the Italian corporate sector could produce between 3 and 4 percent more output by reallocating resources from over-endowed producers toward higher value users. The output losses from misallocation are larger during episodes of macro-financial instability, in non-manufacturing industries, and in geographical regions with less developed socio-economic institutions.

January 4, 2018 – 11:00 am to noon

## E.J. Reedy

Polsky Center for Entrepreneurship and Innovation, The University of Chicago

##### An Introduction to Entrepreneurial Resources at the University
The Polsky Center is an accelerator program based at the University of Chicago. This orientation is particularly recommended for faculty and students who wish to turn part of their research into a business.

January 11, 2018 – 11:30 am to 12:30 pm

## Victor DeMiguel

##### A Portfolio Perspective on the Multitude of Firm Characteristics
We investigate how many characteristics matter jointly for an investor who cares not only about average returns but also about portfolio risk, transaction costs, and out-of-sample performance. We find that only a small number of characteristics (six) are significant without transaction costs. With transaction costs, the number of significant characteristics increases to 15 because the trades in the underlying stocks required to rebalance different characteristics often net out. Thus, transaction costs increase the dimension of the cross section of stock returns because combining characteristics helps to reduce transaction costs. We also show that investors can improve out-of-sample performance net of transaction costs by exploiting a large set of characteristics instead of the small number considered in prominent asset-pricing models.

February 1, 2018 – 11:30 am to 12:30 pm

## Willem van Vliet

Department of Economics and Booth School of Business, University of Chicago

##### Estimating Bank Interconnectedness from Market Data

February 13, 2018 – 11:30 am to 12:30 pm

## Yacine Aït-Sahalia

Department of Economics and Bendheim Center for Finance, Princeton University

##### Closed-Form Implied Volatility Surfaces for Stochastic Volatility (with Chenxu Li and Chen Xu Li)
This paper explores the link between stochastic volatility models and implied volatility data. We develop a closed-form bivariate expansion of the shape characteristics of the implied volatility surface generated by a stochastic volatility model. This makes it possible to analyze the impact of the various parameters and/or structures of a stochastic volatility model on the implied volatility surface. Conversely, we also construct an “implied stochastic volatility model” designed to fit by construction the implied volatility data.

February 15, 2018 – 11:30 am to 12:30 pm

## Markus Reiß

Institut für Mathematik, Humboldt-Universität zu Berlin

##### Volatility estimation under one-sided errors with applications to limit order books (joint with M. Bibinger, M. Jirak)
For a semi-martingale X_t, which forms a stochastic boundary, a rate-optimal estimator for its quadratic variation \langle X, X \rangle_t is constructed based on observations in the vicinity of X_t. The problem is embedded in a Poisson point process framework. We derive n^{-1/3} as an optimal convergence rate in a high-frequency framework with n observations (in mean) and we compare it with usual microstructure noise models with estimation rate n^{-1/4}. We discuss a potential application for the estimation of the integrated squared volatility of an efficient price process X_t from intra-day order book quotes.

February 22, 2018 – 11:30 am to 12:30 pm

## Markus Pelger

Department of Management Science and Engineering, Stanford University

##### Estimating Latent Asset-Pricing Factors (joint with Martin Lettau, UC Berkeley)
We develop an estimator for latent factors in a large-dimensional panel of financial data that can explain expected excess returns. Statistical factor analysis based on Principal Component Analysis (PCA) has problems identifying factors with a small variance that are important for asset pricing. We generalize PCA with a penalty term accounting for the pricing error in expected returns. Our estimator searches for factors with a high Sharpe-ratio that can explain both the expected return and covariance structure. We derive the statistical properties of the new estimator and show that our estimator can find asset-pricing factors, which cannot be detected with PCA, even if a large amount of data is available. We extend the estimation approach to general time-varying loadings by using a non-parametric projection on time-varying characteristics. Applying the approach to portfolio and stock data we find factors with Sharpe-ratios more than twice as large as those based on conventional PCA. Our factors accommodate a large set of anomalies better than notable four- and five-factor alternative models.
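The generalization of PCA described above amounts to applying PCA to the second-moment matrix of returns with the mean (pricing) term overweighted by a penalty parameter. The sketch below shows that construction on simulated data; the simulated factor structure and the penalty value are illustrative assumptions, not the authors' empirical specification.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 50

# One high-variance factor and one low-variance but priced (nonzero-mean)
# factor -- the kind standard PCA struggles to detect.
f1 = rng.normal(0.0, 2.0, size=T)
f2 = rng.normal(0.3, 0.3, size=T)
loadings = rng.normal(0.0, 1.0, size=(2, N))
X = np.outer(f1, loadings[0]) + np.outer(f2, loadings[1]) \
    + rng.normal(0.0, 1.0, size=(T, N))

def rp_pca(X, gamma, k):
    """Eigendecompose X'X/T + gamma * xbar xbar' and return the top-k
    eigenvectors as estimated loadings. gamma = -1 recovers standard
    PCA on the sample covariance matrix; larger gamma overweights the
    mean returns, penalizing pricing errors."""
    T = X.shape[0]
    xbar = X.mean(axis=0)
    M = X.T @ X / T + gamma * np.outer(xbar, xbar)
    eigval, eigvec = np.linalg.eigh(M)       # ascending order
    return eigvec[:, ::-1][:, :k]            # top-k eigenvectors

V_pca = rp_pca(X, gamma=-1.0, k=2)   # standard PCA
V_rp = rp_pca(X, gamma=50.0, k=2)    # mean-penalized variant
```

The only change relative to standard PCA is the rank-one mean term, which tilts the estimated factor space toward directions with high expected returns even when their variance contribution is small.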

March 8, 2018 – 11:30 am to 12:30 pm

## Noureddine El Karoui

Department of Statistics, University of California, Berkeley

##### Can we trust the bootstrap (for moderately difficult statistics problems)? Based on joint papers with Elizabeth Purdom, UC Berkeley.
The bootstrap is an important and widely used tool for answering inferential questions in Statistics. It is particularly helpful in many analytically difficult situations. I will discuss the performance of the bootstrap for simple inferential problems in moderate and high dimensions. For instance, one can ask whether the bootstrap provides valid confidence intervals for individual parameters in linear regression when the number of predictors is not infinitely small compared to the sample size. Similar questions related to Principal Component Analysis are also natural from a practical standpoint. We will see that the answer to these questions is generally negative.

Our assessment will be done through a mix of numerical and theoretical investigations. The theory will be developed under the assumption that the ratio of the number of predictors to the number of observations is kept fixed in our asymptotics. This is a way to keep the “statistical difficulty” of the problem fixed in the asymptotics. These asymptotic results tend to reflect the finite sample behavior of statistical methods better than traditional asymptotics.

Interestingly, bootstrap methods that are thought to perform equally well for inference (based on classical asymptotic arguments) will be shown to have very different behavior numerically and in our theoretical framework. For instance, some are very conservative and some are very anti-conservative, while they are equally “intuitive”. I will also discuss the behavior of other resampling plans, such as the jackknife, as well as ways to fix some of the problems we have identified.
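One mechanism behind such failures is easy to verify numerically: when the ratio p/n is held fixed, OLS residuals are systematically shrunk relative to the true errors, with E[‖resid‖²/n] = σ²(1 − p/n), so a naive residual bootstrap feeds the resampled data too little noise. The simulation below checks this shrinkage identity; it is a minimal illustration of the fixed-ratio regime discussed above, with the dimensions and seed chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma, reps = 200, 100, 1.0, 50   # p/n = 0.5 held fixed
ratios = []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(0.0, sigma, size=n)  # true coefficients all zero
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    ratios.append(np.mean(resid**2) / sigma**2)

# Theory: E[||resid||^2 / n] / sigma^2 = 1 - p/n = 0.5 here,
# far below 1 -- the residuals badly underestimate the noise level.
print(f"mean residual variance ratio: {np.mean(ratios):.3f} "
      f"(theory: {1 - p / n})")
```

In classical low-dimensional asymptotics this shrinkage factor tends to 1 and is harmless; with p/n fixed at 0.5 it never goes away, which is one reason intuitions built on classical arguments can fail in this regime.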

March 29, 2018 – 11:00 am to noon

## Anders Kock

Department of Economics, University of Oxford

##### Multi-armed Bandits and Optimal Sequential Treatment Allocation with General Welfare Measures
In a treatment allocation problem, the individuals to be treated often arrive gradually. However, most of the literature focuses on the case where a data set is given and treatment effects must be inferred from it. In this talk we consider the sequential setting by casting it as a bandit problem. Furthermore, we allow the decision maker to target maximizing smooth functionals of the distribution of treatment outcomes, thereby broadening the classic focus on expected effects. We provide near-optimal upper bounds on the regret of our policy, a variant of the Upper Confidence Bound (UCB) strategy, in settings with and without covariates.
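For readers unfamiliar with UCB strategies, the sketch below implements the classic UCB1 policy for the standard expected-reward objective; the talk's policy targets general welfare functionals and is more involved, so this is only the baseline it builds on. The arm means, horizon, and reward noise are illustrative assumptions.

```python
import numpy as np

def ucb1(means, horizon, rng):
    """Classic UCB1: pull each arm once, then always pull the arm
    maximizing (empirical mean + sqrt(2 ln t / n_pulls)).
    Returns cumulative expected regret against the best arm."""
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    regret = 0.0
    best = max(means)
    for t in range(horizon):
        if t < k:                       # initialization: try every arm
            arm = t
        else:
            bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        reward = rng.normal(means[arm], 1.0)   # noisy treatment outcome
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

rng = np.random.default_rng(3)
total_regret = ucb1([0.2, 0.5, 0.8], horizon=5000, rng=rng)
print(f"cumulative regret over 5000 rounds: {total_regret:.1f}")
```

The exploration bonus shrinks as an arm is pulled more often, so the policy concentrates pulls on the best treatment while still occasionally revisiting the others; its regret grows only logarithmically in the horizon, far below the linear regret of uniform allocation.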

April 2, 2018 – 4:30 pm to 5:30 pm – Eckhart Hall, Room 133, 5801 S. Ellis Ave, University of Chicago Campus
This seminar is organized jointly by the Stevanovich Center and the UC Department of Statistics

## Tong Zhang

Executive director of Tencent AI Lab and former professor at Rutgers University

##### Modern Stochastic Optimization Methods for Big Data Machine Learning
In classical optimization, one needs to calculate a full (deterministic) gradient of the objective function at each step, which can be extremely costly for modern applications of big data machine learning. A remedy to this problem is to approximate each full gradient with a random sample over the data. This approach reduces the computational cost at each step, but introduces statistical variance. In this talk, I will present some recent progress on applying variance reduction techniques, previously developed for statistical Monte Carlo methods, to this new problem setting. The resulting stochastic optimization methods are highly effective for practical big data problems in machine learning, and the new methods have strong theoretical guarantees that significantly improve upon the computational lower bounds of classical optimization algorithms.
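A representative variance reduction technique in this family is SVRG: each epoch computes one full gradient at a snapshot point, then takes many cheap stochastic steps whose noise is corrected using that snapshot. The sketch below applies it to least squares; the step size, epoch count, and simulated data are illustrative assumptions, not tuned recommendations.

```python
import numpy as np

def svrg(X, y, lr=0.02, epochs=50, rng=None):
    """SVRG for least squares loss (1/2n)||Xw - y||^2.
    Each epoch: one full gradient at a snapshot w_snap, then n
    variance-reduced stochastic steps."""
    n, d = X.shape
    w = np.zeros(d)
    rng = rng or np.random.default_rng(0)
    grad_i = lambda w, i: X[i] * (X[i] @ w - y[i])  # per-sample gradient
    for _ in range(epochs):
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n      # one costly pass
        for _ in range(n):                          # n cheap steps
            i = rng.integers(n)
            # Stochastic gradient with its snapshot error subtracted:
            # unbiased, and its variance vanishes as w -> w_snap -> w*.
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= lr * g
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = svrg(X, y, rng=rng)
print(np.round(w_hat, 2))
```

Unlike plain stochastic gradient descent, whose gradient noise forces a decaying step size, the correction term drives the variance to zero near the optimum, so SVRG converges linearly with a constant step size while still touching only one data point per inner step.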

April 5, 2018 – 11:00 am to noon

## Gustavo Schwenkler

Questrom School of Business, Boston University

##### Estimating the Dynamics of Consumption Growth
We estimate models of consumption growth that allow for long-run risks and disasters using data for a series of countries over a time span of 200 years. Our models are high dimensional, and include several latent factors and measurement errors. Estimation is made possible thanks to a novel likelihood-based methodology. Our estimates imply that the expected growth rate and the instantaneous volatility of consumption fluctuate over time in a highly persistent fashion. Our analysis suggests that this is primarily driven by significant time variation in the rate of disaster arrival. Across countries, our estimates favor models that allow for small and frequent disasters arriving at a volatile and persistent rate.

April 12, 2018 – 11:00 am to noon

## Olivier Scaillet

Geneva Finance Research Institute, University of Geneva, and Swiss Finance Institute

##### Time-Varying Risk Premia in Large International Equity Markets
We estimate international no-arbitrage factor models with time-varying factor exposures and risk premia at the individual stock level using a large unbalanced panel of 58,674 stocks in 46 countries over the 1985-2017 period. Multi-factor models with regional and country-specific factors perform well. Factor risk premia vary over time and across countries and are more volatile in emerging markets. The country-specific risk factor premia are important in emerging markets and to a lesser extent in developed markets. Both the four- and the five-factor models capture the factor structure in U.S.-denominated international stock returns.