Two recent strands of the literature on Structural Vector Autoregressions (SVARs) use higher moments for identification. One of them exploits independence and non-Gaussianity of the shocks; the other, stochastic volatility (heteroskedasticity). These approaches achieve point identification without imposing exclusion or sign restrictions. We review this work critically, and contrast its goals with the separate research program that has pushed for macroeconometrics to rely more heavily on credible economic restrictions and institutional knowledge, as is standard in microeconometric policy evaluation. Identification based on higher moments imposes substantively stronger assumptions on the shock process than standard second-order SVAR identification methods do. We recommend that these assumptions be tested in applied work. Even when the assumptions are not rejected, inference based on higher moments necessarily demands more from a finite sample than standard approaches do. Thus, in our view, weak identification issues should be given high priority by applied users.
We conduct a simulation study of Local Projection (LP) and Vector Autoregression (VAR) estimators of structural impulse responses across thousands of data generating processes (DGPs), designed to mimic the properties of the universe of U.S. macroeconomic data. Our analysis considers various structural identification schemes and several variants of LP and VAR estimators, and we pay particular attention to the role of the researcher's loss function. A clear bias-variance trade-off emerges: Because our DGPs are not exactly finite-order VAR models, LPs have lower bias than VAR estimators; however, the variance of LPs is substantially higher than that of VARs at intermediate or long horizons. Unless researchers are overwhelmingly concerned with bias, shrinkage via Bayesian VARs or penalized LPs is attractive.
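The bias-variance trade-off above can be illustrated with a toy Monte Carlo (a minimal sketch, not the paper's DGP universe; all numbers are illustrative stand-ins). The DGP is an MA(1), so no finite-order VAR is correctly specified: an iterated VAR(1) impulse response estimate is biased at longer horizons, while the direct LP regression is approximately unbiased but noisier.

```python
import numpy as np

rng = np.random.default_rng(0)
T, h, theta, reps = 400, 4, 0.9, 300
irf_true = 0.0  # an MA(1) has zero impulse response at horizon 4

est_var, est_lp = [], []
for _ in range(reps):
    e = rng.standard_normal(T + 1)
    y = e[1:] + theta * e[:-1]          # MA(1): not a finite-order VAR
    # VAR(1) estimator: iterate the one-step OLS coefficient out to horizon h
    rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    est_var.append(rho_hat ** h)
    # LP estimator: direct regression of y_{t+h} on y_t
    est_lp.append((y[:T - h] @ y[h:]) / (y[:T - h] @ y[:T - h]))

bias_var = np.mean(est_var) - irf_true  # noticeably biased away from zero
bias_lp = np.mean(est_lp) - irf_true    # near zero, but est_lp is more dispersed
```

In this sketch the LP estimates have smaller bias but a larger Monte Carlo standard deviation than the iterated VAR estimates, which is the trade-off the abstract describes.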
We construct robust empirical Bayes confidence intervals (EBCIs) in a normal means problem. The intervals are centered at the usual linear empirical Bayes estimator, but use a critical value accounting for shrinkage. Parametric EBCIs that assume a normal distribution for the means (Morris, 1983) may substantially undercover when this assumption is violated, and we derive a simple rule of thumb for gauging the potential coverage distortion. In contrast, our EBCIs control coverage regardless of the means distribution, while remaining close in length to the parametric EBCIs when the means are indeed Gaussian. If the means are treated as fixed, our EBCIs have an average coverage guarantee: the coverage probability is at least 1-α on average across the n EBCIs for every value of the means. Our empirical applications consider effects of U.S. neighborhoods on intergenerational mobility, and structural changes in a large dynamic factor model for the Eurozone.
Calibration, the practice of choosing the parameters of a structural model to match certain empirical moments, can be viewed as minimum distance estimation. Existing standard error formulas for such estimators require a consistent estimate of the correlation structure of the empirical moments, which is often unavailable in practice. Instead, the variances of the individual empirical moments are usually readily estimable. Using only these variances, we derive conservative standard errors and confidence intervals for the structural parameters that are valid even under the worst-case correlation structure. In the over-identified case, we show that the moment weighting scheme that minimizes the worst-case estimator variance amounts to a moment selection problem with a simple solution. Finally, we develop tests of over-identifying or parameter restrictions. We apply our methods empirically to a model of menu cost pricing for multi-product firms and to a heterogeneous agent New Keynesian model.
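The worst-case logic can be sketched in a few lines (a hedged illustration with made-up numbers, not the paper's applications). The delta-method sensitivity matrix maps errors in the empirical moments into errors in the estimated structural parameters; given only the moment variances, the correlation structure that maximizes the estimator variance yields a standard error equal to the absolute-sensitivity-weighted sum of the moment standard errors, which always weakly exceeds the standard error computed under independent moments.

```python
import numpy as np

# Hypothetical minimum-distance setup: 2 structural parameters, 4 moments.
# G, W, and sigma are stand-in numbers, not estimates from any real model.
G = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.8],
              [1.0, 0.3]])                  # Jacobian of moments w.r.t. parameters
W = np.eye(4)                               # moment weighting matrix
sigma = np.array([0.1, 0.2, 0.15, 0.05])    # SEs of the individual empirical moments

# Delta-method sensitivity of the parameter estimate to the empirical moments
Lambda = np.linalg.solve(G.T @ W @ G, G.T @ W)   # shape (2, 4)

# Worst-case SE over all valid correlation structures: sum_j |lambda_j| * sigma_j
se_worst = np.abs(Lambda) @ sigma
# For comparison: SE if the moments were mutually independent
se_indep = np.sqrt((Lambda ** 2) @ sigma ** 2)
```

The worst case is attained by a perfectly correlated (rank-one) correlation matrix whose signs align with the sensitivities, so `se_worst` is a valid, conservative standard error whatever the true correlations are.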
We develop a generally applicable full-information inference method for heterogeneous agent models, combining aggregate time series data and repeated cross sections of micro data. To handle unobserved aggregate state variables that affect cross-sectional distributions, we compute a numerically unbiased estimate of the model-implied likelihood function. Employing the likelihood estimate in a Markov Chain Monte Carlo algorithm, we obtain fully efficient and valid Bayesian inference. Evaluation of the micro part of the likelihood lends itself naturally to parallel computing. Numerical illustrations in models with heterogeneous households or firms demonstrate that the proposed full-information method substantially sharpens inference relative to using only macro data, and for some parameters micro data is essential for identification.
Macroeconomists increasingly use external sources of exogenous variation for causal inference. However, unless such external instruments (proxies) capture the underlying shock without measurement error, existing methods are silent on the importance of that shock for macroeconomic fluctuations. We show that, in a general moving average model with external instruments, variance decompositions for the instrumented shock are interval-identified, with informative bounds. Various additional restrictions guarantee point identification of both variance and historical decompositions. Unlike SVAR analysis, our methods do not require invertibility. Applied to U.S. data, they give a tight upper bound on the importance of monetary shocks for inflation dynamics.
Applied macroeconomists often compute confidence intervals for impulse responses using local projections, i.e., direct linear regressions of future outcomes on current covariates. This paper proves that local projection inference robustly handles two issues that commonly arise in applications: highly persistent data and the estimation of impulse responses at long horizons. We consider local projections that control for lags of the variables in the regression. We show that lag-augmented local projections with normal critical values are asymptotically valid uniformly over (i) both stationary and non-stationary data, and also over (ii) a wide range of response horizons. Moreover, lag augmentation obviates the need to correct standard errors for serial correlation in the regression residuals. Hence, local projection inference is arguably both simpler than previously thought and more robust than standard autoregressive inference, whose validity is known to depend sensitively on the persistence of the data and on the length of the horizon.
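A minimal simulation sketch of lag-augmented local projection inference (AR(1) data with illustrative parameters, not an empirical application): regress y_{t+h} on y_t while controlling for the extra lag y_{t-1}. Per the result above, standard errors without any serial-correlation (HAC) correction are then asymptotically valid; here a classical OLS standard error is used as a simple stand-in for the paper's heteroskedasticity-robust version.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, h = 2000, 0.9, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# Lag-augmented LP: regress y_{t+h} on y_t, controlling for one extra lag y_{t-1}
Y = y[h + 1:]                                                  # y_{t+h}
X = np.column_stack([np.ones(T - h - 1), y[1:T - h], y[:T - h - 1]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
irf_hat = beta[1]                       # impulse response estimate at horizon h

# Plain OLS s.e. (no HAC correction), as the lag augmentation makes serial
# correlation in the LP residuals innocuous for inference
resid = Y - X @ beta
se = np.sqrt(resid @ resid / (len(Y) - 3) * np.linalg.inv(X.T @ X)[1, 1])
```

In this DGP the true impulse response at horizon h is rho**h, and the estimate should be close to it even though the data are quite persistent.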
We prove that local projections (LPs) and Vector Autoregressions (VARs) estimate the same impulse responses. This nonparametric result only requires unrestricted lag structures. We discuss several implications: (i) LP and VAR estimators are not conceptually separate procedures; instead, they are simply two dimension reduction techniques with common estimand but different finite-sample properties. (ii) VAR-based structural identification - including short-run, long-run, or sign restrictions - can equivalently be performed using LPs, and vice versa. (iii) Structural estimation with an instrument (proxy) can be carried out by ordering the instrument first in a recursive VAR, even under non-invertibility. (iv) Linear VARs are as robust to non-linearities as linear LPs.
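Implication (iii) can be sketched in a toy simulation (hedged: the model, coefficients, and noise scales are stand-ins chosen for illustration). A noisy proxy for the structural shock is ordered first in a recursive VAR; the impulse responses to the first orthogonalized shock, normalized by the impact response, recover the relative impulse responses to the structural shock even though the proxy is contaminated with measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, H = 200_000, 0.8, 4
s = rng.standard_normal(T)                  # structural shock of interest
z = s + 0.5 * rng.standard_normal(T)        # noisy external instrument (proxy)
w = rng.standard_normal(T)                  # other disturbance to the outcome
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + s[t] + 0.3 * w[t]

# "Internal instrument": VAR(1) in (z, y) with the proxy z ordered first
X = np.column_stack([np.ones(T - 1), z[:-1], y[:-1]])
Y = np.column_stack([z[1:], y[1:]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A = B[1:].T                                 # 2x2 lag-coefficient matrix
U = Y - X @ B                               # reduced-form innovations
C = np.linalg.cholesky(np.cov(U.T))         # z first: first shock spans the proxy

# IRF of y to the first orthogonalized shock, normalized to a unit impact effect
irf = np.array([(np.linalg.matrix_power(A, h) @ C[:, 0])[1] for h in range(H + 1)])
irf = irf / irf[0]
```

With the true response of y being rho**h, the normalized recursive-VAR responses match it closely in a large sample, despite the measurement error in z.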
This paper empirically evaluates the potentially non-linear nexus between financial indicators and the distribution of future GDP growth, using a rich set of macroeconomic and financial variables covering 13 advanced economies. We evaluate the out-of-sample forecast performance of financial variables for GDP growth, including a fully real-time exercise based on a flexible non-parametric model. We also use a parametric model to estimate the moments of the time-varying distribution of GDP and evaluate their in-sample estimation uncertainty. Our overall conclusion is pessimistic: Moments other than the conditional mean are poorly estimated, and no predictors we consider provide robust and precise advance warnings of tail risks or indeed about any features of the GDP growth distribution other than the mean. In particular, financial variables contribute little to such distributional forecasts, beyond the information contained in real indicators.
We propose a 'dominant currency paradigm' with three key features: dominant currency pricing, pricing complementarities, and imported inputs in production. We test this paradigm using a new data set of bilateral price and volume indices for more than 2,500 country pairs that covers 91% of world trade, as well as detailed firm-product-country data for Colombian exports and imports. In strong support of the paradigm, we find that: (1) Non-commodities terms of trade are uncorrelated with exchange rates. (2) The dollar exchange rate quantitatively dominates the bilateral exchange rate in price pass-through and trade elasticity regressions, and this effect is increasing in the share of imports invoiced in dollars. (3) U.S. import volumes are significantly less sensitive to bilateral exchange rates, compared to other countries' imports. (4) A 1% U.S. dollar appreciation against all other currencies predicts a 0.6% decline within a year in the volume of total trade between countries in the rest of the world, controlling for the global business cycle. We characterize the transmission of, and spillovers from, monetary policy shocks in this environment.
We show empirically that the variation across country pairs in exchange rate pass-through and trade elasticity is meaningfully explained by the dollar's dominance as invoicing currency. We use a hierarchical Bayesian approach to directly and flexibly model pass-through heterogeneity conditional on the invoicing currency share. We estimate that the importer's country-level dollar invoicing share explains 15 percent of the overall variance across trading pairs in dollar exchange rate pass-through into bilateral prices.
I propose to estimate structural impulse responses from macroeconomic time series by doing Bayesian inference on the Structural Vector Moving Average representation of the data. This approach has two advantages over Structural Vector Autoregressions. First, it imposes prior information directly on the impulse responses in a flexible and transparent manner. Second, it can handle noninvertible impulse response functions, which are often encountered in applications. Rapid simulation of the posterior distribution of the impulse responses is possible using an algorithm that exploits the Whittle likelihood. The impulse responses are partially identified, and I derive the frequentist asymptotics of the Bayesian procedure to show which features of the prior information are updated by the data. The procedure is used to estimate the effects of technological news shocks on the U.S. business cycle.
Simultaneous confidence bands are versatile tools for visualizing estimation uncertainty for parameter vectors, such as impulse response functions. In linear models, it is known that the sup-t confidence band is narrower than commonly used alternatives, for example Bonferroni and projection bands. We show that the same ranking applies asymptotically even in general nonlinear models, such as VARs. Moreover, we provide further justification for the sup-t band by showing that it is the optimal default choice when the researcher does not know the audience's preferences. Complementing existing plug-in and bootstrap implementations, we propose a computationally convenient Bayesian sup-t band with exact finite-sample simultaneous credibility. In an application to SVAR impulse response function estimation, the sup-t band - which has been surprisingly overlooked in this setting - is at least 35% narrower than other off-the-shelf simultaneous bands.
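A plug-in sup-t band is straightforward to compute by simulation (a minimal sketch; the point estimate and covariance matrix below are hypothetical stand-ins for an estimated impulse response function and its sampling covariance). The critical value is the (1-α) quantile of the maximum absolute t-statistic under the estimated Gaussian limit distribution, which necessarily lies between the pointwise normal critical value and the Bonferroni one.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
k, alpha = 10, 0.10
# Hypothetical "IRF estimate" and covariance with AR(1)-style correlation
theta_hat = np.linspace(1.0, 0.1, k)
corr = 0.8 ** np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
sd = np.full(k, 0.2)
Sigma = corr * np.outer(sd, sd)

# Sup-t critical value: quantile of the max |t-stat| under N(0, Sigma)
draws = rng.multivariate_normal(np.zeros(k), Sigma, size=100_000)
q_supt = np.quantile(np.abs(draws / sd).max(axis=1), 1 - alpha)

z = NormalDist().inv_cdf
q_pointwise = z(1 - alpha / 2)          # pointwise band: too narrow jointly
q_bonf = z(1 - alpha / (2 * k))         # Bonferroni band: conservative
band = np.column_stack([theta_hat - q_supt * sd, theta_hat + q_supt * sd])
```

The stronger the correlation across horizons, the closer `q_supt` gets to the pointwise critical value, which is why the sup-t band is so much narrower than Bonferroni in typical IRF applications.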
This dissertation consists of three independent chapters on econometric methods for macroeconomic analysis. In the first chapter, I propose to estimate structural impulse response functions from macroeconomic time series by doing Bayesian inference on the Structural Vector Moving Average representation of the data. This approach has two advantages over Structural Vector Autoregression analysis: It imposes prior information directly on the impulse responses in a flexible and transparent manner, and it can handle noninvertible impulse response functions. The second chapter, which is coauthored with B. J. Bates, J. H. Stock, and M. W. Watson, considers the estimation of dynamic factor models when there is temporal instability in the factor loadings. We show that the principal components estimator is robust to empirically large amounts of instability. The robustness carries over to regressions based on estimated factors, but not to estimation of the number of factors. In the third chapter, I develop shrinkage methods for smoothing an estimated impulse response function. I propose a data-dependent criterion for selecting the degree of smoothing to optimally trade off bias and variance, and I devise novel shrinkage confidence sets with valid frequentist coverage.
We review the main identification strategies and empirical evidence on the role of expectations in the New Keynesian Phillips curve, paying particular attention to the issue of weak identification. Our goal is to provide a clear understanding of the role of expectations that integrates across the different papers and specifications in the literature. We discuss the properties of the various limited-information econometric methods used in the literature and provide explanations of why they produce conflicting results. Using a common dataset and a flexible empirical approach, we find that researchers are faced with substantial specification uncertainty, as different combinations of various a priori reasonable specification choices give rise to a vast set of point estimates. Moreover, given a specification, estimation is subject to considerable sampling uncertainty due to weak identification. We highlight the assumptions that seem to matter most for identification and the configuration of point estimates. We conclude that the literature has reached a limit on how much can be learned about the New Keynesian Phillips curve from aggregate macroeconomic time series. New identification approaches and new datasets are needed to reach an empirical consensus.
This paper considers the estimation of approximate dynamic factor models when there is temporal instability in the factor loadings. We characterize the type and magnitude of instabilities under which the principal components estimator of the factors is consistent and find that these instabilities can be larger than earlier theoretical calculations suggest. We also discuss implications of our results for the robustness of regressions based on the estimated factors and of estimates of the number of factors in the presence of parameter instability. Simulations calibrated to an empirical application indicate that instability in the factor loadings has a limited impact on estimation of the factor space and diffusion index forecasting, whereas estimation of the number of factors is more substantially affected.
When risk averse forecasters are presented with risk neutral proper scoring rules, they report probabilities whose ratios are shaded towards 1. If elicited probabilities are used as inputs to decision-making, naive elicitors may violate first-order stochastic dominance.
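The shading effect can be seen in a two-line numerical example (a sketch under stand-in assumptions: a Brier score paid as money and square-root utility; neither is prescribed by the abstract). The quadratic scoring rule is proper for a risk-neutral forecaster, but a risk-averse forecaster with true probability p = 0.8 optimally reports a probability strictly between 0.5 and p, so the reported odds ratio is shaded towards 1.

```python
import numpy as np

p = 0.8                        # forecaster's true probability of the event
u = np.sqrt                    # concave (risk-averse) utility over the payoff

# Quadratic (Brier) scoring rule, paid as money; proper under risk neutrality
r = np.linspace(0.01, 0.99, 9801)
payoff_if_event = 1 - (1 - r) ** 2
payoff_if_not = 1 - r ** 2
expected_utility = p * u(payoff_if_event) + (1 - p) * u(payoff_if_not)
r_star = r[np.argmax(expected_utility)]   # optimal report under risk aversion

odds_true = p / (1 - p)                   # 4.0
odds_reported = r_star / (1 - r_star)     # shaded towards 1, i.e., below 4.0
```

A risk-neutral forecaster (u = identity) would report r_star = p exactly; concavity pulls the report towards 1/2, which is the distortion the abstract describes.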
Danmarks Nationalbank regularly publishes an index of the development in the strength of the krone, the effective krone-rate index, and an index of the competitiveness of the Danish manufacturing sector, the real effective krone-rate index. Changing trade patterns make it necessary to revise the weights of the currencies in the index from time to time. The 2009 weights are presented below. The most recent revision of the weights is documented in Pedersen (2004).