Standard Error Estimation with Overlapping Samples
1. Introduction
In this post, I show how to compute corrected standard errors for a predictive regression with overlapping samples as in Hodrick (1992). First, in Section 2, I walk through a simple example which outlines the general empirical setting and illustrates why we need to correct the standard errors on the coefficient estimates when faced with overlapping samples. Then, in Section 3, I compute the estimator for the standard errors proposed in Hodrick (1992). I conclude in Section 4 with a numerical simulation to verify that the mathematics below in fact computes a sensible estimate of the standard deviation of the estimated slope coefficient.
2. An Illustrative Example
Suppose that you are a mutual fund manager who has to allocate capital amongst stocks, and you want to know which stocks will earn the highest returns over your investment horizon of the next several months. To start with, you might consider a one-month horizon and run a bunch of regressions of the form below, where the dependent variable is the log one-month excess return, the regressor is a current state variable and the last term is the residual:
(1)
For example, Fama and French (1988) pick the state variable to be the log price to dividend ratio, while Jegadeesh and Titman (1993) pick it to be a dummy variable for a stock’s inclusion in or exclusion from a momentum portfolio. We can vectorize the expression above to clean up the algebra and obtain the regression equation below:
(2)
However, just as Fama and French (1988) and Jegadeesh and Titman (1993) are interested in longer investment horizons, you could also set up the regression above at a multi-month horizon by making the adjustments:
(3)
Here, the expression for the multi-period residual comes from the null hypothesis that the state variable has no predictive power, assuming that each of the one-period shocks is white noise. Thus, if you run a new set of regressions at the multi-month investment horizon, you would have the vectorized regression equation:
(4)
However, while estimating this equation and trying to compute the standard errors for the slope coefficient, you notice something troubling: even though each of the one-period shocks is white noise, successive multi-period residuals contain overlapping shocks. Thus, while the period-by-period shocks are white noise, the regression residuals are autocorrelated in a non-trivial way:
(5)
Thus, in order to properly account for the variability of your coefficient estimate, you will have to compute standard errors for the regression that take this autocorrelation and conditional heteroskedasticity into account.
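To see the problem concretely, here is a minimal sketch (my own illustration, not the post's original code) that builds overlapping 12-month returns out of white-noise monthly shocks and checks the autocorrelation of the resulting residuals under the null of no predictability:

```python
import numpy as np

rng = np.random.default_rng(0)
T, h = 5000, 12
r1 = rng.standard_normal(T)                      # one-month shocks: white noise
rh = np.convolve(r1, np.ones(h), mode="valid")   # overlapping h-month returns
e = rh - rh.mean()                               # residuals under the null of no predictability
# autocorrelations of the overlapping residuals at lags 1, ..., h+1
acf = [np.corrcoef(e[:-k], e[k:])[0, 1] for k in range(1, h + 2)]
print(np.round(acf, 2))  # sizable out to lag h-1, roughly zero afterwards
```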
3. Hodrick (1992) Solution
Standard econometric theory tells us that we can estimate the regression coefficients using GMM, yielding the distributional result:
(6)
with the variance covariance matrix given by the expression:
(7)
Thus, the autocovariance of the regression residuals will be captured by the spectral density term. A natural way to account for this persistence in the errors would be to compute something like the average of the autocovariances:
(8)
However, this estimator for the spectral density has poor small-sample properties: sample autocovariance matrices are only guaranteed to be positive semi-definite, leading to large amounts of noise as your computer attempts to invert a nearly singular matrix. The insight in Hodrick (1992) is to use the stationarity of the time series to switch from summing autocovariances to summing variances:
(9)
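Here is a minimal sketch of how I read this estimator in the single-predictor case; the function and variable names (x, r1, h) are my own, and the construction follows my understanding of Hodrick's (1992) variance-based ("1B") spectral density rather than his exact code:

```python
import numpy as np

def hodrick_se(x, r1, h):
    """Hodrick (1992)-style standard errors for a regression of h-period
    cumulative returns on a constant and x_t, computed under the null of
    no predictability using one-period residuals."""
    T = len(r1)
    e1 = r1 - r1.mean()                     # one-period residuals under the null
    X = np.column_stack([np.ones(T), x])    # (constant, predictor)
    Z = X.T @ X / T
    # swap the sum over lagged residuals for a backward-rolling sum of the regressors
    S = np.zeros((2, 2))
    for t in range(h - 1, T - 1):
        xsum = X[t - h + 1:t + 1].sum(axis=0)   # sum_{i=0}^{h-1} of the regressors
        S += e1[t + 1] ** 2 * np.outer(xsum, xsum)
    S /= T
    V = np.linalg.inv(Z) @ S @ np.linalg.inv(Z) / T
    return np.sqrt(np.diag(V))              # SEs for (intercept, slope)
```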
4. Simulation Results
In this section, I conclude by verifying my derivations with a simulation (code). First, I compute a data set of monthly returns using a discretized version of an Ornstein-Uhlenbeck process:
(10)
where the shock is a standard normal variable. I use the annualized moments below, taken from Cochrane (2005):
(11)
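For reference, here is a minimal sketch of the discretization I have in mind; the parameter names (mu, theta, sigma) and the monthly time step are my own labels, with the actual moment values taken from Cochrane (2005) as above:

```python
import numpy as np

def simulate_ou(T, mu, theta, sigma, dt=1.0 / 12.0, seed=0):
    """Euler discretization of dX = theta * (mu - X) dt + sigma dW at a monthly step."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = mu
    for t in range(1, T):
        shock = rng.standard_normal()
        x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * np.sqrt(dt) * shock
    return x
```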
I also simulate a completely unrelated predictor process consisting of draws from a standard normal distribution. Thus, I check my computations under the null hypothesis that the predictor has no predictive power. I then run simulations in which I compute the data series above and estimate the regression:
(12)
and then report the distribution of the estimated slope coefficient, as well as the naive and Hodrick (1992) implied standard errors.
I report the mean values from the simulations below:
(13)
Co-Movement Between Bond and Stock Risk Premia
1. Introduction
I compute the covariance between the bond risk premium, as captured by the Cochrane and Piazzesi (2005) factor, and the stock risk premium, as captured by the logarithm of the price to dividend ratio used in, say, Shiller (2006). This covariance measure gives a rough view of whether or not the same risk factors are driving the excess returns in each market.
In section 2, I recreate the Cochrane and Piazzesi (2005) factor and verify its properties. In section 3, I compute the log price to dividend ratio from Robert Shiller’s online data. Finally, in section 4, I conclude by computing the covariances between the Cochrane and Piazzesi (2005) factor, the log price to dividend ratio and the excess returns on the S&P 500.
2. Bond Risk Premium
In this section, I recreate the Cochrane and Piazzesi (2005) bond risk factor. The raw data are one- through five-year zero coupon bond prices, which come from the CRSP Fama-Bliss risk-free bond files hosted on WRDS and run from January 1964 to December 2003. From these raw series, I convert the prices into log prices and then compute bond yields, forward rates and returns as described below, where the subscript represents the current date and the argument represents the bond maturity:
(1)
Next, I run a regression of the average excess return on the two- through five-year maturity bonds, measured over the one-year spot rate, on the full set of forward rates:
(2)
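The sketch below shows how I construct these objects from the Fama-Bliss log prices; the container names are my own, but the definitions of yields, forwards and excess returns follow Cochrane and Piazzesi (2005):

```python
import numpy as np

def cochrane_piazzesi(p):
    """p is a dict {n: monthly array of log zero-coupon prices}, n = 1, ..., 5."""
    y1 = -p[1]                                   # one-year log yield
    f = {1: y1}
    for n in range(2, 6):
        f[n] = p[n - 1] - p[n]                   # forward rate for year n
    rx = {}
    for n in range(2, 6):
        # one-year log holding-period return in excess of the one-year yield
        rx[n] = p[n - 1][12:] - p[n][:-12] - y1[:-12]
    rx_bar = np.mean([rx[n] for n in range(2, 6)], axis=0)
    F = np.column_stack([np.ones(len(rx_bar))] + [f[n][:-12] for n in range(1, 6)])
    gamma, *_ = np.linalg.lstsq(F, rx_bar, rcond=None)
    cp_factor = F @ gamma                        # fitted value = the CP bond risk factor
    return gamma, cp_factor
```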
Using monthly data from 1964 to 2003, I find the regression results below, which match up closely with those reported in Table 1 of Cochrane and Piazzesi (2005):
These results say that, for instance, an increase in one of the forward rates predicts an increase in the average excess return across the two- through five-year maturities, while an increase in another forward rate predicts a decrease in the same measure. The figure below plots the coefficients above along with a confidence interval.
I then use this regression estimate to compute the Cochrane and Piazzesi (2005) factor as the predicted value of the average excess returns:
(3)
I also compute the analogous measure of the bond risk premium implied by Gabaix (2011):
(4)
Below, I report the summary statistics of both bond risk premium measures. The table gives the average excess return per year on bonds at two- through five-year maturities in this monthly sample from January 1964 to December 2003, along with its standard deviation.
3. Stock Risk Premium
In this section, I compute the logarithm of the price to dividend ratio on the S&P 500 as a proxy for the risk premium in the US stock market. I use data from Robert Shiller’s website, which reports the real price level, real dividends and real earnings on the S&P 500 at a monthly frequency dating back to the 1800s. Using this data, I compute the annual cum dividend excess return on the S&P 500 as:
(5)
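A minimal sketch of how I read this construction, assuming monthly arrays P and D of real prices and (annual-rate) real dividends from Shiller's file and a one-year riskless rate y1; the exact treatment of intra-year dividends here is a simplification on my part:

```python
import numpy as np

def annual_excess_return(P, D, y1):
    """Log cum-dividend return on the S&P 500 over the next 12 months,
    in excess of the one-year riskless rate."""
    return np.log((P[12:] + D[12:]) / P[:-12]) - y1[:-12]
```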
This data delivers the summary statistics below for the monthly sample running from January 1964 to December 2003, corresponding to the Cochrane and Piazzesi (2005) interval. The excess returns are annualized, while the log price to dividend and log price to earnings ratios are reported on a month by month basis. The table gives the average excess return on the stock market over the next year and its volatility, as well as the average level of the log price to dividend ratio in any given month and its average month to month change.
Below I plot the annualized excess return on the S&P 500 along with the log price to dividend and log price to earnings ratios in both levels and changes.
I regress the excess return on the S&P 500 over the next year on the current log price to dividend ratio using monthly observations over the period from January 1964 to December 2003. I report these regression results below. The standard errors have not been corrected for overlapping samples. The point estimates indicate that an increase in the log price to dividend ratio in the current month is associated with a decrease in the returns on the S&P 500 over the next year.
I also regress the level and the change in the log price to dividend ratio on the level and the change in the annual real interest rate using monthly observations over the period from January 1964 to December 2003 and report the regression coefficients below. The standard errors have not been corrected for overlapping samples. The annual real interest rate each month is the difference between the Fama-Bliss riskless rate over that year and realized inflation, computed as the log change in the CPI, using data located here.
The raw correlations are given in the tables below, where the real riskless rate over the next year is computed using an implied inflation rate.
We can see from the plots below that this negative correlation between the log price to dividend ratio and the real riskless rate comes entirely from a spike during the 1970s.
4. Co-Movement
Finally, to get a sense of how much the risk premia in the bond and stock markets co-move, I report below the correlations between the Cochrane and Piazzesi (2005) bond risk factor, the Gabaix (2011) implied bond risk premium analogue, the log price to dividend and price to earnings ratios and the excess returns on the S&P 500, in both levels and changes:
Pearson-Wong Diffusions
1. Introduction
I introduce the concept of Pearson-Wong diffusions and then show how this mathematical object can be put to use in macro-finance.
Roughly speaking, Pearson-Wong diffusions link properties of stochastic processes to properties of cross-sectional distributions in the resulting population. For example, suppose you have in mind a stochastic process that governs the total sales of each firm in the US. If this stochastic process is a Pearson-Wong diffusion, you would also know the steady state cross-sectional distribution of firm sales. Conversely, if you observed a particular cross-sectional distribution of firm sales, assumed that all firms had a similar sales growth process, and assumed that the economy was in a steady state, then you could back out which Pearson-Wong diffusion was governing sales growth in the economy, up to an affine transformation.
First, in Section 2, I define the Pearson (1895) system of distributions. Next, in Section 3, I elaborate on work by Wong (1964) and show that a broad class of diffusion processes with polynomial volatility, called Pearson-Wong diffusions, leads to steady state distributions in the Pearson system. I show that these distributions are uniquely defined by their polynomial coefficients. In Section 4, I show how a broad set of common continuous time processes in macro-finance, such as Ornstein-Uhlenbeck processes and Feller square root processes, sits in this class of Pearson-Wong diffusions. Finally, I conclude in Section 5 by returning to the sales volume example above, taken from Gabaix (2011), and showing that variation in the cross-sectional distribution of firm sales volume implies variation in the functional form of the stochastic process governing each firm’s aggregate sales.
2. The Pearson System of Distributions
In this section I motivate and define the Pearson system of distributions. Karl Pearson developed the Pearson system of distributions as a taxonomy for understanding the skewed distributions he was finding in the biological data he was studying. For instance, Pearson had access to data on dimensions of crabs caught off the coast of Naples as illustrated in the figure below1. When studying the ratio of the length of the crabs to their breadth, he found a distribution that was non-normal, and almost seemed to be a mixture of normal distributions.
In order to manipulate these data analytically, Pearson then searched out a simple functional form that would capture the main features of these skewed distributions with only a handful of parameters. In particular, he was after a formulation that fit continuous, single peaked distributions over various ranges with varying levels of skewness and kurtosis. Through guess and check, he settled on the definition below2:
Definition (Pearson System): A continuous, univariate probability distribution is a member of the Pearson system if its density satisfies the differential equation below for some constant coefficients:
(1)
What are the features of this formulation? First, at any point that is not a root of the denominator polynomial, the derivative of the density is finite. Next, we see that the constant in the numerator characterizes the single peak of the distribution, since the derivative of the density vanishes there. What’s more, we know that the density has to be single peaked, since both it and its derivative must tend towards zero in the tails.
Heuristically speaking, we can think of the numerator as parameterizing the peak of the distribution and the quadratic polynomial in the denominator as characterizing the rate of descent from this peak in either direction. Importantly, the solution to this differential equation will depend on the character of the roots of the quadratic polynomial:
(2)
In his original 1895 paper, Pearson spent most of his time actually classifying different types of distributions based on the nature of their respective polynomials. Below is a short list of distributions that fall into the Pearson class:
Below, I walk through an example showing how the normal distribution fits into the Pearson system:
Example (Gaussian Distribution): When the linear and quadratic coefficients of the denominator polynomial are zero, we get the Gaussian PDF. First, note that, given these assumptions, the differential equation above can be written as:
(3)
Thus, by integrating up we see that the solution has the form:
(4)
If we choose the constant of integration such that the total probability mass over the real line is one, we recover the usual Gaussian normalizing constant.
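In symbols, using my own labels a for the location constant and b0 for the constant term of the denominator polynomial, the computation runs:

```latex
\frac{p'(x)}{p(x)} = -\frac{x - a}{b_0}
\quad\Longrightarrow\quad
\log p(x) = -\frac{(x - a)^2}{2 b_0} + \text{const}
\quad\Longrightarrow\quad
p(x) = C \exp\!\left( -\frac{(x - a)^2}{2 b_0} \right),
```

which is a Gaussian density with mean a and variance b0 once C is chosen so that the total mass is one.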
3. Main Results
In this section I define the class of Pearson-Wong diffusions and outline the mapping between the coefficients of the stochastic process and the parameters of the cross-sectional distribution. In the analysis below, I consider time homogeneous diffusion processes; i.e., the coefficients of the stochastic process can depend on time only through the current value of the process:
Definition (Time Homogeneous Diffusion): Let the drift and volatility coefficients be real valued functions that are Lipschitz on the state space interval. Then a process is a time-homogeneous diffusion if there exists a unique solution to the equation:
(5)
Next, I define the class of Pearson-Wong diffusions:
Definition (Pearson-Wong Diffusion): A Pearson-Wong diffusion is a stationary, time homogeneous solution to a stochastic differential equation of the form below, where the driving process is a Brownian motion and the triple of polynomial coefficients is such that the square root is well defined whenever the process is in its state space:
(6)
What sorts of processes fit inside this class of diffusions? For one example, consider an Ornstein-Uhlenbeck process, which arises when the linear and quadratic coefficients under the square root are set to zero so that the volatility is constant. In the next section, I show how more exotic processes also fit into this class of Pearson-Wong diffusions.
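To fix ideas, one common way to write such a process, using symbols of my own choosing (theta, mu and the polynomial coefficients b0, b1, b2), is:

```latex
dX_t = -\theta\,(X_t - \mu)\,dt
       + \sqrt{2\theta \left( b_2 X_t^2 + b_1 X_t + b_0 \right)}\, dW_t ,
```

so that the Ornstein-Uhlenbeck case corresponds to b2 = b1 = 0 with b0 = sigma^2 / (2 theta), which recovers dX_t = -theta (X_t - mu) dt + sigma dW_t.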
Now, given this definition, I need to derive a mapping between the values of the polynomial coefficients and the form of the resulting cross-sectional distribution. I do this in steps. First, I characterize the scale function and speed density of the diffusion. Next, I link the infinitesimal generator of the diffusion process to its scale function and speed density. Finally, I show that, given the mapping between infinitesimal generators and stochastic processes in the class of Pearson-Wong diffusions, if the process is ergodic then this mapping is unique.
Below, I define the scale function for a stochastic process, which captures how the probability of reaching different points in the domain varies with the starting point:
Definition (Scale Function): Let the process be a one-dimensional diffusion on an open interval. A scale function for this process is an increasing function such that, for any starting point lying between two levels in the interval, we have that:
(7)
where the two stopping times in this expression denote the first hitting times of the respective levels.
For instance, if the identity function is a scale function for the process, then we say that the process is in its natural scale. By definition, the process evaluated under its scale function is a local martingale, and the scale function satisfies the equation:
(8)
This is a linear first order differential equation with variable coefficients, leading to a standard solution:
(9)
where the lower limit of integration is a fixed point in the state space. Next, I define the speed measure, which captures the probability that the process will exceed a certain value in finite time; i.e., that it will ever reach that value:
Definition (Speed Measure): The speed measure is the measure such that the infinitesimal generator of the diffusion can be written as:
(10)
where we have that:
(11)
Thus, it is in fact the density of the cross-sectional distribution once we consider this probability in the long-run limit. This measure has a particularly nice functional form, which allows for easy analytical computations in the case of Pearson-Wong diffusions. The lemma below characterizes this formulation:
Lemma (Speed Measure): The speed density of a Pearson-Wong diffusion is given by the formula below, where the lower limit of integration is a fixed point in the state space:
(12)
where the remaining constants are pinned down by the polynomial coefficients of the diffusion.
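For a diffusion with drift mu(.) and volatility sigma(.) (my notation), the standard formulas behind this lemma are:

```latex
s'(x) = \exp\!\left( -\int_{x_0}^{x} \frac{2\,\mu(z)}{\sigma^2(z)}\, dz \right),
\qquad
m(x) = \frac{1}{\sigma^2(x)\, s'(x)}
     = \frac{1}{\sigma^2(x)} \exp\!\left( \int_{x_0}^{x} \frac{2\,\mu(z)}{\sigma^2(z)}\, dz \right),
```

and for an ergodic diffusion the stationary cross-sectional density is proportional to the speed density m(x).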
The proof of this lemma stems from the definition of the infinitesimal generator:
Proof (Speed Measure): On one hand, from the definition of the speed measure, we have that:
(13)
where the test function is some well behaved function of the state. On the other hand, from the definition of the infinitesimal generator, we have that:
(14)
Matching these two expressions then delivers the formula for the speed density stated in the lemma.
We have now marched through the first steps of the construction linking a stochastic process in the class of Pearson-Wong diffusions to its corresponding cross-sectional distribution. All I need to do now is flesh out the requirements for uniqueness. In order to attain this property, I need an additional assumption on the class of Pearson-Wong diffusions: ergodicity. Below, I give a formal definition of this assumption:
Definition (Ergodic Pearson-Wong Diffusion): Consider a Pearson-Wong diffusion that remains in an interval of the state space. The diffusion is ergodic if:
(15)
If this condition fails, then a boundary of the interval can be reached in finite time with positive probability.
Proposition (Pearson-Wong Mapping): For every ergodic diffusion in the Pearson-Wong class, parameterized by its vector of polynomial coefficients, there exists a unique invariant distribution in the Pearson system.
Ergodicity ensures that there are no eddies in the state space where a diffusion can get trapped, which would yield observationally equivalent cross-sectional distributions for different diffusion processes.
Proof (Pearson-Wong Mapping): From the lemma above, we know that the speed measure has the density:
(16)
where the lower limit of integration is a fixed point in the state space. What’s more, we know that:
(17)
Differentiating yields:
(18)
4. Examples
In this section I work through examples which illustrate how to fit the Vasicek process and a reflecting process that generates a cross-sectional distribution that satisfies Zipf’s law.
In a Vasicek model returns follow an Ornstein-Uhlenbeck process:
(19)
Thus, in the functional notation of the Pearson-Wong diffusion, the drift is linear and the term under the square root is a constant. Using the formulation above, we see that:
(20)
This is exactly the same formulation as the Ornstein-Uhlenbeck example from the previous section. Thus, we have that:
(21)
after solving for the normalizing constant via the condition that the density integrates to one.
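Concretely, applying the speed density formula to dX_t = -theta (X_t - mu) dt + sigma dW_t (again my own symbols) gives:

```latex
m(x) \;\propto\; \frac{1}{\sigma^2}
      \exp\!\left( \int_{x_0}^{x} \frac{-2\theta (z - \mu)}{\sigma^2}\, dz \right)
      \;\propto\; \exp\!\left( -\frac{\theta\,(x - \mu)^2}{\sigma^2} \right),
```

which, after normalization, is the density of a Gaussian with mean mu and variance sigma^2 / (2 theta).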
Next, consider a more complicated process, defined only on the positive half-line, whose cross-sectional distribution is a power law with a reflecting lower boundary. Specifically, suppose that you have a cross-sectional probability density defined as:
(22)
which is defined above the reflecting boundary. We see that the cumulative probability is proportional to a power of the level, so that Zipf’s law3 holds. However, note the restricted form of the numerator in the differential equation defining this density:
(23)
Thus, the power law cross-sectional distribution acts as a limiting case of the class of Pearson-Wong diffusions:
(24)
This solution works given the reflecting boundary since, for large enough values of the state, the second term on the right hand side dominates and the contribution of the first term is negligible.
5. Conclusions
In the text above, I outline the topic of Pearson-Wong diffusions and also relate these results in continuous time mathematics to topics in macro-finance.
I conclude by looking at a final application in a recent Econometrica article, Gabaix (2011), on the granular origins of aggregate macroeconomic fluctuations. The core idea of this paper is that, when the cross-sectional distribution of firm production is Gaussian or some other thin-tailed distribution, shocks to the largest firms won’t matter as the number of firms grows large. However, if firm production is distributed according to a fat-tailed distribution, then shocks to the production of the largest firms will still matter.4
Proposition 2 of Gabaix (2011) gives the central result. Namely, that if firm size is distributed according to a power law,
(25)
then, as the number of firms grows large, shocks to large firms won’t matter when the tail of the distribution is thin enough, while they will matter when the tail is fat enough.
Interestingly, the Pearson-Wong diffusion mathematics above gives us a new result on the implications of moving from the limiting power law case to the more general case. Away from the limit, there is an additional parameter to estimate. Thus, variation in how dispersed firms are in terms of their output reveals meaningful information about the structure of the stochastic process to which each firm’s output adheres.
- Source: R mixdist package.
- Background info comes from Ord (1985).
- Gabaix (1999) or Tao (2009).
- In practice, shocks to the largest couple of firms do seem to have an impact on even large economies. For example, in December 2004, Microsoft issued a one-time dividend which noticeably boosted the growth in average personal income in the United States that year.
The Predictability and Volatility of Returns in the Presence of Rare Disasters
1. Introduction
I characterize the relationship between the current variance premium and the excess returns over the coming months in an economy with variable rare disasters, as outlined in Gabaix (2011), using the parameter estimates given in Bollerslev, Tauchen and Zhou (2009). Hao Zhou has been very kind and posted his data1 for me to use in this analysis.
First, in section 2, I relate the equity premium conditional on no disasters occurring to the value of a put option on the market given that a disaster does occur. Then, in section 3, I relate the variance premium, i.e., the difference between the Black-Scholes implied variance and the realized variance, to this equity premium over the coming months. Finally, in section 4, I conclude by using the parameter estimates in Bollerslev et al. (2009) to compute the coefficient linking the equity premium to the variance premium that is predicted by Gabaix (2011).
2. The Put Option Premium
Intuitively, in an economy with rare disasters, the equity premium should be given by the sum of the premium due to normal times risk and the premium due to disaster risk:
(1)
where the first term is the premium due to Gaussian noise and the second is the premium due to disasters. If we looked at an economy in which there was no Gaussian noise, as in the main sections of Gabaix (2011), then the entire risk premium would be due to disaster risk. However, here I am going to study a world in which there exists normal times Gaussian noise. Specifically, consider an asset whose price evolves according to the rule,
(2)
where the additional term captures the effect of disasters. Its functional form is given in Gabaix (2011), but it will not be important here.
Consider the value of a finite-maturity European put option on this asset with a given strike price,
(3)
Proposition 3 in Gabaix (2011) tells us how to compute the value of this put option:
(4)
where the first term is the Black-Scholes value of a put option with the given volatility, initial price, maturity and interest rate. In the proposition below, I relate the equity premium due to disaster risk to the value of the disaster component of the put option:
Proposition (Disaster Risk Premium): For a put option with a sufficiently high strike price, the disaster risk premium can be written as,
(5)
This result reads that, given a strike price high enough that the fraction of dividend lost in a disaster is always less than the discounted strike price, the contribution of disaster risk to the put option value is equal to the disaster risk premium. More intuitively, the value of the disaster risk premium must equal the value of an asset that pays out in the event of a disaster.
Proof (Disaster Risk Premium): From Proposition 1 in Gabaix (2011), we have that in a world with no Gaussian noise,
(6)
Under the strike price condition above, we can drop the truncation operator. Thus, for a sufficiently high strike price, the disaster component of the put value is equal to the disaster risk premium.
3. What’s Vol Got to Do With It?
Next, I want to use the result above to link the difference between the implied variance given by the price of the European put option and the realized variance of the underlying asset to the excess rate of return on the market over the coming months. I denote this variance premium as,
(7)
I then want to be able to run the regression below, where the dependent variable is the annualized excess return on the S&P 500 and the final term is an error term,
(8)
Proposition (Return Predictability): Let the function below denote the density of the standard Gaussian distribution:
(9)
Then, given the strike price, we can relate the variance premium to the risk premium over the coming months via the relationship below,
(10)
with the coefficients,
(11)
This result says that, conditional on the realized variance, increasing an economy’s variance premium will increase the realized excess returns over the coming months. Note that the slope coefficient will be highly non-linear in the realized volatility. On the one hand, decreasing the realized volatility will always increase the value of the equity premium term. On the other hand, as the realized volatility gets very small, the argument of the Gaussian density gets large, corresponding to a very unlikely realization.
Proof (Return Predictability): Suppose that we observe the put price in the data. Then, we can approximate the implied volatility using the Black-Scholes sensitivity to volatility (i.e., the “vega”) via the relationship:
(12)
By definition, the starting price is normalized to one. What’s more, since the strike is near the money, the relevant Black-Scholes terms simplify.
Plugging in the expression for the put value given in Proposition 3 of Gabaix (2011) and working backwards yields the desired result:
(13)
I use the approximation:
(14)
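For concreteness, here is a minimal sketch of the first-order (vega) approximation used here, built from standard Black-Scholes formulas; the function names are my own:

```python
import numpy as np
from scipy.stats import norm

def bs_put(S, K, r, sigma, T):
    """Black-Scholes price of a European put."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def bs_vega(S, K, r, sigma, T):
    """Sensitivity of the put price to volatility."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    return S * norm.pdf(d1) * np.sqrt(T)

def implied_vol_vega_approx(put_price, S, K, r, sigma_realized, T):
    """First-order expansion of the put price around the realized volatility."""
    gap = put_price - bs_put(S, K, r, sigma_realized, T)
    return sigma_realized + gap / bs_vega(S, K, r, sigma_realized, T)
```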
This proposition yields a corollary linking the variance premium to innovations in asset resilience :
Corollary (Implied Volatility and Asset Resilience): Given that from Proposition 1 in Gabaix (2011), we know that,
(15)
we know that the variance premium must be linearly related to innovations in asset resilience with slope coefficient given below:
(16)
4. Matching Bollerslev et al. (2009)
In this section, I conclude by computing a model-derived estimate of the slope coefficient using the parameter values given in Bollerslev et al. (2009), which estimates the regression given in Equation (8) above.
In their original regression results in Table 2, Bollerslev et al. (2009) use a somewhat unconventional choice of units. Namely, the excess returns are annualized while the variance premium is computed as a monthly estimate. To map the estimates from the original paper over to model implied values, I convert all of the data to annualized log values as outlined below,
(17)
After converting the variables to natural units, I find the summary statistics listed below, which report the mean and standard deviation of each series,
The sample runs from January 1990 to December 2007. These estimates report the average amount per year by which the S&P 500 outperformed short-maturity T-bills, the volatility of this gap, and the average distance between the VIX implied variance and the realized variance of the S&P 500.
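A minimal sketch of the regression in Equation (8), assuming monthly arrays rx (log excess returns on the S&P 500) and vp (the variance premium, already in annualized units); the array names and the annualization convention are my own:

```python
import numpy as np

def predictive_regression(rx, vp, h):
    """Regress annualized excess returns over the next h months on the variance premium."""
    n = len(rx) - h
    y = np.array([rx[t + 1:t + 1 + h].sum() * (12.0 / h) for t in range(n)])
    X = np.column_stack([np.ones(n), vp[:n]])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    r2 = 1.0 - resid.var() / y.var()
    return b, r2   # (alpha, beta) and unadjusted R-squared
```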
Using these newly converted variables, I estimate the slope coefficient along with its t-statistic. Below, I report the point estimates and the adjusted R-squared for each horizon:
\begin{align*}
\begin{array}{l|ccccc}
& h=1 & h=3 & h=6 & h=9 & h=12 \\ \hline \hline
\alpha & -0.0024 & -0.023 & 0.0097 & 0.033 & 0.042 \\ \hline
\beta & 3.21 & 4.00 & 2.54 & 1.53 & 1.17 \\ \hline
\textit{Adj. } R^2 & 0.010 & 0.070 & 0.057 & 0.027 & 0.018
\end{array}
\end{align*}
Alternatively, using the data from Drechsler and Yaron (2011), I find the coefficient estimates below:
\begin{align*}
\begin{array}{l|ccccc}
& h=1 & h=3 & h=6 & h=9 & h=12 \\ \hline \hline
\alpha & -0.017 & -0.030 & 0.012 & 0.039 & 0.042 \\ \hline
\beta & 6.30 & 7.18 & 4.09 & 2.17 & 1.31 \\ \hline
\textit{Adj. } R^2 & 0.0098 & 0.054 & 0.035 & 0.011 & 0.018
\end{array}
\end{align*}
I want to compute the estimate implied by the variable rare disasters model given in Gabaix (2011), which requires estimates of a handful of parameters. I take the annualized parameter values from Gabaix (2011) and the estimate of the annualized realized variance from Bollerslev et al. (2009). Using these values, I find that,
(18)
This estimate implies that a given rise in the variance premium per year translates into a proportional rise in the equity premium per year.