Research Notebook

Geometric Interpretation of Noisy Rational Expectations Equilibrium

August 27, 2011 by Alex

1. Introduction

In this post, I solve a simple noisy rational expectations equilibrium model from Grossman and Stiglitz (1980) and then give a geometric interpretation of their result. First, in Section 2 I set up and solve a noisy rational expectations model. Then, in Section 3 I show how to display the 2 linear projections embedded in the model on a 3D figure. I found the figure to be a useful way of keeping track of the assumptions in more complicated settings.

2. Solution

Consider a world with a single period and only 1 asset with price p and aggregate demand x:

(1)   \begin{align*} p &= a + b \cdot x \\ &= a + b \cdot \left( z + \varepsilon \right) \end{align*}

The coefficient a denotes the average price, while the coefficient b captures the price’s responsiveness to changes in aggregate demand; i.e., b is the amount by which a restaurant would raise its prices if all of a sudden twice as many people started showing up each evening. Suppose that some traders know the true value of the asset v, while other traders trade randomly and demand an amount \varepsilon \sim \mathtt{N}(0,\sigma_{\varepsilon}^2). Suppose that the value of the asset is drawn from a distribution \mathtt{N}(\mu_v,\sigma_v^2). The coefficients a and b in the equation above are equilibrium objects which I solve for below, as is the informed agents’ demand z, which is also linear in the asset value with coefficients c and d:

(2)   \begin{align*} z = c + d \cdot v \end{align*}

There is a market maker who sets the equilibrium price in order to break even. Let \Pi denote the informed agent’s utility from trading:

(3)   \begin{align*} \Pi &= \max_{z} \left\{ \mathbb{E} \left[ (v - p) \cdot z \mid v \right] \right\} \\ &= \max_{z} \left\{ \mathbb{E} \left[ (v - a - b \cdot z - b \cdot \varepsilon) \cdot z \mid v \right] \right\} \end{align*}

Differentiating yields an expression for the optimal holdings of the informed traders given the true asset value:

(4)   \begin{align*} z &= - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v \end{align*}

Next, to solve for the coefficient values (a, b, c, d) as functions of the model primitives (\mu_v, \sigma_v, \sigma_{\varepsilon}), I enforce the break even condition for the market maker which demands that the price of the asset be equal to the expected value of the asset conditional on observing the aggregate asset demand:

(5)   \begin{align*} p &= \mathbb{E} \left[ v \mid x \right] \\ &= \mathbb{E} \left[ v \right] + \frac{\mathbb{C} \left[ x, v\right]}{\mathbb{V}\left[ x \right]} \cdot \left( x - \mathbb{E}[x] \right) \\ &= \mu_v + \frac{\mathbb{C} \left[ - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v + \varepsilon, v\right]}{\mathbb{V}\left[ - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v + \varepsilon \right]} \cdot \left( x - \left[ - \frac{a}{2 \cdot b} + \frac{\mu_v}{2 \cdot b} \right] \right) \\ &= \mu_v + \left( \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \right) \cdot \left( \frac{a}{2 \cdot b} - \frac{\mu_v}{2 \cdot b} \right) \\ &\qquad \qquad + \left( \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \right) \cdot x \end{align*}

Enforcing this condition leads to an expression for p that is linear in the aggregate asset demand x. Thus, I observe that b must equal the coefficient on x from the equation above and solve for b:

(6)   \begin{align*} b &= \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \\ &= \frac{\sigma_v}{2 \cdot \sigma_{\varepsilon}} \end{align*}
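As a sanity check on (6), one can simulate the model and verify that the market maker’s projection of v on aggregate demand x recovers a slope of b. The sketch below uses illustrative parameter values of my own choosing; the constant a = \mu_v follows from matching the constant terms in (5).

```python
import numpy as np

# Monte Carlo check of the equilibrium price impact b = sigma_v / (2 * sigma_eps).
# The parameter values below are illustrative, not taken from the text.
rng = np.random.default_rng(42)
mu_v, sigma_v, sigma_eps = 1.0, 0.2, 0.1
n = 200_000

b = sigma_v / (2 * sigma_eps)   # candidate equilibrium slope, eq. (6)
a = mu_v                        # constant implied by matching terms in eq. (5)

v = rng.normal(mu_v, sigma_v, n)       # asset values
eps = rng.normal(0.0, sigma_eps, n)    # noise trader demand
z = -a / (2 * b) + v / (2 * b)         # informed demand, eq. (4)
x = z + eps                            # aggregate demand

# The market maker's projection of v on x should recover the slope b.
slope = np.cov(x, v)[0, 1] / np.var(x)
print(slope, b)   # the two numbers should be close
```

With these values b = \sigma_v/(2 \sigma_{\varepsilon}) = 1, and the estimated projection slope lands within sampling error of it.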

We see that the market maker tends to change prices more in response to an aggregate demand shock when the asset value is more volatile and when the noise trader demand is less volatile–i.e., when asset value changes are more likely and when there is less demand noise for the informed traders to hide behind.

3. Geometric Interpretation

The plot below captures the essence of the intuition embedded in the noisy rational expectations model. The core idea is that the market maker cannot precisely distinguish a random fluctuation in demand from a shift in demand due to a shift in the asset value. Thus, when the market maker sets the price, he looks at the aggregate demand and shades the price a bit higher if he observes a high demand or a bit lower if he observes a surprisingly low demand schedule, but does not do so on a one for one schedule: 0 < b < 1. By using normally distributed random variables as well as linear pricing and informed demand rules, we can then get nice expressions for the coefficients of interest.

To read this plot, step into the shoes of the market maker and have a look at the blue side of the figure, which shows the relationship between the price (i.e., the market maker’s expectation of the value on the y-axis) and the aggregate demand (x-axis). The line p = a + b \cdot x shows the price you will set if you observe an aggregate demand of x. Note that if x = 0, you will set a price of a–the y-intercept.

Where does this pricing rule come from? When you observe an aggregate demand of x, your best guess for the informed demand schedule is z, as \mathbb{E} [ \varepsilon ] = 0. This best guess is displayed in the plot by the mapping through the z = u + v \cdot x line on the green floor of the figure over to the z-axis. On the red wall of the figure, we see that this choice of z has to map over to a realized value v that is a linear function of z and is equal to your choice of p. This is the double projection and fixed point problem that pins down the equilibrium values. For instance, note that at z=0, it must be the case that both the v and p functionals cross the y-axis at the same place since \mathbb{E}[\varepsilon]=0.

Filed Under: Uncategorized

Storing CRSP-COMPUSTAT Data Using MongoDB

August 22, 2011 by Alex

In this note, I show how to set up a local MongoDB database to house CRSP-COMPUSTAT data. I grew tired of having to use SAS to access these data on the WRDS server. The coding language is difficult to use, and the server is not particularly responsive. By using MongoDB, I can access the data in a variety of languages such as Python and R as well as parallelize queries to speed up any computations.

I downloaded monthly CRSP data and annual COMPUSTAT data from WRDS over the time period from 1950-2010. To set up the new database system, I downloaded MongoDB, PyMongo and Python. I am running Ubuntu 11.04 (“Natty Narwhal”), so I just selected “mongodb” and “python-pymongo” from the Synaptic Package Manager. Downloading all of the software took less than 5 minutes. I then used the short piece of Python code located here to populate a new database.
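The population step can be sketched as follows. Everything here is a minimal sketch under assumptions: the CSV field names (PERMNO, date, RET), database name, and collection name are hypothetical examples, and the actual script linked above may differ.

```python
import csv

def row_to_doc(row):
    """Convert one CRSP CSV row (a dict of strings) into a typed document.
    The field names PERMNO, date, and RET are hypothetical examples."""
    ret = row["RET"]
    return {
        "permno": int(row["PERMNO"]),
        "date": row["date"],
        # CRSP uses letter codes for missing returns; store those as None.
        "ret": float(ret) if ret not in ("", "B", "C") else None,
    }

def populate(csv_path, mongo_uri="mongodb://localhost:27017"):
    """Insert every row of the CSV into a local MongoDB collection."""
    import pymongo  # requires a running mongod instance
    coll = pymongo.MongoClient(mongo_uri)["crsp"]["monthly"]
    with open(csv_path, newline="") as f:
        coll.insert_many([row_to_doc(r) for r in csv.DictReader(f)])
```

Once populated, the same collection can be queried from Python, and queries over disjoint date ranges can be run in parallel workers.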

Filed Under: Uncategorized

Standard Error Estimation with Overlapping Samples

August 13, 2011 by Alex

1. Introduction

In this post, I show how to compute corrected standard errors for a predictive regression with overlapping samples as in Hodrick (1992). First, in Section 2, I walk through a simple example which outlines the general empirical setting and illustrates why we would need to correct the standard errors on the coefficient estimates when faced with overlapping samples. Then, in Section 3, I compute the estimator for the standard errors proposed in Hodrick (1992). I conclude in Section 4 with a numerical simulation to verify that the mathematics below in fact computes a sensible estimate of the standard deviation of \hat{\theta}_z.

2. An Illustrative Example

Suppose that you are a mutual fund manager who has to allocate capital amongst stocks and you want to know which stocks will earn the highest returns over the next H months where H stands for your investment horizon. To start with, you might consider H=1 and run a bunch of regressions with the form below where r_{t \to (t+1)} is the log 1 month excess return, z_t is a current state variable and \varepsilon_{t \to (t+1)} is the residual:

(1)   \begin{align*} r_{t \to (t+1)} &= \theta_1 + \theta_z \cdot z_t + \varepsilon_{t \to (t + 1)} \end{align*}

For example, Fama and French (1988) pick z_t to be the log price to dividend ratio while Jegadeesh and Titman (1993) pick z_t to be a dummy variable for a stock’s inclusion or exclusion from a momentum portfolio. We can vectorize the expression above to clean up the algebra and obtain the regression equation below:

(2)   \begin{align*} \underbrace{\begin{bmatrix} r_{1 \to 2} \\ r_{2 \to 3} \\ r_{3 \to 4} \\ \vdots \\ r_{(T-1) \to T} \end{bmatrix}}_{Y_{T-1}(1)} &= \underbrace{\begin{bmatrix} 1 & z_1 \\  1 & z_2 \\ 1 & z_3 \\ \vdots & \vdots \\ 1 & z_{T-1} \end{bmatrix}}_{X_{T-1}} \underbrace{\begin{pmatrix} \theta_1 \\ \theta_z \end{pmatrix}}_{\Theta(1)} + \underbrace{\begin{bmatrix} \varepsilon_{1 \to 2} \\ \varepsilon_{2 \to 3} \\ \varepsilon_{3 \to 4} \\ \vdots \\ \varepsilon_{(T-1) \to T} \end{bmatrix}}_{\mathcal{E}_{T-1}(1)} \end{align*}

However, just as Fama and French (1988) and Jegadeesh and Titman (1993) are interested in investment horizons of H>1, you could also set up the regression from above with H>1 by making the adjustments:

(3)   \begin{align*} r_{t \to (t+H)} &= \sum_{h=1}^H r_{(t+h-1) \to (t+h)} \\ \varepsilon_{t \to (t+H)} &= \sum_{h=1}^H \varepsilon_{(t+h-1) \to (t+h)} \end{align*}
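The H-period returns in (3) are just rolling sums of the one-month series; a minimal sketch, where the function name and array layout are my own convention:

```python
import numpy as np

def overlapping_returns(r, H):
    """Sum H consecutive one-period log returns, as in eq. (3):
    out[t] = r[t] + r[t+1] + ... + r[t+H-1], giving T - H + 1 values."""
    r = np.asarray(r, dtype=float)
    # Convolving with a vector of ones computes the rolling H-period sum.
    return np.convolve(r, np.ones(H), mode="valid")
```

For example, `overlapping_returns([0.01, 0.02, 0.03, 0.04], 2)` gives the three overlapping two-month returns 0.03, 0.05, 0.07; note how consecutive entries share a one-month shock.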

Here, the expression for \varepsilon_{t \to (t+H)} comes from the null hypothesis that z_t has no predictive power, assuming that each of the \varepsilon_{t \to (t+1)} terms is \mathtt{iid} white noise. Thus, if you run a new set of regressions at the H=2 month investment horizon, you would have the vectorized regression equation:

(4)   \begin{align*} Y_{T-2}(2) &= X_{T-2} \ \Theta(2) + \mathcal{E}_{T-2}(2) \end{align*}

However, while estimating this equation and trying to compute the standard errors for \theta_z, you notice something troubling: even though each of the \varepsilon_{t \to (t+1)} terms is distributed \mathtt{iid} and acts as white noise, the \varepsilon_{t \to (t+2)} and \varepsilon_{(t+1) \to (t+3)} terms each contain the \varepsilon_{(t+1) \to (t+2)} shock. Thus, while the step by step shocks are white noise, the regression residuals are autocorrelated in a non-trivial way:

(5)   \begin{align*} r_{t \to (t+2)} &= 2 \cdot \theta_1 + \varepsilon_{t \to (t + 1)}  + \varepsilon_{(t+1) \to (t + 2)} \end{align*}

Thus, in order to properly account for the variability of your estimate of \theta_z, you will have to compute standard errors for the regression that take this autocorrelation and conditional heteroskedasticity into account.

3. Hodrick (1992) Solution

Standard econometric theory tells us that we can estimate \Theta(H) using GMM yielding the distributional result:

(6)   \begin{align*} \hat{\Theta}(H) - \Theta(H) &\sim \mathtt{N}\left( 0, V(H) \right) \end{align*}

with the variance covariance matrix given by the expression:

(7)   \begin{align*} V(H) &= \frac{1}{T-H} \cdot \mathbb{E}[X_{T-H} X_{T-H}^{\top}]^{-1} S_{T-H} \mathbb{E}[X_{T-H} X_{T-H}^{\top}]^{-1} \end{align*}

Thus, the autocovariance of the regression residuals will be captured by the S_{T-H} or spectral density term. A natural way to account for this persistence in errors would be to compute S_{T-H} as something like the average of the autocovariances:

(8)   \begin{align*} S_{T-H} &= \sum_{j=-H+1}^{H-1} \mathbb{E} \left[ \left( \varepsilon_{t+H} \cdot \begin{bmatrix} 1 & z_t \end{bmatrix}^{\top} \right) \left( \varepsilon_{t+H-j} \cdot \begin{bmatrix} 1 & z_t \end{bmatrix}^{\top} \right)^{\top} \right] \end{align*}

However, this estimator for the spectral density has bad small sample properties, as autocovariance matrices are only guaranteed to be positive semi-definite, leading to large amounts of noise as your computer attempts to invert a nearly singular matrix. The insight in Hodrick (1992) is to use the stationarity of the time series Y_{T-H}(H) and X_{T-H} to switch from summing autocovariances to variances:

(9)   \begin{align*} S_{T-H} &= \mathbb{E} \left[ \varepsilon_{t+1}^2 \cdot \left( \sum_{h=0}^{H-1} \begin{bmatrix} 1 & z_{t - h} \end{bmatrix}^{\top} \right) \left( \sum_{h=0}^{H-1} \begin{bmatrix} 1 & z_{t - h} \end{bmatrix}^{\top} \right)^{\top} \right] \end{align*}
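The sample analogue of (9) replaces the expectation with a time average; a minimal sketch, assuming the one-period residuals and the regressor matrix are already in arrays (the function name and layout are my own):

```python
import numpy as np

def hodrick_S(eps, Z, H):
    """Sample analogue of the Hodrick (1992) spectral density in eq. (9).

    eps : length-T array of one-period residuals epsilon_{t -> t+1}
    Z   : (T, k) array of regressors, here rows [1, z_t]

    Instead of summing autocovariances, each squared one-period residual
    is weighted by the outer product of a rolling H-period sum of lagged
    regressors.
    """
    T, k = Z.shape
    S = np.zeros((k, k))
    for t in range(H - 1, T):
        q = Z[t - H + 1 : t + 1].sum(axis=0)   # sum_{h=0}^{H-1} Z[t-h]
        S += eps[t] ** 2 * np.outer(q, q)
    return S / (T - H + 1)
```

For H = 1 this collapses to the usual heteroskedasticity-robust average of \varepsilon_t^2 \cdot x_t x_t^{\top}, which is a quick way to check the code.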

4. Simulation Results

In this section, I conclude by verifying my derivations with a simulation (code). First, I compute a data set of 1 month returns using a discretized version of an Ornstein-Uhlenbeck process with \Delta t = 1/12:

(10)   \begin{align*} r_{t \to (t+1)} &= \theta \cdot (\mu - r_{(t-1) \to t}) \cdot \Delta t + \sigma \cdot \sqrt{\Delta t} \cdot \varsigma_{t \to (t+1)} \end{align*}

with \varsigma_{t \to (t+1)} an \mathtt{iid} standard normal variable. I use the annualized moments below taken from Cochrane (2005):

(11)   \begin{align*} \begin{array}{l|c} & \textit{Value} \\ \hline \hline \mu & 0.08 \\ \theta & 0.70 \\ \sigma & 0.16 \end{array} \end{align*}
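Following recursion (10) literally with the parameter values above, the simulated return series can be generated in a few lines; a minimal sketch (the sample length is my own choice):

```python
import numpy as np

# Simulate the return recursion in (10) using the annualized moments from
# the table above, with Delta t = 1/12.
rng = np.random.default_rng(0)
mu, theta, sigma, dt = 0.08, 0.70, 0.16, 1.0 / 12.0
T = 120_000   # simulated months; far longer than the data, for accuracy

r = np.empty(T)
r[0] = mu * dt
for t in range(1, T):
    r[t] = theta * (mu - r[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Monthly volatility should be roughly sigma * sqrt(dt), about 0.046.
print(r.std())
```

The monthly standard deviation of the simulated series lines up with \sigma \cdot \sqrt{\Delta t}, which annualizes back to the 16\% in the table.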

I also simulate a completely unrelated process z_t which represents T \mathtt{iid} draws from a standard normal distribution. Thus, I check my computations under the null hypothesis that z_t has no predictive power. I then run 500 simulations in which I compute the data series above, estimate the regression:

(12)   \begin{align*} Y_{T-6}(6) &= X_{T-6} \ \Theta(6) + \mathcal{E}_{T-6}(6) \end{align*}

and then report the distribution of \theta_z, as well as the naive and Hodrick (1992) implied standard errors:

Estimated coefficients for 500 simulated draws.

Estimated standard errors for 500 simulated draws using both the naive and Hodrick (1992) approaches.

I report the mean values from the simulations below:

(13)   \begin{align*} \begin{array}{l|c} \hat{\sigma}_{\mathtt{Naive}} & 0.00312 \\ \hline \hat{\sigma}_{\mathtt{H1992}} & 0.00326 \end{array} \end{align*}

Filed Under: Uncategorized

Co-Movement Between Bond and Stock Risk Premia

August 2, 2011 by Alex

1. Introduction

I compare the covariance between the bond risk premium as captured by the Cochrane and Piazzesi (2005) factor and the stock risk premium as captured by the logarithm of the price to dividend ratio as used in, say, Shiller (2006). This covariance measure gives a rough view of whether or not the same risk factors are driving the excess returns in each market.

In section 2, I recreate the Cochrane and Piazzesi (2005) factor and verify its properties. In section 3, I compute the log price to dividend ratio from Robert Shiller’s online data. Finally, in section 4, I conclude by computing the covariances between the Cochrane and Piazzesi (2005) factor, the log price to dividend ratio and the excess returns on the S&P 500.

2. Bond Risk Premium

In this section, I recreate the Cochrane and Piazzesi (2005) bond risk factor. The raw data are 1 through 5 year zero coupon bond prices which come from the CRSP Fama-Bliss risk free bond prices hosted on WRDS and run from January 1964 to December 2003. From these raw series, I convert each price to a log price p_t = \ln \left( P_t/100 \right) and then compute bond yields, forward rates and returns as described below, where the t subscript represents the current date and the \tau argument represents the bond maturity:

(1)   \begin{align*} y_t(\tau) \ &= \ - \dfrac{p_t(\tau)}{\tau} \\ f_t(\tau) \ &= \ p_t(\tau - 1) \ - \ p_t(\tau) \\ r_t(\tau) \ &= \ p_t(\tau - 1) \ - \ p_{t-12}(\tau) \end{align*}
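The three definitions in (1) vectorize naturally over maturities; a minimal sketch, where the indexing convention (p[0] = 0 for a just-matured bond, since \ln(100/100) = 0) is my own:

```python
import numpy as np

def yields_forwards_returns(p, p_lag12):
    """Compute yields, forwards, and annual log returns from log
    zero-coupon bond prices, following eq. (1).

    p, p_lag12 : arrays with p[tau] the log price of the tau-year bond
    at the current date t and at t minus 12 months; p[0] = 0 is the
    log price of a just-matured bond.
    """
    tau = np.arange(1, len(p))
    y = -p[1:] / tau            # y_t(tau) = -p_t(tau) / tau
    f = p[:-1] - p[1:]          # f_t(tau) = p_t(tau-1) - p_t(tau)
    r = p[:-1] - p_lag12[1:]    # r_t(tau) = p_t(tau-1) - p_{t-12}(tau)
    return y, f, r
```

Note that f_t(1) = y_t(1): the one-year forward rate is just the one-year spot rate.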

Next, I run a regression of the average excess returns for the 2 through 5 year maturity bonds over the 1 year spot rate:

(2)   \begin{align*} \overline{r}^x_{t+1} &= \begin{bmatrix} \gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \\ \gamma_5 \end{bmatrix}' \begin{pmatrix} 1 \\ y_t(1) \\ f_t(2) \\ f_t(3) \\ f_t(4) \\ f_t(5) \end{pmatrix} + \varepsilon_{t+1} \\ &= \Gamma' F_t + \varepsilon_{t+1} \end{align*}

Using monthly data from 1964-2003, I find the regression results below which match up closely with those reported in Table 1 of Cochrane and Piazzesi (2005):

    \begin{align*} \begin{array}{l|cc} & \textit{est} & \textit{s.e.} \\ \hline \hline \gamma_0 & -0.03 & 0.0056 \\ \gamma_1 & -2.06 & 0.255 \\ \gamma_2 & 0.59 & 0.523 \\ \gamma_3 & 2.92 & 0.440 \\ \gamma_4 & 0.90 & 0.323 \\ \gamma_5 & -1.95 & 0.267 \\ \hline R^2 & 0.34 \end{array} \end{align*}

These results say that, for instance, a 1\% increase in the 3 year forward rate predicts an increase in the average excess return over maturities from 2 to 5 years by 3\%, while a 1\% increase in the 5 year forward rate predicts a 2\% decrease in the same measure. The figure below plots the coefficients above along with a 95\% confidence interval.

This figure reports the estimates of gamma from the regression depicted above of average excess returns on bonds of maturities 2 through 5 on forward rates at maturities 1 to 5 as well as a constant corresponding to Panel A of Figure 1 in Cochrane and Piazzesi (2005).

I then use this regression estimate to compute the Cochrane and Piazzesi (2005) factor as the predicted value of the average excess returns:

(3)   \begin{align*} \textit{CP}_t &= \Gamma' F_t \end{align*}


This figure plots the level and first difference of the Cochrane and Piazzesi (2005) bond risk premium measure using monthly observations of annual returns from January 1964-December 2003 as well as the analogous bond risk premium measure implied in Gabaix (2011).

I also compute the analogous measure of the bond risk premium implied by Gabaix (2011):

(4)   \begin{align*} \textit{CP}_t^G &= \frac{-y_t(1) + 2 \cdot f_t(3) - f_t(5)}{4} \end{align*}
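Both risk premium measures are simple functions of the forward rates and the estimated regression; a minimal sketch of (3) and (4), where the function names and array layouts are my own convention:

```python
import numpy as np

def cp_factor(rx_bar, F):
    """Cochrane-Piazzesi factor, eq. (3): fitted values from regressing
    average excess returns rx_bar (length T) on the matrix F whose rows
    are [1, y_t(1), f_t(2), ..., f_t(5)]. Returns (CP_t, Gamma)."""
    gamma, *_ = np.linalg.lstsq(F, rx_bar, rcond=None)
    return F @ gamma, gamma

def cp_gabaix(y1, f3, f5):
    """Gabaix (2011) implied bond risk premium analogue, eq. (4)."""
    return (-y1 + 2.0 * f3 - f5) / 4.0
```

Since \textit{CP}_t is just the fitted value \Gamma' F_t, its sample mean equals the mean of the average excess returns, consistent with the 0.84\% reported below.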

Below I report the summary statistics of both bond risk premium measures. This table reads that the average excess return on bonds at maturities 2–5 years is 0.84\% per year in this monthly sample from January 1964 to December 2003 with a standard deviation of 2.4\%.

    \begin{align*} \begin{array}{l|cc} & \mathbb{E}[ \cdot ] & \mathbb{S}[ \cdot ] \\ \hline \hline \textit{CP}_t & 0.0084  & 0.024 \\ \textit{CP}_t^G & 0.0014 & 0.0027 \end{array} \end{align*}

3. Stock Risk Premium

In this section, I compute the logarithm of the price to dividend ratio on the S&P 500 as a proxy for the risk premium in the US stock market. I use data from Robert Shiller’s website which reports the real price level, real dividends and real earnings on the S&P 500 on a monthly frequency dating back to the 1800’s. Using this data, I compute the annual cum dividend excess return on the S&P 500 as:

(5)   \begin{align*} r_{t+12}^e &= \frac{P_{t+12} + D_{t+12}}{P_t} - 1 \end{align*}

These data deliver the summary statistics below for the monthly sample running from January 1964 to December 2003, corresponding to the Cochrane and Piazzesi (2005) interval. The excess returns are annualized while the log price to dividend and log price to earnings ratios, p_t - d_t and p_t - y_t respectively, are on a month by month basis. This table reads that the average excess return on the stock market over the next year was 6.6\% per year with a volatility of 16\%, while in any given month the average of the logarithm of the price to dividend ratio is 3.496 with an average month to month increase of 0.0013.

    \begin{align*} \begin{array}{l|cc} & \mathbb{E}[ \cdot ] & \mathbb{S}[ \cdot ] \\ \hline \hline r_{t+12}^e & 0.066  & 0.16 \\ p_t - d_t & 3.496 & 0.413 \\ p_t - y_t & 2.759 & 0.423 \\ \Delta (p_t - d_t) & 0.0013 & 0.0359 \\ \Delta (p_t - y_t) & 0.00039 & 0.041 \end{array} \end{align*}

Below I plot the annualized excess return on the S&P 500 along with the log price to dividend and log price to earnings ratios in both levels and changes.

This figure shows monthly observations of the annual excess returns on the S&P 500 along with the log price to dividend and price to earnings ratio from January 1964 to December 2003 using data from Shiller (2006).

This figure shows monthly observations of the annual excess returns on the S&P 500 along with the first difference of the log price to dividend and price to earnings ratio from February 1964 to December 2003 using data from Shiller (2006).

I regress the excess return on the S&P 500 over the next year on the current log price to dividend ratio using monthly observations over the period from January 1964 to December 2003. I report these regression results below. The standard errors have not been corrected for overlapping samples. The point estimates indicate that a 1\% increase in the price to dividend ratio in the current month is associated with a 0.06\% or 6\textit{b.p.} decrease in the returns on the S&P 500 over the next year.

    \begin{align*} \begin{array}{l|cc} & \textit{est} & \textit{s.e.} \\ \hline \hline \alpha_{p_t - d_t} & 0.24 & 0.049 \\ \beta_{p_t - d_t} & -0.063 & 0.018 \\ \hline R^2 & 0.026 \end{array} \end{align*}

I also regress the level and the change in the log price to dividend ratio on the level and the change in the annual real interest rate using monthly observations over the period from January 1964 to December 2003 and report the regression coefficients below. The standard errors have not been corrected for overlapping samples. The annual real interest rate each month is the difference between the Fama-Bliss riskless rate over that year and the log change in inflation using the data from the CPI located here.

    \begin{align*} \begin{array}{l|cc} \mathtt{LHS:} \ p_t - d_t & \textit{est} & \textit{s.e.} \\ \hline \hline \alpha & 3.60 & 0.024 \\ \beta & -2.52 & 0.78 \\ \hline R^2 & 0.42 \end{array} \end{align*}

    \begin{align*} \begin{array}{l|cc} \mathtt{LHS:} \ \Delta_1 (p_t - d_t) & \textit{est} & \textit{s.e.} \\ \hline \hline \alpha_{\Delta_1 r_{t+12}^f} & 0.00065 & 0.0015 \\ \beta_{\Delta_1 r_{t+12}^f} & -1.70 & 0.26 \\ \hline R^2 & 0.034 \end{array} \end{align*}

    \begin{align*} \begin{array}{l|cc} \mathtt{LHS:} \ \Delta_{12} (p_t - d_t) & \textit{est} & \textit{s.e.} \\ \hline \hline \alpha_{\Delta_{12} r_{t+12}^f} & 0.012 & 0.02 \\ \beta_{\Delta_{12} r_{t+12}^f} & -2.08 & 1.07 \\ \hline R^2 & 0.13 \end{array} \end{align*}

The raw correlations here are given in the tables below where the \tilde{r}^f denotes the real riskless rate over the next year computed using an AR(2) implied inflation rate.

    \begin{align*} \begin{array}{l||cccc} & p_t - d_t & p_t - y_t &  r_{t+12}^f &  \tilde{r}_{t+12}^f \\ \hline \hline p_t - d_t & 1 & 0.88 & -0.14 & -0.14 \\ p_t - y_t & & 1 & -0.15 & -0.15 \\ r_{t+12}^f & &  & 1 & 0.995 \\ \tilde{r}_{t+12}^f & & & & 1 \end{array} \end{align*}

    \begin{align*} \begin{array}{l||cccc} & \Delta (p_t - d_t) & \Delta (p_t - y_t) &  \Delta r_{t+12}^f  & \Delta \tilde{r}_{t+12}^f \\ \hline \hline \Delta (p_t - d_t) & 1 & 0.88 & -0.28 & -0.25 \\ \Delta (p_t - y_t) & & 1 & -0.29 & -0.26 \\ \Delta r_{t+12}^f & &  & 1 & 0.820 \\ \Delta \tilde{r}_{t+12}^f  & & & & 1 \end{array} \end{align*}

We can see from the plots below that this negative correlation between the log price to dividend ratio and the real riskless rate comes entirely from a spike during the 1970’s.

Level and month over month changes in the log price to dividend ratio and annual real riskless rate.

4. Co-Movement

Finally, to get a sense of how much the risk premia in the bond and stock markets co-move, I report the correlation between the Cochrane and Piazzesi (2005) bond risk factor, the Gabaix (2011) implied bond risk premium analogue, the log price to dividend and price to earnings ratios and the excess returns on the S&P 500 in both levels and changes below:

    \begin{align*} \begin{array}{l||cccccc} & \textit{CP}_t & \textit{CP}_t^G & p_t - d_t & p_t - y_t & r_{t+12}^e & r_{t+12}^f \\ \hline \hline \textit{CP}_t & 1 & 0.885 &  - 0.155 & - 0.033 & 0.311 & 0.39 \\ \textit{CP}_t^G &  &  1 &  0.135 & 0.286 & 0.246 & 0.19 \\ p_t - d_t & & & 1 & 0.908 & -0.169 & -0.14 \\ p_t - y_t & & & & 1 & -0.163 & -0.15 \\ r_{t+12}^e & & & & & 1 & 0.44 \\ r_{t+12}^f & & & & & & 1 \end{array} \end{align*}

    \begin{align*} \begin{array}{l||ccccc} & \Delta \textit{CP}_t & \Delta \textit{CP}_t^G & \Delta (p_t - d_t)  & r_{t+12}^e & r_{t+12}^f \\ \hline \hline \Delta \textit{CP}_t & 1 & 0.930 & 0.018 & -0.029 & - 0.00035 \\ \Delta \textit{CP}_t^G & & 1 & 0.011 & -0.0121 & -0.00053 \\ \Delta (p_t - d_t) & & & 1  & 0.081 & 0.044 \\ r_{t+12}^e & &  & & 1 & 0.44 \\ r_{t+12}^f & & & & & 1 \end{array} \end{align*}

Filed Under: Uncategorized

Pearson-Wong Diffusions

July 30, 2011 by Alex

1. Introduction

I introduce the concept of Pearson-Wong diffusions and then show how this mathematical object can be put to use in macro-finance.

Roughly speaking, Pearson-Wong diffusions link properties of stochastic processes to properties of cross-sectional distributions in the resulting population. For example, suppose you have in mind a stochastic process that governs the total sales of each firm in the US. If this stochastic process is a Pearson-Wong diffusion, you would also know the steady state cross-sectional distribution of firm sales. Conversely, if you observed a particular cross-sectional distribution of firm sales, and you assumed that all firms had a similar sales growth process and that the economy was in a steady state, then you could back out which Pearson-Wong diffusion was governing sales growth in the economy up to an affine transformation.

First, in Section 2, I define the Pearson (1895) system of distributions. Next, in Section 3, I elaborate on work by Wong (1964) and show that a broad class of diffusion processes with polynomial volatility, called Pearson-Wong diffusions, leads to steady state distributions in the Pearson system. I show that these distributions are uniquely defined by their polynomial coefficients. In Section 4, I show how a broad set of common continuous time processes in macro-finance, such as Ornstein-Uhlenbeck processes and Feller square root processes, sits in this class of Pearson-Wong diffusions. Finally, I conclude in Section 5 by returning to the sales volume example above taken from Gabaix (2011) and showing that variation in the cross-sectional distribution of firm sales volume implies variation in the functional form of the stochastic process governing each firm’s aggregate sales.

2. The Pearson System of Distributions

In this section I motivate and define the Pearson system of distributions. Karl Pearson developed the Pearson system of distributions as a taxonomy for understanding the skewed distributions he was finding in the biological data he was studying. For instance, Pearson had access to data on the dimensions of crabs caught off the coast of Naples as illustrated in the figure below. When studying the ratio of the length of the crabs to their breadth, he found a distribution that was non-normal and almost seemed to be a mixture of 2 normal distributions.

The data give the ratio of "forehead" breadth to body length for 1000 crabs sampled at Naples. Source: R mixdist package (http://goo.gl/CaGkt).

In order to manipulate these data analytically, Pearson then searched for a simple functional form that would capture the main features of these skewed distributions with only a handful of parameters. In particular, he was after a formulation that fit continuous, single peaked distributions over various ranges with varying levels of skewness and kurtosis. Through guess and check, he settled on the definition below:

Definition (Pearson System): A continuous, univariate, probability distribution p(x) over x \in (\underline{x}, \overline{x}) is a member of the Pearson System if it satisfies the differential equation below with constants r, a, b and c:

(1)   \begin{align*} \frac{d}{dx} p(x) &= -\frac{x-r}{a \cdot x^2 + b \cdot x + c} \cdot p(x) \end{align*}

What are the features of this formulation? First, if r is not a root of the polynomial a \cdot x^2 + b \cdot x + c, then p(x) is finite. Next, we see that r characterizes the single peak of the distribution, as dp(x)/dx = 0 when x = r. What’s more, we know that p(x) has to be single peaked as p(x) \geq 0 and \int_{\underline{x}}^{\overline{x}} p(x) dx = 1, so p(x) and d p(x)/dx must tend towards 0 as x goes to \pm \infty.

Heuristically speaking, we can think about r as parameterizing the peak of the distribution and the quadratic polynomial as characterizing the rate of descent from this peak in either direction as a function of x. Importantly, the solution to this differential equation will depend on the character of the roots of the quadratic polynomial:

(2)   \begin{align*} 0 &= a \cdot x^2 + b \cdot x + c \end{align*}

In his original 1895 paper, Pearson spent most of his time actually classifying different types of distributions based on the nature of their respective polynomials. Below is a short list of distributions that fall into the Pearson class:

    \begin{align*} \begin{array}{|l|} \text{Distribution} \\ \hline \hline \textit{Normal} \\ \textit{Gamma} \\ \textit{Inverse-Gamma} \\ \textit{Student t} \\ \textit{Beta} \end{array} \end{align*}

Below, I walk through an example showing how the normal distribution fits into the Pearson system:

Example (Gaussian Distribution): When a = b = 0 we get the Gaussian PDF. First, note that given these assumptions, the differential equation above can be written as:

(3)   \begin{align*} \frac{d}{dx} \ln p(x) &= -\frac{x-r}{c} \end{align*}

Thus, by integrating up we see that the solution has the form:

(4)   \begin{align*} p(x) &= k \cdot e^{-\frac{(x-r)^2}{2 \cdot c}} \end{align*}

If we choose k such that the probability mass over the real line is 1, we get k = 1/\sqrt{2 \cdot \pi \cdot c}.
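The normalization can be checked numerically by integrating p(x) = k \cdot e^{-(x-r)^2/(2 \cdot c)} with k = 1/\sqrt{2 \cdot \pi \cdot c} over a wide grid; the values of r and c below are illustrative:

```python
import numpy as np

# Numerical check that the Gaussian member of the Pearson system,
# p(x) = k * exp(-(x - r)^2 / (2c)) with k = 1 / sqrt(2*pi*c),
# integrates to one. The values of r and c are illustrative choices.
r, c = 0.5, 2.0
k = 1.0 / np.sqrt(2.0 * np.pi * c)
x = np.linspace(r - 12.0 * np.sqrt(c), r + 12.0 * np.sqrt(c), 100_001)
p = k * np.exp(-((x - r) ** 2) / (2.0 * c))

# Trapezoid rule by hand; the tail mass beyond 12 standard deviations
# is negligible.
integral = float(np.sum((p[:-1] + p[1:]) / 2.0) * (x[1] - x[0]))
print(integral)   # very close to 1
```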

3. Main Results

In this section I define the class of Pearson-Wong diffusions and outline the mapping between the coefficients of the stochastic process and the parameters of the cross-sectional distribution. In the analysis below, I consider time homogeneous diffusion processes; i.e., the coefficients of the stochastic process can only depend on time t through the value of X_t:

Definition (Time Homogeneous Diffusion): Let m(x) and \sigma(x) be real valued functions that are Lipschitz on the interval (\underline{x}, \overline{x}) with \sigma(x) > 0. Then a diffusion X_t is a time-homogeneous diffusion if there exists a unique solution to the equation:

(5)   \begin{align*} X_t &= x_0 + \int_0^t m(X_s) \cdot ds + \int_0^t \sigma(X_s) \cdot dB_s \end{align*}

Next, I define the class of Pearson-Wong diffusions:

Definition (Pearson-Wong Diffusion): A Pearson-Wong polynomial diffusion is stationary, time homogeneous solution to a stochastic differential equation of the form below, where \theta > 0, B_t is a Brownian motion and the triple of coefficients (\alpha, \beta, \gamma) are such that the square root is well defined when X_t is in the state space (\underline{x}, \overline{x}):

(6)   \begin{align*} dX_t &= \theta \cdot (\mu - X_t) \cdot dt + \sqrt{2 \cdot \theta \cdot (\alpha \cdot X_t^2 + \beta \cdot X_t + \gamma)} \cdot dB_t \end{align*}

What sorts of processes fit inside this class of diffusions? For one example, consider an Ornstein-Uhlenbeck process, which would arise if we set \alpha = \beta = 0, \gamma = 1 and \theta \in (0,1). In this setting, \sigma^2 = 2 \cdot \theta. In the next section, I show how more exotic processes also fit into this box of Pearson-Wong diffusions.
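This special case is easy to verify by simulation: with \alpha = \beta = 0 and \gamma = 1, the stationary variance of the Ornstein-Uhlenbeck process is \sigma^2/(2 \cdot \theta) = 1. A minimal Euler-Maruyama sketch, with illustrative values of \theta, the time step, and the sample length:

```python
import numpy as np

# Euler-Maruyama simulation of the Pearson-Wong diffusion (6) with
# alpha = beta = 0 and gamma = 1, i.e. an Ornstein-Uhlenbeck process.
# Its stationary variance is sigma^2 / (2*theta) = 2*theta / (2*theta) = 1.
rng = np.random.default_rng(1)
theta, mu, dt, T = 0.5, 0.0, 0.01, 300_000

x = np.empty(T)
x[0] = mu
vol = np.sqrt(2.0 * theta)   # sqrt(2*theta*(0*x^2 + 0*x + 1)), a constant
for t in range(1, T):
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt \
           + vol * np.sqrt(dt) * rng.standard_normal()

# After a burn-in, the sample variance should be close to 1.
print(x[50_000:].var())
```

The simulated cross-section after burn-in is approximately \mathtt{N}(\mu, 1), the Gaussian member of the Pearson system, consistent with the mapping developed below.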

Now, given this definition, I need to derive a mapping between the values of the polynomial coefficients \begin{bmatrix} \alpha & \beta & \gamma \end{bmatrix} and the form of the resulting cross-sectional distribution p(x). I do this in 3 steps. First, I characterize the scale function U(x) and speed density P(x) of the diffusion X_t. Next, I link the infinitesimal generator of the diffusion process X_t to the scale function and speed density of X. Finally, I show that given the mapping between the infinitesimal generator and stochastic processes in the class of Pearson diffusions, if X_t is an ergodic process then this mapping is unique.

Below, I define the scale function for a stochastic process X_t which captures how much the probability of reaching different points y and w in the domain of X_t varies with the starting point x:

Definition (Scale Function): Let X_t be a 1 dimensional diffusion on the open interval (\underline{x}, \overline{x}). A scale function for X_t is an increasing function U(x): (\underline{x}, \overline{x}) \mapsto \mathbb{R} such that for all w < x < y with w,y \in (\underline{x}, \overline{x}), we have that:

(7)   \begin{align*} \mathtt{Pr}[\tau(y) < \tau(w)] &= \frac{U(x) - U(w)}{U(y) - U(w)} \end{align*}

where \tau(w) = \inf\left\{ t > 0 \mid X_t = w \right\}.
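The definition can be made concrete with a simulation (my own sketch, assuming NumPy; the function name is hypothetical). Standard Brownian motion has U(x) = x as a scale function, so the probability of hitting y before w when starting from x should be (x - w)/(y - w):

```python
import numpy as np

def hit_prob(x, w, y, n_paths=20000, dt=1e-3, seed=1):
    """Fraction of discretized Brownian paths started at x that reach y before w."""
    rng = np.random.default_rng(seed)
    z = np.full(n_paths, x)
    alive = np.ones(n_paths, dtype=bool)   # paths still strictly inside (w, y)
    hit_y = np.zeros(n_paths, dtype=bool)  # paths that exited through y
    while alive.any():
        z[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        hit_y |= alive & (z >= y)
        alive &= (z > w) & (z < y)
    return hit_y.mean()

p = hit_prob(x=0.3, w=0.0, y=1.0)
print(p)  # scale-function prediction: (0.3 - 0.0)/(1.0 - 0.0) = 0.3
```

The Monte Carlo estimate sits close to the scale-function prediction of 0.3, up to discretization and sampling error.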

For instance, if U(x) = x is a scale function for X_t, then we say that X_t is in its natural scale. By definition, U(X_t) is a local martingale, so U satisfies the equation:

(8)   \begin{align*} 0 &= m(x) \cdot U'(x) + \frac{\sigma(x)^2}{2} \cdot U''(x) \end{align*}

Setting u(x) = U'(x), this is a linear first order differential equation in u(x) with variable coefficients, leading to a standard solution:

(9)   \begin{align*} \ln u(x) &= \int_{x_0}^{x} \frac{s - \mu}{\alpha \cdot s^2 + \beta \cdot s + \gamma} \cdot ds \end{align*}

where x_0 is a fixed point such that \alpha \cdot x_0^2 + \beta \cdot x_0 + \gamma > 0. Next, I define the speed measure P(x), which captures how much time the process spends in the neighborhood of each point in the state space; i.e., where X_t lingers:

Definition (Speed Measure): The speed measure P(x) is the measure such that the infinitesimal generator of X_t can be written as:

(10)   \begin{align*} \mathbb{A} f(x) &= \frac{d^2}{dP \cdot dU} f(x) \end{align*}

where we have that:

(11)   \begin{align*} \frac{d}{dU} f(x) &= \lim_{h \to 0} \frac{f(x+h) - f(x)}{U(x+h) - U(x)} \\ \frac{d}{dP} g(x) &= \lim_{h \to 0} \frac{g(x+h) - g(x)}{P(x,x+h)} \end{align*}

Thus, once normalized, the speed density is in fact the \mathtt{PDF} of the cross-sectional distribution in the limit as t \to \infty. This measure has a particularly nice functional form which allows for easy analytical computations in the case of Pearson-Wong diffusions. The lemma below characterizes this formulation:

Lemma (Speed Measure): The speed density of a Pearson-Wong diffusion is given by the formula below, where x_0 is a fixed point such that \alpha \cdot x_0^2 + \beta \cdot x_0 + \gamma > 0:

(12)   \begin{align*} p(x) &= \frac{1}{u(x) \cdot (\alpha \cdot x^2 + \beta \cdot x + \gamma)} \end{align*}

with P'(x) = p(x) and U'(x) = u(x).
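For the Ornstein-Uhlenbeck case the lemma can be verified directly: with \alpha = \beta = 0, \gamma = 1 and \mu = 0, equation (9) gives u(x) = e^{x^2/2}, so p(x) \propto 1/u(x) = e^{-x^2/2}, a Gaussian. The numerical sketch below (my own example; the function names are hypothetical, and SciPy is assumed to be available) evaluates equations (9) and (12) by quadrature and checks the normalized density against the standard normal pdf at x = 1:

```python
import numpy as np
from scipy.integrate import quad

def scale_density(x, mu, alpha, beta, gamma, x0=0.0):
    """u(x) = exp( int_{x0}^x (s - mu)/(alpha*s^2 + beta*s + gamma) ds )."""
    val, _ = quad(lambda s: (s - mu) / (alpha * s**2 + beta * s + gamma), x0, x)
    return np.exp(val)

def speed_density(x, mu, alpha, beta, gamma, x0=0.0):
    """Unnormalized p(x) = 1 / (u(x) * (alpha*x^2 + beta*x + gamma))."""
    return 1.0 / (scale_density(x, mu, alpha, beta, gamma, x0)
                  * (alpha * x**2 + beta * x + gamma))

# OU case: alpha = beta = 0, gamma = 1, mu = 0 -> p(x) proportional to exp(-x^2/2)
mass, _ = quad(lambda x: speed_density(x, 0.0, 0.0, 0.0, 1.0), -10, 10)
ratio = speed_density(1.0, 0.0, 0.0, 0.0, 1.0) / mass
print(ratio)  # should match the standard normal pdf at 1, ~0.2420
```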

The proof of this lemma stems from the definition of the infinitesimal generator:

Proof (Speed Measure): On one hand, from the definition of the speed measure, we have that:

(13)   \begin{align*} \mathbb{A} f(x) &= \frac{d^2}{dP \cdot dU} f(x) \\ &= \frac{1}{P'(x)} \cdot \frac{d}{dx} \left( \frac{1}{U'(x)} \cdot \frac{d}{dx} f(x) \right) \\ &= \frac{1}{u(x) \cdot p(x)} \cdot f''(x) - k(x) \cdot f'(x) \end{align*}

where k(x) is some well behaved function of x. On the other hand, from the definition of an infinitesimal generator, we have that:

(14)   \begin{align*} \mathbb{A} f(x) &= \frac{1}{2} \cdot \sigma(x)^2 \cdot f''(x) + m(x) \cdot f'(x) \end{align*}

Thus, matching the coefficients on f''(x) in the 2 expressions, we have that \left[ u(x) \cdot p(x) \right]^{-1} = \sigma(x)^2/2 = \theta \cdot (\alpha \cdot x^2 + \beta \cdot x + \gamma), which delivers the formula for p(x) up to a constant of proportionality.

Thus, we have now marched through the first 2 steps of the construction linking a stochastic process in the class of Pearson-Wong diffusions to its corresponding cross-sectional distribution. All I need to do now is flesh out the requirements for uniqueness. In order to attain this property, I need an additional assumption on the class of Pearson-Wong diffusions: ergodicity. Below, I give a formal definition of this additional assumption:

Definition (Ergodic Pearson-Wong Diffusion): If (\underline{x}, \overline{x}) is an interval such that \alpha \cdot x^2 + \beta \cdot x + \gamma > 0 for all x \in (\underline{x}, \overline{x}) for a Pearson-Wong diffusion X_t, then X_t is ergodic if:

(15)   \begin{align*} \infty &= \int_{x_0}^{\overline{x}} u(x) \cdot dx = \int_{\underline{x}}^{x_0} u(x) \cdot dx \\ \infty &> \int_{\underline{x}}^{\overline{x}} p(x) \cdot dx \end{align*}

If instead \int_{x_0}^{\overline{x}} u(x) \cdot dx < \infty, then the boundary \overline{x} can be reached in finite time with positive probability.
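These conditions are easy to inspect numerically for the Ornstein-Uhlenbeck case (a sketch of my own, assuming SciPy): the truncated integrals of the scale density u grow without bound as the boundary recedes, while the speed density p has finite total mass:

```python
import numpy as np
from scipy.integrate import quad

# Scale and (unnormalized) speed densities for the OU case:
# alpha = beta = 0, gamma = 1, mu = 0.
u = lambda x: np.exp(x**2 / 2.0)
p = lambda x: np.exp(-x**2 / 2.0)

# The truncated integrals of u blow up as the boundary recedes...
tails = [quad(u, 0.0, b)[0] for b in (2.0, 4.0, 6.0)]
print(tails)

# ...while the speed density has finite total mass.
mass = quad(p, -np.inf, np.inf)[0]
print(mass)  # ~ sqrt(2 * pi)
```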

Proposition (Pearson-Wong Mapping): For all ergodic diffusions in the Pearson-Wong class parameterized by the coefficient vector \begin{bmatrix} \alpha & \beta & \gamma \end{bmatrix}, there exists a unique invariant distribution in the Pearson system.

Ergodicity ensures that there are no eddies in the state space where multiple diffusions can get trapped yielding observationally equivalent cross-sectional distributions p(x) for different diffusion processes.

Proof (Pearson-Wong Mapping): From the lemma above, we know that the scale measure has density:

(16)   \begin{align*} u(x) &= e^{\int_{x_0}^x \frac{s - \mu}{\alpha \cdot s^2 + \beta \cdot s + \gamma} \cdot ds} \end{align*}

where x_0 \in (\underline{x}, \overline{x}) is a point such that \alpha \cdot x_0^2 + \beta \cdot x_0 + \gamma > 0. What’s more, we know that:

(17)   \begin{align*} p(x) &\propto \frac{1}{u(x) \cdot (\alpha \cdot x^2 + \beta \cdot x + \gamma)} \end{align*}

Differentiating p(x) yields:

(18)   \begin{align*} \frac{d}{dx} p(x) &= - \frac{(2 \cdot \alpha + 1) \cdot x - \mu + \beta}{\alpha \cdot x^2 + \beta \cdot x + \gamma} \cdot p(x) \end{align*}

This is exactly the differential equation that defines the Pearson system of distributions, so each ergodic Pearson-Wong diffusion maps to a unique member of that system.
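As an illustration of the mapping (my own example, not from the original post; SciPy is assumed to be available, and the function name is hypothetical): the square-root coefficients \alpha = \gamma = 0, \beta = 1 turn equation (18) into d/dx \ln p(x) = -1 + (\mu - 1)/x, whose solution is a Gamma(\mu, 1) density. Integrating the ODE numerically confirms this:

```python
import numpy as np
from scipy.integrate import quad
from math import gamma as gamma_fn

def pearson_logp_slope(x, mu, alpha, beta, gam):
    """Right-hand side of d/dx ln p(x) from the Pearson ODE (equation 18)."""
    return -((2 * alpha + 1) * x - mu + beta) / (alpha * x**2 + beta * x + gam)

# alpha = gamma = 0, beta = 1 (square-root diffusion): p should be Gamma(mu, 1).
mu = 3.0
unnorm = lambda x: np.exp(quad(lambda s: pearson_logp_slope(s, mu, 0.0, 1.0, 0.0),
                               1.0, x)[0])
mass = quad(unnorm, 0.0, 50.0)[0]
p_at_2 = unnorm(2.0) / mass
exact = 2.0**(mu - 1) * np.exp(-2.0) / gamma_fn(mu)  # Gamma(3, 1) pdf at x = 2
print(p_at_2, exact)
```

Both numbers come out near 2/e^2 \approx 0.2707, the Gamma(3, 1) density at x = 2.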

4. Examples

In this section I work through 2 examples: fitting the Vasicek process into the Pearson-Wong class, and building a reflecting process that generates a cross-sectional distribution satisfying Zipf's law.

In a Vasicek model returns follow an Ornstein-Uhlenbeck process:

(19)   \begin{align*} dX_t &= \theta \cdot (\mu - X_t) \cdot dt + \sigma \cdot dB_t \end{align*}

with \theta, \mu, \sigma > 0. Thus, in the notation of the Pearson-Wong diffusion, we have that \alpha = \beta = 0 and \gamma = \sigma^2/(2 \cdot \theta). Using the formulation above, we see that:

(20)   \begin{align*} \frac{d}{dx} \ln p(x) &= - \frac{x - \mu}{\gamma} = - \frac{2 \cdot \theta \cdot (x - \mu)}{\sigma^2} \end{align*}

This is the log-derivative of a Gaussian density with mean \mu and variance \sigma^2/(2 \cdot \theta), just as in the Ornstein-Uhlenbeck example from the first section. Thus, we have that:

(21)   \begin{align*} p(x) &= \sqrt{\frac{\theta}{\pi \cdot \sigma^2}} \cdot e^{-\frac{\theta \cdot (x - \mu)^2}{\sigma^2}} \end{align*}

by solving for the constant via the boundary condition that \int_{-\infty}^\infty p(x) \cdot dx = 1.

Next, consider a more complicated process: one defined only on the positive half-line, with a reflecting boundary at \underline{x} > 0, whose cross-sectional distribution is a power law. Specifically, suppose that you have a cross-sectional probability density p(x) defined as:

(22)   \begin{align*} p(x) &\propto \frac{1}{x^2} \end{align*}

which is defined on (\underline{x},\infty). We see that the counter-cumulative distribution \mathtt{Pr}[X > x] is proportional to 1/x, so that Zipf's law3 holds. However, note that there is no constant term in the numerator of the differential equation defining p(x):

(23)   \begin{align*} \frac{d}{dx} p(x) &= - \frac{2}{x} \cdot p(x) \end{align*}

Thus, the power law cross-sectional distribution acts as a limiting case of the class of Pearson-Wong diffusions with \beta = \gamma = 0 and \alpha large relative to both 1 and \mu:

(24)   \begin{align*} \frac{d}{dx} \ln p(x) &= - \frac{2}{x} \\ &= - \frac{\overbrace{(2 \cdot \alpha + 1)}^{\approx 2 \cdot \alpha} \cdot x}{\alpha \cdot x^2} + \underbrace{\left(\frac{\mu}{\alpha \cdot x^2}\right)}_{\approx 0} \end{align*}

This solution works given the reflecting boundary \underline{x} > 0 since, for \alpha large enough, the second term on the right-hand side is roughly 0 and the +1 in the first term is negligible.
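The limiting case can also be checked by brute force. The sketch below (my own example; the function name and parameter values are hypothetical, assuming NumPy) simulates the \beta = \gamma = 0 diffusion with \alpha = 1, reflects it at a lower boundary, and estimates the tail exponent of the cross-section with a Hill estimator; for \alpha = 1 the density exponent is 2 + 1/\alpha = 3, so the counter-cumulative exponent should come out near \zeta = 2:

```python
import numpy as np

def simulate_reflected(theta=0.5, mu=1.0, alpha=1.0, lb=1.0,
                       n_paths=20000, n_steps=3000, dt=0.01, seed=2):
    """Euler-Maruyama for dX = theta*(mu - X)*dt + sqrt(2*theta*alpha)*X*dB,
    reflected at the lower boundary lb."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, 2.0)
    vol = np.sqrt(2.0 * theta * alpha * dt)
    for _ in range(n_steps):
        x = x + theta * (mu - x) * dt + vol * x * rng.standard_normal(n_paths)
        x = lb + np.abs(x - lb)  # reflect at the boundary
    return x

x = simulate_reflected()
# Hill estimator of the tail exponent over the top 1% of draws.
top = np.sort(x)[-200:]
zeta_hat = 1.0 / np.mean(np.log(top / top[0]))
print(zeta_hat)  # theory: density exponent 3, so zeta = 2 for alpha = 1
```

The estimate is noisy (Hill standard errors scale like \zeta/\sqrt{k}), but it lands in the neighborhood of 2 rather than 1; pushing \alpha up moves it toward the Zipf benchmark of 1.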

5. Conclusions

In the text above, I outline the theory of Pearson-Wong diffusions and relate these continuous-time results to topics in macro-finance.

I conclude by looking at a final application in a recent Econometrica article, Gabaix (2011), on the granular origins of aggregate macroeconomic fluctuations. The core idea of this paper is that, when the cross-sectional distribution of firm production, S_{t,n}, is Gaussian or some other thin-tailed distribution, shocks to the largest firms won’t matter as the number of firms N \to \infty. However, if firm production is distributed according to a fat tailed distribution, then shocks to the production of the largest firms will matter.4

Proposition 2 of Gabaix (2011) gives the central result. Namely, that if firm size is distributed according to a power law,

(25)   \begin{align*} \mathtt{Pr}[S > x] &= a \cdot x^{-\zeta}, \end{align*}

then as N \to \infty, if \zeta \geq 2 shocks to large firms won’t matter, while if \zeta \in [1,2) shocks to large firms will matter.
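A small Monte Carlo (my own sketch, not from Gabaix (2011); the function name is hypothetical, and NumPy is assumed) illustrates the dichotomy. Aggregate volatility is proportional to the square root of the Herfindahl index, \sqrt{\sum_n S_n^2}/\sum_n S_n; with thin-tailed firm sizes this decays like N^{-1/2}, while with Pareto sizes with \zeta = 1.5 it decays more slowly, roughly like N^{-(1 - 1/\zeta)} = N^{-1/3}:

```python
import numpy as np

def vol_scaling_slope(draw, sizes_n, n_rep=200, seed=3):
    """Slope of log(median aggregate volatility) against log(N), where
    aggregate volatility is proportional to sqrt(sum S_n^2) / sum S_n."""
    rng = np.random.default_rng(seed)
    log_h = []
    for n in sizes_n:
        h = []
        for _ in range(n_rep):
            s = draw(rng, n)
            h.append(np.sqrt(np.sum(s**2)) / np.sum(s))
        log_h.append(np.log(np.median(h)))
    return np.polyfit(np.log(sizes_n), log_h, 1)[0]

sizes_n = [10**3, 10**4, 10**5]
# Thin-tailed (exponential) firm sizes: volatility decays like N^(-1/2).
thin = vol_scaling_slope(lambda rng, n: rng.exponential(size=n), sizes_n)
# Fat-tailed (Pareto, zeta = 1.5): decay slows to roughly N^(-1/3).
fat = vol_scaling_slope(lambda rng, n: rng.pareto(1.5, size=n) + 1.0, sizes_n)
print(thin, fat)
```

Note that `rng.pareto(a)` draws from a Lomax distribution, so adding 1 yields a classical Pareto with \mathtt{Pr}[S > s] = s^{-a} on s \geq 1; medians rather than means are used because the fat-tailed volatility statistic has infinite variance.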

Interestingly, the Pearson-Wong diffusion mathematics above gives us a new result for the implications of switching from the limiting case of \zeta = 1 to the case of \zeta \in (0,1). With \zeta \in (0,1), there will now be a new parameter \beta to estimate. Thus, variation in how dispersed firms are in terms of their output reveals meaningful information about the structure of the stochastic process to which each firm’s output adheres.

  1. Source: R mixdist package. ↩
  2. Background info comes from Ord (1985). ↩
  3. Gabaix (1999) or Tao (2009). ↩
  4. In practice, shocks to the largest couple of firms do seem to have an impact on even large economies. For example, in December 2004, Microsoft issued a \$24B one-time dividend which boosted the growth in average personal income that year from 0.6\% to 3.7\% in the United States. ↩
