Research Notebook

Notes: Hassan and Mertens (2011)

October 5, 2011 by Alex

1. Introduction

In this note, I outline the main results in Hassan and Mertens (2011)1 for use in a 5-minute presentation in Prof. Sargent’s reading group.

This paper asks the question: “Suppose that you airdropped a bit of noise into equity prices. How much worse off would everyone be in the context of a macroeconomic model with production?” As it turns out, adding a bit of noise can have a first-order effect on welfare: noising up equity prices causes agents to disinvest in firms in favor of riskless debt, leading to lower production and thus lower consumption, as agents can only consume what has been produced. To answer this question, the authors overcome 2 main challenges. First, they define a new equilibrium concept that allows for heterogeneous beliefs. Second, they solve for the equilibrium asset prices, which vary non-linearly with agents’ information sets.2

To my knowledge, this paper is the first to address the costs of noisy asset prices in a dynamic general equilibrium setting; however, other papers such as Angeletos and Pavan (2007) have looked at the social costs and benefits of correct prices. Veldkamp (2011, Ch 8) also provides a good reference for solving noisy rational expectations models.

2. Model

Consider a standard linearly homogeneous production function that produces output Y_t using capital K_t and labor L_t as inputs. \eta_t denotes total factor productivity and evolves according to a diffusion process with mean -\sigma_\eta^2/2 and variance \sigma_\eta^2:

(1)   \begin{align*} Y_t &= e^{\eta_t} \cdot F(K_t, L_t) \end{align*}

Capital evolves according to the law of motion below where \delta is the depreciation rate and I_t is the aggregate investment level:

(2)   \begin{align*} K_{t+1} &= K_t \cdot \left( 1 - \delta \right) + I_t \end{align*}

There is a continuum of households indexed by i \in [0,1]. At the beginning of every period, each household i gets a signal about tomorrow’s TFP with noise \nu_t(i) that is distributed normally with mean 0 and variance \sigma_\nu^2:

(3)   \begin{align*} s_t(i) &= \eta_{t+1} + \nu_t(i) \end{align*}

I refer to s_t(i) as the “knowledge” of agent i about tomorrow’s productivity. Given this knowledge, each agent then maximizes a value function V_t(i) over consumption streams \left\{ C_t(i) \right\}_{t=0}^\infty and relative investments in stocks rather than bonds \left\{ \omega_t(i) \right\}_{t=0}^\infty:

(4)   \begin{align*} \begin{split} V_t(i) &= \max_{ \left\{ C_t(i) \right\}_{t=0}^\infty, \left\{ \omega_t(i) \right\}_{t=0}^\infty} \mathcal{E}_{i,t} \left[ \sum_{s=t}^\infty \beta^{s-t} \cdot \log C_s(i) \right] \\ &\text{s.t.} \\ W_{t+1} &= \left[ (1 - \omega_t(i)) \cdot (1 + r) + \omega_t(i) \cdot \left( 1 + \tilde{r}_{t+1} \right) \right] \\ &\qquad \times \left( W_t(i) + w_t \cdot L - C_t(i) \right) + \tau_t(i) \end{split} \end{align*}

In the optimization program above, \mathcal{E}_{i,t} denotes agent i’s subjective views on how the future will unfold given his knowledge s_t(i), W_t(i) denotes agent i’s financial wealth at time t, \tilde{r}_{t+1} denotes the return on stocks, w_t denotes the wage rate for all agents, L denotes the amount of labor supplied by each household and \tau_t(i) denotes the payments from contingent claims trading. Equity prices are given by Q_t, which is related to dividends D_t via the market return \tilde{r}_{t+1}:

(5)   \begin{align*} 1 + \tilde{r}_{t+1} &= \frac{Q_{t+1} \cdot (1 - \delta) + D_{t+1}}{Q_t} \end{align*}

Let \mathbb{E}_{i,t} be the “true” rational expectations operator, which is linked to each agent’s subjective expectations via a common mean 0 shock \tilde{\varepsilon}_t with variance \sigma_{\tilde{\varepsilon}}^2:

(6)   \begin{align*} \begin{split} \mathbb{E}_{i,t} \left[ \cdot \right] &= \mathbb{E} \left[ \cdot \mid s_t(i), Q_t, K_t, B_{t-1}, \eta_t \right] \\ \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] &= \mathbb{E}_{i,t} \left[ \eta_{t+1} \right] + \tilde{\varepsilon}_t \end{split} \end{align*}

The representative firm solves the maximization program below, choosing its capital and labor demands to maximize profits given the production technology e^{\eta_t} \cdot F(K_t,L_t) and factor prices w_t and R_t:

(7)   \begin{align*} \Pi_t &= \max_{K_t^d,L_t^d} \left\{ e^{\eta_t} \cdot F\left( K_t^d, L_t^d \right) - w_t \cdot L_t^d - R_t \cdot K_t^d \right\} \end{align*}

All profits are disbursed immediately to shareholders. Firms take the market price of capital Q_t as given and then invest in capital accumulation according to the rule below, where Q_t \cdot I_t represents the value of acquiring I_t units of capital, -I_t represents the cost of the I_t units of the consumption good used and -(\chi/2) \cdot (I_t^2/K_t) represents an adjustment cost:

(8)   \begin{align*} A_t &= \max_{I_t} \left\{ Q_t \cdot I_t - I_t  - \left( \frac{\chi}{2} \right) \cdot \frac{I_t^2}{K_t} \right\} \end{align*}

Thus, we can characterize the dividends paid per share as:

(9)   \begin{align*} \begin{split} D_t &= R_t + \left( \frac{\chi}{2} \right) \cdot \frac{I_t^2}{K_t^2} \\ I_t &= \left( \frac{K_t}{\chi} \right) \cdot \left( Q_t - 1 \right) \end{split} \end{align*}
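As a quick numerical sanity check (a sketch of mine, not from the paper; all parameter values below are arbitrary), the closed-form investment rule in (9) can be recovered by maximizing the firm’s objective in (8) on a grid:

```python
import numpy as np

# Arbitrary illustrative parameters; these numbers are mine, not the paper's
K_t, chi, Q_t = 10.0, 2.0, 1.3

# Firm objective from (8): value of newly installed capital minus the
# consumption goods used and a quadratic adjustment cost
def A(I):
    return Q_t * I - I - (chi / 2.0) * I**2 / K_t

# Grid-maximize and compare with the first order condition I = (K/chi)*(Q - 1)
grid = np.linspace(-5.0, 5.0, 200001)
I_grid = grid[np.argmax(A(grid))]
I_foc = (K_t / chi) * (Q_t - 1.0)

print(I_grid, I_foc)  # both approximately 1.5
```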

With this machinery in place, I can now define an equilibrium in which agents have heterogeneous beliefs about firm productivity:

Definition (Equilibrium): Given a time path of shocks \left\{ \eta_t, \tilde{\varepsilon}_t, \nu_t(i) \right\}, an equilibrium in this economy is a time path of individual quantities \left\{ C_t(i), B_t(i), W_t(i), \omega_t(i), \tau_t(i) \right\}, aggregate quantities \left\{ C_t, B_t, W_t, \omega_t, K_t^d, L_t^d, Y_t, K_t, I_t \right\}, signals \left\{ s_t(i) \right\} and prices \left\{ Q_t, r, R_t, w_t \right\} with the properties:

  1. \left\{ C_t(i), \omega_t(i) \right\} solve the households’ maximization problem given the vector of prices, initial wealth and the random sequences \left\{ \tilde{\varepsilon}_t, \tilde{\nu}_t(i) \right\};
  2. \left\{ K_t^d, L_t^d \right\} solve the representative firm’s maximization problem given the vector of prices;
  3. \left\{ I_t \right\} is the investment sector’s optimal policy given the vector of prices;
  4. \left\{ w_t \right\} clears the labor market, \left\{ Q_t \right\} clears the stock market and \left\{ R_t \right\} clears the market for capital services;
  5. There is a perfectly elastic supply of the consumption good and of bonds in world markets where bonds pay the interest rate r and the price of the consumption good is normalized to 1;
  6. \left\{ \tau_t(i) \right\} are defined such that all households enter each period with the same amount of wealth; and
  7. \left\{ B_t(i), C_t, B_t, W_t, \omega_t \right\} are given by the identities:

    (10)   \begin{align*} B_t(i) &= \left[ 1 - \omega_t(i) \right] \cdot \left( W_t(i) - C_t(i) \right) \\ X_t &= \int_0^1 X_t(i) \cdot di, \ \ \forall X \in \left\{ B, C, W \right\} \\ \omega_t &= \frac{Q_t \cdot K_{t+1}}{W_t - C_t} \end{align*}

This definition collapses down to the standard noisy rational expectations equilibrium when \sigma_{\tilde{\varepsilon}} = 0:

Definition (Rational Expectations Equilibrium): A rational expectations equilibrium is an equilibrium as defined above in which the volatility of inference shocks \sigma_{\tilde{\varepsilon}} = 0 so that \mathcal{E}_{i,t} = \mathbb{E}_{i,t}.

However, it also has a sensible interpretation even when agents’ beliefs share a common error component:

Definition (Near Rational Expectations Equilibrium): A near rational expectations equilibrium allows for \sigma_{\tilde{\varepsilon}} > 0. Such an equilibrium is k\% stable if the welfare gain to an individual household of obtaining rational expectations is less than k\% of consumption.

3. Financial Economy

In this section, I consider a world in which all agents hold beliefs that are slightly wrong in the same way. To formalize this idea, I define a new error term \varepsilon_t which represents the difference between the average expectation of total factor productivity at time t in a world where \sigma_{\tilde{\varepsilon}} > 0 and the average expectation in a world where \sigma_{\tilde{\varepsilon}} = 0:

(11)   \begin{align*} \varepsilon_t &= \gamma \cdot \tilde{\varepsilon}_t \\ &= \int_{\tilde{\varepsilon}_t>0} \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di - \int_{\tilde{\varepsilon}_t=0} \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di \end{align*}

In particular, by expressing \gamma = \varepsilon_t/\tilde{\varepsilon}_t as a ratio, we can ask: how much do market forces amplify (\gamma > 1) the existing common error?

To solve for the equilibrium values in the financial economy, I start with the 2 Euler equations linking consumption in subsequent periods to the riskless rate of return and the rate of return on equities. Let \Theta_t = \left\{ K_t, B_{t-1}, \eta_t, \eta_{t+1}, \tilde{\varepsilon}_t \right\} denote the state space. Then, I can express these 2 Euler equations as:

(12)   \begin{align*} \begin{split} C_t \left( \Theta_t, \nu_t(i) \right)^{-1} &= \beta \cdot \mathcal{E}_{i,t} \left[ \left( 1 + \tilde{r}_{t+1} \right) \cdot C_{t+1} \left( \Theta_{t+1}, \nu_{t+1}(i) \right)^{-1} \right] \\ C_t \left( \Theta_t, \nu_t(i) \right)^{-1} &= \beta \cdot \mathcal{E}_{i,t} \left[ \left( 1 + r \right) \cdot C_{t+1} \left( \Theta_{t+1}, \nu_{t+1}(i) \right)^{-1} \right] \end{split} \end{align*}

Next, given these 2 Euler equations, I now face a complicated problem of trying to figure out households’ optimal allocation of consumption, stock purchases and bond purchases given that 1) households’ expectations depend non-linearly on their beliefs about the current \eta_t and 2) asset prices depend non-linearly on the average belief about future productivity. In a related paper, Mertens (2009) shows how to execute a non-linear change of variables that transforms the problem into an optimization program that resembles a more standard noisy rational expectations equilibrium. After this change of variables, I write the stock price as:

(13)   \begin{align*} \begin{split} \hat{q}_t &= \int \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di \\ &= \int \mathbb{E}_{i,t} \left[ \eta_{t+1} \mid \hat{q}_t , s_t(i) \right] \cdot di + \tilde{\varepsilon}_t \end{split} \end{align*}

i.e., the stock price equals the market expectation of \eta_{t+1}.

I now proceed along the lines of the solution to a standard noisy rational expectations model with value \eta_{t+1} and noise term \tilde{\varepsilon}_t. I conjecture that the price \hat{q}_t is an affine function of the value and the noise term with parameters \pi_0, \pi_1 and \gamma:

(14)   \begin{align*} \hat{q}_t &= \pi_0 + \pi_1 \cdot \eta_{t+1} + \gamma \cdot \tilde{\varepsilon}_t \end{align*}

Thus, we can interpret \gamma as the loading on the noise term in this pricing rule. This is quite natural: the price \hat{q}_t is the average expected value of total factor productivity \eta_{t+1}, while \int_{\tilde{\varepsilon}_t=0} \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di is the price in a world with no common error (\sigma_{\tilde{\varepsilon}} = 0). Thus, the loading on the noise term should be \gamma, the amplification parameter defined above.

Next, using results from Mertens (2009), I write the true rational expectation of tomorrow’s total factor productivity, \mathbb{E}_{i,t} \left[ \eta_{t+1} \right], as an affine function of each agent’s signal and the prevailing price using the coefficients A_0, A_1 and A_2, which derive from the change of variables procedure that I do not outline here:

(15)   \begin{align*} \mathbb{E}_{i,t} \left[ \eta_{t+1} \right] &= A_0 + A_1 \cdot s_t(i) + A_2 \cdot \hat{q}_t \end{align*}

By adding in the common noise term and then summing across agents i \in [0,1], I get another affine expression below which links the average expectation of tomorrow’s total factor productivity to its true value and the amplified noise term:

(16)   \begin{align*} \begin{split} \int \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di &= \left( A_0 + A_2 \cdot \pi_0 \right) \\ &\qquad \qquad  + \left( A_1 + A_2 \cdot \pi_1 \right) \cdot \eta_{t+1} \\ &\qquad \qquad \qquad \qquad + \left( A_2 \cdot \gamma + 1 \right) \cdot \tilde{\varepsilon}_t \end{split} \end{align*}
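To make the derivation explicit, substitute the conjectured pricing rule (14) into the inference rule (15), add the common shock \tilde{\varepsilon}_t as in (6), and integrate over i, using the fact that the idiosyncratic noise averages out so that \int s_t(i) \cdot di = \eta_{t+1}:

\begin{align*} \int \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di &= \int \left( A_0 + A_1 \cdot s_t(i) + A_2 \cdot \hat{q}_t + \tilde{\varepsilon}_t \right) \cdot di \\ &= A_0 + A_1 \cdot \eta_{t+1} + A_2 \cdot \left( \pi_0 + \pi_1 \cdot \eta_{t+1} + \gamma \cdot \tilde{\varepsilon}_t \right) + \tilde{\varepsilon}_t \end{align*}

Collecting terms gives exactly the coefficients in (16).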

Using this expression, I can then show that as households look more and more to the price of capital when forming their expectations, the amplification of common shocks to expectations via the pricing mechanism grows:

Proposition (Inference Shock Amplifier): The more weight the households place on the market price of capital when forming their expectations about \eta_{t+1}, the larger is the error in market expectations relative to \tilde{\varepsilon}_t:

(17)   \begin{align*} \gamma &= \frac{1}{1 - A_2} \end{align*}

The proof of this proposition follows directly from matching coefficients:

Proof: Combining the equations for \hat{q}_t and \int \mathcal{E}_{i,t} \left[ \eta_{t+1} \right] \cdot di above yields a system of 3 equations and 3 unknowns:

(18)   \begin{align*} \pi_0 &= A_0 + A_2 \cdot \pi_0 \\ \pi_1 &= A_1 + A_2 \cdot \pi_1 \\ \gamma &= 1 + A_2 \cdot \gamma \end{align*}

Solving for the unknowns yields:

(19)   \begin{align*} \pi_0 &= \frac{A_0}{1-A_2} \\ \pi_1 &= \frac{A_1}{1 - A_2} \\ \gamma &= \frac{1}{1 - A_2} \end{align*}
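As a quick numerical check (a sketch of mine, with made-up values for A_0, A_1 and A_2, which in the paper come out of the change of variables procedure), the matching-coefficient system in (18) can be solved and compared against the closed forms in (19):

```python
import numpy as np

# Hypothetical inference coefficients; in the paper A_0, A_1 and A_2 come out
# of the change of variables procedure in Mertens (2009)
A0, A1, A2 = 0.1, 0.5, 0.4

# The matching-coefficient system in (18): each unknown x solves x = c + A2*x
coeffs = (1.0 - A2) * np.eye(3)
constants = np.array([A0, A1, 1.0])
pi0, pi1, gamma = np.linalg.solve(coeffs, constants)

# These agree with the closed forms in (19)
assert np.isclose(pi0, A0 / (1 - A2))
assert np.isclose(pi1, A1 / (1 - A2))
assert np.isclose(gamma, 1 / (1 - A2))

# The amplifier grows without bound as households lean harder on the price
print([round(1 / (1 - a2), 2) for a2 in (0.0, 0.4, 0.8)])  # [1.0, 1.67, 5.0]
```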

Next, I look at how much information prices can hold in regimes that vary based on the magnitude of the underlying common shock. For instance, consider the following thought experiment. Think about a world in which there was no common shock. If you knew nothing about asset values and then looked at prices, how precisely could you pin down those values? To within \$10? \$5? \$1? Now consider a world in which \tilde{\varepsilon}_t > 0 so that agents experience a common shock to their expectations. Agents might revise their expectations upward, hold more stocks as a result, and then influence others to do the same for capital-gains purposes, as the price depends on the average value of agents’ expectations. In such a world with a positive feedback loop, how much worse are prices at conveying the true value of the asset? If, in the world with no common shock, you could see a price of \$30 and know that the asset was worth something between \$29 and \$31, in a world with a common shock have the bounds now expanded to \$30 \pm \$5?

The proposition below dictates that the bounds will be increasing in the volatility of the common shock:

Proposition (Information Absorbency): The amount of information aggregated in stock prices decreases with the volatility of the common shock \sigma_{\tilde{\varepsilon}}:

(20)   \begin{align*} 0 &> \frac{\partial \pi_1}{\partial \sigma_{\tilde{\varepsilon}}} \end{align*}

The proof comes via Mathematica by taking the partial derivative of \pi_1 = A_1/(1 - A_2) with respect to \sigma_{\tilde{\varepsilon}}, which is bound up in the A_1 and A_2 terms, so I omit it here.

4. Real Economy

In the previous section, I showed how to derive equilibrium values for the financial economy and that the market becomes substantially less efficient as the volatility of the common shock increases due to the feedback loop between beliefs and asset prices. I now turn to the real economy and show that common errors in expectations distort aggregate consumption in the steady state as well. First, suppose that in equilibrium stock returns are log-normally distributed with conditional variance \sigma^2. Then we know that each agent’s optimal consumption plan is proportional to his wealth, as agents have log utility:

(21)   \begin{align*} C_t(i) &= \left( 1 - \beta \right) \cdot W_t(i) \end{align*}

What’s more, each agent’s optimal stock holdings also follow the standard log utility rule:

(22)   \begin{align*} \omega_t(i) &= \frac{\mathcal{E}_{i,t} \left[ 1 + \tilde{r}_{t+1} \right] - \left( 1 + r\right)}{\sigma^2} \end{align*}

Next, I characterize the steady state levels of stock holdings as well as the marginal product of capital:

Proposition (Steady State): The equilibrium has a unique stochastic steady state if and only if \beta < (1 + r)^{-1}. At the steady state, the aggregate share of savings held in stocks is:

(23)   \begin{align*} \omega_{ss} &= \sqrt{\frac{1}{\sigma^2} \cdot \left( \frac{1 - \beta}{\beta} - r \right)} \end{align*}

and the stochastic steady state capital stock is characterized by:

(24)   \begin{align*} F_K(K_{ss},L) &= \left( 1 + \delta \cdot \chi \right) \cdot \left( r + \omega_{ss} \cdot \sigma^2 + \delta \right) \end{align*}

The restriction on the time preference parameter \beta ensures that agents do not want to perpetually accumulate capital. The steady state stock holding links the gap between the time preference parameter and the riskless rate to agents’ demand for holding capital with return volatility \sigma. The full proof of this result is given below:

Proof: First, I characterize the steady state marginal product of capital. To do this, I start by linking it to the dividend payout rate:

(25)   \begin{align*} D_{t+1} &= e^{\eta_{t+1}} \cdot \left( \frac{d}{dK} F(K_{t+1}, L) \right) \\ \mathbb{E}_{ss} \left[ D_{ss} \right] &= \left( \frac{d}{dK} F(K_{ss}, L) \right) \end{align*}

Next, I observe that the excess return on capital can be linked to the marginal product of capital via the relationship:

(26)   \begin{align*} r + \omega_{ss} \cdot \sigma^2 &=  - \delta + \left( \frac{1}{1 + \delta \cdot \chi} \right) \cdot \left( \frac{d}{dK} F(K_{ss},L) \right) \end{align*}

This last statement yields the desired result after rearranging. Next, I turn to the steady state asset holdings, which fall out of the budget constraint. First, I rewrite the budget constraint so that it only contains terms from 2 periods:

(27)   \begin{align*} \left( 1 + r \right) \cdot B_{t-1} + &\left( Q_t \cdot \left( 1 - \delta \right) + D_t \right) \cdot K_t = Q_t \cdot K_{t+1} + B_t + C_t \\ &= Q_t \cdot \left( 1 - \delta \right) \cdot K_t + Q_t \cdot I_t + B_t + C_t \end{align*}

Next, I write the aggregate wealth, consumption and borrowing terms as functions of the steady state asset holdings:

(28)   \begin{align*} C_{ss} &= (1 - \beta) \cdot W_{ss} \\ \beta \cdot W_{ss} &= K_{ss} \cdot (1 + \delta \cdot \chi) + B_{ss} \\ B_{ss} &= \beta \cdot W_{ss} \cdot \left( 1 - \omega_{ss} \right) \end{align*}

Finally, I substitute these terms back in and rearrange to yield a result linking the steady state asset holdings to the constants:

(29)   \begin{align*} \left( 1 + \delta \cdot \chi \right) \cdot K_{ss} &= \beta \cdot W_{ss} \cdot \omega_{ss} \\ B_{ss} &= \left( \frac{1 - \omega_{ss}}{\omega_{ss}} \right) \cdot \left( 1 + \delta \cdot \chi \right) \cdot K_{ss} \end{align*}
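To put some illustrative numbers on the steady state (my own parameter choices, plus a Cobb-Douglas functional form F(K,L) = K^\alpha L^{1-\alpha} that the paper does not impose), equations (23) and (24) can be solved directly:

```python
import numpy as np

# Illustrative parameters (my own choices, not calibrated to the paper), plus
# an assumed Cobb-Douglas technology F(K, L) = K**alpha * L**(1 - alpha)
beta, r, sigma = 0.96, 0.02, 0.2
delta, chi, L, alpha = 0.1, 2.0, 1.0, 0.33

# Existence condition from the proposition: beta < (1 + r)**(-1)
assert beta < 1 / (1 + r)

# Steady state stock holdings from (23)
omega_ss = np.sqrt(((1 - beta) / beta - r) / sigma**2)

# Steady state capital from (24): F_K = (1 + delta*chi)*(r + omega*sigma^2 + delta);
# with Cobb-Douglas, F_K = alpha * K**(alpha - 1) * L**(1 - alpha), so invert it
rhs = (1 + delta * chi) * (r + omega_ss * sigma**2 + delta)
K_ss = L * (alpha / rhs) ** (1 / (1 - alpha))

print(round(omega_ss, 3), round(K_ss, 3))
```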

Next, I look at the steady state capital accumulation rate of the economy conditional on the volatility of the stock market:

Proposition (Capital Accumulation): An increase in the conditional variance of stock returns decreases the level of capital stock and total output:

(30)   \begin{align*} 0 &> \frac{\partial K_{ss}}{\partial \sigma} \end{align*}

This result says that the economy will acquire less and less capital as the volatility of stock returns increases. This result stems directly from the full derivative of the steady state capital level:

Proof: First, start with the equations above characterizing the steady state stock holdings \omega_{ss} and capital stock K_{ss}, and then take the total derivative with respect to \sigma:

(31)   \begin{align*} \frac{d}{d\sigma} K_{ss} &= \frac{1 + \delta \cdot \chi}{\frac{d^2}{(dK)^2} F(K_{ss},L)} \cdot \sqrt{\frac{1 - \beta}{\beta} - r} \end{align*}

Rearranging terms and signing the RHS yields the necessary relationship.
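The comparative static can also be verified numerically. The sketch below assumes a Cobb-Douglas technology F(K,L) = K^\alpha L^{1-\alpha} and arbitrary parameter values, neither of which come from the paper:

```python
import numpy as np

# Assumptions of this sketch (not the paper's): Cobb-Douglas technology
# F = K**alpha * L**(1 - alpha) and arbitrary parameter values
beta, r, delta, chi, L, alpha = 0.96, 0.02, 0.1, 2.0, 1.0, 0.33

def K_ss(sigma):
    # Steady state capital implied by combining (23) and (24)
    omega = np.sqrt(((1 - beta) / beta - r) / sigma**2)
    rhs = (1 + delta * chi) * (r + omega * sigma**2 + delta)
    return L * (alpha / rhs) ** (1 / (1 - alpha))

# The capital stock falls monotonically as return volatility rises
sigmas = [0.1, 0.2, 0.3, 0.4]
ks = [K_ss(s) for s in sigmas]
assert all(k1 > k2 for k1, k2 in zip(ks, ks[1:]))
print([round(k, 3) for k in ks])
```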

Thus, in this section I showed that the economy is worse off when stock market volatility increases as a result of a common error in expectations.

  1. See Tarek Hassan and Thomas Mertens.
  2. By contrast, in the standard noisy rational expectations setup, all outcomes are linearly related.


Notes: Budish (2011)

September 21, 2011 by Alex

1. Introduction

In this note, I outline the results in Budish (2011)1 for a presentation in Prof. Sargent’s reading group. Here are the slides themselves.

This paper introduces a new concept for clearing markets with indivisible goods and asset complementarities which is similar to a competitive equilibrium with equal incomes (CEEI)2. In particular, Budish (2011) extends this idea from the baseline setting with independent and perfectly divisible goods by allowing for both approximate market clearing as well as slightly unequal incomes. He then shows that the resulting equilibrium and market design concepts have the desirable qualities of being Pareto efficient, fair and strategy-proof (…given that these last 2 concepts have been appropriately adjusted).

2. Terminology and Notation

There is a set of N students denoted by \mathcal{S} = \{1,2,\ldots,N\}. There is a set of M courses, \mathcal{C} = \left\{ 1, \ldots, j, \ldots, M \right\}, with q_j \in \mathbb{Z}_+ seats in course j. A consumption bundle x_i for student i is a binary vector in \{0,1\}^M, with a 1 meaning that the student took the corresponding course and a 0 meaning that he didn’t. In this notation, the set of all possible schedules is the power set of \mathcal{C}, namely 2^{\mathcal{C}}. Each student i has von Neumann-Morgenstern preferences over allocations defined as u_i(\cdot): \{0,1\}^M\mapsto \mathbb{R}_+. Let \Psi \subset 2^{\mathcal{C}} be the set of admissible schedules, where the utility from an inadmissible schedule is 0. Formally, an economy is the object \mathcal{E} = \left( \mathcal{S}, \mathcal{C}, \left( q_j \right)_{j \in \mathcal{C}}, \Psi, \left( u_i \right)_{i \in \mathcal{S}} \right).

Suppose that each student gets assigned an endowment b_i \in \mathbb{R}_+ of fake currency and the market clears at prices p_j \in \mathbb{R} for each class j \in \mathcal{C}.

3. Approximate CEEI

With this terminology in hand, I now define the equilibrium concept:

Definition (Approximate CEEI): Take an economy \mathcal{E}. For this economy, the allocation \mathbf{x}, budgets \mathbf{b} and prices \mathbf{p} constitute an (\alpha,\beta)-Approximate CEEI if the following 3 conditions hold:

(1)   \begin{align*} x_i &= \arg \max_{x \in 2^{\mathcal{C}}} \left\{ u_i(x) \mid \mathbf{p} \ x \leq b_i \right\} \quad \forall i \in \mathcal{S} \\ \alpha &\geq \left\Vert z_1(\mathbf{p}),z_2(\mathbf{p}),\ldots,z_M(\mathbf{p}) \right\Vert_2 \\ 1 + \beta &\geq \max_{i \in \mathcal{S}} b_i \geq 1 = \min_{i' \in \mathcal{S}} b_{i'} \end{align*}

where z_j(\mathbf{p}) = \sum_{i \in \mathcal{S}} x_{ij} - q_j is the excess demand for course j and x_{ij} denotes the j^{th} component of x_i.

In words, \alpha controls the approximation of the market clearing condition and \beta controls the approximation of the equal income condition. For example, say that \alpha = 10; then there could be 2 classes each with 7 too many students, since \sqrt{7^2 + 7^2} \approx 10. Alternatively, there could be 1 class with 10 too many students. Turning to \beta, suppose that \beta = 0.05; then the student with the least fake class-seat-buying currency has 1 dollar while the wealthiest student has no more than 1.05 dollars.
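The 2 approximation conditions are easy to check mechanically. The toy allocation below is my own illustration, not from the paper:

```python
import numpy as np

# Toy example (my own numbers): 3 students, 3 courses with capacities q,
# a candidate allocation x and candidate budgets b
q = np.array([2, 2, 1])
x = np.array([[1, 1, 0],   # student 1's schedule
              [1, 0, 1],   # student 2's schedule
              [1, 1, 0]])  # student 3's schedule
b = np.array([1.0, 1.02, 1.04])

# Condition 2: excess demand z_j = seats demanded minus capacity, so this
# allocation is consistent with any alpha >= ||z||_2
z = x.sum(axis=0) - q      # course 1 is 1 seat over, courses 2 and 3 clear
alpha = np.linalg.norm(z)

# Condition 3: budgets live in [1, 1 + beta]
beta = b.max() - b.min()
assert b.min() == 1.0

print(z.tolist(), alpha, beta)
```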

Next, I give the central existence result:

Proposition (Equilibrium Existence): Define the following 2 new variables:

(2)   \begin{align*} k &= \max_{i \in \mathcal{S}} \max_{x \in \Psi} \vert x \vert \\ \sigma &= \min \left\{ 2 \cdot k, M \right\} \end{align*}

First, we have that for any \beta > 0, there exists a (\sqrt{\sigma \cdot M}/2 , \beta)-Approx CEEI.

Second, it is also the case that for any \beta > 0, any budget vector \mathbf{b} that lies in the [1,1+\beta] interval and any \varepsilon > 0, there exists a (\sqrt{\sigma \cdot M}/2 , \beta)-Approx CEEI with budgets \mathbf{b}^* that satisfy the condition:

(3)   \begin{align*} \varepsilon &> \vert b_i^* - b_i \vert \quad \forall i \in \mathcal{S} \end{align*}

What is this proposition saying in words? The first part says that for any arbitrarily small amount of budget inequality, there exists an (\alpha,\beta)-Approximate CEEI where the market clearing approximation error is bounded by \sqrt{\sigma \cdot M}/2. The second part captures the fact that, for any budget vector \mathbf{b} in the [1,1+\beta] interval that a school administrator might choose, there is an (\alpha,\beta)-Approximate CEEI with budgets really close to it. Put differently, a school administrator can just randomly assign the initial budget vectors. Thinking ahead, this second clause indicates that the budget dispersion, rather than the allocations themselves, plays a key role.

Before I sketch out the proof of this result, let me first walk through an illustrative example:

Example (2 Diamonds, 2 Rocks): Suppose that there are 2 buyers, Bob and Alice, who want to split 4 items between them:

(4)   \begin{align*} \mathcal{C} &= \left\{ {\Large \text{Diamond}}, {\small \text{Diamond}}, {\Large\text{Rock}}, {\small\text{Rock}}  \right\} \end{align*}

If both Bob and Alice have an equal endowment, then there will not exist a price vector for the stones that clears the market. At each price point, both buyers would want the exact same bundle, making prices a useless tool for dividing up the goods. However, suppose instead that Bob has \$1 while Alice has \$(1+\varepsilon). Then, Alice would be able to buy the big diamond at a price of \$(1 + \varepsilon). Given that Alice would then have no more money, Bob could then afford to purchase both the small diamond and the big rock.

Thus, by noising up the wealth distribution a little bit, prices can become a useful tool for dividing up indivisible goods. What’s more, although Bob still envies Alice’s big diamond, his envy is limited to a single good making this division as fair as possible given the inherent indivisibility of the goods.
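The example can be verified by brute force. The utilities and prices below are my own illustrative choices (the paper does not pin down specific numbers); demand(\cdot) searches all 16 bundles for the best affordable one:

```python
from itertools import combinations

# Illustrative additive utilities and prices; these numbers are my own, chosen
# so that the story in the example plays out
goods = ["BigD", "SmallD", "BigR", "SmallR"]
util = {"BigD": 12, "SmallD": 6, "BigR": 3, "SmallR": 1}
price = {"BigD": 1.05, "SmallD": 0.55, "BigR": 0.45, "SmallR": 0.01}

def demand(budget):
    """Brute-force the utility-maximizing affordable bundle over all 16 bundles."""
    best, best_u = set(), -1
    for r in range(len(goods) + 1):
        for bundle in combinations(goods, r):
            cost = sum(price[g] for g in bundle)
            u = sum(util[g] for g in bundle)
            if cost <= budget + 1e-9 and u > best_u:
                best, best_u = set(bundle), u
    return best

# With equal budgets, both buyers demand the same bundle at these prices
assert demand(1.0) == demand(1.0) == {"SmallD", "BigR"}
# With slightly unequal budgets, Alice outbids Bob for the big diamond and
# the 2 demands become disjoint
assert demand(1.05) == {"BigD"}
assert demand(1.05).isdisjoint(demand(1.0))
```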

I now turn to sketching the existence proof out in more detail below:

Proof: I want to show that there exists a tuple of prices, budgets and allocations (\mathbf{p},\mathbf{b},\mathbf{x}) that satisfies the 3 conditions in the definition above so long as \alpha \leq \sqrt{M \cdot \sigma} / 2 and \beta > 0. If we were in the standard Econ 101 world of Mas-Colell et al. (1995) Ch 16 with Walrasian equilibria, then we could use the smoothness and convexity of the problem to guarantee existence and derive the welfare theorems. Here though, I need to overcome a critical problem: due to indivisibilities and asset complementarities, the excess demand function is discontinuous and possibly non-convex. Thus, I need to bound the size of the aggregate impact of these discontinuities, make the problem convex, and then finally apply well known fixed point theorems to show that an equilibrium must exist.

The result is, in essence, that if people are given budgets with a little bit of noise from person to person, then there will exist a price vector \mathbf{p} such that no one would be strictly better off with a pairwise switch and markets almost clear. I prove a restricted version of the result showing the existence of an (M \cdot \sqrt{\sigma},\beta)-Approximate CEEI. The jump from this baseline result to the tighter \sqrt{\sigma \cdot M}/2 bound is purely mathematical, so for details please refer to the paper.

(Bound Discontinuities) The finiteness of the problem and the income inequality bound the aggregate size of the demand discontinuities. First, note that, at worst, a small change in prices can make an agent switch his entire portfolio from one bundle x_i to an entirely different bundle x_i' with x_i \cap x_i' = \emptyset. The Euclidean distance of the resulting change in that agent’s demand is at most \sqrt{\sigma}. To get a clearer idea of this magnitude, consider the M = 2 case: a price change can at most move an agent’s demand between opposite corners of \{0,1\}^2, a distance of \sqrt{2} = \sqrt{\sigma}.

Next, if all agents had the same income b_i = b_{i'} \ \forall i, i' \in \mathcal{S}, such a shift might lead to an N \cdot \sqrt{\sigma} change in the aggregate demand vector. However, if every agent has a unique budget b_i, then in general there will be at most M agents affected by a tiny price fluctuation. Why? Due to the variation in budget sets, there will always be some single agent i_j who is closest to his budget constraint hyperplane in the j^{th} direction.

Thus, a little change in prices will in general only cause an M \cdot \sqrt{\sigma} shift in the aggregate demand vector.

(Convexify) Next, I want to convexify the problem. Due to the demand discontinuities and asset complementarities, it may be the case that moving prices by \varsigma might make you demand less of asset j, but moving them by 2 \cdot \varsigma might make you demand more as the good you would have liked to buy is now too expensive and you have to settle. To do this, I use a limiting result. Consider a tatonnement process f(\mathbf{p}) such that:

(5)   \begin{align*} f(\mathbf{p}) &= \mathbf{p} + \mathbf{z}(\mathbf{p}) \end{align*}

where a fixed point of f(\cdot ) would represent a set of market clearing prices. Then convexify the mapping f(\cdot) as follows:

(6)   \begin{align*} F(\mathbf{p}) &= \mathtt{co} \left\{ \mathbf{y} \mid \exists \mathbf{p}_n \to \mathbf{p} \text{ s.t. } f(\mathbf{p}_n) \to \mathbf{y} \right\} \end{align*}

Cromme and Diener (1991) show that F(\cdot) is upper hemi-continuous and convex. Thus, if I can find a fixed point for F(\cdot) I know in some sense that there is a market clearing set of prices for the true economy nearby.

(Apply Fixed Point Theorems) Finally, I can apply standard fixed point theorems to this object F(\cdot) in order to show that an equilibrium exists. In particular, Cromme and Diener (1991) also show that if P \subset \mathbb{R}^n is compact and convex and f: P \mapsto P is any (possibly discontinuous) mapping, then there exists a point \mathbf{p} \in P whose displacement under f is bounded by the size of the discontinuities of f:

(7)   \begin{align*} \alpha &\geq \left\Vert f(\mathbf{p}) - \mathbf{p} \right\Vert \end{align*}

From the intuition above, we know that the discontinuities are bounded so that M \cdot \sqrt{\sigma} > \alpha.

This error of \alpha \propto M \cdot \sqrt{\sigma} is particularly nice because it doesn’t depend on either q_j or N. Thus, even as the market gets really large, the market clearing error remains the same size.

4. Efficiency

Given that the equilibrium exists, we also want to know its relationship to the first best outcome:

Proposition (Analogue to First Welfare Theorem): Suppose \left( \mathbf{x}, \mathbf{b}, \mathbf{p} \right) is an (\alpha,\beta)-Approx CEEI of the economy \mathcal{E}. Then, the allocation \mathbf{x} is a Pareto efficient allocation in \mathcal{E}.

This result comes almost for free and follows the standard First Welfare Theorem line of argument:

Proof: Suppose that there is some feasible allocation \mathbf{x}' that Pareto improves on \mathbf{x} for an agent i. Since we know that x_i was in agent i’s budget-feasible utility maximizing set, any strictly better bundle x_i' \neq x_i must have been unaffordable: \mathbf{p} \ x_i' > \mathbf{p} \ x_i. However, this means that the allocation \mathbf{x}' is infeasible as:

(8)   \begin{align*} \sum_{i = 1}^N \mathbf{p} \ x_i' &> \sum_{i=1}^N \mathbf{p} \ x_i \end{align*}

yielding a contradiction.

5. Fairness

In some sense, in a world with indivisible goods there will always be some unfairness. Bob or Alice can have the big diamond in the example above, but not both; thus, either Bob or Alice will envy the other’s allocation. In order to properly discuss the relative fairness of different equilibrium concepts, we need to introduce a measuring stick concept in order to compare degrees of unfairness:

Definition (Envy Bounded by a Single Good): An allocation \mathbf{x} satisfies envy bounded by a single good if, for any i,i' \in \mathcal{S}, either:

(9)   \begin{align*} u_i(x_i) &\geq u_i(x_{i'}) \end{align*}

…or, there exists some good j \in x_{i'} such that

(10)   \begin{align*} u_i(x_i) &\geq u_i(x_{i'} \setminus \{ j\}) \end{align*}

If an equilibrium satisfies the condition of envy bounded by a single good, then we could play the following game. You get to pick any 2 agents i and i' in the economy where u_i(x_{i'}) > u_i(x_i) and give them to me. Then, I will show you the good j in agent i'’s allocation such that, if you removed that good, agent i would no longer envy agent i'’s allocation. For instance, in the example above, we could remove the big diamond from Alice’s allocation and then Bob would be perfectly happy with his little diamond and big rock allocation. Below I give the main result showing that if income inequality is small enough, then an (\alpha,\beta)-Approximate CEEI will display envy bounded by a single good:
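This condition is simple enough to check mechanically. Below is a small checker (my own sketch) applied to the diamonds-and-rocks example from above, with made-up additive utilities:

```python
def envy_bounded_by_single_good(u, x_i, x_j):
    """Agent i's envy toward bundle x_j is acceptable if either i weakly
    prefers his own bundle or dropping some single good from x_j kills the envy."""
    if u(x_i) >= u(x_j):
        return True
    return any(u(x_i) >= u(x_j - {g}) for g in x_j)

# Made-up additive utilities for the diamonds-and-rocks example
vals = {"BigD": 12, "SmallD": 6, "BigR": 3, "SmallR": 1}
u = lambda bundle: sum(vals[g] for g in bundle)

alice = {"BigD", "SmallR"}
bob = {"SmallD", "BigR"}

# Bob envies Alice (9 < 13), but removing the big diamond cures it (9 >= 1)
assert u(bob) < u(alice)
assert envy_bounded_by_single_good(u, bob, alice)
# Alice does not envy Bob in the first place
assert envy_bounded_by_single_good(u, alice, bob)
```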

Proposition (Bounded Envy): For any economy \mathcal{E}, if \left( \mathbf{x}, \mathbf{b}, \mathbf{p} \right) is an (\alpha, \beta)-Approx CEEI with

(11)   \begin{align*} \beta &< \frac{1}{k-1} \end{align*}

…then \mathbf{x} satisfies the condition of envy bounded by a single good.

To get an idea of where this result is coming from, return to the example with Alice and Bob above. In this example, by giving Alice a little bit more income in the beginning, she was able to buy the big diamond; however, because she then had no money left, Bob could buy the small diamond and the big rock. This intuition would break down if we gave Alice, say, 2 + \varepsilon times the amount of money as Bob so that she could buy both the big and small diamonds. Thus, we should suspect that envy bounded by a single good will fall out of an (\alpha,\beta)-Approximate CEEI in a world where the income inequality is not so large. The proof below makes this intuition a bit more precise:

Proof: Suppose, for contradiction, that agent i envies agent i'’s allocation and this envy is not bounded by a single good. Let k' \leq k denote the number of goods in bundle x_{i'} envied by agent i. Thus, for each of these goods, we could remove just 1 of them and agent i would still prefer this bundle to his own:

(12)   \begin{align*} u_i(x_{i'} \setminus \left\{ j_1 \right\} ) &> u_i(x_i) \\ &\vdots \\ u_i(x_{i'} \setminus \left\{ j_{k'} \right\} ) &> u_i(x_i) \end{align*}

What’s more, it has to be the case that agent i cannot afford any of these bundles otherwise he would have bought them in the first place:

(13)   \begin{align*} b_{i'} &\geq \mathbf{p} \ (x_{i'} \setminus \left\{ j_1 \right\} ) > b_i \\ &\vdots \\ b_{i'} &\geq \mathbf{p} \ (x_{i'} \setminus \left\{ j_{k'} \right\} ) > b_i \end{align*}

By summing up these budget constraint inequalities we get the result that:

(14)   \begin{align*} (k'-1) \cdot b_{i'} &\geq (k'-1) \cdot (\mathbf{p} \ x_{i'}) > k' \cdot b_i \\ \frac{b_{i'}}{b_i} &> \frac{k'}{k'-1} \end{align*}

Since k' \leq k, we have k'/(k'-1) \geq k/(k-1), so the inequality above gives b_{i'}/b_i \geq k/(k-1) = 1 + (k-1)^{-1}. However, we know that the ratio b_{i'}/b_i can be at most 1+\beta, so if \beta < (k-1)^{-1} we have a contradiction!
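To make the envy bounded by a single good condition concrete, here is a small sketch in Python (my own illustration, not code from the paper) that checks the condition for the Alice and Bob example, assuming additive utilities over hypothetical item values:

```python
# Hypothetical item values for the Alice/Bob example (an assumption
# for illustration; the paper allows general ordinal preferences).
item_value = {"big_diamond": 10, "small_diamond": 6, "big_rock": 3}

def u(bundle):
    """Additive utility over a bundle (set) of goods."""
    return sum(item_value[g] for g in bundle)

def envy_bounded_by_single_good(x_i, x_j):
    """True if agent i does not envy bundle x_j, or if removing some
    single good from x_j eliminates the envy."""
    if u(x_i) >= u(x_j):
        return True
    return any(u(x_i) >= u(x_j - {g}) for g in x_j)

alice = {"big_diamond"}
bob = {"small_diamond", "big_rock"}
# Bob envies Alice (9 < 10), but dropping the big diamond kills the envy.
print(envy_bounded_by_single_good(bob, alice))  # True
```

By contrast, an agent holding only the big rock would still envy a bundle containing both diamonds after removing either one, so the condition would fail in that case.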

6. Strategy Proof-ness

Finally, I turn to strategy proof-ness. As this is a property of the actual algorithm for assigning agents to allocations and not of the stopping condition for this algorithm (i.e., the equilibrium), I first need to outline this algorithm:

Definition (Approximate CEEI Mechanism): Define the algorithm below as the Approximate CEEI Mechanism:

  1. Each agent i reports her utility function \hat{u}_i.
  2. Check for (0,0)-Approximate CEEI’s.
  3. If non-empty:
    1. Choose random (\mathbf{x}, \mathbf{b}, \mathbf{p}).
  4. If empty:
    1. Choose target budget \mathbf{b}' uniformly from [1,1+\beta] with \beta < \min \left\{ N^{-1}, (k-1)^{-1} \right\}.
    2. Set \varepsilon \approx 0, \delta < 1 - N \cdot \beta and \alpha \leq \sqrt{\sigma \cdot M}/2.
    3. Compute set of feasible (\alpha,\beta)-Approx.\ CEEI’s.
    4. Choose random (\mathbf{x}, \mathbf{b}, \mathbf{p}) from set with minimum \alpha and \Vert \mathbf{b} - \mathbf{b}' \Vert small.

In words, this algorithm says to first solicit preferences and check for exact market clearing with prices. If no such solution exists, compute the set of (\sqrt{\sigma \cdot M}/2, \beta)-Approximate CEEI with \beta sufficiently small and choose a market clearing set of prices and allocations randomly from this set.
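As a minimal sketch of the budget drawing step 4.1 (with illustrative values of N and k that are not from the paper), note that capping \beta below both N^{-1} and (k-1)^{-1} keeps the drawn budgets nearly equal:

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 10, 5                       # illustrative: number of agents, max bundle size
beta = 0.9 * min(1 / N, 1 / (k - 1))

# Step 4.1: target budgets drawn uniformly from [1, 1 + beta].
budgets = rng.uniform(1.0, 1 + beta, size=N)
print(budgets.max() / budgets.min() <= 1 + beta)  # True
```

Keeping the richest-to-poorest budget ratio below 1 + \beta is exactly what the bounded envy result of Section 5 requires.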

Next, I want to show that this algorithm is strategy proof if all of the agents behave as price takers. This seems like a reasonable assumption given that the equilibrium chooser (…think about a business school administrator) randomly selects from the set of admissible equilibria at the end of the period. One way to formalize this price taker idea would be to think about replacing each agent in the economy with a unit mass of identical agents:

Definition (Continuum Replication): The continuum replication of an economy \mathcal{E}, written as:

(15)   \begin{align*} \mathcal{E}^\infty &= \left( \mathcal{S}^\infty, \mathcal{C}, \left( q_j \right)_{j=1}^M, \left( \Psi_i \right) _{i \in \mathcal{S}^\infty}, \left( u_i \right)_{i \in \mathcal{S}^\infty} \right) \end{align*}

can be constructed by replacing each agent in the original economy with a unit mass of identical agents: \mathcal{S}^\infty = (0,N], so that agent 1 is replaced with the mass (0,1], agent 2 with the mass (1,2], agent 3 with the mass (2,3], and so on…

Then, given this replacement, we can define a strategy proof-ness concept for price taking agents:

Definition (Strategy Proof in the Large): A mechanism is strategy proof in the large if it is exactly strategy proof in the continuum replication of any finite economy.

This yields the final result:

Proposition (Strategy-Proofness): The Approximate CEEI Mechanism is SPITL.

Proof: Pick an economy \mathcal{E} and consider its continuum replication \mathcal{E}^\infty. Consider agent i \in \mathcal{S}^\infty and fix all other agents’ reports. Agent i has measure 0 and so cannot affect prices. By the definition of an approximate CEEI, agent i does best by truth telling given budget b_i; otherwise the maximization condition would be violated.

7. Conclusion

I chose this paper to present because it has 2 interesting takeaways: First, this paper outlines the actual mechanics of market clearing in a common real world setting. In standard macroeconomic and financial theory, this process is a bit of a black box. While the goods usually considered in these matching models are things like class and job allocations3, elementary school allocations4 and organ donor matching5, you could think about alternative settings that might be fitting in macro-finance like buy-sell trade matching at high frequencies or even capital good allocations at lower frequencies in a market with financial constraints where it is difficult to get loans to make side payments.

Second, the paper presents a model in which, in order to constrain the extent of the supply-demand mismatch, a bit of inequality (…a quantity which I defined in more detail above) is required to achieve the equilibrium. Heterogeneity in wealth actually allows agents to equitably decide who gets indivisible assets using prices. Thus, wealth inequality has a new and different role in effectively clearing markets.

  1. Website: Eric Budish at Chicago Booth. ↩
  2. See Varian (1974). ↩
  3. See Hylland and Zeckhauser (1979), Roth (1982) and Roth and Peranson (1999). ↩
  4. See Abdulkadiroglu et al. (2005), Abdulkadiroglu et al. (2005) or Pathak and Sonmez (2008). ↩
  5. See Roth et al. (2004). ↩

Filed Under: Uncategorized

Financial Econometrics Software

September 13, 2011 by Alex

1. Introduction

In this note I outline the basic facts and rules of thumb about financial econometric software that are relevant for Rob Engle’s Fall 2011 Financial Econometrics (2) PhD course.1 First, I give a quick introduction to EViews and point to some more in depth examples. Next, I outline the benefits and limitations of using EViews and suggest that econometric derivations should be checked via numerical simulations in a more suitable programming/statistical language such as MATLAB, R or Scientific Python. My general feeling is that EViews is an excellent point of departure for doing exploratory data analysis and initial hypothesis testing; however, when doing more complicated statistical procedures, MATLAB, R or SciPy is a better choice. Finally, I conclude with some helpful resources for documenting and warehousing code.

2. Using EViews

EViews is a statistical/econometric software package that is useful for getting quick summaries of financial time series using a WYSIWYG/point-and-click interface. To my mind, EViews is to MATLAB or R what a Google Doc2 is to a LaTeX file. In the same way that you should veer towards LaTeX rather than a Word Document if you are interested in writing a very technical article with lots of equations, you should head towards MATLAB or R if you want to do any serious simulations or custom econometrics; however, no one in their right mind would bother writing a quick note or to do list in TeX. If you want to do an initial exploration of the properties of a time series, EViews is a good place to start.

Below I take a quick look at the sort of procedures EViews is well equipped to tackle. For a set of more in depth examples, I found that Dick Startz and Dave Smant had good introductions and sample code to play with.

First, I load in a \mathtt{.wf1} EViews workspace file containing the adjusted closing price of BNP Paribas. After loading the data and 2 mouse clicks (selecting “show” and then “graph”), I can display the time series along with the unconditional price distribution.

Adjusted close for BNP Paribas from 2003 to 2011.

The summary statistics for this unconditional distribution are available using the “stats” menu icon on the upper right. This option gives you the standard mean, median, min, max and standard deviation of the observations just as in MATLAB, R or STATA. It also gives you 2 higher moments, skewness and kurtosis, as well as the Jarque-Bera test for normality as the software is tailor made for financial econometrics. As a reference, this is a similar print out to the one available using the RMetrics suite in R.

Summary statistics for the daily adjusted closing price of BNP Paribas over the period from 2003 to 2011.

If you want to look at returns rather than prices, then in the command line you would enter the command below to generate (“\mathtt{genr}“) the new series:

(1)   \begin{equation*} \mathtt{genr} \ \mathtt{ret\_bnp} \ \mathtt{=} \ \mathtt{dlog} \mathtt{(} \mathtt{bnp\_adj\_close} \mathtt{)} \end{equation*}

This new series can then be inspected in the exact same way as the original price time series:

Summary statistics for the daily return series for BNP Paribas for the period from 2003 to 2011.
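For readers without an EViews license, the same dlog transformation and summary statistics take a few lines in Python. This is just a sketch on synthetic prices, since I am not reproducing the BNP workfile here:

```python
import numpy as np
from scipy import stats

# Synthetic price series standing in for bnp_adj_close.
rng = np.random.default_rng(1)
prices = 50 * np.exp(np.cumsum(0.01 * rng.standard_normal(2000)))

ret = np.diff(np.log(prices))   # EViews: genr ret_bnp = dlog(bnp_adj_close)

# Mean, std, skewness, kurtosis and the Jarque-Bera normality test.
print(ret.mean(), ret.std(), stats.skew(ret), stats.kurtosis(ret))
jb_stat, jb_p = stats.jarque_bera(ret)
```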

3. Benefits of Simulation

EViews is a nice tool for exploratory data analysis and quick intuition building experiments. However, for more serious econometric work we need to move to a more complete programming language. What’s more, because EViews includes so many out of the box statistical tests (e.g., the Jarque-Bera test for normality), it is not a great environment within which to learn econometric theory.3 At the end of the day, if you want to learn the econometrics, you need to be able to code it up.

To be more concrete, consider problem 4 in the first homework assignment on blackboard which asks you to consider the ARCH model below:

(2)   \begin{align*} r_t &= \sqrt{h_t} \cdot z_t \\ h_t &= \omega + \delta \cdot h_{t-1} + \alpha \cdot \frac{r_{t-1}^2 - h_{t-1}}{\sqrt{h_{t-1}}} \end{align*}

where h_t is the conditional volatility and r_t is a return, and then work out some properties of the estimator \mathbb{E}_t\left[ r_{t+2}^2\right].

The analytical solution to this problem can be derived by recursively substituting in the 1 step ahead formula for returns and then applying the law of iterated expectations. However, this procedure involves a fair bit of algebra4, which tends to obscure the intuition and lead to silly errors. For this sort of question, you really want to be able to check your work in a constructive way, and this is where numerical simulations come in extremely handy as they allow you to both check your work and visualize comparative statics. There is no reason why you should ever deliver a homework question of this sort with a wrong answer. Use R, MATLAB, SciPy or rocks.
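To give a flavor of what such a check looks like (a sketch with made-up parameter values, not the homework solution): iterating equation (2) forward one step and using \mathbb{E}_t[r_{t+1}^2] = h_{t+1} suggests the candidate answer \mathbb{E}_t[r_{t+2}^2] = \omega + \delta \cdot h_{t+1}, and a quick Monte Carlo confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, delta, alpha = 0.05, 0.90, 0.10   # hypothetical parameters
h_t1 = 0.20                              # h_{t+1}, known at time t

# Draw z_{t+1}, form r_{t+1}^2, and roll the volatility recursion to t+2.
z1 = rng.standard_normal(1_000_000)
r1_sq = h_t1 * z1**2
h_t2 = omega + delta * h_t1 + alpha * (r1_sq - h_t1) / np.sqrt(h_t1)

mc = h_t2.mean()                  # E_t[r_{t+2}^2] = E_t[h_{t+2}] since E[z^2] = 1
analytic = omega + delta * h_t1   # candidate closed form
print(mc, analytic)               # agree up to Monte Carlo error
```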

As an example, take a look at my previous post in which I walk through the properties of the Hodrick (1992) standard errors for overlapping returns.

4. Document and Share Your Code!

How many additional citations and article views does John Cochrane get simply because he posts the data and code to all his papers in an easy to find location (here) on his website?

There are a variety of ways to dynamically document your code as you work. For instance, see Doxygen for Python and Roxygen for R, or use Cell-Mode for MATLAB. If you use Emacs, take a look at Org-Mode with Org-Babel as an IDE for any language you could ever want. For EViews you really don’t have many options; all you can do is copy and paste the output into a word document.

  1. As always, the usual disclaimer applies: these are my thoughts and opinions. ↩
  2. You can actually use LaTeX equations in Google Docs now, though only in a restricted sense. Nevertheless, for short projects this seems like a nice tool for collaborating with distant coauthors as Google can take care of the version control. ↩
  3. This section has been updated in response to a comment. Upon re-reading, my phrasing was harsher than I intended. ↩
  4. Use Mathematica or SageMath for symbolic results. Don’t screw up simple algebra. You want to focus on the economics/finance, not the algebra. ↩


Understanding Long Run Regressions using the Wave Function

August 29, 2011 by Alex

1. Introduction

In this post, I show how long run predictive regressions like the ones studied in Fama and French (1988) or Campbell (2003) can be understood using the wave equation, a second order partial differential equation, rather than sums of correlations as in Cochrane (2005). First, in Section 2 I introduce the idea of a long run regression and relate the coefficient of interest, \beta(h), where h is the horizon, to the auto-correlations of returns. The beta of a regression of the returns over the next h periods on today’s returns is a sum of auto-correlations, so long run regressions are able to identify minute amounts of return predictability if this predictability is persistent. Then, in Section 3 I show how to model this same phenomenon using the wave equation from classical mechanics.

2. Long Run Regressions

What is a long run regression? Suppose that we are looking at annual data with years indexed by t = 0, 1, 2\ldots Fama and French (1988) then run the regression below where h \geq 1 is the investment horizon:

(1)   \begin{align*} r_{t \to (t+h)} &= \alpha(h) + \beta(h) \cdot r_{(t-1) \to t} + \varepsilon_{t \to (t+h)} \end{align*}

This regression captures the relationship between the realized returns over the past year and the expected returns over the next h years. A \beta(h) > 0 would mean that high returns in the past year predict high returns over the next h years and vice versa. I want to map this \beta(h) estimate into a formula composed of auto-correlations rather than just variances and covariances, as I want to understand how varying the time horizon changes the \beta(h) estimate. To do this, I use results from Poterba and Summers (1988) who studied the variance ratio \mathtt{VR}(h) of returns at horizon h. If returns were iid, then we could write:

(2)   \begin{align*} \mathbb{V} \left[ r_{t \to (t+h)} \right] &= \mathbb{V} \left[ r_{t \to (t+1)} +  r_{(t+1) \to (t+2)} + \dots + r_{(t+h-1) \to (t+h)}\right] \\ &= h \cdot \mathbb{V} \left[  r_{t \to (t+1)}\right] \end{align*}

Then, I know that I can relate the variance ratio to a weighted sum of auto-correlations as follows using the rules of variances of sums:

(3)   \begin{align*} \mathtt{VR}(h) &= \frac{1}{h} \cdot \frac{\mathbb{V} \left[ r_{t \to (t+h)} \right]}{\mathbb{V} \left[ r_{t \to (t+1)} \right]} \\ &= \frac{1}{h} \cdot \frac{\mathbb{V} \left[ r_{t \to (t+1)} +  r_{(t+1) \to (t+2)} + \dots + r_{(t+h-1) \to (t+h)}\right]}{\mathbb{V} \left[ r_{t \to (t+1)} \right]} \\ &= 1 + \frac{2}{h} \cdot \sum_{i=1}^{h-1} \left( h - i \right) \cdot \rho_i \end{align*}

Up to the iid benchmark of 1, the variance ratio is just a weighted sum of auto-correlations. The (h - i) weights come from the fact that a window of h one period returns contains h-1 pairs at lag 1, h-2 pairs at lag 2, and so on. I use the variable \rho_i to capture the i period ahead auto-correlation:

(4)   \begin{align*} \rho_i &= \mathtt{cor}\left[ r_{(t+i-1) \to (t+i)}, r_{(t-1) \to t} \right] \end{align*}

Now, using the fact that the \beta(h) term is just the ratio of a covariance term and a variance term, I can solve for \beta(h) as a sum of auto-correlations using the same method:

(5)   \begin{align*} \beta(h) &= \frac{\mathbb{C} \left[ r_{t \to (t+h)}, r_{(t-1) \to t} \right]}{\mathbb{V}\left[ r_{(t-1) \to t} \right]} \\ &= \frac{\sum_{i=1}^h \mathbb{C} \left[ r_{(t+i-1) \to (t+i)}, r_{(t-1) \to t} \right]}{\mathbb{V}\left[ r_{(t-1) \to t} \right]} \\ &= \sum_{i=1}^h \rho_i \end{align*}

This formulation tells us that if returns are an auto-regressive process, then the long run regression coefficient \beta(h) cumulates the auto-correlations at horizons 1 through h, so even minute but persistent return predictability adds up to a detectable coefficient.
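As a sanity check on this logic (a sketch with illustrative parameters), simulate AR(1) returns, for which \rho_i = \rho^i, and compare the estimated long run slope to the sum of the first h auto-correlations:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
rho, T, h = 0.3, 1_000_000, 4
eps = rng.standard_normal(T)
r = lfilter([1.0], [1.0, -rho], eps)   # AR(1): r_t = rho * r_{t-1} + eps_t

# y_t = r_{t+1} + ... + r_{t+h} via cumulative sums.
c = np.concatenate([[0.0], np.cumsum(r)])
y = c[1 + h:] - c[1:-h]
x = r[:T - h]

beta_hat = np.cov(y, x)[0, 1] / x.var()           # long run slope estimate
predicted = sum(rho**i for i in range(1, h + 1))  # sum of auto-correlations
print(beta_hat, predicted)                        # agree up to sampling error
```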

3. The Wave Function

It turns out that we can model this same behavior using wave functions (i.e., a well chosen combination of sine and cosine functions) rather than auto-correlation coefficients. As suggestive evidence of why this approach might be plausible, consider Table 20.5 from Cochrane (2005) which finds that the predictability of log returns varies cyclically with the time horizon using annual data from 1926-1996 with coefficients ranging over the interval \pm 0.30 with a period of about 1 cycle every 10 years:

(6)   \begin{align*} \begin{array}{l|cccccc} & 1 & 2 & 3 & 5 & 7 & 10 \\ \hline \hline \beta(h) & 0.08 & -0.15 & -0.22 & -0.04 & 0.24 & 0.08 \end{array} \end{align*}

Using this new formulation is helpful as it makes clear how combinations of return processes vibrating at different frequencies might be added or subtracted from one another via an analogy to Fourier analysis. Waves lie in frequency space which is indexed by horizon h and time t. Instead of thinking about an auto-regression equation, consider the following second order differential equation:

(7)   \begin{align*} \frac{\partial^2 r}{(\partial h)^2} &= \frac{1}{\phi^2} \cdot \frac{\partial^2 r}{(\partial t)^2} \end{align*}

This equation says that the acceleration of returns with respect to the time horizon (i.e., the rate at which returns are increasing with respect to how long you hold onto an asset) is equal to the acceleration of returns with respect to time (i.e., how quickly the properties of the asset you are holding onto change with respect to time) scaled by the constant term 1/\phi^2. The constant \phi captures the mean reversion of the return process. Put differently, if you found an asset whose return was increasing at an increasing rate in the holding period, then you would want to hold onto that asset as long as possible. However, this wave equation says that the properties of the return are changing over time, and the higher the acceleration of returns with respect to the time horizon, the faster the properties of the return have to be changing. The constant that regulates this relationship is \phi—the rate at which asset properties change in order to maintain the standing wave.

To solve this partial differential equation, I perform the change of variables below:

(8)   \begin{align*} r = H(h) \cdot T(t) \end{align*}

This yields 2 separate equations connected via a negative constant -\theta^2 as dictated by the physical properties of the problem:

(9)   \begin{align*} - \theta^2 &= \frac{1}{H} \cdot \frac{d^2 H}{(dh)^2} \\ &= \frac{1}{\phi^2} \cdot \frac{1}{T} \cdot \frac{d^2 T}{(dt)^2} \end{align*}

From college math courses (e.g., see Boas (2006) Ch: 13, Sec: 4.), we know that differential equations of this type have solutions that are products of sines and cosines; however, I know that r has to be 0 at h=0 as the property of no arbitrage dictates that I should never be able to earn excess returns without holding onto risk for some positive increment of time (…even if that increment is really small). Thus, I get the functional form:

(10)   \begin{align*} r &= \sum_{n=1}^\infty f_n \cdot \sin \left[ \frac{n \cdot \pi \cdot h}{\overline{h}} \right] \cdot \cos \left[ \frac{n \cdot \pi \cdot \phi \cdot t}{\overline{h}} \right] \\ &= \frac{8 \cdot \sigma_r}{\pi^2} \cdot \left( \sin\left[ \frac{\pi \cdot h}{\overline{h}} \right] \cdot \cos \left[ \frac{\pi \cdot \phi \cdot t}{\overline{h}} \right] \right. \\ &\qquad \qquad \left. - \frac{1}{9} \cdot \sin \left[ \frac{3 \cdot \pi \cdot h}{\overline{h}} \right] \cdot \cos \left[ \frac{3 \cdot \pi \cdot \phi \cdot t}{\overline{h}} \right] + \dots \right) \end{align*}

1 period ahead returns at different time horizons from 0 to \overline{h}.

This solution gives the dynamics of the short rate process at each time horizon given an initial pluck of length \sigma_r at the horizon \overline{h}/n. Put differently, if you thought about attaching the return process of length \overline{h} to 2 fixed end-points, then this solution relates where the short rate process at each and every horizon h \in [0,\overline{h}] is, given any observation of the short rate process on the interval.

Thus, the predictive regression coefficient will be:

(11)   \begin{align*} \beta(h) &= \frac{\mathbb{C} \left[ r_{t \to (t+h)}, r_{(t-1) \to t} \right]}{\mathbb{V} \left[ r_{(t-1) \to t} \right]} \\ &= \frac{\mathbb{C} \left[ \sum_{i=1}^h \sin \left\{ \frac{\pi \cdot i}{10} \right\} \cdot \cos \left\{\frac{\pi \cdot 0.12 \cdot t}{10} \right\}, \sin \left\{\frac{\pi \cdot -1}{10} \right\} \cdot \cos \left\{ \frac{\pi \cdot 0.12 \cdot t}{10} \right\} \right]}{\mathbb{V} \left[ \sin \left\{ \frac{\pi \cdot -1}{10} \right\} \cdot \cos \left\{\frac{\pi \cdot 0.12 \cdot t}{10} \right\} \right]} \end{align*}

Note that for both the covariance and variance operators, any terms without a t in them are constants. Thus, we can see again that the \cos terms will drop out and we will get a sum of \sin terms just as before.
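Finally, a quick finite difference sketch (using the illustrative values \overline{h} = 10 and \phi = 0.12 from above) verifies that the separated solution does satisfy the wave equation (7):

```python
import numpy as np

hbar, phi = 10.0, 0.12
r = lambda h, t: np.sin(np.pi * h / hbar) * np.cos(np.pi * phi * t / hbar)

# Central second differences in h and in t at an arbitrary interior point.
h0, t0, d = 3.0, 5.0, 1e-3
d2_dh2 = (r(h0 + d, t0) - 2 * r(h0, t0) + r(h0 - d, t0)) / d**2
d2_dt2 = (r(h0, t0 + d) - 2 * r(h0, t0) + r(h0, t0 - d)) / d**2
print(d2_dh2, d2_dt2 / phi**2)   # the two sides of (7) agree
```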


Geometric Interpretation of Noisy Rational Expectations Equilibrium

August 27, 2011 by Alex

1. Introduction

In this post, I solve a simple noisy rational expectations equilibrium model from Grossman and Stiglitz (1980) and then give a geometric interpretation of their result. First, in Section 2 I set up and solve a noisy rational expectations model. Then, in Section 3 I show how to display the 2 linear projections embedded in the model on a 3D figure. I found the figure to be a useful way of keeping track of the assumptions in more complicated settings.

2. Solution

Consider a world with a single period and only 1 asset with price p and aggregate demand x:

(1)   \begin{align*} p &= a + b \cdot x \\ &= a + b \cdot \left( z + \varepsilon \right) \end{align*}

The coefficient a denotes the average price while the coefficient b represents the price’s responsiveness to aggregate demand changes. i.e., b represents the amount by which a restaurant would change its prices if all of a sudden twice as many people started showing up each evening. Suppose that there are some traders with knowledge of the true value of the asset v, but also other traders who trade randomly and demand an amount \varepsilon \sim \mathtt{N}(0,\sigma_{\varepsilon}^2). Suppose that the value of the asset is drawn from a distribution \mathtt{N}(\mu_v,\sigma_v^2). The coefficients a and b in the equation above are equilibrium objects which I will solve for below, as is the informed agents’ demand z, which is also linear in the asset value with coefficients c and d:

(2)   \begin{align*} z = c + d \cdot v \end{align*}

There is a market maker who sets the equilibrium price in order to break even. Let \Pi denote the informed agent’s utility from trading:

(3)   \begin{align*} \Pi &= \max_{z} \left\{ \mathbb{E} \left[ (v - p) \cdot z \mid v \right] \right\} \\ &= \max_{z} \left\{ \mathbb{E} \left[ (v - a - b \cdot z - b \cdot \varepsilon) \cdot z \mid v \right] \right\} \end{align*}

Differentiating yields an expression for the optimal holdings of the informed traders given the true asset value:

(4)   \begin{align*} z &= - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v \end{align*}

Next, to solve for the coefficient values (a, b, c, d) as functions of the model primitives (\mu_v, \sigma_v, \sigma_{\varepsilon}), I enforce the break even condition for the market maker which demands that the price of the asset be equal to the expected value of the asset conditional on observing the aggregate asset demand:

(5)   \begin{align*} p &= \mathbb{E} \left[ v \mid x \right] \\ &= \mathbb{E} \left[ v \right] + \frac{\mathbb{C} \left[ x, v\right]}{\mathbb{V}\left[ x \right]} \cdot \left( x - \mathbb{E}[x] \right) \\ &= \mu_v + \frac{\mathbb{C} \left[ - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v + \varepsilon, v\right]}{\mathbb{V}\left[ - \frac{a}{2 \cdot b} + \left( \frac{1}{2 \cdot b} \right) \cdot v + \varepsilon \right]} \cdot \left( x - \left[ - \frac{a}{2 \cdot b} + \frac{\mu_v}{2 \cdot b} \right] \right) \\ &= \mu_v + \left( \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \right) \cdot \left( \frac{a}{2 \cdot b} - \frac{\mu_v}{2 \cdot b} \right) \\ &\qquad \qquad + \left( \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \right) \cdot x \end{align*}

Enforcing this condition leads to an expression for p that is linear in the aggregate asset demand x. Thus, I observe that b must be equal to the coefficient on x from the equation above, and solve for b:

(6)   \begin{align*} b &= \frac{\left( \frac{1}{2 \cdot b} \right) \cdot \sigma_v^2}{ \left( \frac{1}{2 \cdot b} \right)^2 \cdot \sigma_v^2 + \sigma_{\varepsilon}^2} \\ &= \frac{\sigma_v}{2 \cdot \sigma_{\varepsilon}} \end{align*}

We see that the market maker tends to change prices more in response to an aggregate demand shock when the asset value is more volatile and when the noise trader demand is less volatile–i.e., when asset value changes are more likely and when there is less demand noise for the informed traders to hide behind.
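To double check this algebra (a sketch with made-up parameter values), simulate the model at the candidate equilibrium and verify that the regression of v on aggregate demand x recovers the slope b = \sigma_v / (2 \cdot \sigma_{\varepsilon}):

```python
import numpy as np

rng = np.random.default_rng(3)
mu_v, sigma_v, sigma_eps = 1.0, 2.0, 0.5   # hypothetical primitives
b = sigma_v / (2 * sigma_eps)              # candidate equilibrium slope
a = mu_v                                   # matching intercepts in (5) gives a = mu_v

n = 1_000_000
v = rng.normal(mu_v, sigma_v, n)
eps = rng.normal(0.0, sigma_eps, n)
z = -a / (2 * b) + v / (2 * b)             # informed demand from eq. (4)
x = z + eps                                # aggregate demand

slope = np.cov(v, x)[0, 1] / x.var()       # C[x, v] / V[x]
print(slope, b)                            # agree up to sampling error
```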

3. Geometric Interpretation

The plot below captures the essence of the intuition embedded in the noisy rational expectations model. The core idea is that the market maker cannot precisely distinguish a random fluctuation in demand from a shift in demand due to a shift in the asset value. Thus, when the market maker sets the price, he looks at the aggregate demand and shades the price a bit higher if he observes a high demand or a bit lower if he observes a surprisingly low demand schedule, but does not do so on a one for one schedule: 0 < b < 1. By using normally distributed random variables as well as linear pricing and informed demand rules, we can then get nice expressions for the coefficients of interest.

To read this plot, step into the shoes of the market maker and have a look at the blue side of the figure which shows the relationship between the price (i.e., the market maker’s expectation of the value on the y-axis) and the aggregate demand (x-axis). The line p = a + b \cdot x shows the price you will set if you observe an aggregate demand of x. Note that if x = 0, you will set a price of a–the y-intercept.

Where does this pricing rule come from? When you observe the aggregate demand of x, your best guess for the informed demand schedule is z as \mathbb{E} [ \varepsilon ] = 0. This best guess is displayed in the plot by the mapping through the z = u + v \cdot x line on the green floor of the figure over to the z-axis. On the red wall of the figure, we see that this choice of z has to map over to a realized value v that is a linear function of z and is equal to your choice of p. This is the double projection and fixed point problem that pins down the equilibrium values. For instance, note that at z=0, it must be the case that both the v and p functionals cross the y-axis at the same place as \mathbb{E}[\varepsilon]=0.

