Research Notebook

The Buckingham Pi Theorem

January 20, 2012 by Alex

1. Introduction

In this post I outline the Buckingham \pi Theorem, which shows how to use dimensional analysis to answer seemingly intractable physical problems. For instance, in 1950 Geoffrey Taylor used the theorem to estimate the energy released by the 1945 Trinity test atomic explosion in New Mexico simply by studying slow motion film of the blast. My main source for this post is Bluman and Kumei (1989).

Frame from slow motion footage of the Trinity nuclear test with a distance measurement showing the blast radius as well as a time measure showing seconds elapsed since detonation.

2. Basic Framework

The Buckingham \pi Theorem concerns physical problems with the following form: There is a variable of interest, y, which is some unknown function of N different physical quantities x_1, x_2, \ldots, x_N.

(1)   \begin{align*} y &= f(x_1, x_2, \ldots, x_N) \end{align*}

Each of these physical quantities is composed of measurements in only M \leq (N-1) fundamental dimensions labeled c_1, c_2, \ldots, c_M. Thus, I can define a dimension operator which gives the dimensions of an arbitrary variable z \in \{y, x_1, x_2, \ldots, x_N\} and write its output as:

(2)   \begin{align*} \mathtt{dim}[z] &= \prod_{m=1}^{M} c_m^{a_m} \end{align*}

So, for example, if z is measuring pressure on the surface of a table, I could write \mathtt{dim}[z] = \mathtt{lb}/\mathtt{in}^2 where c_1 = \mathtt{lb}, c_2 = \mathtt{in}, a_1 = 1 and a_2 = -2.
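To make the bookkeeping concrete, here is a small Python sketch (my own helper, not from the original post) showing that dimensions multiply by adding their exponent vectors, which is the fact the matrix formulation below relies on:

```python
from collections import Counter

def dim_product(*dims):
    """dim[z1 * z2]: dimensions multiply by adding their exponent vectors."""
    out = Counter()
    for d in dims:
        out.update(d)  # Counter.update adds counts, i.e. adds exponents
    return {c: a for c, a in out.items() if a != 0}

pressure = {"lb": 1, "in": -2}        # dim[z] = lb / in^2
area = {"in": 2}                      # in^2
force = dim_product(pressure, area)   # the in's cancel, leaving {"lb": 1}
```

A quantity is dimensionless exactly when this exponent dictionary comes back empty.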

Definition (Dimensionless Quantity):
An arbitrary variable z \in \{y, x_1, x_2, \ldots, x_N\} is dimensionless if:

(3)   \begin{align*} \mathtt{dim}[z] &= 1 \end{align*}

3. Main Result

Now, I show how to reformulate this problem and apply linear algebra to the dimensional exponents to derive a characterization of the solution to f(x_1,x_2,\ldots,x_N) - y = 0 as a function of dimensionless quantities. First, I define A as an M \times N matrix of dimensional exponents for x_1,x_2,\ldots,x_N and B as the M \times 1 vector of dimensional exponents for y:

(4)   \begin{align*} A &= \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,N} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M,1} & a_{M,2} & \cdots & a_{M,N} \end{bmatrix}, \quad B = \begin{bmatrix} b_1 & b_2 & \cdots &  b_M  \end{bmatrix}^{\top} \end{align*}

Next, we know that if A has full rank, then there will be K = N - M linearly independent solutions to the system of equations 0 = AU. Define U as the N \times K matrix whose columns are these solutions.

(5)   \begin{align*} U &= \begin{bmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,K} \\ u_{2,1} & u_{2,2} & \cdots & u_{2,K} \\ \vdots & \vdots & \ddots & \vdots \\  u_{N,1} & u_{N,2} & \cdots & u_{N,K} \end{bmatrix} \end{align*}

Finally, define V as a solution to the system of equations 0 = AV + B:

(6)   \begin{align*} V &= \begin{bmatrix} v_1 & v_2 & \cdots &  v_N  \end{bmatrix}^{\top} \end{align*}

With these objects in hand, I can now state the Buckingham \pi Theorem.

Proposition (Buckingham \pi):
A physical system y = f(x_1,x_2,\ldots,x_N) with M fundamental dimensions can be restated as:

(7)   \begin{align*} y &= \frac{g(\pi_1,\pi_2,\cdots,\pi_K)}{x_1^{v_1} \cdot x_2^{v_2} \cdots x_N^{v_N}}, \end{align*}

where K = N - M, g(\cdot) is an unknown function, and \{\pi_1,\pi_2,\cdots,\pi_K\} are dimensionless parameters constructed from the physical parameters \{x_1,x_2,\ldots,x_N\} using the columns of U via equations of the form below:

(8)   \begin{align*} \pi_k &= x_1^{u_{1,k}} \cdot x_2^{u_{2,k}} \cdots x_N^{u_{N,k}} \end{align*}

4. An Example

I now give an example of how to employ this theorem by working out the nuclear payload example from the introduction. For more information on this example, take a look at this blog post. Suppose that an atomic blast has a shock wave radius of \delta = f(\epsilon,\tau,\rho,\phi) with variables:

  1. \epsilon: Energy released by the explosion,
  2. \tau: Time elapsed since the explosion took place,
  3. \rho: Initial density, and
  4. \phi: Initial pressure.

For this problem N = 4 and M=3 with fundamental dimensions of length l, mass m, and time t yielding the dimensional matrix A written below:

(9)   \begin{align*} A &= \begin{bmatrix} 2 & 0 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ -2 & 1 & 0 & -2 \end{bmatrix} \end{align*}

The columns of A record the dimensional exponents of each variable; for example, the first column says that energy has dimensions \mathtt{dim}[\epsilon] = l^2 \cdot m / t^2. Since K = N - M = 1, there is only 1 dimensionless constant, which can be computed as a solution to the system of equations 0 = AU:

(10)   \begin{align*} \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} &= \begin{bmatrix} 2 & 0 & -3 & -1 \\ 1 & 0 & 1 & 1 \\ -2 & 1 & 0 & -2 \end{bmatrix} \begin{bmatrix} - \alpha \cdot 2 \\ \alpha \cdot 6 \\ - \alpha \cdot 3 \\ \alpha \cdot 5 \end{bmatrix}  \end{align*}

Setting the scalar free parameter \alpha = 1, I can write \pi_1 as:

(11)   \begin{align*} \pi_1 &= \epsilon^{-2} \cdot \tau^{6} \cdot \rho^{-3} \cdot \phi^{5} \end{align*}
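As a quick numerical check (a numpy sketch of my own, not part of the original post), the exponent vector (-2, 6, -3, 5) does lie in the null space of A:

```python
import numpy as np

# Dimensional matrix over (l, m, t) for (epsilon, tau, rho, phi), Equation (9).
A = np.array([[ 2, 0, -3, -1],
              [ 1, 0,  1,  1],
              [-2, 1,  0, -2]])

u = np.array([-2, 6, -3, 5])   # exponents of pi_1 with alpha = 1
residual = A @ u               # all zeros, so pi_1 is dimensionless
```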

The shock wave radius \delta is measured in units of length, yielding the dimensional exponent vector B below:

(12)   \begin{align*} B &= \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^{\top} \end{align*}

The system of 3 equations 0 = AV + B has 4 unknowns:

(13)   \begin{align*} 0 &= 1 + 2 \cdot v_1 - 3 \cdot v_3 - v_4 \\ 0 &= v_1 + v_3 + v_4 \\ 0 &= -2 \cdot v_1 + v_2 - 2 \cdot v_4 \end{align*}

Thus, the vector V will be defined up to a single free parameter \hat{\alpha}:

(14)   \begin{align*} V &= \begin{bmatrix} -(1 + 2 \cdot \hat{\alpha})/5 \\ 2 \cdot (3 \cdot \hat{\alpha} - 1) / 5 \\ (1 - 3 \cdot \hat{\alpha}) / 5 \\ \hat{\alpha} \end{bmatrix} \end{align*}

Using the formula \pi = \delta \cdot \epsilon^{v_1} \cdot \tau^{v_2} \cdot \rho^{v_3} \cdot \phi^{v_4} and setting \hat{\alpha} = 0, I can compute \pi as:

(15)   \begin{align*} \pi &= \delta \cdot \left[ \frac{\epsilon \cdot \tau^2}{\rho} \right]^{-1/5}  \end{align*}
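The same computation can be sketched numerically (again my own numpy code, not from the post): pin the free parameter \hat{\alpha} = v_4 = 0 and solve the remaining 3 \times 3 system:

```python
import numpy as np

A = np.array([[ 2.0, 0.0, -3.0, -1.0],
              [ 1.0, 0.0,  1.0,  1.0],
              [-2.0, 1.0,  0.0, -2.0]])
B = np.array([1.0, 0.0, 0.0])

# With v4 = alpha_hat = 0, the system A V = -B reduces to a 3x3 solve
# for (v1, v2, v3) = (-1/5, -2/5, 1/5).
V = np.append(np.linalg.solve(A[:, :3], -B), 0.0)
```

These are exactly the exponents behind \pi = \delta \cdot (\epsilon \cdot \tau^2 / \rho)^{-1/5}.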

Combining all of these results yields a formulation for \delta in terms of the scale factor (\epsilon \cdot \tau^2/\rho)^{1/5} and an unknown function g of the dimensionless quantity \pi_1:

(16)   \begin{align*} \delta &= \left[ \frac{\epsilon \cdot \tau^2}{\rho} \right]^{1/5} \cdot g \left( \pi_1 \right) \end{align*}

Taylor expanding g around 0, where g(0) \neq 0, yields a formulation where \delta \propto \tau^{2/5} with scaling constant c given by:

(17)   \begin{align*} c &= \left( \frac{\epsilon}{\rho} \right)^{1/5} \cdot g(0) \end{align*}

Setting g(0) = 1, the \log \times \log plot of \delta vs. \tau yields an accurate fit where the predicted values fall on the solid line and the (declassified) empirically observed values are denoted by +'s in the plot below:

Filed Under: Uncategorized

Notes: Glosten and Milgrom (1985)

December 12, 2011 by Alex

1. Introduction

In this post, I replicate the main results from Glosten and Milgrom (1985) using the setup outlined in Back and Baruch (2003). I begin in Section 2 by laying out the continuous time asset pricing framework. I consider the behavior of an informed trader who trades a single risky asset with a market maker that is constrained by perfect competition. Then, in Section 3 I solve for the optimal trading strategy of the informed agent as a system of first order conditions and boundary constraints. Finally, I show how to numerically compute comparative statics for this model.

2. Asset Pricing Framework

There is a single risky asset which pays out v \in \{0,1\} at a random date \tau > 0. There is an informed trader and a stream of uninformed traders who arrive with Poisson intensity \beta. All traders have a fixed order size of \delta  = 1. The model end date \tau is distributed exponentially with intensity \kappa.

Let z_{t-} denote the net position of the noise traders up to but not including date t and let x_{t-} denote the net position of the informed up to but not including date t. The market maker sees an anonymous order flow at each time t of dy_t = dx_t + dz_t so that \{y_s \mid s \leq t\} generates a \sigma-field \mathcal{F}_t^y which represents the market maker’s information set.

Perfect competition dictates that the market maker sets the price of the risky asset p_t = \mathbb{E}\left[ \ v \ \middle| \ \mathcal{F}_t^y \ \right]. Let b_t and a_t denote the bid and ask prices at time t.

(1)   \begin{align*} a_t &= \mathbb{E} \left[ \ v \ \middle| \ \mathcal{F}_{t-}^y, \ dy_t = 1 \ \right] \\ b_t &= \mathbb{E} \left[ \ v \ \middle| \ \mathcal{F}_{t-}^y, \ dy_t = -1 \ \right] \end{align*}

Let p_{t-} be the left limit of the price p at time t. Given that v \in \{0,1\}, we can interpret p_{t-} as the probability of the event v=1 at time t given the information set \mathcal{F}_{t-}^y. The informed trader chooses a trading strategy \{dx_t\}_{t \leq \tau} in order to maximize his end of game wealth at random date \tau with 0 discount rate. Let dx_t^+ = \max\{0,dx_t\} and let dx_t^- = \min\{0,dx_t\}.

(2)   \begin{align*} w &= \max_{\{dx_t\}_{t \leq \tau}} \left\{ \mathbb{E} \left[ \ \int_0^\tau \left( v - a_t \right) \cdot dx_t^+ + \int_0^\tau \left( b_t - v \right) \cdot dx_t^- \ \middle| \ v \ \right] \right\} \end{align*}

In order to guarantee a solution to the optimization problem posed above, I restrict the domain of potential trading strategies to those that generate finite end of game wealth.

(3)   \begin{align*} \infty > \max_{\{dx_t\}_{t \leq \tau}} \left\{ \mathbb{E} \left[ \ \int_0^\tau \left( v - a_t \right) \cdot dx_t^+ \ \middle| \ v = 1 \ \right] \right\} \\ \infty > \max_{\{dx_t\}_{t \leq \tau}} \left\{ \mathbb{E} \left[ \ \int_0^\tau \left( b_t - v \right) \cdot dx_t^- \ \middle| \ v = 0 \ \right] \right\} \end{align*}

I then look for probabilistic trading intensities which make the net position of the informed trader a martingale.

Definition: At each time t \leq \tau, an equilibrium consists of a pair of bid and ask prices,

(4)   \begin{align*} \begin{bmatrix} b & a \end{bmatrix}_{s \leq t} \end{align*}

as well as a vector of trading intensities,

(5)   \begin{align*} \Theta &= \begin{bmatrix} \theta_{H,B} & \theta_{H,S} & \theta_{L,B} & \theta_{L,S} \end{bmatrix}_{s \leq t} \end{align*}

such that the prices equal the conditional expectation of the asset value relative to \mathcal{F}_{t-}^y given a sell or buy order and the trading intensities solve the informed agent’s objective function, satisfy the finiteness conditions and are martingales relative to the informed trader’s information set \mathcal{F}_{t-}^y:

(6)   \begin{align*} 0 &= \mathbb{E} \left[ \ x_t^+ - v \cdot \int_0^t \theta_{H,B} \left( p_{s-} \right) \cdot ds - \left( 1 - v \right) \cdot \int_0^t \theta_{L,B} \left( p_{s-}\right) \cdot ds \ \middle| \ \mathcal{F}_{t-}^y \ \right] \\ 0 &= \mathbb{E} \left[ \ x_t^- - v \cdot \int_0^t \theta_{H,S} \left( p_{s-} \right) \cdot ds - \left( 1 - v \right) \cdot \int_0^t \theta_{L,S} \left( p_{s-}\right) \cdot ds \ \middle| \ \mathcal{F}_{t-}^y \ \right] \end{align*}

In the definition above, the \{H,L\} and \{B,S\} subscripts denote the realized value and trade directions for the informed traders. So, for example, \theta_{H,B} denotes the trading intensity at some time t in the buy direction of an informed trader who knows that the value of the asset is v=1.

3. Optimal Trading Strategies

I now characterize the equilibrium trading intensities of the informed traders. First, observe that since \tau is distributed exponentially, the only relevant state variable is p_{t-} at time t. Thus, in the equations below, I drop the time dependence wherever it causes no confusion.

Since the bid and ask prices are conditional expectations, we can compute their values using Bayes’ rule.

(7)   \begin{align*} a(p) &= \left( \frac{ p \cdot \theta_{H,B} ( p ) }{p \cdot \theta_{H,B}(p) + ( 1 - p ) \cdot \theta_{L,B} (p) + \beta} \right) \cdot 1 \\ &\qquad \qquad + \left( \frac{( 1 - p ) \cdot \theta_{L,B}(p)}{p \cdot \theta_{H,B}(p) + ( 1 - p ) \cdot \theta_{L,B} (p) + \beta} \right) \cdot 0 \\ &\qquad \qquad \qquad \qquad + \left( \frac{\beta}{p \cdot \theta_{H,B} \left( p \right) + ( 1 - p ) \cdot \theta_{L,B} (p) + \beta} \right) \cdot p \\ &= \frac{ p \cdot \theta_{H,B} ( p ) + p \cdot \beta}{p \cdot \theta_{H,B} ( p ) + ( 1 - p ) \cdot \theta_{L,B} ( p ) + \beta} \\ b(p) &= \frac{p \cdot \theta_{H,S} ( p ) + p \cdot \beta}{p \cdot \theta_{H,S} ( p ) + ( 1 - p ) \cdot \theta_{L,S} (p ) + \beta} \end{align*}
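These Bayes' rule expressions are straightforward to sketch in code (Python, with my own hypothetical helper names; the trading intensities passed in are placeholder inputs, not equilibrium values):

```python
def ask(p, theta_HB, theta_LB, beta):
    """Bayes-rule ask price: expected value of v conditional on a buy order."""
    denom = p * theta_HB + (1 - p) * theta_LB + beta
    return (p * theta_HB + p * beta) / denom

def bid(p, theta_HS, theta_LS, beta):
    """Bayes-rule bid price: expected value of v conditional on a sell order."""
    denom = p * theta_HS + (1 - p) * theta_LS + beta
    return (p * theta_HS + p * beta) / denom

# With only the high type buying, a buy order is informative and the ask
# rises above the prior: p = 0.5 maps to an ask of 0.75 here.
ask(0.5, 1.0, 0.0, 0.5)   # 0.75
bid(0.5, 0.0, 1.0, 0.5)   # 0.25
```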

I now want to derive a set of first order conditions regarding the optimal decisions of high and low type informed agents as functions of these bid and ask prices which can be used to pin down the equilibrium vector of 4 trading intensities. Let w_H(p) and w_L(p) denote the value functions of the high and low type informed traders respectively.

Condition 1: No arbitrage implies that a(p) > p > b(p) for all p \in (0,1) with a(0) = 0 = b(0) and a(1) = 1 = b(1) since:

(8)   \begin{align*} \mathbb{E} \left[ \ v \ \middle| \ \mathcal{F}_{t-}^y, \ dy_t = 1 \ \right] \geq  \mathbb{E} \left[ \ v \ \middle| \ \mathcal{F}_{t-}^y \  \right] \geq  \mathbb{E} \left[ \ v \ \middle| \ \mathcal{F}_{t-}^y, \ dy_t = -1 \ \right] \end{align*}

Thus, for all p \in (0,1) it must be that \theta_{H,B}(p) > \theta_{L,B}(p) > 0 and \theta_{L,S}(p) > \theta_{H,S}(p) > 0. What’s more, p=1 and p=0 are absorbing points for p meaning that w_L(0) = w_H(1) = 0 while w_L(1) = w_H(0) = \infty.

Condition 2: Around a buy or sell order, the price moves by jumping from p \nearrow a(p) or from p \searrow b(p), so we can think about the stochastic process dp as composed of a deterministic drift component \mu(p) and 2 jump components with magnitudes \{a(p) - p\} and \{b(p) - p\}.

(9)   \begin{align*} \mathbb{E}\left[ dp \right] &= \mu(p) \cdot dt + \lambda_a \cdot \left\{ a(p) - p \right\} \cdot dt + \lambda_b \cdot \left\{ b(p) - p \right\} \cdot dt \end{align*}

The jump arrival rates \lambda_a and \lambda_b are the total intensities of buy and sell orders, summing the informed and uninformed flows.

(10)   \begin{align*} \lambda_a &= p \cdot \theta_{H,B}(p) + (1 - p) \cdot \theta_{L,B}(p) + \beta \\ \lambda_b &= p \cdot \theta_{H,S}(p) + (1 - p) \cdot \theta_{L,S}(p) + \beta \end{align*}

Substituting in the formulas for a(p) and b(p) from above yields an expression for the price change that is purely in terms of the trading intensities and the price.

(11)   \begin{align*} \mathbb{E}\left[dp\right] &= \mu(p) \cdot dt + p \cdot \left( 1 - p \right) \cdot \left( \theta_{H,B}(p) + \theta_{H,S}(p) - \theta_{L,B}(p) - \theta_{L,S}(p) \right) \cdot dt \end{align*}

However, via the conditional expectation price setting rule, dp must be a martingale meaning that \mathbb{E}[dp] = 0.

(12)   \begin{align*} \mu(p) &= p \cdot \left( 1 - p \right) \cdot \left( \theta_{L,B}(p) + \theta_{L,S}(p) - \theta_{H,B}(p) - \theta_{H,S}(p) \right) \end{align*}

Condition 3: At the time of a buy or sell order, smooth pasting implies that the informed trader was indifferent between placing the order or not. For instance, if he strictly preferred to place the order, he would have done so earlier via the continuity of the price process.

(13)   \begin{align*} w_H(p) &= \left(  1 - a(p) \right) + w_H( a(p) ) \\ w_L(p) &= b(p) + w_L ( b(p) ) \end{align*}

Condition 4: It is not optimal for the informed traders to bluff; i.e., a high type informed trader can never increase his value function by selling at time t, and vice versa for a low type informed trader.

(14)   \begin{align*} w_H(p) &\geq \left(  b(p) - 1 \right) + w_H ( b(p) ) \\ w_L(p) &\geq -a(p) + w_L ( a(p) ) \end{align*}

Condition 5: In all time periods in which the informed trader does not trade, smooth pasting implies that he must be indifferent between trading and delaying an instant dt. There are 2 forces at work here. In each instant dt, there is a \kappa \cdot dt probability that \tau will arrive and the informed trader's value function will drop to 0. This cost has to be offset by the value of delaying. For the high type informed trader, this value includes the value change due to the price drift, (dw_H/dp) \cdot \mu(p), the value change due to an uninformed trader placing a buy order with intensity \beta, and the value change due to an uninformed trader placing a sell order with intensity \beta. Similar reasoning yields a symmetric condition for low type informed traders.

(15)   \begin{align*} \kappa \cdot w_H(p) &= w_H'(p) \cdot \mu(p) + \beta \cdot \left( w_H( a(p)) - w_H(p) \right) + \beta \cdot \left( w_H( b(p) ) - w_H(p) \right) \\ \kappa \cdot w_L(p) &= w_L'(p) \cdot \mu(p) + \beta \cdot \left( w_L( a(p) ) - w_L(p) \right) + \beta \cdot \left( w_L( b(p) ) - w_L(p) \right) \end{align*}

This combination of 5 conditions pins down the equilibrium.

Proposition: If the trading strategies are admissible, w_H is a non-increasing function of p, w_L is a non-decreasing function of p, both value functions satisfy the 5 conditions above, and the trading strategies are continuously differentiable on the interval (0,1), then the trading strategies are optimal for all t.

In the section below, I solve for the equilibrium trading intensities and prices numerically.

4. Numerical Solution

In the results below, I set \beta = 1/2 and \kappa = 1 for simplicity. I compute the value functions w_H and w_L as well as the optimal trading strategies \Theta on a grid over the unit interval with N nodes. Let \mathbf{P} denote the vector of N prices.

(16)   \begin{align*} \mathbf{P} = \begin{bmatrix} p_1 & p_2 & \cdots & p_N \end{bmatrix} \end{align*}

Let \mathbf{W}_H(\mathbf{P};\mathtt{i}) and \mathbf{W}_L(\mathbf{P};\mathtt{i}) denote the vectors of value function levels over each point in the price grid \mathbf{P} after iteration \mathtt{i}. I use the teletype style \mathtt{i} to denote the iteration count in the optimization algorithm, so w_H(p_n;\mathtt{i}) denotes the level of the value function at price point p_n after \mathtt{i} iterations.

(17)   \begin{align*} \mathbf{W}_H(\mathbf{P};\mathtt{i}) &= \begin{bmatrix} w_H(p_1;\mathtt{i}) & w_H(p_2;\mathtt{i}) & \cdots & w_H(p_N;\mathtt{i}) \end{bmatrix} \\ \mathbf{W}_L(\mathbf{P};\mathtt{i}) &= \begin{bmatrix} w_L(p_1;\mathtt{i}) & w_L(p_2;\mathtt{i}) & \cdots & w_L(p_N;\mathtt{i}) \end{bmatrix} \end{align*}

The algorithm below computes w_H(p), w_L(p), a(p) and b(p). The equilibrium trading intensities can be derived from these values analytically. I seed initial guesses at the values of \mathbf{W}_H(\mathbf{P};\mathtt{0}) and \mathbf{W}_L(\mathbf{P};\mathtt{0}).

(18)   \begin{align*} w_H(p_n;\mathtt{0}) &= e^{10 \cdot (1 - p_n)} - 1 \\ w_L(p_n;\mathtt{0}) &= e^{10 \cdot p_n} - 1 \end{align*}

Then, I iterate on these value function guesses until the adjustment error \Gamma(\mathtt{i}) which I define in Step 5 below is sufficiently small. The estimation strategy uses the fixed point problem in Equation (13) to compute a(p) and b(p) given w_H(p) and w_L(p) and then separately uses the martingale condition in Equation (9) to compute the drift in the price level. The algorithm updates the value function in each step by first computing how badly the no trade indifference condition in Equation (15) is violated, and then lowering the values of w_H(p) for p near 1 when the high type informed trader is too eager to trade and raising them when he is too apathetic about trading and vice versa for the low type trader. Along the way, the algorithm checks that neither informed trader type has an incentive to bluff.

x-axis: Price of risky asset. Panel (a): Value function for the high (red) and low (blue) type informed trader. Panel (b): Bid (red) and ask (blue) prices for the risky asset. Panel (c): Between trade price drift.

Below I outline the estimation procedure in complete detail. Code for the simulation can be found on my GitHub site.

\mathtt{while} (\Gamma(\mathtt{i}) > \mathtt{tol}) \ \{

Step 1. Numerically compute w_H'(p_n;\mathtt{i}) and w_L'(p_n;\mathtt{i}) at each point.

(19)   \begin{align*} w_H'(p_n;\mathtt{i}) &= \frac{1}{2} \cdot \left( \frac{w_H(p_{n+1};\mathtt{i}) - w_H(p_n;\mathtt{i})}{p_{n+1} - p_n} + \frac{w_H(p_n;\mathtt{i}) - w_H(p_{n-1};\mathtt{i})}{p_n - p_{n-1}} \right) \\ w_L'(p_n;\mathtt{i}) &= \frac{1}{2} \cdot \left( \frac{w_L(p_{n+1};\mathtt{i}) - w_L(p_n;\mathtt{i})}{p_{n+1} - p_n} + \frac{w_L(p_n;\mathtt{i}) - w_L(p_{n-1};\mathtt{i})}{p_n - p_{n-1}} \right) \end{align*}

I fill in each of the boundary derivatives manually.

(20)   \begin{align*} w_H'(0;\mathtt{i}) &= w_L'(1;\mathtt{i}) = \infty \\ w_H'(1;\mathtt{i}) &= w_L'(0;\mathtt{i}) = 0 \end{align*}
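Step 1 can be sketched as follows (numpy, with my own function name; the boundary nodes are deliberately left for the manual fill-in of Equation (20)):

```python
import numpy as np

def central_diff(w, p):
    """Average the forward and backward difference quotients at interior nodes."""
    dw = np.empty_like(w)
    fwd = (w[2:] - w[1:-1]) / (p[2:] - p[1:-1])
    bwd = (w[1:-1] - w[:-2]) / (p[1:-1] - p[:-2])
    dw[1:-1] = 0.5 * (fwd + bwd)
    dw[0] = dw[-1] = np.nan   # boundary derivatives are set manually, Equation (20)
    return dw

grid = np.linspace(0.0, 1.0, 101)
slope = central_diff(grid**2, grid)   # derivative of p^2 is 2p at interior nodes
```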

Step 2. Solve for bid and ask prices using Equation (13).

(21)   \begin{align*} a(p_n;\mathtt{i}) &= \arg_a \left\{ w_H(p_n;\mathtt{i}) = \left(1 - a\right) + w_H(a;\mathtt{i})  \right\} \\ b(p_n;\mathtt{i}) &= \arg_b \left\{ w_L(p_n;\mathtt{i}) = b + w_L(b;\mathtt{i})  \right\} \end{align*}

I interpolate the value function levels at w_H(a;\mathtt{i}) and w_L(b;\mathtt{i}) linearly. Let p_n be the closest price level to a such that a > p_n and let p_m be the closest price level to b such that p_m > b.

(22)   \begin{align*} w_H(a;\mathtt{i}) &= w_H(p_n;\mathtt{i}) + w_H'(p_n;\mathtt{i}) \cdot \left\{ a - p_n \right\} \\ w_L(b;\mathtt{i}) &= w_L(p_m;\mathtt{i}) + w_L'(p_m;\mathtt{i}) \cdot \left\{ b - p_m \right\} \end{align*}
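A minimal sketch of this linear interpolation (my own helper; as a simplification it recomputes a local secant slope rather than reusing the precomputed derivative):

```python
import numpy as np

def interp_value(w, p, x):
    """Linearly interpolate the value function w (on the grid p) at an off-grid price x."""
    j = min(max(np.searchsorted(p, x) - 1, 0), len(p) - 2)  # grid point just below x
    slope = (w[j + 1] - w[j]) / (p[j + 1] - p[j])
    return w[j] + slope * (x - p[j])

grid = np.array([0.0, 0.5, 1.0])
levels = np.array([0.0, 1.0, 4.0])
interp_value(levels, grid, 0.25)   # halfway up the first segment: 0.5
```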

Step 3. Compute \mu(p_n;\mathtt{i}) using Equation (9).

(23)   \begin{align*} 0 &= \mu(p_n;\mathtt{i}) + \lambda_a(p_n;\mathtt{i}) \cdot \left\{ a(p_n;\mathtt{i}) - p_n \right\} + \lambda_b(p_n;\mathtt{i}) \cdot \left\{ b(p_n;\mathtt{i}) - p_n \right\} \end{align*}

I then plug in Equation (10) to compute \lambda_a(p_n;\mathtt{i}) and \lambda_b(p_n;\mathtt{i}), imposing that the informed traders do not bluff in equilibrium so that \theta_{L,B} = \theta_{H,S} = 0.

(24)   \begin{align*} \lambda_a(p_n;\mathtt{i}) &= p_n \cdot \theta_{H,B}(p_n;\mathtt{i}) + \beta \\ \lambda_b(p_n;\mathtt{i}) &= (1 - p_n) \cdot \theta_{L,S}(p_n;\mathtt{i}) + \beta \end{align*}

I then use Equation (7) to solve for \theta_{H,B}(p_n;\mathtt{i}) and \theta_{L,S}(p_n;\mathtt{i}) in terms of only prices.

(25)   \begin{align*} \theta_{H,B} ( p_n;\mathtt{i})  &= \frac{\beta \cdot \left\{ p_n - a(p_n;\mathtt{i}) \right\}}{p_n \cdot \left( a(p_n;\mathtt{i}) - 1 \right)} \\ \theta_{L,S} (p_n;\mathtt{i}) &= \frac{\beta \cdot \left\{ p_n - b(p_n;\mathtt{i}) \right\}}{( 1 - p_n ) \cdot b(p_n;\mathtt{i})} \end{align*}

Combining these equations leaves a formulation for \mu(p_n;\mathtt{i}) which contains only prices.

(26)   \begin{align*} \mu(p_n;\mathtt{i}) &= \left( (1 - p_n) \cdot \theta_{L,S}(p_n;\mathtt{i}) + \beta \right) \cdot \left\{ p_n - b(p_n;\mathtt{i}) \right\} \\ &\qquad \qquad - \left( p_n \cdot \theta_{H,B}(p_n;\mathtt{i}) + \beta \right) \cdot \left\{ a(p_n;\mathtt{i}) - p_n \right\} \\ &= \left( \frac{\beta \cdot \left\{ p_n - b(p_n;\mathtt{i}) \right\}}{b(p_n;\mathtt{i})} + \beta \right) \cdot \left\{ p_n - b(p_n;\mathtt{i}) \right\} \\ &\qquad \qquad - \left(\frac{\beta \cdot \left\{ p_n - a(p_n;\mathtt{i}) \right\}}{a(p_n;\mathtt{i}) - 1} + \beta \right) \cdot \left\{ a(p_n;\mathtt{i}) - p_n \right\} \\ &= \frac{\beta \cdot p_n \cdot \left\{ p_n - b(p_n;\mathtt{i}) \right\}}{b(p_n;\mathtt{i})} - \frac{\beta \cdot \left( 1 - p_n \right) \cdot \left\{ a(p_n;\mathtt{i}) - p_n \right\}}{1 - a(p_n;\mathtt{i})} \end{align*}
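The final expression for the drift is simple enough to sketch directly (Python, my own hypothetical helper name):

```python
def drift(p, a, b, beta):
    """Between-trade price drift mu(p) in terms of bid and ask only, Equation (26)."""
    return beta * p * (p - b) / b - beta * (1 - p) * (a - p) / (1 - a)

# With a bid/ask pair placed symmetrically around p, the two terms cancel
# and the drift is zero; a wide ask relative to the bid pulls it negative.
drift(0.5, 0.6, 0.4, 0.5)
```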

Step 4. At each p_n for n \in \{1,\ldots,N\}, set \alpha = 0.10 and ensure that Equation (14) is satisfied. If the high type informed traders want to sell at price p_n, scale up the drift \mu(p_n;\mathtt{i}) by \alpha = 10\%.

(27)   \begin{align*} &\mathtt{if} \Big[ w_H(p_n;\mathtt{i}) < \left(  b(p_n;\mathtt{i}) - 1 \right) + w_H \left( b(p_n;\mathtt{i});\mathtt{i}\right) \Big] \ \{ \\ &\qquad \qquad \mu(p_n;\mathtt{i}) = \left( 1 + \alpha \right) \cdot \mu(p_n;\mathtt{i}) \\ &\} \end{align*}

If the low type informed traders want to buy at price p_n, scale down the drift \mu(p_n;\mathtt{i}) by \alpha = 10\%.

(28)   \begin{align*} &\mathtt{if} \Big[ w_L(p_n;\mathtt{i}) < -a(p_n;\mathtt{i}) + w_L \left( a(p_n;\mathtt{i});\mathtt{i} \right) \Big] \ \{ \\ &\qquad \qquad \mu(p_n;\mathtt{i}) = \left( 1 - \alpha \right) \cdot \mu(p_n;\mathtt{i}) \\ &\} \end{align*}

Step 5. Update w_H(p_n;\mathtt{i}) and w_L(p_n;\mathtt{i}) by adding \varsigma = 5\% times the between trade indifference error from Equation (15).

(29)   \begin{align*} w_H(p_n;\mathtt{i+1}) &= w_H(p_n;\mathtt{i}) + \varsigma \cdot \left\{w_H'(p_n;\mathtt{i}) \cdot \mu(p_n;\mathtt{i})  - \kappa \cdot w_H(p_n;\mathtt{i}) \right\} \\ &\qquad \qquad + \varsigma \cdot \beta \cdot \left\{ w_H\left( a(p_n;\mathtt{i});\mathtt{i} \right) + w_H\left( b(p_n;\mathtt{i});\mathtt{i} \right) - 2 \cdot w_H(p_n;\mathtt{i}) \right\} \\ w_L(p_n;\mathtt{i+1}) &= w_L(p_n;\mathtt{i}) + \varsigma \cdot \left\{w_L'(p_n;\mathtt{i}) \cdot \mu(p_n;\mathtt{i})  - \kappa \cdot w_L(p_n;\mathtt{i}) \right\} \\ &\qquad \qquad + \varsigma \cdot \beta \cdot \left\{ w_L\left( a(p_n;\mathtt{i});\mathtt{i} \right) + w_L\left( b(p_n;\mathtt{i});\mathtt{i} \right) - 2 \cdot w_L(p_n;\mathtt{i}) \right\} \end{align*}

Step 6. Evaluate update error.

(30)   \begin{align*} \Gamma(\mathtt{i}) &= \sqrt{ \frac{1}{N} \cdot \left\{ \sum_{n=1}^N \left(w_L(p_n;\mathtt{i+1}) - w_L(p_n;\mathtt{i}) \right)^2 + \sum_{n=1}^N \left(w_H(p_n;\mathtt{i+1}) - w_H(p_n;\mathtt{i}) \right)^2 \right\} } \end{align*}

\}
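For completeness, the update error from Step 6 that controls the while loop can be sketched as (numpy, my own function name):

```python
import numpy as np

def update_error(wH_new, wH_old, wL_new, wL_old):
    """Root mean squared value-function adjustment, Equation (30)."""
    n = len(wH_new)
    total = np.sum((wL_new - wL_old) ** 2) + np.sum((wH_new - wH_old) ** 2)
    return np.sqrt(total / n)

# A unit change at every node of both grids gives an error of sqrt(2).
gamma = update_error(np.ones(2), np.zeros(2), np.ones(2), np.zeros(2))
```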


Protected: Notes: Levy (2010)

November 30, 2011 by Alex



CRSP Data Summary Statistics by Industry

November 21, 2011 by Alex

1. Introduction

In this post, I compute industry level summary statistics for the CRSP monthly file using 2 different industry classification schemes:

  1. Fama and French (1988)
  2. Moskowitz and Grinblatt (1999)

All of the code for the results below as well as a JSON file containing the industry classification schemes can be found at my GitHub page. I use the Zoom.it API to make it convenient to scroll around and inspect the large summary statistic plots I create. Each of these plots can be expanded to full screen mode using the controls at the lower right hand corner of the figure.

2. Data

In this section, I describe my data sources for the plots below.

CRSP Monthly File

I gather my stock data from the CRSP monthly file via the WRDS database. Thus, the unit of observation is a firm \times month pair. I restrict my attention to the time period from January 1988 to December 2010 to focus on the period of time over which the Fama and French (1988) industry classification scheme would have been widely known. I keep only actively traded firms listed on the NYSE, NASDAQ and AMEX exchanges. I require that the firm reports a non-missing price, return, share count and SIC code for a given month. I also remove any observations which lack valid data in the previous month. This leaves me with 1,916,707 total firm \times month observations covering 20,686 firms. The figure below plots the total number of firms in the dataset each month.

Number of firms in the monthly CRSP database from January 1988 to December 2010.

Industry Classifications

I created a JSON file to house CRSP-COMPUSTAT industry classification data. The data can be found in various places throughout the web; e.g., see Ken French’s website used in Fama and French (1988). However, everywhere I looked, the data came as a txt file with quirky formatting. For example, below is the first industry coding from the file on Ken French’s site:

 1 Agric  Agriculture
          0100-0199 Agric production - crops
          0200-0299 Agric production - livestock
          0700-0799 Agricultural services
          0910-0919 Commercial fishing
          2048-2048 Prepared feeds for animals
 ...


This format is particularly difficult to read as it is irregularly spaced and has little mark-up around the data. In response to this problem, I used Emacs regular expressions to convert the file on Ken French’s website into a JSON format. I also coded up the 20 firm industry classification used by Moskowitz and Grinblatt (1999). The JSON file contains 2 top-level objects, one for the Fama and French (1988) industry classification scheme using 49 different clusters and one for the Moskowitz and Grinblatt (1999) scheme with 20 different clusters. The industry groupings are based on SIC codes. Below I post a sample entry for the \mathtt{Agriculture} industry from the Fama and French (1988) scheme:

{"Fama and French (1988)": {
    "Agriculture": {
	"Agric production - crops": {"start":100, "end":199},
	"Agric production - livestock": {"start":200, "end":299},
	"Agricultural services": {"start":700, "end":799},
	"Commercial fishing": {"start":910, "end":919},
	"Prepared feeds for animals": {"start":2048, "end":2048}
    },
    ...
}


Note that under the main heading there are several subindustry headings. The \mathtt{start} and \mathtt{end} tags denote the initial and ending SIC codes for each subindustry. The Moskowitz and Grinblatt (1999) scheme is less complex. There is a single start and end SIC code for each of the 20 broad industry groupings:

 "Moskowitz and Grinblatt (1999)": {
     "Mining": {"start":1000, "end":1499},
     "Food": {"start":2000, "end":2099},
     "Apparel": {"start":2200, "end":2399},
     "Paper": {"start":2600, "end":2699},
     "Chemical": {"start":2800, "end":2899},
     "Petroleum": {"start":2900, "end":2999},
     "Construction": {"start":3200, "end":3299},
     "Prim. Metals": {"start":3300, "end":3399},
     "Fab. Metals": {"start":3400, "end":3499},
     "Machinery": {"start":3500, "end":3599},
     "Electrical Eq.": {"start":3600, "end":3699},
     "Transportation Eq.": {"start":3700, "end":3799},
     "Manufacturing": {"start":3800, "end":3999},
     "Railroads": {"start":4000, "end":4099},
     "Other Transport.": {"start":4100, "end":4799},
     "Utilities": {"start":4900, "end":4999},
     "Retail": {"start":5000, "end":5299},
     "Dept. Stores": {"start":5300, "end":5399},
     "Retail": {"start":5400, "end":5999},
     "Financial": {"start":6000, "end":6999}
 }

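The flat entries above make SIC lookups a one-liner (Python; the function name and the truncated sample scheme are my own illustration, not code from the repository):

```python
def classify_sic(sic, scheme):
    """Return the industry whose SIC range contains sic, or None if unclassified."""
    for industry, rng in scheme.items():
        if rng["start"] <= sic <= rng["end"]:
            return industry
    return None

# Truncated copy of the Moskowitz and Grinblatt (1999) entries above.
mg = {"Mining": {"start": 1000, "end": 1499},
      "Food": {"start": 2000, "end": 2099}}
classify_sic(2048, mg)   # "Food" (prepared feeds for animals)
```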

3. Fama and French (1988) Classification

In this section, I plot 4 different summary plots of the CRSP data split by the Fama and French (1988) industry classification. First, I plot the number of firms in each industry. In all of the plots, I omit the “Other” industry containing firms with no clear industry classification. All of the 48 industries except for Candy and Soda, Coal, Non-Metallic and Industrial Mining, Pharmaceutical Products, Precious Metals and Trading display a single peaked pattern, with the number of firms in each industry expanding dramatically up to a peak around 2000 and shrinking thereafter.

Number of firms in the monthly CRSP database from January 1988 to December 2010 by Fama and French (1988) industry classification system.

Next, I break this firm-count-by-industry plot down even further into sub-industries in the figure below. This plot reveals that there is wide variation in the number of sub-industries per industry. What’s more, the single-peaked pattern does not persist as strongly at the sub-industry level.

Number of firms in the monthly CRSP database from January 1988 to December 2010 by Fama and French (1988) industry classification system split by sub-industry.

I then turn to market capitalization by industry rather than firm counts. In the figure below, we see that while the number of firms in most industries has been shrinking since 2000, each industry’s market capitalization has been rising steadily. Thus, the combination of the first figure with the figure below reveals that industries have been consolidating.

Market capitalization in the monthly CRSP database from January 1988 to December 2010 by Fama and French (1988) industry classification system.

Finally, I look at the distribution of monthly excess returns, defined as r_{a,t} - r_{f,t} where r_{f,t} is the 3-month T-Bill rate, by industry. Due to space constraints, it was not possible to plot 1 box plot for each month of observations, so instead I first computed the mean monthly excess return for each firm in each year and then computed yearly box plots. Thus, a data point in the plot below is a mean monthly return for a particular firm over the whole year. This figure reveals that there are large outliers in the return distribution that need to be addressed before any further data work can be done. For instance, in 1992 a firm in the entertainment industry earned an average monthly excess return of over 800\%.
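The firm-by-year aggregation described above can be sketched in a few lines of Python; here I assume the monthly observations arrive as (firm, year, excess return) records (the record layout and function name are illustrative, not the actual CRSP schema):

```python
from collections import defaultdict
from statistics import mean

def yearly_mean_excess_returns(records):
    """records: iterable of (permno, year, monthly_excess_return) tuples.
    Returns {(permno, year): mean monthly excess return over that year},
    i.e., one data point per firm-year as in the box plots above."""
    by_firm_year = defaultdict(list)
    for permno, year, r in records:
        by_firm_year[(permno, year)].append(r)
    return {key: mean(vals) for key, vals in by_firm_year.items()}

# Tiny illustrative sample: two months for firm 1001, one for firm 1002.
records = [(1001, 1992, 0.02), (1001, 1992, 0.04), (1002, 1992, -0.01)]
means = yearly_mean_excess_returns(records)
print(means)
```

Each value in `means` is then one observation in the corresponding year's box plot.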

Mean return by year for firms in the monthly CRSP database from January 1988 to December 2010 by Fama and French (1988) industry classification system.

4. Moskowitz and Grinblatt (1999) Classification

I also create similar plots for the industry classification system used in Moskowitz and Grinblatt (1999), which contains only 20 industries rather than the 48 in the Fama and French (1988) system. These charts generally mirror the insights from above, just at a coarser level of aggregation. First, I plot the number of firms in each of the 20 industries. These industry groupings were chosen in part to balance out the partition of firms across industries and, as a result, display a much more even cross-sectional distribution.

Number of firms in the monthly CRSP database from January 1988 to December 2010 by Moskowitz and Grinblatt (1999) industry classification system.

Again, the market capitalization plot reveals that firms have been consolidating within each industry since 2000.

Market capitalization in the monthly CRSP database from January 1988 to December 2010 by Moskowitz and Grinblatt (1999) industry classification system.

Finally, I plot the distribution of excess returns for each firm within industry as above.

Mean return by year for firms in the monthly CRSP database from January 1988 to December 2010 by Moskowitz and Grinblatt (1999) industry classification system.


Notes: Vazquez (2011)

October 26, 2011 by Alex

1. Introduction

In this note, I outline the main results in Scale Invariance, Bounded Rationality and Non-Equilibrium Economics (WP, 2009) by Sam Vazquez for use in a 5-minute presentation in Prof. Sargent's reading group.

This paper presents an agent based model (i.e., there are a finite number of agents) which makes 2 basic assumptions:

  1. Agents have a scale-invariant utility function; i.e., consuming 1 gallon of milk is as satisfying as consuming 3.8 litres of milk.
  2. Agents are boundedly rational and have beliefs about the exchange rates between a limited number of assets.

From these 2 assumptions, the author defines an ensemble of economic models using a variety of tools from physics (in particular, symmetry analysis, coarse graining and operator methods). This paper presents a new set of tools to address well known economic problems and illustrates the mathematical symmetry across the class of economic operators commonly employed by economic analysts.

2. An Analogy

To motivate this analysis for people with an economics background, I begin this note with an illustrative analogy. Physicists are used to modeling complex phenomena; e.g., think about the task of computing the pressure a gas such as oxygen exerts against the walls of a box such as a classroom. There are far too many individual oxygen atoms to count and keep track of at a micro level; however, scientists can compute the macro level properties of the gas, such as the average pressure exerted on its chamber walls, by employing 2 main modeling tricks.

First, they look for a key symmetry of the problem. This is a bit more of an art than a science. In the thermodynamics example above, this symmetry would be the fact that a particular atom of oxygen ought to behave in exactly the same way regardless of where it is in the room. For instance, there is no such thing as a “near-the-floor” or “by-the-window” oxygen atom. This symmetry puts restrictions on the functional form of the equations we can use to model the movements and interactions of oxygen atoms and reduces the number of potential state variables.

Second, they look for an appropriate reference neighborhood within which to study the movement of oxygen atoms. For example, we know that the behavior of oxygen atoms in 1 corner of the room is going to have a negligible impact on the behavior of the oxygen atoms in the far corner of the room. Thus, we can study the local behavior of the atoms taking the boundaries of each neighborhood as given, and then integrate up across all neighborhoods. This procedure is called coarse graining and will affect the scope of the approximations rather than the functional form of the equations.

Now, let’s consider how to apply these principles to a financial model and follow the lead of Vazquez (2009). First, consider the problem of finding a symmetry. Vazquez chooses a scale symmetry whereby the unit of measure should not affect the utility of an agent. This assumption will put a functional form restriction on the space of viable utility functions. Next, consider the problem of coarse graining. In standard economic models, agents see and trade all assets. By analogy, this would be equivalent to directly linking all oxygen atoms in a room regardless of their distance. To break these connections and allow for coarse graining, Vazquez assumes that agents only carry around an information set containing a subset of all asset pairs which he calls a “what-by-what” matrix. Thus, these 2 key assumptions allow Vazquez to use the statistical mechanics of fields to characterize the behavior of economic agents.

3. Economic Model

In this section, I tackle the basic modeling framework. Time is discrete and moves in integer steps.

3.1 Assets

There are \bar{a} different kinds of assets labeled by a = 1, 2, \ldots, \bar{a} with \mathcal{A} denoting the set of all assets. Agents get (possibly time dependent) utility from holding different combinations of the \bar{a} assets; however, this utility generating mechanism is left unspecified.

3.2 Agents

There are a finite number of \bar{n} agents indexed by n= 1, 2, \ldots, \bar{n} with the set of all agents denoted by \mathcal{N}. Every agent has an inventory of products in the quantity x_{n,a} for asset a with X_n denoting the vector of holdings for agent n. The state space of the economy at any point in time is given by the matrix \mathbf{X} = \left\{ X_n \mid n \in \mathcal{N} \right\}. Let \mathcal{X} denote the set of all possible states with \mathcal{X} = \mathbb{R}_+^{\bar{a} \times \bar{n}}.

Let \zeta_a \in \mathbb{R}_+ denote an asset specific positive scalar constant. Then, by scale invariance I mean that the economy should be unchanged if for every agent n \in \mathcal{N}, we multiply the agent’s holdings of asset a by \zeta_a:

(1)   \begin{align*} x_{n,a} \mapsto \zeta_a \cdot x_{n,a} \end{align*}

This is tantamount to saying that, if we counted all lengths in centimeters instead of meters so that \zeta_a = 100, no actual real outcomes should be changed. This restriction will imply that all essential functions will be homogeneous of degree 1.

3.3 Information

Agents have (possibly different) beliefs about how the future of the economy will play out; i.e., about how \mathcal{X} will evolve. Let \mathcal{I}_n denote the beliefs of agent n. Each agent’s information may be biased, narrow or wrong. At each point in time, agents have in their mind an exchange rate matrix \mathbf{M}_n \in \mathcal{I}_n with entries m_{n,a:b} denoting the number of units of good a that agent n would accept in exchange for a unit of good b. Let \mathcal{A}_n denote the set of assets for which agent n has an entry in her \mathbf{M}_n matrix. Thus, \mathcal{I}_n is a \sigma-algebra over matrices.

Each agent’s \mathbf{M}_n matrix has the following 3 properties:

  1. Reciprocality: Each agent is willing to buy and sell at the same price; i.e., there is no bid-ask spread.

    (2)   \begin{align*} m_{n,a:b} &= \frac{1}{m_{n,b:a}} \end{align*}

  2. Transitivity: There are no profit-generating trade combinations; i.e., there are no arbitrage opportunities.

    (3)   \begin{align*} m_{n,a:b} &= m_{n,c:b} \cdot m_{n,a:c} \end{align*}

  3. Scale Symmetry: Adjusting prices by the ratio \zeta_a/\zeta_b of scalar constants used to renormalize the asset units leaves the equilibrium allocations unchanged.

    (4)   \begin{align*} m_{n,a:b} &\mapsto \left( \frac{\zeta_a}{\zeta_b} \right) \cdot m_{n,a:b} \end{align*}

The \mathbf{M}_n matrices capture the idea that each agent’s field of vision or attention is bounded.
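One way to see that these 3 properties are mutually consistent: if agent n's exchange rates are generated by a vector of subjective numeraire prices p with m_{n,a:b} = p_b/p_a, then reciprocality and transitivity hold automatically. A quick numeric check (the price vector is illustrative):

```python
import math

# Subjective numeraire prices for 4 assets (illustrative values).
p = [1.0, 2.5, 0.4, 10.0]

# m[a][b] = units of asset a accepted per unit of asset b = p_b / p_a.
m = [[p[b] / p[a] for b in range(len(p))] for a in range(len(p))]

for a in range(len(p)):
    for b in range(len(p)):
        # Reciprocality, property (2): m_{a:b} = 1 / m_{b:a}
        assert math.isclose(m[a][b], 1.0 / m[b][a])
        for c in range(len(p)):
            # Transitivity, property (3): m_{a:b} = m_{c:b} * m_{a:c}
            assert math.isclose(m[a][b], m[c][b] * m[a][c])
print("reciprocality and transitivity hold")
```

Scale symmetry, property (4), then corresponds to rescaling the price vector asset by asset.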

3.4 Preferences

Given his information set \mathcal{I}_n, each agent behaves rationally in accordance with the von Neumann and Morgenstern axioms of decision theory1, yielding an index of satisfaction V_n as defined below:

(5)   \begin{align*} V_n &: \mathcal{X}_n \times \mathcal{I}_n \mapsto \mathbb{R} \end{align*}

where V_n = \mathbb{E} \left[ U_n \mid \mathcal{I}_n \right] with U_n as agent n’s utility function. V_n has the properties that for each agent n \in \mathcal{N}, the partial derivatives are \partial_a V_n > 0 and \partial_a^2 V_n < 0 for each asset a \in \mathcal{A} where \partial_a \equiv \partial / \partial x_{n,a} for brevity. Also, if between period t and t+1 an agent n changes his asset holdings from X_n to X_n', then I abbreviate the corresponding change in happiness as:

(6)   \begin{align*} \Delta V_n &= V_n \left( X_n' \right) - V_n \left( X_n \right) \end{align*}

The fact that the economy must be scale invariant implies additional restrictions on the utility function of each agent. In particular, it must be the case that the utility function can vary at most by a constant due to a change in scale; i.e., the utility function must be CRRA/log-like in nature, such as:

(7)   \begin{align*} U_n \left( X_n \right) &= \phi \cdot \ln \left[ \Psi_n^{\top} X_n \right] \end{align*}

where \Psi_n is an \bar{a} \times 1 vector of free parameters. For instance, consider the utility specification below, which equates agent n’s utility with the value of his asset holdings in terms of a numeraire good denoted by b=1:

(8)   \begin{align*} U_n &= \ln \left[ \sum_{a \in \mathcal{A}} \left( x_{n,a} \cdot m_{n,a:1} \right)^{\phi} \right] \end{align*}

Here, note that rescaling the units of 1 of the assets has no effect on the first order condition. A key assumption in this class of models is that utility is myopic and depends only on the current period’s asset holdings, which fits nicely with Vazquez’s log utility assumption.
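The scale-invariance restriction is easy to verify numerically: with log utility, rescaling holdings by \zeta shifts the utility level by the constant \ln \zeta, so utility differences like \Delta V_n, which drive all behavior in the model, are unchanged. A minimal single-asset check (all values illustrative):

```python
import math

def U(x, phi=1.0):
    # Log-like utility over a single asset holding (illustrative).
    return phi * math.log(x)

x_old, x_new = 2.0, 5.0   # holdings before and after a trade
zeta = 100.0              # e.g., measure in centimeters instead of meters

dV_original = U(x_new) - U(x_old)
dV_rescaled = U(zeta * x_new) - U(zeta * x_old)

# The change of units shifts U by log(zeta) but leaves Delta V unchanged.
assert math.isclose(dV_original, dV_rescaled)
```

Any utility that is merely monotone but not log-like would fail this check, which is the content of the restriction in (7).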

3.5 Economic Operators

Economic movements are classified as operators: mappings which change agents’ portfolio holdings. For instance, if \mathbb{H} is an arbitrary economic operator, then we have that:

(9)   \begin{align*} \mathbb{H} &: \mathcal{X} \mapsto \mathcal{X} \end{align*}

For example, consider the following operators:

  1. A consumption operator \mathbb{C} which removes asset holdings but increases utility,
  2. A production operator \mathbb{Y} which recombines asset holdings at a net surplus,
  3. A trade operator \mathbb{T} which exchanges asset holdings between agents, or
  4. A depreciation operator \mathbb{D} which removes asset holdings but does not compensate agents with a utility boost.

Using this general framework, we can talk then about similarities and differences across economic operators. For instance, the trade operator \mathbb{T} preserves the aggregate asset proportions as it simply transfers assets between 2 agents. Thus, we can think about trade as a conservative operator. Alternatively, the consumption, production and depreciation operators are not conservative with respect to the aggregate asset proportions.
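The conservative property of the trade operator is concrete: applying \mathbb{T} to the holdings matrix \mathbf{X} leaves every column, i.e., the aggregate holdings of each asset, unchanged. A minimal sketch (the function name and quantities below are illustrative):

```python
import copy

def trade_operator(X, n, n_prime, a, b, dx_a, m_ab):
    """Agent n receives dx_a units of asset a from agent n_prime and
    pays m_ab * dx_a units of asset b in return (conventions as in the text)."""
    X = copy.deepcopy(X)          # operators map states to new states
    X[n][a] += dx_a
    X[n][b] -= m_ab * dx_a
    X[n_prime][a] -= dx_a
    X[n_prime][b] += m_ab * dx_a
    return X

# Two agents, three assets (illustrative holdings matrix).
X = [[10.0, 5.0, 2.0],
     [ 4.0, 8.0, 1.0]]
X_new = trade_operator(X, n=0, n_prime=1, a=0, b=1, dx_a=1.5, m_ab=2.0)

# Conservative: aggregate holdings of every asset are unchanged.
for asset in range(3):
    assert sum(row[asset] for row in X) == sum(row[asset] for row in X_new)
```

A consumption or depreciation operator would fail this column-sum check, which is exactly the sense in which those operators are not conservative.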

In the analysis below, I overload the \Delta terminology used to denote changes in satisfaction due to changes in holdings to be operator specific. In particular, for an arbitrary economic operator \mathbb{H} I define \Delta_{\mathbb{H}} as:

(10)   \begin{align*} \Delta_{\mathbb{H}} V_n &= \mathbb{E} \left[ U_n \left( \mathbb{H} X_n \right)  - U_n \left( X_n \right) \mid \mathcal{I}_n \right] \end{align*}

4. Examples

In Section 3 above, I defined the basic elements of a class of economic models and then stopped just short of giving an equilibrium definition. In this section, I now look at 2 examples of equilibria from this class of models. Each of these examples will essentially represent a different take on what the equilibrium price object will look like. I do not consider any models with production, consumption or depreciation decisions.

4.1 Fixed Exchange Rate

First consider a world with a single exchange rate for each asset which is set by fiat. In this world, we have an equilibrium definition:

Definition (Equilibrium): An equilibrium is an \bar{n} \times \bar{a} matrix of allocations \mathbf{X} as well as an \bar{a} \times \bar{a} matrix of exchange rates \mathbf{M} with a unit diagonal and reciprocal off-diagonal entries (m_{a:b} = 1/m_{b:a}), representing \bar{a} \cdot (\bar{a} - 1)/2 unique elements, such that given the exchange rate matrix \mathbf{M}, we have that:

  1. For each agent n \in \mathcal{N}, the allocation X_n satisfies:

    (11)   \begin{align*} 0 &= \frac{\partial \left( \Delta_{\mathbb{T}} V_n \right)}{\partial \left( \Delta x_{n,a}^* \right)} \end{align*}

  2. Markets clear such that for each a \in \mathcal{A}:

    (12)   \begin{align*} 0 &= \sum_{n \in \mathcal{N}} \Delta x_{n,a} \end{align*}

In such an economy, we can characterize the equilibrium allocations as follows:

Proposition (Equilibrium w/ Single Exchange Rate): Near the equilibrium, the amount of assets a and b that agent n would trade given the agent independent exchange rate m_{a:b} is given by:

(13)   \begin{align*} \Delta x_{n,a} &\approx \kappa_{n,a:b} \cdot \left\{ \partial_a V_n - m_{a:b} \cdot \partial_b V_n \right\} \\ \Delta x_{n,b} &\approx - m_{a:b} \cdot \Delta x_{n,a} \end{align*}

where \kappa_{n,a:b} is an agent dependent positive constant.

This result follows directly from a second order Taylor expansion of the utility gain to trading around the fixed point of \Delta x_{n,a} = 0:

Proof: Suppose that the trade operator \mathbb{T} changes agent n’s asset holdings in asset a by depositing \Delta x_{n,a}. Then, an equilibrium would represent a fixed point such that the following 2 properties hold:

(14)   \begin{align*} \frac{\partial \left( \Delta_{\mathbb{T}} V_n \right)}{\partial \left( \Delta x_{n,a}^* \right)} &= \left\{ \partial_a V_n - m_{a:b} \cdot \partial_b V_n  \right\} = 0 \\ \frac{\partial^2 \left( \Delta_{\mathbb{T}} V_n \right)}{\left\{\partial \left( \Delta x_{n,a}^* \right) \right\}^2} &< 0 \end{align*}

The first order condition says that agent n no longer wants to trade and the second order condition says that we are at a local optimum. A Taylor expansion of \Delta_{\mathbb{T}} V_n around its true fixed point yields first and second order terms:

(15)   \begin{align*} \Delta_{\mathbb{T}} V_n &\approx \Delta x_{n,a} \cdot \left\{ \partial_a V_n - m_{a:b} \cdot \partial_b V_n  \right\} \Big\vert_{\Delta x_{n,a} = 0} \\ &\qquad \qquad + \frac{\left(\Delta x_{n,a}\right)^2}{2} \cdot \left\{ \partial_a^2 V_n + m_{a:b}^2 \cdot \partial_b^2 V_n - 2 \cdot m_{a:b} \cdot \partial_a \partial_b V_n \right\}\Big\vert_{\Delta x_{n,a} = 0} + \ldots \end{align*}

Differentiating with respect to \Delta x_{n,a}, setting the result equal to 0, and solving for \Delta x_{n,a} (and for \Delta x_{n,b} via its mirror image) yields the equations above.
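The accuracy of this expansion is easy to check numerically. Below is a minimal sketch using a separable log utility V(x_a, x_b) = \ln x_a + \ln x_b, an illustrative choice satisfying the sign conditions on V_n above, where a trade deposits \Delta x_{n,a} of asset a and withdraws m_{a:b} \cdot \Delta x_{n,a} of asset b:

```python
import math

x_a, x_b, m_ab = 4.0, 6.0, 1.2   # holdings and exchange rate (illustrative)
dx = 0.01                         # a small trade in asset a

def V(xa, xb):
    # Separable log utility: dV > 0 and d2V < 0 in each asset.
    return math.log(xa) + math.log(xb)

# Exact utility gain from the trade.
exact = V(x_a + dx, x_b - m_ab * dx) - V(x_a, x_b)

# First- and second-order Taylor terms as in (15); the cross-partial is 0 here.
first = dx * (1 / x_a - m_ab * (1 / x_b))
second = 0.5 * dx**2 * (-1 / x_a**2 + m_ab**2 * (-1 / x_b**2))

assert abs(exact - (first + second)) < 1e-6  # remaining error is O(dx^3)
```

The same computation with a larger dx shows the approximation degrading, which is why the proposition is stated as holding near the equilibrium.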

4.2 Barter

Next consider the case of pairwise trading between agents which I refer to as bartering. In this world, I assume that the bargaining process is exogenously specified and the agents take the split of the gains to trade as given. What’s more, because agents are perfectly myopic in their utility specifications, they have no concern for the matching process which assigns traders to new partners each period.

Definition (Equilibrium): An equilibrium is an \bar{n} \times \bar{a} matrix of allocations \mathbf{X} as well as a set of pairwise exchange rates m_{n:n',a:b}, one for each of the at most \bar{n} \cdot (\bar{n} - 1)/2 matched agent pairs, such that given each exchange rate, we have that:

  1. For each agent n \in \mathcal{N}, the allocation X_n satisfies:

    (16)   \begin{align*} 0 &= \frac{\partial \left( \Delta_{\hat{\mathbb{T}}} V_n \right)}{\partial \left( \Delta x_{n,a}^* \right)} \end{align*}

  2. Each pairwise market clears:

    (17)   \begin{align*} 0 &= \Delta x_{n,a} + \Delta x_{n',a} \end{align*}

In such a world, we have the following perturbation equilibrium result:

Proposition (Equilibrium w/ Barter): Near the equilibrium, the amount of product a and b that 2 agents n and n' exchange as a result of a bargaining process \hat{\mathbb{T}} is given by:

(18)   \begin{align*} \Delta x_{n,a} &\approx \hat{\kappa}_{n,a:b} \cdot \left\{ \partial_a \left( V_n - V_{n'} \right) \cdot \partial_b \left( V_n + V_{n'} \right) -  \partial_a \left( V_n + V_{n'} \right) \cdot \partial_b \left( V_n - V_{n'} \right) \right\} \\ \Delta x_{n,b} &\approx - \left( \frac{\partial_a \left( V_n + V_{n'} \right)}{\partial_b \left( V_n + V_{n'} \right)} \right) \cdot \Delta x_{n,a} \end{align*}

This result builds on the equilibrium characterization from above:

Proof: Since the agents are still (somewhat unrealistically) price takers in this world, we can use the equilibrium formulae from the proposition above; however, now we have additional information which we can use to further restrict the demand functionals. In particular, we know that every pairwise trade nets out to 0:

(19)   \begin{align*} 0 &= \Delta x_{n,a} + \Delta x_{n',a} \end{align*}

As a result, we can solve for the fixed exchange rate by treating the equilibrium demand functionals of agents n and n' as a system of 2 equations with 1 unknown:

(20)   \begin{align*} m_{n:n',a:b} &= \frac{\kappa_{n,a:b} \cdot \partial_a V_n + \kappa_{n',a:b} \cdot \partial_a V_{n'}}{\kappa_{n,a:b} \cdot \partial_b V_n + \kappa_{n',a:b} \cdot \partial_b V_{n'}} \end{align*}

Substituting this formula back into the equation for each agent’s demand function given an exogenous exchange rate yields the equilibrium result.
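Formula (20) can be checked numerically: given any positive marginal utilities and \kappa constants, the demands implied by the linear first-order approximation \Delta x = \kappa \cdot \left\{ \partial_a V - m \cdot \partial_b V \right\} net to zero at exactly that exchange rate. A minimal sketch (all parameter values are illustrative):

```python
import math

# Illustrative marginal utilities and trade-sensitivity constants.
dVa_n, dVb_n = 0.8, 0.5      # agent n:  partial_a V_n, partial_b V_n
dVa_np, dVb_np = 0.3, 0.9    # agent n': partial_a V_n', partial_b V_n'
k_n, k_np = 1.5, 2.0         # kappa_{n,a:b}, kappa_{n',a:b}

# Equation (20): the pairwise exchange rate.
m = (k_n * dVa_n + k_np * dVa_np) / (k_n * dVb_n + k_np * dVb_np)

# Linear first-order demands for asset a at that rate.
dx_n = k_n * (dVa_n - m * dVb_n)
dx_np = k_np * (dVa_np - m * dVb_np)

# Equation (19): the pairwise market clears.
assert math.isclose(dx_n + dx_np, 0.0, abs_tol=1e-12)
```

This is just the algebra of the proof restated: m in (20) is the ratio that makes the two demands sum to zero by construction.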

  1. See von Neumann and Morgenstern (1944).
