1. Motivation
How persistent has IBM’s daily trading volume been over the last month? How persistent have Apple’s monthly stock returns been over the last few years of trading? What about the US’s annual GDP growth over the last century? To answer these questions, why not just run an OLS regression,
x_t = \hat{\alpha} + \hat{\rho} \cdot x_{t-1} + \epsilon_t \qquad (1)
where $\hat{\rho}$ denotes the estimated auto-correlation of the relevant data series? The Gauss-Markov Theorem says that an OLS regression will give a consistent, unbiased estimate of the persistence parameter, $\rho$, right?
Wrong.
Although OLS estimates are still consistent when using time-series data (i.e., they converge to the correct value as the number of observations increases), they are no longer unbiased in finite samples (i.e., they may be systematically too large or too small when you only have $T$ observations rather than infinitely many). To illustrate the severity of this problem, I simulate data of several different lengths, $T$,
x_t = \mu + \rho \cdot (x_{t-1} - \mu) + \epsilon_t, \qquad \epsilon_t \sim \mathrm{N}(0,1) \qquad (2)
and estimate the simple auto-regressive model from Equation (1) to recover $\hat{\rho}$. The figure below shows the results of this exercise. The left-most panel reveals that, as the true persistence parameter approaches one, $\rho \to 1$, the bias becomes a sizable fraction of the true coefficient. In other words, if you simulate a short time series using a large positive $\rho$, then you’ll typically estimate a $\hat{\rho}$ that is noticeably smaller than the true value!
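Here is a minimal sketch of this exercise in Python (the sample lengths, persistence values, and number of replications below are illustrative assumptions, not necessarily the ones behind the figure):

```python
import numpy as np

def simulate_ar1(rho, T, rng):
    """Simulate T observations of x_t = rho * x_{t-1} + eps_t with mu = 0 and x_0 = 0."""
    x = np.zeros(T + 1)
    eps = rng.standard_normal(T)
    for t in range(1, T + 1):
        x[t] = rho * x[t - 1] + eps[t - 1]
    return x[1:]

def ols_slope(x):
    """OLS slope from regressing x_t on a constant and x_{t-1}."""
    return np.cov(x[:-1], x[1:], bias=True)[0, 1] / np.var(x[:-1])

rng = np.random.default_rng(0)
for T in (50, 100, 200):               # illustrative sample lengths
    for rho in (0.0, 0.5, 0.9):        # illustrative persistence values
        est = [ols_slope(simulate_ar1(rho, T, rng)) for _ in range(5_000)]
        print(f"T={T:3d}  rho={rho:.1f}  mean bias={np.mean(est) - rho:+.3f}")
```

For every $(T, \rho)$ pair, the mean bias is negative, and it shrinks toward zero as $T$ grows.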
What is it about time series data that induces the bias? Why doesn’t this problem exist in a cross-sectional regression? How can it exist even when the true coefficient is $\rho = 0$? This post answers these questions. All of the code can be found here.
2. Root of the Problem
Here is the short version. The bias in $\hat{\rho}$ comes from having to estimate the sample average of the time series:
\bar{x} = \frac{1}{T} \cdot \sum_{t=1}^{T} x_t \qquad (3)
If you knew the true mean, $\mu$, then there’d be no bias in $\hat{\rho}$. Moreover, the bias goes away as you see more and more data (i.e., the estimator is consistent) because your estimated mean, $\bar{x}$, gets closer and closer to the true mean, $\mu$. Let’s now dig into why not knowing the mean of a time series induces a bias in the OLS estimate of the slope.
Estimating the coefficients in Equation (1) means choosing the parameters $\hat{\alpha}$ and $\hat{\rho}$ to minimize the mean squared error between the left-hand-side variable, $x_t$, and the right-hand-side variable, $x_{t-1}$:
\min_{\{\hat{\alpha}, \, \hat{\rho}\}} \ \frac{1}{T} \cdot \sum_{t=1}^{T} \Big( x_t - \big[ \hat{\alpha} + \hat{\rho} \cdot x_{t-1} \big] \Big)^2 \qquad (4)
Any parameter choice, $(\hat{\alpha}, \hat{\rho})$, that minimizes this error must also satisfy the first-order condition below:
0 = \frac{1}{T} \cdot \sum_{t=1}^{T} \Big( x_t - \big[ \hat{\alpha} + \hat{\rho} \cdot x_{t-1} \big] \Big) \cdot (x_{t-1} - \bar{x}) \qquad (5)
Substituting in the true functional form, $x_t = \mu + \rho \cdot (x_{t-1} - \mu) + \epsilon_t$, then gives:
0 = \frac{1}{T} \cdot \sum_{t=1}^{T} \Big( \big[ \mu + \rho \cdot (x_{t-1} - \mu) + \epsilon_t \big] - \big[ \hat{\alpha} + \hat{\rho} \cdot x_{t-1} \big] \Big) \cdot (x_{t-1} - \bar{x}) \qquad (6)
From here, it’s easy to solve for the expected difference between the estimated slope coefficient, $\hat{\rho}$, and the true slope coefficient, $\rho$:
\mathrm{E}\big[ \hat{\rho} - \rho \big] = \mathrm{E}\!\left[ \, \frac{\frac{1}{T} \cdot \sum_{t=1}^{T} \epsilon_t \cdot (x_{t-1} - \bar{x})}{\frac{1}{T} \cdot \sum_{t=1}^{T} (x_{t-1} - \bar{x})^2} \, \right] \qquad (7)
Note that $\epsilon_t = (x_t - \mu) - \rho \cdot (x_{t-1} - \mu)$ is just the regression residual. So, this equation says that the estimated persistence parameter, $\hat{\rho}$, will be too high if big $\epsilon_t$’s tend to follow periods in which $x_{t-1}$ is above its mean. Conversely, your estimated persistence parameter, $\hat{\rho}$, will be too low if big $\epsilon_t$’s tend to follow periods in which $x_{t-1}$ is below its mean. How can estimating the time-series mean, $\bar{x}$, induce this correlation when knowing the true mean, $\mu$, does not?
Clearly, we need to compute the average of the time series given in Equation (3). For simplicity, let’s assume that the true mean is $\mu = 0$ and the initial value is $x_0 = 0$. Under these conditions, each successive term of the time series is just a weighted sum of past shocks:
x_t = \sum_{s=1}^{t} \rho^{\,t-s} \cdot \epsilon_s \qquad (8)
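To make this concrete, the short check below (a sketch under the same $\mu = 0$ and $x_0 = 0$ assumptions, with arbitrary values $\rho = 0.5$ and $T = 5$) reconstructs each $x_t$ from Equation (8) and then verifies that the sample mean $\bar{x}$ is itself a weighted sum of every shock in the sample, including the ones realized after any given date:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T = 0.5, 5
eps = rng.standard_normal(T)

# Build the series recursively: x_t = rho * x_{t-1} + eps_t, starting from x_0 = 0.
x = np.zeros(T)
for t in range(T):
    x[t] = (rho * x[t - 1] if t > 0 else 0.0) + eps[t]

# Equation (8): x_t is a weighted sum of the shocks eps_1, ..., eps_t.
x_check = np.array([sum(rho ** (t - s) * eps[s] for s in range(t + 1)) for t in range(T)])
print(np.allclose(x, x_check))  # True

# The sample mean therefore loads on *every* shock, including future ones
# (0-indexed shocks): x_bar = sum_s w_s * eps_s with w_s = (1 - rho**(T - s)) / (1 - rho) / T.
weights = np.array([(1 - rho ** (T - s)) / (1 - rho) / T for s in range(T)])
print(np.isclose(x.mean(), weights @ eps))  # True
```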
So, the sample average given in Equation (3) must contain information about future shock realizations, $\epsilon_{t+1}, \epsilon_{t+2}, \ldots$. Consider the logic when the true persistence parameter is positive, $\rho > 0$, and the true mean is zero, $\mu = 0$. If the current period’s realization of $x_t$ is below the estimated mean, $\bar{x}$, then future $x_t$’s have to be above the estimated mean on average; otherwise, $\bar{x}$ wouldn’t be the sample mean. Conversely, if the current period’s realization of $x_t$ is above the estimated mean, then future $x_t$’s have to be below the estimated mean. As a result, the sample covariance between $(x_{t-1} - \bar{x})$ and $\epsilon_t$ must be negative:
\mathrm{E}\!\left[ \, \frac{1}{T} \cdot \sum_{t=1}^{T} (x_{t-1} - \bar{x}) \cdot \epsilon_t \, \right] < 0 \qquad (9)
So, when the true slope parameter, $\rho$, is positive, the OLS estimate, $\hat{\rho}$, will be biased downward.
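Equation (9) is easy to verify by simulation. The sketch below (again assuming $\mu = 0$ and $x_0 = 0$, with arbitrary values $\rho = 0.5$ and $T = 50$) averages the sample covariance between $(x_{t-1} - \bar{x})$ and $\epsilon_t$ across many simulated series; the average comes out reliably negative:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T, n_sims = 0.5, 50, 20_000

cov_terms = []
for _ in range(n_sims):
    eps = rng.standard_normal(T)
    x = np.zeros(T + 1)                    # x[0] is the initial value x_0 = 0
    for t in range(1, T + 1):
        x[t] = rho * x[t - 1] + eps[t - 1]
    x_bar = x[1:].mean()                   # estimated mean of the observed series
    # Sample covariance between (x_{t-1} - x_bar) and eps_t, as in Equation (9)
    cov_terms.append(np.mean((x[:-1] - x_bar) * eps))

print(f"average of (1/T) * sum_t (x_(t-1) - x_bar) * eps_t: {np.mean(cov_terms):+.4f}")
```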
3. Cross-Sectional Regressions
Cross-sectional regressions don’t have this problem because estimating the mean of the right-hand-side variable,
\bar{x} = \frac{1}{N} \cdot \sum_{n=1}^{N} x_n \qquad (10)
doesn’t tell you anything about the error terms. For example, imagine you had $N$ data points generated by the following model,
y_n = \alpha + \beta \cdot x_n + \epsilon_n \qquad (11)
where each observation of the right-hand-side variable, $x_n$, is independently drawn from the same distribution. In this setting, the slope coefficient from the associated cross-sectional regression,
y_n = \hat{\alpha} + \hat{\beta} \cdot x_n + \epsilon_n \qquad (12)
won’t be biased because $\bar{x}$ isn’t a function of any of the error terms, $\epsilon_n$:
\mathrm{E}\big[ \hat{\beta} - \beta \big] = \mathrm{E}\!\left[ \, \frac{\frac{1}{N} \cdot \sum_{n=1}^{N} \epsilon_n \cdot (x_n - \bar{x})}{\frac{1}{N} \cdot \sum_{n=1}^{N} (x_n - \bar{x})^2} \, \right] = 0 \qquad (13)
So, estimating the mean won’t induce any covariance between the residuals, $\epsilon_n$, and the right-hand-side variable, $x_n$. All the conditions of the Gauss-Markov Theorem hold. If you only have a small number of observations, then $\hat{\beta}$ may be a noisy estimate, but at least it will be unbiased.
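As a sanity check on this claim, here is a small Monte Carlo sketch (the values $N = 10$, $\alpha = 1$, and $\beta = 0.5$ are arbitrary choices for illustration). Even with only ten observations per sample, the average slope estimate sits right on top of the true $\beta$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha, beta, n_sims = 10, 1.0, 0.5, 20_000

estimates = []
for _ in range(n_sims):
    x = rng.standard_normal(N)             # regressor drawn iid, as in Equation (11)
    y = alpha + beta * x + rng.standard_normal(N)
    # OLS slope with an intercept: Cov(x, y) / Var(x)
    estimates.append(np.cov(x, y, bias=True)[0, 1] / np.var(x))

print(f"true beta = {beta}, mean estimate = {np.mean(estimates):.3f}")
```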
4. Bias At Zero
One of the more interesting things about the slope coefficient bias in time series regressions is that it doesn’t disappear when the true parameter value is $\rho = 0$. For instance, in the figure above, notice that the expected bias only vanishes at a slightly negative value of $\rho$ and is still negative at $\rho = 0$. Put differently, if you estimated the correlation between Apple’s returns in successive months and found a parameter value of $\hat{\rho} = 0$, then the true coefficient is likely slightly positive. In fact, Kendall (1954) derives an approximate expression for this bias when the true data generating process is an AR(1):
\mathrm{E}\big[ \hat{\rho} - \rho \big] \approx -\frac{1 + 3 \cdot \rho}{T} \qquad (14)
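Kendall’s approximation is straightforward to check by simulation. A minimal sketch (with an illustrative $T = 50$ and a handful of $\rho$ values):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_sims = 50, 20_000

def simulate_ar1(rho, T, rng):
    """Simulate T observations of x_t = rho * x_{t-1} + eps_t with mu = 0 and x_0 = 0."""
    x = np.zeros(T + 1)
    eps = rng.standard_normal(T)
    for t in range(1, T + 1):
        x[t] = rho * x[t - 1] + eps[t - 1]
    return x[1:]

def ols_slope(x):
    """OLS slope from regressing x_t on a constant and x_{t-1}."""
    return np.cov(x[:-1], x[1:], bias=True)[0, 1] / np.var(x[:-1])

for rho in (-0.5, 0.0, 0.5, 0.9):
    bias = np.mean([ols_slope(simulate_ar1(rho, T, rng)) - rho for _ in range(n_sims)])
    print(f"rho={rho:+.1f}  simulated bias={bias:+.3f}  Kendall approx={-(1 + 3 * rho) / T:+.3f}")
```

The simulated bias lines up with $-(1 + 3 \cdot \rho)/T$ reasonably well for moderate values of $\rho$.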
A simple two-period example illustrates why this is so. Imagine a world where the true coefficient is $\rho = 0$, and you see the pair of data points $(x_1, x_2)$ in the left panel of the figure below, with the first observation lower than the second. If $\mu = 0$ as well, then we have that:
\bar{x} = \frac{x_1 + x_2}{2} \qquad (15)
I plot this sample mean with the green dashed line. In the right panel of the figure, I show the distances $(x_1 - \bar{x})$ and $(x_2 - \bar{x})$ in red and blue, respectively. Clearly, if the first observation, $x_1$, is below the line, then the second observation, $x_2$, is above the line. But, since $\rho = 0$, the second observation is just $x_2 = \epsilon_2$, so you will observe a negative correlation between $(x_1 - \bar{x})$ and $\epsilon_2$. Thus, $\hat{\rho}$ will be downward biased.
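The two-period logic can also be checked numerically. In the sketch below (assuming $\rho = 0$ and $\mu = 0$, so each observation is just its own shock), $x_1$ and $x_2$ are independent by construction, yet demeaning by $\bar{x}$ mechanically creates a negative correlation between $(x_1 - \bar{x})$ and $\epsilon_2 = x_2$:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sims = 100_000

# With rho = 0 and mu = 0, each observation is just its own shock.
x1 = rng.standard_normal(n_sims)
x2 = rng.standard_normal(n_sims)           # x_2 = eps_2
x_bar = (x1 + x2) / 2                       # Equation (15)

# x_1 and x_2 are independent, yet demeaning by x_bar induces a negative correlation.
print(f"corr(x1, x2)           = {np.corrcoef(x1, x2)[0, 1]:+.3f}")
print(f"corr(x1 - x_bar, eps2) = {np.corrcoef(x1 - x_bar, x2)[0, 1]:+.3f}")
```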