Research Notebook

Recurrence in 1D, 2D and 3D Brownian Motion

June 26, 2011 by Alex

Introduction

I show that Brownian motion is recurrent for dimensions d=1 and  d=2 but transient for dimensions d \geq 3. Below, I give the  technical definition of a recurrent stochastic process:

Definition: (Recurrent Stochastic Process) Let X(t) be a stochastic process. We say that X(t) is recurrent  if for any \varepsilon > 0 and any point \bar{x} \in  \mathtt{Dom}(X) we have that:

(1)   \begin{align*} \infty \ &= \ \int_0^\infty \ \mathtt{Pr} \left[ \ \left\Vert X(t) - \bar{x} \right\Vert < \varepsilon  \mid X(0) = \bar{x} \ \right] \cdot dt \end{align*}

In words, this definition says that if the stochastic process X(t) starts out at a point \bar{x}, then if we watch the process forever it will return to within some tiny region of \bar{x} an infinite number of times.
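
To make this definition a bit more tangible, here is a quick Python sketch that truncates the integral in (1) at a finite horizon and estimates it by simulating discretized Brownian paths started at the origin. The function name occupation_time and all of the parameter choices below are purely illustrative; the point is only that the truncated estimate keeps growing with the horizon in d=1 but levels off in d=3.

    import random

    def occupation_time(horizon, dt, eps, d, num_paths=100, seed=0):
        """Monte Carlo estimate of the time a d-dimensional Brownian path started
        at the origin spends within eps of the origin, up to the given horizon."""
        rng = random.Random(seed)
        steps = int(horizon / dt)
        total = 0.0
        for _ in range(num_paths):
            pos = [0.0] * d
            for _ in range(steps):
                pos = [x + rng.gauss(0.0, dt ** 0.5) for x in pos]
                if sum(x * x for x in pos) < eps ** 2:
                    total += dt
        return total / num_paths

    # In d = 1 the estimate keeps growing with the horizon; in d = 3 it levels off.
    for horizon in (10.0, 100.0):
        print(horizon,
              occupation_time(horizon, dt=0.01, eps=0.5, d=1),
              occupation_time(horizon, dt=0.01, eps=0.5, d=3))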

Motivating Example

Before I go about proving that Brownian motion is recurrent or   transient in different dimensions, I first want to nail down the   intuition of what it means for a stochastic process to be recurrent   in a more physical sense. To do this, I use the standard real world   example for random walks: a drunk leaving a bar.

Figure: Arnold’s lattice world for the case of 2 dimensions.

Example: (A Drunkard’s Flight) Suppose that Arnold is drunk and leaving his local bar. What’s more, Arnold is really inebriated and can only muster enough coordination to move 1 step backward or 1 step forward each second. Because he is so drunk, he doesn’t have any control over which direction he stumbles, so you can think of him moving backwards and forwards each second with equal probability \pi = 1/2. Thus, Arnold’s position relative to the door of the bar is a stochastic process with independent \pm 1 increments. This process is recurrent if Arnold returns to the bar an infinite number of times as we allow him to stumble around all night. Put differently, if Arnold ever has a last drink for the evening and exits the bar for good, then his stumbling process is transient.
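
To get a feel for what recurrence looks like, here is a small Python simulation of Arnold’s \pm 1 stumble that counts how many times he finds himself back at the bar door over nights of different lengths; the helper name count_returns and the step counts are just illustrative choices.

    import random

    def count_returns(num_steps, seed=0):
        """Simulate Arnold's +/- 1 stumble and count visits back to the bar door."""
        rng = random.Random(seed)
        position = 0
        returns = 0
        for _ in range(num_steps):
            position += rng.choice((-1, 1))
            if position == 0:
                returns += 1
        return returns

    # Longer nights typically bring more returns in one dimension.
    for steps in (10**3, 10**5, 10**6):
        print(steps, count_returns(steps))

The count of returns typically keeps climbing as the night gets longer, which is exactly the recurrent behaviour the d=1 case of the theorem below formalizes.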

In the context of this toy example, I show that as I allow Arnold   to stumble in more and more different directions (backwards   vs. forwards, left vs. right, up vs. down, etc…), his   probability of returning to the bar decreases. Namely, if Arnold   can only move backwards and forwards, then his stumbling will lead   him back to his bar an infinite number of times. If he can move   backwards and forwards as well as left and right, he will still   wander back to the bar an infinite number of times. However, if   Arnold either suddenly grows wings (i.e., can move up or down) or   happens to be the Terminator (i.e., can time travel to the future   or past), at some point his wandering will lead him away from the   bar forever.

 

Outline

First, I state and prove   Polya’s Theorem which characterizes whether or not a random walk on   a lattice is recurrent in each dimension d=1,2,3\ldots. Then, I show how to extend this result to continuous time Brownian motion using the   Central Limit Theorem. I attack this recurrence result for continuous time Brownian motion   via Polya’s Recurrence Theorem because I think the intuition is   much clearer along this route. I find the direct proof in   continuous time which relies on Dynkin’s lemma a bit obscure;   whereas, I have a very good feel for what it means to count paths   (i.e., possible random walk trajectories) on a grid.

 

Polya’s Recurrence Theorem

Below, I formulate and prove Polya’s Recurrence Theorem for  dimensions d \in \{1,2,3\}.

Theorem: (Polya Recurrence Theorem) Let p(d) be the probability that a random walk on a d  dimensional lattice ever returns to the origin. Then, we have that  p(1)=p(2) = 1 while p(3) < 1.

 

Intuition

Before I go any further into the maths, I walk through the physical   intuition behind the result. First, imagine the case where drunk   Arnold can only move forwards and backwards. In order for Arnold to   return to the bar door in 2 \cdot s steps [1], he must take the exact   same number of forward and backward steps, i.e., he has to choose   a sequence of 2 \cdot s steps such that exactly s of them are   forward. There are 2 \cdot s choose s ways he could do   this:

(2)   \begin{align*} \mathtt{\# \ returning \ paths} \ &= \ \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \end{align*}

What’s more, I know that the probability of each of the paths Arnold could take is just 1 divided by the total number of paths 2^{2   \cdot s}:

(3)   \begin{align*} \mathtt{Pr[each \ path]} \ &= \ \frac{1}{2^{2 \cdot s}} \end{align*}
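
Putting (2) and (3) together, the chance that Arnold is standing at the door again after exactly 2 \cdot s steps is just their product. A quick Python check of this (the helper name p_return_1d is an illustrative choice):

    from math import comb

    def p_return_1d(s):
        """Probability the 1-d walk sits at the origin after 2*s steps:
        (number of returning paths) * (probability of each path)."""
        returning_paths = comb(2 * s, s)        # Equation (2)
        prob_each_path = 0.5 ** (2 * s)         # Equation (3)
        return returning_paths * prob_each_path

    for s in (1, 5, 25, 125):
        print(s, p_return_1d(s))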

Now consider drunk Arnold’s situation in 2-dimensions. Here, he   must take the exact same number of steps forward and backwards as   well as the exact same number of steps left and right. Thus, summing over the number k of left steps (which must equal the number of right steps), the number of ways for Arnold to return to the bar is a sum of 2 \cdot s choose (k,k,s-k,s-k) terms:

(4)   \begin{align*} \mathtt{\# \ returning \ paths} \ &= \ \sum_{k=0}^s \ \begin{pmatrix} 2 \cdot s \\ k,k,(s-k),(s-k) \end{pmatrix} \end{align*}

What is this sum computing in words? First, suppose that Arnold   takes no steps in the left or right directions; then set k=0 and   the number of paths he could take back to the bar is equal to the   number in the 1-dimensional case. Conversely, if Arnold takes no   steps forwards or backwards, set k=s and again you get the   1-dimensional case. Thus, the number of possible paths Arnold can   take back to the bar in 2-dimensions is strictly larger than in   1-dimension. However, Arnold can also take paths which move   along both axes. For each k, this sum first counts up the number of ways he can   end up back at his starting point in the left and right   directions. Then, it takes the remaining number of steps and   counts the number of ways he can use those steps to return to the   starting point in the forwards and backwards direction.
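
For small s, it is easy to sanity check this count in Python by brute force, enumerating every length 2 \cdot s path on the 2-dimensional lattice and comparing against the sum in (4); the helper names below are illustrative only.

    from itertools import product
    from math import factorial

    def multinomial(n, *parts):
        out = factorial(n)
        for p in parts:
            out //= factorial(p)
        return out

    def returning_paths_formula(s):
        # Equation (4): sum over the number of left steps k.
        return sum(multinomial(2 * s, k, k, s - k, s - k) for k in range(s + 1))

    def returning_paths_bruteforce(s):
        steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        count = 0
        for path in product(steps, repeat=2 * s):
            if sum(dx for dx, _ in path) == 0 and sum(dy for _, dy in path) == 0:
                count += 1
        return count

    for s in (1, 2, 3):   # brute force is only feasible for tiny s
        print(s, returning_paths_formula(s), returning_paths_bruteforce(s))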

Note that adding a new dimension does add new returning paths, but not nearly as many as it adds paths overall: the Multinomial Theorem says that summing the multinomial coefficients over every possible allocation of n steps among m directions gives the full count m^n of paths, and the returning paths are only those allocations whose steps cancel coordinate by coordinate:

(5)   \begin{align*} m^n \ &= \ \sum_{k_1 + k_2 + \ldots + k_m = n} \ \begin{pmatrix} n \\ k_1, k_2, \ldots, k_m \end{pmatrix} \end{align*}

Meanwhile, each path now only happens with probability 4^{- 2 \cdot s}   in 2 dimensions; in general, the probability of realizing any particular path falls geometrically in the number of available directions 2 \cdot d:

(6)   \begin{align*} \mathtt{Pr[each \ path]}(d) \ &= \ \left(\frac{1}{2 \cdot d}\right)^{2 \cdot s} \end{align*}

Thus, Polya’s Recurrence Theorem stems from the fact that the number of possible paths back to the origin is growing at a rate   that is less than the number of all paths; i.e., the wilderness of   paths that do not loop back to the origin is increasing faster than   the set of paths which do loop back as we add dimensions.

 

Proof

Below, I prove this result 1 dimension at a time:

Proof: (d=1) The probability that Arnold is back at the origin after 2 \cdot s   steps is the number of possible returning paths times the probability that   each one of those paths occurs:

(7)   \begin{align*} p_{2 \cdot s}(1) \ &= \ \left( \frac{1}{2} \right)^{2 \cdot s} \cdot \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \end{align*}

Next, in order to derive an analytical characterization of this   probability, I use Stirling’s approximation to handle the factorial   terms in the binomial coefficient:

(8)   \begin{align*}   s! \ &\approx \ \sqrt{2 \cdot \pi \cdot s} \cdot e^{-s} \cdot s^s \end{align*}

Using this approximation and simplifying, I find that:

(9)   \begin{align*} \begin{split} p_{2 \cdot s}(1) \ &= \ \left( \frac{1}{2} \right)^{2 \cdot s} \cdot \frac{(2 \cdot s)!}{s! \cdot (2 \cdot s - s)!} \\ &\approx \ \frac{1}{(\pi \cdot s)^{1/2}} \end{split} \end{align*}
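
The quality of the Stirling-based approximation is easy to eyeball in Python; p_exact and p_stirling below are just illustrative names:

    from math import comb, pi, sqrt

    def p_exact(s):
        """Exact value of p_{2s}(1) from Equation (7)."""
        return comb(2 * s, s) / 4 ** s

    def p_stirling(s):
        """Stirling-based approximation from Equation (9)."""
        return 1.0 / sqrt(pi * s)

    for s in (1, 10, 100, 1000):
        print(s, p_exact(s), p_stirling(s))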

Thus, if I sum over all possible path lengths, I get the expected number   of times that drunk Arnold will return to the bar for another night   cap. Since the expected number of returns equals p(1)/(1-p(1)), where p(1) is the probability of ever returning, this expectation is infinite exactly when p(1)=1. I find that the sum does indeed diverge, so the 1-dimensional walk is recurrent:

(10)   \begin{align*} \begin{split} \mathbb{E} \left[ \mathtt{\# \ returns} \right] \ &= \ \sum_{s=1}^\infty \ p_{2 \cdot s}(1) \\ &\approx \ \sum_{s=1}^\infty \ \frac{1}{(\pi \cdot s)^{1/2}} \\ &= \ \infty \end{split} \end{align*}

 

Proof: (d=2) Next, I follow all of the same steps through for the d=2   dimensional case:

(11)   \begin{align*} \begin{split} p_{2 \cdot s}(2) \ &= \ \left( \frac{1}{4} \right)^{2 \cdot s} \cdot \sum_{k=0}^s \ \begin{pmatrix} 2 \cdot s \\ k,k,(s-k),(s-k) \end{pmatrix} \\ &= \ \left( \frac{1}{4} \right)^{2 \cdot s} \cdot \sum_{k=0}^s \ \frac{(2 \cdot s)!}{k! \cdot k! \cdot (s - k)! \cdot (s - k)!} \\ &= \ \left( \frac{1}{4} \right)^{2 \cdot s} \cdot \sum_{k=0}^s \ \frac{(2 \cdot s)!}{s! \cdot s!} \cdot \frac{s! \cdot s!}{k! \cdot k! \cdot (s - k)! \cdot (s - k)!} \\ &= \ \left( \frac{1}{4} \right)^{2 \cdot s} \cdot \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \cdot \sum_{k=0}^s \ \begin{pmatrix} s \\ k \end{pmatrix}^2 \\ &= \ \left[ \left( \frac{1}{2} \right)^{2 \cdot s} \cdot \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \right]^2 \\ &= \ \left[ p_{2 \cdot s}(1) \right]^2 \end{split} \end{align*}

where the second-to-last step uses Vandermonde’s identity, \sum_{k=0}^s \begin{pmatrix} s \\ k \end{pmatrix}^2 = \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix}. Summing over all possible path lengths again yields a divergent series, so the 2-dimensional walk is recurrent as well:

(12)   \begin{align*} \begin{split} \mathbb{E} \left[ \mathtt{\# \ returns} \right] \ &= \ \sum_{s=1}^\infty \ p_{2 \cdot s}(2) \\ &\approx \ \sum_{s=1}^\infty \ \frac{1}{\pi \cdot s} \\ &= \ \infty \end{split} \end{align*}
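
A quick Python check confirms both the collapse of the double sum in (11), p_{2 \cdot s}(2) = \left[ p_{2 \cdot s}(1) \right]^2, and the resulting 1/(\pi \cdot s) decay rate; the helper names are illustrative.

    from math import comb, factorial

    def p_2d_direct(s):
        """Equation (11), summed term by term over k."""
        total = 0
        for k in range(s + 1):
            total += factorial(2 * s) // (
                factorial(k) ** 2 * factorial(s - k) ** 2)
        return total / 4 ** (2 * s)

    def p_1d(s):
        return comb(2 * s, s) / 4 ** s

    for s in (1, 5, 20):
        print(s, p_2d_direct(s), p_1d(s) ** 2)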

 

Proof: (d=3) The result for d=3 is a bit more complicated as there isn’t a   nice closed form expression for each of the p_{2 \cdot s}(3)   terms. I start by simplifying as far as I can:

(13)   \begin{align*} \begin{split} p_{2 \cdot s}(3) \ &= \ \left( \frac{1}{6} \right)^{2 \cdot s} \cdot \sum_{j,k \mid j+k \leq s} \ \begin{pmatrix} 2 \cdot s \\ k,k, j,j,  (s-k-j),(s-k-j) \end{pmatrix} \\ &= \ \left( \frac{1}{6} \right)^{2 \cdot s} \cdot \sum_{j,k \mid j+k \leq s} \ \frac{(2 \cdot s)!}{k! \cdot k! \cdot j! \cdot j! \cdot (s-j-k)! \cdot (s-j-k)!} \\ &= \ \left( \frac{1}{2} \right)^{2 \cdot s} \cdot \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \cdot \sum_{j,k \mid j+k \leq s} \ \left( \frac{1}{3^s} \cdot \frac{s!}{k! \cdot j! \cdot (s-j-k)!} \right)^2 \end{split} \end{align*}

Next, I note that the trinomial coefficient \frac{s!}{k! \cdot j! \cdot (s-j-k)!} is maximized when j=k=s/3. Bounding one factor of each squared term by this maximum, and then applying the Multinomial Theorem to sum the remaining factors \frac{1}{3^s} \cdot \frac{s!}{k! \cdot j! \cdot (s-j-k)!} to 1 over all j and k, I get an upper bound on the probability p_{2 \cdot s}(3):

(14)   \begin{align*} \begin{split} p_{2 \cdot s}(3) \ &\leq \ \left( \frac{1}{2} \right)^{2 \cdot s} \cdot \begin{pmatrix} 2 \cdot s \\ s \end{pmatrix} \cdot \left( \frac{1}{3^s} \cdot \frac{s!}{\left[ \left( \frac{s}{3} \right)! \right]^3} \right) \\ &\leq \ \frac{C}{(\pi \cdot s)^{3/2}} \end{split} \end{align*}

Summing over all possible path lengths now gives a convergent series, so the expected number of returns is finite and hence p(3) < 1; in other words, Arnold may have a final drink at some point during the evening and wander away from the bar for good:

(15)   \begin{align*} \begin{split} \mathbb{E} \left[ \mathtt{\# \ returns} \right] \ &= \ \sum_{s=1}^\infty \ p_{2 \cdot s}(3) \\ &< \ \infty \end{split} \end{align*}
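
To see all three cases side by side, here is a Python sketch that computes the exact p_{2 \cdot s}(d) terms and their partial sums up to a few cutoffs; the cutoffs and helper names are illustrative, and the exact double sum makes the d=3 column slow for large cutoffs. The d=1 partial sums keep growing, the d=2 partial sums grow only logarithmically, and the d=3 partial sums level off.

    from math import comb, factorial

    def p_1d(s):
        return comb(2 * s, s) / 4 ** s

    def p_2d(s):
        return p_1d(s) ** 2

    def p_3d(s):
        # Exact double sum over j, k from Equation (13).
        total = 0
        for k in range(s + 1):
            for j in range(s + 1 - k):
                total += factorial(2 * s) // (
                    factorial(k) ** 2 * factorial(j) ** 2
                    * factorial(s - j - k) ** 2)
        return total / 6 ** (2 * s)

    for cutoff in (5, 25, 100):
        print(cutoff,
              round(sum(p_1d(s) for s in range(1, cutoff + 1)), 3),
              round(sum(p_2d(s) for s in range(1, cutoff + 1)), 3),
              round(sum(p_3d(s) for s in range(1, cutoff + 1)), 3))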

 

Extension to Brownian Motion

Below, I define Brownian motion in d>1 dimensions and then show  how to extend the results of Polya’s Recurrence Theorem from  random walks on a lattice to continuous time Brownian  motion.

Brownian motion for d>1 dimensions is a natural extension of the  d=1 dimensional case. I give the formal definition below:

Definition: (Multi-Dimensional Brownian Motion) Brownian motion in \mathbb{R}^d is the vector valued process whose d coordinates are independent 1-dimensional Brownian motions:

(16)   \begin{align*} \mathbf{B}(t) \ &= \ \begin{bmatrix} B_1(t) & B_2(t) & \ldots & B_d(t) \end{bmatrix} \end{align*}
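
Since each coordinate of \mathbf{B}(t) is just an independent 1-dimensional Brownian motion, simulating the process on a time grid only requires adding an independent Gaussian increment with variance equal to the time step to every coordinate. A minimal Python sketch along these lines (simulate_brownian is an illustrative name):

    import random

    def simulate_brownian(d, num_steps, dt, seed=0):
        """Simulate d-dimensional Brownian motion on a time grid: each coordinate
        receives its own independent Gaussian increment with mean 0 and variance dt."""
        rng = random.Random(seed)
        path = [[0.0] * d]
        for _ in range(num_steps):
            prev = path[-1]
            path.append([x + rng.gauss(0.0, dt ** 0.5) for x in prev])
        return path

    path = simulate_brownian(d=3, num_steps=1000, dt=0.01)
    print(path[-1])   # position of B(10) for this sample path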

To extend Polya’s Recurrence Theorem to continuous time Brownian  motion, I just need to apply the Central Limit Theorem and then  construct the Brownian motion from the resulting independent  Gaussian increments:

Theorem: (deMoivre-Laplace) Let k_s be the number of successes in s independent draws, each succeeding with probability \pi, so that k_s follows a \mathtt{Bin}(s,\pi) distribution with \mathbb{E}(k_s) = s \cdot \pi. Then, we can approximate the binomial distribution with a Gaussian distribution, with the approximation becoming exact as s \to \infty:

(17)   \begin{align*} \mathtt{Bin}(s,\pi) \ &\sim \ \mathtt{Norm}\left(s \cdot \pi, \sqrt{s \cdot \pi \cdot (1-\pi)}\right) \end{align*}
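
Numerically, the approximation is already quite sharp for moderate s. A short Python comparison of the exact binomial probabilities with the Gaussian density near the mean (the helper names are illustrative):

    from math import comb, exp, pi, sqrt

    def binom_pmf(s, p, k):
        return comb(s, k) * p ** k * (1 - p) ** (s - k)

    def normal_pdf(x, mean, sd):
        return exp(-(x - mean) ** 2 / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

    s, p = 1000, 0.5
    mean, sd = s * p, sqrt(s * p * (1 - p))
    for k in (480, 500, 520):
        print(k, binom_pmf(s, p, k), normal_pdf(k, mean, sd))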

Lemma: (Levy’s Selector) Suppose that s<t and that X(s) and X(t) are random variables  defined on the same sample space such that X(t) - X(s) has a  \mathtt{Norm}(0,t-s) distribution, where the second argument denotes the variance. Then there exists a  random variable X(\frac{t+s}{2}) such that X(\frac{t+s}{2}) -  X(s) and X(t) - X(\frac{t+s}{2}) are independent with a common  \mathtt{Norm}(0,\frac{t-s}{2}) distribution.
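
In practice, this lemma is what lets me fill in a Brownian path at finer and finer dyadic times: the midpoint value is the average of the two endpoints plus an independent Gaussian with variance (t-s)/4, which makes the two half-increments independent with variance (t-s)/2 each. A minimal Python sketch of one refinement step (insert_midpoint is an illustrative name):

    import random

    rng = random.Random(0)

    def insert_midpoint(x_s, x_t, s, t):
        """Sample X((s + t) / 2) given the endpoints: the average of the endpoints
        plus an independent Gaussian with variance (t - s) / 4."""
        return 0.5 * (x_s + x_t) + rng.gauss(0.0, ((t - s) / 4.0) ** 0.5)

    # Start from X(0) = 0 and X(1) ~ Norm(0, 1), then refine dyadically.
    x0, x1 = 0.0, rng.gauss(0.0, 1.0)
    x_half = insert_midpoint(x0, x1, 0.0, 1.0)
    x_quarter = insert_midpoint(x0, x_half, 0.0, 0.5)
    print(x_quarter, x_half, x1)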

  1. Sanity Check: Why 2 \cdot s and not just s here?
