# Random Walk on Cube

A particle performs a random walk on the eight corners of a unit cube. At each step it can either remain where it is with probability $1/2$ or it can move to one of its three nearest neighbors, each with probability $1/6$. Let $u$ and $v$ be opposite corners of the cube (so $|u-v|=\sqrt3$), and suppose the walk starts at $u$. Find (a) the expected number of steps until its first return to $u$, and (b) the expected number of steps until its first visit to $v$.

1.) This is a Markov process, so we can use the theory of Markov chains to solve the problem. First let's simplify: we don't care about most of the geometric details of the cube, so we'd like an equivalent process with fewer states in which to do our computations. Recall that the unit cube has vertices $\sum_{i = 1}^3 \delta_i e_i$ with $\delta_i \in \{0, 1\}$; this gives a bijection between the vertices and the binary sequences $(\delta_1, \delta_2, \delta_3)$. Two vertices are joined by an edge exactly when their sequences differ in a single digit. Define $f(\delta_1, \delta_2, \delta_3) = \sum_i \delta_i$, the number of ones. From a vertex with $k$ ones the walk stays put with probability $1/2$, moves to a vertex with $k - 1$ ones with probability $k/6$, and moves to a vertex with $k + 1$ ones with probability $(3 - k)/6$; these probabilities depend only on $k$, so pushing the process forward along $f$ yields a Markov chain on the four states $\{0, 1, 2, 3\}$. Its transition matrix, with the convention that the $(i, j)$ entry is the probability of moving from state $j$ to state $i$, is:

$$T = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & 0 \\\\ 0 & \frac{1}{3} & \frac{1}{2} & \frac{1}{2} \\\\ 0 & 0 & \frac{1}{6} & \frac{1}{2} \end{pmatrix}$$
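As a sanity check, the lumping argument can be verified mechanically: building the full 8-state walk on the cube and pushing it forward along $f$ should reproduce this matrix. A minimal Python sketch (the helper name `step_probs` is ours; exact arithmetic via `fractions` avoids floating-point noise):

```python
from fractions import Fraction
from itertools import product

# Full 8-state walk on the cube: stay with prob 1/2,
# move to each of the 3 neighbors (differ in one bit) with prob 1/6.
vertices = list(product((0, 1), repeat=3))

def step_probs(v):
    probs = {v: Fraction(1, 2)}
    for i in range(3):
        w = list(v)
        w[i] ^= 1
        probs[tuple(w)] = Fraction(1, 6)
    return probs

# Push forward along f(v) = number of ones, checking the pushforward is
# well-defined: every vertex with k ones must give the same lumped row.
lumped = {}
for v in vertices:
    k = sum(v)
    row = {}
    for w, p in step_probs(v).items():
        row[sum(w)] = row.get(sum(w), Fraction(0)) + p
    assert lumped.setdefault(k, row) == row  # lumpability holds

# Column j of T holds the transition probabilities out of state j.
T = [[lumped[j].get(i, Fraction(0)) for j in range(4)] for i in range(4)]
print(T[1][0], T[0][1], T[2][1])  # prints: 1/2 1/6 1/3
```

The assertion inside the loop is the lumpability condition that justifies collapsing eight states to four.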

Because the states $0$ and $3$ each correspond to a single vertex of the cube ($u$ and $v$ respectively), all the data we are interested in can be computed equally well in this smaller chain. One easily checks that the vector $\pi = (1, 3, 3, 1)^T$ satisfies $T\pi = \pi$, so $\frac{1}{8} \pi$ is the stationary distribution of $T$. By the fundamental theorem of Markov chains (Kac's formula), the expected first return time to a state is the reciprocal of its stationary probability. The first component of $\frac{1}{8} \pi$ is $\frac{1}{8}$, so the expected number of steps until the first return to $u$ is $8$.
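Both the stationary-distribution claim and the return-time computation can be checked exactly; a short sketch, assuming the column-vector convention used for $T$ above:

```python
from fractions import Fraction

# Lumped transition matrix from the text; columns index the current state.
half, sixth, third = Fraction(1, 2), Fraction(1, 6), Fraction(1, 3)
T = [[half, sixth, 0,     0],
     [half, half,  third, 0],
     [0,    third, half,  half],
     [0,    0,     sixth, half]]

# Candidate stationary distribution (1, 3, 3, 1) / 8.
pi = [Fraction(1, 8), Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)]

# Verify T pi = pi.
Tpi = [sum(T[i][j] * pi[j] for j in range(4)) for i in range(4)]
assert Tpi == pi

# Kac's formula: expected return time = 1 / (stationary mass of the state).
print(1 / pi[0])  # prints: 8
```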

2.) For the second part of the question it again pays to simplify. Replace $T$ with the matrix
$$Q = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & 0 \\\\ 0 & \frac{1}{3} & \frac{1}{2} & 0 \\\\ 0 & 0 & \frac{1}{6} & 1 \end{pmatrix}.$$
This is the same chain except that state $3$, corresponding to the vertex $v = (1, 1, 1)$, is now absorbing: once a walk reaches it, it stays there forever. For computing when a walk first reaches $v$ this absorbing chain is simpler and equally useful. Because paths never return from state $3$, the behavior of walks before absorption is governed by the upper-left block
$$Q' = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{3} \\\\ 0 & \frac{1}{3} & \frac{1}{2} \end{pmatrix}.$$

We call $N = (\mathrm{Id} - Q')^{-1} = \sum_{n = 0}^\infty (Q')^n$ the fundamental matrix of the absorbing chain $Q$ (the series converges because the transient block $Q'$ has spectral radius less than $1$). Writing $n_{j, i} = e_j^T \cdot N \cdot e_i$, we have
$$n_{j, i} = \sum_{n = 0}^\infty e_j^T \cdot (Q')^n \cdot e_i,$$
where
$$e_j^T \cdot (Q')^n \cdot e_i = P(\text{a walk started at state } i \text{ is in state } j \text{ after } n \text{ steps, not yet absorbed}),$$
so $n_{j, i}$ is the expected number of times a walk started at $i$ visits state $j$ before absorption. Summing over $j$, we see that $\sum_j n_{j, 0}$ gives the expected number of steps a walk spends in the transient states $\{0, 1, 2\}$ before being absorbed at state $3$, which is exactly what we want to compute. Now we compute
$$N = \begin{pmatrix} 5 & 3 & 2 \\\\ 9 & 9 & 6 \\\\ 6 & 6 & 6 \end{pmatrix},$$
and summing the first column gives $5 + 9 + 6 = 20$, the expected number of steps for a walk started at $u$ to first reach $v$.
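The fundamental matrix and the final sum can be reproduced with exact arithmetic; a sketch using a small Gauss-Jordan inversion over `fractions.Fraction` (the elimination code is our own helper, not a library routine):

```python
from fractions import Fraction

# Transient block Q' of the absorbing chain (state 3 made absorbing).
half, sixth, third = Fraction(1, 2), Fraction(1, 6), Fraction(1, 3)
Qp = [[half, sixth, 0],
      [half, half,  third],
      [0,    third, half]]

# Fundamental matrix N = (I - Q')^{-1}, via Gauss-Jordan on (I - Q' | I).
n = 3
M = [[(1 if i == j else 0) - Qp[i][j] for j in range(n)]
     + [Fraction(i == j) for j in range(n)] for i in range(n)]
for col in range(n):
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    pivval = M[col][col]
    M[col] = [x / pivval for x in M[col]]
    for r in range(n):
        if r != col and M[r][col] != 0:
            factor = M[r][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
N = [row[n:] for row in M]

print([[int(x) for x in row] for row in N])  # [[5, 3, 2], [9, 9, 6], [6, 6, 6]]
print(sum(N[j][0] for j in range(n)))        # expected hitting time of v: 20
```

Summing column $i$ of $N$ gives the expected absorption time from any start state, so the same code also yields the hitting times from the intermediate states $1$ and $2$.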
