\documentstyle[12pt,twoside,alltt]{article}
\textwidth=6in
\oddsidemargin=0.25in
\evensidemargin=0.25in
\topmargin=-0.1in
\footskip=0.8in
\parindent=0.0cm
\parskip=0.3cm
\textheight=8.00in
\setcounter{tocdepth} {3}
\setcounter{secnumdepth} {2}
\sloppy

\newenvironment{proof}{\noindent{\bf Proof:}}{\qed\bigskip}
\newenvironment{proof_sketch}{\noindent{\bf Sketch of 
Proof}\hspace*{1em}}{\qed\bigskip}

\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{claim}{Claim}
\newtheorem{fact}{Fact}
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
\newtheorem{observation}{Observation}
\newtheorem{example}{Example}
\newcommand{\qed}{\rule{7pt}{7pt}}

\newcommand{\lecture}[4]{
\pagestyle{headings}
\thispagestyle{plain}
\newpage
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.75in { {\bf 6.893 Randomness in Computation
\hfill Spring Term, 1996}  }
\vspace{4mm}
\hbox to 5.75in { {\Large \hfill Lecture #1 : #2 \hfill} }
\vspace{2mm}
\hbox to 5.75in { {\it Lecturer: #3 \hfill Scribe: #4} }
}
}
\end{center}
\markboth{Lecture #1 : #2}{Lecture #1 : #2}
\vspace*{4mm}
}

\begin{document}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Lecture command takes 4 arguments:
%		ordinal number of the lecture
%		date of the lecture
%		lecturer
%		scribe---that is you
%
\lecture{3}{March 5, 1996}{Ronitt Rubinfeld}{Yoav Yerushalmi} 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% For theorems, lemmas, definitions, remarks, etc. use commands
% \begin{theorem} .... \end{theorem} 
% \begin{definition} ... \end{definition} 
% \begin{proof} ... \end{proof}
% and so on ...
%
% If you want to insert figures in your notes do the following: 
% 1) Prepare your figures using your favourite graphic editor
% 	for example idraw or xfig
% 2) Save your pictures in PostScript form using the following convention
% 	notes-nn-fig-mm.ps where nn is the number of the lecture you 
% 	are scribing and mm is the sequential number of the picture in the 
% 	notes
% 3) Insert the following block in your LaTeX code in the place you 
% 	want to insert the figure BUT LEAVE IT AS A COMMENT (with the %
% 	at the beginning of the line) since it will not work on athena: 
%
% \begin{figure}[h]
% \begin{center}
% \mbox{\psfig{figure=notes-nn-fig-mm.ps}}
% \caption{A very nice picture.}
% \label{fig:mm}
% \end{center}
% \end{figure}
%
% 	of course the picture will not show up when you compile and 
% 	print your notes, but don't worry the TA will take care of it. 
% 4) When you mail the TA the LaTeX code mail him/her the PostScript
%	files for the figures as well 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Your notes start here!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section {Outline}

This lecture covers the idea of {\em Random Walks}, and how to
couple random walks. Coupling is a technique for bounding the time it
takes a random walk to reach its stationary distribution. This is
useful for uniform random generation as well as for counting.

\section {Introductory Example}

\begin{example} The Amazing Ronitt performs a magic trick!!!

Take a deck containing only the cards ace (counted as 1) through five.
Think of a random $i \in [1..5]$.

Repeat
\begin{itemize}
\item count off $i$ cards, where each card has a number $\in [1..5]$.
\item new $i \longleftarrow$ number on the $i^{th}$ card.
\end{itemize}
until the deck runs out. Then output the current $i$.
\end{example}

It turns out that although this process seems completely random, many
paths through the deck converge, so for a particular shuffle one final
card can be much more likely to be the result than any other. Consider
the deck:

\begin{center}
3 A 4 5 5 4 4 3 3 3 2 A 5 2 A 5 2 A 2 4
\end{center}

Here the last ``2'' is more likely to be the result than any other
card. We shall now see why, and how we can use this kind of
convergence to our advantage.
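The convergence is easy to check directly. The following sketch (ours, not part of the original notes) follows the counting rule on the deck shown, from every possible secret starting value, interpreting ``count off $i$ cards'' as jumping ahead $i$ positions and counting aces as 1:

```python
# The deck from the text, with aces written as 1.
deck = [3, 1, 4, 5, 5, 4, 4, 3, 3, 3, 2, 1, 5, 2, 1, 5, 2, 1, 2, 4]

def final_position(start):
    """0-indexed position of the last card landed on, starting from
    the secret value `start` (land on the start-th card, then repeatedly
    jump ahead by the value of the current card until the deck runs out)."""
    pos = start - 1
    while pos + deck[pos] < len(deck):
        pos += deck[pos]
    return pos

endings = {i: final_position(i) for i in range(1, 6)}
print(endings)  # -> {1: 18, 2: 18, 3: 18, 4: 18, 5: 18}
```

All five starting values land on position 18 (0-indexed), i.e.\ the 19th card, which is the last ``2'' — exactly the convergence the trick exploits on this shuffle.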

\section{Random Walkers}

\begin{definition} A {\bf random walk} starts at some state and
repeatedly moves to a random next state chosen according to fixed
transition probabilities; we run it until its distribution over states
is (close to) stable.
\end{definition}

\subsection{Reaching the Stationary Distribution}
A few symbols to define:
\begin{itemize}
\item $\tilde{\pi}$ is the stationary distribution.
\item $\pi^t = \pi^0 P^t = (\ldots, \pi^t(j), \ldots)$ is the
distribution at time $t$, where $P$ is the transition matrix of the
walk and $\pi^t(j)$ is the probability of being at node $j$ at time
$t$.
\item $\tilde{\pi} = \mathop{\lim}\limits_{t \rightarrow \infty} \pi^t$
\end{itemize}

For most purposes, however, we don't really want to walk forever to
get to $\tilde{\pi}$; all that matters is getting ``very close'' to
it. Before we can do that, we need to say what ``close'' means. To do
this, we define the total variation distance, and from it the mixing
time of a Markov chain.

\begin{definition} {\bf Total Variation Distance} is the distance
between distributions $D_1$ and $D_2$, computed in the following
manner:
\[ |D_1, D_2|_1 = \sum_{i \in V} |D_1(i) - D_2(i)| \]
\end{definition}

\begin{definition} {\bf Mixing Time} is a measure of how long it takes
a walk to get ``close'' to the stationary distribution:
\[ T(\epsilon) = \min \{\, t : \forall \pi^0,\ |\tilde{\pi}, \pi^t|_1 < \epsilon \,\} \]
\end{definition}

\begin {definition} A Markov chain is {\bf rapidly mixing} if, for
some $t_0$ that is polynomial in $\log|V|$ and $\log(1/\epsilon)$,
\[ \forall \pi^0,\ \forall t > t_0 : |\tilde{\pi}, \pi^t|_1 < \epsilon \]
\end {definition}

An example of a rapidly mixing chain is a random walk on a complete
graph with self-loops.

An example of a chain that is not rapidly mixing is a random walk on a
long cycle.
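These two cases can be illustrated numerically. The sketch below (an illustration of ours, not from the lecture; the graph size $n = 16$, the laziness probability $1/2$, and the step count are arbitrary choices) iterates the distribution of a lazy walk and prints its total variation distance $|\tilde{\pi}, \pi^t|_1$ from the uniform stationary distribution:

```python
# Compare how fast a lazy random walk (stay put w.p. 1/2, else move to a
# uniformly random neighbor) approaches the uniform stationary distribution
# on two regular graphs with n = 16 vertices.
n = 16

def step(dist, neighbors):
    """Advance the walk's distribution by one step of the lazy walk."""
    new = [0.0] * n
    for v, p in enumerate(dist):
        new[v] += p / 2                       # stay put with probability 1/2
        for u in neighbors[v]:
            new[u] += p / (2 * len(neighbors[v]))  # else move to a neighbor
    return new

def tv_after(neighbors, t):
    """|pi~, pi^t|_1 after t steps, starting concentrated at vertex 0."""
    dist = [1.0] + [0.0] * (n - 1)
    for _ in range(t):
        dist = step(dist, neighbors)
    return sum(abs(p - 1.0 / n) for p in dist)

complete = [[u for u in range(n) if u != v] for v in range(n)]  # complete graph
cycle = [[(v - 1) % n, (v + 1) % n] for v in range(n)]          # n-cycle

print("complete:", tv_after(complete, 10))   # essentially uniform already
print("cycle:   ", tv_after(cycle, 10))      # still far from uniform
```

After only ten steps the walk on the complete graph is essentially at the uniform distribution, while the walk on the cycle is still far from it.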


\subsection {Stationary Distribution Observations}
\begin {observation} 
A random walker at the stationary distribution remains at the
stationary distribution:
\[ \tilde{\pi}P^t = (\tilde{\pi}P)P^{t-1} = \tilde{\pi}P^{t-1} = \cdots = \tilde{\pi} \]
\end {observation}
\begin {observation} If we start at an arbitrary state and walk until
we meet a random walker that is already at the stationary
distribution, then from that point on we too are at the stationary
distribution. The two walkers can't follow or aim for each other, but
we are allowed to coordinate the two walks.
\end {observation}

\section {The Coupling Argument}

\begin {definition}
{\bf coupling time} is the expected time it takes to bring two walkers
together (assuming coordination).
\end {definition}

\begin {example} Example: a $d$-dimensional hypercube with self-loops.

Let $H$ be the hypercube, with $n = 2^d$ nodes, and let
\[
P_{ij} = \left\{
\begin{array}{l}
\frac{1}{2d} \mbox{ if } (i,j) \in H \\
\frac{1}{2} \mbox{ if } i=j \\
0 \mbox{ otherwise }
\end{array} \right . \]

Walk according to $P$ for some number of steps (to be calculated
later). Write the two walkers' positions as
\[ X = \langle X_1, X_2, \ldots , X_d \rangle \qquad
   Y = \langle Y_1, Y_2, \ldots , Y_d \rangle \]
We use the following coordinator's algorithm:
\begin {itemize}

\item pick a direction uniformly at random: $i \in_{\cal R}
\{1..d\}$

\item if $X_i = Y_i$, then flip coin 
\begin {itemize}
\item Heads: $X$,$Y$ move along $i$.
\item Tails: $X$, $Y$ stay in place.
\end {itemize}
Note that $X$ and $Y$ are always the same distance apart.

\item if $X_i \neq Y_i$, then flip coin
\begin {itemize}
\item Heads: $X$ moves along $i$, $Y$ stays in place
\item Tails: $X$ stays in place, $Y$ moves along $i$
\end {itemize}
Note that $X$ and $Y$ are getting closer to each other.
\end {itemize}

\begin {enumerate}
\item Each of $X$ and $Y$, viewed on its own, moves according to the
correct transition probabilities $P$.
\item Once direction $i$ is chosen, $X_i = Y_i$ for all following
steps. Therefore, $X$ and $Y$ are coupled once every direction has
been chosen at least once. By the coupon collector's bound, this takes
$d \ln d$ steps on average.
\end{enumerate}
We have thus found a way to make two walkers meet in a short number of
steps, while each walker on its own still moves randomly.
\end {example}
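The coordinator's algorithm above is easy to simulate. Below is an illustrative sketch (ours, not part of the notes; the dimension $d = 8$, the random starting corners, and the number of trials are arbitrary choices) that runs the coupling until $X = Y$ and averages the coupling time:

```python
import random

def coupling_time(d, rng):
    """Steps until the two walkers coincide, from random starting corners.
    Each walker's marginal is the lazy hypercube walk: w.p. 1/2 stay put,
    w.p. 1/(2d) flip any given coordinate."""
    x = [rng.randrange(2) for _ in range(d)]
    y = [rng.randrange(2) for _ in range(d)]
    t = 0
    while x != y:
        t += 1
        i = rng.randrange(d)          # pick a direction uniformly at random
        heads = rng.random() < 0.5    # the coordinator's coin flip
        if x[i] == y[i]:
            if heads:                 # heads: both walkers move along i
                x[i] ^= 1
                y[i] ^= 1
            # tails: both stay put (they remain the same distance apart)
        else:
            if heads:                 # exactly one walker moves, so
                x[i] ^= 1             # X_i = Y_i from now on
            else:
                y[i] ^= 1
    return t

rng = random.Random(0)
d = 8
avg = sum(coupling_time(d, rng) for _ in range(2000)) / 2000
print(avg)   # on the order of d ln d (about 16.6 for d = 8)
```

The average agrees with the coupon collector's bound: a coordinate stays matched forever once it has been selected, so coupling takes roughly $d \ln d$ steps.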

\begin {theorem}[coupling bounds the mixing rate] Let
\begin {itemize}
\item $T$ be the expected coupling time,
\item $\pi^t$ be the distribution at time $t$, and
\item $\tilde{\pi}$ be the stationary distribution.
\end {itemize}
Then \[ \frac{1}{2}|\pi^t, \tilde{\pi}|_1 \leq \frac{T}{t} \]
\end {theorem}

\begin {lemma} Let
\begin {itemize}
\item $X$, $Y$ be distributions over $S$ (these distributions may be
correlated if so desired), and
\item $x$ and $y$ be drawn from $X$ and $Y$ respectively.
\end {itemize}
Then
\[ \Pr[x \neq y ] \geq \frac{1}{2}|X, Y|_1 \]
\end {lemma}

\begin {proof} (of lemma)
For all $a \in S$, define $P(a) \equiv \Pr[x = y = a]$.  Note that
$P(a) \leq X(a)$ and $P(a) \leq Y(a)$, so every term below is
nonnegative. Since $\sum_a X(a) = \sum_a Y(a) = 1$,
\[ \Pr[x \neq y ] = 1 - \sum_{a \in S}{P(a)} =
\frac{1}{2}[(\sum_a{X(a) - P(a)}) + (\sum_a{Y(a) - P(a)})] \]
\[ = \frac{1}{2} [ \sum_a {|X(a) - P(a)|} + \sum_a{|P(a) - Y(a)|}] \]
\[ \geq \frac{1}{2} \sum_a{|X(a) - Y(a)|} = \frac{1}{2} |X, Y|_1 \]
where the last inequality is the triangle inequality.
\end {proof}

\begin {proof} (of theorem)
\[ \frac{1}{2}|\pi^t, \tilde{\pi}|_1 \leq \Pr_{x \in \pi^t, y \in \tilde{\pi}} [
x \neq y ] = \Pr[ \mbox{not coupled at time } t] \leq \frac{T}{t}. \]
The first inequality is the lemma applied to $\pi^t$ and
$\tilde{\pi}$; the last step is Markov's inequality applied to the
coupling time. To get discrepancy $\frac{1}{2}|\pi^t, \tilde{\pi}|_1
\leq \epsilon$, simply walk $t = \frac{T}{\epsilon}$ steps.
\end {proof}
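As a sanity check (ours, not the notes'), the bound in the lemma is tight: the coupling that puts probability $\min(X(a), Y(a))$ on each event $x = y = a$ achieves $\Pr[x \neq y] = \frac{1}{2}|X, Y|_1$ exactly. A small numeric check, with two arbitrary example distributions over a 4-element set:

```python
# Two example distributions over S = {0, 1, 2, 3}.
X = [0.5, 0.3, 0.2, 0.0]
Y = [0.25, 0.25, 0.25, 0.25]

# For the coupling with P(a) = min(X(a), Y(a)):
# Pr[x != y] = 1 - sum_a P(a), which should equal (1/2) |X, Y|_1.
pr_neq = 1 - sum(min(xa, ya) for xa, ya in zip(X, Y))
half_l1 = 0.5 * sum(abs(xa - ya) for xa, ya in zip(X, Y))
print(pr_neq, half_l1)   # the two quantities agree (0.3 each)
```

So no coupling can bring $x$ and $y$ together with probability more than $1 - \frac{1}{2}|X, Y|_1$, and some coupling achieves it.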

\begin {example} Generating random spanning trees of an undirected
graph $G$.

We need to think of this problem in terms of Markov chains: a state of
the chain is a pair (spanning tree, root), where the edges of the tree
are directed towards the root.

Repeat:

\begin {enumerate}
\item choose a random edge of $G$ out of the root $x$, leading to some
node $y$.
\item add this edge to the spanning tree. This creates a cycle.
\item delete the edge coming out of $y$. Now $y$ becomes the new root,
and the cycle is broken.
\end {enumerate}

Note that this chain is a walk on a directed graph, so our formulas
for walks on undirected graphs do not directly apply. Choosing an edge
that is already in the tree simply reverses the direction of that
edge. The out-degree of a state of the Markov chain equals its
in-degree, and both equal the degree in $G$ of the root node (since
$G$ is undirected). This implies that the stationary probability of a
state is proportional to the degree of its root:
\[ \tilde{\pi}(T, r) = \frac {\deg(r)}{Z}, \]
where $Z$ is a normalizing constant. Therefore
\[ \Pr[\mbox{ generate } T ] = \sum_{r \in V} \Pr [ \mbox{gen}(T,r)] =
\sum_{r \in V} \frac {\deg(r)}{Z}
= \frac{2 \cdot \mbox{number of edges of } G}{Z}, \]
which is the same for every spanning tree $T$, so at stationarity
every spanning tree is equally likely.


Coordinator Strategy:
\begin {enumerate}
\item run the two processes independently until the roots collide.
\item after (1), force the roots to move identically.
\end {enumerate}
Then
\[ \mbox{coupling time} = E[\mbox{time for roots to collide}] +
E[\mbox{time to couple} \mid \mbox{identical roots}] \]

A vertex's outgoing edge changes iff that vertex is currently the
root. Once the roots are identical, whenever an edge is chosen the two
walkers give their common root the same outgoing edge. Therefore,
after (1), once a vertex has been the root, its outgoing edge is the
same in both walkers.

Once all vertices have been visited, the walkers are coupled, and so
\[ E[\mbox{time to couple} \mid \mbox{identical roots}] \leq
E[\mbox{cover time of } G] \]
\end {example}
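The chain itself is easy to run. The following sketch uses our own made-up test graph and representation (a parent-pointer dictionary over the non-root vertices; the graph and step count are arbitrary choices, not from the notes); the three commented lines are steps 1--3 of the loop above:

```python
import random

# Hypothetical test graph: the 4-cycle 0-1-2-3 plus the chord 0-2,
# as an adjacency list. This graph has 8 spanning trees.
G = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}

def step(parent, root, rng):
    """One move of the chain on (spanning tree, root) states."""
    y = rng.choice(G[root])   # 1. random edge of G out of the root
    parent[root] = y          # 2. add it to the tree (creates a cycle)
    del parent[y]             # 3. delete y's outgoing edge; y is the new root
    return parent, y

rng = random.Random(1)
parent, root = {1: 0, 2: 0, 3: 0}, 0     # start from the star rooted at 0
seen = set()
for _ in range(20000):
    parent, root = step(parent, root, rng)
    # record the undirected spanning tree, forgetting root and orientation
    seen.add(frozenset((min(v, p), max(v, p)) for v, p in parent.items()))
print(len(seen))   # number of distinct spanning trees visited
```

Note that when the chosen edge $(x, y)$ is already in the tree (i.e.\ $y$'s parent is the root $x$), the update simply reverses that edge, as observed above.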

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Your notes end here!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
