Introduction
In this post I will present the details of the resampling filter introduced in my previous post. More specifically, I will describe Gibbs sampling algorithm 3 as it is described in Radford M. Neal's paper. This is the basic algorithm that was used in the previous post to augment the resampling process of the FastSLAM algorithm. In Neal's paper the information is presented in a neat but also very abstract manner, so it is difficult for an uninitiated reader to follow all the details. Therefore, this post will also draw elements from this paper by Herman Kamper as well as this one by Xiaodong Yu. In the rest of the post I will first describe the algorithm as presented by Neal and then discuss its details more explicitly. Finally, I will show how these are applied to the aforementioned filter. Again, some background on DPMMs and MCMC methods is assumed.
Initial description of the collapsed CRP algorithm
The description of the algorithm taken as is from Neal's paper is the following:
Finally, in a conjugate context, we can often integrate analytically over the \(\begin{equation} \phi_{c} \end{equation}\), eliminating them from the algorithm. The state of the Markov chain then consists only of the \(\begin{equation} c_{i} \end{equation}\), which we update by Gibbs sampling using the following conditional probabilities:
\begin{cases}
P(c_i=c \mid c_{-i},y_i)= b\,\dfrac{n_{-i,c}}{n-1+a}\displaystyle\int F(y_i,\phi)\,dH_{-i,c}(\phi) & \text{if } c=c_j \text{ for some } j\neq i \\[1ex]
P(c_i\neq c_j \text{ for all } j\neq i \mid c_{-i},y_i)= b\,\dfrac{a}{n-1+a}\displaystyle\int F(y_i,\phi)\,dG_0(\phi) &
\end{cases}
Here, \(\begin{equation} H_{-i,c} \end{equation}\) is the posterior distribution of \(\begin{equation} \phi \end{equation}\) based on the prior \(\begin{equation} G_0 \end{equation}\) and all observations \(\begin{equation} y_j \end{equation}\) for which \(\begin{equation} j \neq i \end{equation}\) and \(\begin{equation} c_j =c \end{equation}\).
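To see what these integrals look like in practice, consider a hedged example: assume a univariate Gaussian likelihood \(\begin{equation} F(y_i,\phi)=\mathcal{N}(y_i;\phi,\sigma^2) \end{equation}\) with known variance and a conjugate Gaussian base measure \(\begin{equation} G_0=\mathcal{N}(\mu_0,\sigma_0^2) \end{equation}\) (this particular model is my choice for illustration, not something fixed by Neal's paper). Then both integrals reduce to closed-form posterior predictive densities:

\begin{aligned} \int F(y_i,\phi)\,dG_0(\phi) &= \mathcal{N}\!\left(y_i;\,\mu_0,\,\sigma_0^2+\sigma^2\right) \\ \int F(y_i,\phi)\,dH_{-i,c}(\phi) &= \mathcal{N}\!\left(y_i;\,\mu_{-i,c},\,\sigma_{-i,c}^2+\sigma^2\right) \end{aligned}

where

\begin{aligned} \sigma_{-i,c}^2 &= \left(\frac{1}{\sigma_0^2}+\frac{n_{-i,c}}{\sigma^2}\right)^{-1} \\ \mu_{-i,c} &= \sigma_{-i,c}^2\left(\frac{\mu_0}{\sigma_0^2}+\frac{1}{\sigma^2}\sum_{j\neq i,\ c_j=c} y_j\right) \end{aligned}

are the variance and mean of the posterior \(\begin{equation} H_{-i,c} \end{equation}\) computed from the observations currently assigned to cluster \(\begin{equation} c \end{equation}\), excluding \(\begin{equation} y_i \end{equation}\).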
The third Gibbs sampling method can be summarized as follows:
Algorithm 3: Collapsed Gibbs sampling
Let the state of the Markov chain consist of \(\begin{equation} c_1,...,c_n \end{equation}\). Repeatedly sample as follows:
- For \(\begin{equation} i=1,\dots,n \end{equation}\): draw a new value from \(\begin{equation} c_i \mid c_{-i},y_i \end{equation}\) as defined by the previous equation.
Yes, that's all, no joking here.
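To make the sweep more tangible, here is a minimal sketch in Python of one collapsed Gibbs pass, written under the same assumed Gaussian/conjugate-Gaussian model as the example above. The function names (`predictive`, `gibbs_sweep`) and the data layout are my own illustrative choices; they are not taken from Neal's paper or from the filter code of the previous post.

```python
import numpy as np

def predictive(y, ys_in_cluster, mu0, var0, var):
    """Posterior predictive density N(y; mu_post, var_post + var) for a
    Gaussian likelihood with known variance `var` and a conjugate Gaussian
    prior N(mu0, var0). With an empty cluster this is the prior predictive,
    i.e. the integral of F(y, phi) against G_0."""
    n = len(ys_in_cluster)
    prec = 1.0 / var0 + n / var
    mu_post = (mu0 / var0 + np.sum(ys_in_cluster) / var) / prec
    var_pred = 1.0 / prec + var
    return np.exp(-0.5 * (y - mu_post) ** 2 / var_pred) / np.sqrt(2 * np.pi * var_pred)

def gibbs_sweep(y, c, alpha, mu0, var0, var, rng):
    """One pass of the collapsed sampler: resample every c_i given c_{-i}."""
    n = len(y)
    for i in range(n):
        c[i] = -1                                     # remove y_i so counts become n_{-i,c}
        labels = sorted(k for k in set(c) if k != -1)
        weights, candidates = [], []
        for k in labels:                              # existing clusters
            members = y[c == k]
            weights.append(len(members) * predictive(y[i], members, mu0, var0, var))
            candidates.append(k)
        # probability of opening a brand-new cluster (draw from G_0)
        weights.append(alpha * predictive(y[i], np.array([]), mu0, var0, var))
        candidates.append(max(labels, default=-1) + 1)
        weights = np.array(weights) / np.sum(weights)  # normalisation plays the role of b
        c[i] = candidates[rng.choice(len(candidates), p=weights)]
    return c
```

A short driver would initialise all assignments to a single cluster (e.g. `c = np.zeros(len(y), dtype=int)`) and call `gibbs_sweep` repeatedly. Note that the common factor \(\begin{equation} 1/(n-1+a) \end{equation}\) cancels when the weights are normalised, which is why it does not appear explicitly in the code.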