(after receiving no answers from /r/statistics, I decided to post here, where my question may be more relevant)
I'm on a personal quest to write an implementation of Gibbs-sampled Latent Dirichlet Allocation, and I believe I understand each of the necessary parts individually:
- Bayesian Networks / Graphical Models / Template Models
- Differences between ML/MAP/Bayesian Estimators
- Approximate Inference, Sampling, MCMC, Gibbs Sampling
I've been working from a menagerie of papers on each topic (the best and most informative of which is "Parameter Estimation for Text Analysis" by Gregor Heinrich), but I'm reluctant to simply take for granted the collapsed Gibbs sampling step derived in that paper.
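(Concretely, the step I'm hesitant to accept on faith is the collapsed full conditional for resampling a single topic assignment. With symmetric hyperparameters, my reading of Heinrich's derivation is

$$p(z_i = k \mid \mathbf{z}_{\neg i}, \mathbf{w}) \;\propto\; \frac{n_{k,\neg i}^{(t)} + \beta}{n_{k,\neg i}^{(\cdot)} + V\beta}\,\bigl(n_{m,\neg i}^{(k)} + \alpha\bigr)$$

where $t = w_i$ is the current token's term, $n_k^{(t)}$ counts assignments of term $t$ to topic $k$, $n_m^{(k)}$ counts topic $k$'s assignments in document $m$, $V$ is the vocabulary size, and $\neg i$ means the counts exclude token $i$. It's exactly the jump from the joint distribution down to these counts that I'd like to see worked out on something smaller first.)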
I've yet to find a good resource that walks through all the steps: first deriving a graphical model / BN for a relatively simple task of interest, then showing how Gibbs sampling can be applied to infer the parameters of that model, ending with an intuitive result. LDA isn't particularly suited for this, as it has too many interacting parts.
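To make concrete what I mean by "relatively simple": the kind of end-to-end example I have in mind is something like inferring the mean and precision of a 1-D Gaussian under semi-conjugate priors, where both full conditionals can be derived by hand. Here's a minimal sketch I put together myself (the prior values, chain length, and burn-in are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_i ~ N(true_mu, 1 / true_tau)
true_mu, true_tau = 2.0, 4.0
x = rng.normal(true_mu, 1.0 / np.sqrt(true_tau), size=200)
n = x.size

# Semi-conjugate priors: mu ~ N(m0, s0sq), tau ~ Gamma(a0, rate=b0)
m0, s0sq = 0.0, 10.0
a0, b0 = 1.0, 1.0

mu, tau = 0.0, 1.0  # arbitrary initialization
samples = []
for it in range(5000):
    # tau | mu, x ~ Gamma(a0 + n/2, rate = b0 + 0.5 * sum((x - mu)^2))
    # (numpy parameterizes Gamma by shape and scale, so scale = 1/rate)
    tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((x - mu) ** 2)))
    # mu | tau, x ~ N(post_mean, post_var), precision-weighted combination
    post_var = 1.0 / (1.0 / s0sq + n * tau)
    post_mean = post_var * (m0 / s0sq + tau * np.sum(x))
    mu = rng.normal(post_mean, np.sqrt(post_var))
    if it >= 1000:  # discard burn-in
        samples.append((mu, tau))

mu_s, tau_s = np.array(samples).T
print(f"posterior mean of mu ~ {mu_s.mean():.3f}, of tau ~ {tau_s.mean():.3f}")
```

What I'm missing is a resource that derives conditionals like these from the graphical model itself, and then shows how the same recipe scales up to a collapsed sampler.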
Does anyone know of a good resource I could use to learn better how Gibbs sampling is applied to parameter estimation for relatively simple models? (Of course, contrived examples are welcome!)
My interest lies more in Machine Learning than pure Statistics, if that influences your answer. Thanks!