
Is there a difference between computing the parameters of GMMs and how the EM algorithm works?


Hey all,

So I have been studying up on the Expectation-Maximization (EM) algorithm, but I also know of Gaussian Mixture Models (GMMs), which are used to describe data that is a mixture of Gaussians (MoG).

Some of what I will say here might be inaccurate, so please correct me, but my question is basically: what is the difference, if any, between how the EM algorithm works and maximum-likelihood estimation of the parameters of a GMM?

I know that in EM we first compute the expected values of the latent variables (the per-component probability weights, or responsibilities, if we are clustering), and then we use those to compute the new means and the new covariance matrices. We then repeat, recomputing the updated latent variables, and so on; a sketch of one such iteration is below.
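To make sure I have the steps right, here is a minimal sketch of one EM iteration for a GMM, assuming numpy/scipy and a data matrix `X`; the helper name `em_step` is my own, not from any library:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):  # hypothetical helper for illustration
    """One EM iteration for a k-component GMM.

    X: (n, d) data; weights: (k,) mixing weights;
    means: (k, d); covs: (k, d, d).
    """
    n, _ = X.shape
    k = len(weights)

    # E-step: responsibilities r[i, j] = P(component j | x_i)
    r = np.empty((n, k))
    for j in range(k):
        r[:, j] = weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
    r /= r.sum(axis=1, keepdims=True)

    # M-step: maximum-likelihood updates given the responsibilities
    nk = r.sum(axis=0)                       # effective count per component
    new_weights = nk / n
    new_means = (r.T @ X) / nk[:, None]
    new_covs = np.empty_like(covs)
    for j in range(k):
        diff = X - new_means[j]
        new_covs[j] = (r[:, j, None] * diff).T @ diff / nk[j]
    return new_weights, new_means, new_covs
```

As I understand it, you just call this repeatedly until the log-likelihood stops improving.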

So I get that, but in GMMs, aren't we also doing essentially the same thing? How does this differ from how we would compute the parameters of a GMM for specific data?

If there is a difference between computing the parameters of a GMM and how the EM algorithm works, what is it? Or are they the same thing?

Thanks!

submitted by Ayakalam
