Channel: Machine Learning

Are Kohonen SOMs all that different from K-means?


Hey all,

So I recently took it upon myself to learn Kohonen SOMs on my own. I did this over the Thanksgiving break while trying to keep my 4-year-old cousins from finding me in the house, so I am quite new to it.

That being said, I am already familiar with K-means.

I have some general questions about them that I would like feedback on:

1) KSOMs adjust their weight vectors toward the actual points in the data set over time. So, in a case with, say, two clusters of points and two output units, the first output unit's weight vector will end up pointing at one of the points in the first cluster, and the second output unit's weight vector will end up pointing at one of the points in the second cluster. My question here is ... isn't this very similar to K-means?
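To make question 1 concrete, here is a minimal NumPy sketch of the two-cluster, two-unit setup I described (the data and learning-rate schedule are made up for illustration, and I've shrunk the neighborhood to just the winning unit, which, as far as I can tell, is exactly where a SOM coincides with online K-means):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up clusters of 2-D points, centered at (0, 0) and (5, 5)
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
cluster_b = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2))
data = np.vstack([cluster_a, cluster_b])

# Two output units, each initialized from a point in a different cluster
weights = np.stack([data[0], data[-1]])

lr = 0.5
for epoch in range(20):
    rng.shuffle(data)
    for x in data:
        # Best-matching unit: the one whose weight vector is closest
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Move only the winner's weight vector toward the point.
        # A full SOM would also update neighboring units via a
        # neighborhood function; with it shrunk to the winner alone,
        # this update is the same as online K-means.
        weights[bmu] += lr * (x - weights[bmu])
    lr *= 0.9  # decay the learning rate over time

# Each weight vector ends up sitting near one cluster
print(np.round(weights, 2))
```

With the neighborhood turned off like this, the two algorithms look basically identical to me, which is what prompts the question.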

2) Seeing as how (I think) this is very similar to K-means, what advantages might it confer over K-means? If I am completely off, then how is it different from K-means? In other words, why would I opt to use KSOMs over K-means?

3) Regarding the fact that they adjust the weight vectors to become one of the actual points over time, is this a 'correlation' of some sort? At the end of the day, a novel point will come along, and its dot product will be taken with each of the (two, in this example) output-layer units. The one with the largest score wins. Did I just do a correlation? Is the dot product the same thing as correlation?
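For what it's worth, here is the small numeric check I tried on question 3 (the vectors are made up): the raw dot product only turns into cosine similarity after normalizing by the vector lengths, and Pearson correlation is the cosine of the mean-centered vectors, so the three are not interchangeable in general:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = np.array([2.0, 4.0, 6.0])  # deliberately w = 2*x

# Raw dot product: depends on the lengths of both vectors
dot = x @ w  # 2 + 8 + 18 = 28

# Cosine similarity: dot product of the length-normalized vectors
cos = dot / (np.linalg.norm(x) * np.linalg.norm(w))

# Pearson correlation: cosine similarity after mean-centering
xc, wc = x - x.mean(), w - w.mean()
pearson = (xc @ wc) / (np.linalg.norm(xc) * np.linalg.norm(wc))

print(dot, cos, pearson)  # 28.0, 1.0, 1.0 since w is a positive multiple of x
```

So the winner-take-all dot product only behaves like a correlation score when the weight vectors (and inputs) are normalized to the same length, which I gather is why some SOM write-ups normalize the weights.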

4) I also see some parallels with perceptrons. I know that perceptrons are used for supervised learning, where a weight vector is trained so that it adequately splits a space into categories. Are KSOMs a sort of unsupervised version of perceptrons?

I am trying to tie all those things together in my head. Thanks!!

submitted by Ayakalam
