
Linear classification with nearest neighbors element?


I'm new to machine learning, but I have a fair amount of experience with the Perceptron learning algorithm. It seems to me that one of the main problems with this method is that it picks a random misclassified point; this could lead to the algorithm fitting noise. For instance, if there were five points with true labels of +1 and one point in the middle labeled -1, that middle point should not be used to update the Perceptron's weights.

I know that a certain amount of noise-fitting is inevitable, but what if a nearest-neighbors check were used to decide which misclassified points to use when updating the weights? Would this be an effective way of reducing noise's effect on the final hypothesis, or would it not really change much? Thanks!
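The idea in the question can be sketched as a filter in front of the standard Perceptron update: before updating on a misclassified point, look at its k nearest neighbors and skip the point if most of them disagree with its label (i.e. it looks like an isolated outlier). This is a hypothetical sketch of that proposal, not a standard named algorithm; the function name and the majority-vote threshold are assumptions.

```python
import numpy as np

def knn_filtered_perceptron(X, y, k=3, epochs=50):
    """Perceptron variant (as proposed in the question) that skips
    misclassified points whose k nearest neighbors mostly disagree
    with their label, treating them as probable noise."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # augment with a bias column
    w = np.zeros(d + 1)                    # weights, bias last
    for _ in range(epochs):
        updated = False
        for i in range(n):
            if np.sign(Xb[i] @ w) != y[i]:            # misclassified
                # k nearest neighbors, excluding the point itself
                dists = np.linalg.norm(X - X[i], axis=1)
                nbrs = np.argsort(dists)[1:k + 1]
                # update only if a majority of neighbors share the
                # point's label; otherwise treat it as noise and skip
                if np.sum(y[nbrs] == y[i]) > k / 2:
                    w += y[i] * Xb[i]
                    updated = True
        if not updated:                    # no usable mistakes left
            break
    return w
```

On the question's example (a cluster of +1 points with a lone -1 in the middle), the lone point's neighbors are all +1, so it is never used for an update, and the learned boundary is driven by the clean points only.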

submitted by jpercussionist
