I have a binary classification problem where my data comes with different degrees of reliability. I have one data set, on the order of 500 samples, which I believe to be perfectly reliable. I have another data set, on the order of 5,000 samples, which I estimate to be about 95% reliable, with the 5% of errors not distributed uniformly (I believe they are biased towards certain types of errors, but I do not know which types). Finally, I have an unlabeled data set on the order of 50,000 samples. I am currently training an SVM on the smallest data set only.
Are there any good techniques for dealing with this? I have looked at semi-supervised learning for the unlabeled data set, but the decision boundary lies in a high-density region, which violates the low-density separation assumption that many semi-supervised methods rely on. (Two trained humans asked to label samples might not place the boundary in exactly the same place, but they would give a nearly identical ranking of samples from most like Class 0 to least like Class 0.) The simplest solution would be to weight the reliable data more heavily and use the larger set as well, choosing the weights via cross-validation, but I'm not sure whether there is a better approach.
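For reference, here is a minimal sketch of the weighting idea I have in mind, assuming scikit-learn and an RBF-kernel SVC; the data arrays, candidate weights, and kernel settings are placeholders, not my actual data or tuned values. The noisy set is always added to the training fold with a candidate per-sample weight, and the weight is chosen by cross-validated accuracy on the trusted samples only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

# Placeholder data -- substitute the real feature matrices and labels.
# X_clean, y_clean: the ~500 trusted samples
# X_noisy, y_noisy: the ~5,000 samples with ~95% label reliability
rng = np.random.default_rng(0)
X_clean = rng.normal(size=(500, 10));  y_clean = rng.integers(0, 2, 500)
X_noisy = rng.normal(size=(5000, 10)); y_noisy = rng.integers(0, 2, 5000)

candidate_weights = [0.05, 0.1, 0.2, 0.5, 1.0]  # weight given to each noisy sample
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

best_w, best_score = None, -np.inf
for w in candidate_weights:
    fold_scores = []
    # Cross-validate on the trusted set; the noisy set is always appended
    # to the training fold with the candidate weight.
    for train_idx, val_idx in cv.split(X_clean, y_clean):
        X_tr = np.vstack([X_clean[train_idx], X_noisy])
        y_tr = np.concatenate([y_clean[train_idx], y_noisy])
        sw = np.concatenate([np.ones(len(train_idx)),
                             np.full(len(X_noisy), w)])
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(X_tr, y_tr, sample_weight=sw)
        # Score on the held-out trusted fold only.
        fold_scores.append(clf.score(X_clean[val_idx], y_clean[val_idx]))
    mean_score = np.mean(fold_scores)
    if mean_score > best_score:
        best_w, best_score = w, mean_score

print(f"best noisy-sample weight: {best_w} (CV accuracy {best_score:.3f})")
```

In this sketch the trusted samples always get weight 1 and only the relative weight of the noisy samples is tuned; the SVM hyperparameters (C, gamma) could of course be tuned jointly in the same loop.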