Appeal to expertise: I'm thinking about a turn-based game classification problem where my example data categorises states and moves as "good" or "bad." Pretty standard stuff. I'm most familiar with neural nets, but interested in trying other approaches too.
For a neural net, I guess I'd do this by labeling "good" as 1 and "bad" as 0, and then just training it on a database of good and bad moves (e.g., recorded games where each move is labeled according to whether the player who made it went on to win or lose). I would then threshold the network's output to decide whether a move is good or not.
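Concretely, the setup I have in mind could be sketched like this (a minimal logistic-regression stand-in for the net; the feature vectors, data, and function names are toy placeholders, not my real representation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy move features: "good" moves cluster around +1, "bad" around -1.
X = np.vstack([rng.normal(+1.0, 0.5, size=(50, 4)),
               rng.normal(-1.0, 0.5, size=(50, 4))])
y = np.concatenate([np.ones(50), np.zeros(50)])  # 1 = good, 0 = bad

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single-layer model with plain gradient descent on cross-entropy.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def is_good_move(features, threshold=0.5):
    # Threshold the model's output probability to get a good/bad decision.
    return sigmoid(features @ w + b) >= threshold

print(is_good_move(np.full(4, 1.0)))   # in the "good" region
print(is_good_move(np.full(4, -1.0)))  # in the "bad" region
```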
However, I'm considering an extension where I can take into account some kind of confidence level in the result. In most cases I'm not 100% sure whether a move is good or bad, so labeling them exactly 1 or 0 doesn't seem right. My question is, if I can estimate my confidence in "goodness" of a move, what techniques are out there that can take into account "unsure" examples?
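One thing I noticed while thinking about this: binary cross-entropy doesn't actually require hard 0/1 targets, so a confidence like 0.8 can be plugged in directly as a "soft" label. A quick numerical check of that (variable names are just illustrative):

```python
import numpy as np

def binary_cross_entropy(p, target):
    # target may be fractional, e.g. 0.8 = "probably a good move"
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# The loss is minimized when the model's output equals the confidence:
probs = np.linspace(0.01, 0.99, 99)
losses = binary_cross_entropy(probs, 0.8)
best = probs[np.argmin(losses)]
print(round(best, 2))  # minimizer sits at the soft target, 0.8
```

So at least in principle, training the net on confidence-valued labels should push its output toward my estimated probability of goodness rather than a hard decision.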
One method that I thought of could be to have two outputs, one for "goodness likelihood" and one for "badness likelihood." Then, for a "good" example I'd make "badness = 0" and "goodness = confidence of a good move", and vice-versa for "bad" examples.
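To make the proposed target encoding concrete, here's the scheme I mean (just a sketch of the labeling rule, not a full training loop; the function name is made up):

```python
def encode_target(label, confidence):
    """Return a (goodness, badness) target pair for one training example."""
    if label == "good":
        return (confidence, 0.0)   # goodness = confidence, badness = 0
    if label == "bad":
        return (0.0, confidence)   # and vice-versa for bad examples
    raise ValueError(f"unknown label: {label!r}")

print(encode_target("good", 0.9))  # (0.9, 0.0)
print(encode_target("bad", 0.6))   # (0.0, 0.6)
```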
Is this a good approach? What other methods are there, perhaps for other kinds of classifiers, for taking into account imperfectly known training data?