Is there a machine learning algorithm that can inject randomness in proportion to its uncertainty about a predicted outcome?

As machine learning algorithms are trained on more and more data, their prediction for any given set of input attributes tends to converge to a single, stable value.

While they have less data (or perhaps just less data about the particular combination of attributes we're interested in), there is greater "uncertainty" about what the prediction would ultimately be given abundant training data.

I'd like to have an algorithm that could inject randomness into its predictions in direct proportion to its uncertainty about its prediction.

You could think of this as a generalization of the beta distribution. Let's say you had no input attributes, and 5 of your outcomes were 1, and 10 were 0. The maximum-likelihood estimate of the outcome probability is 5/15 = 1/3, but it could plausibly be as high as 1/2 or as low as 1/4. We could generate random numbers within this uncertainty by sampling from the Beta(5, 10) distribution (note the "random sample from the distribution" output that Wolfram Alpha provides).
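For concreteness, here's a minimal NumPy sketch of that example. Each draw from Beta(5, 10) is a plausible value for the underlying probability, and the spread of the draws is exactly the "randomness in proportion to uncertainty" I'm after (using the raw counts 5 and 10 directly as the Beta parameters, as above; a uniform prior would give Beta(6, 11) instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 positive and 10 negative outcomes, used directly as the Beta parameters.
samples = rng.beta(5, 10, size=5)
print(samples)         # each draw is a plausible value for the true probability
print(samples.mean())  # draws scatter around the mean of Beta(5, 10), 5/15 = 1/3

# With far more data the posterior concentrates, so the injected randomness
# shrinks automatically: Beta(500, 1000) draws cluster tightly near 1/3.
print(rng.beta(500, 1000, size=5))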

However, this only works with no input attributes; what I need is more of a "contextual beta distribution".

Any ideas?

edit: I guess I'm hoping for an extension to an existing supervised learning algorithm. For example, I've considered a Bayesian network learner where the value of P(A|B) is determined using a beta random variable.
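To make that concrete, here's a rough sketch of what I mean for a single discrete parent B: keep per-context counts and sample P(A|B) from the resulting Beta posterior rather than using a point estimate (the class name and the uniform Beta(1, 1) prior are just illustrative placeholders, not part of any existing library):

```python
import numpy as np


class ContextualBeta:
    """Per-context Beta posterior over a binary outcome (illustrative sketch)."""

    def __init__(self, prior_a=1.0, prior_b=1.0):
        self.prior_a = prior_a  # Beta(1, 1) = uniform prior, an assumption here
        self.prior_b = prior_b
        self.counts = {}        # context -> (num_ones, num_zeros)

    def update(self, context, outcome):
        s, f = self.counts.get(context, (0, 0))
        self.counts[context] = (s + outcome, f + (1 - outcome))

    def sample(self, context, rng):
        # Few observations -> wide posterior -> noisy draws;
        # many observations -> concentrated posterior -> near-deterministic.
        s, f = self.counts.get(context, (0, 0))
        return rng.beta(self.prior_a + s, self.prior_b + f)


rng = np.random.default_rng(0)
model = ContextualBeta()
for outcome in [1] * 5 + [0] * 10:   # same 5 ones / 10 zeros as above
    model.update("B=1", outcome)

print(model.sample("B=1", rng))  # scatters around ~1/3, like the Beta example
print(model.sample("B=0", rng))  # unseen context: draws from the flat prior
```

Sampling predictions from the posterior like this is essentially Thompson sampling, so the injected randomness shrinks automatically as the counts for a given context grow.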

submitted by sanity
