Channel: Machine Learning

How does the Bayes Factor account for degrees of freedom?


For example, suppose I have two possible models: model 1, a uniform distribution over X in [0,1] and Y in [0,1] (independently), and model 2, a uniform distribution over X in [0,1] with Y = X fixed (i.e. one degree of freedom). If I observe the point (0.5, 0.5), how is it that I should favour the latter model?

Given that the probability density at the observed point is the same under both models, p(0.5, 0.5 | M) = 1?

EDIT: After talking with colleagues, I think this is actually a problem with using densities like this: they are not comparable between the two models, since they are not defined with respect to the same measure (model 1's density is with respect to area on the unit square, model 2's with respect to length along the line Y = X). Note this only arises in the noiseless case; e.g. if model 2 also has Gaussian noise in X and Y, then both densities are defined on the plane and are comparable again.
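To make the "comparable again" point concrete, here is a small sketch (the latent-variable noise model, the grid integration, and the sigma values are my own assumptions for illustration, not from the post): once model 2 has Gaussian observation noise, marginalising out the latent point on the line gives a genuine density on the plane, directly comparable with model 1's density of 1.

```python
import numpy as np

def model2_density(x, y, sigma, n=4001):
    # Noisy model 2: Z ~ U[0,1], X = Z + ex, Y = Z + ey, with ex, ey ~ N(0, sigma^2).
    # Marginalising Z by a Riemann sum gives a proper 2D density, same measure as model 1.
    z = np.linspace(0.0, 1.0, n)
    dz = z[1] - z[0]
    pdf = lambda u: np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(pdf(x - z) * pdf(y - z)) * dz)

# Model 1's density is 1 everywhere on the unit square; model 2's density
# blows up on the diagonal and vanishes off it as sigma -> 0.
for sigma in (0.1, 0.01, 0.001):
    print(sigma, model2_density(0.5, 0.5, sigma), model2_density(0.2, 0.8, sigma))
```

This also shows how the noiseless case is the degenerate sigma -> 0 limit: the on-diagonal density grows without bound while the off-diagonal density goes to zero.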

So in the noiseless case: if we observe X = Y, we conclude model 2 (since under model 1 the event X = Y has probability zero given noiseless, infinite-precision observations), and if we observe X != Y, then model 1 is the only possible model.

In the noisy case one can just calculate the respective marginal likelihoods as usual (with a prior on the noise) and compare them.
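As a sketch of that calculation (the Gaussian noise model, the uniform prior over sigma, and the particular sigma grid are all assumptions for illustration): the Bayes factor at the observed point is the ratio of the two marginal likelihoods, with the noise scale integrated out under its prior.

```python
import numpy as np

def model2_marginal(x, y, sigma, n=2001):
    # Noisy model 2: Z ~ U[0,1], X = Z + ex, Y = Z + ey, with ex, ey ~ N(0, sigma^2).
    # Integrate out the latent Z with a simple Riemann sum.
    z = np.linspace(0.0, 1.0, n)
    dz = z[1] - z[0]
    pdf = lambda u: np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(pdf(x - z) * pdf(y - z)) * dz)

def bayes_factor(x, y, sigmas=np.linspace(0.01, 0.2, 50)):
    # Model 1: X, Y independent uniform on [0,1], so its density is 1 on the square.
    p1 = 1.0
    # Model 2: average the marginal likelihood over a (discretised) uniform prior on sigma.
    p2 = float(np.mean([model2_marginal(x, y, s) for s in sigmas]))
    return p2 / p1

print(bayes_factor(0.5, 0.5))  # on the diagonal: BF > 1, favours model 2
print(bayes_factor(0.2, 0.8))  # off the diagonal: BF << 1, favours model 1
```

Note how the degrees-of-freedom penalty appears automatically: off the diagonal, model 2 has wasted its probability mass near the line Y = X, so its marginal likelihood collapses, while on the diagonal its concentration pays off against model 1's spread-out density.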

submitted by jamesmcm
