
Emerging probabilities in Naive Bayes rather than static priors; am I on the right track or lost in the woods?

I am writing my thesis on Naive Bayes classification in a time-series context (customer churn... sorry, I know I'll annoy some of you).

My understanding is that the typical approach is to train a model on a sample set, using the frequency of each label in that set as the Bayesian prior, and then yada, yada. Then one would apply the trained model to the real-time data. Roughly something like the sketch below.
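(For concreteness, here's a minimal sketch of that static-prior baseline using scikit-learn's BernoulliNB; the data, feature count, and daily batch are all made up purely for illustration.)

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # Toy stand-ins for binary behaviour features and churn labels (not real data).
    rng = np.random.default_rng(0)
    X_train = rng.integers(0, 2, size=(1000, 5))
    y_train = rng.integers(0, 2, size=1000)

    model = BernoulliNB()        # class prior defaults to the label frequencies in y_train
    model.fit(X_train, y_train)  # that prior is fixed once training finishes

    X_today = rng.integers(0, 2, size=(10, 5))          # a "real-time" daily batch
    churn_scores = model.predict_proba(X_today)[:, 1]   # scored against the static prior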

I see change in customer (player) behaviour as the most important signal (demographics and other static attributes seem to carry little information), so I am trying to get time-series data into the mix. However, it also occurs to me that the churn probability associated with each player should emerge over time, not be retrained at each iteration. So, I would set a new player's prior churn probability equal to the baseline churn rate, but thereafter the prior at each iteration (daily batches) would be the previous day's posterior forecast. The probability would then be updated based on that day's observed behaviour; a rough sketch of what I mean follows.
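(The feature names, likelihood values, and the 5% baseline churn rate below are all invented; in practice the likelihood tables would be estimated from historical data. The point is only that each day's posterior is carried forward as the next day's prior, so per-player probabilities drift with behaviour instead of being reset by retraining.)

    def daily_update(prior_churn, lik_churn, lik_stay, todays_features):
        # prior_churn: this player's churn probability carried over from yesterday
        #              (the baseline churn rate for a brand-new player).
        # lik_churn / lik_stay: P(feature value | churn) and P(feature value | stay),
        #                       assumed estimated once from historical data.
        # todays_features: the feature values observed for this player today.
        p_churn = prior_churn
        p_stay = 1.0 - prior_churn
        for f in todays_features:
            # Naive Bayes step: multiply in each feature's class-conditional
            # likelihood, treating features as independent given the class.
            p_churn *= lik_churn.get(f, 1e-6)
            p_stay *= lik_stay.get(f, 1e-6)
        # Normalise; today's posterior becomes tomorrow's prior.
        return p_churn / (p_churn + p_stay)

    # Made-up likelihood tables and a made-up 5% baseline churn rate.
    lik_churn = {"no_login": 0.7, "login": 0.3, "deposit": 0.05, "no_deposit": 0.95}
    lik_stay  = {"no_login": 0.2, "login": 0.8, "deposit": 0.30, "no_deposit": 0.70}

    p = 0.05  # new player starts at the baseline churn rate
    for day in [["login", "deposit"], ["login", "no_deposit"], ["no_login", "no_deposit"]]:
        p = daily_update(p, lik_churn, lik_stay, day)
        print(round(p, 4))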

As far as I can tell, this kind of sequential updating is common practice in Bayesian statistics; however, it doesn't seem to be a common way of applying Naive Bayes in machine learning. Can anyone point me to research or examples showing that what I am doing is clearly not novel, or explain why this approach isn't currently used?

Any help is greatly appreciated!

submitted by kezalb
