I've been reading about ROC curves - they are plots of a classifier's true positive rate vs. its false positive rate.
I'm wondering whether increasing the TPR will always increase the FPR (i.e. whether the ROC curve is always non-decreasing)? It definitely would be for any classifier that outputs a continuous score which we then "discretize" by some kind of threshold, since lowering the threshold can only add positive predictions, but is that true for all kinds of classifiers?
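To make the thresholded case concrete, here's a minimal sketch (my own toy data and function names, not from any particular library) that sweeps a threshold over classifier scores and collects the resulting (FPR, TPR) points. Because lowering the threshold can only turn NO predictions into YES predictions, both coordinates are non-decreasing along the sweep:

```python
def roc_points(scores, labels):
    """Return (fpr, tpr) pairs from sweeping a threshold high to low.

    scores: classifier scores, higher = more likely positive.
    labels: 1 for positive instances, 0 for negative.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # threshold above every score: predict all NO
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy example: 4 positives, 4 negatives with made-up scores.
scores = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,    0,   1,   0,   0]
pts = roc_points(scores, labels)

# Both TPR and FPR only ever go up as the threshold drops:
assert all(pts[i][0] <= pts[i + 1][0] and pts[i][1] <= pts[i + 1][1]
           for i in range(len(pts) - 1))
print(pts)
```

So for score-plus-threshold classifiers the curve really is monotone; the open part of the question is whether that extends to classifiers that aren't parameterized by a single threshold.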
Also, to get around this, are there classifiers that output YES/NO/DON'T KNOW? In that case, you could increase the TPR without increasing the FPR, e.g. if you had two independent tests for whether an instance is positive.
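The YES/NO/DON'T KNOW idea is usually called a classifier with a "reject option". A toy sketch of one version (my own construction for illustration): predict YES above a high threshold, NO below a low one, and abstain in between. Widening the abstain band can remove borderline false positives without touching the true positives, though how you score the abstained instances is a modeling choice in itself:

```python
def rates_with_reject(scores, labels, lo, hi):
    """TPR/FPR counting only confident YES predictions.

    YES if score >= hi, NO if score <= lo, DON'T KNOW otherwise.
    Abstentions count as neither a hit nor a false alarm here.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    tp = sum(1 for s, y in zip(scores, labels) if s >= hi and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= hi and y == 0)
    return tp / pos, fp / neg

# Toy data: 2 positives with high scores, negatives with lower ones.
scores = [0.9, 0.8, 0.45, 0.55, 0.3, 0.1]
labels = [1,   1,   0,    0,    0,   0]

# Plain single threshold at 0.5: the 0.55 negative is a false positive.
print(rates_with_reject(scores, labels, 0.5, 0.5))  # (1.0, 0.25)

# Widen the band to [0.4, 0.6]: the borderline 0.55 and 0.45 become
# DON'T KNOW, so the FPR drops while the TPR stays at 1.0.
print(rates_with_reject(scores, labels, 0.4, 0.6))  # (1.0, 0.0)
```

The catch is that the abstentions are a cost of their own, so this trades coverage for accuracy rather than breaking the TPR/FPR trade-off for free.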
That was a long ramble, so thank you for reading through! Any answers or pointers to where I can learn more about this are much appreciated.