People often reference the No Free Lunch theorem (NFLT) when discussing machine learning algorithms, especially when arguing about whether one technique is better than another.
I can't quite understand the implications. It seems to say that if you pick machine learning problems uniformly at random, then no algorithm will outperform chance on average.
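Here's a toy simulation of how I understand that claim, assuming the usual NFLT setup of averaging off-training-set accuracy over every possible target function. The input size, train/test split, and the two learners are arbitrary choices, just for illustration:

    from itertools import product

    n_inputs = 8          # 8 possible inputs (e.g. all 3-bit strings)
    train_idx = range(4)  # the learner sees labels for these inputs
    test_idx = range(4, 8) # off-training-set inputs

    def learner_majority(train_labels):
        """Predict the majority training label for every unseen input."""
        guess = int(sum(train_labels) > len(train_labels) / 2)
        return [guess] * len(test_idx)

    def learner_constant(train_labels):
        """Ignore the data and always predict 0."""
        return [0] * len(test_idx)

    def average_accuracy(learner):
        """Average off-training-set accuracy over every possible labeling."""
        total, count = 0.0, 0
        for labels in product([0, 1], repeat=n_inputs):  # all 2^8 target functions
            preds = learner([labels[i] for i in train_idx])
            truth = [labels[i] for i in test_idx]
            total += sum(p == t for p, t in zip(preds, truth)) / len(truth)
            count += 1
        return total / count

    print(average_accuracy(learner_majority))  # 0.5
    print(average_accuracy(learner_constant))  # 0.5

Averaged over all possible labelings, both learners land at exactly 50% on the unseen inputs, which is what I take the theorem to mean.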
What has this got to do with humans? If our problems were selected uniformly at random, we would never have bothered creating the field, much less civilisation.
To me the NFLT does not seem to contradict the existence of a super duper deep boosted support vector algorithm that dominates all other algorithms on the problems of interest to humans.
Is this right?