Deep networks are touted for their ability to exploit large amounts of data, and this need not be labeled data: even unlabeled data used for unsupervised pre-training helps in discriminative tasks.
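To make that pre-training idea concrete, here is a toy sketch of what I mean (my own example, assuming PyTorch, with placeholder layer sizes and random stand-in data): pre-train an autoencoder on unlabeled examples, then reuse its encoder as the feature extractor for a much smaller labeled task.

```python
import torch
import torch.nn as nn

# Shared encoder: learned during unsupervised pre-training, reused for classification.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

# Phase 1: unsupervised pre-training via reconstruction on unlabeled data.
unlabeled = torch.rand(1000, 784)  # stand-in for real unlabeled data
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(20):
    recon = decoder(encoder(unlabeled))
    loss = nn.functional.mse_loss(recon, unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: supervised fine-tuning on a (much smaller) labeled set.
labeled_x, labeled_y = torch.rand(50, 784), torch.randint(0, 10, (50,))
classifier = nn.Sequential(encoder, nn.Linear(64, 10))  # pre-trained encoder carries over
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for _ in range(20):
    logits = classifier(labeled_x)
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```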
Have people worked on cases where there is a dearth of labeled data, and it is hard to even reach a consensus on what 'appropriate' unlabeled data would be (whatever that may mean)? I understand this may seem a bit contrived, but what I am basically driving at is learning with small datasets.
So what is the current paradigm for getting the most mileage out of your dataset?
Would you recommend any works regarding this in the context of deep learning?
Or maybe some work from outside deep learning that you feel is a good step in this direction?
Thanks in advance.