http://youtu.be/LZnAFO5gkOQ?t=37m12s
Specifically, the part where he says it is well understood that there are high-dimensional functions which cannot be approximated, that neural nets approximate a "class" of functions, and that we know nothing about the function spaces they can approximate.
And then the panelist to his right says that we do know, but he gets cut off.
Can anyone give some more insight into these questions? Also, where can I learn more about this stuff?
Also, one final question that may be stupid: do we know for a fact that everything that happens in deep learning can be considered some sort of mathematical function?
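On that last question, at least for the forward pass the answer is yes by construction: a feedforward net is literally a composition of affine maps and pointwise nonlinearities, so it defines a deterministic function from inputs to outputs. Here is a minimal sketch (the weights are made up purely for illustration):

```python
import numpy as np

def relu(x):
    # Pointwise nonlinearity: itself an ordinary mathematical function.
    return np.maximum(x, 0.0)

def mlp(x, params):
    """A two-layer feedforward net written out as function composition:
    f(x) = W2 @ relu(W1 @ x + b1) + b2, a map from R^2 to R^1."""
    (W1, b1), (W2, b2) = params
    return W2 @ relu(W1 @ x + b1) + b2

# Hypothetical fixed weights, chosen only to make the example concrete.
params = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, 1.0])),
          (np.array([[1.0, 1.0]]), np.array([-0.5]))]

x = np.array([1.0, 2.0])
y1 = mlp(x, params)
y2 = mlp(x, params)
# Same input always yields the same output, as for any function.
```

Once the weights are fixed, the network is just one member of the parametric family of functions the architecture defines; the universal-approximation results the panelists are arguing about concern which target functions that family can get close to.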