I've been reading about compressive sensing, looking at some tutorials / slides / papers.
All of the tutorials start with Nyquist rates and other signal-processing talk, treating samples as discrete frequency values. I couldn't find any papers that explain it from a non-DSP perspective.
What I think I know:
Most real data is sparse (in some basis), and compressive sensing randomly samples your input against some (learnt?) bases to compress it, with a reconstruction error bound that is extremely small.
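To make my current mental model concrete, here's a tiny numpy sketch (every name and parameter is my own toy choice, not from any paper): a sparse signal is measured with a random matrix, and then recovered with a greedy sparse solver (orthogonal matching pursuit).

```python
import numpy as np

# Toy sketch of the compressed-sensing pipeline as I understand it:
# a k-sparse signal x of length n is observed through m << n random
# projections y = A @ x, then recovered with a sparse solver.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 3            # signal length, measurements, sparsity

x = np.zeros(n)                 # ground truth: a k-sparse signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random (not learnt) sensing matrix
y = A @ x                                  # the m compressed measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("reconstruction error:", np.linalg.norm(x - x_hat))
```

Note that in this sketch the sensing matrix `A` is purely random rather than learnt, which is part of what confuses me about where learning comes in.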
What I don't know but want to know:
If the bases are learnt, how are they learnt? Matrix factorization? Is there a very simple explanation of how the learning works, and maybe a link/paper for understanding just the learning process?
How are the bases learnt in compressive sensing different from those learnt by autoencoders (with sparsity enforced)? How are they different from k-means centroids?
If you can, could you explain the difference in terms of one commonly used machine learning model, so that it's easy to understand by comparison?
Are there any applications apart from reconstructing noisy data, saving bandwidth, etc.?
If you can answer any of these questions at all, or link to appropriate slides/blog posts, I'd be grateful. I've already taken a look at some posts on Nuit Blanche. Thanks.