Can you explain compressive sensing in a few words from a machine learning perspective?

I've been reading about compressive sensing, looking at some tutorials / slides / papers.

All of the tutorials start with Nyquist frequencies and other signal processing talk, treating samples as discrete frequency values. I couldn't find any papers that explain it from a non-DSP perspective.

What I think I know:

Most real data is sparse, and compressive sensing randomly samples your input with some (learnt?) bases to compress it, with an extremely small bound on the reconstruction error.
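
To make my mental model concrete, here is a minimal sketch of the setup as I understand it (the sizes, the identity sparsifying basis, and the use of sklearn's OrthogonalMatchingPursuit for recovery are just illustrative choices on my part):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, k, m = 256, 8, 64           # signal length, sparsity, number of measurements

    # A k-sparse signal in the canonical basis (in general the signal is only
    # sparse in some basis Psi, e.g. wavelets; Psi = identity keeps this short).
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.normal(size=k)

    # Random (not learnt) measurement matrix: take m << n linear measurements.
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x                    # the compressed measurements

    # Recover the sparse signal; orthogonal matching pursuit here, with basis
    # pursuit / L1 minimisation being the other standard choice.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
    x_hat = omp.coef_
    print("reconstruction error:", np.linalg.norm(x - x_hat))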

What I don't know but want to know:

  • If the bases are learnt, how are they learnt? Matrix factorization? Any very simple explanation of how they're learnt? And maybe a link/paper just for understanding the learning process? (See the sketch after this list for the kind of thing I mean by "learnt bases".)

  • How are the bases learnt in compressive sensing different from those learnt by autoencoders (with sparsity enforced)? How are they different from k-means centroids?

  • If you can, could you explain how it differs from one commonly used machine learning model, so it's easy to understand by comparison?

  • Are there any applications apart from reconstructing noisy data, saving bandwidth, etc.?
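
For the first two bullets, this is the kind of thing I mean by "learnt bases" (a minimal sklearn-based sketch; MiniBatchDictionaryLearning, KMeans, and all the sizes are just my illustrative choices, and I realise classical compressive sensing typically uses a random measurement matrix rather than a learnt one):

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))   # pretend these are 500 signals / image patches

    # "Learnt bases" in the dictionary-learning sense: factor X ~= codes @ atoms,
    # with the codes forced to be sparse. The rows of `atoms` are the learnt bases.
    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
    codes = dico.fit_transform(X)    # sparse codes, shape (500, 32)
    atoms = dico.components_         # learnt dictionary, shape (32, 64)

    # k-means as the extreme case: each sample is "coded" by exactly one centroid.
    centroids = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X).cluster_centers_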

If you can answer any of these questions at all, or link to appropriate slides/blog entries, etc., I'd be grateful. I took a look at some blog entries on Nuit Blanche. Thanks.

submitted by r-sync
