
Manifold/subspace for neural network parameters?

I have come across a lot of work that exploits the fact that data (images/speech) lie on a common manifold. This has motivated methods that learn classifier parameters (neural networks/kernel-based methods) in a way that captures the underlying data manifold.

I am aware of methods like Joint Factor Analysis (http://www1.icsi.berkeley.edu/Speech/presentations/AFRL_ICSI_visit2_JFA_tutorial_icsitalk.pdf) and Subspace Gaussian Mixture Models (http://research.microsoft.com/pubs/80931/ubmdoc.pdf) in speaker recognition and speech recognition, where the parameters of the models themselves are thought to lie in a low-dimensional subspace.
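To make the subspace idea concrete, here is a minimal sketch (my own illustration, not taken from either paper; the dimensions and variable names are assumptions) of an SGMM-like parametrization, where each state's Gaussian means are projections of a small state vector rather than free parameters:

import numpy as np

rng = np.random.default_rng(0)
feat_dim, subspace_dim, n_gauss, n_states = 40, 50, 16, 100

# Shared projection matrices and per-state low-dimensional vectors.
M = rng.standard_normal((n_gauss, feat_dim, subspace_dim))
v = rng.standard_normal((n_states, subspace_dim))

# Mean of Gaussian i in state j: mu[j, i] = M[i] @ v[j],
# so all means are constrained to a low-dimensional subspace.
mu = np.einsum("ifs,js->jif", M, v)
print(mu.shape)  # (100, 16, 40), with far fewer free parameters than 100*16*40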

Turning to efficient parametrization of neural networks: has anyone posited the hypothesis that, for a certain task or class of tasks, the weights of multiple layers of a deep neural network could lie on a manifold, and perhaps have some structure to them? If so, any pointers to papers in that direction would be very helpful.
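As a rough sketch of what I mean (illustrative only; the module name SubspaceLinear and the choice of a fixed random basis are my own assumptions, not an existing method from the literature), one could constrain each layer's weight matrix to a low-dimensional linear subspace and train only the subspace coordinates:

import torch
import torch.nn as nn

class SubspaceLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        # Fixed random basis matrices spanning a low-dimensional subspace of weight space.
        self.register_buffer("basis", torch.randn(n_basis, out_dim, in_dim) / in_dim ** 0.5)
        # Only the subspace coordinates z (and the bias) are trainable.
        self.z = nn.Parameter(torch.randn(n_basis) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # The layer's weight matrix is a point in the span of the basis: W = sum_k z_k B_k.
        W = torch.einsum("k,koi->oi", self.z, self.basis)
        return x @ W.t() + self.bias

# A two-layer network whose entire weight set lives in a (2 * n_basis)-dimensional subspace.
net = nn.Sequential(SubspaceLinear(784, 128), nn.ReLU(), SubspaceLinear(128, 10))
x = torch.randn(32, 784)
print(net(x).shape)  # torch.Size([32, 10])

The question is whether the weights of networks trained on a task (or family of tasks) actually exhibit this kind of low-dimensional or otherwise structured behavior, and whether anyone has studied it.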

submitted by speechMachine
