
When it comes to large-scale SVM or kd-tree training and testing, how do you back the result with a database?


I'm trying to figure out how to back a large-scale SVM (more than 100,000 images/classes) with a database, either a NoSQL key-value store or a relational database like PostgreSQL.

The idea of keeping this data in something like a MATLAB model file is less than appealing.
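To make the question concrete, the kind of thing I have in mind is roughly the sketch below (just my assumption of how it might look, using scikit-learn and psycopg2; the table name, columns, and connection string are placeholders): train the SVM, then persist the pieces that define the decision function (support vectors, dual coefficients, intercept, kernel parameters) as rows in PostgreSQL instead of a model file on disk.

    # Hypothetical sketch: persist a trained SVM's parameters in PostgreSQL
    # rather than a .mat/.pkl file. Names and schema are made up for illustration.
    import pickle

    import numpy as np
    import psycopg2
    from sklearn.svm import SVC

    X = np.random.rand(1000, 128)            # placeholder image descriptors
    y = np.random.randint(0, 10, size=1000)  # placeholder labels

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

    conn = psycopg2.connect("dbname=models user=me")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS svm_models (
                name         text PRIMARY KEY,
                support_vecs bytea,   -- clf.support_vectors_, serialized
                dual_coef    bytea,   -- clf.dual_coef_
                intercept    bytea,   -- clf.intercept_
                params       bytea    -- kernel/gamma/etc., to rebuild the model
            )""")
        cur.execute(
            "INSERT INTO svm_models VALUES (%s, %s, %s, %s, %s) "
            "ON CONFLICT (name) DO NOTHING",
            ("image_svm_v1",
             psycopg2.Binary(pickle.dumps(clf.support_vectors_)),
             psycopg2.Binary(pickle.dumps(clf.dual_coef_)),
             psycopg2.Binary(pickle.dumps(clf.intercept_)),
             psycopg2.Binary(pickle.dumps(clf.get_params()))))

Is that the right shape, or do people just blob the whole serialized model into one column (or a key-value store) and call it a day?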

I've seen this paper: http://grids.ucs.indiana.edu/ptliupages/publications/Study%20on%20Parallel%20SVM%20Based%20on%20MapReduce.pdf

But even if I throw Hadoop into the mix and start using MapReduce, I don't feel like I follow how I'm escaping a file-based dataset.
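Again, purely as an assumption on my part, what I mean by "escaping a file-based dataset" is something like streaming the training rows out of PostgreSQL with a server-side cursor and feeding them to an out-of-core linear SVM (SGDClassifier with hinge loss) via partial_fit, so nothing has to sit in one giant .mat or CSV file. The table and column names here are invented:

    # Hypothetical sketch: stream labeled feature vectors out of PostgreSQL in
    # batches and train a linear SVM online, instead of loading files.
    import numpy as np
    import psycopg2
    from sklearn.linear_model import SGDClassifier

    conn = psycopg2.connect("dbname=features user=me")       # hypothetical DSN
    cur = conn.cursor(name="feature_stream")                  # named => server-side cursor
    cur.itersize = 10_000
    cur.execute("SELECT label, feature_vector FROM image_features")  # hypothetical table

    clf = SGDClassifier(loss="hinge")        # hinge loss ~ linear SVM, trained incrementally
    classes = np.arange(1000)                # full label set must be known up front

    batch = cur.fetchmany(10_000)
    while batch:
        y = np.array([row[0] for row in batch])
        X = np.array([row[1] for row in batch], dtype=float)  # assumes a float[] column
        clf.partial_fit(X, y, classes=classes)
        batch = cur.fetchmany(10_000)

    cur.close()
    conn.close()

But that only covers the linear, single-machine case, which is why the MapReduce/PSVM papers confuse me.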

Also, I've come across Parallel SVM (PSVM): https://code.google.com/p/psvm

I guess I just feel like there's something I'm not getting.

Any ideas? Am I just overthinking it?

submitted by econnerd
