I have a question for you all regarding image classification. Say we have a set of training images, a set of test images, a finite collection of target classes, and an image feature that we can extract from the images (e.g. a color patch, a SIFT descriptor, ...). With a model such as bag-of-words, one approach is to aggregate all the features of all the training images, cluster these features, and then create a relative distribution of the clusters for each image (by assigning each feature of the image to its closest cluster). The relative distributions of the labeled training data can be used to train a classifier, which can then be tested on the test images.
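To make sure we're on the same page, here is a minimal sketch of the bag-of-words pipeline I mean, using scikit-learn and random vectors as a stand-in for real extracted descriptors (the dimensions, cluster count, and classifier choice are just placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for per-image descriptors (e.g. SIFT vectors):
# each "image" yields a variable number of feature vectors.
def fake_descriptors(n_images, dim=8):
    return [rng.normal(size=(rng.integers(20, 40), dim)) for _ in range(n_images)]

train_feats = fake_descriptors(30)
test_feats = fake_descriptors(10)
train_labels = rng.integers(0, 3, size=30)  # 3 target classes

# 1. Aggregate all training descriptors and cluster them into a codebook.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(train_feats))

# 2. Represent each image as a normalized histogram over the clusters
#    (each descriptor votes for its closest cluster).
def bow_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X_train = np.array([bow_histogram(f) for f in train_feats])
X_test = np.array([bow_histogram(f) for f in test_feats])

# 3. Train any multiclass classifier on the histograms, then predict.
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
predictions = clf.predict(X_test)
```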
My question is: what are the procedural steps for training and testing with Latent Dirichlet Allocation? I am familiar with the basics of LDA itself, but I cannot yet picture the overall procedure. More specifically, how do you go from image features to an image classifier, and how do you determine the target classes of the test images (i.e. how do you use LDA for non-binary classification)?