In a learning task (web page classification), I have trained 3 classifiers on the same training set:
* Classifier 1 uses feature set A (extracted with feature_extractor_A), e.g. features from the HTML structure
* Classifier 2 uses feature set B (extracted with feature_extractor_B), e.g. features from the words in each page
* Classifier 3 uses feature set C (extracted with feature_extractor_C), e.g. features from the byte contents of each page
As you can see, all 3 classifiers are trained on the same instances, but each uses a different feature extractor (and hence a different feature set).
I then used the outputs of these 3 classifiers as inputs to a meta-classifier (in this case, a random forest).
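For concreteness, here is a minimal sketch of this setup in scikit-learn. Only the overall structure comes from the question: the three feature views are simulated by slicing a synthetic dataset (in practice each would come from its own extractor, like the feature_extractor_A/B/C mentioned above), and logistic regression is an assumed stand-in for the unspecified base classifiers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins for the three feature views (A: HTML structure,
# B: words, C: byte contents); in practice each comes from its own extractor.
X_full, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_a, X_b, X_c = X_full[:, :10], X_full[:, 10:20], X_full[:, 20:]

# Assumed base learners; the question does not say which classifiers were used.
base_learners = [LogisticRegression(max_iter=1000) for _ in range(3)]

# Out-of-fold predicted probabilities from each base classifier, so the
# meta-classifier never sees predictions a learner made on its own training data.
meta_features = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for clf, X in zip(base_learners, [X_a, X_b, X_c])
])

# The meta-classifier (a random forest, as in the question) is trained
# on the stacked base-level outputs.
meta_clf = RandomForestClassifier(n_estimators=200, random_state=0)
meta_clf.fit(meta_features, y)
```

The out-of-fold step is the usual way to build the level-1 training set in stacking; fitting the base learners on all the data and feeding their in-sample predictions to the meta-classifier would leak training labels.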
I'm wondering what this system is formally called in the machine learning literature. I think the term stacked generalization is used when the feature set is the same for all base classifiers (see Figure 7 in this link: http://www.scholarpedia.org/article/Ensemble_learning ).
Is my method still called stacked generalization? If not, what is it?