Had some questions on understanding multiple-instance learning (MIL)

Multiple-instance learning (MIL) has me sort of scratching my head. I feel like I'm not understanding something.

So I get the problem formulation:

  • Labeling every instance can be prohibitively labor-intensive or outright impossible, so we go for a "weakly supervised" approach
  • Instead of labeling all instances, we group instances into bags
  • If a bag contains at least one positive instance, the whole bag is labeled positive
  • If a bag contains no positive instances, the whole bag is labeled negative (a toy sketch of this labeling rule is right after this list)
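To check that I have the setup straight, here's a minimal Python sketch of that bag-labeling rule (the bags and their contents are made up for illustration):

    # Toy illustration of the MIL bag-labeling rule.
    # Instance labels are 0/1; a bag is positive iff it holds at least one positive instance.
    bags = {
        "bag_a": [0, 0, 1],  # one positive instance  -> bag label 1
        "bag_b": [0, 0, 0],  # no positive instances  -> bag label 0
    }

    bag_labels = {name: int(any(labels)) for name, labels in bags.items()}
    print(bag_labels)  # {'bag_a': 1, 'bag_b': 0}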

Now, from here I'm not sure what the goal is. Are we trying to train a bag-level classifier that will, with sufficient training, also work for instance-level classification? Or are we trying to train an instance-level classifier that will work at the bag level? Or maybe we're trying to do both?

From what I can tell, if we have a good instance-level classifier we automatically have a bag-level classifier: just score every instance in a bag and call the bag positive if any instance looks positive.
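In code, the reduction I'm picturing is something like the following (instance_classifier stands in for any instance-level scorer; the names and the 0.5 threshold are just my own placeholders):

    from typing import Callable, Sequence

    def bag_predict(instance_classifier: Callable[[object], float],
                    bag: Sequence[object],
                    threshold: float = 0.5) -> int:
        # Score every instance, take the max, and call the bag positive
        # if the best instance score clears the threshold.
        best_score = max(instance_classifier(x) for x in bag)
        return int(best_score >= threshold)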

How is data prepared for multiple-instance learning on images? Do we label the object inside the image and then randomly generate a series of image patches containing the object to get a set of positive instances? Are those instances then distributed among several bags to make a bunch of positive bags, or kept together as one really positive bag? Do the negative instances come from just anywhere, inside the same image or not? (The scheme I'm imagining is sketched below.)
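For concreteness, the data-preparation scheme I have in mind looks roughly like this (patches, contains_object, the bag size, and the random sampling are all my own invented placeholders, not taken from any particular MIL paper):

    import random

    def make_bags(patches, contains_object, bag_size=8, n_bags=4, seed=0):
        # One possible way to turn labeled image patches into MIL bags:
        # sample fixed-size bags of patches and mark a bag positive if
        # any patch in it overlaps the labeled object.
        rng = random.Random(seed)
        bags, labels = [], []
        for _ in range(n_bags):
            bag = rng.sample(patches, bag_size)
            bags.append(bag)
            labels.append(int(any(contains_object(p) for p in bag)))
        return bags, labels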

If the bags had a fixed structure (e.g., always n instances in the same order, or something like that), then is the multiple-instance learning problem effectively a feature-selection problem?

How is MIL related to semi-supervised learning methods like label propagation?

Sorry for the stream-of-questions quality of this post. Honestly, if someone could point me toward some good resources on multiple-instance learning, I'd greatly appreciate it.

Thanks!

submitted by delarhi