
Studying The Effect of MIL Pooling Filters on MIL Tasks (2006.01561v1)

Published 2 Jun 2020 in cs.CV and cs.LG

Abstract: There are different multiple instance learning (MIL) pooling filters used in MIL models. In this paper, we study the effect of different MIL pooling filters on the performance of MIL models in real world MIL tasks. We designed a neural network based MIL framework with 5 different MIL pooling filters: 'max', 'mean', 'attention', 'distribution' and 'distribution with attention'. We also formulated 5 different MIL tasks on a real world lymph node metastases dataset. We found that the performance of our framework in a task is different for different filters. We also observed that the performances of the five pooling filters differ from task to task. Hence, the selection of a correct MIL pooling filter for each MIL task is crucial for better performance. Furthermore, we noticed that models with 'distribution' and 'distribution with attention' pooling filters consistently perform well in almost all of the tasks. We attribute this phenomenon to the amount of information captured by 'distribution' based pooling filters. While point estimate based pooling filters, like 'max' and 'mean', produce point estimates of distributions, 'distribution' based pooling filters capture the full information in distributions. Lastly, we compared the performance of our neural network model with 'distribution' pooling filter with the performance of the best MIL methods in the literature on classical MIL datasets and our model outperformed the others.
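The abstract contrasts point-estimate pooling filters ('max', 'mean') with 'attention' and 'distribution' based filters that aggregate instance features into a bag-level representation. As an illustration only, here is a minimal PyTorch sketch of what such pooling filters can look like; the layer sizes, bin count, and kernel choice in the 'distribution' filter are assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MaxPooling(nn.Module):
    """'max' pooling: point estimate, the maximum over instance features in a bag."""
    def forward(self, H):              # H: (num_instances, feat_dim)
        return H.max(dim=0).values     # (feat_dim,)

class MeanPooling(nn.Module):
    """'mean' pooling: point estimate, the average of instance features in a bag."""
    def forward(self, H):
        return H.mean(dim=0)

class AttentionPooling(nn.Module):
    """'attention' pooling: a learned weighted average of instance features."""
    def __init__(self, feat_dim, hidden_dim=64):   # hidden_dim is an illustrative choice
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
    def forward(self, H):
        a = torch.softmax(self.attn(H), dim=0)     # (num_instances, 1) attention weights
        return (a * H).sum(dim=0)                  # (feat_dim,)

class DistributionPooling(nn.Module):
    """'distribution' pooling: summarizes each feature dimension with a histogram over
    instances, retaining more of the distribution than a single point estimate.
    The soft-binning scheme below is an illustrative assumption."""
    def __init__(self, num_bins=11, sigma=0.05):
        super().__init__()
        self.register_buffer("centers", torch.linspace(0.0, 1.0, num_bins))
        self.sigma = sigma
    def forward(self, H):                          # H assumed in [0, 1], shape (N, feat_dim)
        # Soft-assign each instance feature value to histogram bins with a Gaussian kernel.
        d = H.unsqueeze(-1) - self.centers         # (N, feat_dim, num_bins)
        w = torch.exp(-0.5 * (d / self.sigma) ** 2)
        hist = w.sum(dim=0)                        # accumulate over instances
        hist = hist / hist.sum(dim=-1, keepdim=True)   # normalize per feature dimension
        return hist.flatten()                      # (feat_dim * num_bins,)
```

In all four cases the filter maps a variable-size set of instance features to a fixed-size bag representation that a downstream classifier can consume; the 'distribution' filter simply keeps a per-feature histogram instead of a single summary statistic.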
