
mil-benchmarks: Standardized Evaluation of Deep Multiple-Instance Learning Techniques

Published 4 May 2021 in cs.LG | (2105.01443v1)

Abstract: Multiple-instance learning is a subset of weakly supervised learning where labels are applied to sets of instances rather than to the instances themselves. Under the standard assumption, a set is positive only if at least one instance in the set is positive. This paper introduces a series of multiple-instance learning benchmarks generated from MNIST, Fashion-MNIST, and CIFAR10. These benchmarks test the standard, presence, absence, and complex assumptions and provide a framework for distributing future benchmarks. I implement and evaluate several multiple-instance learning techniques against the benchmarks. Further, I evaluate the Noisy-And method with label noise and find mixed results across datasets. The models are implemented in TensorFlow 2.4.1 and are available on GitHub. The benchmarks are available from PyPI as mil-benchmarks and on GitHub.
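The standard assumption described above can be illustrated with a short sketch. This is not the mil-benchmarks API; it is a minimal, hypothetical example of grouping instance-labeled data (such as MNIST digit labels) into fixed-size bags whose bag-level label is positive iff the bag contains at least one instance of a chosen target class.

```python
import numpy as np

def make_bags(instance_labels, bag_size, positive_class, seed=0):
    """Group instances into fixed-size bags and derive bag-level labels
    under the standard multiple-instance assumption."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(instance_labels))
    n_bags = len(order) // bag_size
    bags, bag_labels = [], []
    for i in range(n_bags):
        idx = order[i * bag_size:(i + 1) * bag_size]
        bags.append(idx)
        # Standard assumption: the bag is positive iff any instance
        # in it belongs to the target class.
        bag_labels.append(int(np.any(instance_labels[idx] == positive_class)))
    return bags, np.array(bag_labels)

# Toy instance labels standing in for dataset class labels.
labels = np.array([0, 1, 2, 3, 1, 0, 2, 1, 3, 0, 2, 2])
bags, y = make_bags(labels, bag_size=4, positive_class=1)
```

The presence, absence, and complex assumptions mentioned in the abstract would replace the `np.any(...)` rule with a different bag-labeling function over the instances.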

Citations (2)

Authors (1)
