Ensemble Pruning based on Objection Maximization with a General Distributed Framework (1806.04899v3)

Published 13 Jun 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Ensemble pruning, which selects a subset of individual learners from an original ensemble, alleviates ensemble learning's deficiencies in time and space cost. Accuracy and diversity are two crucial factors, but they usually conflict with each other. To balance the two, we formalize ensemble pruning as an objection maximization problem based on information entropy. We then propose an ensemble pruning method with a centralized version and a distributed version, where the latter speeds up the former. Finally, we extract a general distributed framework for ensemble pruning that applies to most existing ensemble pruning methods and reduces time consumption without much accuracy degradation. Experimental results validate the efficiency of our framework and methods, in particular a remarkable improvement in execution speed accompanied by gratifying accuracy.
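The abstract describes the method only at a high level and does not give the entropy-based objective, so the sketch below is an illustrative assumption rather than the authors' algorithm. It scores a candidate sub-ensemble by a hypothetical accuracy/diversity trade-off (using pairwise disagreement as a stand-in for the paper's information-entropy measure), greedily grows a pruned ensemble in the spirit of the centralized version, and wraps that in a partition-then-merge loop to mimic the distributed framework. The names `objective`, `greedy_prune`, and `distributed_prune`, the `alpha` weight, and the disagreement proxy are all hypothetical.

```python
import numpy as np
from itertools import combinations

def disagreement(pred_a, pred_b):
    # Fraction of samples on which two learners disagree (simple diversity proxy;
    # a stand-in for the paper's information-entropy measure, which is not given here).
    return float(np.mean(pred_a != pred_b))

def objective(selected, preds, y, alpha=0.5):
    # Hypothetical trade-off between mean member accuracy and mean pairwise diversity.
    acc = np.mean([np.mean(preds[i] == y) for i in selected])
    if len(selected) < 2:
        return (1 - alpha) * acc
    div = np.mean([disagreement(preds[i], preds[j])
                   for i, j in combinations(selected, 2)])
    return (1 - alpha) * acc + alpha * div

def greedy_prune(preds, y, k, alpha=0.5):
    # Centralized flavor: greedily grow a sub-ensemble of size k that
    # maximizes the objective at each step.
    remaining, selected = set(range(len(preds))), []
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda i: objective(selected + [i], preds, y, alpha))
        selected.append(best)
        remaining.remove(best)
    return selected

def distributed_prune(preds, y, k, n_parts=4, alpha=0.5):
    # Partition-then-merge: prune each partition locally, then prune the union,
    # echoing the divide-and-conquer flavor of a distributed framework.
    parts = np.array_split(np.arange(len(preds)), n_parts)
    pool = []
    for part in parts:
        picked = greedy_prune([preds[i] for i in part], y, k, alpha)
        pool.extend(int(part[j]) for j in picked)
    final = greedy_prune([preds[i] for i in pool], y, k, alpha)
    return [pool[j] for j in final]

if __name__ == "__main__":
    # Toy demo: 20 weak "learners", each agreeing with y on ~70% of samples.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=500)
    preds = [np.where(rng.random(500) < 0.7, y, 1 - y) for _ in range(20)]
    print(distributed_prune(preds, y, k=5))
```

A plausible reason this scheme saves time, consistent with the abstract's claim: the pairwise diversity scoring grows quickly with ensemble size, so running the greedy pass on small partitions first means the expensive final pass only sees the locally surviving learners.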

Citations (38)
