Arithmetical Binary Decision Tree Traversals (2209.04825v8)

Published 11 Sep 2022 in cs.LG, cs.DS, cs.NA, and math.NA

Abstract: This paper introduces a series of methods for traversing binary decision trees using arithmetic operations. We present a suite of binary tree traversal algorithms that leverage novel representation matrices to flatten the full binary tree structure and embed the aggregated internal-node Boolean tests into a single binary vector. Our approach, grounded in maximum inner product search, offers new insights into decision tree traversal.
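
To make the flattened representation concrete, below is a minimal sketch, in NumPy, of one way a full binary decision tree of depth d can be traversed with a single matrix-vector product followed by an argmax, i.e. as a maximum inner product search over leaves. The breadth-first node indexing, the build_representation helper, and the +1/-1-plus-offset encoding are illustrative assumptions for this sketch, not the paper's actual construction.

import numpy as np

# Illustrative sketch (not the paper's implementation): traverse a *full*
# binary decision tree of depth d with one matrix-vector product and an
# argmax over leaves. Internal nodes are indexed breadth-first 0..2^d-2,
# node i tests x[feature[i]] > threshold[i], and a true test means "go right".

def build_representation(d):
    """Return (S, offset) such that, for the binary vector b of all internal
    node tests, np.argmax(S @ b + offset) is the leaf reached by ordinary
    root-to-leaf traversal. Row L of S holds +1 for ancestors entered from
    the right (test true) and -1 for ancestors entered from the left;
    offset[L] counts the left turns, so each matched ancestor adds exactly 1."""
    n_internal, n_leaves = 2 ** d - 1, 2 ** d
    S = np.zeros((n_leaves, n_internal))
    offset = np.zeros(n_leaves)
    for leaf in range(n_leaves):
        node = 0
        for bit in format(leaf, f"0{d}b"):        # path bits, root first
            if bit == "1":                         # right child: want b = 1
                S[leaf, node] = 1.0
                node = 2 * node + 2
            else:                                  # left child: want b = 0
                S[leaf, node] = -1.0
                offset[leaf] += 1.0
                node = 2 * node + 1
    return S, offset

def traverse(x, feature, threshold, S, offset):
    """Evaluate every internal test at once (the aggregated Boolean vector),
    then pick the leaf whose path agrees with all of its ancestors."""
    b = (x[feature] > threshold).astype(float)
    return int(np.argmax(S @ b + offset))

if __name__ == "__main__":
    d = 3
    rng = np.random.default_rng(0)
    feature = rng.integers(0, 4, size=2 ** d - 1)   # hypothetical node tests
    threshold = rng.normal(size=2 ** d - 1)
    S, offset = build_representation(d)
    x = rng.normal(size=4)
    print("leaf reached:", traverse(x, feature, threshold, S, offset))

In this encoding every ancestor whose evaluated test agrees with a leaf's path contributes exactly 1 to that leaf's score, so the leaf reached by ordinary traversal is the unique leaf scoring d and is therefore the argmax.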

