Enhancing Robustness of Gradient-Boosted Decision Trees through One-Hot Encoding and Regularization (2304.13761v3)

Published 26 Apr 2023 in stat.ML and cs.LG

Abstract: Gradient-boosted decision trees (GBDT) are a widely used and highly effective machine learning approach for tabular data modeling. However, their complex structure may lead to low robustness against small covariate perturbations in unseen data. In this study, we apply one-hot encoding to convert a GBDT model into a linear framework, encoding each tree leaf as one dummy variable. This allows for the use of linear regression techniques, plus a novel risk decomposition for assessing the robustness of a GBDT model against covariate perturbations. We propose to enhance the robustness of GBDT models by refitting their linear regression forms with $L_1$ or $L_2$ regularization. Theoretical results are obtained about the effect of regularization on model performance and robustness. Numerical experiments demonstrate that the proposed regularization approach can enhance the robustness of one-hot-encoded GBDT models.
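The pipeline described in the abstract — fit a GBDT, one-hot encode each sample's leaf memberships into dummy variables, then refit the leaf weights with a regularized linear model — can be sketched as follows. This is an illustrative reconstruction using scikit-learn, not the authors' implementation; the dataset, model sizes, and regularization strength are placeholder choices.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

# Synthetic regression data stands in for a real tabular dataset.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

# 1. Fit a standard GBDT model.
gbdt = GradientBoostingRegressor(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X, y)

# 2. apply() returns, for each sample, the index of the leaf it falls
#    into in every tree; one-hot encoding these indices yields one dummy
#    variable per tree leaf, linearizing the GBDT.
leaves = gbdt.apply(X).reshape(X.shape[0], -1)  # (n_samples, n_trees)
encoder = OneHotEncoder(handle_unknown="ignore")
Z = encoder.fit_transform(leaves)

# 3. Refit the leaf weights as a linear model with L2 regularization
#    (substitute Lasso for the L1 variant discussed in the paper).
refit = Ridge(alpha=1.0)
refit.fit(Z, y)
```

Because each sample activates exactly one dummy per tree, the refit model's prediction is a sum of regularized leaf values, which shrinks extreme leaf weights and thereby damps the effect of small covariate perturbations that flip a sample into a neighboring leaf.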

Authors (4)
  1. Shijie Cui (4 papers)
  2. Agus Sudjianto (34 papers)
  3. Aijun Zhang (26 papers)
  4. Runze Li (93 papers)
Citations (6)
