Interpretability of tree-aware branching policies

Ascertain how the neural branching policy of Zarpellon et al. (2021), which parameterizes branch-and-bound search trees using solver statistics and search-tree descriptors, leverages these features to compute branching variable scores; identify the contributions of specific features and characterize decision behavior across different parts of the tree.

Background

To improve generalization across heterogeneous MILP classes, Zarpellon et al. (2021) propose incorporating search-tree context and solver statistics into the variable scoring process. While this approach shows promise, the survey notes that it is unclear how the proposed features are used by the neural scoring function, reflecting a broader challenge of interpretability in learned branching strategies.
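
As a purely schematic illustration of what such a tree-aware scorer might look like, the sketch below combines per-candidate variable features with a global embedding of search-tree and solver statistics before producing one score per candidate. All feature dimensions, layer sizes, and the concatenation scheme are assumptions for illustration; this is not the architecture of Zarpellon et al. (2021).

```python
import torch
import torch.nn as nn

class TreeAwareScorer(nn.Module):
    """Schematic tree-aware branching scorer (illustrative only).

    Per-candidate variable features are combined with an embedding of the
    global search-tree / solver state before producing one score per
    candidate. Dimensions and the concatenation scheme are assumptions,
    not the parameterization used by Zarpellon et al. (2021).
    """

    def __init__(self, n_var_feats: int = 25, n_tree_feats: int = 61, hidden: int = 64):
        super().__init__()
        # Embed the global tree/solver statistics once per branching decision.
        self.tree_net = nn.Sequential(
            nn.Linear(n_tree_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Score each branching candidate conditioned on that tree-state embedding.
        self.var_net = nn.Sequential(
            nn.Linear(n_var_feats + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, var_feats: torch.Tensor, tree_feats: torch.Tensor) -> torch.Tensor:
        # var_feats: (n_candidates, n_var_feats); tree_feats: (n_tree_feats,)
        ctx = self.tree_net(tree_feats)              # (hidden,)
        ctx = ctx.expand(var_feats.size(0), -1)      # broadcast to every candidate
        scores = self.var_net(torch.cat([var_feats, ctx], dim=-1))
        return scores.squeeze(-1)                    # one score per branching candidate
```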

Clarifying the mapping from features to scores would help explain when and why tree-aware policies succeed or fail, guide feature design, and could inform adaptive strategies that switch behavior depending on the state of the tree.
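
One concrete way to begin probing this question is a black-box sensitivity analysis: randomize individual tree-state descriptors and measure how often the selected branching variable changes. The sketch below is one possible analysis, not a method from the survey; it reuses the hypothetical `TreeAwareScorer` interface above and assumes standardized features.

```python
import torch

def tree_feature_sensitivity(scorer, var_feats, tree_feats, n_trials: int = 20):
    """Fraction of random perturbations of each tree descriptor that change
    the selected branching variable (illustrative probe, not from the survey).

    Assumes tree features are standardized, so replacing one entry with a
    standard-normal draw is a meaningful perturbation.
    """
    with torch.no_grad():
        baseline = scorer(var_feats, tree_feats).argmax().item()
        flips = torch.zeros(tree_feats.numel())
        for j in range(tree_feats.numel()):
            for _ in range(n_trials):
                perturbed = tree_feats.clone()
                perturbed[j] = torch.randn(())        # randomize descriptor j
                if scorer(var_feats, perturbed).argmax().item() != baseline:
                    flips[j] += 1
    return flips / n_trials  # per-feature decision-flip rate at this tree state

# Usage sketch with random stand-in data (reuses the scorer sketch above):
scorer = TreeAwareScorer()
sensitivity = tree_feature_sensitivity(scorer, torch.randn(10, 25), torch.randn(61))
```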

References

It is unclear how the features proposed in \citet{Zarpellon2021} are used to score branching candidates,\footnote{This is because neural networks lack explainability.} but certainly this information opens the door to branching rules that switch among different behaviors at different parts of the tree, or stages of the solving process.

Machine Learning Augmented Branch and Bound for Mixed Integer Linear Programming (2402.05501 - Scavuzzo et al., 8 Feb 2024) in Section “Branching,” subsubsection “Towards a general branching rule,” concluding paragraph