
Beyond Cuts in Small Signal Scenarios -- Enhanced Sneutrino Detectability Using Machine Learning (2108.03125v4)

Published 6 Aug 2021 in hep-ph and stat.ML

Abstract: We investigate enhancing the sensitivity of new physics searches at the LHC by machine learning in the case of background dominance and a high degree of overlap between the observables for signal and background. We use two different models, XGBoost and a deep neural network, to exploit correlations between observables and compare this approach to the traditional cut-and-count method. We consider different methods to analyze the models' output, finding that a template fit generally performs better than a simple cut. By means of a Shapley decomposition, we gain additional insight into the relationship between event kinematics and the machine learning model output. We consider a supersymmetric scenario with a metastable sneutrino as a concrete example, but the methodology can be applied to a much wider class of models.
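Below is a minimal sketch of the kind of pipeline the abstract describes: a gradient-boosted classifier (XGBoost) trained to separate signal from a dominant background, whose output score could then be used either for a simple cut or a template fit, followed by a Shapley decomposition of individual predictions via the SHAP package. The toy Gaussian features, hyperparameters, and class weighting are illustrative assumptions, not the authors' actual configuration or datasets.

```python
# Hypothetical sketch: train an XGBoost classifier on signal/background
# kinematics and inspect it with a Shapley decomposition. Feature values
# below are toy stand-ins, not simulated LHC events.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)

# Toy stand-in for event kinematics (e.g. missing ET, leading-jet pT, HT, ...);
# signal and background distributions overlap strongly by construction.
n_sig, n_bkg, n_feat = 5_000, 50_000, 6
X = np.vstack([
    rng.normal(loc=0.5, scale=1.0, size=(n_sig, n_feat)),   # signal-like events
    rng.normal(loc=0.0, scale=1.0, size=(n_bkg, n_feat)),   # background-like events
])
y = np.concatenate([np.ones(n_sig), np.zeros(n_bkg)])

# Binary classifier; scale_pos_weight compensates for background dominance.
model = xgb.XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
    scale_pos_weight=n_bkg / n_sig,
)
model.fit(X, y)

# Per-event classifier score: the paper compares a simple cut on such a score
# with a template fit to its full distribution.
scores = model.predict_proba(X)[:, 1]

# Shapley decomposition of individual predictions (SHAP TreeExplainer),
# relating each kinematic input to the model output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1000])
print(shap_values.shape)  # (1000, n_feat): per-event, per-feature contributions
```

In this kind of setup, the analysis choice is whether to place a threshold on `scores` (cut-and-count) or to fit signal and background templates of the score distribution; the abstract reports that the template fit generally performs better.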

