
Uncovering a chirally suppressed mechanism of $0νββ$ decay with LHC searches (2202.01237v2)

Published 2 Feb 2022 in hep-ph, hep-ex, nucl-ex, and nucl-th

Abstract: $\Delta L = 2$ lepton number violation (LNV) at the TeV scale could provide an alternative interpretation of positive signal(s) in future neutrinoless double beta $(0\nu\beta\beta)$ decay experiments. An interesting class of models from this point of view consists of those that at low energies give rise to dimension-9 vector operators and a dimension-7 operator, both of whose $0\nu\beta\beta$-decay rates are "chirally suppressed". We study and compare the sensitivities of $0\nu\beta\beta$-decay experiments and LHC searches to a simplified model in this class of TeV-scale LNV that is also $SU(2)_L \times U(1)_Y$ gauge invariant. The $0\nu\beta\beta$-decay searches, whose sensitivity is here diluted by the chiral suppression of the vector operators, are found to be less constraining than LHC searches, whose reach is increased by the assumed kinematic accessibility of the mediator particles. For the chirally suppressed dimension-7 operator generated by TeV-scale mediators, in contrast, $0\nu\beta\beta$-decay searches place strong constraints on the size of the new Yukawa coupling. Signals of this model at the LHC and in $0\nu\beta\beta$-decay experiments are entirely uncorrelated with the observed neutrino masses, as these new sources of LNV give negligible contributions to the latter. We find the prospects for the high-luminosity LHC and ton-scale $0\nu\beta\beta$-decay experiments to uncover the chirally suppressed mechanism with TeV-scale LNV to be promising. We also comment on the sensitivity of the $0\nu\beta\beta$-decay lifetime to certain unknown low-energy constants that, in the case of dimension-9 {\it scalar} operators, are expected to be large due to non-perturbative renormalization.
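
For orientation, low-energy $\Delta L = 2$ interactions of the kind referenced in the abstract are conventionally parametrized as higher-dimensional operators suppressed by the LNV scale $\Lambda$. The schematic form below is an illustration based on the standard effective-operator parametrization, not an equation taken from the paper; the specific Wilson coefficients, operator basis, and chiral projectors shown are assumptions made for the sketch:

$$\mathcal{L}_{\Delta L=2} \supset \frac{C_7}{\Lambda^3}\,\mathcal{O}_7 \;+\; \frac{C_9}{\Lambda^5}\,\mathcal{O}_9 \;+\; \mathrm{h.c.}, \qquad \mathcal{O}_9^{\mathrm{vec}} \sim \big(\bar{u}\gamma^\mu P_L d\big)\big(\bar{u}\gamma_\mu P_R d\big)\big(\bar{e}_R\, e_R^c\big).$$

In this language, "chirally suppressed" means that the leading $0\nu\beta\beta$ amplitude induced by such vector-type quark structures vanishes in the chiral limit and only enters at higher order in the chiral expansion, which is why LHC searches, with the mediators assumed kinematically accessible, can compete with or exceed the reach of the nuclear-decay experiments.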
