Protein Language Model Zero-Shot Fitness Predictions are Improved by Inference-only Dropout (2506.14793v1)
Published 31 May 2025 in cs.LG
Abstract: Protein language models (PLMs) such as ESM2 are capable of zero-shot prediction of critical scalar properties of proteins (fitness). In this work, we show that injecting a dropout layer at inference time between a PLM's featurizer/embedding layer and its transformer, and averaging its output akin to Monte-Carlo dropout, increases zero-shot performance on a subset of the ProteinGym dataset. This holds even when the model was not trained with dropout to begin with, and requires no retraining or fine-tuning of the PLM. A dropout rate of 0.1 performs well across all models.
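The core mechanism can be sketched in a few lines. The following is a minimal, illustrative NumPy version of inference-time Monte-Carlo dropout averaging: dropout is applied between a (stand-in) embedding layer and a (stand-in) model, and outputs from several stochastic forward passes are averaged. The function names (`mc_dropout_predict`), the toy linear "transformer", and the dimensions are assumptions for illustration, not the paper's actual ESM2 setup.

```python
import numpy as np

def dropout(x, p, rng):
    # Inverted dropout: zero each element with probability p,
    # rescale survivors by 1/(1-p) so the expected value is unchanged.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def mc_dropout_predict(embed, model, tokens, p=0.1, n_samples=32, seed=0):
    # Inject dropout between the embedding layer and the model at
    # inference time, then average over n_samples stochastic passes.
    rng = np.random.default_rng(seed)
    e = embed(tokens)
    outs = [model(dropout(e, p, rng)) for _ in range(n_samples)]
    return np.mean(outs, axis=0)

# Stand-in featurizer and "transformer" (a fixed linear map), for illustration only.
rng0 = np.random.default_rng(42)
W_embed = rng0.normal(size=(20, 16))   # 20 amino-acid types -> 16-dim embeddings
W_model = rng0.normal(size=(16, 20))   # embeddings -> per-position logits

embed = lambda tokens: W_embed[tokens]
model = lambda e: e @ W_model

tokens = np.array([3, 7, 1, 12])       # a toy protein sequence
logits = mc_dropout_predict(embed, model, tokens, p=0.1, n_samples=64)
print(logits.shape)  # (4, 20): averaged per-position logits
```

In the paper's setting, `model` would be the PLM's transformer stack, the averaged logits would feed the usual zero-shot fitness score, and no weights are changed, so this can be applied to an already-trained PLM.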