Effectiveness of Pretraining Data Filtering Against Adversarial Fine-Tuning

Determine the effectiveness of pretraining data filtering that excludes dual-use biological sequences (such as eukaryotic viral data) in open-weight bio-foundation models at mitigating dual-use risks, particularly under a threat model in which adversaries can fine-tune the released model weights for malicious use.

Background

Open-weight bio-foundation models are increasingly released with pretraining data filtered to exclude dual-use biological information (e.g., eukaryotic viral sequences) as a misuse mitigation. However, because the weights are openly available, adversaries can fine-tune them, potentially restoring the filtered capabilities. A systematic assessment of whether such filtering remains robust under adversarial fine-tuning is lacking, motivating an explicit investigation of its real-world effectiveness.

References

However, the effectiveness of such an approach remains unclear, particularly against determined actors who might fine-tune these models for malicious use.

Best Practices for Biorisk Evaluations on Open-Weight Bio-Foundation Models (arXiv:2510.27629, Wei et al., 31 Oct 2025), Abstract (page 1)