Adapting generative sequence model training to exploit positive-only measurement (q=1)
Develop adaptations of sequence-generative modeling algorithms that directly refine the generative distribution p(x), including conditioning and reward-based fine-tuning, so that they exploit the measurement allocation q=1, in which all sequencing in a large-scale screening experiment is allocated to active sequences. The goal is for these algorithms to benefit from the information gains associated with positive-only data collection.
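As a minimal illustration of the setting, the sketch below refines a toy generative model p(x) by maximum likelihood on positive-only reads, mimicking q=1, where the sequencing budget returns only sequences measured as active. The alphabet, sequences, and the independent per-position model are all hypothetical simplifications, not the paper's method; they stand in for whatever generative library model is actually used.

```python
import math
from collections import Counter

ALPHABET = "ACDE"  # hypothetical toy alphabet; a real screen would use, e.g., 20 amino acids
SEQ_LEN = 3

def fit_positionwise(seqs, alpha=1.0):
    """MLE with Laplace smoothing for an independent per-position model p(x)."""
    probs = []
    for i in range(SEQ_LEN):
        counts = Counter(s[i] for s in seqs)
        total = sum(counts.values()) + alpha * len(ALPHABET)
        probs.append({a: (counts[a] + alpha) / total for a in ALPHABET})
    return probs

def log_p(probs, seq):
    """Log-likelihood of a sequence under the per-position model."""
    return sum(math.log(probs[i][c]) for i, c in enumerate(seq))

# Hypothetical data: under q=1 only sequences measured as active are sequenced.
positives = ["ACD", "ACE", "ACD", "ACC"]   # positive-only reads (invented)
library = ["AAA", "CCC", "DDD", "EEE"]     # broad prior library (invented)

prior = fit_positionwise(library)
refined = fit_positionwise(positives)

# Refining p(x) on positives shifts probability mass toward the active region,
# so an active-like sequence becomes more likely under the refined model.
print(log_p(refined, "ACD") > log_p(prior, "ACD"))
```

This captures only the direction of the update (mass moves toward positives); the open question in the source is how conditioning and reward-based fine-tuning of richer models should be adapted to do this correctly when no negative reads are observed.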
References
It is unclear how best to adapt these algorithms to reap the information gains of setting q=1.
— Accelerated Learning on Large Scale Screens using Generative Library Models
(2510.16612 - Weinstein et al., 18 Oct 2025) in Discussion, Future directions