Deterministic inference across tensor parallel sizes

Establish deterministic inference guarantees for large language models across varying tensor parallel (TP) sizes, so that identical inputs yield identical outputs regardless of TP configuration. In particular, address reinforcement learning (RL) pipelines in which the rollout engine uses multi-GPU tensor parallelism while the training engine uses Fully Sharded Data Parallel (FSDP, i.e., TP = 1).

Background

Deterministic inference is essential for reliable LLM evaluation, multi-agent coordination, and reinforcement learning, but current serving systems exhibit nondeterminism due to the non-associativity of floating-point arithmetic and differing reduction orders across GPUs. Prior work on batch-invariant operations mitigates batch-size-related variance but does not address variance across TP sizes.
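The floating-point root cause is easy to demonstrate in plain Python: addition is not associative, so summing the same values in two different orders can yield two different doubles. A minimal sketch (the values are illustrative, not drawn from any real model):

```python
# Floating-point addition is not associative: the same three values
# summed under two different reduction orders give different doubles.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # one reduction order
right_to_left = a + (b + c)   # another reduction order

print(left_to_right)   # -> 0.6000000000000001
print(right_to_left)   # -> 0.6
print(left_to_right == right_to_left)  # -> False
```

A cross-GPU all-reduce is a much larger instance of the same effect: the shape of the reduction tree, and hence the rounding at each step, depends on how many devices participate.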

In practical RL pipelines, rollout engines (e.g., vLLM) commonly use multi-GPU TP for throughput, while training engines built on FSDP run with TP = 1. This mismatch produces token-probability discrepancies between the two engines and destabilizes training, motivating the need to guarantee determinism across TP sizes.

Related work

Prior work addresses batch-size-related nondeterminism with batch-invariant kernels, but determinism across different TP sizes remains an open problem. The gap is most pressing in RL settings, where the training engine typically uses Fully Sharded Data Parallel (i.e., TP = 1) while the rollout engine relies on multi-GPU TP to maximize inference throughput, creating a natural mismatch between the two.