Deterministic inference across tensor parallel sizes
Establish deterministic inference guarantees for large language models across varying tensor parallel (TP) sizes, so that identical inputs yield bitwise-identical outputs regardless of TP configuration. In particular, address reinforcement learning (RL) pipelines in which the rollout engine uses multi-GPU tensor parallelism while the training engine uses Fully Sharded Data Parallel (i.e., TP = 1).
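As a concrete statement of the guarantee, a minimal acceptance check could compare outputs bitwise across TP sizes. The sketch below assumes a hypothetical `run_inference(prompt_ids, tp_size)` entry point that returns the model's logits for a fixed prompt under a given TP size; the name and signature are placeholders, not an existing API. Note that determinism here means exact equality (`torch.equal`), not agreement within a tolerance (`torch.allclose`).

```python
import torch

def check_tp_invariance(run_inference, prompt_ids: torch.Tensor,
                        tp_sizes=(1, 2, 4)) -> None:
    """Assert that identical inputs yield bitwise-identical logits across TP sizes.

    `run_inference` is a placeholder callable: it should run the model under the
    given tensor-parallel size and return the logits for `prompt_ids`.
    """
    reference = run_inference(prompt_ids, tp_sizes[0])
    for tp in tp_sizes[1:]:
        logits = run_inference(prompt_ids, tp)
        # Bitwise comparison: any rounding difference counts as a failure.
        if not torch.equal(reference, logits):
            max_delta = (reference.float() - logits.float()).abs().max().item()
            raise AssertionError(
                f"TP={tp} diverges from TP={tp_sizes[0]}: max |delta| = {max_delta:.3e}"
            )
```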
References
While prior work has addressed batch-size-related nondeterminism through batch-invariant kernels, determinism across different TP sizes remains an open problem. The mismatch is particularly acute in RL settings, where the training engine typically uses Fully Sharded Data Parallel (i.e., TP = 1) while the rollout engine relies on multi-GPU TP to maximize inference throughput. Sharding a layer across more GPUs changes the order in which partial results are reduced, and because floating-point addition is not associative, the same input can produce bitwise-different outputs under different TP sizes.
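The root cause can be reproduced without any distributed setup. The sketch below simulates a row-parallel linear layer by sharding the reduced dimension into `tp` chunks and summing the partial products, which is what the all-reduce does across TP ranks; because the reduction grouping depends on `tp`, different TP sizes typically yield bitwise-different results. The shapes and seed are arbitrary illustration choices, not values from the original text.

```python
import torch

torch.manual_seed(0)
K, N = 4096, 4096
x = torch.randn(1, K)   # activation row
W = torch.randn(K, N)   # weight matrix, reduced over K

def row_parallel_matmul(x: torch.Tensor, W: torch.Tensor, tp: int) -> torch.Tensor:
    """Simulate row-parallel TP: each 'rank' multiplies its K-shard of x and W,
    then the partial products are summed (the all-reduce step). The reduction
    grouping, and hence the floating-point rounding, depends on tp."""
    partials = [xi @ Wi for xi, Wi in zip(x.chunk(tp, dim=1), W.chunk(tp, dim=0))]
    out = partials[0]
    for p in partials[1:]:
        out = out + p
    return out

y_tp1 = row_parallel_matmul(x, W, tp=1)
y_tp2 = row_parallel_matmul(x, W, tp=2)
print(torch.equal(y_tp1, y_tp2))      # typically False: bitwise mismatch
print((y_tp1 - y_tp2).abs().max())    # small but nonzero difference
```

Per layer the discrepancy is tiny, but in autoregressive decoding it can flip a sampled token, after which the rollout and training engines diverge on entire trajectories rather than on rounding noise.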