Feasibility of Monitoring Consumer Computers for AI Governance

Determine whether widespread consumer computing devices can be effectively monitored at scale to enforce restrictions on AI training and inference, in scenarios where algorithmic progress lowers the compute required to reach dangerous capability levels, thereby preserving the verifiability of an international ASI-prevention agreement.

Background

The paper argues that algorithmic progress could reduce the compute needed to train powerful models, potentially shifting dangerous capability thresholds from data centers to consumer hardware. If that occurred, verification systems focused on data centers would no longer suffice.

Monitoring millions of consumer devices would raise serious technical and ethical concerns. The authors explicitly flag uncertainty about whether such monitoring is even possible, which motivates determining its feasibility before relying on this pathway to sustain verification.

References

"It is unclear whether consumer computers even could be monitored, but regardless of effectiveness, it is certainly not morally desirable to do so."

An International Agreement to Prevent the Premature Creation of Artificial Superintelligence (2511.10783, Scher et al., 13 Nov 2025), in Section: Why this plan in particular? — Why are you banning research? (First risk: fast grind of algorithmic progress)