Engines of Parsimony: Part II; Performance Trade-offs for Communicating Reversible Computers (2011.04054v3)
Abstract: In Part I of this series, the limits on the sustained performance of large reversible computers were investigated and found to scale as $\sqrt{AV}$, where $A$ is the convex bounding surface area of the system and $V$ its internal volume, compared to $A$ for an irreversible computer. That analysis, however, neglected interactions between components of the system, focusing instead on raw computational power. In this part we extend the analysis to consider synchronisation events, such as communication between independent reversible processors, subject to a limiting supply of free energy. It is found that, whilst asynchronous computation can proceed at a rate $b\lambda$, synchronisation events proceed at the much slower rate $\sim b^2\lambda$; in these rate expressions, $\lambda$ is the gross transition rate for each processor and $b \sim \sqrt{A/V} \ll 1$ is the 'computational bias' measuring the net fraction of transitions which are successful. Whilst derived for Brownian reversible computers, this result applies to all forms of reversible computer, including quantum computers. In fact, this result is an upper bound, and one must choose the phase space geometry of the synchronisation events carefully to avoid even worse performance. In the limit of large computers, communication will therefore tend to freeze out as $b\to0$; if, however, one is willing to restrict the number of processors permitted to share state at any given time, then this rate can be ameliorated and performance on par with asynchronous computation can be recovered.
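As a rough illustration of these scalings (an assumed example, not part of the abstract itself), suppose the system is a cube of side $L$, with lengths measured in units of a single processor's linear size, so that $A \sim 6L^2$ and $V = L^3$. Then

$$ b \sim \sqrt{A/V} \sim \sqrt{6/L}, \qquad \text{asynchronous rate} \sim b\lambda \sim \lambda\sqrt{6/L}, \qquad \text{synchronisation rate} \sim b^2\lambda \sim 6\lambda/L. $$

Under these assumptions, doubling $L$ halves the synchronisation rate but reduces the asynchronous rate only by a factor of $\sqrt{2}$, illustrating why communication tends to freeze out as the computer grows and $b\to0$.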