ImageNet-scale Evaluation of the Terminator Architecture

Evaluate the Terminator architecture, including the Slow-Fast Neural Encoding block and the HyperZZW operator, on the ImageNet dataset to ascertain its large-scale image classification performance, which was not assessed in the paper due to computational constraints.

Background

The paper proposes the Terminator architecture, which replaces residual learning with a Slow-Fast Neural Encoding (SFNE) block: coordinate-based implicit MLPs generate hyper-kernels, and a HyperZZW operator derives context-dependent fast weights from them via elementwise multiplication. The approach is evaluated on several benchmarks, including sMNIST, pMNIST, sCIFAR10, CIFAR10, CIFAR100, and STL10, showing competitive or superior performance with fewer parameters and faster convergence.
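To make the slow-fast interaction concrete, the following is a minimal PyTorch sketch of the mechanism described above: a coordinate-based MLP produces a hyper-kernel, which is multiplied elementwise with the hidden state to yield context-dependent fast weights. The class names, MLP sizes, and the way the fast weights are consumed afterwards are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Implicit MLP mapping (y, x) coordinates to a per-channel hyper-kernel value."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, channels),
        )

    def forward(self, height: int, width: int) -> torch.Tensor:
        ys = torch.linspace(-1.0, 1.0, height)
        xs = torch.linspace(-1.0, 1.0, width)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
        kernel = self.net(grid)                                            # (H, W, C)
        return kernel.permute(2, 0, 1)                                     # (C, H, W)

class HyperZZWSketch(nn.Module):
    """Elementwise product of a coordinate-generated hyper-kernel (slow branch)
    with the hidden state, producing context-dependent fast weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.hyper = CoordinateMLP(channels)

    def forward(self, z: torch.Tensor) -> torch.Tensor:   # z: (B, C, H, W) hidden state
        k = self.hyper(z.shape[-2], z.shape[-1])           # (C, H, W) hyper-kernel
        fast_weights = k.unsqueeze(0) * z                  # elementwise slow-fast interaction
        # In the full SFNE block these fast weights interact further with the
        # block's slow weights (the "W" in HyperZ.Z.W); that step is omitted here.
        return fast_weights
```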

However, the authors note that they did not run experiments on the ImageNet dataset because of limited computing resources. Consequently, the architecture's scalability and performance on standard large-scale image classification remain unreported in the paper, leaving an empirical evaluation on ImageNet as an open task.
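As a starting point for the proposed evaluation, a minimal PyTorch validation sketch is shown below. The dataset path, batch size, standard preprocessing, and the `TerminatorForImageNet` model class are placeholders and assumptions; the paper provides no ImageNet setup to follow.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Standard ImageNet validation preprocessing (resize, center crop, normalize).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_set = datasets.ImageNet(root="/path/to/imagenet", split="val", transform=preprocess)
val_loader = DataLoader(val_set, batch_size=256, num_workers=8, pin_memory=True)

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    """Return top-1 and top-5 accuracy over the validation split."""
    model.eval().to(device)
    top1 = top5 = total = 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        logits = model(images)
        _, pred5 = logits.topk(5, dim=1)                 # predictions sorted by score
        correct = pred5.eq(targets.unsqueeze(1))         # (B, 5) boolean matches
        top1 += correct[:, 0].sum().item()
        top5 += correct.any(dim=1).sum().item()
        total += targets.size(0)
    return top1 / total, top5 / total

# model = TerminatorForImageNet(num_classes=1000)  # hypothetical Terminator implementation
# acc1, acc5 = evaluate(model, val_loader)
```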

References

Due to limited computing resources, we were unable to conduct experiments on ImageNet dataset.

HyperZ$\cdot$Z$\cdot$W Operator Connects Slow-Fast Networks for Full Context Interaction  (2401.17948 - Zhang, 2024) in Section 6 (Conclusion), final paragraph