ScalerEval: Automated and Consistent Evaluation Testbed for Auto-scalers in Microservices (2504.08308v1)
Abstract: Auto-scaling is an automated approach that dynamically provisions resources for microservices to accommodate fluctuating workloads. Despite the introduction of many sophisticated auto-scaling algorithms, evaluating auto-scalers remains time-consuming and labor-intensive, as it requires implementing numerous fundamental interfaces, performing complex manual operations, and possessing in-depth domain knowledge. Moreover, frequent human intervention inevitably introduces operational errors, leading to inconsistencies in the evaluation of different auto-scalers. To address these issues, we present ScalerEval, an end-to-end automated and consistent testbed for auto-scalers in microservices. ScalerEval integrates the fundamental interfaces needed to implement auto-scalers and further orchestrates a one-click evaluation workflow for researchers. The source code is publicly available at \href{https://github.com/WHU-AISE/ScalerEval}{https://github.com/WHU-AISE/ScalerEval}.
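To make the notion of "fundamental interfaces for implementing auto-scalers" concrete, the sketch below shows one plausible shape such a plug-in interface could take: the researcher implements a single decision method mapping observed metrics to desired replica counts, while the testbed handles workload generation, metric collection, and scaling actuation. This is a minimal illustrative sketch only; all names here (\texttt{Scaler}, \texttt{ServiceMetrics}, \texttt{decide}) are hypothetical and are not taken from the ScalerEval codebase.

\begin{verbatim}
# Hypothetical auto-scaler plug-in interface, illustrating the kind of
# abstraction a testbed like ScalerEval provides. Names are invented here.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ServiceMetrics:
    """Per-service observations a testbed might expose to a scaler."""
    service: str
    cpu_utilization: float   # average CPU utilization across replicas (0.0-1.0)
    request_rate: float      # incoming requests per second
    replicas: int            # currently provisioned replicas


class Scaler(ABC):
    """Interface a researcher would implement once; the testbed then drives
    workload generation, metric collection, and scaling actuation."""

    @abstractmethod
    def decide(self, metrics: list[ServiceMetrics]) -> dict[str, int]:
        """Map current observations to a desired replica count per service."""


class ThresholdScaler(Scaler):
    """Toy baseline: scale to keep CPU near a target utilization."""

    def __init__(self, target_cpu: float = 0.5, max_replicas: int = 10):
        self.target_cpu = target_cpu
        self.max_replicas = max_replicas

    def decide(self, metrics: list[ServiceMetrics]) -> dict[str, int]:
        desired = {}
        for m in metrics:
            want = round(m.replicas * m.cpu_utilization / self.target_cpu) or 1
            desired[m.service] = min(max(want, 1), self.max_replicas)
        return desired


if __name__ == "__main__":
    scaler = ThresholdScaler()
    snapshot = [ServiceMetrics("frontend", 0.8, 120.0, 2)]
    print(scaler.decide(snapshot))  # e.g. {'frontend': 3}
\end{verbatim}

Under this kind of design, swapping one auto-scaling algorithm for another only requires providing a different \texttt{Scaler} implementation, which is what enables a one-click, consistent evaluation workflow across competing approaches.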